Hacker News

So, diffusion models are game engines as long as you already built the game? You need the game to train the model. Chicken. Egg?



here are some ideas:

- you could build a non-real-time version of the game engine and use the neural net as a real-time approximation

- you could edit videos shot in real life to add HUDs or whatever and train the neural net to simulate reality rather than doom. (this paper used 900 million frames, which i think is about a year of video at 30fps, but maybe algorithmic improvements can cut the training requirements down.) and a year of video isn't actually all that much: maybe you could recruit 500 people to play paintball while wearing gopro cameras with accelerometers and gyros on their heads and paintball guns, so that you could get a year of video in a weekend?
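A back-of-the-envelope check of the figures in that comment (a sketch; the 30 fps frame rate and the 500-player weekend are the comment's own assumptions, not numbers from the paper beyond the 900 million frames):

```python
FPS = 30
frames = 900_000_000

seconds = frames / FPS            # 30,000,000 s of footage
hours = seconds / 3600            # ~8,333 h
years = hours / (24 * 365)        # ~0.95 years of continuous video

# crowdsourcing the same footage across 500 camera-wearing players
players = 500
hours_per_player = hours / players  # ~16.7 h each, i.e. roughly one long weekend

print(f"{years:.2f} years total, {hours_per_player:.1f} h per player")
```

So the "year of video in a weekend" claim roughly checks out, as long as each player records about 17 hours.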


Why games? I will train it on 1 year's worth of me attending Microsoft Teams meetings. Then I will go surfing.


Even if you spend 40 hours a week in video conferences, you'll have to work for over four years to get one year's worth of footage. Of course, by then the models will be even better, and so you might actually have a chance of going surfing.
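The "over four years" estimate is easy to verify (a sketch, assuming 40 hours of meetings every single week with no vacation):

```python
hours_of_footage_needed = 24 * 365   # one calendar year of continuous video
meeting_hours_per_year = 40 * 52     # 40 h/week, 52 weeks/year

years_needed = hours_of_footage_needed / meeting_hours_per_year
print(f"{years_needed:.1f} working years")  # ~4.2
```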

I guess I should start hoarding video of myself now.


the neural net doesn't need a year of video to train to simulate your face; it can do that from a single photo. the year of video is to learn how to play the game, and in most cases lots of people are playing the same game, so you can dump all their video in the same training set


Ready to pay for this


most underrated comment here!


That feels like the endgame of video game generation. You select an art style, a video and the type of game you'd like to play. The game is then generated in real-time responding to each action with respect to the existing rule engine.

I imagine a game like that could get so convincing in its details and immersiveness that one could forget they're playing a game.


There are thousands of games that mimic each other, and only a handful of them are any good.

What makes you think a mechanical "predict next frame based on existing games" will be any good?


Oh, because we can link this in with biometric responses - heartrate, temperature, eye tracking etc.

We could build a 'game' which would learn and adapt to precisely the chemistry that makes someone tick, and then give them a map for finding the state in which their brain releases the chemicals they're after.

Then, if the game has a directive, it should be pointed to work as a training tool that lets the user learn how to release these chemicals themselves at will, resulting in a player base which no longer requires anything external for accessing their own desired states.


IIRC, both 2001 (1968) and Solaris (1972) depict that kind of thing as part of an alien euthanasia process, not as a happy ending.


Well, 2001 is actually a happy ending, as Dave is reborn as a cosmic being. Solaris, at least in the book, is an attempt by the sentient ocean to communicate with researchers through mimics.


Also The Matrix, Oblivion, etc.


Have you ever played a video game? This is unbelievably depressing. This is a future where games like Slay the Spire, with a unique art style and innovative gameplay simply are not being made.

Not to mention this childish nonsense about "forget they're playing a game," as if every game needs to be lifelike VR and there's no room for stylization or imagination. I am worried for the future that people think they want these things.


The problem is quite the opposite: AI will be able to generate so many games, with so many play styles, that it will totally dilute the value of all games.

Compare it to music-gen algos that can now produce music that is 100% indiscernible from generic crappy music. Which is insane, given that five years ago they could maybe create a sound that someone might describe as "sort of guitar-like". At this rate of progress it's probably not going to be long before AI is making better music than humans. And it's infinitely available, too.


It's a good thing. When the printing press was invented, there were probably monks and scribes who thought that this new mechanical monster, which took all the individual flourish out of reading, was the end of literature. Instead it became a tool that made literature better and just removed a lot of drudgery. Games with individual style and design made by people will of course still exist. They'll just be easier to make.


EXISTENZ IS PAUSED!


Holodeck is just around the corner


Except for haptics.


The Cloud Gaming platforms could record things for training data.


If you train it on multiple games then you could produce new games that have never existed before, in the same way image generation models can produce new images that have never existed before.


It's unlikely that such a procedurally generated mashup would be perfectly coherent, stable and most importantly fun right out of the gate, so you would need some way to reach into the guts of the generated game and refine it. If properties as simple as "how much health this enemy type has" are scattered across an enormous inscrutable neural network, and may not even have a single consistent definition in all contexts, that's going to be quite a challenge. Nevermind if the game just catastrophically implodes and you have to "debug" the model.


From what I understand that could make the engine much less stable. The key here is repetitiveness.


maybe the next step is adding text guidance and generating non-existing games.


I think the same comment could be said about generative images, no?


Maybe, in the future, techniques from Scientific Machine Learning that can encode physics and other known laws into a model would form a base model. Then other models on top could fine-tune aspects to customise a game.


If only there was a rich 3-dimensional physical environment we could draw training data from.


Well, yeah. Image diffusion models only work because you can provide large amounts of training data. For Doom it is even simpler, since you don't need to deal with compositing.



