High level - having a discussion with the LLM about different approaches and the tradeoffs between each
Low level - I'll write up the structure of what I want in the form of a set of functions with defined inputs and outputs but without the implementation details. If I care about any specifics of a function I'll throw some comments in there. And sometimes I'll define the data structures in advance as well (a rough sketch of what that kind of skeleton can look like is below).
Once all this is set up it often spits out something that compiles and works first try. And all the context is established so iteration from that point becomes easier.
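For illustration, here's a minimal sketch of the kind of skeleton I mean; the domain and names (Order, apply_discounts, total_cost) are invented purely for this example, not from any real project:

    from dataclasses import dataclass

    @dataclass
    class Order:
        item_id: str
        quantity: int
        unit_price: float  # price per unit, in dollars

    def apply_discounts(orders: list[Order], discount_rate: float) -> list[Order]:
        """Return new Orders with unit_price reduced by discount_rate.

        Must not mutate the input. discount_rate is a fraction, e.g. 0.1 for 10%.
        """
        raise NotImplementedError  # implementation left for the LLM to fill in

    def total_cost(orders: list[Order]) -> float:
        """Sum quantity * unit_price over all orders, rounded to 2 decimal places."""
        raise NotImplementedError  # implementation left for the LLM to fill in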
> High level - having a discussion with the LLM about different approaches and the tradeoffs between each
I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(nlog(n))", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
> Once all this is set up it often spits out something that compiles and works first try
I'm sorry, but I'm extremely doubtful that it actually works in any real sense. The fact that you even use "compiles and works first try" as some sort of quality metric for the code it's producing shows how easily it could slip in awful braindead bugs without you ever knowing. You run it and it appears to work!? The way to know whether something works -- not first try, but every try -- is to understand every character in the code. If that is your standard -- and it must be -- then isn't the AI just slowing you down?
> I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(nlog(n))", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
Being confidently incorrect is not a unique characteristic of AIs, plenty of humans do it too. Being able to spot the bullshit is a core part of the job. If you can't spot the bullshit from AI, I wouldn't trust you to spot the bullshit from a coworker.
Characters in The Sims games technically have us human players as gods, but that doesn't mean that when we uninstall the game those characters get to come into our earthly (to them) heaven, or face any consequences for actions performed during the simulation.
Sure it would. If you had Sims that went around killing other Sims, there's no way in hell you would promote them, or use their simulated experiences as a basis for more complex/serious projects.
I'm not deep into LLMs or AI safety right now, but if you have a bad performing AI, you aren't going to use it as a base for future work.
I was about to go to bed so I was rushing through my initial comment... I was just trying to understand the motivations for trying to create a simulated reality... Look at the resources we spend on AI.
One would have to be rather optimistic and patient to hold out hope that the humanity experiment isn't destined for the trash bin in this scenario, given our track record.
I doubt we would even register as a blip. The universe is absolutely massive and there are celestial events that are unthinkably massive and complex. Black hole mergers, supernovae, galaxies merging. Hell, think of what chaos happens inside our own sun, and multiply that by 100 billion stars in a galaxy, and multiply that by 100 billion galaxies. Humanity is ultimately inconsequential.
Surely it would depend on what the simulation actually was?
If you imagine simulations we can build ourselves, such as video games, it's not hard to add something at the edge of the map that users are prevented from reaching and have the code send "this thing is massive and powerful" data to the players. Who's to say that the simulation isn't actually focussed on earth, and everything including the sun is actually just a fiction designed to fool us?
The common trait that all hypothetical high-fidelity simulated universes possess is the ability to produce high-fidelity simulated universes. And since our current world does not possess this ability, it would mean that either humans are in the real universe, and therefore simulated universes have not yet been created, or that humans are the last in a very long chain of simulated universes, an observation that makes the simulation hypothesis seem less probable.
If we're a simulation of a parent universe that is exactly like us, just of its past or an alternate past, then we likely should be able to simulate our own universe within ours. Otherwise we're not actually a simulation.
There's another line of counter argument that various results in QM and computing theory would suggest that it's mathematically impossible for the universe to be simulated on a computer (i.e. the parent universe would have to look very different from ours vs ours in the future). But I don't recall the arxiv paper.
Of course it is. Scientifically, the simulation “hypothesis” is really just the simulation idea: it isn't a scientifically valid hypothesis, yet it seems to be treated as one for some reason.
For me the interesting thing is, assuming many worlds AND simulation theory are both true, many worlds would seem to be a way to essentially run a/b tests on the simulation. So how would you separate out/simplify details of your simulation, like far away planets, stars and galaxies? The speed of light and light cones don't seem to be enough to make a difference except on the largest scales.
Don't sleep on daycare. It's good for the child's social development. And a good daycare centre will follow modern pedagogical practices. If you and your partner both enjoy your jobs, you'll appreciate not having to compromise your careers. And the break during the day is welcome, believe me.
He said in his interview on the Lex Fridman podcast that he wants to get AI in front of people as fast as possible, to give us the longest amount of time possible to start to adapt to it.
I think he wants to scare people a little in a somewhat controlled way to onramp us to this new reality as fast as possible.
Unfortunately, doing stuff while being subjected to random accelerating forces makes many people feel nauseous, and physical tasks become more difficult.
Why? In what world are you so busy that you don't have "enough time" to go to your home, take time to unwind, clean up after a day's work, cook a meal for you and your family, and enjoy a family dinner/lunch/breakfast?
Replace cooking with laundry/cleaning/bathing/repairing/small fixes around the home.
Is life REALLY SO FAST AND TOUGH that you have to multi-task basic human social/personal processes?
BTW, there's a built-in accessibility feature in iOS called Background Sounds. It plays rain and various noise types. And it can be added to the lock screen.