
>When it comes to computing, as simulated beings improve their own efficiencies, the savings from improved algorithms reduces for the simulators. Eventually, any computing we do, the simulators will also have to do 1:1; because our algorithm and their own will be identical. We get expensive fast.

The simulation doesn't have to correspond 1:1 to what we consider it to be, they can use all sorts of hacks.

They could just implant in our (simulated) minds, or in instrument printouts, etc., the idea that we saw some increased accuracy in our measurements, without changing anything special about the simulated world.



Let's say I am a simulated being and I query WolframAlpha (WA). WA then has to do some computation to give me a result, which means the simulation has to compute that result. It may be that the simulation's algorithm is much more advanced than WA's, in which case the energy WA appears to spend on the computation is much greater than the energy actually spent. The simulated universe saves that difference (minus some overhead).

Now let's say WA improves their algorithm and it becomes identical to the algorithm used by the simulators. Now the computation that WA does is 1:1 with the computation the simulator has to do. You could argue, well, maybe they use better hardware that can do it more efficiently. Sure. But then WA can eventually do it on that same hardware. As all this improves, it necessarily goes to 1:1.

You might, instead, just store all queries in a lookup table the first time they are made. That would reduce your energy requirements. But now you've just traded infinite energy requirements for a possibly smaller, yet still infinite space requirement.
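To make the trade concrete, a minimal Python sketch of the lookup-table idea (expensive_query is a hypothetical stand-in for the actual computation, not anyone's real API):

    # Memoization: pay the compute cost once per distinct query,
    # then answer repeats from storage.
    _cache = {}

    def expensive_query(q):
        return "answer to " + q  # pretend this burns a lot of energy

    def cached_query(q):
        if q not in _cache:
            _cache[q] = expensive_query(q)  # compute cost, paid once
        return _cache[q]                    # storage cost, paid forever

    # The catch: distinct queries are unbounded, so _cache grows without
    # bound -- infinite energy traded for still-infinite space.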

So, I think what you're saying is that you would just fake it. You just "tell" the simulated beings that they did the thing, and you skip over it. I'm not really sure how that would work in practice. If I need the result of my WA query so I can do some other math, and I'm writing all this down, how did I get all this notation on the page and how is it correct without anybody doing the calculations? Do they just force me to see 'something' and tell me what I'm seeing is correct? How do I get to an ultimate useful result?

In any case, if you're just going to go mucking about with people's minds, why bother with conscious beings in the first place?


>Now let's say WA improves their algorithm and it becomes identical to the algorithm used by the simulators. Now the computation that WA does is 1:1 with the computation the simulator has to do. You could argue, well, maybe they use better hardware that can do it more efficiently. Sure. But then WA can eventually do it on that same hardware. As all this improves, it necessarily goes to 1:1.

This does not necessarily go to 1:1.

That would require continuous progress, which the simulation might very well not allow. After all, they designed the universe, its physical laws, and its constraints (including its eventual heat death). If they started with a billion or trillion times more computing power than anything we're up to now -- it could very well be that we'd never reach their limits.

In fact, that could be a hardwired cap in our simulation. The very materials, laws, etc. of the simulation could put a constraint on the demands it can make.

The same way a NN with N inputs and L layers won't ever magically grow more of either.
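A toy version of that fixed-capacity point, in plain NumPy (the shapes are arbitrary illustrations):

    import numpy as np

    # Capacity is frozen at construction: N inputs, 2 layers.
    # Nothing computed at inference time can enlarge W1 or W2.
    N, H = 8, 16
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((N, H))
    W2 = rng.standard_normal((H, 1))

    def forward(x):
        return np.tanh(x @ W1) @ W2  # fixed shape in, fixed shape out

    y = forward(rng.standard_normal(N))  # works for N inputs, never N+1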

Our whole idea of 'computing' might not be any more universal in the higher-level universe (that of the simulation's creators) than, e.g., a simple algebra of addition and subtraction, or a matrix operation in a NN, is in ours.

Just as they've set our "physical laws" (with theirs, e.g., being nothing like them), they could just as well have designed the math and logic possible within our universe as a much smaller version of their own, with less expressive power and lower power demands.

(In other words, I find limiting the often-implied idea that the higher universe of the simulation's creators needs to be just like ours, only with more computing power and more advanced technology. Fundamental tenets of physical laws, math, and logic could be different -- the same way I can design a program that can only do addition.)

>You might, instead, just store all queries in a lookup table the first time they are made. That would reduce your energy requirements. But now you've just traded infinite energy requirements for a possibly smaller, yet still infinite space requirement.

Or you could just feed bogus answers that take little or no time to compute, but wire the "players" so that they think they got properly detailed ones -- which would be dirt cheap.
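As a sketch of what I mean (all names hypothetical; the belief-wiring line is, of course, the hand-wavy part):

    # Skip the real computation entirely; return something cheap and
    # stamp the observer's mind as having verified it.
    def simulate_query(player, query):
        fake = "plausible result for " + query   # O(1) to produce
        player["beliefs"][query] = (fake, "checked, looks right")
        return fake

    player = {"beliefs": {}}
    simulate_query(player, "integrate exp(-x^2) over the reals")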

>So, I think what you're saying is that you would just fake it. You just "tell" the simulated beings that they did the thing, and you skip over it.

Yes.

>I'm not really sure how that would work in practice. If I need the result of my WA query so I can do some other math, and I'm writing all this down, how did I get all this notation on the page and how is it correct without anybody doing the calculations?

Who said it has to be correct? It's enough that you (as a simulated being) believe it is correct.

>Do they just force me to see 'something' and tell me what I'm seeing is correct?

Yes.

>How do I get to an ultimate useful result?

"Useful" just means "able to affect, or be applied to, the universe you (the simulated being) live in". Which, of course, is the main specialty of the creator of such a universe: they can produce "useful" at will!

>In any case, if you're just going to go mucking about with people's minds, why bother with conscious beings in the first place?

Isn't this like saying "if you're going to be mucking about with weights, why bother with a neural net in the first place"? One could very well want a NN, and want to mess with the weights at any point they feel like it.
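In the NN analogy, something like this minimal NumPy sketch (the choice of intervention is arbitrary):

    import numpy as np

    # Wanting the network and messing with its weights aren't exclusive:
    # run it, intervene directly, run it again under the revised state.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((4, 2))
    x = rng.standard_normal(4)

    before = x @ W     # let the system run...
    W[2, :] = 0.0      # ...then muck with the weights directly
    after = x @ W      # and observe what it does now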

E.g. because you want to see what a simulated person will do under certain inputs, and revise those inputs, and so on. They don't necessarily want to see it (that is, us) "live its life fully free".

And even what we call "conscious" could be to their consciousness what the level of a worm or a NN is to ours.

This grew too long, but I'm basically making 3 (different, though some can be combined) arguments here. To recap:

1) Even if we assume that we could (trying hard enough) reach the power limits of the host from within the simulation, they (the hosts) could just skimp on our more computationally demanding questions and leave us convinced that the computations ran successfully, trusting the BS answers we got. They could even hardwire the system so that such answers appear fine when used.

2) The simulation could have a design-enforced cap on the computation it can use, the same way our own simulated systems don't by themselves start demanding more energy. We might think our universe is "open ended" and could scale to all kinds of heavy computational use, but it already has caps: the laws of physics, logic, entropy, etc.

3) Our computations and their needs could be a grossly simplified version of the computational models of the "host" universe of the simulation's creators, such that their cost there is laughably low (this is similar to 2, but from a different angle).


Ah. I think the core problem here is that we're talking about a loosely defined "school project", while I keep shifting into the constraints of a Bostrom-style ancestor simulation.

To respond, you're right, you can hypothetically do any of those things in the school project. Although I fail to see the point of consciousness in such a scenario, because you're basically just having conscious beings play D&D at that point. Everything is a success roll. The result matters, but you're throwing away huge chunks of information about the thinking beings. So why not just have mindless automata instead?

For a Bostrom-style ancestor simulation, those little details are the entire point of the simulation. You simulate the consciousness on purpose. You simulate the entire process because you're changing only a single variable. You might simulate the world if Lincoln hadn't been assassinated. You're likely running that same scenario multiple times to get probabilities of different results. You don't want to have a cheater function that returns T/F whether Lincoln had breakfast the next day. You don't want him to remember having had breakfast. You want him to actually have breakfast. So the shortcuts are out, because as soon as you introduce one, you have a different type of simulation.


>You don't want him to remember having had breakfast. You want him to actually have breakfast.

For a full "ancestor simulation" yes, that's true.

Though does it have to be full, to look like reality does to us (e.g. for the simulation to be like what we live in)?

At some level, remembering X and having had X are the same as far as the individual is concerned (especially if you can also make their organism not feel hungry, depleted, etc., just as if they had actually had breakfast).

This reminds me of the "five-minute hypothesis": "The five-minute hypothesis is a skeptical hypothesis put forth by the philosopher Bertrand Russell that proposes that the universe sprang into existence five minutes ago from nothing, with human memory and all other signs of history included".

This wouldn't be a "Bostrom-style ancestor simulation" that starts from, say, the Big Bang and lets everything unfold -- but it could still be a simulation that sees how such things as us behave, given starting rules and conditions set at an arbitrary point in time.

I guess a problem with that is that it doesn't let our evolution/history unfold fully freely (from the Big Bang and a very limited initial configuration).

Then again, they could have let it "fully unfold", gotten to something like a "year 2500 A.D." humanity, and now be trying various changes at different eras to see what they get. In such a setup they can take shortcuts, because they know the tolerances of the system (e.g., that a "fake full" stomach for Lincoln won't change anything of substance).


I think history needs to 'freely unfold' from the point where you're trying to make your observations. Up to that point, you can just include memories. There's a question of where to get those memories, though. Do you run simulations until you find a history that matches actual history 'close enough', and just assume the memories formed there are good? (It's not even just memories: the placement of a rock, or the sting of an insect, could have a big effect or none at all. And all that information is lost to the past.)

So, in the Lincoln assassination example, you could start the simulation at the moment you 'change history', because everything else that informs that moment is either present in your simulation (e.g., the temperature of the theater) or is in the past, so actions on those items are informed by the participants' memories. After that moment, I think you do need to simulate the whole thing. The reason is that the experiencing self makes decisions in the moment, and those decisions may inform the remembering self later.
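Schematically, that structure looks something like this toy Python sketch (all names hypothetical): everything before the change exists only as a frozen snapshot, memories included, and only the aftermath is actually stepped.

    import copy

    def step(state):
        new = dict(state)
        new["t"] += 1          # decisions happen in the moment, each tick
        return new

    def branch(checkpoint, intervention, steps):
        # Pre-change history lives only in the snapshot (memories included);
        # from here on, the world is actually simulated.
        state = copy.deepcopy(checkpoint)
        intervention(state)    # the single variable you change
        history = []
        for _ in range(steps):
            state = step(state)
            history.append(state)
        return history

    base = {"t": 0, "lincoln_alive": False}   # snapshot at the theater
    run = branch(base, lambda s: s.update(lincoln_alive=True), steps=3)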

And, of course, the whole thing rests on the nature of consciousness. If it's not a black box to the simulators they might be able to take shortcuts; or they may not have reason to generate consciousness at all if they know what consciousness will do in a given scenario.



