The solution I'm suggesting is that nature does it in the really boring way: classically. It's almost like option 2, but the state is local.
This state is local and "inside" our universe, but we can't observe it. (A good analogy for something that is unobservable from inside the universe is the seed of a pseudo-random generator.)
The beauty of it is realising that Nature's simulator can be purely local and not subject to Bell inequalities, yet still reproduce the spurious quantum correlations when you calculate the probabilities.
Violating Bell inequalities is totally normal when you construct your theory such that Bell inequalities don't apply.
I guarantee you can't break (for example) the CHSH inequality [1] with such a set-up (assuming I've understood your description of what you're proposing), and encourage you to try (with a similar Python script).
An easy formulation of the inequality is in the CHSH game section of the same article [2].
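For concreteness, here is a small sketch of that game (my own illustration, not from either of our scripts): enumerating every deterministic local strategy shows the classical bound of 3/4, and shared randomness can't help since it is just a mixture of deterministic strategies.

```python
from itertools import product

# CHSH game: the referee sends uniformly random bits x to Alice and y to Bob,
# they answer a and b (without communicating), and they win iff a XOR b == x AND y.
# A deterministic local strategy is a function {0,1} -> {0,1}, encoded as (f(0), f(1)).
strategies = list(product([0, 1], repeat=2))

best = 0.0
for f in strategies:          # Alice's response function
    for g in strategies:      # Bob's response function
        wins = sum((f[x] ^ g[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)

print(best)  # 0.75 -- no local strategy beats 3/4; QM reaches cos^2(pi/8) ~ 0.854
```

Any local hidden-variable model that answers every trial is a probabilistic mixture of these sixteen strategy pairs, hence capped at 75%.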
The script I already gave you shows an even stronger argument than the CHSH inequality: convergence (in law) towards the QM probabilities. It can replicate all the probabilities given by QM for any alpha, beta polarizer settings, up to an epsilon that can be made vanishingly small.
QM breaks the CHSH inequality; this replicates the probabilities of QM, therefore it also breaks CHSH.
Of course I'm not banging my head against a mathematical theorem; I just found some leeway around it, based on the fact that conditional probabilities are not probabilities. Setting up the problem so that measurements/observations are defined as conditional probabilities (conditioned on an unobservable variable) suffices to make Bell's theorem inapplicable. It offers a whole class of solutions to the seemingly paradoxical Bell inequalities.
If I understand correctly what your script is doing, it emphatically does not meet the challenge I gave above (specifically it fails the "but the state is local" part of your comment).
This is because of the post-selection on line 44. This post-selection involves information about the measurement settings of both party A and party B, and is therefore a (very strongly) non-local thing.
To give a more explicit example, imagine I am trying to break the CHSH inequality I linked above. My response functions are set up so Alice and Bob return completely random answers (0 or 1) independent of what they get sent, and I add a line to the code much like your line 44, except it keeps only the rows where xy = a+b (mod 2), i.e. we filter so that we keep only the trials where we won the CHSH game.
Then we have completely trivially "won" the CHSH game with probability greater than 75%, entirely due to this magic non-local filtering.
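To make the point concrete, here is a minimal sketch of that cheat (variable names are mine, not your script's): completely random answers, then a filter that needs x, y, a and b all in one place.

```python
import random

random.seed(0)
N = 100_000
kept = kept_wins = 0
for _ in range(N):
    x, y = random.getrandbits(1), random.getrandbits(1)   # referee's questions
    a, b = random.getrandbits(1), random.getrandbits(1)   # completely random answers
    if (a ^ b) == (x & y):      # non-local filter: uses both settings AND both outcomes
        kept += 1
        kept_wins += 1          # by construction, every kept trial is a win

print(kept / N)            # ~0.5 of the trials survive the filter
print(kept_wins / kept)    # 1.0 -- a "perfect" CHSH score, purely from post-selection
```

Random answers win the game half the time, so the filter keeps about half the trials, and the post-selected win rate is exactly 100% by construction.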
That's the subtlety of this post-selection scheme: the state is completely local.
By construction (L37) sela only depends on particle a, and (L38) selb only depends on particle b.
The measurement of a depends only on sela (and not selb), and the measurement of b depends only on selb (and not sela). There is no exchange of information.
The universe has already given you the observations it needed to give you by line 38.
The simulator only used local information to simulate the universe up to this point.
Like in QM, once you have written down your measurements, you compare them to count coincidences. Sela just means you registered a click on detector a; Selb just means you registered a click on detector b. The logical_and is just you counting the observations as a coincidence or not, i.e. whether you got a click on both detectors simultaneously. You are free to be as non-local as you want here; it is of no importance with regard to the state of the universe, since the clicks already happened or didn't happen.
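Schematically, the structure I'm describing looks like this (the hidden variable and the click conditions below are illustrative placeholders of mine, not the actual ones from the script): each wing computes its outcome and its click flag from purely local data, and only afterwards are coincidences counted.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
lam = rng.uniform(0.0, 2.0 * np.pi, N)     # shared hidden variable (the "seed")
alpha, beta = 0.0, np.pi / 8               # freely chosen polarizer settings

# Each wing uses ONLY its own setting and the hidden variable it carries:
ca, cb = np.cos(lam - alpha), np.cos(lam - beta)
out_a, out_b = np.sign(ca), -np.sign(cb)   # local +/-1 outcomes
sela = np.abs(ca) > 0.5                    # local "click" condition on side a
selb = np.abs(cb) > 0.5                    # local "click" condition on side b

# Only now, after all the physics is done, do we count coincidences:
coinc = np.logical_and(sela, selb)
E_all = np.mean(out_a * out_b)             # correlation over every pair
E_coinc = np.mean((out_a * out_b)[coinc])  # correlation over coincidences only
print(E_all, E_coinc)
```

In this toy model the full-sample correlation at these settings is about -0.75, while the coincidence-only correlation is -1: the post-selected statistics are stronger than the unselected ones, even though sela and selb were each computed locally.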
Ok, I think I understand your intention with the code now. Sorry, I was wrong before. I think what you're talking about here is what gets called the "detection loophole" in most of the literature: the idea that if we detect only a small enough fraction of the events, they can be a sufficiently unrepresentative sample that we think we violated a Bell inequality even though the full statistics don't.
This has (in my opinion) been comprehensively addressed already. You can check out the references in the relevant section of the wiki article here,
but basically: if you detect enough of the possible events in the experiment, there is no way for nature to "trick" you in this way. "Enough" is 83% for the standard CHSH inequality, or 66% if you use a slightly modified one. Recent experiments (in the last decade or so) are substantially over the threshold for the detection loophole to be a problem. This is one of the earliest papers where this loophole was closed with space-like separated detectors, from 2015. In this paper they used entangled NV centers in diamond as their qubits of choice, and so essentially had zero events lost.
This is a second one from the same time. This one uses a more standard setup with photons and worked with about 75% detector efficiency for each party (well above the 66% required).
I therefore have a new challenge - break the CHSH inequality, while rejecting fewer than 17% of the events, or break the (easier) modified Bell inequality used in papers 2 & 3 while rejecting fewer than a third.
Edit: This is another, more recent paper where they use superconducting qubits and again lose no events
In your first paper, fig 1 (a), the "ready" box plays the role of the "selected" flag.
The universe tells you whether to select or not (it's not you missing events). It just tells you, without giving any info on the underlying state. You can build a ready box without problem, experimenters did, and that's all that is needed to break CHSH.
You've got to see it in an abstract way. Nature is fuzzy, and experimenters will always have to define box boundaries (spatial, temporal, and entanglement-pair selection boxes). This defining creates conditioning, which makes breaking Bell inequalities totally normal, expected and meaningless.
In a game of Chicken, you can get better correlation between your actions than would seemingly be possible, by using a random-variable oracle to coordinate. No information exchange is needed. Measurement devices are kind of playing a continuous version of this game.
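A toy version of that coordination trick (my own illustration): both drivers see the same random bit, like a traffic light, and each applies a purely local rule to it.

```python
import random

random.seed(1)
N = 100_000

# Independent mixed strategies: each driver goes with probability 1/2.
crashes_indep = sum(
    random.random() < 0.5 and random.random() < 0.5 for _ in range(N)
)

# Shared random oracle: both drivers see the same bit, but nothing about
# the other driver's action is ever communicated.
crashes_corr = 0
for _ in range(N):
    light = random.getrandbits(1)
    alice_goes = (light == 0)      # Alice's local rule
    bob_goes = (light == 1)        # Bob's local rule
    crashes_corr += alice_goes and bob_goes   # never both go

print(crashes_indep / N)   # ~0.25: independent randomness crashes a quarter of the time
print(crashes_corr)        # 0: perfectly anticorrelated actions, zero communication
```

The actions end up perfectly anticorrelated, something independent randomization can never achieve, yet no information was exchanged between the players.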
It's not remotely the same as the ready box, because the ready box sends its signal before the measurement directions have been chosen.
It would be equivalent to the ready box if your filtering happened without any reference to the measurement choices or outcomes.
If you're still unhappy with the role of the ready box, we can instead talk about either of the two purely photonic experiments, which didn't use anything similar.
> The universe tells you whether to select or not (it's not you missing events).
In your numerics it is exactly missing events: there are a bunch of events and you post-select to keep only some of them. If you mean a different model, you're going to need a Python script which does something else.
> Nature is fuzzy and experimenters will always have to define box boundaries (spatial, temporal, and entanglement-pair selection boxes)
Sure, but in each of the experiments I linked, the selection loses a small enough fraction of the events that the detection loophole is closed.