It's circular reasoning, hidden in the definition of your assumptions.
By not clearly defining what a measurement is and what observations are.
You must let the cat step out of the box your definitions put you in.
You have infinite freedom in your choice of definitions; listing assumptions creates a false dichotomy, especially when doing so ends up excluding the most probable assumption: locality.
Preserve locality, and find another self-consistent theory which properly defines what a measurement is according to it, rather than taking measurement and observation as axioms.
Will you grant me that it is at least possible to derive Bell's inequality by listing out a complete set of assumptions (including assumptions that define what a measurement is and what observations are)?
Of course you personally may disagree with some of these axioms (indeed, if you take Bell's theorem seriously you must), but certainly it is possible to list them, and thereby derive Bell's inequality?
Bell's theorem is a theorem: if the hypotheses hold, the conclusion must follow. That's math. Everything is fine with it. (The inequalities are a reformulation of the Bonferroni inequalities, or Boole's inequality, by the way.)
You've got to reframe the problem so that Bell's theorem doesn't apply. When you build your theory, if you manage to define what a measurement is so that you don't satisfy the hypotheses of Bell's theorem, you avoid its conclusions.
One of Bell's theorem's implicit hypotheses is that measurements/observations are probabilities, so by defining measurement instead as a conditional probability, you avoid being subject to Bell's inequalities.
It's inductive reasoning: you don't get truth, you only get self-consistency, and a model that looks much nicer than QM.
> You've got to reframe the problem so that Bell's theorem doesn't apply. When you build your theory, if you manage to define what a measurement is so that you don't satisfy the hypotheses of Bell's theorem, you avoid its conclusions.
This is (in my opinion) a bad way of explaining how the standard reasoning goes. We start with a list of assumptions, we prove this inequality which, it turns out, is not satisfied, and we reject (at least) one of our assumptions. There is no crackpottery here; this is the norm.
> by defining measurement instead as a conditional probability
This sounds like it probably doesn't get you anywhere, but I'll bite: what are we conditioning on? In the standard formulation of Bell's theorem the probabilities are conditional on the "hidden variable" we are assuming exists, as well as any relevant measurement settings, but it sounds like you're imagining something wilder than that.
The local hidden state. But you don't get to set it from inside the universe when you do an experiment (this local hidden state is unobservable).
From inside the universe, everything behaves classically, pseudo-randomly, based on the local hidden state.
But because you don't get to set the local hidden state during your experiment, if you want to calculate the probabilities you have to integrate over the possible values of the unknown hidden state, and this is what allows you to recover the strange-looking quantum correlations.
Doing repeated experiments inside a universe means picking a different initial local hidden state each time (because it's unobservable).
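As a minimal sketch of that structure (the outcome rule and names below are toy stand-ins of mine, not the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def outcome(setting, lam):
    # Local, deterministic rule: the result depends only on the local
    # polarizer setting and the (unobservable) hidden state lam.
    return 1 if np.cos(2 * (setting - lam)) > 0 else -1

def one_run(alpha, beta):
    lam = rng.uniform(0, np.pi)  # each repetition picks a fresh, unknown hidden state
    return outcome(alpha, lam), outcome(beta, lam)

# From inside the universe we never see lam, so to get probabilities we
# average over many repetitions, i.e. integrate over the unknown state.
results = [one_run(0.0, np.pi / 8) for _ in range(100_000)]
print("E(A*B) ≈", np.mean([a * b for a, b in results]))
```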
[Spoiler ahead]
The original idea is not from me; if you want the nitty-gritty details, look at the work of Marian Kupczynski (Closing the Door on Quantum Nonlocality, https://philarchive.org/archive/KUPCTDv1), or his more recent works.
I think, ultimately, there are only 3 possible explanations for the paradoxes of the quantum world: 1) superdeterminism (everything, including our choices in quantum experiments today, was fully determined at the instant of the Big Bang), 2) something "outside" our observable reality acting as a global hidden variable (whether something like the bulk in brane cosmology, or whatever is running the simulation in simulation theory), or 3) emergent spacetime (if space and time are emergent phenomena, then locality and causation are not fundamental).
You seem to be suggesting something similar to option 2. Or am I misunderstanding?
The solution I'm suggesting is that nature does it in the really boring way: classically. It's almost like option 2, but the state is local.
This state is local and "inside" our universe, but we can't observe it. (A good analogue for things that are unobservable from inside the universe is the seed of a pseudo-random generator.)
The beauty of it is just realising that Nature's simulator can be purely local and yet not be subject to Bell Inequalities, while still reproducing the spurious quantum correlations when you calculate the probabilities.
Violating Bell Inequalities is totally normal when you construct your theory such that Bell Inequalities don't apply.
I guarantee you can't break (for example) the CHSH inequality [1] with such a set-up (assuming I've understood your description of what you're proposing), and encourage you to try (with a similar Python script).
An easy formulation of the inequality is in the CHSH game section of the same article [2].
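For concreteness, here's a minimal sketch of that game (my own toy code, not yours): the referee sends independent random bits x and y, each player answers using only what they can see locally, and they win iff a XOR b = x AND y. No local strategy of this form averages above 75%:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
wins = 0

for _ in range(n):
    x, y = rng.integers(0, 2, size=2)  # referee's independent random questions
    a, b = 0, 0                        # a deterministic local strategy: always answer 0
    wins += int((a ^ b) == (x & y))    # win condition: a XOR b = x AND y

print("CHSH game win rate ≈", wins / n)  # about 0.75; no local strategy does better
```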
The script I already gave you shows an even stronger argument than the CHSH inequality: convergence (in law) towards the QM probabilities. It can replicate all the probabilities given by QM for any alpha, beta polarizer settings, up to epsilon, where epsilon can be made vanishingly small.
QM breaks the CHSH inequality; this replicates the probabilities of QM, therefore it also breaks CHSH.
Of course I'm not banging against a math-theorem wall; I just found some leeway to go around it, based on the fact that conditional probabilities are not probabilities. Setting up the problem such that measurements/observations are defined as conditional probabilities (conditioned on an unobservable variable) suffices to make Bell's theorem not applicable. It offers a whole class of solutions to the seemingly paradoxical Bell inequalities.
If I understand correctly what your script is doing, it emphatically does not meet the challenge I gave above (specifically it fails the "but the state is local" part of your comment).
This is because of the post-selection on line 44. This post-selection involves information about the measurement settings of both party A and party B, and is therefore a (very strongly) non-local thing.
To give a more explicit example: imagine I am trying to break the CHSH inequality I linked above. My response functions are set up so that Alice and Bob return completely random answers (0 or 1) independent of what they get sent, and I add a line to the code much like your line 44, except it just keeps the lines where xy = a+b (mod 2), i.e. we filter so that we keep only the trials where we won the CHSH game.
Then we have completely trivially "won" the CHSH game with probability greater than 75%, entirely due to this magic non-local filtering.
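A toy sketch of exactly that (my own code, not the script under discussion), to make the point explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

x = rng.integers(0, 2, n)  # Alice's question
y = rng.integers(0, 2, n)  # Bob's question
a = rng.integers(0, 2, n)  # Alice's answer: pure noise, ignores x
b = rng.integers(0, 2, n)  # Bob's answer: pure noise, ignores y

won = ((a + b) % 2) == (x & y)   # CHSH game winning condition
keep = won                       # post-select on winning: needs x, y, a AND b, so non-local

print("win rate before filtering:", won.mean())        # ~0.5
print("win rate after filtering: ", won[keep].mean())  # 1.0, purely an artifact of the filter
```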
That's the subtlety of this post-selection scheme; the state is completely local:
By construction, sela (line 37 of the script) only depends on particle a, and selb (line 38) only depends on particle b.
The measurement of a only depends on sela (and not selb), and the measurement of b only depends on selb (and not sela). There is no exchange of information.
The universe has already given you the observations it needed to give you by line 38.
The simulator only used local information to simulate the universe up to this point.
Like in QM, once you have written down your measurements, you compare them to count coincidences. Sela just means you registered a click on detector a, and selb just means you registered a click on detector b. The logical_and is just you counting the observations as a coincidence or not, i.e. whether you got a click on both detectors simultaneously. You are free to be as non-local as you want here; it is of no importance with regard to the state of the universe, since the clicks have already happened or not happened.
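Schematically, the structure I mean is something like this (the click rule below is a stand-in of mine, not the actual lines 37-38 of the script):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

phi = rng.uniform(0, np.pi, n)   # hidden phase carried by each pair, set at the source
alpha, beta = 0.0, np.pi / 8     # local polarizer settings

# Each detector decides locally whether it clicks, using only its own
# setting and its own particle's hidden phase.
sela = rng.uniform(0, 1, n) < np.cos(alpha - phi) ** 2   # click on detector a
selb = rng.uniform(0, 1, n) < np.cos(beta - phi) ** 2    # click on detector b

# Coincidence counting happens afterwards, when the two click records are
# compared; this step is pure bookkeeping on clicks that already happened.
coincidence = np.logical_and(sela, selb)
print("coincidence rate:", coincidence.mean())
```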
Ok, I think I understand your intention with the code now. Sorry, I was wrong before. I think what you're talking about here is what gets called the "detection loophole" in most of the literature: the idea that if we detect only a small enough fraction of the events, they can be a sufficiently unrepresentative sample that we think we violated a Bell inequality even though the full statistics don't.
This has (in my opinion) been comprehensively addressed already. You can check out the references in the section of the wiki article here
but basically if you detect enough of the possible events in the experiment there is no way for nature to "trick" you in this way; "enough" is 83% for the standard CHSH inequality, or 66% if you use a slightly modified one. Recent experiments (in the last decade or so) are substantially over the threshold for the detection loophole to be a problem. This is one of the earliest papers, from 2015, where this loophole was closed with space-like separated detectors. In this paper they used entangled NV centers in diamond as their qubits of choice, and so essentially had zero events lost.
This is a second one from the same time. This one uses a more standard setup with photons and worked with about 75% detector efficiency for each party (well above the 66% required).
I therefore have a new challenge - break the CHSH inequality, while rejecting fewer than 17% of the events, or break the (easier) modified Bell inequality used in papers 2 & 3 while rejecting fewer than a third.
Edit: This is another, more recent paper where they use superconducting qubits and again lose no events.
In your first paper, fig. 1(a), the "ready" box plays the role of the "selected" variable.
The universe tells you whether to select or not (it's not you missing events). It just tells you, without giving any info about the underlying state. You can build a ready box without problem, and experimenters did, and that's all that is needed to break CHSH.
You've got to see it in an abstract way. Nature is fuzzy, and experimenters will always have to define box boundaries (spatial, temporal, and entanglement-pair selection boxes). This defining creates conditioning, which makes breaking Bell inequalities something totally normal, expected and meaningless.
In a game of Chicken, you can get better correlations between your actions than would seemingly be possible, by using a random-variable oracle to coordinate. No information exchange needed. Measurement devices are kind of playing a continuous version of this game.
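A toy illustration of that oracle idea (hypothetical code, just for the analogy):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Two Chicken players want to anti-correlate (one swerves, the other goes straight).
# With independent coin flips they pick the same action half the time; reading a
# common random oracle lets them coordinate every time, with no message exchanged.
oracle = rng.integers(0, 2, n)
p1_swerves = oracle == 1
p2_swerves = oracle == 0

print("both choose the same action:", np.mean(p1_swerves == p2_swerves))  # 0.0
```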
It's not remotely the same as the ready box, because the ready box sends its signal before the measurement directions have been chosen.
It would be equivalent to the ready box if your filtering happened without any reference to the measurement choices or outcomes.
If you're still unhappy with role of the ready box we can instead talk about either of the two purely photonic experiments which didn't use anything similar.
> The universe tells you whether to select or not (it's not you missing events).
In your numerics it is exactly missing events: there are a bunch of events and you post-select to keep only some of them. If you mean a different model, you're going to need a Python script which does something else.
> Nature is fuzzy, and experimenters will always have to define box boundaries (spatial, temporal, and entanglement-pair selection boxes)
Sure, but in each of the experiments I linked the selection in the experiments loses a small enough fraction of the events that the detection loophole is closed.
> Everything up to the [spoiler ahead] in this comment is (as far as I can tell) exactly how things work in standard formulations of Bell's inequality. There's nothing weird or crackpot there.
Moreover, to clarify, it's not necessary that the hidden variables be measurable or that you can set them. So a system like the one you described must follow Bell's inequality if all the other hypotheses are true.
I read the code and it looks like an accurate implementation of the model proposed in the paper.
From the paper you linked:
> However, the expectation values E(X1X2), displayed in (13) contain a factor 1/2, meaning that they do not violate CHSH inequality.
I agree with that part. The model should not violate Bell's inequality or the equivalent version.
> The agreement with quantum predictions is obtained only after the “photon identification procedure”, which selects, from the raw data, final data samples.
The selection rule is the weird part. It's described in equations 7 and 8.
x := sign(1 + cos[2(a − φ)] − 2 · r1)
where r1 is a uniform random value between 0 and 1.
a is the angle of the polarizer
φ is the secret variable that is the angle of the photon. (QM says that this type of entangled photon has no secret angle; this model assumes that each photon has a hidden variable, the secret value φ.)
So far so good, this calculation gives the expected result if you assume that φ is chosen from a uniform distribution between 0° and 360°.
v := r2 |sin[2(a − φ)]|^d (Vmax − Vmin) − Vmax
selected := (v ≤ V)
where r2 is a uniform random value between 0 and 1.
With the numbers in your program
v := r2 |sin[2(a − φ)]|^2 (10 − 0) − 10
selected := (v ≤ -9.99)
that is equivalent to
selected := r2 |sin[2(a − φ)]|^2 ≤ 0.001
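In code, the two rules as I read them (my own transcription, with d = 2, Vmax = 10, Vmin = 0, V = -9.99 as above and an arbitrary polarizer angle; this is not the original script):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

a = np.pi / 6                          # polarizer angle (arbitrary example value)
phi = rng.uniform(0, 2 * np.pi, n)     # secret angle of the photon, uniform in the model
r1 = rng.uniform(0, 1, n)
r2 = rng.uniform(0, 1, n)

# Equation 7: the +/-1 measurement outcome, which on its own reproduces Malus's law
x = np.sign(1 + np.cos(2 * (a - phi)) - 2 * r1)

# Equation 8: the selection rule, with the numbers quoted above
v = r2 * np.abs(np.sin(2 * (a - phi))) ** 2 * (10 - 0) - 10
selected = v <= -9.99                  # equivalent to r2 * sin^2(2(a - phi)) <= 0.001

print("fraction of photons selected:", selected.mean())
print("mean outcome among selected: ", x[selected].mean())
```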
I don't remember anything similar, and I can't imagine what it means experimentally.
Most of the time r2 is not tiny, so most of the time this condition means that the sine is tiny, which means that the secret angle of the photon is almost aligned or almost orthogonal to the polarizer.
So this is a device that can measure the secret angle of the photon. This is not a real device, so it can't be proposed as an alternative explanation of the violation of Bell's inequality.
You may be wondering why I claim it's not a real device.
If you have a detector of polarization, once you fix the angle 'a', you can't distinguish:
1) Unpolarized light, which is in particular the type of light used in a Bell's inequality test where the state is (|00> + |11>)/sqrt(2) or, in other versions, (|01> + |10>)/sqrt(2), where 0 is horizontal and 1 is vertical; or a uniform random value of φ in the model of the paper.
2) Light polarized at 45° to the detector's angle, which is like a constant φ in both models.
In both cases, you detect 50% of the photons.
If you use the selection device of this paper,
1) with unpolarized light you will get selections when r2 is very small or when φ is almost parallel or orthogonal to the angle a.
2) with light polarized at 45° you will get selections only when r2 is very small.
So with light polarized at 45° the number of events will be much smaller than with unpolarized light.
In particular, if you have the source of unpolarized light and the detector, adding a polarizer at 45° in the middle will reduce the number of events in the first case to 1/4 and in the other to almost 0.
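One can check that claim numerically with the same transcription of the selection rule (again a sketch of mine, with an arbitrary detector angle):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
a = 0.0   # detector angle

def selection_rate(phi):
    # Selection rule (eq. 8) with d = 2, Vmax = 10, Vmin = 0, V = -9.99 as above
    r2 = rng.uniform(0, 1, n)
    v = r2 * np.abs(np.sin(2 * (a - phi))) ** 2 * 10 - 10
    return np.mean(v <= -9.99)

phi_unpolarized = rng.uniform(0, 2 * np.pi, n)   # case 1: uniformly random secret angle
phi_45deg = np.full(n, np.pi / 4)                # case 2: constant secret angle, 45 deg from a

print("selection rate, unpolarized:  ", selection_rate(phi_unpolarized))
print("selection rate, 45 deg:       ", selection_rate(phi_45deg))
```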
It should be 0.01; you have added an extra 0, but that's not the point. You can pick any V, but the bigger it is, the more quantum-like the correlations are.
> secret angle
Also called the "phase", this is the thing there is to "see": it has a definite value for a single experiment, but every time you do the experiment it has a different value. It behaves like a random variable, and that's what allows you to replicate the behavior of QM by generating random numbers. That's the subtlety that makes it so that Bell's theorem doesn't apply.
> So this is a device that can measure the secret angle of the photon.
It uses the secret angle of the photon to give you something observable, but doesn't leak info about the state. ("It mixes trajectory space" so that each trajectory behaves the same, but trajectories are independent; each trajectory just cycles through all the possible hidden states, like the seeds of a linear congruential generator.)
For a definite (Monte Carlo) trajectory, the photon will be definitely absorbed, or not absorbed (or maybe absorbed later), but the simulator has a state and knows unambiguously how to evolve it. You as an observer, though, will have to define measurements more ambiguously (due to the Heisenberg uncertainty principle, but that's not the point here).
Another way to see what we are trying to do is to factorize the QM integral.
In QM you have proba = integral( wavefunction ),
You introduce a random variable and condition on it by writing it as proba = integral( integral( wavefunction | hidden_state) dhidden_state )
The point being that you can be smart in the choice of the hidden_state such that the inner integral behaves classically: you push the quantum correlations to the outside integral.
If you want to calculate the probability, you use Monte Carlo for the outside integral, and classical simulation for the inner one.
But once it's written in such a way, you realise that if you want to simulate a universe (like Nature does), you don't have to simulate all trajectories: any one will do, as they are all independent from each other.
From inside the universe, because you don't have the initial phase, if you want to calculate the probability you have to do a Monte Carlo.
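In sketch form, with a toy outcome rule of mine standing in for the real conditional wavefunction:

```python
import numpy as np

rng = np.random.default_rng(7)

def inner_classical(alpha, beta, hidden_state):
    # Inner "integral": once the hidden state is fixed, everything is local
    # and classical, and the outcomes are definite (toy rule, not the real model).
    a = 1 if np.cos(2 * (alpha - hidden_state)) > 0 else -1
    b = 1 if np.cos(2 * (beta - hidden_state)) > 0 else -1
    return a, b

# Simulating a universe: one trajectory with one hidden state is enough.
print(inner_classical(0.0, np.pi / 8, rng.uniform(0, np.pi)))

# Calculating probabilities from inside: Monte Carlo over the unknown initial
# phase (the outer integral), because that phase is unobservable.
runs = [inner_classical(0.0, np.pi / 8, rng.uniform(0, np.pi)) for _ in range(100_000)]
print("P(A == B) ≈", np.mean([a == b for a, b in runs]))
```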
> Also called "phase" this is the thing there is to "see" : It has a definite value for a single experiment, but every time you do the experiment it has a different value. It behaves like a random variable and that's what allows you to replicate the behavior of what QM does by generating random numbers. That's the subtlety that makes it so that Bell's theorem don't apply.
That's a standard local hidden variable theory. Bell's theorem applies.
The problem is that the device used in the article gives the wrong prediction for a beam with 50% vertically (φ=0) polarized light and 50% horizontally polarized light (φ=90°). What is the ratio of selected photons as a function of the angle a?
Everything up to the [spoiler ahead] in this comment is (as far as I can tell) exactly how things work in standard formulations of Bell's inequality. There's nothing weird or crackpot there.
Your numerical code is impossible for me to read without some basic idea of what you're trying to show, but I'd like to point out that numpy has functions like np.radians and np.deg2rad to convert from degrees to radians; you don't have to make your own.
Why not?
I could see that it might not if you are not clear about your assumptions.