It is just weird that papers like this can be published. "Deep learning signal prediction effectively eliminated EMI signals, enabling clear imaging without shielding." - this means they have found a way to remove random noise, which, if true, should be the truly revolutionary claim in this paper. If the "EMI" is not random you can just filter it, so you don't need what they are doing; and it can't be random, because whatever they are doing can "predict" the noise - they even use that word in the sentence. They are claiming that they can replace physical filtering of noise before it corrupts the signal (shielding) with software "removal" of noise after it has already corrupted the signal. This is simply not possible without loss of information (i.e. resolution). The images they get from standard Fourier transform reconstruction are still pretty noisy, so on top of that they "enhance" the reconstruction by running it through a neural net. At that point they don't need the signal - just tell the network what you want to see. The fact that there are no validation scans using known phantoms is telling.
I'm a professional MR physicist. I genuinely think the profession is hugely up the hype curve with "AI" and, to a far lesser extent, low field. It's also worth saying that the rigorous, "proper" journal in the field is Magnetic Resonance in Medicine, run by the International Society for Magnetic Resonance in Medicine -- and that nowadays papers in Nature or Science tend to be at the extreme gimmicky end of the spectrum.
A) Many MR reconstructions work by having a "physics model", typically in the form of a linear operator, acting upon the acquired data. The "OG" recon, an FT, is literally just a Fourier matrix acting on the data. Then people realised that it's possible to i) encode lots of artefacts, and ii) undersample k-space while using the spatial information from different physical RF coils, and shunt both these things into the framework of linear operators. This makes it possible to reconstruct the image as an optimisation problem -- and Tikhonov regularisation became popular -- so you have an equation like argmin_y ||yhat - X_1 X_2 X_3 ... X_n y||^2 + lambda ||Laplace(y)||^2 to minimise, which genuinely does a fantastic job, usually at the expense of non-normal noise in the image. "AI" can outperform these algorithms a little, usually by having a strong prior on what the image is. I think it's helpful to consider this as some sort of upper bound on what there is to find. But as a warning, I've seen images of sneezes turned into knees with torn anterior cruciate ligaments, a matrix of zeros turned into basically the mean heart of a dataset, and a fuck ton of people talking bollocks empowered by AI. This isn't starting on diagnosis -- just image recon. The major driver is reducing scan time (= cost), required SNR (= sqrt(scan time)), and/or, rarely, measuring new things that take too long. This almost falls into the second category.
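To make that argmin concrete, here is a toy 1-D sketch of my own (nothing to do with the paper's pipeline): an undersampled Fourier matrix standing in for the X_i chain, a discrete Laplacian as the regulariser, solved via the normal equations.

```python
# Toy 1-D Tikhonov-regularised reconstruction (my own sketch, not the paper's method).
import numpy as np

n = 64
rng = np.random.default_rng(1)

# Ground-truth "image" and forward operator.
y_true = np.zeros(n)
y_true[20:36] = 1.0
F = np.fft.fft(np.eye(n)) / np.sqrt(n)                    # Fourier encoding matrix
keep = np.sort(rng.choice(n, size=n // 2, replace=False))
A = F[keep, :]                                            # undersampled k-space operator
noise = 0.05 * (rng.normal(size=n // 2) + 1j * rng.normal(size=n // 2))
yhat = A @ y_true + noise                                 # measured data

# Discrete Laplacian as the regularising operator.
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# argmin_y ||yhat - A y||^2 + lam * ||L y||^2, solved via the normal equations.
lam = 0.1
y_rec = np.linalg.solve(A.conj().T @ A + lam * (L.T @ L), A.conj().T @ yhat).real

print("relative error:", np.linalg.norm(y_rec - y_true) / np.linalg.norm(y_true))
```

The point is just that the whole thing is a penalised linear least-squares problem; the lambda knob trades residual noise against smoothing.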
The main conference in the field has just happened and, ironically, the closing plenary was about the risks of AI, as it happens.
B) Low field itself has a few genuinely good advantages. The T2 is longer, the risks to patients with implants are lower, and the machines may be cheaper to make. I'm not sold on that last one at all. I personally think that the bloody cost of the scanner isn't the few km of superconducting wire in it -- it's the tens of thousands of PhD-educated hours of labour that went into making the thing, and the large infrastructure requirements, to say nothing of the requirements of the people who look at the pictures. There are about 100-250k scanners in the world and they mostly last about a decade in an institution before being recycled -- either as niobium-titanium or as a scanner on a different continent (typically). Low field may help with siting and electricity, but comes at the cost of concomitant field gradients, reduced chemical shift dispersion, a whole set of different (complicated) artefacts, and the same load of companies profiteering from them.
Would it be easier to deploy devices like this to developing countries without the infrastructure to support liquid helium distribution? I imagine a device that is much simpler with respect to exotic cooling, and with less demanding material distribution requirements, is a plus. Couple that with the scarcity and non-renewable nature of helium, and maybe using devices like this at scale for gross MRI imagery makes sense?
The AI used here, as I read it, is a generative approach trying specifically to compensate for EMI artifacts rather than a physics model, and it likely wouldn't be doing macro changes like sneezes to knees, no?
Zero-boil-off "dry" magnets have been widely used for the last decade -- we engineered away the thousands of litres of liquid helium in exchange for bigger electricity bills and some added complexity (and arguably cost). They basically put the cryocompressor/cold head on a large heatsinked plate and use helium gas as a working fluid to cool it and, through conduction, the rest of the magnet. The supercon wire has a critical T/B/Ic surface and (to my knowledge) they essentially accept a worse Ic in exchange for a higher Tc.
The cold head vibration can introduce a bit more B0 drift per day, but it's not practically a problem.
Regarding artefacts, one of the other reasons that MRI rooms are expensive are the Faraday cages. They do help. Not just in terms of noise floors but because there tends to be a lot of intermittent RF transmission from people like paramedics. Did you know a) that the international mayday frequency is 121.5 MHz, b) that overhead helicopter flights may transmit with kW of RF on that frequency, c) that the Larmor frequency of protons at ~2.9 T is 121.5 MHz, d) that Siemens "3T" magnets are routinely around 2.9 T, and e) that the voltage of the signal you detect in MR is micro- to millivolts at best? I've seen spurious peaks in spectra from this.
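For anyone who wants to check the arithmetic (my numbers, not the commenter's), the proton gyromagnetic ratio does put 121.5 MHz right in the neighbourhood of a real-world "3 T" magnet:

```python
# Back-of-the-envelope Larmor arithmetic (assumed constant: gamma/2pi ~= 42.577 MHz/T for protons).
gamma_bar = 42.577            # MHz per tesla
f_mayday = 121.5              # MHz, international aeronautical emergency frequency
print(f_mayday / gamma_bar)   # ~2.85 T: the field at which protons resonate exactly on 121.5 MHz
print(gamma_bar * 2.89)       # ~123 MHz: a nominal "3 T" magnet actually running around 2.89 T
```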
The DL method the paper talks about "may work", but as the OP says this is deeply unsatisfactory for a whole host of reasons and is, in my overly sarky opinion, a bit like fixing a wall with rising damp by putting a television in front of it showing a beautiful, high resolution picture of a brick wall in the same colour.
Perhaps a standard bit of kit for an imaging room ought to be a receiver at the operating frequency, outside the room, that can pause the sequence when a potential jammer is active and log the event, so that you could make a report to the relevant authorities (and perhaps encourage them to keep their transmitters off near your facility).
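Very roughly, something like the sketch below -- where read_power_dbm and the scanner pause/resume hooks are entirely made-up stand-ins, since no real scanner or SDR API is being described here:

```python
# Rough sketch only: read_power_dbm() and the scanner pause/resume hooks are
# hypothetical placeholders, not any real scanner or SDR interface.
import time
import logging

logging.basicConfig(filename="rf_interference.log", level=logging.INFO)

THRESHOLD_DBM = -70.0   # arbitrary trip level for the out-of-room monitor antenna

def watch(scanner, read_power_dbm, poll_s=0.05):
    """Pause the sequence while RF at the operating frequency is hot, and log the event."""
    while True:
        p = read_power_dbm()
        if p > THRESHOLD_DBM:
            logging.info("possible jammer at %.1f dBm, pausing sequence", p)
            scanner.pause()
            while read_power_dbm() > THRESHOLD_DBM:
                time.sleep(poll_s)
            scanner.resume()
            logging.info("interference cleared, sequence resumed")
        time.sleep(poll_s)
```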
Pausing the sequence is not much of an option when contrast has just been administered, either, I guess.
(I suppose if the signal weren't so hot that it was saturating the ADCs there might be some opportunity to subtract it off... but that's starting to sound like another ten thousand of those PhD-educated hours of labour mentioned up thread)
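In its simplest possible form, "subtract it off" could look like the toy below (entirely my own sketch, with a fake FID and a hypothetical reference antenna, not anything from the paper): estimate how strongly the reference channel couples into the MR channel by least squares, then subtract.

```python
# Toy reference-channel EMI subtraction (my own signal model, nothing from the paper).
import numpy as np

rng = np.random.default_rng(2)
n = 2048
t = np.arange(n)

fid = np.exp(1j * 2 * np.pi * 0.01 * t) * np.exp(-t / 800)     # fake MR signal (FID)
emi = rng.normal(size=n) + 1j * rng.normal(size=n)              # interference
alpha = 0.7 - 0.2j                                              # coupling into the MR coil

measured = fid + alpha * emi                                    # imaging channel
reference = emi + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # reference antenna

# Least-squares estimate of the coupling coefficient, then subtract.
alpha_hat = np.vdot(reference, measured) / np.vdot(reference, reference)
cleaned = measured - alpha_hat * reference

print(abs(alpha_hat - alpha))                                   # small coupling error
print(np.linalg.norm(cleaned - fid) / np.linalg.norm(fid))      # residual EMI, much reduced
```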
Except there are other uses for an MRI, and something that doesn't require superconductors would be pretty awesome and deployable to places that lack the infrastructure to support a complex machine depending on near-absolute-zero temperatures and the associated complexities.
The reason why they know about it at all is because it was tracked using military equipment designed to track potential threats in the air. They know the trajectory of the fireball with very high accuracy and they can model where exactly it impacted the ocean.
Right, but they had the exact coordinates of that F-35 that fell off the carrier into the Med, which is a far shallower sea, and it still took weeks to find it.
I'd like to see them succeed, but it's just too unlikely they will find anything. Might get a few detailed undersea maps out of the effort though, but that's about it.
Consciousness comes indirectly into quantum mechanics through the measurement postulate. You have to assume something special about measurement which breaks unitarity. Not all physicists are convinced, but it is the traditional way quantum mechanics is taught (all of the "transition probabilities" are really measurement probabilities). The problem is that you can't tell if your apparatus caused the collapse or if it was you that measured the apparatus that then caused the collapse. And because you can only know that something was "observed" when a conscious physicist makes that final readout, you end up with a solipsistic situation where the only thing you can be certain of is that it was that last conscious observer that could have for sure collapsed the wave function. It could be that the collapse happened in the apparatus itself, but as far as I know you have no way of telling the difference between it projecting the wave function of the measured system and you projecting the wave function of the apparatus into a pointer state. Basically consciousness sneaks in through the fact that there is no definition of what constitutes a measurement - what makes one physical process a measurement as opposed to all the others that are unitary - and the only thing you are certain of is that that final "conscious" readout should count as a measurement.
This is misleading at best. A measurement has nothing to do with a conscious observer taking the measurement. An automated experiment that could note down the result of an experiment would note down the result without any conscious observer having to check in and the result would be the exact same.
Traditional QM requires an outside observer but has nothing to do with consciousness. Consider the collision events recorded at the LHC: the vast majority have never been looked at by a human.
> Consider the collision events recorded at the LHC: the vast majority have never been looked at by a human
What tells you those records aren't in a superposition?
The only way to determine anything about the records is to observe them, and at that point you have a human (= consciousness) in the loop. Anything you put between yourself and the experiment might be in a superposition until you observe it.
> What tells you those records aren't in a superposition?
This! Now substitute the record-taking device and record-keeping substrate with a human brain and this stays true. To rephrase your question:
"What tells you the state of your brain isn't in superposition?"
A human in the loop is not the end of the story, it's just yet another interaction in the quantum system.
When we experience decoherence of a quantum system, we interpret it as if something happened to the thing we observe, yet what actually happens is that something happened to us (and in turn to every system that observes us, and so on).
This is all utterly unintuitive for us, who experience the world through those brains, who feel being there, conscious in the moment. That feeling is one of our strongest direct perceptions of the world, and yet it's affected by the mechanisms of the physical reality. It's hard to accept though; it runs counter to many deep intuitions we have about ourselves, our inner lives, our identity, our values, our belief systems.
Wouldn't it be valid to consider the LHC and its unobserved data as a superposition of all their possible states?
Note that the recording of information has effects outside the system, even if no human looks at the data - recording a zero or a one will require different levels of power that, while they average to a mostly constant power consumption, are there nevertheless.
They have not been explicitly looked at, but they have been observed in the QM sense. That’s because decoherence has spread those records to conscious observers.
If the results had been kept completely isolated from all people, you could still say a measurement hasn’t occurred.
That's just not what traditional QM says. Traditional QM separates quantum systems (microscopic) from measurements devices (macroscopic). A measurement occurs when the measurement device interacts with the quantum system. You do QM by predicting the results of these measurements. Consciousness does not enter into it.
So why doesn't a measurement and/or decoherence occur when double slits or a half-silvered mirror interact with a passing particle? Are they not macroscopic objects which interact with the quantum information?
Perhaps it's only a "measurement" if the (alleged) particle has nowhere else to go after the interaction. But how does the (alleged) particle know which macroscopic interactions are terminal and should be counted as "measurements" and which are part of the rest of the experiment?
There is no answer to this in QM. You can calculate the probabilities and you will get predictable answers, but there are still >20 interpretations of what is really happening, and they all disagree with each other in important ways.
Can it be considered that, until it causally affects something else (the observer), its state is not defined? Isn't it the causal relation that collapses the superposition?
I guess I got confused at "If he thought he was in a superposition he would believe in many worlds."
Do you mean "he can either think he can be in superposition or he cannot be, if he thinks he can be then he would believe in many worlds, which is incompatible with 'traditional' QM, which thus implies 'Traditional QM does not allow Wigner’s friend to be in superposition.'" ?
I don't think traditional QM ever explicitly disallowed a person from being in superposition; being in a superposition is just not something "decent people" do, I suppose.
Thus we needed to find other ways to come to terms with what we observe. But the math is the same. The rest is only metaphors that help us reconcile what we observe with how we feel inside.
Thus, many worlds is not antithetical to traditional QM.
Do you think that many quantum physicists during the 20th century accepted the possibility of living in a splitting multiverse?
I don't think that's historically accurate. If you read about how Everett's ideas were treated, it's pretty clear traditional QM is antithetical to many worlds.
> You can always consider your automated apparatus to be part of the system and hence governed by unitary evolution.
But then wouldn't an external measuring apparatus (or person) observe the system in a single state instead of a superposition? Isn't that the same as a series of nested systems, each measured and having the state recorded by an apparatus that's part of a system that encapsulates it?
How do you know it would be the same? That's an assumption. The measurement problem is a problem because quantum mechanics postulates two types of processes: "normal" unitary evolution and "measurements" that project the wave function onto eigenstates of the measured operator. This is obviously inconsistent, since there is no definition, formal or operational, of which processes are measurements and which aren't. How do you know that your measurement apparatus isn't just evolving unitarily (which is what you would expect if it were a "normal" system) until someone looks at it to read out the results? Consciousness enters only in the fact that for anyone to know what happened to the experiment, someone has to do the readout (otherwise you're just writing equations). At the point of readout you can't tell who or what did the collapse.
Many worlds interpretation of QM is that everything continues evolving unitarily, even you when you hear about the result. It's just that "being in a superposition" doesn't feel like listening to the garbled sound of someone simultaneously telling you that the result was positive and that it was negative, or looking at a blurry instrument screen reporting two results at once. Each component of you in the superposition feels like it got a single clear definitive measurement result. It feels the same as not being in a superposition.
It appears as if one of the main reasons for the almost a-priori rejection of the many-worlds interpretation lies in the words "many worlds". More often than not this metaphor gets in the way instead of helping. It seems as if you have to accept something "more", yet, at its core, the Everettian interpretation is the simplest pure consequence of QM. We just have to grapple with the psychological consequences of being a cog in the machine, and we devise further metaphors to help us talk about how it would "feel" to be in superposition.
Is there a better way to build intuitions about how an information-processing system would behave while being in superposition?
Things get complicated when we throw humans in the mix, perception and consciousness and all. But modelling even simpler machines and their "point of view" can be insightful.
In this day and age it shouldn't be hard to imagine the workings of a computer that uses computer vision algorithms to perceive its world and take action as a result, acting as a "causality amplifier". For example, imagine we feed it the output of some QM experiment and instruct it to output a description of what it "sees" (as we routinely do with cat-picture classification).
I assume it would be far less controversial to think about the unitary evolution of the wave function if the system being described is a QM experiment plus a computer rather than the same QM experiment and a human.
Yet there are many similarities. The output of this computer (the classification) would be in superposition, and when measured by us it would appear as if the wave function collapsed. But we could add another such computer to the mix and ask ourselves "does it also see the wave function collapsing?" Well, we can program it to take the measurement, record the answer, and then convey the answer to us (or to another computer down the chain). These "answers" are the "point of view" of the computer. It will "observe" decoherence yet it won't be decohered itself, as its own statement about whether it observed decoherence is itself in superposition and thus can be used as a further input to other machines witnessing subjective decoherence.
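If it helps, here is a bare-bones numpy toy of exactly that chain (my own sketch, nothing to do with brains or LHC records): a qubit "experiment" in superposition, a detector qubit that copies it with a CNOT, and a second detector copying the first. Everything stays unitary, yet anyone with access to only part of the chain is left with a state that has no coherence, i.e. what looks like a definite, collapsed outcome.

```python
# Toy model only: a qubit plus two "detector" qubits, all evolving unitarily.
import numpy as np

zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # the "experiment", in superposition

def cnot(n, ctrl, targ):
    """CNOT on n qubits (qubit 0 is the most significant bit of the index)."""
    U = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(2**n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[ctrl]:
            bits[targ] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

# qubit 0 = experiment, qubit 1 = detector A, qubit 2 = detector B
psi = np.kron(np.kron(plus, zero), zero)
psi = cnot(3, 0, 1) @ psi       # detector A "records" the experiment
psi = cnot(3, 1, 2) @ psi       # detector B "records" detector A's record

# Reduced state of the experiment alone: the off-diagonal coherence is gone,
# so anyone with access only to it sees what looks like a classical coin flip.
amps = psi.reshape(2, 2, 2)
rho_exp = np.einsum('abc,dbc->ad', amps, amps.conj())
print(np.round(rho_exp, 3))     # [[0.5, 0], [0, 0.5]]

# ...while the global three-qubit state is still a pure superposition.
print(np.round(psi, 3))         # (|000> + |111>)/sqrt(2)
```

The global state never collapses; the "collapse" only shows up in the reduced view of any single observer in the chain.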
> It's just that "being in a superposition" doesn't feel like listening to the garbled sound of someone simultaneously telling you that the result was positive and that it was negative, or looking at a blurry instrument screen reporting two results at once.
It seems like there are regularly articles saying stuff like "QM implies that both outcomes happen, but it's a longstanding mystery why we only see one outcome", as if they seriously expect your hypothetical to be the consequence of QM. I'm so frustrated at that because, as you say, seeing one outcome is exactly what you'd expect to see from inside a superposition.
It's almost as frustrating to see as it would be to see an article saying "Newton's theory of gravity says mass attracts mass, but it's a longstanding mystery why we haven't all fallen into the sun". The theory already has an answer for that if you follow the chain of consequences from it.
I’m not sure why the measurement problem is so difficult to understand for some; your explanation of it is very clear. You don’t have to believe in anything supernatural or mysterious to recognize there is a clear inconsistency here with “quantum systems sometimes evolve unitarily”.
In quantum computing, I can implement important effects using the measurement operation. For example, I can reduce the number of Toffoli gates used during an addition by measuring at particular places in the circuit. From this we can easily see that measurement must matter a lot in quantum mechanics, since its presence allows you to reduce the cost of certain tasks.
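As one concrete flavour of that (a standard measurement-based uncomputation trick, sketched by me in numpy rather than taken from any particular adder): an ancilla holding a AND b can be erased with a Hadamard, a measurement, and a classically-controlled CZ, instead of spending a second Toffoli on the uncompute.

```python
# Measurement-based uncomputation of a temporary AND (my toy sketch in numpy).
import numpy as np

rng = np.random.default_rng(0)

# Random 2-qubit input state on (a, b); ancilla starts in |0>.
psi_ab = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_ab /= np.linalg.norm(psi_ab)
state = np.kron(psi_ab, [1, 0])                 # basis order |a b anc>, anc least significant

# Toffoli: anc ^= a AND b (the gate we'd rather not pay for twice).
toffoli = np.eye(8)
toffoli[6, 6] = toffoli[7, 7] = 0
toffoli[6, 7] = toffoli[7, 6] = 1
state = toffoli @ state

# Uncompute the ancilla without a second Toffoli:
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.kron(np.eye(4), H) @ state           # Hadamard on the ancilla

p1 = np.linalg.norm(state[1::2]) ** 2           # probability of measuring anc = 1
m = int(rng.random() < p1)
state = state[m::2]                             # project onto the observed outcome...
state /= np.linalg.norm(state)                  # ...leaving a 2-qubit state on (a, b)

if m == 1:                                      # classically-controlled fix-up
    state = np.diag([1, 1, 1, -1]) @ state      # CZ on (a, b)

print(abs(np.vdot(psi_ab, state)))              # ~1.0: (a, b) restored, ancilla erased
```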
No one has postulated a "conscious measurement operation" with additional useful effects beyond the measurement defined in textbooks. (Well, okay, Roger Penrose says things that kinda sound like that, but that's pretty fringe and beyond the scope of quantum mechanics.) In fact, there's literally no known experiment that a person or a machine could perform that would distinguish a "conscious measurement" from the usual mechanical measurements quantum computers use. That is the sense in which consciousness has nothing to do with measurement.
IMO quantum mechanics really doesn't have anything to say about consciousness that wasn't already present in classical mechanics. Instead of saying "but how can an assemblage of gears experience an outcome" we're saying "but how can an assemblage of superposed gears experience an outcome". It's just the same hard-problem-of-consciousness confusion dressed in new clothing.
> Consciousness comes indirectly into quantum mechanics through the measurement postulate. You have to assume something special about measurement which breaks unitarity. Not all physicists are convinced, but it is the traditional way quantum mechanics is taught (all of the "transition probabilities" are really measurement probabilities). The problem is that you can't tell if your apparatus caused the collapse or if it was you that measured the apparatus that then caused the collapse. And because you can only know that something was "observed" when a conscious physicist makes that final readout, you end up with a solipsistic situation where the only thing you can be certain of is that it was that last conscious observer that could have for sure collapsed the wave function.
> Here we report matter wave interferometer experiments in which C70 molecules lose their quantum behaviour by thermal emission of radiation. We find good quantitative agreement between our experimental observations and microscopic decoherence theory. Decoherence by emission of thermal radiation is a general mechanism that should be relevant to all macroscopic bodies.
> We observe that at temperatures below 2,000 K the emission rate is negligible, whereas at higher temperatures the molecules may emit photons whose wavelengths are comparable to (or even smaller than) the maximum path separation of ~1 μm. They transmit (partial) which-path information to the environment, leading to a reduced observability of the fullerene wave nature. Around 3,000 K the molecules have a high probability to emit several visible photons, yielding sufficient which-path information to effect a complete loss of fringe visibility in our interferometer.
I don't think there is consensus that decoherence = measurement. This is a sophisticated experiment that controls the level of position information being "radiated" to the environment. But Born's rule and the measurement postulate are still baked into the fact that you are then measuring the interference pattern.
One thing I never quite understood about the decoherence argument for resolving the measurement problem is that it forces the "collapse" to happen through local interactions. This of course makes sense, since you want to think of a measurement as a "normal", unitary process. This means the environment has to "enact" the projection operator through interactions. If you have the most basic EPR pair, how can this happen? How can you decohere the wave function of the spacelike-separated counterpart (B) by locally measuring the other spin (A) and then letting the environment project out the state of B through local interactions? In this experiment https://arxiv.org/pdf/1511.03190.pdf they have two detectors 60 meters apart in the subbasement of a castle. What are the interactions in those hallways that are carrying the correlation from the decoherence/measurement on one side of the experiment to the other, faster than light, to maintain the correlations? This seems too basic of a question, so I don't know if I'm just missing something obvious, but to me it seems like a pretty straightforward argument that the collapse cannot happen through decoherence. I've only dug out one random paper that mentioned this argument, with very little subsequent referencing of it.
Certain experiments have shown that quantum mechanics is incompatible with local realism, and this means either that locality is wrong, and the world has faster-than-light communication, or that realism is wrong, and that there are no single outcomes to events. Assuming that locality is true and realism is wrong, when you measure particle 1, it's not that the result is being instantly communicated to particle 2 and causing it to come out that way, but all possible outcomes for each particle measurement happen separately, and all versions of things affected by particle 1's measurement only interact with versions of the world where particle 2 has or will have a compatible measurement. No collapse ever happens, though from each of the perspectives of humans inside the superposition, they will see results that look like their own measurement caused a collapse, because they each will eventually see a single consistent outcome for the results of the particles' spins.
The results of nonlocal and nonrealist theories come out the same, though nonlocal theories imply a lot of mysteries around the contradiction between instant communication and relativity of simultaneity, while nonrealism just paints a picture of an MWI universe that's surprisingly larger than our expectations.
They did diffusion tensor imaging. What this does, using MRI, is determine the local anisotropy of water diffusion in each voxel. You assume that this anisotropy aligns with the axis of the axons, since they limit the diffusion of water across their axis (water diffuses along them). You can then use the principal directions of the diffusion tensor to estimate in what direction the water, i.e. the axons, are "flowing", giving you an approximate picture of how axons connect different parts of the brain.
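For the curious, the standard textbook fit is tiny (this is my own toy in numpy, not their pipeline): take diffusion-weighted signals S = S0 * exp(-b g^T D g) along a set of gradient directions g, recover the six unique elements of D by log-linear least squares, and read the fibre direction off the principal eigenvector.

```python
# Toy single-voxel diffusion tensor fit (textbook log-linear approach, not their pipeline).
import numpy as np

rng = np.random.default_rng(3)

D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])      # mm^2/s: fast along x, slow across it ("axon" along x)
S0, b = 1000.0, 1000.0                           # b-value in s/mm^2

g = rng.normal(size=(30, 3))                     # 30 gradient directions on the sphere
g /= np.linalg.norm(g, axis=1, keepdims=True)

S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))
S *= 1 + 0.01 * rng.normal(size=S.shape)         # a little measurement noise

# ln S = ln S0 - b*(Dxx gx^2 + Dyy gy^2 + Dzz gz^2 + 2 Dxy gx gy + 2 Dxz gx gz + 2 Dyz gy gz)
gx, gy, gz = g.T
X = np.column_stack([-b*gx*gx, -b*gy*gy, -b*gz*gz,
                     -2*b*gx*gy, -2*b*gx*gz, -2*b*gy*gz,
                     np.ones_like(gx)])
Dxx, Dyy, Dzz, Dxy, Dxz, Dyz, lnS0 = np.linalg.lstsq(X, np.log(S), rcond=None)[0]
D_fit = np.array([[Dxx, Dxy, Dxz], [Dxy, Dyy, Dyz], [Dxz, Dyz, Dzz]])

evals, evecs = np.linalg.eigh(D_fit)
principal = evecs[:, np.argmax(evals)]           # estimated fibre direction, ~ +/- [1, 0, 0]
fa = np.sqrt(1.5 * np.sum((evals - evals.mean())**2)) / np.sqrt(np.sum(evals**2))
print(np.round(principal, 3), round(fa, 3))      # fractional anisotropy ~ 0.8 for this voxel
```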
MRI is ~5 orders of magnitude less precise than EM. Not even close to cellular resolution, let alone single-axon resolution. You can only see axon tracts, where thousands of axons may make up one pixel.
This also let you make "memory peekers". It was just a simple assembly program that would offset the pointer to a bitplane based on the mouse vertical movement. You could "look" at the RAM and it was one way to rip images since you would see the bitmaps of images from the game still in RAM after the reset.
Yeah, that was a great way to "visualize" chip RAM contents.
Curiously, on some later A500 OCS models, you could also see into the "slow RAM" expansion module range! You just needed to point the bitplane pointers above 0x80000.
It appeared at 0xc00000 for the CPU and 0x80000 for chipset.
You can't transmit information through entangled pairs. What is instantaneous is the change of the state of the whole system (the pair) after you measure one of the particles. However, the result of that measurement (if it's non-trivial, i.e. if the measurement actually changes the state) is fundamentally random, so the only thing you would be seeing is perfectly and instantaneously correlated noise on both ends.
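A tiny numpy check of that statement (generic textbook QM, my own toy): measure one half of a Bell pair and the other side's reduced state doesn't budge; only the correlations, visible once you compare notes, change.

```python
# No-signalling check on a Bell pair: Bob's reduced state is unchanged by Alice's measurement.
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())

def reduced_B(rho4):
    """Trace out qubit A from a two-qubit density matrix (index order |A B>)."""
    return rho4.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(np.round(reduced_B(rho), 3))               # 0.5 * I before any measurement

# Alice measures A in the computational basis; Bob doesn't know the outcome,
# so his state is the outcome-averaged mixture.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print(np.round(reduced_B(rho_after), 3))         # still 0.5 * I: no usable signal, just correlated noise
```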
You're right, it's a typo, the sum was on the wrong side. I fixed the equation. Thanks for catching it. Please let me know if you find any other issues. I wrote the original notes a while ago and I had to do some adjustments to make the LaTeX source work with MathJax so there might be other typos.
BTW, |u> = u_i |e_i> would be the correct equation in Einstein notation, where u_i |e_i> is a contraction in which the summation over repeated indices is implicit. When you're dealing with a lot of tensor multiplication this notation is very useful because of its clarity and compactness.
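Spelled out (just the standard convention, nothing beyond what's said above):

```latex
% Einstein summation convention: a repeated index implies a sum over it.
|u\rangle \;=\; \sum_i u_i \, |e_i\rangle \;\equiv\; u_i \, |e_i\rangle
```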