Latest update: the diamagnetism has been confirmed in a tiny replicated LK-99 sample. The author posted:
> Under the guidance of Professor Haixin Chang, postdoctor Hao Wu and PhD student Li Yang from the School of Materials Science and Technology of Huazhong University of Science and Technology successfully for the first time verified the LK-99 crystal that can be magnetically levitated with larger levitated angle than Sukbae Lee's sample at room temperature. It is expected to realize the true potential of room temperature, non-contact superconducting magnetic levitation.
Update: second video showing it's not paramagnetic
He repeatedly stated that the sample stands up (becomes straight) when the object is brought near it, and drops down when it's far away. The same applies for the opposite polarity (same behaviour for both N and S).
Source: Native Mandarin speaker.
Thanks, that makes sense from context. It sounded to me like the first syllable was rú, which got me wondering whether 铷 rubidium magnets are a thing... I found a patent for a "rubidium iron boron permanent magnet" https://patents.google.com/patent/CN102832038A/en but I guess that 铷铁硼 is a typo for 钕铁硼.
Yes, it is obviously not very accurate (sample is not in a rage :)) but I'm sure many are excited about the possibilities and don't know Chinese (Mandarin?).
These are a type of comments called "danmaku" (弹幕), which originated on the Japanese video site Niconico. Under the player there's a button appearing as a TV icon with a 弹 character in it; click the button to toggle these comments.
Maybe there'll finally be a strong economic driver for ripping out all the old lead plumbing around the world, heh. Watch how quick Flint, MI gets their stuff updated if someone can make a buck on the old pipes!
I presume you're joking; otherwise you probably should be - as an act of propitiation for your sins.
'Toxic effects of lead exposure on bioaccumulation, oxidative stress, neurotoxicity, and immune responses in fish: a review'
https://pubmed.ncbi.nlm.nih.gov/30884452/ and numerous other studies.
Doesn't anyone working in the physical sciences know how to record a decent video that isn't shaky, blurry, out-of-focus, or a handheld mobile phone recording of a computer screen?
He's using a digital microscope! Just press the "record" button and upload that!
I can't wait for someone to successfully reproduce it, use a polaroid camera to take the photo, print it out, fax it, and have someone scan in the photocopy, take a photo of the scan from their screen, and upload it to a social media website that'll recompress it and put a watermark on it. Nobel prize winning stuff, right there.
PS: No, the sample does NOT "levitate". From what little I could see from the shaky and blurry video, it just moves around, like any magnetic substance would. It could be a piece of rusty iron that flaked off something else, for all we know.
I feel like one of the reasons is that it is a lot harder to fake a shaky video than a still one, and he won't need to add a voiceover, compared to if he were recording directly from the digital microscope.
Imagine you have dreamed of some new discovery your whole life. Let's say you are convinced that, I don't know, there exists in the world a naturally bright pink species of cat. You study cats; you know their behaviors, and can guess at the means of finding this elusive pink jungle cat. You search for years to no avail... when suddenly, without prior warning you are on vacation in some remote island and hear a rumor. This pink cat you've searched for is known to the locals, and they can tell you exactly how to find them. You follow the guidance and sure enough -- there they are! The legends are true! You're shaking with excitement and want to document this finding for other pink cat researchers, but you don't have any professional equipment. Just your phone... but any evidence is better than nothing, right??
TL;DR: You are not the intended audience for these videos. They're not trying to convince you -- it's meant for other researchers. We are just seeing the leaks.
Congratulations on owning an iPhone! (I do too, they’re great.) But I want to emphasize that the quality of the phone camera was not the point of that story.
Then what was the point? If I saw a unicorn and took out my phone in a hurry, I could get a video of exceptional quality, provided I uploaded it somewhere without too much compression. Seems like you wanted to create a little abstract story ignoring the reality of hardware.
I'm not him but I share his confusion. Honestly I can't figure out what sort of point you're trying to make here. What's the point of all the island and pink cats stuff if not to make the point that the researchers didn't have adequate video cameras available to them at the time?
jiggawatts's gripe is that the researchers seem generally incapable of effectively using the cameras they already possess (including iphones), where do pink cats and islands come into this?
Ha, I'll definitely admit it's not the best metaphor. My point was the last sentence of my original comment: when you're in the heat of discovery, witnessing something for the first time, you are NOT thinking about the quality of the video you are trying to capture! These videos were not intended for public consumption -- they were likely sent to a friend or something, showing the exciting thing they were witnessing.
It just irked me that people are looking at this video and thinking that this is a press release or something.
> jiggawatts's gripe is that the researchers seem generally incapable of effectively using the cameras they already possess (including iphones)
I’m not sure why there’s an expectation that they know how. Expertise in one subject doesn’t make you capable in others. I’m not sure if you’ve ever worked with PhDs before, but there’s often an extreme contrast between ability and lack of ability, depending on the context.
There’s a good chance these people don’t spend much time messing with their phones.
Yes, that makes sense to me. Lots of people are simply bad at taking photos, including myself. The isle of pink cats argument doesn't make sense to me though; I'm confused as to what that was even trying to say.
The claim is that the material in the video is the result of a replication attempt by the team at HUST and not something else, like an iron flake. This is their second attempt. Their first attempt failed. They’ve been posting about it all week.
Claims don't require special evidence whether they are extraordinary or not. "Extraordinary" claims can be proved with the same kind of evidence as other kinds of claims. Relativity was first tested by taking pictures of an eclipse, pretty pedestrian evidence relative to the cosmic scale of its implications.
What is true is that, the higher the confidence of your prior belief, the less impact new evidence will have. So extraordinary claims simply require more evidence, because our prior is that they aren't true with fairly high confidence.
The point of that quote is of course not that the evidence must have some mythical flavor or whatever - that would be absurd, but rather that if your prior is a strongly held belief that something is not possible and thus the claim to the contrary is extraordinary, then the evidence sufficient to overturn that prior belief must naturally also be extraordinarily convincing.
It's just standard Bayesian reasoning (apologies if I'm misusing the terminology).
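That Bayesian point can be made concrete with a small sketch. All numbers below are illustrative assumptions, not measurements: the same strength of evidence that settles a mundane claim barely moves an extraordinary one.

```python
# Bayesian updating: a strong prior against a claim demands more evidence.
# All priors and likelihood ratios here are illustrative, not measured.

def posterior(prior, likelihood_ratio):
    """Update P(claim) given evidence with likelihood ratio
    P(evidence | claim true) / P(evidence | claim false)."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Mundane claim: prior 0.5. One piece of 10:1 evidence is fairly convincing.
print(posterior(0.5, 10))    # ~0.91

# Extraordinary claim (say, a room-temperature superconductor): prior 1e-4.
# The very same 10:1 evidence barely moves the needle.
print(posterior(1e-4, 10))   # ~0.001

# It takes several independent, strong results to overturn the prior.
p = 1e-4
for _ in range(4):
    p = posterior(p, 100)
print(p)  # close to 1 after four 100:1 results
```

Same update rule in both cases; "extraordinary evidence" just means enough ordinary evidence to overcome the prior.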
Given how much prior research has been done on superconductors, how attractive a target they are for research, and the demonstrated history of fraud in the field - it's not unnatural to retain skepticism concerning this story.
Doesn't mean the story is fraudulent or falsely believed to be fraudulent; it simply means that more evidence is needed to settle this convincingly. I'm sure will get that soon enough - either way.
Preprint by Griffin from Livermore Labs gives a theoretical explanation of LK-99's superconductivity and also explains why labs are having a hard time synthesizing it. There are two Pb sites in the crystal structure that Cu can substitute at, the lower energy one does nothing but it's the higher energy site that creates superconductivity.
"Finally, the calculations presented here suggest that Cu substitution on the appropriate (Pb(1)) site displays many key characteristics for high-TC superconductivity, namely a particularly flat isolated d-manifold, and the potential presence of fluctuating magnetism, charge and phonons. However, substitution on the other Pb(2) does not appear to have such sought-after properties, despite being the lower-energy substitution site. This result hints to the synthesis challenge in obtaining Cu substituted on the appropriate site for obtaining a bulk superconducting sample."
Are the superconductivity properties "all or nothing"? Meaning, if you get the synthesis of the material wrong, would you expect it to be "kind of" superconductive (a very low, but non-zero, resistance)?
Or is it something where it has normal resistance in all cases unless you get it just right, and then it becomes superconducting?
One more question - in physics, it seems like something having a value "0" is never really possible. Like, you can have something with low friction, but not zero friction. We pretend things have zero friction for high school physics problems, but they don't really. Is superconductivity like that? Is it actually, genuinely, exactly 0.0000(...)? Or is it just very close to zero, so close that it can behave like something with zero resistance (but in reality it's resistive, but just barely)?
It really is authentically zero. No resistance at all.
The thing that prevents infinities from showing up (or us from pouring gigawatts of power into a single superconducting magnet until it turns into a black hole) is that superconductors have a "critical current" above which they fall out of the superconducting state. (At which point the magnet enters the normal, resistive state, suddenly becomes very hot, and boils off all your very expensive liquid helium: a "magnet quench" https://en.wikipedia.org/wiki/Superconducting_magnet#Magnet_... )
LK-99's published critical current is actually quite low, maybe because of sample purity issues. Hopefully it can be improved, otherwise we might be in the comical situation where room temperature superconductors exist and can be used but the cables end up being too thick and can't compete economically with copper.
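A back-of-the-envelope sketch shows why a low critical current density translates directly into cable thickness. Every number below is a hypothetical placeholder for illustration, not an LK-99 or copper specification:

```python
# How critical current density sets minimum cable cross-section.
# A superconductor is resistance-free only below its critical current
# density J_c.  All figures below are hypothetical, for illustration.

def required_area_mm2(current_a, j_a_per_mm2):
    """Minimum cross-section (mm^2) to carry `current_a` at the given
    allowable current density."""
    return current_a / j_a_per_mm2

current = 1000.0  # amps, an illustrative feeder current

# Copper is limited by resistive heating; assume ~3 A/mm^2 in practice.
copper_area = required_area_mm2(current, 3.0)

# A hypothetical superconductor with a very low J_c of 0.5 A/mm^2
# needs a fatter cable than copper, despite having zero resistance.
weak_sc_area = required_area_mm2(current, 0.5)

print(copper_area)   # ~333 mm^2
print(weak_sc_area)  # 2000 mm^2
```

Under these assumed numbers, the "zero-resistance" cable is six times thicker than the copper one, which is the comical scenario described above.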
> LK-99's published critical current is actually quite low, maybe because of sample purity issues. Hopefully it can be improved, otherwise we might be in the comical situation where room temperature superconductors exist and can be used but the cables end up being too thick and can't compete economically with copper.
This would btw be similar to a proof of P = NP, including the caveat that the best-case runtime for an NP-complete problem is a polynomial of degree 40000. It'd be polynomial runtime, but very much in the "ouch" and "infeasible" territory.
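To put that analogy in numbers: a degree-40000 polynomial does eventually beat an exponential, but only at absurd input sizes. A quick sketch comparing log2 of the two running times (the degree 40000 is taken from the comment above; the crossover point is just arithmetic):

```python
# Compare n^40000 against 2^n by comparing their base-2 logarithms:
# log2(n^40000) = 40000 * log2(n)   versus   log2(2^n) = n.
import math

def poly_log2(n, degree=40000):
    return degree * math.log2(n)

def exp_log2(n):
    return float(n)

# At any practical size, the "polynomial" algorithm is far worse:
print(poly_log2(1000) > exp_log2(1000))          # True: n^40000 >> 2^n at n=1000

# The exponential only overtakes near n ~ 800,000 (where n / log2(n) ~ 40000):
print(poly_log2(1_000_000) < exp_log2(1_000_000))  # True: 2^n finally dominates
```

So "polynomial" is an asymptotic statement; for every input you'd actually run, the constant-free comparison still says "infeasible".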
Have a friend who works in superconductivity and from what they say this is pretty standard - the way experimental superconductors are generally created is by spraying atoms onto a film over and over until you get one that works [< 10% success rate is common]. There are other ways to do it more reliably and in larger quantities once you have a confirmed superconductor, but they're also slower during the experimental stage.
I was a chemistry major, but didn’t take it up as a career so I won’t pretend to be an expert. There are a lot of tools in the chemist’s cookbook for dealing with exactly this sort of problem. It doesn’t mean it’s easy, but experienced chemists will understand how to approach solving this.
I have a PhD in materials science and worked with superconductors. Here are two things that are often misunderstood:
1) it takes an undergrad to make and measure a superconductor but a Nobel laureate to explain it.
2) in theory, if you understand the mechanism of something, you can improve it. In materials science, however, almost always all you need is a trial-and-error approach to end up with a good result.
Conclusion: the effect is probably true, the explanation not
> 1) it takes an undergrad to make and measure a superconductor but a Nobel laureate to explain it.
Another example of Taleb’s idea that experimentation comes first and theories second. Universities pretend that it’s the other way around but apart from an Einstein it’s usually not.
That's what I am thinking when commenters compare LLMs with humans. Humans got the experimental loop going through the real world, pure LLMs are brains in vats. They need experimental confirmation to become truly creative.
But on the other hand are the commenters who think AGI to ASI is going to take hours. When is there time to experiment anything? Even ASI needs real world confirmation to do research.
AGI to ASI might take hours if all that's needed is more compute. Such an intelligence might exponentially spread across the internet. Though I don't think it's the most likely scenario.
Not really? Dirac, von Neumann, etc. Taking a look at the history of the Nobel prize, it's typically the case that something is predicted by theory, and then, once an experiment is able to verify it, prizes are given out.
Good science has often proceeded from 'weird' observations, leading to experiments trying to isolate the weirdness. The weird is necessarily outside the zone of existing theory, and requires experimentation to recreate the effect reliably and quantifiably. Once that's done, you can iterate on conjectures and experiments to try to get to the bottom of what's going on.
In reality, I suspect that the divide between theorists and experimentalists is really only a fundamental physics thing. And physics has been more-or-less at an impasse for the last thirty years, so that there are relatively few experiments worth running, and the bulk of the theorists are just making stuff up (or rebranding as mathematicians) because they don't know what else to do.
In other areas, there's so much weird still untouched that you don't get the same division of labor. Take a look at CRISPR - there was a lot of bench work and curiosity-driven exploration involved, simply because there was no theory describing what they were discovering... Or machine learning - the theory is still quite tenuous and mainly follows the experimental results.
If we are sufficiently expansive in our definition of what theory is, then sure, there is no experiment without theory, and experimentalists only discover things by being secretly theorists who occasionally stop thinking long enough to get their hands dirty. But I don't think this is typically what is meant by 'theory.'
CRISPR is again a good example - the initial 'weirdness' was an observation of lots of long-ish palindromic subsequences in bacterial DNA. To my knowledge, there was no pre-existing theory on the preponderance of palindromic subsequences in DNA.
Now, we could say that these researchers were proceeding from a theory that there's no easily discernable combinatorial macroscopic structure in DNA sequences, but I think that this would stretch the idea of 'theory' beyond common usage or even usefulness: There's no theorem or axioms of DNA sequencing being violated here.
In fact, I would expect that weirdness is a good sign of missing theory - a repeatable observation which is unsupported by existing theory. For perturbations in the orbit of Mercury, we see an explicit violation of Newtonian mechanics, but in many other cases (like CRISPR and palindromic subsequences) we have observations of structure in areas where theory simply does not yet exist.
(Apologies for the harsh first sentence; FWIW, your comment seemed belittling of the work of a large fraction of important scientists.)
how exactly has physics been at an impasse? this trope is so common now. did you hear it from Weinstein? we've had a metric ton of mindblowing theory emerge in the past 30 years. do you even study physics?
I was careful to say fundamental physics - there's great stuff happening in materials science.
We are in a world where exponentially growing amounts of energy are needed to confirm/deny an increasingly small slice of the standard model. We still don't know what dark matter or energy is (almost literal holes in theory), we haven't figured out how to scale up quantum computers, and we don't have scalable fusion reactors. The cost of progress is growing and the rewards are diminishing; I call that an impasse.
(Arguably we could call the ongoing advances in materials science cases of applied quantum mechanics, with some blurry line with fundamental physics.)
You are right that I'm not a physicist - I'm trained as a mathematician, and these days work in the intersection of machine learning, acoustics, and ecology. Having looked around a lot with impact-colored glasses, I don't see the argument for fundamental physics, but am happy to be wrong.
None of the things you listed have anything to do with fundamental or theoretical physics. It also seems that you don't know about some of the recent advances. We don't need to deny the standard model; nature has already shown us proof that it's not sufficient. But the standard model is not actually fundamental physics: it's basically just a mash-up of a bunch of aspects of particle systems. There's, again, a ton of work on alternatives. And I would have to disagree: it would seem that we do know how to scale quantum computers, but we're still doing the materials science for it. Physics is probably far more advanced than almost anyone really realizes... and that is the bigger issue. We are still working out which advances to pay attention to. Yet what we do know about, if the engineering work were done, is enough to get us "free" energy, for example.
Call me when someone finally schedules a test to possibly disprove string theory. It's been stuck in "The math works but we haven't solidified it" for like decades. It makes testable predictions right?
We discovered the first "crystal diodes" in the 1870s. There was a whole industry making "magic needle + crystal diode" sets without understanding why they worked. They just knew you had to touch a special spot on the crystal with a needle. Some spots worked better than others. It was art, not science.
We had the first theory of how they operate in the 1920s. Eventually we got transistors from that theory in the 1950s.
I used to think this way but it’s significantly more liberating and true to see theory as plausible explanations. Theories are very often afterthoughts, where the utility is prediction of new events, or narrowing the search space of experiments. A theory explaining past events only is not really a theory at all. But you can narrow search space in many ways without elaborate theories. For instance, you may assume that since birds can fly, if we make a machine that looks similar, we might be able to fly as well. Terribly simplistic theory, but nevertheless a good starting point.
Calling theories useful if they predict future events is called Instrumentalism and it’s a dead end if your goal is to understand reality.
“The magician will pull a rabbit from the hat” might reliably predict what happens, and be testable, but it is not a good scientific explanation because it doesn’t help anyone understand how the trick works.
While correct, you're being downvoted because you're equivocating on the term "theory." People are assuming you're talking about a fully baked, documented theory couched in precise language and equations. What you're actually talking about is tacit assumptions / models that are built on observations.
"Theory comes first" is particularly unfortunate wording because it seems to imply the naive point of view that people first come up with a thorough mathematical model of some phenomenon before testing and validating it with experiments. That's obviously not how science has worked throughout history.
Saying "all observations/measurements imply an underlying theory" would have been maybe more accurate.
You are assuming that we have thought of all theories that an experiment can confirm, before conducting the experiment. Considering the large search space, that would be quite surprising.
Can you tell me why a scientist would decide to do a certain experiment without a theory about what to test and why? Why that experiment and not any of infinite others?
We do this much of the time in genetics, genomics, and neuroscience.
- New technology
- Large set of samples
- Quantify x<1> … x<n> mRNAs, proteins, lipids, metabolites
- Estimate how units stick together (statistically or literally)
- Develop a "theory" of what units and groups of units interact to account for and predict higher order phenotypes (risk of neurodegeneration; lifespan)
Data take precedence. Mini-theories of molecular and cellular causality are assembled with some basic brain power and yes—-a dollop of theory and priors—-on the back of a massive pool of well structured data.
Exploratory biology of this type is/was insulted using the terms “fishing” or “mere description” but with current high throughput and high content technologies should be considered research “trawling” and factory-level science; not a cottage industry of small labs. This new style of science can be highly effective in biology and in astronomy as we are learning from Webb.
But it bugs the hell out of some classically trained reductionists who demand that clear hypotheses should drive science forward.
Much of the progress in modern biology falls into this alternative almost hypothesis-free style. I would say “story-free” style of science. Too damn many story-tellers.
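The "estimate how units stick together (statistically)" step of that data-first loop can be sketched with a toy screen: measure many molecules across samples, then flag pairs whose measurements co-vary, with no prior hypothesis about which pairs matter. Everything below is synthetic, illustrative data, not a real assay:

```python
# Data-first screen: flag co-varying measurements without a prior hypothesis.
# All expression values here are synthetic.
import math
import random

random.seed(1)
n_samples = 100

# Two "genes" driven by a shared hidden factor, plus one independent gene.
hidden = [random.gauss(0, 1) for _ in range(n_samples)]
gene_a = [h + random.gauss(0, 0.3) for h in hidden]
gene_b = [h + random.gauss(0, 0.3) for h in hidden]
gene_c = [random.gauss(0, 1) for _ in range(n_samples)]

def pearson(x, y):
    """Pearson correlation of two equal-length measurement vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# The screen discovers the a-b link from data alone; the "theory" of a
# shared driver is assembled afterwards, on top of the correlation.
print(pearson(gene_a, gene_b) > 0.7)       # strongly correlated pair
print(abs(pearson(gene_a, gene_c)) < 0.4)  # background-level correlation
```

Real pipelines do this over thousands of units with multiple-testing control, but the order of operations is the same: data first, mini-theory second.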
I wouldn't disparage that method of discovery at all. But some of the theories required to do it are:
- Many brute-force combinations of these specific organic molecules will yield interesting results (this is the main hypothesis of the experiment)
- We don't need to include the molecules that we haven't included
- We have good criteria for determining what results are interesting
- Our instruments/methods of phenotype prediction are well-understood and working as expected
Once the experiment is run, the observations are meaningless unless interpreted in the context of whatever prevailing theories the scientists have in mind.
Which properties to check? Why that sample and not countless others? Why not spend the rest of his life checking different properties of the one sample?
Why would someone do an experiment without a theory to test? How would they know that a particular result was interesting without a theory about the expected result?
For the wetter sciences, the answer is somewhere in between. We usually have somewhat of an understanding of the prior theory. We then throw a lot of stuff at the wall…. Oh, this sticks? Eh, why? And then usually some more clever person will come up with reasons for why X works beyond what we already knew. A few years later, the field reaches some sort of consensus around one of the hypothesis.
This is only true from the perspective of developing new theories for why things happen. Of course new theories arise from other theories.
It's not true for discovery in general. New phenomena can be created and observed without any theory for why they occur, either before or after observation.
You can't even observe something without theories. For example:
- I need to observe here and not anywhere else
- I can reliably interpret my senses/the instrumentation is working correctly
- Objects of this type normally behave in X way, because of Y
- etc
Think of it a different way. You can come up with a theory without any observation whatsoever. Black holes, for example, were conjectured well before they were observed.
Well sure, that's the point that you're trying to make. But I think most people felt you were claiming you need a theory related to the novel discovery or invention; that is to say, how it might work and what the expected outcomes are. I can't go to the lab without a theory that my car can get me there. That doesn't mean I have a hypothesis for what will happen when I mix two substances in the lab.
Thinking that something will either happen or not happen if I mix those two substances is not a theory.
Nothing, just words that don't have any predictive value or convey any understanding of the world.
Maybe an example would help clear things up. Go into the lab with a 1 lb weight and a 2 lb weight and weigh them together.
Saying that the total weight could either equal 3 lb or any value other than 3 lb does not constitute a predictive theory for how the physics of summing mass works.
It might be a theory that a scale displays a value when I put things on top of it, but that is a different topic, and not what I'm testing.
This theory doesn't tell me how the world works, or whether the expected value is 0 lb, 3 lb, or 1 million pounds.
I could go into the lab with no operating theory or hypothesis on what the value of two masses should be when added together, and collect data.
I can collect data with no expectation of correlation, and after measuring the combination of many weights, deduce that there is a relation between the combined weights and the total mass, and in fact it is a simple sum.
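That "collect first, deduce later" procedure can be simulated: record many pairs of weights and the combined scale reading, then fit a model and recover the additive law after the fact. The data below are synthetic (true relation is the sum, plus small assumed scale noise); the fit does not assume addition in advance, it fits general coefficients:

```python
# Toy version of the weighing experiment: gather readings with no assumed
# law, then fit reading ~ a*w1 + b*w2 and see what the data say a, b are.
# Synthetic data: the "true" scale reports the sum plus small noise.
import random

random.seed(0)
data = []
for _ in range(200):
    w1 = random.uniform(0.5, 5.0)
    w2 = random.uniform(0.5, 5.0)
    reading = w1 + w2 + random.gauss(0, 0.01)  # small measurement noise
    data.append((w1, w2, reading))

# Least-squares normal equations for two coefficients (no intercept):
#   s11*a + s12*b = sy1,   s12*a + s22*b = sy2
s11 = sum(w1 * w1 for w1, _, _ in data)
s22 = sum(w2 * w2 for _, w2, _ in data)
s12 = sum(w1 * w2 for w1, w2, _ in data)
sy1 = sum(w1 * y for w1, _, y in data)
sy2 = sum(w2 * y for _, w2, y in data)

det = s11 * s22 - s12 * s12
a = (sy1 * s22 - sy2 * s12) / det
b = (s11 * sy2 - s12 * sy1) / det

print(round(a, 2), round(b, 2))  # both come out ~1.0: the data reveal addition
```

The additive relation emerges from the measurements; only afterwards does one write down "total weight is the sum" as a theory.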
> Maybe an example would help clear things up. Go into the lab with a 1 lb weight and a 2 lb weight and weigh them together.
> Saying that the total weight could either equal 3 lb or any value other than 3 lb does not constitute a predictive theory for how the physics of summing mass works.
The relevant theoretical background here is hidden in the "weigh them together" step: that there is such a thing as weight, it's described by a single real number, you can measure it in such and such a way, and so on.
You don't notice these considerations when it comes to weight and speed and size because they're hardwired into our brains by evolution. We're not so lucky when it comes to, for instance, the quark mixing angles - we can't even conceive of them without a background theory, let alone start measuring them.
I'm not making the claim that theory never informs experimentation. I'm making the claim that it is possible to make a discovery without a theory about what you are exploring.
If you want to count distant theories like "I exist" or "The world exists", then sure, every action starts with theory. But like I said, that is very different from a specific theory about what outcome an experiment has, and the underlying physics that make it so.
If you think nobody can discover anything without a theory for what could be discovered, you are flat out wrong.
Consider the story of an artificial sweetener being discovered by a chemist who didn't wash their hands properly and the bread they ate that night was sweet.
Are you suggesting this was only possible because they had a "theory" that bread is not sweet?
Only if it's significantly more than pyrolytic graphite, which appears to be the record-holder at the moment. It would have to be sufficiently strongly diamagnetic to levitate over ordinary magnets, which could have interesting applications. Think low-friction and low-vibration bearings, etc...
Also PhD Materials Chemist. Agree completely. I follow superconductor publications and this is the only RT claim that has made me tell my friends "this could be the one". Using an electron paramagnetic resonance instrument like a beach combing detectorist is too hilarious to not be true.
I read that the development of the alloy used on the oxidizer preburner side of SpaceX's Raptor engine (the main blocker to full-flow combustion) was a trial-and-error grind rather than an "a-ha" moment. Materials science sounds rough.
> material science however almost always all you need is a trial and error
Is it science if you essentially fiddle something until you get a result you are after and then try to fit a theory that would explain the result of the fiddle?
It is if you do it in a way that's reproducible, and if the methods used are based on already-known science. It wouldn't be scientific if there's only one machine in the entire world that can get the desired effect and nobody can figure out why, or if one of the steps involves praying to Cthulhu.
All this LK-99 related activity is exciting and all; too bad Twitter hype merchants hijacked the narrative (first AI, now superconductivity; they've got them in every flavor, it seems).
It's important to remember that scientists are people too and they'll bandwagon trending claims to get their name out there.
I'm no expert, but the videos here are far from convincing, and the other paper about simulating LK-99 is just that, a simulation, and parameters can be manipulated to induce almost any outcome.
And yet you'd think Elon Musk would've been quick to jump on this particular bandwagon. In past years I would've expected Musk to have set up competing teams at Tesla and SpaceX, first to replicate wins. Alas, recently it seems that he does not have the singular focus he once had.
I am sure materials nerds (said affectionately, I am a computer nerd) at materials labs at top corporations, like SpaceX are working on this.
Picture this. You're a materials person. You have a collection of each of the elements in the table of elements. You've got a successful career in your field. You've got your dream job working in a materials lab as a scientist or engineer. Along comes this paper, describing a pretty easy process, with things you already have on hand, to create a revolutionary new material that will change the world, if the paper is true.
That person, at SpaceX, Apple, BASF, Google, GE, Samsung, DuPont, everywhere where there are passionate materials people in a lab who just want to believe, is going to do it over the weekend and after hours, just for the fun of it, nevermind Elon. They might not have the showmanship for a Twitter account with millions of followers but they have the lab and the materials and the drive to make it.
Musk doesn't need to set up competing teams, and if he had, would he be forced to tell the general public about it?
Absolutely Musk doesn't need to, but in the past his MO absolutely would be to do so and to tell the whole world about it. Remember the whole Thai Diver thing with the submarine? Or when he said they'd build loads of ventilators early on in the pandemic? Musk loves bandwagoning hype cycles.
For all of the "what will this mean if true" posts: this is a foundational technology that will produce other technologies and advancements we can't yet determine. If it's able to enhance current quantum computers' efficiency, power, and reliability by an order of magnitude, that in and of itself would catapult new technology discoveries in other areas, which would likely create a self-reinforcing feedback loop of advancement.
It's not just about the trains or the chips, it's about the problem solving potential this material has by enhancing currently difficult to build and contain technologies.
What if we discover that a room temperature superconductor in a ceramic form is necessary for telepathic communication via prefrontal cortex plates? Sci-fi, but the point is we can't really know what will come of it, especially when we're talking about problem-solving machines that go far beyond the human brain's capacity for problem recognition.
I just wrote a comment in another thread that's relevant here too. It looks like the first superconductor paper was released by a rogue researcher, without the agreement of the other two authors or the rest of the LK-99 group. This forced the LK-99 group to rush to publish the official paper, with a cost to quality. The LK-99 group released v2 a week later (on Saturday), and probably will continue to update it. A premature release explains many of the oddities about the two papers.
The first paper submitted is titled "The First Room-Temperature Ambient-Pressure Superconductor." It lists three authors: Sukbae Lee, Ji-Hoon Kim, and Young-Wan Kwon. Its timestamp is Saturday, July 22, 2023 at 07:51:19 UTC. [1]
The second paper submitted is titled "Superconductor Pb10−xCux(PO4)6O showing levitation at room temperature and atmospheric pressure and mechanism." This paper lists six authors: Sukbae Lee, Jihoon Kim, Hyun-Tak Kim, Sungyeon Im, SooMin An, Keun Ho Auh [2]. Its timestamp is Saturday, July 22, 2023 at 10:11:28 UTC, or two hours and twenty minutes after the first paper. The second paper was updated a week later, on Saturday, July 29, 2023 at 01:53:47 UTC.
In both papers the first author is Sukbae Lee and the second author is Jihoon Kim, and in both their affiliation is given as "Quantum Energy Research center, Inc." in Seoul. The first paper posted has Young-Wan Kwon as third author. The second paper does not have Young-Wan Kwon as an author, and has four additional authors with various affiliations.
The second paper appears to have been prepared in LaTeX, and the first paper appears to have been prepared in Word. The title and abstract of the first paper explicitly claim LK-99 is a room temperature superconductor. The title and abstract of the second paper don't explicitly claim that, though to me some of their terminology suggests LK-99 is a superconductor.
The accusation in [3] is that Young-Wan Kwon published the first paper without the consent of the rest of the LK-99 team, listed himself as third author, and left off the other four authors. The rest of the LK-99 team rushed to stuff what they had into the second paper, and released it 2 hours later [4]. This explains why there are two different papers from the same group submitted on the same day, it explains why the author lists are different between the two, and it explains why the second paper and not the first has been updated. I'm not in the field and have only read each paper once, so I'm not certain, but I'm betting it also explains a lot of the mistakes and messy bits of the papers.
This makes me cautiously optimistic that this might be for real [5]. The papers on arXiv as of Monday night are consistent with a research group that succeeded in producing and identifying a room temperature superconductor using a fabrication process that is a bit tricky, and who were then forced to publish prematurely. There's nowhere near enough evidence to conclude LK-99 is a room temperature superconductor. But one failed replication doesn't prove LK-99 isn't a superconductor - if the fabrication process is finicky we'd expect to see a few dozen failed reproductions and a few successful reproductions.
Edit: Here's an appendix to bring us up to date to Monday night U.S. time. Two additional papers have been published in response to the LK-99 claims, for a total of four.
The 3rd paper is an unsuccessful attempt to reproduce the LK-99 group's results experimentally. It is titled "Semiconducting transport in Pb10-xCux(PO4)6O sintered from Pb2SO5 and Cu3P." 9 authors, all affiliated with the Materials Science department of Beihang University in Beijing. Timestamp Monday July 31st at 16:13:05 UTC. [6]
The fourth paper is a set of simulations of LK-99 that observes some similarities between LK-99 and other materials that are high-temperature superconductors. It is titled "Origin of correlated isolated flat bands in copper-substituted lead phosphate apatite." A single author, who is affiliated with Materials Science at Lawrence Berkeley National Lab in California. Published on Monday, July 31st, 2023 at 17:58:17 UTC. [7]
[4] Though I don't think we know publicly whether all the authors agreed to publish the second paper, either. It seems equally plausible that some portion of the group rushed to publish, or even that one author published it independently.
[5] And by cautiously optimistic, I really mean "extremely excited and nervous, enough to stay up until 3am collating arXiv timestamps"
> The accusation is that Young-Wan Kwon published the first paper without the consent of the rest of the LK-99 team, listed himself as third author, and left off the other four. Two hours later, the rest of the LK-99 team stuffed as much as they had into the second paper, and released it as soon as possible.
Kwon has already left the company early this year according to recent interviews. Moreover a supposed private communication with Auh [1] does suggest that Kwon was already offered an authorship in the joint 7-author paper but never replied back until the first arXiv submission went public. I don't know what Kwon actually wants out of all of this stunt, but it seems clear that there was already a big gap between Kwon and other authors and this "leak" only surfaced the gap along with LK-99.
> I don't know what Kwon actually wants out of all of this stunt
He may be just a grifter. Or he may have played a key role in the discovery but was going to be put as the 6th author (at best) in the joint paper. If so, he has already gained much more recognition than he would have otherwise.
Yeah, but that is quite amateurish. Just having the paper out first does not establish priority for the Nobel. What you need is the first paper with solid proof. It has happened more than once before that the guys with the second or third paper won the Nobel because the earlier paper was too weak.
To add a concrete example: Randy Hulet at Rice claimed BEC first, then Wieman and Cornell at Colorado, then Ketterle at MIT. Wieman, Cornell and Ketterle shared the Nobel. Hulet was passed over, because his data was judged insufficient to establish his claim.
Scientific collaborations have gotten steadily larger over time. Groups are no longer 3 people exchanging letters, and the 3 person limitation makes no sense.
>Who in their right mind would reward only 3 people out of 6?
In this case, the discovery, if it’s real, could get the awards for Physics and also for Chemistry. Because it’s such a once in a century level of scientific breakthrough
Perhaps Physics award mainly for a theoretical explanation and Chemistry award mainly for the experimental process
Therefore, six authors might be honored. It’s unlikely, but it’s a solution if the authors deserve it
The arbitrary, punishing rules of the Nobel prize are part of the fun of science. As far as I can tell, otherwise they'd have dropped all these insane rules ages ago.
Perhaps with all the lower-hanging scientific fruit now getting more scarce, it makes sense to start allowing for a larger number of contributors to be recognized, even if they don't necessarily share in the prize money. Why not start such a new tradition when such a practical and wide-ranging technology comes into play? (assuming this is eventually validated)
Fun thought experiment, but I would probably go with something like the Haber process to create ammonia. Enormous industrial applications which led to a boom in fertilizer development and the feeding of humanity.
"This makes me way more excited this discovery miiight be for real."
I agree it is a strong indicator that there might be more to it when there is already fighting about the rewards. It is a shame, though; if it really works, there should be enough fame (and money) for everyone involved.
I mean, a worldwide lossless energy grid, high speed trains, fusion plants and floating cars do sound nice ... assuming the material is real and can be mass produced eventually. Till then, I remain sceptical, though.
It's only a superconductor at milliamps, so we'll need another material for your high energy examples. The biggest deal would be providing a trajectory to materials science.
If it works reliably at milliamps, I am sure it can be put to practical use in my sci-fi scenarios eventually (1000 cables at 1 milliamp each should add up to 1 amp?). Of course, unless it all just is a quirky side effect that can never be put to practical use.
Only some applications require flexible cables. All of "my" use cases should be fine with using ceramics. I really don't think this will be the problem ...
In case I was misunderstood: it won't be a problem to use ceramic, compared to making mass-produced room temperature superconductors. That will be hard either way, if currently no other lab can even reproduce it.
"Competition in academia is so vicious because the stakes are so small."
think about this one though...
wonder what the real story is?
I remember reading about the story of the transistor and the hijinks by Shockley:
"Bardeen and Brattain demonstrated the transistor device to Bell Lab officials Dec. 23, 1947. Shockley was reported to have called it "a magnificent Christmas present." But Shockley himself was not present when it happened and was said to be bitter over losing out on that day.
He had his revenge, though. Shockley continued to work on the idea and refine it. In early 1948, he came up with the bipolar or junction transistor, a superior device that took over from the point-contact type."
> It looks like the first superconductor paper was released by a rogue researcher, without the agreement of the other two authors or the rest of the LK-99 group
Please do cite any evidence, beyond a mere accusation.
The evidence is all the weird bits about how the paper was published, and other internet comments about the third author's relationship to the group.
The authorship of the papers is relevant to the science here, since both papers have quality issues, so I wrote out the accusation as best I understood it. The evidence is nowhere near enough to prove anything and largely rumor right now, but I do think it's suggestive.
What are the implications in general of room temperature superconductors? I just know that they typically needed ultra-low temperatures, right? But what would be the practical implications?
One of the reasons we can't build a practical fusion reactor is that we need to make very powerful electromagnets. Doing so requires a lot of electricity going through big circular circuits. Doing that without a superconductor creates too much heat and the whole thing melts.
They're trying to build them with existing superconductors instead, but those require super cold temperatures or super high pressure.
This is (maybe!) a superconductor that's cheap and exists at normal temperature and pressure.
It's got some deficiencies that would prevent it from being used in a fusion reactor, but its existence may teach us how to build a better one that will work.
Cooler chips. Chips heat due to resistance, and superconductors have zero resistance (by definition). I suspect this is the company's intended application. The following quote from their patent is very suggestive if you know about semiconductor manufacturing.
> In addition, various energy sources used for deposition are not limited to chemical vapor deposition (CVD) using heat, but atomic layer deposition (ALD), sputtering, and thermal evaporation, e-beam evaporation, molecular beam epitaxy (MBE), pulsed laser deposition (PLD), etc. are also included without limitation as long as the raw material can be deposited.
The listed methods are those used in semiconductor manufacturing to introduce materials to the wafers. It is also significant that the company's marketing material describes its resistance as "1/10^4 less than copper", because copper is currently used as a conductor in chips. (It wasn't always so; it used to be aluminium, which is a fascinating story itself. Read more on https://en.wikipedia.org/wiki/Copper_interconnects.)
I think this terminology is really confusing for people not familiar with digital electronics. The term AC to a layman usually means a single sine wave like mains. The point is though that when square waves get really fast, they have a lot of high frequency content.
That becomes relevant for frequencies that are high enough to break Cooper pairs. But this material is claimed to be in the superconducting phase up to 400K, which corresponds to a superconducting gap of 8.3THz.
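For what it's worth, the 8.3 THz figure is just the thermal frequency scale f = k_B·T/h evaluated at 400 K (a full BCS gap 2Δ ≈ 3.5 k_B·T_c would be a few times larger), which a two-line check confirms:

```python
# Frequency scale corresponding to a 400 K critical temperature,
# using the simple conversion f = k_B * T / h that the 8.3 THz
# figure appears to come from.
k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s

T_c = 400.0                    # claimed critical temperature, K
f_thz = k_B * T_c / h / 1e12   # convert Hz -> THz
print(round(f_thz, 2))         # -> 8.33
```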
At the frequencies computer chips operate, it acts more like AC. Sure, you place some nice bypass capacitors very close to the chip so you can feed it nice clean DC power, but when you start switching those transistors in the GHz range, the signals inside the chip rapidly start to look a lot like AC.
You see the same thing with external signals, like a connection between CPU and memory. Operate the bus at 1MHz and it is effectively DC. The signal has a wavelength of 300 meters, so when the signal travels a distance literally two orders of magnitude smaller across your motherboard the AC behavior is negligible. Operate that same bus at 1GHz and your wavelength is down to 30cm. Got a 40cm-wide motherboard? Better treat it like a transmission line or it isn't going to work!
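Those wavelength figures are just λ = c/f; a quick sketch (free-space values; on an actual FR4 PCB the signal travels roughly half as fast, so real wavelengths are even shorter):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength; on PCB traces the propagation speed
    is roughly half of c, making real wavelengths even shorter."""
    return C / freq_hz

print(wavelength_m(1e6))  # ~300 m at 1 MHz
print(wavelength_m(1e9))  # ~0.3 m at 1 GHz
```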
The latest versions of USB and PCIe are no longer binary digital signalling at all. It is a modulated radio signal [1] carried over a wire, similar to how DSL and cable modems work. Processor and memory busses will be next to switch over. Most digital systems will probably move in this direction as speeds continue to increase.
High-end CPUs are actually not powered directly by DC. Basically all server (and a growing portion of consumer) CPUs are powered by multiphase buck regulators[1], which split the power from the DC PSU rails into a parallel set of modulated buck regulator power stages. The outputs of the parallel regulators are recombined to generate DC (as the combined waveform of the phased AC parts).
The reason for this multiphase design is that it offers better power efficiency and better transient response as the CPU moves between its different power states (high vs low load).
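The ripple-cancellation benefit of interleaving can be sketched numerically. This is a toy model with idealized triangular inductor-current ripple and made-up amplitudes, not a real regulator design; at 50% duty cycle, two interleaved phases happen to cancel each other's ripple exactly:

```python
def phase_ripple(t, period=1.0, duty=0.5):
    """Idealized triangular inductor-current ripple of one buck phase
    (DC component omitted, unit peak-to-peak amplitude)."""
    x = (t / period) % 1.0
    if x < duty:
        return x / duty - 0.5                  # rising slope while the switch is on
    return (1.0 - x) / (1.0 - duty) - 0.5      # falling slope while it is off

def summed_ripple_pp(n_phases, samples=2000):
    """Peak-to-peak ripple of n phases interleaved by period/n."""
    totals = [
        sum(phase_ripple(i / samples + k / n_phases) for k in range(n_phases))
        for i in range(samples)
    ]
    return max(totals) - min(totals)

print(summed_ripple_pp(1))  # ~1.0: a single phase's full ripple
print(summed_ripple_pp(2))  # ~0.0: two phases at 50% duty cancel exactly
```

Real designs use more phases and varying duty cycles, so cancellation is partial rather than perfect, but the effective ripple frequency also multiplies by the phase count, making it easier to filter.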
Even the DC input, which ostensibly isn't doing anything, will probably be wiggling around a lot. You can attack electronic safes by analyzing the power draw into their computer as you type into the keypad.
> Cooler chips. Chips heat due to resistance, and superconductors have zero resistance (by definition). I suspect this is the company's intended application. The following quote from their patent is very suggestive if you know about semiconductor manufacturing.
Power consumption due to R in the metal stack is not a large proportion of the total. There could be a bigger opportunity in reducing wire delay, though.
Anything that uses electricity would use less electricity because you aren’t wasting it as heat.
So in theory you could have a ridiculous computer that runs 1000GW of power through it without heating up. Or a flying car. Or power cables that lose nothing during travel.
Naturally that’s why it doesn’t make sense. It’s too game-breaking to be possible in normal conditions.
Glitching something out by making it basically maximum cold makes sense because you’re making it fully still so that it stops messing with the current. Glitching something out by squeezing it until it can’t move makes sense. Leaving it in a normal room means it would have to be something completely crazy.
> So in theory you could have a ridiculous computer that runs 1000GW of power through it without heating up.
I touched on this in a previous thread[1], but superconductors only stay superconducting below not just a critical temperature, but also below a critical current density and critical magnetic field strength.
These limits are different for different superconductors. It could very well be this superconductor has really high critical temperature but really low critical current density, in which case you can't have lots of current at zero resistance.
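A toy way to picture those limits is a box approximation of the critical surface. Real critical surfaces are curved (each limit shrinks as the others are approached), and every number here is hypothetical, but the rule of thumb is the same: exceed any one limit and the material goes normal.

```python
def is_superconducting(temp_k, j_a_per_mm2, b_tesla, t_c, j_c, b_c):
    """Crude box approximation of the critical surface: the material
    stays superconducting only while ALL THREE quantities are below
    their critical values. Thresholds here are hypothetical."""
    return temp_k < t_c and j_a_per_mm2 < j_c and b_tesla < b_c

# Hypothetical material: high T_c but a very low critical current density.
print(is_superconducting(300, 0.1, 0.01, t_c=400, j_c=1.0, b_c=0.1))  # True
print(is_superconducting(300, 5.0, 0.01, t_c=400, j_c=1.0, b_c=0.1))  # False
```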
It makes about as much sense as a computer in your pocket that's not even plugged in being many orders of magnitude better than a computer that takes up an entire room and can barely do one operation a second.
>So in theory you could have a ridiculous computer that runs 1000GW of power through it without heating up.
That doesn't make sense. Power has to be consumed by something. All energy ends up as heat, so if the computer isn't heating up then the power consumption is 0W, not 1000GW.
>Or a flying car.
A flying car with superconducting motors would still have to expend the energy to stay aloft by pushing air downwards. Even without thermal losses to electrical resistance, you'd still have losses to friction from the moving components.
Converting magnetic force into electrical current (electromagnetic induction) is the principle behind all electrical power generation. SMES (superconducting magnetic energy storage) is essentially an inductor made using superconductors. Electrical current gets converted into magnetic force and then back into electrical current when the circuit is discharged.
I think we need force fields before we could make something like a lightsaber, and to the best of my knowledge nobody has any idea how we could make a force field.
Power conversion losses account for about 60% of the energy that goes into electricity generation in the US [1]. Superconducting generators and transformers could take that figure down to a fraction of that.
Is this actually true? It is not obvious to someone somewhat versed in physics how this is enabled by room temperature super conductors unless the entire riding surface is magnetized.
You can have a global grid of solar power. Since you don't lose energy on transmission and half of the planet is illuminated by the sun at any given time, you can have uninterrupted solar energy 24/7. The fuel is practically unlimited; the only costs would be the initial cost of building it and then maintenance.
It doesn't need to be done as a single super-project. Each step along the way could be a sensible, contained, and iteratively beneficial project. Lots of countries are already building up solar; as that progresses they could construct cross-border grid connections to balance out the load, then further build up solar capacity to be able to transmit your daytime excess to countries further from your solar peak. At that point it would only take a couple cross-ocean power lines to go from multiple continent-wide power grids to a single global one, and then everyone involved is even more incentivized to build excess solar capacity.
It's not a piece of cake, but it's not a wild idea either. Adding a pair of LK-99 lines alongside each existing undersea optical line potentially gets you there by just increasing the maintenance budget. Lossless power transmission could change a lot of previously fundamental assumptions.
Many grids are interconnected already and electricity distribution is already an international business; if LK-99 becomes available as a final product, I don't see why it would be hard to build an SC backbone. Not only electricity: international pipelines that move fossil fuels thousands of kilometres away are already a common thing.
How I get my head around it is: a material forged under ambient pressure and room temperature may become superconducting (lose resistance) as either pressure goes up or temperature goes down (or both). So if the material itself is forged under high pressure and high temperature, the room temperature normal to us is very low for the material (?).
Probably all materials are superconductors at some crazy pressure/temperature.
If this is true, I'm really curious why SK didn't bury them. Samsung foundries have been also-rans for years; this seems like completely game-changing technology. If it's real...
I don't know much physics, but is there a reason we should be hoping for superconductivity specifically rather than a cheap material that's just a very very good conductor? Also, even if LK-99 were superconductive, that wouldn't imply it would be useful for computing, right? Surely there are materials that are better conductors than silicon but have mechanical and other physical quirks preventing us from using them for chips.
Silicon is not valuable because it is a good conductor; in fact it is actually more like an insulator in its natural state. However, mixing the right materials into it, it becomes a very good semiconductor. So it can let current flow easily or block it effectively, making it a good basis for transistors. The problem here is the normal (usually copper) wires between transistors that cause heat to build up whenever current flows through them. That limits how densely you can pack wires in a chip. With a superconductor, we could not just have much smaller and much faster processors, we could also have many designs that don't require cooling anymore. Imagine a power monster chip like the RTX 4090 running state of the art LLMs locally on your phone. That's the kind of stuff that's at stake here, and that's also why everyone and their uncle wants to be an author on the original paper.
> The problem here is the normal (usually copper) wires between transistors that cause heat to build up whenever current flows through them.
IIRC with the high frequencies of modern processors switching losses tend to be a larger factor than resistive losses. If you can remove the resistive losses that leaves you with a greater heat budget from switching losses which might help drive up frequency even more.
A few practical things that we need superconductors for which currently require very expensive cooling:
1) MRIs — could be way cheaper, smaller and more ubiquitous with room temperature superconductors
2) Maglev trains — superconductors expel magnetic fields, which can make things "levitate" (called the Meissner effect). Maglev trains have minimal friction and are incredibly energy-efficient.
3) Quantum computing — most designs require cooled superconductors. Room-temperature superconductors are a requirement for future portable quantum computers, or quantum chips that sit side-by-side in conventional computers
There's a ton more implications, just think about how everything from electric cars to smartphones was only possible due to modern batteries — that's the scale of technical innovations that could be built on this.
MRI is a potential application of room temperature superconductor in general, but not specifically for LK-99. One disadvantage of LK-99 is that it has very low critical current, see patent figure 10.
On the other hand, 300 mA is more than enough for computing applications.
We are still at the if-it-is-real stage. I would not take any wagers that the generation one release is indicative of an industrialized product performance.
It would be unlikely for the first observed room temperature superconductor to also be practical. LK-99 contains lead, which is something we are trying to remove from consumer electronics.
What matters is all the investment and attention that would follow the verified discovery of such a material, the huge glut of research would surely yield even better candidates.
The ideal end goal is something like the superconducting tape that we already have, but that works without cooling and is preferably non-toxic.
Since there will be impurities anyway, there is no difference whatsoever in practice between zero and very, very low resistance. Quantum Energy Research Centre in fact describes its resistance as "lower than 1/10^4 of copper", not zero, which is very practical-minded.
Since the company specifically developed thin film by vapor deposition method, I think we can be sure that computing is the intended application. Search for my other comments for details.
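To put the "1/10^4 of copper" figure in perspective, R = ρL/A gives a feel for the numbers. The geometry below is illustrative (mine, not from the paper), and the LK-99 resistivity simply takes the marketing claim at face value:

```python
RHO_CU = 1.68e-8          # resistivity of copper at 20 C, ohm*m
RHO_LK99 = RHO_CU * 1e-4  # "1/10^4 of copper", taken at face value

def resistance_ohm(rho, length_m, area_m2):
    """R = rho * L / A for a uniform conductor."""
    return rho * length_m / area_m2

# A 1 m conductor with a 1 mm^2 cross-section (illustrative geometry):
print(resistance_ohm(RHO_CU, 1.0, 1e-6))    # ~0.0168 ohm for copper
print(resistance_ohm(RHO_LK99, 1.0, 1e-6))  # ~1.68e-06 ohm for the claimed material
```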
There is a lot more than just silicon wafers in our computers. Let's start with all those traces on the board. Right now they are made of copper. But what if they could conduct with zero resistance?
Sure, replacing copper wires help too, but the real deal is to replace copper interconnects inside the chips. It looks like the company is going for replacing copper interconnects in chips.
Is this abysmal quality of figures and charts normal? I'd imagine the devices used for measuring allow exports of CSVs and thereby enable external generation of proper figures instead of these pixely-screenshot-type pictures...
Since I know nothing when it comes to superconductors and material science, can anyone summarize the differences between the original paper and this one? (No ChatGPT responses please.)
In this version, in figure 6a and b the new log scale of the IV curve looks quite linear and increasing in the range of 150 to 250 mA. I thought that it should be flat if it was a superconductor (no resistance). Can anyone explain how that behavior still supports it being a superconductor?
Because you're not measuring a pure superconductor sample. To be able to observe the cliff on the IV curve you need enough of the current path in between the electrodes to be superconducting, but certainly not all of it will be. This is true even in commercial superconductors, just with the ohmic losses pushed down even further.
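That picture can be sketched with a toy series model: a superconducting segment (zero voltage below its critical current, resistive above it) in series with the ohmic, non-superconducting part of the path. All resistances and the critical current here are made up for illustration:

```python
def measured_voltage(i_amps, r_ohmic=0.02, r_sc_normal=1.0, i_c=0.25):
    """Toy mixed-sample model: below i_c the superconducting segment
    drops zero volts, but the ohmic portion of the path still shows up,
    so the IV curve is linear rather than flat; above i_c the SC
    segment goes normal and the slope jumps (the 'cliff')."""
    v = i_amps * r_ohmic
    if i_amps > i_c:
        v += (i_amps - i_c) * r_sc_normal  # crude normal-state excess voltage
    return v

# Below i_c the curve is linear with the ohmic slope, not flat:
print(measured_voltage(0.10))  # ~0.002 V
print(measured_voltage(0.20))  # ~0.004 V
# Above i_c the slope jumps:
print(measured_voltage(0.30))  # ~0.056 V
```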
Are there any indications how scalable LK-99 production might be if proven out? Are we looking at exotic materials and processes or something that requires little tooling and common elements that will keep the cost low?
Sure, it would work like copper wire minus the losses due to resistance.
It probably would not make much sense to replace glass fiber, because problems like (thermal) noise and cross-contamination (crosstalk) from other signals would still exist.
tl;dr: The only change, aside from the increased number of broken bookmarks, is that the misleading Fig. 4a on page 7 has been split into Fig. 4a and 4b with different axes.
What's up with "Error! Bookmark not defined"? Is this some kind of trolling? I guess its presence in the first version is understandable if it was posted in hurry, but why leave it in the revised version?
This can happen when saving a Word document as PDF. You don't see it in the Word document itself, but it ends up that way in the PDF. They apparently didn't double-check the PDF.
"In no case may a prize amount be divided between more than three persons." [1]
Sad state of affairs to expect a reward for brilliance and to have science bottlenecked by a merchant of death. Just the fact that the merchant of death gives "peace prizes" should have made the entire affair laughable a long time ago. Perhaps it's time to move on from this ancient and limited understanding of appraisal, awards.
I'm in no position to ever receive a Nobel prize, but I'm fairly certain I am moral enough to reject such a grandstanding award coming from such a troubled tradition. Yes, I just virtue-signalled calling immoral every recipient of the Nobel Prize, mostly out of sheer distaste for the ceremony, but also thinking of the terrible inefficiency which has plagued science and research in the last 122 years. How many discoveries were missed because following the course to unveil them was not "Nobel-worthy"? How many scientists had their careers ruined pursuing the Nobel award without being granted anything? Just the fact that only 1.8% of Nobel Prizes in Physics were awarded to women [2] shows the entire thing as the charade it is, no other context needed.
We detached this flamewar tangent from https://news.ycombinator.com/item?id=36953091. Can you please not do this? Your post broke several of the site guidelines, including "Eschew flamebait. Avoid generic tangents."
Edit: you've unfortunately been using HN primarily for ideological battle. That's not allowed, and we ban accounts that do it, regardless of what they're battling for or against. I've banned this account accordingly. For past explanations on this point see https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme....
Yet 95% of prison inmates are male. There are obvious differences at the extreme ends of the spectrum between males and females, and the Nobel Prize only considers extremes to begin with.
Your [2] source only mentions statistics since 2012. It's disingenuous to think this could spread down to 1901 if you wish to use the 1.8% as a point to argue.
Society was very different a hundred years ago. Women's access to education, amongst other things, has improved markedly over time, and we have seen an increase in female Nobel laureates compared to the earlier days.
Sure, it doesn't spread down to 1901, but the uptick started around 1970s [1].
Anyway, my main rant was about the inefficiency of giving awards to "lone geniuses", at most three, and of expecting said "lone wolves" to exist in the first place; then comes the discussion of whether the wolves are male or not. Unfortunately it got derailed into talking about pseudoscientific scalars such as IQ and sex binaries.
Most Nobel winners get their prizes during their 60s-70s. It means that people who graduated in the 1970s are just starting to get their Nobels.
And the trend of aging winners continues as research becomes more complex and specialized and therefore requires more experience. The time between theory and the practical discovery (Nobel prizes are only for the latter) increases as experiments become more complex too; so much so that it may become difficult for theorists to get a Nobel in their lifetime. That LK-99 thing, if real, is more the exception than the rule.
True. That's why we probably need another mechanism for incentivizing high-risk/high-reward/multi-decennial research. Perhaps, even if LK-99 is not real, something interesting will mutate out of it: some kind of platform merging arXiv, Twitch, and Patreon, pushing researchers to be more transparent, more willing to share partial results and failures, and even be rewarded for failing. Some time ago I had this "fail database" in mind, where one would upload all the data of the experiments that didn't work. I see there is a "FailCon" [1].
The other replies to this are making terrible arguments. The actual strongest case is that the Nobel prize is only given to extremely old people, which means it reflects the gender ratio of the field a few decades ago and not the current one.
Of course, they did give Donna Strickland a prize for the first paper she published as a grad student.
Your arguments aren't unique by a long shot; you're simply witnessing two separate people who have been discussing technical matters since pre-WWW Usenet days observing there are hackers who have been making those arguments for 40+ years.
I can't speak for Don but for my part they got dull 35+ years back.
I'm sure you can find a rebuttal or reinforcement, whichever you need, elsewhere.
All of a sudden it seems the commenters have started name-calling and stereotyping the poster (realjhol) without any evidence or links to dispute his statements. Maybe replying in good faith would be a more helpful discussion, with links to support/deny his arguments?
I've read his posting history. It's quite clear what kind of a person he is, especially from the patterns and subjects of all his heavily downvoted and flagged posts. As if that wasn't already obvious from his recent sexist posts. But go read them yourself and carry the water for his bigotry if you like, it'll just say a lot about yourself too. Like Elon Musk and his flashing "X", some people just desperately want you to know they're assholes.
Makes me wonder if ChatGPT can be leveraged as a tool to come up with room temperature superconductor materials which humans haven't thought of before.
ChatGPT is literally built on collective human knowledge and a massive amount of data.
If ChatGPT has the power to replace software developers, this should be child's play.
Only on HN could someone say "well if a statistical language model can 'replace' a software developer, then one of the holy grails of material science should be child's play."
Because clearly being a software developer is more difficult than scientific research...
This is everyone's daily reminder that ChatGPT is not deterministic, does not think, does not reason, etc. It is a statistical language model that has been fed everything from Star Trek to research papers.
I'm not sure if ChatGPT can solve superconductivity, but the current version of ChatGPT can handle the bears-in-space question and the Monty Hall problem from the linked article.
GPT-4 was released a month after that article was published, and its problem-solving capabilities are much more sophisticated than GPT-3.5's (which ChatGPT first used).
It would be interesting to find some new examples to test it with since some people will claim the model didn't getter smarter – it only learned the answers to those questions.
In my anecdotal experience, GPT-4 is much better at problem solving.
---
how many bears russia has launched into space:
"As of my knowledge cut-off date in September 2021, there is no publicly available information to indicate that Russia has ever launched bears into space. The idea of sending bears into space might sound intriguing or humorous, but it's important to note that space missions are highly complex and expensive endeavors that typically focus on scientific research and exploration.
Animals like dogs, monkeys, and mice have been sent into space by various countries to study the effects of space travel on living organisms, but bears are not among the animals known to have been included in space missions."
###
The table didn't fit in the car because it was too small. What was too small?:
"The sentence is somewhat ambiguous, but the way it is constructed suggests that the car was too small to fit the table. The phrase "it was too small" refers to the car, and this is why the table didn't fit inside."
###
There are three doors with a $10,000 hidden behind one door and a bag of goldfish hidden behind each of the other two doors. You choose Door 3. No matter which door you choose, the host will show you a bag of goldfish behind one of the doors you did not choose (say, Door 1) and then ask if you wish to switch your choice to the remaining door. What is the probability of winning the $10,000 prize if you do switch?:
"The situation you're describing is known as the Monty Hall problem. It's a probability puzzle named after the host of the American television game show "Let's Make a Deal." The problem can be analyzed using conditional probability.
Here's how you can think about the situation:
1. *Initially*: When you first choose a door (e.g., Door 3), there is a 1/3 chance that the $10,000 prize is behind that door, and a 2/3 chance that the prize is behind one of the other two doors (Door 1 or Door 2).
2. *Host's Reveal*: The host, who knows what's behind each door, then opens one of the other two doors to reveal a bag of goldfish. This does not change the initial probabilities.
3. *Switching Doors*: If you switch your choice, you'll win the $10,000 prize if and only if your initial choice was incorrect (i.e., you initially picked a door with a goldfish behind it). Since there was a 2/3 chance of that happening, there is a 2/3 chance that switching doors will lead you to the prize.
So the probability of winning the $10,000 prize if you do switch is 2/3, or approximately 66.67%."
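The 2/3 answer above is easy to sanity-check numerically. Here's a minimal Monte Carlo sketch (my own illustration, not from the thread; the door labels, trial count, and function name are arbitrary):

```python
# Monte Carlo check of the Monty Hall reasoning above: estimate the
# probability of winning the prize when the player switches doors.
import random

def play_round(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)   # prize placed uniformly at random
    pick = random.choice(doors)    # player's initial choice
    # Host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened, unpicked door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
wins = sum(play_round(switch=True) for _ in range(trials))
print(wins / trials)  # hovers around 2/3, matching the argument above
```

Running it with `switch=False` instead lands around 1/3, confirming that the host's reveal does not make the two remaining doors equally likely.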
Interesting to see Asian scientists working together on this. Grinding it out over years - doing the work.
Meanwhile in Europe and America they sit in their expensive "centers" and "institutes" and ask for hundreds of millions more for experiments which have produced incremental results at best. And some had the gall to mock the Korean scientists for their poor grammar and amateur labs.
And now the West will iterate on the breakthrough. The opposite of how it has been.
This is why space exploration is important. If we could find some asteroid containing room-temperature superconductor materials, everything here would be revolutionized.
I strongly disagree with this view. We don't even know if such a material can exist, and the combined odds of it 1) existing, 2) occurring naturally, and 3) us stumbling onto it in the short term are astronomically low.
Not saying space exploration isn't important, but the reason you cite for it is near the bottom of the list.
Update: second video showing it's not paramagnetic
https://www.bilibili.com/video/BV13k4y1G7i1/