Well, as the article points out, the things being imaged are most definitely actual atoms. The only thing TFA is nitpicking about is that light isn't used to image those atoms.
This is about as interesting as pointing out the fact that an ultrasound picture of a baby isn't an "actual picture", since we use sound instead of light to make the image.
The concept of the scanning tunneling microscope is so simple and ridiculous that it was probably thought of and dismissed long before someone built one. "You're going to image atoms by dragging a tiny needle across them"? Yet that's pretty much how it works.
The first one was built in 1981, but one could have been built in the 1950s. Piezoelectric crystals were known. Raster scanning circuits were known. Feedback circuits for controlling the height were known. The tricks for making one-atom sharp points were not known, but the earliest STMs just stretched out a tiny platinum wire until it broke, and sometimes you got a one-atom point.
Hobbyists have built STMs.[1] It's simpler than building a 3D printer.
I know, I did a benchtop STM lab for my Nuclear Physics course some years ago. Most of the day was spent trying to make single atom needles by a wrist-flick technique for cutting platinum wire with some sharp pliers.
I worked in the lab at the National Physical Laboratory in the UK that had the low-pressure + low-temperature STM. We used to make the tips by dipping the ends in a very strong solution of NaOH; you relied on the surface tension as the wire dissolved to produce a very sharp tip.
The AFM I used was mainly for teaching the concepts, so they wanted simple, cheap, low risk => strong NaOH was out.
The technique back then was, somehow, cutting at about 45° angle and twisting the pliers to make this angle go to zero while cutting/pulling. Then you had to put the tip in the machine and do test images to see whether the tip was good. Usually it wasn't, so you tried again. Finally you got a perfect tip, and maybe got two or three good images before you messed up the tip (from being a noob).
TBH, more people were using AFMs than STMs, because they're cheaper (I think) and because the STM's reliance on conductivity limits what can be imaged.
Getting pictures of atoms is nice, but sometimes you want to image molecules, especially organic ones (tricky to do in an STM).
How do you know if your tip is one atom wide or not?
Also, what would the shape of the tip look like if you drew it? I'm wondering what kind of general angles the surface has. Is it like a cone with a single atom at the tip? What kind of slope?
The only real way to know if you had a single atom tip was to image a known surface. If the image was junk you probably had some funky tip states going on. HOPG (graphite) was the standard we typically used.
A good tip could be just about anything, from a nice cone to really jagged. One problem was any of the methods one has to view the tip can't actually resolve the single atom that is doing the imaging.
Man .. I know what my next hobby project is going to be :)
Dumb question .. I get that the needle scans a surface and you get the quantum tunneling effect between the atom you are "looking at" and the tip of the needle. What I don't get is how one figures out depth. For each X,Y position, do you just keep going down until you touch something, and then move up, and go to the next position? If so, apart from the issue with the 1-atom tip, I imagine the next problem would be how to increment X and Y by 1 atom.
P.S. I think some of the marketing put out on these things really confuses the issue. Sure .. it gets people excited about science but it gives people the wrong intuition. As a non-physics person, I got a lot out of this article.
In the simplest configuration you have an X-Y stage that you raster, along with Z-axis control - generally these are all piezo-controlled. Your signal is then acquired by maintaining a constant tunneling current while you raster, so you track the Z-axis position as you scan.
This means that you're getting a "pseudo-height" map - if you had a surface with 2 types of atoms, both the same size, but with different tunneling barriers, you would see them appear to be different sizes.
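To make that constant-current scheme concrete, here's a toy 1-D sketch: the feedback loop servos the z piezo until the tunneling current matches a setpoint, and the z position you record becomes the "pseudo-height" map. All constants (`I0`, `K`, `SETPOINT`, `gain`) are illustrative values, not real instrument parameters:

```python
# Toy 1-D constant-current STM scan. The tunneling current falls off
# roughly exponentially with the tip-surface gap; the feedback loop
# moves the z piezo until the current matches a setpoint, and the z
# positions we record form the "pseudo-height" map.
import math

I0 = 1.0        # current at zero gap, nA (toy value)
K = 10.0        # exponential decay constant, 1/nm (toy value)
SETPOINT = 0.1  # target tunneling current, nA

def tunneling_current(gap_nm):
    return I0 * math.exp(-K * gap_nm)

def scan(surface, z_start=1.0, gain=0.1, steps=300):
    """Raster over a 1-D surface (height at each x position),
    recording the z needed to hold the current at SETPOINT."""
    z = z_start
    height_map = []
    for h in surface:
        for _ in range(steps):                 # crude per-pixel feedback
            err = tunneling_current(z - h) - SETPOINT
            z += gain * err                    # current too high -> retract
        height_map.append(z)                   # z tracks the surface shape
    return height_map

surface = [0.0, 0.0, 0.3, 0.3, 0.0]            # a 0.3 nm step "feature"
trace = scan(surface)
# the step in `trace` mirrors the 0.3 nm step in `surface`
```

Note the recorded z is only the true topography if the tunneling barrier is uniform - which is exactly why two same-sized atoms with different barriers would appear as different heights.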
Yeah, we used to use graphite (as flat as we could get it; we tried to ensure we had a single surface) to see how good the tip was. You could tell from how much noise you got and therefore how good your image was.
I think actually the hard part in the 50s might be the current amplifier but I'm not au fait enough with the tech of the time to know if nanoamp amplifiers were easy. I think it's not so different from radio but would be interested in an educated opinion.
That's a good point. A very high gain DC amplifier with low noise would have been hard then. An STM needs to sense about 1 nA at 0.1 V, which means a series resistance of about 100 megohms. Vacuum tube voltmeters of the 1940s and 1950s era were getting close, with about 10 megohms input resistance.[1] Sensing 0.1 V through 100 megohms with tubes doesn't seem out of reach.
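Those figures are just Ohm's law; a one-line sanity check:

```python
# Sensing ~1 nA of tunneling current at ~0.1 V bias implies an
# effective series resistance of V / I (plain Ohm's law).
V = 0.1        # bias voltage, volts
I = 1e-9       # tunneling current, amps
R = V / I      # ohms
print(R / 1e6) # -> 100.0 (megohms)
```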
High RF gains are easier to get than DC gains; you can filter out everything but the frequency of interest and reduce noise. That's basically how radios amplify weak signals. But I don't think you can run an STM on RF.
No, I disagree. An ultrasound picture of a baby accurately reflects the actual form of the baby. A scanning tunneling microscope does not accurately reflect the form of an atom because it simply uses an artificial dot to represent the presence or absence of an atom. In short, the "map" that STMs give us is abstracted from reality and does not accurately portray an atom (in large part because STMs still simply do not measure at the scale of an atom).
I found the article really interesting and learned a few things from it.
I do know how an ultrasound image works, and I had some basic knowledge about scanning tunneling microscopes, but this article - and the many links in it - were definitely worth my time reading.
Isn't it possible to make an image using light, but by computing the "real" image from the interference pattern (of a single non-repeating atom/structure)? And if there is more than one solution, then perhaps by using images from multiple angles, or with different wavelengths?
Isn't that basically what X-ray crystallography is (taking the liberty to include X-rays as "light")? Though with the (significant) restriction that it only works for repeating crystal structures, not unique individual targets...
One of my first jobs out of college was doing manual chip layouts (which will give you some idea of how old I am). Once the layout was done, the design was sent to a "mask house" which would print the layout onto a series of glass plates (the "masks") which were then used in the chip fabrication process. When the masks for my first chip came back I popped one of them into a bench microscope to see what it looked like. I was expecting to see my design in black-and-white printing, but all I could see (of course) were rainbows because the mask was effectively a diffraction grating. This was my first visceral encounter with quantum mechanics. I knew there was a pattern there, but I could never see it directly with my own eyes.
What era of chip layout was this? I'm surprised that manual chip layout was still being done when chip features were that small. I look at 1970s chips a lot, and the features are easily visible under a microscope, as they are much larger than wavelength-sized.
This was in 1984, and it was for experimental chips, not production chips. The layout was done using CAD tools, but the placement was all manual.
Also, I think they were analog chips, not digital. I'm not sure. This was a co-op job in my first year at college at the IBM plant on Cottle Road, and I was never fully in touch with the big picture. I think they were making big-ass hard drives, and the chips I was working on were part of the circuitry that amplified the raw signal from the read-write head, but those details were way above my pay grade.
I have no idea. This was more than 30 years ago. But the number of transistors was in the hundreds or maybe low thousands. (I can't recall if I was laying out individual transistors or functional modules.)
At Motorola, in MCU, they (we?) were still doing some manual layout (not much, but still...) on 8-bit and 16-bit core products as late as 1989. Of course, none of the more advanced CPUs or soon-to-come DSPs did. If I recall correctly (and I probably don't), depending on the fab, for those parts we were at or around 1 micron at the time.
It was an MCU, which many might call a SoC these days, but with all the timers and serial i/o and the A/D and D/A converters and PWMs - the 8-bit family (HC11) was probably around 50K transistors in an 80-pin QFP.
The HC11 had a 6809(enhanced) core. I think the 6809 was ~9K transistors, but I seem to recall that ours was a bit bigger than that. Still ~3/4 of the die was "not-CPU".
Connecting incredibly difficult to understand scientific achievements with the simple notion of the common human desire for play and creativity is truly glorious.
A beautiful symbol of prowess, intelligence, creativity and humanity. I don't think anyone should underestimate the impact that a video like that can have on a child (or even a curious adult!)
Well, an image is a representation of something in a way that humans can see. Whether waves, electrons or poking sticks were used doesn't change the fact of it being an image.
Say, a dolphin's echolocation might let it "see" a diver using sound - http://i.imgur.com/CS6wkNV.png - which would still be considered an image, even though it's made with sound.
Anyway that's just semantics, and an STM is still a damn impressive piece of kit.
Yeah, it's not a photo but it is an image or picture. The article is doing a bad job of defining its terms, but still it's interesting for a layman like myself.
I built an STM a while back for fun and when people ask me how I managed to "image" atoms I usually use the analogy of the STM being like a "microscope sonar" where the atoms are "blips" on a plane (the sonar map). Sure it's not actual EM radiation landing on a CMOS detector but it's still pretty neat nonetheless.
That was the weirdest part of this article. I enjoyed the whole thing until the end, but why does the author call IBM's cute and amazing movie "A Boy and His Atom" jackassery?
Some people feel very strongly that sound should only be produced by natural (non-GMO) organic means, such as rubbing horsehair on catgut, banging sticks on skins or yelling painfully loud.
They're still sore about Moog.
(He was pulling your leg, and so am I. Mostly. Synthpop ... well, least said, soonest mended.)
I love that offhanded comment about how the helical structure of DNA is not at all obvious from that crystallography. All the more impressive then, the history of that discovery.
Like many things about the article it's hard to tell when the author is joking.
Knowing DNA was helical from the fiber diffraction images (not crystallography - they were working with DNA fibers, not crystals) was actually "obvious". A helix forms a distinctive cross pattern; this can be (and was) predicted easily from diffraction theory applied to a helical structure.
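A toy demonstration of why the cross is "obvious" (this is just the Fourier transform of a projected helix, not real diffraction physics; all values are illustrative): seen from the side, a helix projects to a sinusoidal track, and the power spectrum of that track concentrates on layer lines whose strongest peaks march outward diagonally - the famous X.

```python
import numpy as np

N = 256
img = np.zeros((N, N))
z = np.arange(N)
pitch, amp = 32, 40                 # toy helix pitch and radius, in pixels

# a helix viewed from the side projects to a sinusoid
x = (N // 2 + amp * np.sin(2 * np.pi * z / pitch)).astype(int)
img[z, x] = 1.0

# the diffraction pattern is (proportional to) the power spectrum;
# film records intensity, i.e. |F|^2
power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

# The spectrum is nonzero only on horizontal "layer lines" at multiples
# of the pitch frequency, and on line n the amplitude follows a Bessel
# function J_n whose first peak moves outward roughly linearly with n -
# which is what draws the X.
```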
I strongly encourage anyone who is thinking about this to read "A Different Universe" by Robert Laughlin. This is one of his main concerns about modern science: although it's kind of a muddled argument, he thinks that the emergent phenomena captured by photographs are entirely misapplied to atomic forces.
I cannot see (pun intended) how "normal" seeing is fundamentally different. One basically sees by measuring an object's interactions with photons and deducing what it looks like, vs. measuring its interactions with electrons in an STM and making a similar deduction.
His point was that it is common for a "new" technology to be used just ... because it's there. There was a lot of very bad music made in the early '80s when digital synths became available, and IBM using an AFM to make an advertising blurb could be considered in this vein.
There's a philosophy-of-science idea that in order to say we can "see" something, we have to be able to collect consistent information by several different methods. For example, if all we have is an STM, then we don't know which parts of the image are artifacts and which are real, so we haven't seen anything. But if we have an STM and crystallography, we can have more faith in the features that are common to both images - such as interatomic distances and the geometry of crystal structure. But we still couldn't say that we've seen the shape of an atom, since that would look different in each instrument's image.
A great example is people who "discovered" lost cities under the sea. They saw regular patterns of lines on the seafloor in Google Earth and interpreted them as ancient roads or walls. But they were only seeing artifacts from ships that had sailed back and forth in straight lines collecting data. If they had looked both at those sonar scans and some other data for the same location, they would have only seen the lines on one image and been able to conclude that they were either an artifact of the sonar or below the level of sensitivity of the other instrument.