He even makes his own “soap bubble” computer to test claims that soap bubbles can solve global parallel optimization problems. Spoiler: it doesn’t work well, and it gets stuck in local optima as the problem size grows.
Before anyone gets too excited about physical computing ideas, I think watching these 3 lectures from the prestigious Paul Bernays Lectures is a must. They're about quantum computing, but Scott spends a full lecture on the Turing and Extended Church–Turing theses and on the kinds of computers that have tried (and failed) to break those conjectures in the past. That is why quantum computing is interesting - he also dispels a lot of the myths about QC we hear today in popular media. Fascinating stuff, and I highly recommend spending 3 hours on these videos if you're interested in the future of computing.
Every time I hear a sad story along the lines of "this scientist spent their whole life trying to discover better computing in DNA and failed," I feel like the reporters are omitting the fact that the scientist knew perfectly well their model of computation was inherently unscalable, and was simply pulling money out of absolutely oblivious investors.
Oh god I love the _idea_ of better computing. But let's be honest, there isn't much of an alternative, is there?
At some point it becomes important to ask yourself, "What would I be content spending my life failing to accomplish?"
I mean, it'd be nicer to accomplish it, but if you can find something that you could spend fifty years failing at, that on your deathbed you would simply be happy you tried, that's something really special.
Yeah, but it doesn’t mean these people deserve any pity. They actually did something they love. And the fact is they almost certainly knew the risks involved - it’s not like they were jumping into the unknown.
This is the furthest thing from the truth. That a human has a certain capability does not in the slightest imply that “DNA”, at least as commonly understood, encodes that capability, at all.
A bunch of molecular recipes encoded in a few gigs of nucleotides with some crude feedback loops do not a human make.
Quite apart from epigenetics as it’s commonly presented (methylation, and all sorts of histone antics), you might recall that DNA itself doesn’t magically grow up: you do need a cell.
It’s somewhat (and only poorly) analogous to being handed the source code to a C compiler written in C, without knowing C. Does the C code really encode C? Well, not without the compiler it doesn’t....
There’s then a very interesting discussion around how it’s even possible for a mammalian nervous system to bootstrap itself. Figuring out walking seems perhaps emergent: it’s a learnable technique based on not falling. But how do dog breeds retain intrinsic high-level behaviours even if they’ve never observed them? What makes a Shepherd so concerned when his assumed flock becomes dispersed?
We are a tremendously long way from answering these questions, but I would caution anyone who thinks it’s just “in the DNA”.
> We know this is true, because most people can compute that it is 30. The interesting thing is how little DNA is required.
I was going to say "No, it means a computer made from DNA can make another computer from DNA that can compute the square root of 900," but I'm the father of a two-year-old who will, presumably, at some point be able to calculate it as well. It'll just take somewhere around ten years(?). I guess the achievement is how quickly it can calculate it, from creation.
Very first sentence: "A computer made from strands of DNA in a test tube can calculate the square root of numbers up to 900." (Actually, the third sentence clarifies that it handles perfect squares only.)
Pretty sure most people can't do this. Even if we mean most people in a country where everyone graduates high school.
Well, there's the question of explicitly or implicitly. If someone (maybe your dog) can catch a thrown ball, does that mean they can do calculus and/or physics?
Long ago I once attended a conference whose keynote presentation was by a group which was involved in DNA computing. They were talking about using a beaker of solution to compute a Hamiltonian cycle of length, I dunno, let's say 15. I asked how large a beaker would be needed to compute a cycle of length 25. The answer was: about the size of the Pacific Ocean. :-)
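A quick back-of-the-envelope check on that anecdote (the specific per-strand volumes are unknown, so this only compares relative sizes): Adleman-style brute-force DNA search encodes every candidate vertex ordering as a distinct strand, so the amount of material needed grows roughly factorially with problem size.

```python
# Rough scaling check: brute-force DNA Hamiltonian-path search needs
# on the order of n! candidate orderings represented as molecules.
# Going from 15 to 25 vertices multiplies the material needed by:
from math import factorial

ratio = factorial(25) // factorial(15)  # = 25 * 24 * ... * 16
print(f"{ratio:.3e}")  # ~1.186e+13 times more material
```

A beaker scaled up by thirteen orders of magnitude is indeed in ocean territory, which is the whole problem with molecular brute force.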
Nah, everyone knows our brains are just quantum antennas for communicating with our immaterial souls that just coincidentally resemble neural networks.
Perhaps, but I do wonder whether the architecture of ANNs maps onto real biological neural networks. Maybe biological neurons could be an alternative to power-hungry GPU clusters.
For these techniques, the bottleneck in latency will always be DNA synthesis. AFAICT you're encoding your data in DNA as well, which means that actual, precise chemical reactions are required to do this computation, which is 1) always going to be slower than moving electrical charges, and 2) going to have yields that diminish exponentially with the number of steps required to build it.
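To put a number on that second point: if each coupling step in stranded synthesis succeeds with some fixed probability, the fraction of full-length product decays exponentially with strand length. The 0.99 per-step efficiency below is an assumed, optimistic figure for illustration, not a measured one.

```python
# Illustration of exponential yield decay in stepwise synthesis.
# per_step is an assumed coupling efficiency, not real process data.
per_step = 0.99

for length in (50, 100, 200):
    full_length_yield = per_step ** length
    print(f"{length} steps -> {full_length_yield:.1%} full-length product")
```

Even at 99% per step, a 200-step build leaves you with roughly 13% usable product, which is why long error-free strands are so expensive.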
Here is the state of the art: Look at how wasteful it is.
And that's what I was speaking to up thread. Totally wasteful.
One (or more) layers up, we are not doing any computation via DNA. Maybe configuration and specification are better words for whatever does computation or some other task.
Imagine a sub-system. Maybe it's new sensory input, or some inherent comms interface or other we don't have as part of our nature right now.
If we think of DNA as some kind of Verilog, then:
Say I have a computer expressed in Verilog. I can push that to a chip, and that chip becomes that computer. Later, I want some additional computation capability, or I/O system, display, whatever. Just change the Verilog, and next push, that computer, being the same one a generation prior, now has this new thing.
So, we've got "human" expressed as DNA. Rather than compute with the actual DNA, expressing something that can compute is what I am getting at and in only the most general of ways.
The DNA basically makes an organism, or some functional bio-tech, that is capable of computation, of carrying electrical charge, of doing the things we know biological systems can do. They have different strengths from electromechanical systems.
Our own brains are kind of slow, imprecise, but look at what they can do! What if a "coprocessor" of sorts could be woven in there? Or, maybe just an interface, form of comms that is not like the senses we have now.
Or maybe we leave ourselves alone. (good call, if you ask me), but we end up making living things, task based, maybe not even fully functional beings (probably also good call), that can interact with our existing systems.
Living neural nets for more advanced pattern recognition or memory recall, for example? Visualization?
That's where the real action / change is, IMHO.
And when I asked for speculation, I was kind of hoping people with this interest, or domain expertise might just riff on what such an idea may evoke in their thoughts, impressions, visions of what could come.
The "ten building blocks" in the press article are actually ten bits (probably implemented as short DNA sequences chained together). I'm not sure why it can't calculate the square root of 961.
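As a sanity check in plain Python (purely a software analogue; the actual DNA circuit's internals are not described here): ten input bits can represent any value up to 1023, so 31² = 961 fits just as well as 30² = 900, and the output root always fits in five bits.

```python
from math import isqrt

def sqrt_10bit(n: int) -> int:
    """Software analogue of a 10-bit, perfect-squares-only square root."""
    assert 0 <= n < 2 ** 10, "input must fit in 10 bits"
    root = isqrt(n)
    assert root * root == n, "perfect squares only"
    return root  # always fits in 5 bits (0..31)

for n in (900, 961):
    print(f"{n:010b} -> {sqrt_10bit(n):05b} ({sqrt_10bit(n)})")
```

So the "up to 900" limit looks like a choice in the paper rather than anything forced by the bit width.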
These kinds of DNA computers got a lot of hype a few years ago, but for real applications they are too slow and too difficult to retarget.
I'm guessing that one reason is that I/O is problematic. But once you can do sufficiently large computations (e.g., factoring large numbers), I/O becomes the easy part.
I wonder why they are surprised, since Leonard Adleman (the 'A' in RSA) founded the field in 1994 by solving the Hamiltonian path problem on seven vertices [1].
And most of the other top-level comments express the same sentiment as mine but at a later time. It’s definitely not worth trying to understand the whims of HN voting.