Footpost's comments | Hacker News

Well, since neuromorphic methods can show that 138240 = 0, should it come as a surprise that they enable blockchain on Mars?

https://cointelegraph.com/news/neuromorphic-computing-breakt...


Neuromorphic computation has been hyped for ~20 years now. So far it has dramatically underperformed, at least vis-à-vis the hype.

The article does not distinguish between training and inference. A Google Edge TPU (https://coral.ai/products/) performs 4 trillion operations per second (4 TOPS) using 2 watts of power, i.e. 2 TOPS per watt. So inference already runs well below the 20 watts the paper attributes to the brain. To be sure, LLM training is expensive, but so is raising a child for 20 years. Unlike the child, LLMs can share weights and amortise the energy cost of training across every copy that serves inference.
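
(A back-of-envelope sketch of that amortisation argument; the training-energy and query-count figures below are made-up placeholders, not measurements:)

    # rough arithmetic behind the comment above
    edge_tpu_tops, edge_tpu_watts = 4, 2
    print(edge_tpu_tops / edge_tpu_watts, "TOPS per watt")      # 2.0

    training_energy_kwh = 1e6     # assumed one-off training cost
    lifetime_queries    = 1e10    # assumed queries served by all copies
    joules_per_kwh      = 3.6e6
    print(training_energy_kwh * joules_per_kwh / lifetime_queries,
          "J of training energy amortised per query")           # 360.0 J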

Another core problem with neuromorphic computation is that we currently have no meaningful idea how the brain produces intelligence, so it seems a bit premature to claim we can copy this mechanism. Here is what Nvidia's Chief Scientist B. Dally (one of the main developers of modern GPU architectures) says on the subject: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" This is from his "Hardware for Deep Learning" keynote at HotChips 2023, https://www.youtube.com/watch?v=rsxCZAE8QNA at 21:28. The whole talk is brilliant and worth watching.


The reasoning for this choice in the base ISA is discussed in the RISC-V ISA manual, Section 2.4 on "Integer Computational Instructions" [1]. Given that RISC-V is a modular ISA, it should be possible in principle to have suitable ISA extensions that do integer overflow detection. Maybe the absence of such an extension in 2025 indicates that this is not a pressing need for many RISC-V users?
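
In the meantime, the check can be done in software with a short branch sequence after the operation (roughly what the manual itself suggests). A minimal sketch, simulating 32-bit two's-complement addition in Python:

    def add32_checked(a, b):
        # wrap the sum to 32 bits, then reinterpret as two's complement
        s = (a + b) & 0xFFFFFFFF
        if s >= 1 << 31:
            s -= 1 << 32
        # signed overflow iff the operands share a sign but the sum does not
        overflow = (a < 0) == (b < 0) and (s < 0) != (a < 0)
        return s, overflow

    print(add32_checked(0x7FFFFFFF, 1))   # (-2147483648, True)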

[1] https://lists.riscv.org/g/tech-unprivileged/attachment/535/0...


While it is very convenient to quickly link some ancient draft spec, let's try to stick to the actual ratified specifications [0].

Of course, the rationale is still there in section 2.4 of the current (20250508) ratified version of The RISC-V Instruction Set Manual Volume I: Unprivileged ISA.

0. https://riscv.org/specifications/ratified/


Work on domain theory has slowed to a trickle, with almost no new results since the late 1990s. Most domain theorists have retired or moved on to other things. As an aside, Andy Pitts was made a Fellow of the Royal Society a few days ago!


Fantastic news and well deserved; even when Andy Pitts goes categorical his papers are very readable.

I was told a while ago that Streicher's "sequential" domains had solved the full abstraction problem for PCF [1] ... was it that or something else that killed off the work on game semantics?

It seems that Jon Sterling, author of the tool used to express the thoughts at the link, has made recent progress in domain theory [2] but perhaps the "synthetic" qualifier means it's not the real thing?

[1] Streicher's notes/book on domain theory sketches the construction but does not take it anywhere; I wonder what the reasoning principles are.

[2] see e.g. https://www.jonmsterling.com/jms-0064/index.xml


Andy Pitts' writing is extremely clear, whatever he writes about. This clarity is not easy to achieve and shows mastery!

The full abstraction problem for PCF was solved in the mid-1990s by Abramsky/Jagadeesan/Malacaria [1], Hyland/Ong [2], and Nickau [3]. All three solutions appeared simultaneously. This was a paradigm shift, because all three used interactive rather than functional models of computation. (There was also later work on domain-theoretic full abstraction, e.g. O'Hearn and Riecke [4], but I can't recall the details. Maybe Streicher's work was in this direction?) The beauty of interactive models like games is that they can naturally encode more complex behaviour, including parallelism.

[1] S. Abramsky, R. Jagadeesan, P. Malacaria, Full Abstraction for PCF.

[2] J.M. E. Hyland, C.-H. L. Ong, On Full Abstraction for PCF: I, II, and III.

[3] H. Nickau, Hereditarily sequential functionals.

[4] P. O'Hearn, J. G. Riecke, Kripke logical relations and PCF.


Yes, AIUI Streicher's work was in the vein of your [4]. (I got a vague pointer to him a while back; I don't know who's responsible for the meat of the development.)

Game semantics is expressive, but AFAIK it has not (yet) provided new tools for reasoning about programs. I wonder why those tools have (apparently) not been developed, or whether they just add (not very useful?) information to the old LCF story à la Scott. Has its moment passed?

By parallelism I think you mean concurrency. (Scott's domains have a bit too much parallelism, as shown by Plotkin in his classic paper on LCF; this excess is at the root of the failure of Scott's models to be fully abstract.) And Scott's big idea -- that computation aligns with his notion of continuity -- conflicts with fairness, which is essential for showing liveness. For this reason I never saw the point in powerdomains, excepting the Hoare (safety) powerdomain.
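
Plotkin's parallel-or is the canonical culprit: por(true, ⊥) = true, por(⊥, true) = true, por(false, false) = false. It lives happily in Scott's model, but no sequential program computes it, because a sequential program must commit to evaluating one argument first. A minimal Python sketch, with threads standing in for the parallelism:

    import threading, queue

    def por(f, g):
        # parallel-or: answer True as soon as either thunk yields True,
        # even if the other one diverges; no sequential strategy can do this
        results = queue.Queue()
        for thunk in (f, g):
            threading.Thread(target=lambda t=thunk: results.put(t()),
                             daemon=True).start()
        if results.get():
            return True
        return results.get()    # first answer was False; await the other

    def bottom():               # a diverging computation
        while True:
            pass

    print(por(lambda: True, bottom))   # True, despite the divergent argument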

As these notes show, models, even adequate models, are a dime a dozen. It's formulating adequate reasoning principles that is tough. And that's what Andy Pitts brought to classic domain theory in the 1990s.


That's interesting. What would you teach in the language design part of the course? By the time compilers are taught (2nd or 3rd year undergraduate), few students understand advanced language concepts, whether call/cc, GADTs, higher-kinded types, type-based lifetime tracking, or message-passing vs shared-memory parallelism, let alone the tradeoffs involved in the design space.

Note also that there is no scientific consensus on what makes for a good programming language.


The great G.-C. Rota quipped:

   Probability theory = combinatorics divided by n 
In this vein let me add:

   Information theory = log(probability theory)
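
(To make the quip concrete, a small sketch: the log turns the products of probability theory into the sums of information theory.)

    import math

    p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
    info = {x: -math.log2(q) for x, q in p.items()}      # bits of surprise
    entropy = sum(q * info[x] for x, q in p.items())     # 1.5 bits
    # independence: probabilities multiply, information adds
    assert math.isclose(info['a'] + info['b'], -math.log2(p['a'] * p['b']))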


The significant thing is not that we're arbitrarily applying the log function to probabilities. It is that there's a relationship between expected code length and the entropy of a source. It is surprising, to me at least, that the length of a thing is related to its probability. From that relationship come a number of fascinating connections to many other fields. Check out Cover and Thomas' "Elements of Information Theory" for a very approachable introduction to all these connections.
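
A minimal sketch of that relationship (the source coding theorem: entropy lower-bounds expected code length, and Shannon's ceil(-log2 p) codeword lengths come within one bit of it):

    import math

    p = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
    H = -sum(q * math.log2(q) for q in p.values())               # entropy
    # Shannon code: give each symbol a codeword of ceil(-log2 p) bits
    L = sum(q * math.ceil(-math.log2(q)) for q in p.values())    # expected length
    print(H, L)   # H <= L < H + 1; here both equal 1.75 bits (dyadic p)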


Isn't that exactly what Occam's Razor teaches us?

"Entities should not be multiplied without necessity."


Yes, information theory is a formalization of Occam's razor.


   practical DSLs
Isn't embedding a "practical DSL" just another toy compiler?

Except that with a DSL you have to worry about gnarly but not-so-interesting problems, like: what happens to my syntax (given by a CFG) when I add another syntax (another CFG)? Is the combined CFG ambiguous? Etc.
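
(Even two individually unambiguous grammars can clash when combined. A toy illustration:)

    G1:     S -> a S | ε            (unambiguous)
    G2:     S -> S a | ε            (unambiguous)
    G1+G2:  S -> a S | S a | ε      the string "a" now has two parse trees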

Keep things simple!


The main conceptual novelty that modern machine learning brings is the addition of computational complexity into the mix: from information theory's question

   what is learnable?
to

   what is learnable in poly-time?
(or similar resource constraints). This was pioneered, as far as I am aware, in Valiant's "A Theory of the Learnable" (please correct me if I'm wrong; I'm not an ML/AI historian). Interestingly, we see a similar evolution in Shannon's thinking about cryptography: from what is secure information-theoretically, i.e. against computationally unbounded adversaries, to what is secure against a poly-time-restricted adversary.
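
(For flavour, the classic finite-hypothesis-class bound from PAC learning: in the realizable case, m >= (ln|H| + ln(1/δ))/ε samples suffice for error below ε with probability 1 - δ. A small sketch:)

    import math

    def pac_sample_bound(h_size, eps, delta):
        # samples sufficient to PAC-learn a finite class H (realizable case):
        # m >= (ln|H| + ln(1/delta)) / eps
        return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

    # e.g. |H| = 2**20 hypotheses, 1% error, 99% confidence
    print(pac_sample_bound(2 ** 20, 0.01, 0.01))   # 1847 samples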


   Co-evolving parasites
This paper strikes me as the earliest manifestation of what would later become GANs (generative adversarial networks). Yes, the mechanism is different (GAs vs NNs), but the spirit (having a competitive mechanism for speeding up local search) is similar.
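
(The shape of the idea as a toy sketch: hosts evolve toward a hidden target while parasites evolve the test cases that expose the hosts' remaining mistakes. All details below are made up for illustration; this is not Hillis's actual setup.)

    import random

    N = 32
    TARGET = [random.randint(0, 1) for _ in range(N)]   # hidden optimum

    def next_gen(pop, scores):
        # keep the fitter half, refill with mutated copies
        ranked = [x for _, x in sorted(zip(scores, pop), key=lambda t: -t[0])]
        elite = ranked[:len(pop) // 2]
        kids = [[b ^ (random.random() < 0.05) for b in p] for p in elite]
        return elite + kids

    hosts = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
    probes = [random.sample(range(N), 4) for _ in range(20)]   # parasites

    for gen in range(200):
        # hosts score on the positions the parasites probe;
        # parasites score by how many host mistakes they expose
        h = [sum(x[i] == TARGET[i] for t in probes for i in t) for x in hosts]
        p = [sum(x[i] != TARGET[i] for x in hosts for i in t) for t in probes]
        hosts = next_gen(hosts, h)
        ranked = sorted(zip(p, probes), key=lambda t: -t[0])
        probes = [t for _, t in ranked[:10]] + \
                 [random.sample(range(N), 4) for _ in range(10)]

    best = max(hosts, key=lambda x: sum(a == b for a, b in zip(x, TARGET)))
    print(sum(a == b for a, b in zip(best, TARGET)), "of", N, "bits correct")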


One individual who has always interested me - and who, for some reason, seems to have been "black-holed" by those in computer science (though maybe that is just my own perception of things) - is Hugo de Garis:

https://en.wikipedia.org/wiki/Hugo_de_Garis

A relatively eccentric person, in the late 1990s and early 2000s he, along with others, created a hardware system called the CAM-Brain Machine (CBM); his bio above contains more information about it. Numerous papers regarding it were published, some of which can still be found on the internet today.

The machine itself never achieved the successes he marketed it as being able to achieve, but it did seem like an interesting project. Much like Babbage, though, de Garis was constantly moving to "sell" the system and win more government grants for further research, hopping from one place to another as the grants (or the government's patience) ran out and the project was cancelled.

Perhaps that's why you don't hear much about him any longer:

A combination of his eccentricity, his work not panning out as expected (indeed, from what I understand, the whole idea of "evolving neural networks" didn't pan out), his funding model likely sowing seeds of discontent among potential funding sources (which future AI/ML researchers would run up against, generating animosity or disdain toward him), and his later ideas being a bit too far out there for researchers to continue to take him seriously (expressed in his work of fiction "The Artilect War") - all of that contributed to his seclusion from the subject matter.

While the idea of creating a "robotic kitten" (Robokoneko) controlled by an "evolved neural network" didn't work out, the machines he created are still among the most aesthetically pleasing computers around (I place them second only to the Cray supercomputer designs); they weren't only technically advanced for their purpose, but also pleasing to look at and display (handy from a marketing perspective, I suppose). Only a handful of them were built. I often wonder what happened to those machines; at least one example belongs in a museum, ideally the CHM in Mountain View.


   the whole idea of "evolving neural networks" didn't pan out

To the contrary, evolving neural networks is all the rage these days ("Neural Architecture Search"; e.g. SOTA on ImageNet [1]).

[1] https://arxiv.org/abs/1905.11946


It's interesting that some of his intuitions seem really good, like the idea that you'll get better results by using very large models. On the other hand, building specialized hardware before the software works well has always been a disaster, because it's so hard to iterate on new hardware. His idea of using evolution as the primary learning process also doesn't work well.


Social networks are a marvelous thing [1] (Yeah, somebody "stole" the idea in your comment)

[1] https://old.reddit.com/r/MachineLearning/comments/d05nfr/d_i...


That's interesting. I certainly did not make the Reddit post. Since there is interest in proto-GANs, here are two more.

- J. Schmidhuber, Learning Factorial Codes By Predictability Minimization. (1992)

- W. Li, M. Gauci, R. Gross, A Coevolutionary Approach to Learn Animal Behavior Through Controlled Interaction. (2013)

Schmidhuber's work will be widely known (and its relationship with GANs controversial), but I can't recall where I read about Li et al.


I stole your idea :) Imagine how Danny Hillis must have felt when he heard about Goodfellow's paper!


They are not mutually exclusive:

- Use HoTT or any other type theory as a logic / foundation of math.

- Use Hoare logic to reason about programs, using your logic / foundation of math to discharge the entailments that are not given by the rules of your Hoare logic, i.e. the side conditions of Hoare's "Rule of Consequence" (displayed below).
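
(For reference, the rule in question; the two side entailments are exactly where the ambient logic, HoTT or otherwise, does its work:)

    % Rule of Consequence (standard Hoare logic)
    \frac{P \Rightarrow P' \quad \{P'\}\; c \;\{Q'\} \quad Q' \Rightarrow Q}
         {\{P\}\; c\; \{Q\}}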

There are several ways of connecting the two. One is Hoare type theory [1]. Another is using Characteristic Formulae [2, 3].

[1] G. A. Delbianco, A. Nanevski, Hoare-Style Reasoning with (Algebraic) Continuations.

[2] L. Aceto, A. Ingolfsdottir, Characteristic Formulae: From Automata to Logic.

[3] A. Chargueraud, Formal Software Verification Through Characteristic Formulae.

