Autopilot is clearly AI. If I didn’t mention 1914, you wouldn’t doubt it.
A feedback loop — namely between a system and its measure of its own performance — is central to the idea of AI. At least according to Peter Norvig, the director of research at Google, who defines intelligence as ‘the ability to select an action that is expected to maximize a performance measure’ (Russell & Norvig, 2016, p. 37).
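To show how literally that definition can be read, here is a minimal sketch: an agent is "intelligent" the moment it picks whichever action has the best expected performance measure. The thermostat-style actions and performance function are invented for illustration, not taken from Russell & Norvig.

    # Norvig's definition read literally: pick the action with the best
    # expected performance measure. Actions and numbers are made up.

    def select_action(actions, expected_performance):
        """Return the action expected to maximize the performance measure."""
        return max(actions, key=expected_performance)

    TARGET = 20.0   # desired temperature
    CURRENT = 18.0  # measured temperature

    def expected_performance(action):
        predicted = CURRENT + (1.0 if action == "heat_on" else -1.0)
        return -abs(predicted - TARGET)  # closer to the target is better

    print(select_action(["heat_on", "heat_off"], expected_performance))  # -> heat_on

By this definition even that trivial loop counts as intelligent, which is exactly the point being argued over below.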
I am forced to wonder, though, whether Norvig's definition is a useful one. By this measure we'd have to consider bacteria to be intelligent. Corporations, too, for that matter.
(I do go along with author Charles Stross, occasionally seen here on HN, that corporations are old, slow AIs, in which case we've already had AIs around for centuries.)
Personally I'm happy to grant a degree of intelligence to plants (and probably even bacteria, if I squint hard enough), though it's of a quite different nature to our own. Certainly feedback loops are central to the idea of intelligence, but there's a whole lot more needed than mere feedback. And so I find Norvig's definition a tad wide of the sort of intentionality and sentience we'd want flying a plane, and certainly far, far short of the sort of thing we'd call AGI.
1. Venus flytraps detect when an insect is on the trap and close it. So far so known, but fewer people know that the trap will open again after a while if no movement is detected (i.e. if a stone fell in). Likewise, digestion only starts if movement keeps being detected for a while (sketched as a small state machine after this list).
2. Mast years (https://en.wikipedia.org/wiki/Mast_(botany)) ... somehow trees communicate when to produce seeds en masse ... from what I've gathered, we have no idea how they do that.
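Here is the promised sketch of the flytrap logic in point 1 as a tiny state machine. The states, tick counts, and thresholds are illustrative guesses, not real flytrap physiology (actual traps count trigger-hair stimulations over time).

    # Rough state-machine sketch of the flytrap behaviour in point 1.
    # Tick thresholds are invented for illustration.

    def flytrap_step(state, movement_detected, ticks_in_state):
        if state == "open":
            return "closed" if movement_detected else "open"
        if state == "closed":
            if movement_detected and ticks_in_state > 2:
                return "digesting"   # sustained movement: likely prey, start digestion
            if not movement_detected and ticks_in_state > 5:
                return "open"        # nothing moving: likely a stone, reopen
            return "closed"
        return state                 # "digesting" treated as terminal here

    print(flytrap_step("closed", movement_detected=False, ticks_in_state=6))  # -> open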
It seems that quite a lot of tree-to-tree networking is done via mycorrhizal networks. Without doubt there are mutually beneficial interactions between plant roots and fungi for extracting nutrients, and quite a lot of good evidence that those networks are informational in nature, too. Whether that's related to mast-seeding synchronization... I have no idea.
Alternatively, I have also read of trees exuding pheromones via their leaves as a warning to other trees in the vicinity when browsers (antelope) come around to munch on the leaves, resulting in surrounding trees rapidly increasing the tannin content in their own leaves to make them unpalatable.
There's a whole lot of shit going on out there that we're scarcely aware of...
Is it useful to view corporations as old, slow AI? I certainly think so. Otherwise we get really confused about AI. Look at Zillow. That was a deep misunderstanding of AI: the belief that the product isn't done until we take the humans out of the equation. No. What is intelligent is a system that uses its own measures of success to improve. This, by the way, is why cybernetics is so critical to understand in the context of AI design.
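To make that distinction concrete, a toy sketch of a system that feeds its own measure of success back into its own behaviour. The one-parameter "valuation model" and error measure are entirely made up and have nothing to do with Zillow's actual system.

    # Toy sketch: a system that uses its own measure of success (prediction
    # error) to change its own behaviour (the weight). Not Zillow's system.

    import random

    weight = 0.5                         # the system's single adjustable behaviour

    def predict(sqft):
        return weight * sqft             # trivial "valuation model"

    for _ in range(200):
        sqft = random.uniform(500, 3000)
        market_price = 0.8 * sqft                  # hidden relationship it is judged against
        error = predict(sqft) - market_price       # its own measure of (lack of) success
        weight -= 0.0001 * error * (sqft / 3000)   # feed that measure back into behaviour

    print(round(weight, 2))  # drifts toward 0.8 purely via the feedback loop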
I cannot help but feel that this is just extending the confusion to Zillow. It seems utterly implausible that Zillow's ill-advised zeal for removing people from the process was driven by an overarching desire to develop AI, as opposed to, say, making more money.
I would yield to better evidence, but my suspicion is that it stemmed directly from executive-level confusion about AI. Consider: how many investor pitches have you seen that claim “AI” technology as a mechanism to increase perceived IP value? They were telling their investors that they had AI, and AI means people aren’t involved (fallacy).
No doubt some pitches do claim that AI will increase perceived IP value, but for an investor to go from perceived IP value to the conclusion that people are not involved seems completely unjustified, and I see no evidence that people are actually thinking this way. Furthermore, I have no idea how you think this line of thought justifies calling 1914's very primitive autopilot technology AI.
I have encountered CEO boardroom thinking that, for instance, suggested that data scientists were not necessary because AI would replace them.
I have experience in adaptive education, where massively expensive teams of engineers missed the point that the “smartness” of the system needs to be based on improving outcome measures (namely, learning outcomes). Instead they focused on massive, complex modeling initiatives with no feedback loops to indicate whether the models were doing anything useful.
If more people understood why a steam governor, a 1914 autopilot, or a corporate bylaw is a primitive form of AI, they wouldn't be looking for magic. “If I can understand it, it must not be AI.”
> If more people understood why a steam governor, a 1914 autopilot, or a corporate bylaw is a primitive form of AI, they wouldn't be looking for magic. “If I can understand it, it must not be AI.”
By the same argument, a person could say "AI is like a steam governor. I understand completely how steam governors work, and I know they cannot possibly translate from one language to another or recognize faces, so any claim that AI can do so is nonsense." This, of course, would be a completely fallacious argument, and where it goes wrong is precisely the assumption that AI is anything like a governor, except in the broadest possible sense, one that gives precedence to a commonplace resemblance over all the substantive ways in which they are almost completely different.
I understand your desire to persuade people to not regard AI as magic, but I do not think this is helping.
See the paper from DeepMind on “reward is enough” and Alfred Russel Wallace on the relationship between steam governors and evolution. From that perspective, systems like steam governors can eventually recognize faces.
Eventually - after they have evolved to the point that they are more unlike steam governors than they are like them, and become something else in the process (a different species, for example.) To the best of my knowledge, no steam governor has ever recognized a face - or evolved into something that has, for that matter.
I am also curious as to how you reward a steam governor - do better governors get more sex? You might reward the inventor of a particularly effective governor with orders, but that isn't rewarding the governor.
> A feedback loop — namely between a system and its measure of its own performance — is central to the idea of AI.
A feedback loop may be necessary for AI (or even "natural" intelligence, and sentience (never mind sapience)) to happen, but is simply having one sufficient?
Respectfully, while the optimization functions and constraint handling may overlap, they differ in their intended applications. For optimal control there is no adaptation outside the initial ruleset, and adaptation is the core of AI. Optimal control is "keep things on path" (a minimal controller sketch follows the links below).
* https://www.electronics-tutorials.ws/systems/negative-feedba...
* https://en.wikipedia.org/wiki/Feedback
There are no heuristics involved:
* https://en.wikipedia.org/wiki/Heuristic_evaluation
* https://en.wikipedia.org/wiki/Heuristic_(computer_science)
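As promised above, a minimal sketch of the "keep things on path" point: a plain proportional negative-feedback controller whose gain and setpoint are fixed at design time, so nothing in the loop ever rewrites its own ruleset. The plant model and numbers are made up.

    # Minimal negative-feedback (proportional) controller: the ruleset (gain
    # and setpoint) is fixed at design time and never adapts.

    KP = 0.4           # fixed proportional gain
    SETPOINT = 100.0   # "keep things on path": the value to hold

    def control_step(measurement):
        error = SETPOINT - measurement   # negative feedback: push against the error
        return KP * error                # fixed rule; no learning, no heuristics

    value = 80.0
    for _ in range(20):
        value += control_step(value)     # crude plant: control output adds directly to the value

    print(round(value, 1))  # settles at the setpoint, yet the controller never changed itself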