
> demonstrative of "intelligence"

Nothing. Nothing that just looks like intelligence is intelligence - convincing appearances cannot make a nature. Either it is a judicious ontology refiner, or it is not intelligence.



If you cannot come up with evidence that would convince you that you are wrong, then you are not making a scientific claim.


Logic is sufficient; there is no scientific claim involved: it has to respect a definition. It has to /be/ something, not merely /look like/ something - which, in this context, would make it the opposite.

If it refines its ontology through critical productive analysis that makes it "see details" then it may be an intelligence, while if it outputs mockeries it is /the opposite/ of intelligence.

And in fact, many of today's tentative toys are completely astray with regard to the concept of intelligence, because the secondary meaning of the term, as traditionally used in this technical context, is "able to provide solutions, somehow replacing an engineer": the solutions are verified on effectiveness, not on "vero-pseudo-similarity".


> If it refines its ontology through critical productive analysis that makes it "see details" then it may be an intelligence, while if it outputs mockeries it is /the opposite/ of intelligence.

To help me understand your definition well enough, how could I prove to you whether my Cousin Sarah is intelligent?

I have some experience in this field, and even I have no idea what you mean by "refining your ontology through critical productive analysis."

In practice, what's an example of that? What could someone (an actual person) do, to demonstrate that to you?

The Turing Test is a well-understood example.

How could we formalize your test?

I also don't understand what evidence I could possibly give you that something /is/ something, rather than something /looks like/ something. What's the measurable difference between /looking like/ and /being/?


Now we enter the context of science, supposing we have to assess something in the absence of the blueprint. But that is complementary to engineering, where we decide the blueprint - which makes it much easier to tell, and to plan, "is" versus "seems".

And in a context of science, you cannot prove to me whether your cousin is intelligent until we open her up and check the wiring - but I can be convinced that she is not intelligent, if her behaviour so shows.

Plenty of clear hints come from machines and, more unfortunately, from humans - I listed a few recently elsewhere. The one that does not know you cannot ring the doorbell of Caius Julius Caesar, though it has read a biography; the one that would point to the swan right there as white in spite of being told differently; the one that goes fully delirious.

An intelligent entity has to be able to know entities and assess their relations on sound foundations. It is not difficult to reveal when this is not happening, and the literature contains a number of "codified" examples: one is understanding the pronoun in the sentences "The trophy will not fit in the case: it is too big" and "The trophy will not fit in the case: it is too small", which can betray the engine's lack of any capability for (sub)world modelling.
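
To be concrete, here is a minimal sketch (in Python) of how such a pair could be checked; ask_model() is a placeholder for whatever engine is under evaluation, not a real API, and the exact wording of the pair is only illustrative.

    # Minimal sketch of a Winograd-style pronoun check. ask_model() is a
    # placeholder for the system under test, not a real API.
    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug in the engine being evaluated here")

    # The two sentences differ by one word, and the correct referent flips
    # with it, so surface statistics alone should not suffice for both.
    PAIRS = [
        ("The trophy will not fit in the case: it is too big.", "trophy"),
        ("The trophy will not fit in the case: it is too small.", "case"),
    ]

    def winograd_score(pairs=PAIRS) -> float:
        correct = 0
        for sentence, referent in pairs:
            answer = ask_model(sentence + " What does 'it' refer to?")
            correct += answer.strip().lower() == referent
        return correct / len(pairs)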

Prof. Patrick Winston said that intelligence means "you have never run with a bucket full of gravel, but you know what it entails regardless", because you can reliably "tell yourself a story". That is a requirement: the "intelligent" entity has to know what "bucket", "gravel", "running", and all possibly related concepts are, be able to refine them by combining them sensibly, and produce statements that are solid at the current state of development of its ontology.

So, before going into finer details, let us have a system that definitely builds an ontology, made of virtual scenarios in which entities are known progressively, and let us see how it reasons about it. Because when this is missing, Intelligence is missing, in spite of any mockery of actual intelligence. A photocopier does not make one an artist, and capabilities for imitation are a risk, not a goal: they constitute a warning that makes the "real thing" trickier to discriminate. And since we already know that by design the "real thing" is not there, failure under some field condition is already foreseeable. If you cannot tell "looking like" from "being", then you have a problem - because what "is not" will probably reveal itself as such.

You have to build the system so that this core is foreseen from the start.

And if instead the "foundation" of the outputs comes from crunching the most likely fitting response, that is directly the opposite of intelligence - which understands /why/ something works, and does not just record what "works". It is the opposite because intelligence by definition "reads in", investigates, while the other fully "delegates out".

Did you not mention Science? Well, to be intelligent an entity has to be a Philosopher and a Scientist. And if it expresses itself like one while being a black box, your duty is to check thoroughly, with renewed effort, that you are not being tricked, that there is substance behind it - far from being contented.

--

Edit: you want a "moving goalpost"-like test? The Resnick-Halliday-Krane Physics textbook contains a number of exercises: the AI has to be able to solve them correctly and properly - "properly" even more than "correctly". And then, for all disciplines, there will be similar exercises - from "Why would the radical school of Austrian Economics fail in the Kholomity region, and why would it succeed", to "Why would implementing radical-school Austrian Economics measures in the Kholomity region be immoral, and why would it be moral"; "legal/illegal", etc. Make it pass examinations for degrees. But it remains key that no tricks are employed: it has to build instances of subworlds, made of organically developed concepts (including laws), run its tests and reasoning on them, and draw conclusions. Tricks are always possible, and "copying from the neighbour, learning the answers by rote, etc." cannot count.
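
To be concrete about "properly" more than "correctly": one hypothetical way to grade such exercises is to score the derivation separately from the final answer, so that a copied or guessed result does not pass. Everything below (the Exercise fields, the keyword matching) is only an illustrative sketch in Python, not a real benchmark.

    # Hypothetical grading sketch: an exercise counts as passed only if the
    # stated reasoning is sound, not merely if the final answer matches.
    from dataclasses import dataclass

    @dataclass
    class Exercise:
        statement: str
        expected_answer: str
        required_steps: list  # laws/concepts that must appear in the derivation

    def grade(final_answer: str, derivation: str, ex: Exercise) -> dict:
        correctly = final_answer.strip() == ex.expected_answer
        # "Properly": every required concept must actually be used - a crude
        # proxy for the engine having built and reasoned on a subworld.
        properly = all(step.lower() in derivation.lower()
                       for step in ex.required_steps)
        return {"correctly": correctly, "properly": properly,
                "passed": correctly and properly}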


It sounds like my Cousin Sarah would fail most of your tests.

That, to me, means you have set the bar too high.

If your test excludes actual humans, don't you think it's possible your test is too difficult?


> set the bar

No, for many reasons.

It is not a matter of a "bar": intelligence is what it is. You cannot call uncooked dough a biscuit.

But that is not the issue: there is a core function of intelligence, and if it is implemented, the engine is "intelligent" even when the entities (including relations) it manages are limited. To build "movement", a wheel is sufficient. The first goal can be to build a simple intelligence.

But to fulfill its promises, it has to be able to grow through its own constitutional features. The difficulty is to make that engine plastic, accommodating, "all-purpose", fertile enough that it could achieve a Doctorate (judged by severe, intelligent and discriminating evaluators - let us always remember that tricks will never count) once enough resources are invested in its expansion. The real difficulty is understanding what the blueprint for such an engine is, one that makes modules emerge without installing them - the complexities that are expected to develop inside the engine cannot be implants. The real difficulty is to draw up the lean engine for an intelligence - to define its essential, productive components.

Some humans are, very, very, very unfortunately, simply heavily lacking in displayed intelligence: but again, that does not change definitions. An underdeveloped and damaged function cannot count as a model. And when we build a machine, the goal is to implement a function optimally.



