Delicious vegetarian food already exists and doesn't require new technology, and it's not necessary to completely eliminate meat-eating to significantly reduce your ethical-harm footprint. It's a matter of changing food culture. Once you adapt to an omnivore diet that draws tasty meals from both meat and non-meat cuisines, it's actually quite easy to reduce your meat intake further.
I was raised as a meat eater and ate it for 30 years. I’ve been vegetarian for about a decade for ethical reasons that I do believe are incompatible with eating any meat. I consider myself a good cook and make vegetarian/vegan meals for my family every night. However: I will never stop thinking that the taste of chicken, pork, beef and lamb are desirable. The conditioning is too strong. Sticking with vegetarianism is still an act of willpower for me. This is why I like meat alternatives.
Purely symbolic AI has been tried and found wanting. Decades of research by hundreds of extremely bright people explored a large number of promising-looking approaches to no avail. Intuition tells us thinking is symbolic; the failure of symbolic systems tells us intuition is most likely wrong.
What is interesting about current LLM-based systems is that they follow exactly the model suggested by this paper, by bolting together neural systems with symbol manipulation systems - to quote the paper "connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling."
They are clearly also kludges. As you say, they are built on shaky foundations. But the success - at least compared to anything that has gone before - of kludged-together neural/symbolic systems suggests that the approach is more fertile than any of its predecessors. They are also still far, far away from the AGI that has been predicted by their most enthusiastic proponents.
My best guess is that future successful hard-problem-solving systems will combine neurosymbolic processing with formal theorem provers, where the neurosymbolic layer constructs proposals with candidate proofs, to submit to symbolic provers to test for success.
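The propose-and-verify loop described above can be sketched roughly as follows. This is purely illustrative: `neural_propose` and `symbolic_check` are hypothetical placeholder functions standing in for a neural proposal model and a formal prover (something like a Lean or Coq kernel), not real APIs.

```python
# Illustrative sketch of a neural-proposes / symbolic-verifies loop.
# Both functions below are toy stand-ins, not real model or prover calls.

def neural_propose(goal, failed_attempts):
    # Stand-in for a neural model generating a candidate proof,
    # conditioned on the goal and on earlier failed attempts.
    return f"proof-attempt-{len(failed_attempts)}"

def symbolic_check(goal, candidate):
    # Stand-in for a formal theorem prover that either certifies
    # the candidate proof or rejects it. Toy criterion here.
    return candidate == "proof-attempt-3"

def solve(goal, max_attempts=10):
    failed_attempts = []
    for _ in range(max_attempts):
        candidate = neural_propose(goal, failed_attempts)
        if symbolic_check(goal, candidate):
            return candidate              # verified by the symbolic layer
        failed_attempts.append(candidate) # failures inform the next proposal
    return None                           # gave up within the attempt budget
```

The key property is that only the symbolic layer can declare success, so the neural layer's unreliability is contained: a bad proposal costs one iteration, never a wrong answer.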
I think there is a misunderstanding - the whole point of my comment was that LLMs are lacking sensory input which could link the neural activations to real-world objects and thus provide a grounding of their computations.
I agree with you that purely symbolic AI systems had severe limitations (just think of those expert systems of the past), but the direction must not only go towards higher-level symbolic provers but also towards lower-level sensory data integration.
You still have to rely on other drivers not being actually suicidal. Just to give one terrifying example scenario: you will pass hundreds, if not thousands, of other drivers travelling in the opposite direction in the course of a long journey. Any motorist in the opposing lane can cause a head-on collision at any time with a relatively trivial maneuver. Given human reaction times, and the very high closing velocity of such a collision, your ability to avoid this would seem to be non-existent. You certainly couldn't "just stop" to prevent it.
This is all true. It doesn't really apply to my personal driving situation, where I can't recall the last time I was on a road with a speed limit above 40. I drive less than half the days of the week. That's part of maintaining control for me. I can't set plane schedules, but I can drive when there are fewer drivers and stick to slow roads.
Also, there are numerous situations you're leaving out where just stopping (or just slowing down) does solve the safety issue. Far more of those than scenarios involving a suicidal driver choosing me as their target.
Our capacity for psychological projection of our unconscious desires onto inanimate objects is quite amazing. Given what is possible in terms of projection onto things as random as Ouija boards, tealeaves or Tarot cards, I'm surprised this sort of thing isn't more common with LLMs that sound just like conscious beings.
Not completely, anyways. But I can empathize with someone who is cold at night and someone who is a Miami Dolphins fan. Both are typically unpleasant experiences.
I would be surprised if someone, somewhere, didn't still have a copy of what would nowadays be considered to be an absurdly small amount of data. If they do, I wouldn't be surprised if they read HN.
I think you might find that the Teletubbies have already solved this problem, in a variety of different configurations, including the rigid and curly antenna you mention above. The benefits of the circular and triangular configurations are as yet unclear to me, but I'm sure the Voice Trumpets and Sun Baby know what they're doing.
There would probably be a Universal Common Embedding used as an intermediate representation between people's individual private neural representations. Likely the distant descendant of our open-source neural models.
And machines would of course also use the Universal Common Embedding to communicate, as man and machine meld into a seamless distributed whole.
It all seems a little bit too inevitable for my liking at this point.