The most likely outcome is that the side with the army and guns turns the blue areas into an occupied police state and sells it to its constituency as a "civil war", despite it being completely one-sided.


Well-defined goal is the big one. We wanted a big bomb.

What does AGI do? AGI is up against a philosophical barrier, not a technical one. We'll continue improving AI's ability to automate and assist human decisions, but how does it become something more? Something more "general"?


"General" is every activity a human can do or learn to do. It was coined along with "narrow" to contrast with the then decidedly non-general AI systems. This was generally conceived of as a strict binary - every AI we've made is narrow, whereas humans are general, able to do a wide variety of tasks and do things like transfer learning, and the thinking was that we were missing some grand learning algorithm that would create a protointelligence which would be "general at birth" like a human baby, able to learn anything & everything in theory. An example of an AI system that is considered narrow is a calculator, or a chess engine - these are already superhuman in intelligence, in that they can perform their tasks better than any human ever possibly could, but a calculator or a chess engine is so narrow that it seems absurd to think of asking a calculator for an example of a healthy meal plan, or asking a chess engine to make sense of an expense report, or asking anything to write a memoir. Even in more modern times, with AlexNet we had a very impressive image recognition AI system, but it couldn't calculate large numbers or win a game of chess or write poetry - it was impressive, but still narrow.

With transformers, demonstrated first by LLMs, I think we've shown that the narrow-general divide as a strict binary is the wrong way to think about AI. LLMs are obviously more general than any previous AI system, in that they can do math or play chess or write a poem, all using the same system. They aren't as good at these tasks as our existing superhuman computer systems (aside from language processing, where they are SOTA), nor even as good as humans, but they're obviously much better than chance. With training to use tools (like calculators and chess engines) you can easily make an AI system with an LLM component that's superhuman in those fields.

But there are still things that LLMs cannot do as well as humans, even when using tools, so they are not fully general. One example is making tools for themselves to use - they can do a lot of the parts of that work, but I haven't yet seen an example of an LLM actually making a tool for itself that it can then use to solve a problem it otherwise couldn't. This is a subproblem of the larger "LLMs don't have long-term memory and long-term planning abilities" problem: you can ask an LLM to use Python to make a little tool for one specific task, but it's not yet capable of adding that tool to its general toolset to enhance its capabilities going forward. It can't write a memoir, or a book that people want to read, because LLMs suck at planning and at refining drafts, and their creativity is limited because they're typically a blank slate in terms of explicit memory before they're asked to write. They have a gargantuan store of implicitly remembered things from training, which is where what creativity they do have comes from, but they don't yet have a way to accrue and benefit from experience.
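
To make that "accruing tools" gap concrete, here's a hypothetical sketch (every name in it is made up; no current LLM product exposes anything like this) of what persisting a self-written tool across tasks would look like:

    # Hypothetical sketch only: this illustrates the capability the
    # paragraph above says is missing. The toolbox would have to survive
    # across sessions, and the model would have to decide to consult it.
    toolbox = {}

    def register_tool(name, fn):
        toolbox[name] = fn

    # Task 1: the model writes itself a helper and registers it...
    register_tool("miles_to_km", lambda miles: miles * 1.60934)

    # Task 2, days later: ...and reaches for it again, unprompted.
    print(toolbox["miles_to_km"](26.2))  # 42.16...

Today you can get the first half of this inside a single chat session; it's the persistence and the unprompted reuse that are missing.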

A thought exercise I think is helpful for understanding what the "AGI" benchmark should mean is: can this AI system be a drop-in substitute for a remote worker? As in, any labour that can be accomplished by a remote worker can be performed by it, including learning on the job to do different or new tasks, and including "designing and building AI systems". Such a system would be extremely economically valuable, and I think it should meet the bar of "AGI".


> LLMs are obviously more general than any previous AI system, in that they can do math or play chess or write a poem, all using the same system

But they can't: they still fail at arithmetic and still fail at counting syllables.

I think that LLMs are really impressive but they are the perfect example of a narrow intelligence.

I think they don't blur the lines between narrow and general, they just show a different dimension of narrowness.


>But they can't, they still fail at arithmetic and still fail at counting syllables.

You are incorrect. These services are free; you can go and try them for yourself. LLMs are perfectly capable of simple arithmetic, better than many humans and worse than some. They can also play chess and write poetry. I made zero claims about "counting syllables", but they seem perfectly capable of doing that too. See for yourself - this was my first attempt, no cherry-picking: https://chatgpt.com/share/ea1ee11e-9926-4139-89f9-6496e3bdee...

I asked it a multiplication question, so it used a calculator to correctly complete the task; I asked it to play chess, and it did well; I asked it to write me a poem about it, and it did that well too. It did everything I said it could, which is significantly more than a narrow AI system like a calculator, a chess engine, or an image-recognition algorithm could do. The point is that it can do reasonably well at a broad range of tasks, even if it isn't superhuman (or even average-human) at any given one of them.

>I think that LLMs are really impressive but they are the perfect example of a narrow intelligence.

This doesn't make any sense at all. You think an AI artifact that can write poetry, write code, play chess, control a robot, recommend a clutch to go with your dress, compute sums, etc., is "the perfect example of a narrow intelligence" while a chess engine like Stockfish or an average calculator exists? There are AI models that specifically and only recognise faces, but the LLM multitool is "the perfect example of a narrow intelligence"? Come on.

>I think they don't blur the lines between narrow and general, they just show a different dimension of narrowness.

You haven't provided an example of what "dimension of narrowness" LLMs show. I don't think you can reasonably describe an LLM as narrow without redefining the word - just because something is not fully general doesn't mean that it's narrow.


For arithmetic, this is ChatGPT:

    how much is (0.2 + 0.1) * 10?
    The result of (0.2+0.1)×10 is approximately 3, with a slight rounding difference leading to 3.0000000000000004.   
My 10yo does not make this error; ChatGPT does, because it does not understand math but knows how to use Python.
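
To be fair to the model, the artifact itself is standard IEEE 754 floating-point behavior, which any Python interpreter reproduces; a minimal demonstration (plain Python, nothing LLM-specific):

    # 0.1 and 0.2 have no exact binary floating-point representation,
    # so their sum carries a tiny error that multiplying by 10 preserves:
    print((0.2 + 0.1) * 10)   # 3.0000000000000004

    # Exact decimal arithmetic avoids the artifact:
    from decimal import Decimal
    print((Decimal("0.2") + Decimal("0.1")) * 10)   # 3.0

So the failure mode is less "can't multiply" and more "delegated to a float-based tool without sanity-checking the result".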

For poetry: counting syllables is a significant part of most poetry forms, so if you can't count syllables, you can't do poetry.

Let's say you want a 5-7-5 haiku; this is ChatGPT:

    write a 5-7-5 haiku about windstorms

    Fierce winds howl and spin,
    Branches bend, clouds race the sky,
    Storm leaves quiet calm. 
This is not a 5-7-5 haiku.
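
For what it's worth, counting English syllables is hard to automate even in ordinary code; here's a crude vowel-cluster heuristic (my own sketch, approximate by design):

    import re

    def rough_syllables(word):
        # Count runs of vowels, dropping one for a trailing silent 'e'.
        # English has many exceptions, so this is only an approximation.
        word = word.lower().strip(".,!?;:")
        groups = re.findall(r"[aeiouy]+", word)
        if word.endswith("e") and len(groups) > 1:
            return len(groups) - 1
        return max(1, len(groups))

    print(rough_syllables("windstorm"))  # 2 (vowel runs: 'i', 'o')
    print(rough_syllables("quiet"))      # 1 -- wrong, it's 2 syllables

Words like "quiet" (two syllables, one vowel run) already break the heuristic, and LLMs have the extra handicap of seeing tokens rather than letters.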

LLMs are not general, but they show that a specific specialization ("guess next token") can solve a lot more problems than we thought it could.
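
(For anyone unfamiliar, the "guess next token" loop really is the whole story at inference time; a schematic sketch, where `model` stands in for the trained network rather than any real library's API:)

    # Schematic autoregressive decoding. `model` maps a token sequence to
    # a probability distribution over the vocabulary for the next token.
    def generate(model, tokens, max_new_tokens=50):
        for _ in range(max_new_tokens):
            probs = model(tokens)  # P(next token | tokens so far)
            best = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
            tokens.append(best)
        return tokens

Everything from the chess moves to the poem comes out of repeating that one step.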


This argument generalises to all possible AI systems and thus proves way too much.

>[AI system]s are not general, but they show that a specific specialization ("[process sequential computational operations]") can solve a lot more problem that we thought it could.

Or if you really want:

>Humans are not general, but they show that a specific specialization ("neuron fires when enough connected neurons fire into it") can solve a lot more problem that we thought it could.

This is just sophistry - the method by which some entity achieves things doesn't matter; what matters is whether or not it achieves them. If it can achieve multiple tasks across multiple domains, it's more general than a single-domain model.


It would still need an objective to guide that evolution, originally given by humans. Humans have the drive for survival and reproduction... what about AGI?

How do we go from a really good algorithm to an independently motivated, autonomous superintelligence with free rein in the physical world? Perhaps we should worry once we have robot heads of state and robot CEOs. Something tells me the current human heads of state and human CEOs would never let it get that far.


Someone will surely set its objective for survival and evolution.


That would be dumb and unethical, but yes, someone will do it, and there will be many more AIs, with access to greater computational power, set to protect against that kind of thing.


Do you have examples?


Yes, but only one side has the guns.


> Yes, but only one side has the guns.

Aside from the unnecessary and inflammatory injection of firearms, I'd like to clarify and say "only one side thinks only they have the guns". Not that this should matter in a civilized country.


I didn't mean for it to be inflammatory, but it's certainly not unnecessary to the conversation. We can both-sides it all day, but the fact is that one side has a militia proudly ready to use force. As other commenters have suggested, this isn't necessarily a concern in a functioning civil society. But if we descend into populist mob violence, the side that has eschewed firearms will be at a distinct disadvantage. I hope it doesn't come to that, but it's not so improbable as to be irrelevant here.


I assure you that both sides have firearms in sufficient quantities (I looked up the statistics before I posted) that if one side attempted armed conflict, it would end poorly, with smaller families all around.


> Not that this should matter in a civilized country.

It should be very clear by now that "civilization" is more fragile than we'd like to believe.

Society, including the systems that protect you from bad actors, could break down, and quickly.

When it comes to guns, the absence of "government" is an under-appreciated possibility.


Just look at how useful guns were to average citizens in the USSR during its collapse.

Wait: were guns useful in 1991 to non-gangsters?


I'm sure they were.

You acknowledge the rise of organized crime in a power vacuum, but doubt the utility of self-defense in that environment?


Individual self-defense against organized crime syndicates is not terribly effective. You shoot one guy, the capo shows up with 10 men to shoot you.

A broad organized militia may be able to monopolize enough force to keep organized crime in check... and congratulations, you've just reinvented policing from first principles.


> you've just reinvented policing from first principles.

Yes, in the absence of government protection (scroll up).


In a civilized country, civil society and the rule of law don't depend on the ever-present threat of populist violence to begin with. The US is not a civilized country, and one side not only has most of the guns but has proven itself more eager to use them in the service of its ideological goals than the other.


… whoever’s in control of the government?

If you mean only one political party, no, both do.


This is the "limiting harm" part of 2.


Congrats on having good parents! However, I believe that most of this discussion applies to the less fortunate kids with shitty parents.


I'm very skeptical of the claim that most of the kids with good parents in your poor neighborhood ended up in the 1%. Even with good parents, kids can grow up to be dumb and poorly socialized.

Also, "poor" as you're using it here might be too broad a term. The amount of brain cycles that parents can spare for their kid's academic performance is inversely related to how much time they're worrying about food, whether or not the utilities will be shut off, how to work overtime and keep the kid safe, etc.


What happens more often, MSM doing culture war gymnastics? Or people accusing MSM of culture war gymnastics (an accusation which is, itself, culture war gymnastics)?


Well, probably the latter, since there are a lot more people who don't work for MSM than do?


I was sort of with you until the FSD


The new FSD is fantastic. It's not perfect, but 12.3 is a massive upgrade over 11.x. Very natural. The end is within sight, really.


IDK. I tried previous versions others said were “fantastic” and it was a pants-shitting experience every time, so I quickly turned it off. Plus I had to hold onto the steering wheel the entire time. That’s not “self driving” IMO


Version 12 is a quantum leap above version 11. It's not even comparable. V11 was just bad imo


Maybe I’ll give it another try. Do you still have to keep your hands on the steering wheel?

