LLMs are only useful as information systems, largely for parsing/mangling variable data and building other information systems. Those are problem sets any large org like the DoD has.
I don’t think anyone has even seriously proposed using them for weapons targeting, at least in the current broad LLM form.
If they are slow (2x slower on a cruise missile or drone SoC) and wrong all the time, then why would they even bother? They already have AI models for visual targeting that are highly specialized for the specific job, and even that's almost entirely limited to very narrow vehicle or ship identification, which is always combined with existing ballistic, radar, or GPS targeting.
Buying some LLM credits doesn’t help much at all there.
Too much of AI criticism gets uncritically packaged with these hand-wavy FUD statements, IMO.
I'd like to believe you, but there's credible evidence that (e.g.) DOGE has been using LLMs to cut NSF or HHS funding, with prompts in the vein of "is this grant woke."
Which is obviously stupid. So if stupid people are using these things in stupid ways, that seems bad.
Granting that it's a task you want done at all, it's at least the right kind of task (language processing) for an LLM. The proposals from the comment starting this thread aren't.
If grant classification is trying to drive a car non-stop (including not stopping for gas) from NY to LA, stuffing LLMs into weapons is more like trying to drive that same car from NY to London. They're just not the proper kind of tool for that, and it's not the same class of error.
If people on Hacker News are uncertain about what is and isn't a suitable task for these models, then the non-technical people making these decisions surely are as well.
You're saying that weapons are designed by incompetents, and that enthusiasts have a reasoned understanding of the capabilities and limitations of the latest thing they're going "ooh shiny" about.
That's fundamentally not a language processing task. It's a decision-making task with a huge impact on individual scientists and the scientific community, and not something that should be delegated to a machine, no matter how sophisticated.