If the employee exceeded authorized access or acted without authorization to manipulate the payment system, they could be charged under the CFAA, which criminalizes unauthorized access to computer systems.
Obstruction of Federal Proceedings or Official Duties - 18 U.S.C. § 1505 or § 1913:
§ 1505: Obstruction of agency proceedings or congressional actions.
§ 1913: Prohibits using appropriated funds to lobby or interfere with government decisions, though applicability may depend on intent.
Interfering with congressionally mandated payments could constitute obstruction of lawful government functions.
Theft or Conversion of Government Funds - 18 U.S.C. § 641:
If the payment was lawfully owed and the employee’s actions deprived the recipient of funds, this could be seen as theft or conversion of government property.
False Statements or Fraud - 18 U.S.C. § 1001:
If the employee falsified records, submitted false information, or lied to justify stopping the payment, they might face charges for making false statements.
Conspiracy - 18 U.S.C. § 371:
If others were involved, conspiracy charges could apply to defraud the U.S. or commit other offenses.
Malfeasance or Misconduct in Office:
While not a specific federal statute, general misconduct or breach of public trust could lead to charges under broader provisions or administrative penalties (e.g., termination, fines).
You will need to do some mental gymnastics to find a criminal statute that could be used to prosecute that, and the US Attorney for DC does not appear at all interested in doing so.
Musk and DOGE employees could be arrested and tried by the DOJ if a Democrat wins the next presidential election. These are the laws they may have broken (the same list as above).
Again, none of those directly applies to what DOGE is doing. You have to really stretch the meaning of those laws to try to make them fit. Prosecutors do this regularly, but the current US Attorney is unlikely to.
An LLM is simply a model which, given a sequence, predicts the rest of the sequence.
You can accurately describe any AGI or reasoning problem as an open domain sequence modeling problem. It is not an unreasonable hypothesis that brains evolved to solve a similar sequence modeling problem.
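To make "predicts the rest of the sequence" concrete, here is a minimal sketch of the autoregressive idea, using a toy bigram table in place of learned weights. The corpus, predict_next, and continue_sequence names are invented for illustration and are not how any particular LLM is implemented:

    from collections import Counter, defaultdict

    # Toy "corpus" standing in for an LLM's training data (illustrative only).
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count which token tends to follow which (a bigram table instead of learned weights).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(token: str) -> str:
        """Greedy prediction: the continuation seen most often in the corpus."""
        candidates = follows.get(token)
        return candidates.most_common(1)[0][0] if candidates else "."

    def continue_sequence(prompt: list[str], max_new: int = 6) -> list[str]:
        """Autoregressive loop: each predicted token is appended and becomes context."""
        out = list(prompt)
        for _ in range(max_new):
            out.append(predict_next(out[-1]))
        return out

    print(continue_sequence(["the", "dog"]))  # continues the prompt from co-occurrence statistics alone

A real LLM replaces the bigram table with a neural network over subword tokens, but the outer loop (predict one token, append it, repeat) is the same.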
> It is not an unreasonable hypothesis that brains evolved to solve a similar sequence modeling problem.
The real world is random and requires making decisions on incomplete information, in situations that have never happened before. The real world is not a sequence of tokens.
Consciousness requires instincts in order to prioritize the endless streams of information. One thing people don't want to accept about any AI is that humans always have to tell it WHAT to think about. Our base reptilian brains are the core driver behind all behavior. AI cannot learn that.
How do our base reptilian brains reason? We don't know the specifics, but unless it's magic, it's determined by some kind of logic. I doubt that logic is so unique that it can't eventually be reproduced in computers.
Reptiles didn't use language tokens, that's for sure. We don't have reptilian brains anyway; it's just that part of our brain architecture evolved from a common ancestor. The stuff that might function somewhat similarly to an LLM is most likely in the neocortex. But that's for neuroscientists to figure out, not computer scientists. Whatever the case is, it had to have evolved. LLMs are intelligently designed by us, so we should be a little cautious in making that analogy.
"Consciousness requires instincts in order to prioritize the endless streams of information. "
What if "instinct" is also just (pretrained) model weight?
The human brain is very complex, far from understood, and definitely does NOT work like an LLM. But it likely shares some core concepts. Neural networks were inspired by brain synapses, after all.
> What if "instinct" is also just (pretrained) model weight?
Sure - then it will take the same amount of energy to train as our reptilian and higher brains took. That means trillions of real-life experiences over millions of years.
Not at all, it took life hundreds of millions of years to develop brains that could work with language, and took us tens of thousands of years to develop languages and writing and universal literacy. Now computers can print it, visually read it, speech-to-text transcribe it, write/create/generate it coherently, text-to-speech output it, translate between languages, rewrite in different styles, explain other writings, and that only took - well, roughly one human lifetime since computers became a thing.
Information is a loaded word. Sure, based on our physical theories you can think of the world that way, but information is what's meaningful to us amongst all the noise of the world: meaningful for goals like survival and reproduction, inherited from our ancestors. Nervous systems evolved to help animals decide what's important to focus on. It's not a premade data set; the brain makes it meaningful in the context of its environment.
It depends on the goal: epicycles don't tell you about the nature of heavenly bodies, but they do let you keep an accurate calendar, for a reasonable definition of accurate. I'm not sure whether I need a deep understanding of intelligence to gain economic benefit from AI.
My first answer was a bit hasty, so let me try again:
We are clearly a product of our past experience (in LLMs this is called our datasets). If you go back to the beginning of our experiences, there is little identity, consciousness, or ability to reason. These things are learned indirectly (in LLMs this is called an emergent property). We don't learn indiscriminately: evolved instinct, social pressure, and culture guide and bias our data consumption (in LLMs this is called our weights).
I can't think of any other way our minds could work; on some level they must function like an LLM, language perhaps supplemented with general data, but the principle being the same. Every new idea has been an abstraction or supposition of someone's current dataset, which is why technological and general societal advancement has not been linear but closer to exponential.
Genes encode a ton of behaviors, you can't just ignore that. Tabula rasa doesn't exist among humans.
> If you go back to the beginning of our experiences, there is little identity, consciousness, or ability to reason.
That is because babies' brains aren't properly developed. There is nothing preventing a fully conscious being from being born; you see that among animals, etc. A newborn foal is a fully functional animal, for example. Genes encode the ability to move around, identify objects, follow other beings, avoid collisions, etc.
> Genes encode a ton of behaviors, you can't just ignore that.
I'm not ignoring that, I'm just saying that in LLMs we call these things weights. And I don't want to downplay the importance of weights; it's probably a significant difference between us and other hominids.
But even if you considered some behaviors to be more akin to the server, interface, or preprocessing in LLMs, it still wouldn't detract from the fact that the vast majority of the things that make us autonomous, logical, sentient beings come about through a process that is very similar to the core workings of LLMs. I'm also not saying that all animal brains function like LLMs, though that's an interesting thought to consider.
Look at a year-old baby: there is no logic, no reasoning, no real consciousness, just basic algorithms and data input ports. It takes ten years of data sets before these emergent properties start to develop, and another ten years before anything of value can be output.
I strongly disagree. Kids, even infants, show a remarkable degree of sophistication in relation to an LLM.
I admit that humans don’t progress much behaviorally, outside of intellect, past our teen years; we’re very instinct driven.
But still, I think even very young children have a spark that’s something far beyond rote token generation.
I think it’s typical human hubris (and clever marketing) to believe that we can invent AGI in less than 100 years when it took nature millions of years to develop.
Until we understand consciousness, we won’t be able to replicate it and we’re a very long way from that leap.
Humans are not very smart, individually, and over a single lifetime. We become smart as a species in tens of millennia of gathering experience and sharing it through language.
What LLMs learn is exactly the diff between primitive humans and us. It's such a huge jump that a human alone can't make it. If we were smarter we would have figured out the germ theory of disease sooner, as we were dying from infections.
So don't praise the learning abilities of little children; without language and social support they would not develop very much. We develop not just through our DNA and direct experiences but also by assimilating past experiences through language. It's a huge cache of crystallized intelligence from the past, without which we would not rule this planet.
That's also why I agree LLMs are stalling: we can't quickly scale the organic text inputs by a few more orders of magnitude. So there must be a different way to learn, and that is by putting AI in contact with environments and letting it take its own actions and learn from its mistakes, just like us.
I believe humans are "just" contextual language and action models. We apply language to understand, reason, and direct our actions. We are GPTs with better feedback from the outside, optimized for surviving in this environment. That explains why we need so few samples to learn: the hard work has been done by many previous generations, and brains are fit for their own culture.
So the path forward will involve creating synthetic data and then somehow separating the good from the bad. This will be task-specific. For coding, we can execute tests. For math, we can use theorem provers to validate. But for chemistry we need simulations or labs. For physics, we need a particle accelerator to get feedback. For games, we can just use the score; that's super easy, and it has already led to superhuman-level players like AlphaZero.
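A minimal sketch of that generate-then-verify loop for the coding case: generate_candidates() is a hypothetical stand-in for an LLM sampling synthetic solutions, the function names and tests are invented for the example, and domain-specific verifiers (theorem provers, simulations, game scores) would slot in where run_tests does.

    def generate_candidates() -> list[str]:
        # Hypothetical synthetic solutions for "absolute value"; one is wrong on purpose.
        return [
            "def my_abs(x):\n    return x if x >= 0 else -x",
            "def my_abs(x):\n    return x",  # fails verification below
        ]

    def run_tests(source: str) -> bool:
        """Verifier for the coding domain: execute the candidate and check known cases."""
        namespace: dict = {}
        try:
            exec(source, namespace)  # only ever do this in a sandbox with untrusted code
            fn = namespace["my_abs"]
            return all(fn(x) == abs(x) for x in (-3, 0, 7))
        except Exception:
            return False

    candidates = generate_candidates()
    verified = [src for src in candidates if run_tests(src)]  # keep only what passes
    print(f"kept {len(verified)} of {len(candidates)} synthetic samples")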
Each topic has its own slowness and cost. It will be a slow grind ahead. And it can't be any other way; AI and AGI are not magic. They must use the scientific method to make progress, just like us.
Humans do more than just enhance predictive capabilities. It is also a very strong assumption that we are optimised for survival in many or all aspects (it's even unclear what that means). Some things could be totally incidental and not optimised. I find appeals to evolutionary optimisation very tricky and often fraught.
Have you ever met a baby? They're nothing like an LLM. For starters, they learn without using language. By one year old they've taught themselves to move around the physical world. They've started to learn cause and effect. They've learned where "they" end and "the rest of the world" begins. All an LLM has "learnt" is that some words are more likely to follow others.
I find it hard to believe you're not trying to seem edgy. Is it difficult to imagine that many Twitter users' experiences will be affected by Elon Musk's involvement with the company?
> messenger completely died when it started forcing you to use it on mobile and people simply started using other message services because half their friends wouldn't respond until they were at computers.
To counter this with my own personal anecdote, in my social circles this is not true at all, and messenger has become even more of the messaging standard in recent times.
A second anecdote: every single communication for me is over FB Messenger or iMessage. I occasionally get SMS messages, but I refuse to use SMS long-term with my Android friends; you just miss out on too many features.
Encryption, authentication, delivery notifications, read receipts, typing indicators, group messaging, hyperlink previews, third party app integrations, (some) emojis, etc.
The experience of using SMS is spartan and incomparable to modern messaging applications like iMessage, WhatsApp, or Facebook Messenger.
Plus, if you lose your phone and you don't have your SMS messages backed up, they are pretty much lost forever. With Messenger, because messages are not device-dependent, they are still there. You can access them from any device (mobile or desktop) that is connected to the net.
I haven't used SMS in ages... I can't even remember the last time I sent or received a text.
Definitely. There are many friends and family members whose phone numbers I've lost track of but whom I can easily contact via Messenger. That's amazingly useful.
It's very hard to accidentally give a random guy access to your Gmail inbox. Doing so would require you to opt in to a dialog clearly and explicitly stating that you are giving said permission to a developer.
You're dismissing the observation that users habitually click accept or continue when prompted with a dialog. Sure, you can blame this on the users being lazy, but it becomes ingrained when everything they access has a dialog, especially when it contains terms of service that would be twenty pages long in paper form (slight exaggeration). I cannot even count the number of times I've had conversations with people when observing this behavior. So many users inherently trust that what they're agreeing to is not only safe but widely accepted. After all, why else would the service be so popular and have so many users: "Someone out there had to make sure this was legit before me."
I'm suggesting a UI feature like the one GitHub has when deleting repos: type the full name of the repo, or in this case maybe type "I UNDERSTAND", in order to proceed. This could be a browser plugin, maybe...
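For what it's worth, the core of that check is tiny. Here is a generic sketch in plain Python (not an actual browser plugin, and the prompt wording is invented) of the "type the exact phrase to proceed" pattern:

    def confirm_dangerous_action(expected: str = "I UNDERSTAND") -> bool:
        """Proceed only if the user types the exact confirmation phrase."""
        typed = input(f'Type "{expected}" to grant this app access to your account: ')
        return typed.strip() == expected

    if __name__ == "__main__":
        if confirm_dangerous_action():
            print("Permission granted.")
        else:
            print("Aborted: phrase did not match.")

A browser-plugin version would inject the same check into the consent page and keep the real "Allow" button disabled until the phrase matches.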
Access to my personal email would be pretty much security game over for me as far as I can tell. Other people might feel otherwise.