No conceivable harm in what sense? It seems obvious that it is harmful for a user who requests and is granted privacy to then have their private messages delivered to NYT. Legally it may be on shakier ground from the individual's perspective, but OpenAI argues that the harm is to their relationship with their customers and various governments, as well as the cost of the implementation effort:
>For OpenAI, risks of breaching its own privacy agreements could not only "damage" relationships with users but could also risk putting the company in breach of contracts and global privacy regulations. Further, the order imposes "significant" burdens on OpenAI, supposedly forcing the ChatGPT maker to dedicate months of engineering hours at substantial costs to comply, OpenAI claimed. It follows then that OpenAI's potential for harm "far outweighs News Plaintiffs’ speculative need for such data," OpenAI argued.
>> It seems obvious that it is harmful for a user who requests and is granted privacy to then have their private messages delivered to NYT.
This ruling is about preservation of evidence, not (yet) about delivering that information to one of the parties.
If judges couldn't compel parties to preserve evidence in active cases, you could see pretty easily that parties would aggressively destroy evidence that might be harmful to them at trial.
There's a whole later process (and probably arguments in front of the judge) about which evidence is actually delivered, whether it goes to the NYT or just to their lawyers, how much of it is redacted or anonymized, etc.
Understatement of the millennium, but Yarvin has written a lot more than "let's do fascist technocracy!"
I find his writing style wastes a lot of one's time and I disagree with him on nearly everything, but there's no denying that there are many interesting ideas in there.
I absolutely deny that there are any interesting ideas in there.
We've done this, it was called the dark ages and it sucked and we moved past it. Engaging with this pablum in any way is granting it attention and vigor it obviously doesn't deserve.
I agree with the other replies that I've never been able to find any interesting ideas amidst his schlock, but I'm wondering what "many interesting ideas" you see, from your perspective? Just a couple of examples would be useful. It's totally possible I've missed them because I haven't ever been able to engage with his writing.
He's pretty thoughtful about how power is actually leveraged and has interesting insights around these ideas, particularly areas of democratic failure that I think are worth thinking about. I think his solutions are more questionable, but his writing is at least worth engaging with.
I think people just dismiss him out of hand because he's a political enemy.
I'm worried I'm sealioning, but could you possibly point me to one of these thoughtful pieces? It's a lot to wade through for me to try to figure out what you're talking about without any pointers...
I did read a decent amount of his "mencius moldbug" stuff back in the day, and I just wouldn't describe it the way you do in this comment, so I'm wondering what I'm missing.
Hmmm, ok, I read this. Are there any parts of it that you find particularly thoughtful about power or areas of democratic failure that are worth thinking about?
I think we probably just fundamentally disagree here, because to me, this whole thing seems like drivel. Are there gems in there that I'm just not recognizing?
The idea that stuck out to me is even if you repeal Chevron deference and argue Congress should be making laws like it’s supposed to, the outcome will be vague laws which then get interpreted by the courts, pushing the real legislation from administration technocrats that might at least be subject matter experts in the best case to unelected judges that probably don’t know anything.
The symbolic idea of who holds power and who actually holds power in practice are not the same.
There’s also the bit that doge is constrained in ways that make success unlikely (which has now been proved out).
Thanks, that's helpful. I agree that first point is interesting, but it's maybe the most mainstream view he expresses in the article. (The issue of judicial power is pretty commonly discussed by normie liberals as well!) But that's not really a knock against it. So fair enough, thanks for calling that one out!
I think the doge thing is silly though. It didn't fail because it was "constrained in ways that make success unlikely", it failed because: 1. There was obviously just arithmetically not enough money in discretionary spending to make more than a tiny dent in spending, and 2. They never made even the most cursory effort to improve efficiency, and just went with this ideological chainsaw approach. Maybe there's some version of the idea that was (and is) a good one, but it was always doomed to fail as conceived and led.
I never found any of his ideas interesting. Unusual, maybe, but unusual does not mean interesting. I come from a country with a history of autocracy, and it has been an absolute dogma for me to not touch anything autocratic with a six-mile pole.
Well, I hope the rest of the world will now get the memo too, before we need a world war to crystallize the lesson.
13 hours is wild. I bought a house, got married, sold a bunch of stock, and qualified for a bunch of deductions last year. We filed jointly in maybe an hour and a half, including finding all of the paperwork. I'm pretty good with numbers and instructions, so I could see 4-6 being the average.
It was through freetaxusa, maybe handwriting balloons the job a bit? But it looks like only 14% file physically.
Yeah, no idea what's going on in this thread. As far as I can tell, this connotation was just invented for the purposes of- well, I shouldn't guess motivations, but I can't think of any good ones.
Here's the BBC using it [1], CNN [2], The AP [3], The Conversation [4]
"It was the war with Russia that drove the fed to raise interest rates in 2022" sounds like the fed or the US was at war with Russia. Your links 1, 3, 4 mention Ukraine in the same sentence as "war with Russia", which makes it clear that the US and the fed are not at war with Russia. Link 2 talks about a threat of war, not an actual war.
GP's complaint is that it implies that someone other than Russia started the war. I don't think mentioning another party who wasn't responsible should change that.
I think this comes down to whether the chemistry is providing some kind of deep value or is just being used by evolution to produce a version of generic stochastic behavior that could be trivially reproduced on silicon. My intuition is the latter- it would be a surprising coincidence if some complicated electro-chemical reaction behavior provided an essential building block for human intelligence that would otherwise be impossible.
But, from a best-of-all-possible-worlds perspective, surprising coincidences that are necessary to observe coincidences and label them as surprising aren't crazy. At least not more crazy than the fact that slightly adjusted physical constants would prevent the universe from existing.
> My intuition is the latter- it would be a surprising coincidence if some complicated electro-chemical reaction behavior provided an essential building block for human intelligence that would otherwise be impossible.
Well, I wouldn't say impossible: just that BMIs are probably first. Then probably wetware/bio-hardware sentience, before silicon sentience happens.
My point is the mechanisms for sentience/consciousness/experience are not well understood. I would suspect the electro-chemical reactions inside every cell to be critical to replicating those cells' functions.
You would never try to replicate a car without ever looking under the hood! You might make something that looks like a car and seems to act like a car, but has a drastically simpler engine (hamsters on wheels), and designs that support that bad architecture (like making the car lighter) with unforeseen consequences (the car flips in a light breeze). The metaphor transfers nicely to machine intelligence, I think.
I don't see how self-awareness should be supernatural unless you already have supernatural beliefs about it. It's clearly natural- it exists within humans who exist within the physical universe. Alternatively, if you believe that self-awareness is supernatural in humans, it doesn't make a ton of sense to criticize someone else for introducing their own unfounded supernatural beliefs.
I don't think they are saying self-awareness is supernatural. They're charging the commenter they are replying to with asserting a process of self-awareness in a manner so devoid of specific characterization that it seems to fit the definition of a supernatural event. In this context it's a criticism, not an endorsement.
Is it just the wrong choice of word? There's nothing supernatural about a system moving towards increased capabilities and picking up self-awareness on the way; that happened in the natural world. Nothing supernatural about technology improving faster than evolution either. If they meant "ill-defined" or similar, sure.
To me, the first problem is that "self-awareness" isn't well-defined - or, conversely, it's too well defined because every philosopher of mind has a different definition. It's the same problem with all these claims ("intelligent", "conscious"): assessing whether a system is self-aware leads down a rabbit hole toward P-Zombies and Chinese Rooms.
I believe we can mostly elide that here. For any "it", if we have it, machines can have it too. For any useful "it", if a system is trying to become more useful, it's likely they'll get it. So the only questions are "do we have it?" and "is it useful?". I'm sure there are philosophers defining self-awareness in a way that excludes humans, and we'll have to set those aside. And definitions will have varying usefulness, but I think it's safe to broadly (certainly not exhaustively!) assume that if evolution put work into giving us something, it's useful.
>There's nothing supernatural about a system moving towards increased capabilities and picking up self-awareness on the way
There absolutely is if you handwave away all the specificity. The natural world runs on the specificity of physical mechanisms. With brains, in a broad brush way you can say self-awareness was "picked up along the way", but that's because we've done an incredible amount of work building out the evolutionary history and building out our understanding of specific physical mechanisms. It is that work that verifies the story. It's also something we know is already here and can look back at retrospectively, so we know it got here somehow.
But projecting forward into a future that hasn't happened, while skipping over all the details doesn't buy you sentience, self-awareness, or whatever your preferred salient property is. I understand supernatural as a label for a thing simply happening without accountability to naturalistic explanation, which is a fitting term for this form of explanation that doesn't do any explaining.
If that's the usage of supernatural then I reject it as a dismissal of the point. Plenty of things can be predicted without being explained. I'm more than 90% confident the S&P 500 will be up at least 70% in the next 10 years because it reliably behaves that way; if I could tell you which companies would drive the increase and when, I'd be a billionaire. I'm more than 99% confident the universe will increase in entropy until heat death, but the timeline for that just got revised down 1000 orders of magnitude. I don't like using a word that implies impossible physics to describe a prediction that an unpredictable chaotic system will land on an attractor state, but that's semantics.
I think you're kind of losing track of what this thread was originally about. It was about the specific idea that hooking up a bunch of AIs to interface with each other and engage in a kind of group collaboration gets you "self awareness". You now seem to be trying to model this on analogies like the stock market or heat death of the universe, where we can trust an overriding principle even if we don't have specifics.
I don't believe those forms of analogy work here, because this isn't about progress of AI writ large but about a narrower thing, namely the idea that the secret sauce to self-awareness is AIs interfacing with each other and collaboratively self-improving. That either will or won't be true due to specifics about the nature of self-improvement and whether there's any relation between that and salient properties we think are important for "self-awareness". Getting from A to B on that involves knowledge we don't have yet, and is not at all like a long-term application of already settled principles of thermodynamics.
So it's not like the heat death of the universe, because we don't at all know that this kind of training and interaction is attached to a bigger process that categorically and inexorably bends toward self-awareness. Some theories of self-improvement likely are going to work, some aren't, some trajectories are achievable and some not, for reasons specific to those respective theories. It may be that they work spectacularly for learning, but that all the learning in the world has nothing to do with "self awareness." That is to say, the devil is in the details, those details are being skipped, and that abandonment of naturalistic explanation merits analogy to supernatural in its lack of accountability to good explanation. If supernatural is the wrong term for rejecting, as a matter of principle, the need for rational explanation, then perhaps anti-intellectualism is the better term.
If instead we were talking about something really broad, like all of the collective efforts of humanity to improve AI, conceived of as broadly as possible over some time span, that would be a different conversation than just saying let's plug AIs into each other (???) and they'll get self-aware.
>I think you're kind of losing track of what this thread was originally about.
Maybe I am! Somebody posed a theory about how self-improvement will work and concluded that it would lead to self-awareness. Somebody else replied that they were on board until the self-awareness part because they considered it supernatural. I said I don't think self-awareness is supernatural, and you clarified that it might be the undefined process of becoming self-aware that is being called supernatural. And then I objected that undefined processes leading to predictable outcomes is commonplace, so that usage of supernatural doesn't stand up as an argument.
Now you're saying it is the rest of the original, the hive-mindy bits, that are at issue. I agree with that entirely, and I wouldn't bet on that method of self-improvement at 10% odds. My impression was that that was all conceded right out of the gate. Have I lost the plot somewhere?
But how does self-awareness evolve in biological systems, and what would be the steps for this to happen with AI models? Just making claims about what will happen without explaining the details is magical reasoning. There's a lot of that going on the AGI/ASI predictions.
Exactly, I almost referenced the Underpants Gnome meme in my reply. I would call it basically supernatural, or in an important sense anti-intellectual if the defense of it is based on refusing to explain as a matter of principle.
But perhaps Underpants Gnome is the clearest way of drawing attention to the missing step.
Given that we have no freaking clue of where self awareness comes from even in humans, expecting a machine to evolve the same capability by itself is pure fantasy.
I find I'm willing to explore a minecraft world or puzzle through a nethack dungeon that nobody bothered to create. You could argue that humans made the biomes or defined the layout constraints, but humans also supplied the training data for an LLM. I guess it comes down to whether the art is any good, with procedural generation being mostly irrelevant? But perhaps a book is different
I don't think I'd want to read a novel that was generated by an algorithm, but I might be up for a Choose Your Own Adventure style game, which might be a better analogy to Minecraft or nethack.
I mean the difference with Minecraft is each part that is procedurally generated was made by a human or involved human input into the design decisions.
Unless you are suggesting Notch was a generative AI model, he made Minecraft.
What? That's not true at all, sans structures. Minecraft worlds are infinite and unique, using randomly generated noise textures as the basis. Nobody "made" each "part".
And arguing that a human tweaking noise parameters is somehow more creative than humans distilling their entire knowledge and cultural repertoire into a machine, then working with that to produce literature with a guided hand seems quite silly.
But someone did design a diamond block and a grass block and program their properties and model them and add them to the procedural generation system with rules on how they should be placed.
An LLM would create a block from whole cloth, along with how it works and how it would be randomly generated. That's why the current MinecraftGPT doesn't have any consistency if you turn around 360 degrees: everything, including how it works, is being generated on the fly. Once you generate an actual Minecraft world, how it works and what it looks like is static, and why it works the way it does was designed entirely by people.
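To make that contrast concrete, here is a minimal sketch (my own toy example, not Minecraft's actual terrain code; the block names and thresholds are invented): with seeded procedural generation, the block at any coordinate is a pure function of the seed and the coordinates, so looking at the same spot twice always gives the same answer, whereas a model generating the scene on the fly has no such fixed function to return to.

    import hashlib

    # Toy sketch, not Minecraft's real algorithm: the world is a pure
    # function of (seed, x, z), so revisiting a coordinate is consistent.
    def block_at(seed: int, x: int, z: int) -> str:
        # Hash the seed and coordinates into a repeatable pseudo-random byte.
        digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
        height = digest[0] % 64  # pretend "terrain height" in the range 0-63
        if height < 20:
            return "water"
        elif height < 50:
            return "grass"
        return "stone"

    seed = 12345
    print(block_at(seed, 10, -3))  # some block type
    print(block_at(seed, 10, -3))  # the same block, every time you look

Real engines use smoother noise (e.g. Perlin-style) rather than a hash, but the determinism is the point: once the seed is fixed, the world is fixed.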
Not quite- from a strictly financial perspective, it means we should care 0.002% as much as we care about an intervention that doubles the GDP or eliminates 100% of it. Neither exists, so we're better off comparing to other theft- this is about 15% of the figure for retail shrink and 50% of reported personal theft, so this suggests we should care proportionally.
But I don't know about the strict financial analysis. I'm pretty sure it would tell us to have negative care about a serial killer that targets the homeless.