
For all the fears of AGI, these are the more concrete nefarious uses we can actually reason about. It is a point I often make that we don't need AGI for AI to become deeply disturbing in its potential uses.

The other point is that, technically, this AI is not "unaligned". It is doing exactly what the operator requests.

The implication is that humanity suffers in either scenario: either we, by our own agency, control power we are not prepared to manage, or we are managed by power we cannot control.



> It is a point I often make that we don't need AGI for AI to become deeply disturbing in its potential uses.

But we don't need AI or LLMs at all for the above scenario. Companies don't currently pry into your e-mails to make hiring decisions, but they could (ignoring laws) do it if they wanted. No LLM or AI necessary.

So why would the existence of AIs or LLMs change that?

If they wanted to use the content of your e-mails against you, they don't need an LLM to do it.


Running an authoritarian police state is risky because of all the people involved in it. It's also massively expensive to keep all those people snooping, and you have to take them out and kill them on occasion because they learn too much.

But wait: you can just dump that information into a super AI computer and get reliable-enough results, no breaks needed and little to no risk of the computer rising up against you. Sounds like a hell of a deal.

Quantity is a quality in itself.
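
To make the economics concrete, here is a minimal sketch of the kind of bulk-screening pipeline being described, assuming the OpenAI Python client and a hypothetical load_emails() helper; the point is not the classifier's accuracy but that the marginal cost per snooped inbox drops to almost nothing:

    # Hypothetical sketch only: mass-screening email with an LLM.
    # Assumes the OpenAI Python SDK; load_emails() is made up.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def flag(body: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Label this email LOYAL or DISLOYAL. Reply with one word."},
                {"role": "user", "content": body},
            ],
        )
        return resp.choices[0].message.content.strip()

    for sender, body in load_emails():  # hypothetical mailbox dump
        print(sender, flag(body))

No informants to pay, no one to silence; the "quantity" above is just an API bill.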


Because it's now far cheaper and more scalable, and if they can get away with it, it saves them tons of money. Note: I don't think companies are likely to do this, but being able to do it without AI is not a sufficient reason to dismiss the possibility. It's the same reason people who wouldn't steal DVDs from a store will pirate movies online: it's much harder to get caught and easier to do, so the new way of watching movies for free became popular while the previous method did not.


I feel like the backlash against Stable Diffusion had the opposite effect on visibility. It revealed that thousands of people wanted a way to produce unique art in the styles of living artists; some of those people might otherwise have gone to the artist's Patreon, or to a piracy site that scraped Patreon. Either way, they're far less visible when they're only consuming the result.

To some artists, AI-generated images in their styles amount to "productive piracy." Unlike torrenting, the act is often out in the open, since users tend to share the results online. I'm not sure this phenomenon has happened before; with teenagers pirating Photoshop, it's impossible to tell at a glance whether the output came from a pirated copy.


Whenever we get to see behind the corporate veil, we find that companies often don't abide by the law. How many companies failed this year while hiding nefarious activities?

Also, what types of behavior did we get a glimpse of from the Twitter Files?

Aren't there constant lawsuits over bad behavior by companies, especially around privacy?

So yes, we are talking about behavior that already exists, but the concern is that companies now get orders of magnitude more power to extend such bad behavior.


> Also, what types of behavior did we get a glimpse of from the Twitter Files?

Can you actually explain the types of bad behavior? The rhetorical question about The Twitter Files somehow being a groundbreaking exposé of bad behavior doesn't match anything I've seen. Most of what was cited was essentially a social media company trying to enforce its rules.

Might want to read up on the latest developments there. Several journalists have debunked a lot of the key claims in the "Twitter Files". Taibbi's part was particularly egregious, with some key numbers he used being completely wrong (e.g. claiming millions when the actual number was in the thousands, exaggerating how Twitter was using the data, etc.).

Even Taibbi and Elon have since had a falling out and Taibbi is leaving Twitter.

If Elon Musk so famously and publicly hates journalists for lying, spinning the truth, and pushing false narratives, why would he enlist journalists for "The Twitter Files"? The answer is in plain view: He wanted to take a nothingburger and use journalists to put a spin on it, then push a narrative.

Elon spent years saying that journalists can't be trusted because they're pushing narratives, so when Elon enlists a select set of journalists to push a narrative, why would you believe it's accurate?

> So yes, we are talking about behavior that already exists, but the concern is that companies now get orders of magnitude more power to extend such bad behavior.

No, they don't. The ultimate power is being able to read the e-mails directly. LLMs abstract that behind a lower-confidence model that is known to hallucinate answers when the underlying content doesn't contain a satisfactory one.


That is not evidence against bad behavior; that is more evidence of bad behavior.

I agree that Musk has not honored his original intent. He has already broken the transparency pledge and free-speech principles in many ways.

Yet these were already broken under the previous ownership. We are simply seeing that continue.


Because the law as it stands today is untested for AI and LLMs; and because it's untested, AI-based products and companies frequently see this kind of use as something they can do without legal ramifications, or at least as nothing that blocks their products from being used this way.



