Hacker News | new | past | comments | ask | show | jobs | submit | smrtinsert's comments

On what grounds is there a lawsuit? Hasn't scraping been classified as legal?

Calling someone’s apartment an opium den is potentially libel, and if it results in a material financial impact, you’ve got a lawsuit.

Is it someone's apartment or Airbnb's apartment?

Classifying people's businesses as an "opium den" using a shitty LLM prompt seems like a pretty good way to piss some people off.

I don't necessarily agree with labeling them drug dens. But certainly the hosts showed zero or negative effort in keeping the room clean and suitable to rent. They do deserve some shaming.

At least Google pretended to not be evil for a few years

> The problem is millions of years of evolutionary wiring makes us see it as alive

Maybe for laymen, but I would think most technologists should understand that we're working with the output of what is effectively a massive spreadsheet that is producing a prediction.


The thing with evolutionary wiring is that it doesn't matter if you're a layman or a "technologist". The technologist part is just a small layer on top of very thick caveman/animal instincts and programming.

That's why a technologist can, just as easily as any layman, get addicted to gambling, or behave irrationally when attracted to the opposite sex.


>small layer on top of very thick caveman/animal instincts and programming.

Which is also why marketing and advertising work on EVERYONE. When the AI world puts out a phrase like "prompt engineering", everyone instinctively treats it as something deterministic, despite having some idea of how an LLM works...


The same could be said for your brain.

LLMs are highly intelligent. Comparing them to spreadsheets is reductionist and highly misleading.


>LLMs are highly intelligent

I will tell you why it is not.

Intelligence is understanding low level stuff and using it to reason about and understand high level stuff.

When LLMs demonstrate "highly intelligent" behavior, like solving a complex math problem (high-level stuff), but simultaneously demonstrate that they do not know how to count (low-level stuff that the high-level stuff depends on), it shows that they are not actually "intelligent" and are not "reasoning".


You just invented your own definition of intelligence. I'm pretty sure that strategy could also support the opposite conclusion.

So your problem with the definition is that "I invented it"?

Do you have any rational objection to the definition? If you don't, then I'm afraid you don't have a point.


> "NEVER FUCKING GUESS"

It's very hard to take this post seriously. I can't imagine what harness, if any, they attempted to place on the agent beyond some vibes. This is "move fast and absolutely destroy things" level thinking. That the poster asks for journalists to reach out makes it feel like an any-press-is-good-press publicity grab. Just gross.

The AI era is turning out to be the most disappointing era for software engineering.


This is going to be the most important job going forward: the guy in charge of making sure production secrets are out of CC's reach. (It's not safe for any dev to have them anywhere on their filesystem.)

I'd be interested to learn where those words exist in Cursor's context. My assumption was that it was part of the Cursor agent harness, but it's just as likely it was in the user instructions.

> The AI era is turning out to be the most disappointing era for software engineering.

This has been obvious to me since like 2024. It truly is the worst, most uninspiring era of all time.


As soon as I read that line, I knew everything I needed about the author and his abilities.

Did the author even mention what they thought should have been used instead?

I'm much more interested in pi.dev. It's closer to the bone, or maybe better said: pi.dev is more like loose Lego bricks, while OpenClaw seems like a big Ninjago set.

I was shocked when I saw the guy behind libgdx was also behind pi.dev. Random tech worlds colliding.


Maybe the haphazard, devil-may-care look feels more authentically American.


How true is this? How does a regulated industry confirm the model itself wasn't trained with malicious intent?


Why would it matter if the model is trained with malicious intent? It's a pure function. The harness controls security policies.


Much like a developer can insert a backdoor as a "bug", so can an LLM that was trained to do it.

One way you could probably do it is by identifying a commonly used library that can be misused in a way that would allow some kind of time-of-check to time-of-use (TOCTOU) exploit. Then you train the LLM to use the library incorrectly in this way.
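The check-then-use race the parent describes can be sketched in a few lines. This is a generic illustration, not from any real exploit; the function names and the config-file scenario are hypothetical. The unsafe version checks a path and then opens it as two separate system calls, leaving a window in which the path can be swapped (e.g. for a symlink); the safer version opens first and validates the already-open descriptor, so the check and the use refer to the same object.

```python
import os
import stat


def read_config_unsafe(path: str) -> str:
    # CHECK: verify the path is a regular file...
    if not os.path.isfile(path):
        raise FileNotFoundError(path)
    # ...USE: but by the time open() runs, `path` may have been
    # replaced with a symlink to a sensitive file. This gap is the
    # TOCTOU window an adversarial code suggestion could exploit.
    with open(path) as f:
        return f.read()


def read_config_safer(path: str) -> str:
    # Open first (refusing symlinks where the platform supports it),
    # then validate the already-open file descriptor with fstat: the
    # check and the use now target the same kernel object.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    try:
        if not stat.S_ISREG(os.fstat(fd).st_mode):
            raise OSError(f"not a regular file: {path}")
        with os.fdopen(fd, "r") as f:
            fd = -1  # ownership transferred to the file object
            return f.read()
    finally:
        if fd != -1:
            os.close(fd)
```

A model trained with the kind of intent described above would only need to consistently prefer the first pattern over the second; in review it reads as an ordinary existence check, not a planted race condition.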


Some truth to that. I hear it thrown around the office, and everyone feels obligated to out-agent each other (without actually proving a great use case).

For myself, I don't need autonomous agents. I need a smaller version of Claude Code instead (the MCP client, not the coding agent) that can run on local models under 24B params. I still need to try pi.dev.


At some point virtue signaling is fixing symptoms of the problem. I always had a problem with master/slave terminology; happy to see it gone.


You'll be happy to know that in the context of a "master branch" it never had any connotation of slavery, except in the minds of people who see everything as a question of race.*

Anyway I'm off to listen to the 50th anniversary Dark Side of the Moon remaster. Wait, is "dark" an okay word? I didn't get a master's degree in English

* Parallel ATA on the other hand, yeah, yikes


> virtue signaling is fixing symptoms of the problem

It diminishes the seriousness of the entire anti-racism movement by making it look petty, out of touch, and more interested in creating nuisances than in solving real problems. The San Francisco school board got recalled for doing similar nonsense during COVID, renaming schools and thus showing they weren't serious people.


> by making it look

I have yet to see a good reason to believe that it isn't actually the case.


Can you, perhaps, cite your own pre-2020 writing attesting to the problem, and explaining why it should be considered a problem?

Do you consider that using the name "master" for a branch tends to endorse or normalize slavery, or (even stochastically) increase the amount of slavery that occurs in the world?

If so, how?

If not, why is it actually a problem to reference the concept (even disregarding the evidence that it was not intended to do so)?

