Hacker News | integralid's comments

I always type "please continue". I guess being polite is not a good idea.

Always seems strange to me that people say "please" and "thank you" to LLMs.

It actually works really well if you suck up to the AI.

"Please do x"

"Thank you, that works great! Please do y now."

"You're so smart!"

lol. It really works though! At least in my experience, Claude gets almost hostile or "annoyed" when I'm not nice enough to it. And I swear it purposefully acts like a "malicious genie" when I'm not nice enough. "It works, exactly like you requested, but what you requested is stupid. Let me show you how stupid you are."

But, when I'm nice, it is way more open, like "Are you sure you really want to do X? You probably want X+Y."


What really works? Sycophancy? I think that is a bug, not a feature.

>What type of developer chooses UX and performance over security? So reckless.

Initially I assumed this was sarcastic, but apparently not. UX and performance are what programmers are paid for! Making sure the UX is good is one of the most important parts of a programmer's job.

Security, on the other hand, is a moving target: a goal, something that can never be perfect, only "good enough" (if the NSA wants to hack you, they will). You make it sound like installing third-party packages is basically equivalent to a security hole, while in practice the risk is low, especially if you don't overdo it.

Wild to read extreme security views like that while, at the same time, there are people here running unconstrained AI agents with --dangerous-skip-confirm flags and seeing nothing wrong with it.


Installing 3rd party packages the way Node and Python devs do regularly _is_ a security hole.

We definitely agree on that. Fortunately some of the 600+ comments here include suggestions of what to do about it.
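One mitigation that comes up (an illustration of the general idea, not a quote from the thread): pin exact versions and hashes so the installer refuses anything that doesn't match. With pip, a hash-pinned requirements file looks roughly like this (the hash below is a placeholder, not a real digest):

```text
# requirements.txt (install with: pip install --require-hashes -r requirements.txt)
requests==2.31.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With --require-hashes, pip rejects any downloaded artifact whose digest isn't listed, which blocks silent substitution of a tampered release.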

Even more wild to read that sarcasm about "removing locks from doors for 87% speedup" is considered extreme...

And yes, we agree that running unconstrained AI agents with --dangerous-skip-confirm flags and seeing nothing wrong with it is insane. Kind of like just advertising for burglars to come open your doors for you before you get home - yeah, it's lots faster to get in (and to move about the house with all your stuff gone).


First of all, I think your comment is against HN guidelines.

And I expect the GP actually has a lot of experience in mathematics - they are exactly right, and this is how professional mathematicians see math (at least most of them, including the ones I interact with).


Engineers, maybe. Not the case with mathematicians.

>This is literally the same thing as

No.

>You can

Not right now, right? I don't think current AI automated proofs are smart enough to introduce nontrivial abstractions.

Anyway, I think you're missing the point of the parent's posts. Math is not proofs. Some time ago, the four color theorem "proof" was very controversial, because it was a computer-assisted exhaustive check of every possibility, impossible for a human to verify. It didn't bring any insight.

In general, on some level, proofs are not that important for mathematicians. I mean, for example, proofs of the Riemann hypothesis or of P vs. NP would be groundbreaking not because anyone seriously doubts the expected answers, but because we expect the proofs to be enlightening and to use some novel technique.


Search crawlers used to bring people TO your site; LLM bots are used to keep people OUT of your site, because knowledge is indexed and distributed by corporations.

So if your site depends on ads, and the only way for people to see those ads is to visit your site, then yes, you lose.

If your site exists to share information, then the information gets disseminated; whether via an LLM or some browser makes no difference to me.


Those are not the only two options.

Why are you presenting the latter option as if it were mainstream? It's such a small percentage of use cases that it probably isn't even a rounding error.

People who want to disseminate information also want the credit.

I'd still like to know why you are presenting this false dichotomy. What reason do you have for presenting a use case that has fractions of a percentage as if it were a standard use case? What is your motivation behind this?


My only motivation is that it pains me to see smart capable people working on insignificant problems.

Maybe I don't understand the problem as well as I should, and I'm open to hearing what it is you think that I'm missing.

But from my perspective, this is a solution for a non-problem, which in my eyes is a problem itself.


You misunderstand: I am asking what is your motivation for presenting a 0.0001% use case as a 50% use case.

The use case you present is so small it can be ignored as an option, yet you present it as the only other option.


> People who want to disseminate information also want the credit.

This is psychological projection.


> This is psychological projection.

You don't know what that means.

In any case, people who want to disseminate information with credit can do so without standing up a blog (any place that allows posting of comments, such as Reddit, HN, etc).

In the context of this discussion, we're talking about site owners; people who put up a blog.


You don't get attribution for your work if it merely feeds into its training data.

That assumes the AI bots are scraping for training data and not doing simple retrieval/RAG (which would likely provide attribution).

Oh come on. We know they’re doing both. They are scraping in morally/legally dubious ways as well as doing other things.

Yes, but if there are three banana shops around and five banana-addicted people living nearby, the number of bananas available on average per person is not 15.

In other words, if all AI companies need more compute than a single provider can supply, then there's simply not enough of it. So the question "why does everyone partner with everyone?" must have a different answer.
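The banana arithmetic, as a toy sketch (the per-shop stock is made up; the point is that partnerships don't multiply a fixed supply):

```python
# Fixed supply divided among consumers: more shop/customer pairings
# don't create more bananas.
shops = 3
bananas_per_shop = 5   # assumed stock per shop, purely illustrative
people = 5

total = shops * bananas_per_shop   # 15 bananas in total
per_person = total / people        # 3.0 each, not 15

print(per_person)
```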


It's not really "creating more compute" it's just a natural outcome of everyone desperately grabbing whatever becomes available. The dynamics make sense for all parties involved.

Firstly, it's very clear now that everyone is seriously crunched for capacity (like, each of the hyperscalers' backlogs -- i.e. capacity for which payment is committed but not yet satisfied -- is in the double-digit billions).

So as the compute providers bring more capacity online, everyone with demand wants to get a slice of that. Like, why would anyone NOT dive in and try to secure some capacity for themselves? Especially when the rate of capacity growth is constrained by the availability of GPUs and energy and data center buildouts, which is measured in years.

On the flip side, why would the compute providers NOT want multiple customers? It creates competition and drives prices up.

There are likely other forces at play too. For one, none of the parties - the model providers and the compute providers, with some of them like Google being both -- wants to get too dependent on any of the other parties, but they also want to secure a slice of each others' future growth, so they're all partnering with each other. Obviously, Google wants Gemini to win and Microsoft wants Copilot to win, but as a hedge, they'll be happy hosting their competitors' products and taking a cut.

This is partly the origin of the "circular investments" concerns. The scale at which this industry is growing, all these players have enormous mountains of money that they must invest to secure their future, but they are also the only players that can operate at this scale, and so the only place they can invest that money in is each other.


This is significantly safer than shorting, which some people here suggest.

You can't lose money sitting on cash, whereas when shorting, your potential loss is unbounded.
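The asymmetry in toy numbers (entry price is hypothetical): a short's gain is capped at the entry price, but its loss grows without limit as the price rises.

```python
entry = 100  # hypothetical short entry price per share

def short_pnl(price_now: float) -> float:
    """Per-share profit/loss on a short opened at `entry`."""
    return entry - price_now

print(short_pnl(0))     # best case: price goes to zero, gain capped at 100
print(short_pnl(300))   # -200: already exceeds the maximum possible gain
print(short_pnl(1000))  # -900: no floor on the downside
```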


Mistiming the market means missing out on potential gains, which is effectively losing money relative to staying invested. Cash doesn't grow.

Most people are best off investing in index funds and forgetting about it for 10+ years.
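The "forget about it" arithmetic, assuming a purely illustrative 7% average annual return (the rate and principal are made-up numbers, not a forecast):

```python
# Compound growth of an indexed lump sum vs. cash held flat (nominal terms).
principal = 10_000
rate = 0.07   # assumed long-run average return, purely illustrative
years = 10

invested = principal * (1 + rate) ** years
cash = principal  # nominal cash doesn't grow (and inflation erodes it)

print(round(invested, 2))  # roughly doubles over the decade
```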


I disagree. More often than not it's "We know how to solve the problem, and the solution is some linear algebra."

I disagree with both of you.

It's not about linear algebra (which is just used as a way to represent arbitrary functions), it's about data. When your problem is better specified from data than from first principles, it's time to use an ML model.
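A toy sketch of that distinction in plain Python: instead of deriving the relationship from first principles, fit it from samples (the data below is made up, roughly following y = 2x + 1 with noise):

```python
# Ordinary least squares for y ~ a*x + b, learned from data alone.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]  # noisy observations

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form slope and intercept of the least-squares line.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # recovers roughly the generating line
```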


I think what you're expressing is also known as "the Bitter Lesson".

That's vacuously true, as you said, isn't it? I write fish in my shell and then I can save it as a fish script. Worth noting that bash is much more portable and available by default, but if I'm going for portability I go straight to /bin/sh.

Fair point, but for scripting I don't feel fish (or zsh) offers an advantage big enough to justify learning a language with such a narrow scope. Bash is good to know anyway; you don't really get around it. Larger or more complex scripts I write in other languages (depending on the domain and other requirements, I guess). It's also not like I write those scripts in my shell daily, so even if I learned fish or zsh, I'd have to look things up again every time I needed to write something.

Except not everyone uses the bash shell, so it's not really accurate.

