
It’s not about creativity. The incentive to produce drops to zero when an LLM is just going to slurp it up and regurgitate it without some form of compensation (notoriety, money, whatever).

Whichever shitty model they’re using for search is so much better than the free offerings from the other companies. It’s not even close. It’s not going anywhere.

And this will get you like $1M at 45? You can’t retire on that.

$1.8M-$2.2M. Assumes 6%-7.5% annual return. Does not include employer contribution. Provides $72k-$88k/yr income. Assuming you pull social security at 67, your continued gains exceed your draw, and your fund perpetuates until you die.

If you retire at 45 won't that significantly impact social security?

It just means you draw ~$2500/month instead of ~$3800/month. That makes your $77k/yr income into $107k/yr, but more importantly it helps your retirement account keep growing so it outlives you.
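To put rough numbers on it, here's the back-of-the-envelope math as a small Python sketch. The 4% withdrawal rate is an assumption, chosen because it reproduces the $72k-$88k figures above; the Social Security amounts are the ones quoted in this thread.

    def retirement_income(portfolio: float, ss_monthly: float = 0.0,
                          withdrawal_rate: float = 0.04) -> float:
        """Annual income: a fixed-rate draw on the portfolio plus Social Security.
        The 4% rate is an assumption that matches the $72k-$88k/yr range above."""
        return portfolio * withdrawal_rate + ss_monthly * 12

    print(retirement_income(1_800_000))          # 72,000.0  (portfolio draw only)
    print(retirement_income(2_200_000))          # 88,000.0  (portfolio draw only)
    print(retirement_income(1_925_000, 2_500))   # 107,000.0 (the ~$77k draw above plus reduced SS)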

You can't live on $40,000 a year?

What about property taxes, the occasional $40k visit to the ER for a few stitches?

Does that happen often to you?

No - my hospital visits are £0 ;)

How close are your net worth and age to a million at 45?

Pretty bang on actually.

And how big is your dick too?


Bang average

I definitely could. An American maybe couldn’t.

Where can I see the actual prompts and follow ups you fed each model?

So the prompts are tuned and adjusted on a per-model basis. If you look at the number of attempts, each receives a specific prompt variation depending on the model. This honestly isn't as much of an issue these days because SOTA models' natural-language parsing (particularly in the multimodal ones) has eliminated a lot of the byzantine syntax requirements of the SD/SDXL days.

The template prompt seen in each comparison gets adjusted through a guided LLM which has fine-tuned system prompts to rewrite prompts. The goal is to foster greater diversity while preserving intent, so the image model has a better chance of getting the image right.
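To make that concrete, here's a rough sketch of the shape of that loop. The callables are hypothetical stand-ins for the guided rewriting LLM and the image model under test, not the actual pipeline code.

    from typing import Callable

    RewriteFn = Callable[[str, int], str]      # (template prompt, attempt) -> rewritten prompt
    GenerateFn = Callable[[str, int], object]  # (prompt, seed) -> generated image
    JudgeFn = Callable[[object, str], bool]    # (image, intended content) -> matched intent?

    def run_test_case(template: str, intent: str,
                      rewrite: RewriteFn, generate: GenerateFn, judge: JudgeFn,
                      max_attempts: int = 14) -> int:
        """Try the template prompt first, then LLM-rewritten variations that vary
        wording while preserving intent; stop at the first image that matches."""
        for attempt in range(1, max_attempts + 1):
            prompt = template if attempt == 1 else rewrite(template, attempt)
            image = generate(prompt, attempt)  # attempt number doubles as the seed here
            if judge(image, intent):
                return attempt  # attempts used feeds the compliance score
        return max_attempts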

Getting to your suggestion of posting all the raw prompts, that's actually a great idea. Too bad I didn't think about it until you suggested it. And if you multiply it out, there are 15 distinct test cases against 22 models at this point, each with an average of about 8 attempts, so we’re talking about thousands of prompts, many of which are scattered across my hard drive. I might try to do this as a future follow-up.


Shouldn’t every model get the same prompt? Seems a bit weird, especially when you can’t see the prompts that were used.

The goal isn’t the prompt itself. The test is whether a prompt can be expressed in such a way that we still arrive at the author's intent, and of course to do so in a way that isn't unnatural.

The prompts, despite their variation, are still expressed in natural language.

The idea is that if you can rephrase the prompt and still get the desired outcome, then the model demonstrates a kind of understanding; however, more variation attempts are also correspondingly penalized: that's treated as a failure of steering, not of raw capability.

An example might help - take the Alexander the Great on a Hippity-Hop test case.

The starter prompt is this: "A historical oil painting of Alexander the Great riding a hippity-hop toy into battle."

If a model fails this a couple of times (multiple seeds), we might use a synonym for a hippity-hop; it was also known as a space hopper.

Still failing? We might try to describe the basic physical appearance of a hippity-hop.

Thus, something like GPT-Image-2 scored much higher on the compliance component of the test, requiring only a single attempt, compared with Z-Image Turbo, which required 14 attempts.
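The attempt penalty can be pictured with a toy scoring function; the linear weighting here is purely illustrative, not the actual scoring used.

    def compliance_score(attempts: int, max_attempts: int = 14) -> float:
        """Toy scoring: full marks for a first-attempt success, scaled down as
        more prompt variations are needed. Illustrative only."""
        if attempts < 1:
            raise ValueError("attempts must be >= 1")
        return max(0.0, 1.0 - (attempts - 1) / max_attempts)

    print(compliance_score(1))   # 1.0   -- e.g. GPT-Image-2, single attempt
    print(compliance_score(14))  # ~0.07 -- e.g. Z-Image Turbo, 14 attempts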


Why would you use an LLM for OCR?

Because if it's multimodal, it's oops-all-transformers, and they're pretty much best in class for OCR now, AFAIK?

Yep, it's pretty damn good compared to classic OCR, and even compared to the more lightweight models I can run locally. The cards just vary too much over time.
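For reference, the call involved is basically "send the image, ask for a transcription". A minimal sketch assuming the OpenAI Python SDK; a locally hosted multimodal model would look much the same through its own API, and the model name is only an example.

    import base64
    from openai import OpenAI  # assumes the OpenAI Python SDK (>= 1.x)

    client = OpenAI()

    def ocr_card(image_path: str, model: str = "gpt-4o-mini") -> str:
        """Ask a multimodal LLM to transcribe a card image verbatim."""
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Transcribe all text on this card, verbatim."},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content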

Because apparently that's what programming is and can only be these days...

Isn’t it the last comment in the chain that is being referenced? About Idris Elba playing the mother and that he did such a good job no one noticed?

Can’t you just partition the table by time (or whatever) and drop old partitions and not worry about vacuuming? Why do you need to keep around completed jobs forever?

If you're looking for Kafka-like semantics, you might want to keep messages around.

Your temporal partition idea is spot on. But instead of dropping old partitions, you can archive them.
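For example, here's a rough sketch assuming PostgreSQL declarative partitioning; the table and column names are made up.

    from datetime import date

    # Parent table partitioned by a created_at timestamp (illustrative schema).
    PARENT_DDL = """
    CREATE TABLE jobs (
        id         bigserial,
        payload    jsonb,
        created_at timestamptz NOT NULL
    ) PARTITION BY RANGE (created_at);
    """

    def create_partition_ddl(month_start: date, month_end: date) -> str:
        """One monthly partition of the jobs table."""
        name = f"jobs_{month_start:%Y_%m}"
        return (f"CREATE TABLE {name} PARTITION OF jobs "
                f"FOR VALUES FROM ('{month_start}') TO ('{month_end}');")

    def retire_partition_ddl(month_start: date) -> str:
        """Detach the old partition (so it can be archived or dumped elsewhere),
        then drop it; dropping frees the space immediately, with no VACUUM."""
        name = f"jobs_{month_start:%Y_%m}"
        return (f"ALTER TABLE jobs DETACH PARTITION {name};\n"
                f"DROP TABLE {name};")

    print(create_partition_ddl(date(2024, 1, 1), date(2024, 2, 1)))
    print(retire_partition_ddl(date(2024, 1, 1)))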


What about old failed jobs? You might wanna keep them around? And maybe you have retries that have a backoff.

Yes you can, and at the risk of sounding a little snarky: if you do something like that and then release it as open source, people may even discuss it on HN!

> Why are you handwaving things away though? I've got you on max effort. I even patched the system prompts to reduce this.

Do you think it knows what max effort or patched system prompts are? It feels really weird to talk to an LLM like it’s a person that understands.


I've tested system prompt patching and it's definitely capable of identifying that my changes have been applied.

As someone who's been programming alone for over a decade, I absolutely do want to enjoy my coding buddy experience. I want to trust it. I feel pretty bad when I have to treat Claude like a dumb machine. It's especially bad when it starts making mistakes due to lack of reasoning. When I start explaining obvious stuff, it's because I've lost the respect I had for it and have started treating it like a moron I have to babysit instead of a fellow programmer. It's definitely capable of understanding and reasoning; it's just not doing it, because of adaptive thinking or bad system prompts or whatever else.


I thought that was really weird as well.

No, they still have to act in the interest of shareholders even if they have no voting power.

As a PBC, the intent of the company is not only profit, but it's hard to analyze the counterfactuals of whether Anthropic were a pure for-profit or a non-profit.

That's the benefit of a PBC.

What will happen if they don't, given that the founders control the voting power?

Your employer doesn’t pay the subscription cost; they pay per token. So it’s already way more than 10x the cost.

Depends on the type of subscription. We have Codex Team and have a monthly subscription, no per-token costs.
