Hacker News | senko's comments

It sounds like you went in deep for a while, and then rebounded. Good for you (no sarcasm, I mean it).

We should all find little joys in our life and avoid things that deaden us. If AI is that for you, I'd say you made a good decision.


> I can't stand Gnome. [...] Nah, no thanks.

Good thing there are many other Linux desktop environments to choose from: you already mentioned XFCE, MATE, and KDE!

I personally like Gnome, I'm happy with the out-of-the-box experience (zero extensions), and I'm glad they don't try to be everything to everyone.


How do rights/contract work in case of anthologies?

As I understand it, the "normal" workflow is: as the author, you get an advance from a publisher, and once the book is published and revenue exceeds the advance, you get royalties.

You mention you sign the contract with the authors and pay them, which sounds like a one-time deal.

If it's not confidential, how does this work, and how do original publishers (if any) figure into this?


Good question. The original publishers (the magazines, in this case) set the terms for how long an exclusivity period they purchase with first print rights. It's usually a year, but many magazine contracts have a carve-out for "year's best" anthologies like mine.

I deal directly with the authors, not the original publishers. I pay a fixed rate for reprint rights, which is a one-time deal. This is mostly because pro rata royalties on an anthology are a pain to calculate (I've done this before), but also because it's unlikely for me to make back the money I spend on anthology creation, so if there were ever royalties they would be tiny in any case.

There's not much money in short fiction -- many authors tend to use it as a stepping stone to novels, or just as a hobby.


For every complex problem, there is an answer that is clear, simple, and wrong.

There are a number of other factors that (might have) contributed to a greater or lesser extent:

* rush to capture users and get acquired (the buyer can worry about profitability)

* race to the bottom by multiple competitors (you might want to be profitable but can't command a high price because competitors' prices are artificially low)

* ignoring costs that were rising faster than anticipated (wages, cloud costs, etc)

... and probably many more.

Not saying you're completely wrong, but ZIRP is just part of the picture.


I would argue that everything you’ve listed is just downstream of ZIRP.


I would not agree, as these sorts of things were common before the latest ZIRP era:

* Uber IPOed in 2019, had a loss of $8.5b that year; interest rates were around 2%

* YouTube was acquired by Google for $1.65B in 2006, it lost ~$350m in the year before and the entire music industry was suing it; interest rates were around 4%

* Facebook bought Instagram for $1b in 2012, when it had no revenue and no plan for how to get any; this was smack in the middle of the previous ZIRP cycle, but I don't think anyone would say Instagram wasn't a huge success, either for the founders or for Facebook

I would agree ZIRP fuels those things (to unhealthy levels), but not that it's always the root cause.


> I would agree ZIRP fuels those things (to unhealthy levels), but not that it's always the root cause.

If you were to try to set a house on fire with just a lighter in your hands, you would not succeed. If you have a lighter and a tank of gasoline, you probably would. ZIRP was the fuel; the lighter and your will are the "root causes". But with no fuel, no fire.


>ZIRP was the fuel; the lighter and your will are the "root causes". But with no fuel, no fire.

But the "Z" in "ZIRP" is literally zero% interest so your reply doesn't seem to address the gp's counter-examples of >0%. Other examples of non-zero% interest rate time periods include 1990s high-interest rates of +5% with Amazon in 1994 losing money for 7 years, PayPal 1998 losing money for 3+ years, Google 1998 losing money for 3+ years.

Those counter-examples mean the simplistic narrative of "ZIRP is The Reason" does not explain everything. Those non-profitable companies were immediately scaling out to win the market and didn't wait for the 2008 ZIRP era to do it.

Today, OpenAI (and other AI startups) are losing billions and expect to lose more billions in the upcoming years even though the current interest rate is ~4%.


>Today, OpenAI (and other AI startups) are losing billions and expect to lose more billions in the upcoming years even though the current interest rate is ~4%.

AI stuff is a little different. If OpenAI and the others hit AGI or anything remotely near it, the potential returns are, in theory, effectively unlimited. So passing up 4% to invest in a company that could return 10000% makes sense.

However, passing up 4% for a company growing 15% in its field with a 10% profit margin means that if only 1 in 5 such companies survives, you have lost money, so investors pull back.
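A quick back-of-the-envelope sketch of that expected-value argument. The specific numbers (5 startups, a 10-year horizon, total loss for the non-survivors) are my own hypothetical assumptions, not the commenter's:

```python
# Hypothetical portfolio: back 5 startups with 1 unit each; 4 go to
# zero, the one survivor compounds at 15% a year for 10 years.
# Alternative: put all 5 units in a risk-free 4% instrument.
years = 10
survivor_value = 1 * (1.15 ** years)   # the single winner, ~4.05 units
portfolio_value = survivor_value        # the other 4 bets return nothing
risk_free_value = 5 * (1.04 ** years)   # same 5 units at 4%, ~7.4 units

print(round(portfolio_value, 2))
print(round(risk_free_value, 2))
# The risky portfolio ends up below the boring 4% baseline, which is
# why modest upside plus high failure rates pushes investors away.
```

Under these assumptions the survivor's 15% growth isn't nearly enough to cover four total losses, which is the comment's point: only outsized (AGI-scale) upside justifies the risk at 4% rates.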


In the interest of clearly communicating:

I used "fuel" in the meaning "to make people's ideas or feelings stronger, or to make a situation worse", not "a substance that is burned to provide heat or power", see https://dictionary.cambridge.org/dictionary/learner-english/...

I did provide two quick examples of these effects happening in the absence of ZIRP, so it is clearly not always required.

Sadly, neither a tank of gasoline nor ill intent is required to set a house on fire - these things can happen by accident, and often a single spark is enough.

There's plenty of literal fuel besides gasoline, and there's plenty of "startup growth at all costs" fuel besides ZIRP.


I would consider the few unicorn examples provided to be outliers.

For most of the 2010s, ZIRP created a startup gold rush, with everyone trying to leverage the same “burn money, get users” strategy you’ve outlined.

Excepting the current AI bubble, you cannot play that strategy today. Investors started demanding real results in the post-COVID inflation years and continue to demand them today, or else they don't invest at all in high-risk ventures with nothing tangible to show.


ZIRP: Zero interest rate policy

Just in case anybody was wondering. I would have liked to see it spelled out at first mention, so I did that for y'all.


> Figuring it all out is part of the fun,

Which is why it's done that way. Other text-based games where the focus is not on puzzling out what to do next (like roleplaying MUDs) have a stricter and more easily discoverable vocabulary.

This would be like saying using programming languages is terrible because Brainfuck is a terrible programming language.


So what. Enough of us do that it just might be feasible.

I've used Linux for a loong time before some business-critical software ran on it. I had to have a Windows VM for years for netbanking, or before that, dual-boot for gaming.

If we're all too spoiled to give a free alternative a chance because it might be slightly inconvenient, we don't deserve the free alternative.


> Enough of us do that it just might be feasible.

Not nearly enough. Not within three orders of magnitude of what the market would care about.

This isn't the 1990s. Computers are now mainstream.


The thing is, achieving say, 99.99999% reliable AI would be spectacularly useful even if it's a dead end from the AGI perspective.

People routinely conflate "useful LLMs" with "AGI", likely because AGI has been so hyped up, but you don't need AGI to have useful AI.

It's like saying the Internet is a dead end because it didn't lead to telepathy. It didn't, but it sure as hell is useful.

It's beneficial to have both discussions: whether and how to achieve AGI and how to grapple with it, and how to improve the reliability, performance, and cost of LLMs for more prosaic use cases.

It's just that they are separate discussions.


Claude Glyph.

Smallest, fastest model yet, ideally suited for Bash oneliners and online comments.


I've tried it on a test case for generating a simple SaaS web page (design + code).

Usually I use GPT-5-mini for that task. Haiku 4.5 runs 3x faster with roughly comparable results (I slightly prefer the GPT-5-mini output, but I may just be accustomed to it).


I don't understand why more people don't talk about how fast the models are. I see so much obsession with benchmark scores, but speed of response is very important for day-to-day use.

I agree that the models from OpenAI and Google have much slower responses than the models from Anthropic. That makes a lot of them not practical for me.


If the model runs twice as fast but takes an extra correction, that's a worse outcome. I’d take 5-minute responses that are final.


I don’t agree that speed by itself is a big factor. It may matter for a certain audience, but I don’t mind waiting for a correct output rather than going through too many turns with a faster model.


Well, it depends on what you do. If a model can produce a PR that is ready to merge (and another can't), waiting 5 minutes is fine.


> You'll see Groq averaging 1,086tps

What I don't understand is Groq reporting 200 tps for the same model: https://console.groq.com/docs/model/moonshotai/kimi-k2-instr...

OpenRouter numbers look fishy.


Wonder if it’s prompt caching? OpenRouter is (I guess) just reporting observed throughput, whereas presumably Groq is reporting a from-scratch figure? Just a guess, though.
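A rough sketch of how that guess could account for the gap. If cached prompt tokens are served near-instantly but still counted toward throughput, the apparent tok/s can dwarf the raw generation speed. All figures here (8,000 cached tokens, 400 output tokens, 200 tok/s from scratch) are hypothetical, not taken from either dashboard:

```python
# Hypothetical request: a large cached prompt plus a short generation.
cached_tokens = 8000     # served from cache, near-zero latency (assumed)
output_tokens = 400      # freshly generated tokens
gen_tps = 200            # assumed from-scratch generation speed

wall_time = output_tokens / gen_tps                      # 2.0 seconds
apparent_tps = (cached_tokens + output_tokens) / wall_time

print(round(apparent_tps))
# Counting cached tokens inflates the figure far above the raw 200 tok/s,
# which could explain a 1,000+ tps average vs. a 200 tps spec sheet.
```

Under these assumptions the "apparent" rate is 21x the generation rate, so the two numbers could both be honest, just measuring different things.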

