How do rights/contract work in case of anthologies?
As I understand it, the "normal" workflow is, as the author you get an advance from a publisher, and when the book is published and revenue exceeds the advance, you get the royalties.
You mention you sign the contract with the authors and pay them, which sounds like a one-time deal.
If it's not confidential, how does this work, and how do original publishers (if any) figure into this?
Good question. Original publishers (the magazines, in this case) set the length of the exclusivity period they purchase along with first print rights. It's usually a year, but many magazine contracts have a carve-out for "year's best" anthologies like mine.
I deal directly with the authors, not the original publishers. I pay a fixed rate for reprint rights, which is a one-time deal. This is mostly because pro rata royalties on an anthology are a pain to calculate (I've done this before), but also because it's unlikely for me to make back the money I spend on anthology creation, so if there were ever royalties they would be tiny in any case.
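To illustrate why pro rata royalties on an anthology are fiddly, here is a toy sketch. The author names, word counts, and pool figure are invented, and splitting the pool by word count is just one common convention, not necessarily what this anthology would use:

```python
# Toy pro-rata royalty split for an anthology: divide a royalty pool
# among contributors in proportion to each story's word count.
# All names and figures are invented for illustration.
stories = {"Author A": 4_000, "Author B": 7_500, "Author C": 2_500}

def split_royalties(pool, word_counts):
    """Return each author's share of `pool`, proportional to word count."""
    total_words = sum(word_counts.values())
    return {author: round(pool * words / total_words, 2)
            for author, words in word_counts.items()}

shares = split_royalties(140.00, stories)
# This has to be recomputed and paid out per author every royalty
# period, which is why a one-time flat fee is far simpler.
```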
There's not much money in short fiction -- many authors use it as a stepping stone to novels, or simply as a hobby.
I would not agree, as these sorts of things were frequent before the latest ZIRP era:
* Uber IPOed in 2019 and posted a loss of $8.5B that year; interest rates were around 2%
* YouTube was acquired by Google for $1.65B in 2006; it had lost ~$350M the year before, and the entire music industry was suing it; interest rates were around 4%
* Facebook bought Instagram for $1B in 2012, when it had no revenue and no plan for generating any. This was smack in the middle of the previous ZIRP cycle; however, I don't think anyone would say Instagram wasn't a huge success, either for the founders or for Facebook
I would agree ZIRP fuels those things (to unhealthy levels), but not that it's always the root cause.
> I would agree ZIRP fuels those things (to unhealthy levels), but not that it's always the root cause.
If you were to try to set a house on fire with just a lighter in your hands, you would not succeed.
If you have a lighter and a tank of gasoline, you would probably succeed. ZIRP was the fuel; the lighter and your will are the "root causes". But with no fuel, no fire.
>ZIRP was the fuel; the lighter and your will are the "root causes". But with no fuel, no fire.
But the "Z" in "ZIRP" is literally zero-percent interest, so your reply doesn't address the GP's counter-examples at >0%. Other examples from non-zero-rate periods include the high-interest 1990s, with rates above 5%: Amazon (founded 1994) losing money for 7 years, PayPal (1998) losing money for 3+ years, Google (1998) losing money for 3+ years.
Those counter-examples mean the simplistic narrative of "ZIRP is The Reason" doesn't explain everything. Those unprofitable companies were scaling out immediately to win the market; they didn't wait for the 2008 ZIRP era to do it.
Today, OpenAI (and other AI startups) are losing billions and expect to lose more billions in the upcoming years even though the current interest rate is ~4%.
>Today, OpenAI (and other AI startups) are losing billions and expect to lose more billions in the upcoming years even though the current interest rate is ~4%.
AI is a little different. If OpenAI and the others hit AGI, or anything remotely near it, the potential returns are, in theory, effectively unbounded. So passing up a 4% risk-free return to invest in a company that could return 10,000% makes sense.
However, with risk-free rates at 4%, backing companies growing 15% a year with a 10% profit margin means that if only 1 in 5 survives, you have lost money, so investors pull back.
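To make that arithmetic concrete, here is a rough sketch. The 10-year horizon and the assumption that a surviving company compounds at its 15% growth rate are mine, purely for illustration:

```python
# Rough expected-value comparison: a risk-free 4% bond vs. a portfolio
# of startups where only 1 in 5 survives, each survivor compounding at
# 15% a year. The horizon and simplifications are illustrative.
years = 10
risk_free = 1.04 ** years            # ~1.48x on the safe asset
survivor_multiple = 1.15 ** years    # ~4.05x if the company makes it
survival_rate = 1 / 5
expected_multiple = survival_rate * survivor_multiple  # ~0.81x

# Expected ~0.81x vs. a guaranteed ~1.48x: the startup portfolio loses
# money relative to the risk-free rate, so investors pull back.
```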
I would consider the few unicorn examples provided to be outliers.
For most of the 2010s, ZIRP created a startup gold rush, with everyone trying to leverage the same “burn money, get users” strategy you’ve outlined.
Excepting the current AI bubble, you cannot play that strategy today. Investors started demanding real results in the post-COVID inflation years and continue to do so; otherwise they don’t invest at all in high-risk ventures with nothing tangible to show.
Which is why it's done that way. Other text-based games where the focus is not on puzzling out what to do next (like roleplaying MUDs) have a more strict and easily discoverable vocabulary.
This would be like saying using programming languages is terrible because Brainfuck is a terrible programming language.
So what. Enough of us do that it just might be feasible.
I used Linux for a long time before any business-critical software ran on it. I had to keep a Windows VM around for years for netbanking, and before that, dual-boot for gaming.
If we're all too spoiled to give a free alternative a chance because it might be slightly inconvenient, we don't deserve the free alternative.
The thing is, achieving, say, 99.99999% reliable AI would be spectacularly useful even if it's a dead end from the AGI perspective.
People routinely conflate the "useful LLMs" and "AGI", likely because AGI has been so hyped up, but you don't need AGI to have useful AI.
It's like saying the Internet is a dead end because it didn't lead to telepathy. It didn't, but it sure as hell is useful.
It's beneficial to have both discussions: whether and how to achieve AGI and how to grapple with it, and how to improve the reliability, performance, and cost of LLMs for more prosaic use cases.
I've tried it on a test case for generating a simple SaaS web page (design + code).
Usually I use GPT-5-mini for that task. Haiku 4.5 runs 3x faster with roughly comparable results (I slightly prefer the GPT-5-mini output, but I may just have become accustomed to it).
I don't understand why more people don't talk about how fast the models are. I see so much obsession with benchmark scores, but speed of response is very important for day-to-day use.
I agree that the models from OpenAI and Google have much slower responses than the models from Anthropic. That makes a lot of them not practical for me.
I don’t agree that speed by itself is a big factor. It may matter to a certain audience, but I’d rather wait for a correct output than churn through too many turns with a faster model.
Wonder if it’s prompt caching? OpenRouter is (I guess) just reporting actual throughput, where presumably groq is reporting a from-scratch figure? Just a guess tho.
We should all find little joys in our life and avoid things that deaden us. If AI is that for you, I'd say you made a good decision.