WeWork was taking on long-term liability commitments and paying for them with short-term revenue commitments. One bad thing and poof. Everybody in the commercial real estate market saw this coming.
OpenAI may be in the same situation: committed to spending $1.4T while enjoying a good revenue year this year, but then One Bad Thing and poof.
"A 500 KB/s workload should not use Kafka" - yes, indeed! I'm running a 5 MB/s logging system through a single-node RDS instance costing <$1000/month (plus 2x for failover). There's easily 4-10x headroom for growth by paying AWS more money, and 3-5x+ savings by optimizing the data structure.
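As a back-of-envelope sketch of the headroom math above: the daily volume follows directly from the 5 MB/s figure, and the 4-10x headroom corresponds to a single node comfortably sustaining somewhere in the tens of MB/s of insert traffic. The sustained-write range below is an illustrative assumption, not a measured RDS limit.

```python
# Back-of-envelope sizing for the logging workload described above.
# The 20-50 MB/s sustained-insert range is a hypothetical assumption
# for a mid-size single-node database, not a benchmarked figure.
ingest_mb_s = 5.0                              # current logging throughput (MB/s)
ingest_gb_day = ingest_mb_s * 86_400 / 1024    # seconds/day -> GB/day

sustained_low_mb_s = 20.0                      # assumed low end of node capacity
sustained_high_mb_s = 50.0                     # assumed high end of node capacity

headroom_low = sustained_low_mb_s / ingest_mb_s    # 4x
headroom_high = sustained_high_mb_s / ingest_mb_s  # 10x

print(f"~{ingest_gb_day:.0f} GB/day ingest, "
      f"headroom roughly {headroom_low:.0f}x-{headroom_high:.0f}x")
```

At those assumed numbers the workload writes roughly 420 GB/day, which also hints at why schema/data-structure optimization (the 3-5x+ savings mentioned) matters as much as raw instance size.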
+1000. Every time I swing by PH, I'm impressed. Unlike other NYC museums, there's never a line and you can be in and out in 30-45 minutes. It's also located on 23rd St, right near 3 subway stations serving a slew of lines. It's a regular stop when I have a few minutes to kill on my way to other things.
This is backward looking. Future advances don't have to work like this.
Example: 20ish years ago, stage IV cancer was a quick death sentence. Now many people live with various stage IV cancers for many years, and some even "die of something else." These advancements obviously skew toward helping older people.
Your claim doesn't argue against the issue. Even if we accept that you're correct there, you're again speaking of more people getting to their 'expiration date' rather than extending that date itself. If you cure cancer, heart disease, and everything else, we're still not going to be living to 100, or even near it, on average.
The reason humans die of 'old age' is not because of any specific disease but because of advanced senescence. Your entire body just starts to fail. At that point basically anything can kill you. And sometimes there won't even be any particular cause, but instead your heart will simply stop beating one night while you sleep. This is how you can see people who look like they're in great shape for their age, yet the next month they're dead.
I think that the definition needs to include something about performance on out-of-training tasks. Otherwise we're just talking about machine learning, not anything like AGI.
That's how the term was sometimes used before. Think of video game AIs: those weren't (and still aren't) especially clever, but they were called AIs and nobody batted an eye at that.
When I write AI, I mean what LLM apologists mean by AGI. So to rephrase: I was talking about so-called AGI 50 years ago in a calculator. I don't like this recent term inflation.
So a sign of AGI, or intelligence on par with humans, is the ability to solve small generic math problems? And it still requires a handler with human-level intelligence to be paired with, to even start solving those math problems? Is that about right?
Not even close to right. First of all, the "small generic math problems" given at IMO are designed to challenge the strongest students in the world, and second, the recent results have been based on zero-shot prompts. The human operator did nothing but type in the questions and hit Enter.
If you do not understand the core concepts very well, by any rational definition of "understand," then you will not succeed at competitions like IMO. A calculator alone won't help you with math at this level, any more than a scalpel by itself would help you succeed at brain surgery.
It may be difficult for you to believe or digest, but this means nothing for actual innovation. I'm yet to see the effects of LLMs send a shockwave through the real economy.
I've actually hung around Olympiad-level folks and, unfortunately, their reach of intellect was limited in specific ways that didn't mean anything with regard to the real economy.
You seem to be arguing with someone who isn't here. My point is that if you think a calculator is going to help you do math you don't understand, you are going to have a really tough time once you get to 10th grade.
First of all, it's zero thinking tasks; calculators can't think. But let's call them that for the sake of argument. An LLM can do fewer than a dozen "thinking" tasks, and I'm being generous here: generating text, generating still images, generating digital music, generating video, and generating computer code. That's about it. Is that a complete and exhaustive list of all that constitutes a human? Or at least a human mind? If some piece of silicon can do 5-6 tasks, is it a human equivalent now? (AI aka AGI presumes human-mind parity.)
Since I have only used Gemini Pro 2.5 (free) and Claude on the web (free) and I am thinking of subbing to one service or two, are you saying that:
- Gemini Pro 2.5 is better at feeding it more code and asking it to do a task (or more than one)?
- ...but that GPT Codex and Claude Code are better at iterating on a project?
- ...or something else?
I am looking to gauge my options, and I'll be grateful for your shared experience.