Hacker News | asah's comments

WeWork was taking on long term liability commitments and paying for them with short term revenue commitments. One bad thing and poof. Everybody in the commercial real estate market saw this coming.

OpenAI may be in the same situation: committed to spending $1.4T while enjoying a good revenue year this year, but then One Bad Thing and poof.


"500 KB/s workload should not use Kafka" - yes!!! Indeed, I'm running a 5 MB/s logging system through a single-node RDS instance costing <$1000/month (plus 2x for failover). There's easily 4-10x headroom for growth by paying AWS more money, and 3-5x+ savings by optimizing the data structures.
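A quick back-of-envelope sketch of what that workload actually amounts to (the 5 MB/s rate and 4-10x headroom figures come from the comment above; the arithmetic is just illustrative):

```python
# Illustrative sizing math for a ~5 MB/s logging stream into a single DB node.
MB = 1024 * 1024
GB = 1024 ** 3

ingest = 5 * MB                      # bytes per second, figure from the comment
per_day = ingest * 86_400 / GB       # bytes/day converted to GB/day

print(f"~{per_day:.0f} GB/day at 5 MB/s")
for factor in (4, 10):
    # The claimed headroom range: what throughput would 4x-10x growth mean?
    print(f"{factor}x headroom -> ~{per_day * factor:.0f} GB/day")
```

At roughly 422 GB of raw log ingest per day, a single well-provisioned relational instance is plausible, which is the commenter's point: this is nowhere near the territory where Kafka's complexity pays for itself.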


I've always said, don't even think about Kafka until you're into MiB/s territory.

It's a complex piece of software that solves a complex problem, but there are many trade-offs, so only use it when you need to.


FTFY: "for now"


+1000. Every time I swing by PH I'm impressed. Unlike other NYC museums, there's never a line and you can be in and out in 30-45 mins. Also, it's located on 23rd St right near 3 subway stations serving a slew of lines. It's a regular stop when I have a few minutes to kill on my way to other things.


Does this include your SPoF Internet connection?


I have redundant fiber with two providers, I'm a LIR, and I do my own BGP; my internet is fine.


This is backward-looking. Future advances don't have to work like this.

Example: 20-ish years ago, stage IV cancer was a quick death sentence. Now many people live with various stage IV cancers for many years, and some even "die of something else." These advancements obviously skew towards helping older people.


Your claim doesn't argue against the issue. Even if we accept that you're correct there, you're again speaking of more people getting to their 'expiration date' rather than expanding that date itself. If you cure cancer, heart disease, and everything else, we're still not going to be living to 100, or even near it, on average.

The reason humans die of 'old age' is not because of any specific disease but because of advanced senescence. Your entire body just starts to fail. At that point basically anything can kill you. And sometimes there won't even be any particular cause, but instead your heart will simply stop beating one night while you sleep. This is how you can see people who look like they're in great shape for their age, yet the next month they're dead.


Depends on the definition; I might take that bet, because under some definitions we're already here.

Example: being better than the average human across many thinking tasks is already done.


I think that the definition needs to include something about performance on out-of-training tasks. Otherwise we're just talking about machine learning, not anything like AGI.


Yes, as stated in this video: https://youtu.be/COOAssGkF6I


A calculator can do arithmetic better than a human. Does this mean we have had so-called AI for half a century now?


That's how the term was sometimes used before. Think of video game AIs: those weren't (and still aren't) especially clever, but they were called AIs and nobody batted an eye at that.


When I write AI, I mean what LLM apologists mean by AGI. So to rephrase: I was talking about so-called AGI 50 years ago, in a calculator. I don't like this recent term inflation.


Let's get an English major to take a calculator to the International Math Olympiad, and see how that goes.


So a sign of AGI, or intelligence on par with a human, is the ability to solve small generic math problems? And it still requires a human-level intelligence as a handler to even start solving those math problems? Is that about right?


Not even close to right. First of all, the "small generic math problems" given at IMO are designed to challenge the strongest students in the world, and second, the recent results have been based on zero-shot prompts. The human operator did nothing but type in the questions and hit Enter.

If you do not understand the core concepts very well, by any rational definition of "understand," then you will not succeed at competitions like IMO. A calculator alone won't help you with math at this level, any more than a scalpel by itself would help you succeed at brain surgery.


It may be difficult for you to believe or digest, but this means nothing for actual innovation. I'm yet to see the effects of LLMs send a shockwave through the real economy.

I've actually hung around Olympiad-level folks, and unfortunately their reach of intellect was limited in specific ways that didn't mean anything with regard to the real economy.


You seem to be arguing with someone who isn't here. My point is that if you think a calculator is going to help you do math you don't understand, you are going to have a really tough time once you get to 10th grade.


A calculator does 1 thinking task.


First of all, it's zero thinking tasks; calculators can't think. But let's call it that for the sake of argument. An LLM can do less than a dozen thinking tasks, and I'm being generous here: generating text, generating still images, generating digital music, generating video, and generating computer code. That's about it. Is that a complete and exhaustive list of all that constitutes a human? Or at least a human mind? If some piece of silicon can do 5-6 tasks, is it a human equivalent now? (AI, aka AGI, presumes human-mind parity.)


Good ol' Turing Test, but the real one, not the pop-sci one.


Try Codex and Claude Code: game-changing ability to use CLI tools, edit/reorganize multiple files, and even interact with git.


Gemini CLI is a thing that exists. Are you saying those specifically are better? Or that CLIs are better?


OpenAI Codex currently seems quite a lot better than Gemini 2.5 and marginally better than Claude.

I'm using all three back-to-back via the VS Code plugins (which I believe are equivalent to the CLI tools).

I can live with either OpenAI Codex or Claude. Gemini 2.5 is useful but it is consistently not quite as good as the other two.

I agree that for non-Agentic coding tasks Gemini 2.5 is really good though.


Since I have only used Gemini Pro 2.5 (free) and Claude on the web (free) and I am thinking of subbing to one service or two, are you saying that:

- Gemini Pro 2.5 is better at feeding it more code and asking it to do a task (or more than one)?

- ...but GPT Codex and Claude Code are better at iterating on a project?

- ...or something else?

I am looking to gauge my options. Will be grateful for your shared experience.


Codex and Claude are better than Gemini in all coding tasks I've tried.

At the "smart autocomplete" level the distinction isn't large, but it gets bigger the more agentic the task you ask for.


Gemini CLI does all this too.


Note: Waymo typically spends 8-12+ months with safety drivers before launching true driverless service.


There's a lot more to this that y'all aren't seeing. It's a difficult family situation; you shouldn't judge.

