badmonster's comments | Hacker News

Love the automatic linking of related concepts - that's the holy grail of knowledge management! I'm curious about your approach to idea extraction and clustering. How do you handle nuanced distinctions between similar concepts? For instance, if I have articles discussing both "eventual consistency" and "strong consistency," does the system recognize these as related but distinct concepts, or does it risk merging them?

Also, the RAG integration over your whole library is powerful. How do you balance surfacing diverse perspectives against creating potential echo chambers when the system preferentially links to existing concepts in your knowledge graph?


He's right to question the economics. The AI infrastructure buildout resembles the dot-com era's excess fiber deployment - valuable long-term, but many individual bets will fail spectacularly. Utilization rates and actual revenue models matter more than GPU count.


I disagree on that and covered a lot of it in this blog (sorry for the plug!) https://martinalderson.com/posts/are-we-really-repeating-the...


100% of technical innovations have had the same pattern. The same thing happens every time because this is the only way the system can work: excess is required because there is some uncertainty, lots of companies are designing strategies to fill this gap, and if this gap didn't exist then there would be no investment (as happens in Europe).

Also, demand wasn't over-estimated in the 2000s. This is all ex-post reasoning: you use data from 2002 to say...well, this ended up being wrong. Companies were perfectly aware that no-one was using this stuff...do you think that telecoms companies in all these countries just had no idea who was using their products? This is the kind of thing journalists write after the event to attribute some kind of rationality and meaning to it; it isn't that complicated.

There was uncertainty about how things would shake out; if companies ended up not participating then CEOs would lose their jobs and someone else would do it. Telecoms companies who missed out on the boom bought shares in other telecoms companies because there was no other way to stay ahead of the news and announce that they were doing things.

This financial cycle also worked in reverse twenty years later: in some countries, telecoms companies were so scarred that they refused to participate in building out fibre networks, lost share, and then ended up doing even more irrational things. Again, there was uncertainty here: incumbents couldn't raise money from the shareholders they had bankrupted in fibre 15 years earlier, they were 100% aware that demand was outstripping supply, and this created opportunities for competitors. Rationality and logic run up against the hard constraints of needing to maintain a dividend yield and the execs' share option packages.

Humans do not change, markets do not change, it is the same every time. What people are really interested in is the timing but no-one knows that either (again, that is why the massive cycle of irrationality happens)...but that won't change the outcome. There is no calculation you can make to know more, particularly as in the short-term companies are able to control their financial results. It will end the same way it ended every time before, who knows when but it always ends the same way...humans are still human.


> Also, demand wasn't over-estimated in the 2000s. This is all ex-post reasoning: you use data from 2002 to say...well, this ended up being wrong.

Well, the estimate was higher than the reality, so by definition it was over-estimated. They built out as if the tech boom was going to go on forever, and of course it didn't. You can argue that they made the best estimates they could with the information available, but ultimately it's still true that their estimates were wrong.


Your blog article stopped at token generation... you need to continue to revenue per token. Then go even further... The revenue for the AI company is a cost for the AI customer. Where is the AI customer going to get the incremental profits to cover the cost of AI?

For short searches, the revenue per token is zero. The next step is $20 per month. For coding it's $100 per month. With the competition between Gemini, Grok, ChatGPT... it's not going higher. Maybe it goes lower since it's part of Google's playbook to give away things for free.
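
For a rough sense of what that means per token, here is a back-of-the-envelope sketch in Python. The $20/month price is the tier mentioned above; the monthly token consumption is an assumed illustrative figure, not a published number.

    # Back-of-the-envelope revenue per token.
    # monthly_price is the $20/month tier mentioned above;
    # tokens_per_month is an ASSUMED heavy-user figure for illustration only.
    monthly_price = 20.0              # USD per month
    tokens_per_month = 2_000_000      # assumed consumption

    revenue_per_token = monthly_price / tokens_per_month
    revenue_per_million_tokens = revenue_per_token * 1_000_000

    print(f"revenue per token:          ${revenue_per_token:.8f}")
    print(f"revenue per million tokens: ${revenue_per_million_tokens:.2f}")

At those assumed numbers the subscription caps revenue at roughly $10 per million tokens, and the competition described above pushes that ceiling down, not up.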


Great article, thank you for writing and sharing it!


Fiber seems way easier to get long-term value out of than GPUs, though. How many workloads today other than AI justify massive GPU deployments?


Would be funny if all the stagnant GPUs finally brought game streaming to the mainstream.


They discuss it in the podcast. Laid fiber is different because you can charge rent for it essentially forever. It seems some people swooped in when it crashed and now own a perpetual money machine.


"Code red" feels like theater. Competition is healthy - Google's compute advantage was always going to matter once they got serious. The real question isn't who's ahead this quarter, but whether anyone can maintain a moat when the underlying tech is rapidly commoditizing.


It was always clear that the insane technological monopoly of Google would eventually allow them to surpass OpenAI once they stopped messing around and built a real product. It seems this is that moment. There is no healthy competition here because the two are not even remotely on the same footing.

"Code red" sounds about right. I don't see any way they can catch up. Their engineers at the moment (since many of the good researchers left) are not good enough to overcome the tech advantage. The piling debts of OpenAI just make it all worse.


I was wondering how much difference people leaving has made. Most of OpenAI's lead seemed to be built before the saga of the attempted firing of Altman and the departures of Ilya and Mira.


Yeah, but now it's questionable whether the insane investments will ever pay off.


wasn't it always?


*even more questionable


"Who is ahead this quarter" is pretty much all that the market and finance types care about. Maybe "who will be ahead next year" as a stretch. Nobody looks beyond a few quarters. Given how heavily AI is currently driven by (and driving!) the investment space, it's not surprising that they'll find themselves yanked around by extremely short term thinking.


People who only care about this quarter don't donate to a non-profit in the hopes it turns into an investment in a private company.


It feels to me like Google's TPU advantage (speculation is that Meta is buying a bunch) will be one of the last things to be commoditized, which gives them a larger moat. Normal chips are hard enough to come by for this stuff.


Also, they have all the infra to actually use that TPU advantage (as well as actual researchers, unlike OpenAI).


That will be less of a problem since OAI can spill over to other providers as needed if their own capacity is under high utilization. They already use CoreWeave, AWS, Azure, etc. Google doesn't do that as far as I know, and I don't see why they would, so they are stuck eating the capacity-planning risk.


OAI is already working on shipping their own chips.


True, but Google's been making them for 10 years, which subjectively feels like a long time in tech.


Declaring a “code red” seems to be a direct result of strong competition?

Sure, from an outsider’s perspective, competition is fine.


The real insight here is recognizing when network latency is your bottleneck. For many workloads, even a mediocre local database beats a great remote one. The question isn't "which database is best" but "does my architecture need to cross network boundaries at all?"
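
To make the latency point concrete, here is a minimal sketch. Assumptions: a file-backed local SQLite database and a 1 ms round trip for the remote case; real RTTs vary widely, and the remote figure deliberately ignores query execution time.

    import sqlite3
    import time

    # Local, in-process database: each query is just a library call, no network hop.
    conn = sqlite3.connect("local_demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT OR IGNORE INTO users (id, name) VALUES (1, 'alice')")
    conn.commit()

    N = 1_000
    start = time.perf_counter()
    for _ in range(N):
        conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
    local_ms = (time.perf_counter() - start) * 1000

    # Remote database: every query pays at least one network round trip.
    # 1 ms is an assumed same-region RTT; cross-region is often tens of ms.
    assumed_rtt_ms = 1.0
    remote_ms = N * assumed_rtt_ms   # lower bound: ignores query execution entirely

    print(f"{N} local queries:  {local_ms:.1f} ms total")
    print(f"{N} remote queries: {remote_ms:.1f} ms just in round trips")

Even with a generous 1 ms RTT, a chatty access pattern of a thousand small queries spends a full second just crossing the network, which is exactly the boundary question posed above.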


(author here) Yes, 100% this. This was never meant to be a SQLite vs Postgres article per se, more about the fundamental limitations of networked databases in some contexts. Admittedly, at times I felt I struggled to convey this in the article.


Sure. Now keep everything in memory and use redis or memcache. Easy to get performance if you change the rules.


You can use SQLite for persistence and a hash map as cache. Or just go for Mongo since it's web scale.


yep, then add an AWS worker in-between


SQLite can also do in memory
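
For reference, a minimal sketch of the in-memory mode using Python's standard library, with an optional snapshot to disk via the backup API (the file name is just an example):

    import sqlite3

    # Pure in-memory database: lives only as long as the process.
    mem = sqlite3.connect(":memory:")
    mem.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")
    mem.execute("INSERT INTO cache VALUES (?, ?)", ("greeting", "hello"))

    # Optional: snapshot the in-memory database to a file on disk.
    disk = sqlite3.connect("cache_snapshot.db")  # example path
    mem.backup(disk)
    disk.close()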


Yeah, very good point. It all comes down to requirements. If you require persistence, then we can start talking about redundancy and backup, and then suddenly this performance metric becomes far less relevant.


Backups are to the second with litestream.


So much this. My inner perf engineer shudders every time I see one of these "modern" architectures that involve databases sited hundreds of miles from the application servers.


This article is very much a reaction to that. "The problem is the problem," as Mike Acton would say.


What makes Flowctl different from existing workflow automation tools like n8n or Zapier?


Hi! Sorry for the late response.

The key difference is the use case. n8n and Zapier are designed for integration automation: connecting apps and services together.

Flowctl focuses on operational workflows that require human approvals and inputs, things like database migrations, infrastructure changes or scheduled maintenance tasks.

n8n has some permissions features, but approvals aren't built in. Zapier is cloud-only and expensive at scale. Flowctl is fully open-source with no enterprise-only features.


How does the system visualize the spread of news across different sites? Are there network graphs or timeline visualizations showing propagation?


What was the specific pixel art problem with Google's Nano Banana that this Rust project solved?


Does this tool work offline or does it require an API connection to external services like OpenAI?


What RFC standards or protocols can users browse on RFC Hub? Does it include IETF RFCs or is it focused on internal organizational RFCs?


There are zero IETF RFCs on it.


Fair clarification. Does Ghostty's WASM approach have performance advantages over pure xterm.js implementations?

