CS fundamentals are about framing an information problem so that it's solvable.
That'll always be useful.
What's less useful, and what's changed in my own behavior, is that I no longer read tool-specific books. I used to devour books from Manning, O'Reilly, etc. I haven't read a single one since LLMs took off.
The point of the argument is that meaning emerges in conversation. A session between human and AI is a conversation.
Current AI storage paradigms offer lateral memory across the time axis. What exists around me?
A git branch is longitudinal memory across the time axis. What exists behind me?
Persist type-checked decision trees within it. Your git history just became a tamper-proof, reproducible O(1) decision tree. Execution becomes a tree walk.
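If the commit-as-decision-node idea above were fleshed out, it might look something like the following sketch. This is purely illustrative: a plain dict stands in for the commit store (the actual git plumbing is elided), and the node ids and questions are invented for the example.

```python
# Hypothetical commit store: maps a node id to the decision recorded
# at that point in history (in git, each entry would be one commit,
# and the node id its hash).
history = {
    "root": {"question": "input valid?", "yes": "n1", "no": "reject"},
    "n1":   {"question": "cached?",      "yes": "hit", "no": "miss"},
}

def walk(store, node_id, answers):
    """Replay a decision path; each lookup in the store is O(1)."""
    path = [node_id]
    node = store.get(node_id)
    while isinstance(node, dict):
        node_id = node[answers[node["question"]]]
        path.append(node_id)
        node = store.get(node_id)
    return path

print(walk(history, "root", {"input valid?": "yes", "cached?": "no"}))
# ['root', 'n1', 'miss']
```

Replaying the same answers always yields the same path, which is the reproducibility claim in miniature; the tamper-proofing would come from git's content-addressed hashes, not from anything in this sketch.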
Same here. I'm really curious about this too. What do they mean by "wonderful"? I suppose some of the pieces here might not be very well-known or popular, but they are inspiring, or maybe they are a good resource for learning something. All I can see is they are maybe associated with a read-later app?
> Non-offending pedophiles should be more widely accepted by society. It’s unfair to ostracize someone for a desire they were born with, and integrating them into society makes them less likely to cause harm.
There's no evidence that anyone is born with particular sexual deviations. The argument attempts to simultaneously absolve and normalize attitudes that ideate the rape of children, so long as they aren't acted on. That's a pretty thin and permeable line to draw.
It depends on how you test it. I recently found that the way devs test it differs radically from how users actually use it. When we first built our RAG, it showed promising results (around 90% recall on large knowledge bases). However, when the first actual users tried it, it could barely answer anything (closer to 30%). It turned out we relied on exact keywords too much when testing it: we knew the test knowledge base, so we formulated our questions in a way that helped the RAG find what we expected it to find. Real users don't know the exact terminology used in the articles. We had to rethink the whole thing. Lexical search is certainly not enough. Sure, you can run an agent on top of it, but that blows up latency - users aren't happy when they have to wait more than a couple of seconds.
This is the gap that kills most AI features. Devs test with queries they already know the answer to. Users come in with vague questions using completely different words. I learned to test by asking my kids to use my app - they phrase things in ways I would never predict.
Ironically, pitting an LLM (ideally a completely different model) against what you're testing and letting it write out-of-the-ordinary, human-style queries to use as test cases tends to work well too, if you don't have kids you can use as a free workforce :)
It solves some types of issues lexical search never will. For example if a user searches "Close account", but the article is named "Deleting Your Profile".
But lexical solves issues semantic never will. Searching an invoice DB for "Initech" with semantic search is near useless.
Pick a system that can do both, including a hybrid mode, then evaluate if the complexity is worth it for you.
Depends on how important keyword matching vs something more ambiguous is to your app. In Wanderfugl there are a bunch of queries where semantic search can find an important chunk that lacks a high BM25 score. The good news is you can get all the benefits of BM25 and semantic with a hybrid ranking. The answer isn't one or the other.
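One common way to do the hybrid ranking mentioned above is reciprocal rank fusion (RRF): run BM25 and the semantic retriever separately, then score each document by the sum of 1/(k + rank) over the lists it appears in. A minimal sketch, with hard-coded result lists standing in for real retrievers (the doc ids echo the "Initech" and "close account" examples from this thread):

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: merge several ranked lists of doc ids."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists: BM25 nails the exact-keyword invoice doc,
# while the embedding model surfaces the paraphrase ("Deleting Your
# Profile" for a "close account" query). Fusion keeps both near the top.
bm25_results = ["invoice_initech", "faq_billing", "faq_delete_profile"]
semantic_results = ["faq_delete_profile", "faq_billing", "invoice_initech"]

print(rrf([bm25_results, semantic_results]))
```

The constant k=60 is the conventional default from the original RRF paper; the appeal of the method is that it needs no score normalization across the two retrievers, only ranks.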
Can you have a coding philosophy that ignores the time or cost taken to design and write code? Or a coding philosophy that doesn't factor in uncertainty and change?
If you're risking money and time, can you really justify this?
- 'writing code that works in all situations'
- 'commitment to zero technical debt'
- 'design for performance early'
As a whole, this is not just idealist, it's privileged.
Will save you time and cost in designing, even in the relatively near term of a few months when you have to add new features etc.
There are obviously extremes of "get something out the door fast and broken, then maybe neaten it up later" vs "refactor the entire codebase any time you think something could be better", but I've seen more projects hit a wall from leaning too far toward the first than the second.
Either way, I definitely wouldn't call it "privileged", as if it weren't a practical engineering choice. That just frames things in a way where you're already assuming early design and commitment to refactoring is a bad idea.
Your argument hinges on getting the design right, upfront.
That assumes uncertainty is low or non-existent.
Time spent, monetary cost, and uncertainty are all practical concerns.
An engineering problem where you can ignore time spent, monetary cost, and uncertainty, is a privileged position. A very small number of engineering problems can have an engineering philosophy that makes no mention of these factors.
It’s the equivalent of someone running on a platform where there would be world peace and no hunger.
That’s great and all as an ideal but realistically impossible so if you don’t have anything more substantial to offer then you aren’t really worth taking seriously.
You forgot "get it right the first time", which goes against the basic startup mode of "be early to market or die".
For some companies, trying to get it right the first time may make sense but that can easily lead to never shipping anything.
So: The author wants to work for a company with resources.
Unfortunately, details take time and time takes money.
For a business's survival, the company's relative positioning in the market, access to sales and marketing channels, and financing are much stronger concerns.
Also the book is $60 on Kindle and $80 for paperback? Who's the target audience?