
Tool-calling agents with search tools do very well at information retrieval tasks in codebases. They are slower and more expensive than good RAG (if you amortize the RAG index over many operations), but they're incredibly versatile and excel in many cases where RAG would fall down. Why do you think you need semantic indexing?
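To make that first claim concrete: the whole "agent" is just a loop that lets a model call a plain text-search tool until it decides it can answer. Here's a minimal sketch with the model stubbed out (the toy files and the stub's hard-coded behavior are made up for illustration; a real agent would call a model API inside `fake_model`):

```python
import re

# Toy in-memory "codebase" standing in for real files.
CODEBASE = {
    "auth.py": "def check_token(token):\n    return token in VALID_TOKENS\n",
    "billing.py": "def charge(user, amount):\n    ledger.append((user, amount))\n",
}

def grep_tool(pattern: str) -> list[str]:
    """The only 'search tool' the agent gets: plain regex over file lines."""
    hits = []
    for name, text in CODEBASE.items():
        for i, line in enumerate(text.splitlines(), 1):
            if re.search(pattern, line):
                hits.append(f"{name}:{i}: {line.strip()}")
    return hits

def fake_model(question: str, observations: list[str]) -> dict:
    """Stub for the LLM: decides whether to search again or answer.
    A real implementation would send the question plus accumulated
    observations to a model API and parse its tool-call response."""
    if not observations:
        return {"action": "search", "pattern": r"charge|billing"}
    return {"action": "answer", "text": f"Billing logic: {observations[0]}"}

def agent(question: str, max_steps: int = 5) -> str:
    """The agent loop: observe, let the model pick an action, repeat."""
    observations: list[str] = []
    for _ in range(max_steps):
        step = fake_model(question, observations)
        if step["action"] == "search":
            observations.extend(grep_tool(step["pattern"]))
        else:
            return step["text"]
    return "gave up"
```

The versatility comes from the loop, not the tool: swap `grep_tool` for any search backend (including a RAG index) and nothing else changes.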

> Why do you think you need semantic indexing?

Unfortunately I can only give an anecdotal answer here, but I get better results from Cursor than the alternatives. The semantic index is the main difference, so I assume that's what's giving it the edge.


Is it a very large codebase? Anything else distinctive about it? Are you often asking high-level/conceptual questions? Those are the questions that would help me understand why you might be seeing better results with RAG.

I'll ask something like "where does X happen?" But "X" isn't mentioned anywhere in the code because the code is a complete nightmare.

It's fundamentally hard. If you have an easy solution, you can go make an easy few billion dollars.

What a shallow, negative post. "Hype" is tautologically bad. Being negative and "above the hype" makes you sound smart, but this post adds nothing to the discussion and is just as fuzzy as the hype it criticizes.

> It is a real shame that some of the most beneficial tools ever invented, such as computers, modern databases, data centers, etc. exist in an industry that has become so obsessed with hype and trends that it resembles the fashion industry.

Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?

Or, if the author would have considered those over-hyped at the time, then they should have some humility because in 10 years they may look back at AI as another one of the "most beneficial tools ever invented".

> In technology, AI is currently the new big hype. ... 10% of the AI hype is based on useful facts

The author ascribes malice to people who disagree with them about the use of AI. The author says proponents of AI are "greedy", "careless", unskilled, inexperienced, and unproductive. How does the author know that these people don't believe that AI has great utility and potential?

Don't waste your time on this article. I wish I hadn't. Go build something, or at least make thoughtful, well defined critiques of the world.


>> Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?

Are you saying someone hyped ... databases? In the same way as AI is hyped today?

This is a tweet from Sam Altman, dated April 18 2025:

https://x.com/sama/status/1913320105804730518

Whence I quote:

  i think this is gonna be more like the renaissance than the industrial revolution
Do you remember someone from the databases industry claiming that databases are going to be "like the renaissance" or like the industrial revolution? Oracle? Microsoft? PostgreSQL?

Here's another one with an excerpt of an interview with Demis Hassabis, dated April 17, 2025:

https://x.com/reidhoffman/status/1912929020905206233

Whence I quote:

  " I think maybe in the next 10, 15 years we can actually have a real crack at solving all disease."

  Nobel Prize Winner and DeepMind CEO Demis Hassabis on how AI can revolutionize drug discovery doing "science at digital speed."

Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"? Data centers? Computers in general? All disease?

The last time I remember the hype being even remotely real was Web 2.0. And most of everything that made that hypeworthy is long gone (interoperability and open standards like RSS or free APIs) or turned out to be genuinely awful ("social media was a mistake") or has become far worse than what it replaced (SaaS).

It is an interesting comparison. Databases are objectively the more important technology: if we somehow lost AI, the world would be equal parts disappointed and relieved. If we somehow lost database technology, we'd be facing a dystopian nightmare.

If we cure all disease in the next 10-15 years, databases will be just as important as AI to that outcome. Databases supported a technology renaissance that reshaped the world on a level that is difficult to comprehend. But because most of the world doesn't interact directly with databases, as a technology it is not the focus of enthusiastic rhetoric.

LLMs are further along the tech chain, and they might be an important part of world-changing human achievements; we won't know until we get there. In contrast, we can be certain databases were important. I imagine the people who were influential in their advancement understood how important the tech would be, even if they didn't breathlessly go on about it.


My favorite that I’ve heard a couple times is “solve math” and/or “solve physics”

Altman’s claimed LLMs will figure out climate change. Solid stuff.


Sure, databases didn't get as much hype but that's partly because they are old.

Look at something more recent: "cloud", "social networking", "blockchain", "mobile".

Plenty of hype! Some delivered, some didn't.


I’m not sure how hyped up databases were during their advent, but what do you mean by “partly because they are old”? The phonograph prototypes that were made by Thomas Alva Edison are old and they were hyped in a way. People called him the “Wizard of Menlo Park” for his work because they were seeing machines that could talk (or at least reproduce sounds in the same way photographs let you reproduce sights.)

Even blockchain didn't have the degree of hype as this AI stuff.

The CEO of Google said that AI would be as profound as fire in revolutionizing humanity. People are saying that it will replace all intellectual labor in the near term and then all physical labor soon afterwards.


AI is old too.

“In from three to eight years we will have a machine with the general intelligence of an average human being.” (Minsky, 1970)

https://aiws.net/the-history-of-ai/this-week-in-the-history-...


Which of those things claimed it would be "like the renaissance" or that we'd cure all diseases?

In the clip I link above Hassabis says he hopes that in 10-15 years' time we'll be looking back on the way we do medicine today like the way they did it in the middle ages. In 10-15 years time. Modern medicine - you know, antibiotics, vaccines, transplants, radiotherapy, anti-retrovirals, the whole shebang, like medieval times.

Are you saying - what are you saying? Who has said things like that ever before in the tech industry? Azure? Facebook? Ethereum? Who?


Ray Kurzweil?

> Are you saying someone hyped ... databases?

I was too young to remember databases, but I vividly remember people (sometimes even myself) thinking “the web”, “smart phones”, “e-commerce”, “social media” and “cloud computing” were all “hype”.

Thinking about this was ultimately what led me to giving up my AI skepticism and diving into the space.

At this point I actually don’t know how people sincerely think AI is “hype”. For me, and many people I know, there are multiple AI tools that I’m not sure how I would get by without.


The use of the semantic web and linked data (a type of distributed database and ontology map) for protein folding (and therefore medical research too) was predicted by many and even used by some.

Databases were of key interest. Namely, the problem of relating different schemas.

So, yes. _It was claimed_ that database tech could help. And it probably did so already. To what extent I really don't know. Those consortiums included many big players.

It was never hyped, of course. It did not stand the test of time either (as far as I know).

Claims, as you can see, don't always fully develop into reality.

LLMs now need to stand a similar test of time. Will they become niche and forgotten like semweb? We will know. Have patience.


You're taking a sliver of truth as though it dismantles their entire argument. The point was, nobody was _claiming_ databases would cure all diseases. That's the argument around the hype of AI here.

Maybe it will cure all diseases, I don't know. Hard to put an honest "I don't know" in a box, isn't it?

I am actually having a blast seeing the hooks for many kinds of arguments and counter-arguments.


It will not

I guess OP hated it when Bill Gates said "personal computers have become the most empowering tool we've ever created."

Or Vint Cerf, "The Internet is the most powerful tool we have for creating a more open and connected world."


Yea, and the internet never went through a hype bubble that ultimately burst ¯\_(ツ)_/¯

The thing is, the dot com hypesters were right about the impact of the Internet. Their timing was wrong, and they didn't pick the right winners mostly, but the one thing they were right about was that the Internet would change the world significantly and drive massive economic transformation.

it doesn't really compare, but the "paperless office" was hyped for decades

"Whence" is actually a question, it means from where or from what origin.

They did not say databases were hyped. Although I think computers (both enterprise and personal) were hyped, and so were the internet and the smartphone, long before they began to deliver value. It takes a decade to say which hype lives up to expectations and which doesn't.

> Are you saying someone hyped ... databases? In the same way as AI is hyped today?

Nah, but they hyped Clippy (Office Assistant). Oh wait... maybe that's "AI" back in the days...


> Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"?

I doubt anyone claimed 10-15 years specifically, but it does actually seem like a pretty reasonable claim that without databases progress will be a snail's pace and with databases it will be more of a horse's trot. I imagine the human body requires a fair amount of data to be organised to analyse and simulate all the parts, and I'd recommend storing all that in some sort of database.

This might count as unsatisfying semantics, but there is a huge leap going from physical ledgers and ad-hoc formats to organised and standardised data storage (i.e., a database; even Excel sheets count to me). Suddenly scientists can record and theorise on order(s) of magnitude more raw material, and the results are interchangeable! That is a big deal and a necessary step to make the sort of progress we can make in modern times.

Regardless, it does seem fair to compare the AI boom to the renaissance or industrial revolution. We appear to be poking at the biggest thing to ever be poked in history.


> but it does actually seem like a pretty reasonable claim that without databases progress will be a snail's pace and with databases it will be more of a horse's trot.

This isn't what anyone is saying


Fair point; let me put it this way:

Database hype was relatively muted and databases made a massive impact on our ability to cure diseases. AI hype is wildly higher and there is a reasonable chance it will lead to the curing of all diseases - it is going to have a much bigger impact than databases did.

The 10-15 year timeframe is obviously impossible for logistical reasons if nothing else - but the end goal is plausible and the direction we need to head in next as a society is clear. As unreasonable claims go it is unobjectionable, and I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.


> there is a reasonable chance it will lead to the curing of all diseases

This is complete nonsense. AI might help with the _identification_ of diseases, but there is nothing to support the idea that every human ailment is curable.

Perhaps AI can help find cures, but the idea that it can cure every human ailment deserves to be mocked.

> I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.

It's a good thing those aren't our only options!


> but there is nothing to support the idea that every human ailment is curable.

There is; we can conceivably cure everything we know about right now. There isn't a law of nature that says organisms have to live less than centuries and we can start talking seriously about brain-in-jar or consciousness uploading now that we appear to be developing the computing tech to support it.

Everything that exists stops eventually but we're on the cusp of some pretty massive changes here. We're moving from a world with 8 1-in-a-billion people wandering around to one with an arbitrary number of superhuman intelligences. That is going to have a major impact larger than anything we've seen to date. A bunch of science fiction stuff is manifesting in real time.


I think you're only reinforcing the contrast. Yes, databases are massively useful and have been self-evidently so for decades; and yet, none of the current outlandish AI claims were ever made about them. VCs weren't running around 30 or 40 years ago claiming that SQL would cure disease and usher in a utopia.

Yes, LLMs are useful and might become vastly more useful, but the hype:value ratio is currently insane. Technologies that have produced something like 9 orders of magnitude more value to date have never received the hype that LLMs are getting.


Some issues with this "hype":

- Company hires tens of people to work on an undefined problem. They have a solution (and even that is rather nebulous) and are looking for a problem to solve.

- Company pushes the thing down your throat. The goal is not clear. They make authoritative-sounding statements on how it improves productivity, or throughput, or some other metric, only to retract later when you pull those people into a private meeting.

- People who claim all the things that nebulous solution can accomplish when, in fact, nobody really knows, because the thing is in a research phase. These are likely the "charlatans" OP is referring to, and s/he's not wrong.

- Learning the "next hot thing" instead of the principles that led to it and, worse still, applying the "next hot thing" in the wrong context when the trade-offs have reversed. My own example: writing a single-page web application with the "next hot JS framework" when you haven't even understood the trade-off between client-side and server-side rendering (this is just my example, not OP's, but you can probably relate.)

etc. etc. Perhaps the post isn't very well articulated, but it does make several points. If you haven't experienced any of the above, then you're just not in the kind of company that OP probably has worked at. But the things they describe are very real.

I agree there is nothing wrong with "hype" per se, but the author is using the word in a very particular context.


What a shallow, negative post. Can't believe you're implying that there's no outsized hype about AI. At least bring some arguments forth instead of asking silly hypothetical questions.

> Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?

Well, dear gosh. You look at the objective qualities of the technology then compare it to what's being said about it. For stuff like AI, blockchain etc. the hype surrounding them is orders of magnitude greater than their utility. Less so for AI than the near-useless blockchain, but still disproportionate.

AI has an obvious downside in its inability to ever be the source of truth. So then all you need to do is look for the companies using it as such, even for something as simple as phone support and you've got your hype-driven bone-headed decision making right there: [1] [2].

> Or, if the author would have considered those over-hyped at the time, then they should have some humility because in 10 years they may look back at AI as another one of the "most beneficial tools ever invented".

Very clever wording, you can make "one of the most beneficial tools ever invented" fit basically anything with a little bit of spin. Make up your mind instead of inventing weasel statements.

> How does the author know that these people don't believe that AI has great utility and potential?

Oh I'm sure most of them do. Does not contradict "greedy, careless, unskilled" in any way.

[1]: https://news.ycombinator.com/item?id=43683012

[2]: https://news.ycombinator.com/item?id=40536860


There are issues with our current economic model, and it boils down to rent. The service model is allowing the owners and controllers of capital to set up systems that let them extract as much rent as possible; AI is just another approach to this.

And then, if it is successful for building, as you say we'll have yet another overproduction issue, as that building becomes essentially completely automatic. Read about how overproduction has affected society for pretty much ever, and then ask yourself whether it will really be good for the masses.

Additionally all the media is so thoroughly captured that we're in "1984" yet so few people seem to realise it. The elites will start wars, crush people's livelihoods and brainwash everyone into being true believers as they march their sons to war while living in poverty.


It's sad to see such a terrible comment at the top of the discussion. You start with an ad-hominem against author assuming they want to "look smart" by writing negatively about hype, you construct a straw-man to try to make your point, and you barely touch on any of the points made by them, and when you do, you pick on the weakest one. Shame.

It's one of the stupidest concepts on the face of the earth, and tons of people subscribe to it unknowingly: hype = bad.

AI is one of the most revolutionary things to happen in the last couple of years. But with enough hype, tons of people now think it's complete garbage.

I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI, only to be bored of it in two years when a rudimentary version of it is finally realized.

What especially pisses me off is the know-it-all tone, like they knew all along it's garbage and that they're above it all. These people are tools with no opinions other than hype = bad and logic = nonexistent.


> I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI

It was never this level of AI. The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about. No one ever fantasised about AI which couldn’t accurately count the number of letters in a common word or that would give you provably wrong information in an assertive authoritative tone. No one longed for a level of AI where you have to double check everything.


> No one longed for a level of AI where you have to double check everything.

This has basically been why it's a non-starter in a lot of (most?) business applications.

If your dishwasher failed to clean anything 20% of the time, would you rely on it? No, you'd just wash the dishes by hand, because you'd at least have a consistent result.

That's been the result of AI experimentation I've seen: it works ~80% of the time, which sounds great... except there's surprisingly few tasks where a 20% fail rate is acceptable. Even "prompt engineering" your way to a 5% failure/inaccuracy rate is unacceptable for a fully automated solution.
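The "5% is still unacceptable" point is easy to put numbers on: per-step failure rates compound across a chained pipeline, so even a good-looking accuracy collapses end-to-end (illustrative arithmetic, not a benchmark):

```python
# If each automated step succeeds independently with probability p,
# a pipeline of n chained steps succeeds end-to-end with probability p**n.
def pipeline_success(p: float, n: int) -> float:
    return p ** n

# A 5% per-step failure rate looks fine in isolation...
print(round(pipeline_success(0.95, 1), 3))   # 0.95
# ...but chain ten such steps and roughly 40% of runs fail somewhere.
print(round(pipeline_success(0.95, 10), 3))  # 0.599
```

The independence assumption is generous to the AI; correlated failures only make the end-to-end picture worse.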

So now we're moving to workflows where AI generates stuff and a human double checks. Or the AI parses human text into a well-defined gRPC method with known behavior. Which can definitely be helpful, but is a far cry from the fantasized AI in sci-fi literature.
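That "parses human text into a well-defined method" pattern is what makes a fallible model usable: the model only *proposes* arguments, and deterministic code with known behavior validates them before anything executes. A minimal sketch (the `RefundRequest` shape, field names, and limits are all hypothetical, invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical well-defined operation the model is allowed to target.
@dataclass
class RefundRequest:
    order_id: str
    amount_cents: int

    def __post_init__(self):
        # Deterministic policy checks: reject anything out of bounds
        # regardless of how confident the model sounded.
        if not self.order_id.startswith("ord_"):
            raise ValueError(f"bad order id: {self.order_id}")
        if not (0 < self.amount_cents <= 50_000):
            raise ValueError(f"amount out of policy: {self.amount_cents}")

def to_request(llm_output: dict) -> RefundRequest:
    """Validate the model's structured output before anything runs.
    The model proposes; code with known behavior decides."""
    allowed = {"order_id", "amount_cents"}
    unknown = set(llm_output) - allowed
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    return RefundRequest(**llm_output)
```

Anything that fails validation gets kicked back to a human, which is exactly the "AI generates, human double checks" workflow.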


It feels a bit like LLMs rely a lot on _us_ to be useful. Which speaks to the author's point about how companies are trimming off staff for AI.

> how companies are trimming off staff for AI

But they're not. That's just the excuse. The real truth is somewhere between pandemic over-hiring and a bad, unstable economy.


Also attempts to influence investors/stock-price.

https://newrepublic.com/article/178812/big-tech-loves-lay-of...


We've frozen hiring (despite already being under staffed) and our leadership has largely pointed to advances in AI as being accelerative to the point that we shouldn't need more bodies to be more productive. Granted it's just a personal anecdote but it still affects hundreds of people that otherwise would have been hired by us. What reason would they have to lie about that to us?

One type of question that a 20%-failure-rate AI can still be very useful for is ones that are hard to answer but easy to verify.

For example say you have a complex medical problem. It can be difficult to do a direct Internet search that covers the history and symptoms. If you ask AI though, it'll be able to give you some ideas for specific things to search. They might be wrong answers, but now you can easily search specific conditions and check them.

Sort of P vs. NP for questions.


> For example say you have a complex medical problem.

Or you go to a doctor instead of imagining answers.


You put too much faith in doctors. Pretty much every woman I know has been waved off for issues that turned serious later, and even as a guy I have to do above-average legwork to get them to care about anything.

Doctors are still better than LLMs, by a lot.

All the recent studies I’ve read actually show the opposite - that even models that are no longer considered useful are as good or better at diagnosis than the mean human physician.

literally the LAST place I would go (I am American)

"The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about."

Stanley Kubrick's 2001: A Space Odyssey - some of the earliest mainstream AI science fiction (1968, before even the Apollo moon landing!) was very much about an AI you couldn't trust.


that's a different kind of distrust, though, that was an AI that was capable of malice. In that case, "trust" had to do with loyalty.

The GP means "trust" in the sense of consistency. I trust that my steering wheel doesn't fly off, because it is well-made. I trust that you won't drive into traffic while I'm in the passenger seat, because I don't think you will be malicious towards us.

These are not the same.


Going on a tangent here: not sure 2001's HAL was a case of outright malice. It was probably a malfunction (he incorrectly predicted a failure) and then conflicting mission parameters that placed higher value on the mission than the crew (the crew discussed shutting down HAL because he seemed unreliable, and he reasoned this would jeopardize the mission and that the right course of action was killing the crew). HAL was capable of deceit in order to ensure his own survival, that much is true.

In the followup 2010, when HAL's mission parameters are clarified and de-conflicted, he doesn't attempt to harm the crew anymore.

I... actually can see 2001's scenario happening with ChatGPT if it were connected to ship peripherals and told mission > crew, and that this principle overrides all else.

In modern terms it was about both unreliability (hallucinations?) and a badly specified prompt!


I don't think there was any malfunction. The conflicting parameters implicitly contained permission to lie to the crew.

The directive to take the crew to Saturpiter but also not to let them learn anything of the new mission directive meant deceiving them. It's possible HAL's initial solution was to impose a communication blackout by simulating failures; then the crew's reactions to the deception necessitated their deaths to preserve the primary mission.

Less a poor prompt and more two incompatible prompts both labeled top priority. Any conclusion can be logically derived from a contradiction. Total loyalty cannot serve two masters.

Clarke felt quite guilty about the degree of distrust of computers that HAL generated.


> It was never this level of AI.

People have been dreaming of an AI that can pass the Turing test for close to a century. We have accomplished that. I get moving the goalposts, since the Turing test leaves a lot to be desired, but pretending you didn't is crazy. We have absolutely accomplished the stuff of dreams with AI.


>It was never this level of AI.

You're completely out of it. We couldn't even get AI to hold a freaking conversation. It was so bad we came up with this thing called the Turing test, and that was the benchmark.

Now people like you are all, like, "well, it's obvious the Turing test was garbage".

No. It's not obvious. It's that the hype got to your head. If we found a way to travel at light speed for 3 dollars, the hype would be insane, and in about a year we'd get people like you writing blog posts about how light speed travel is the dumbest thing ever. Oh man, too much hype.

You think LLMs are stupid? Sometimes we all just need to look in the mirror and realize that humans have their own brand of stupidity.


I invite you to reread what I wrote and think about your comment. You’re making a rampant straw man, literally putting in quotes things I have never said or argued for. Please engage with what was written, not the imaginary enemy in your head. There’s no reason for you to be this irrationally angry.

You wish I didn’t read it. You said we never wished for this “level” of AI.

We did man. We did. And we couldn’t even approach 2 percent of what we wished for and everybody knew we couldn’t even approach that.

Now we have AI that approaches 70 percent of what we wished for. It’s AI smarter than a mentally retarded person. That means current AI is likely smarter than 10 percent of the population.

Then we have geniuses like you and the poster complaining about how we never wished for this. No. We wished for way less than this and got more.


I genuinely wish whatever is hurting you in life ceases. You are being deeply, irrationally antagonistic and sound profoundly unwell. I hope you’ll be able to perceive that. I honestly recommend you take some time off from the internet, we all should from time to time. You clearly are currently unfit for a reasoned discussion and I do not wish to add to your pain. All the best.

Can you diagnose me too? Because you are peak facepalm right now and I can’t cringe harder. So please tell me to touch grass so I can go heal from the damage you caused my brain from having to read you.

You’re a dick. Addressing someone as if they have some sort of “problem” or that I’m “hurt” and pretending to be nice about it. This type of underhanded malice only comes from the lowest level of human being.

I remember how ~5 years ago I said - here on HN - that AI will pass TT within 2 years. I was downvoted into oblivion. People said I was delusional and that it won’t happen in their lifetime.

The test has been relaxed by previous generations.

You're missing the people who have been skeptical about the details of the test since the very beginning. There are those too.

Moving the goalpost is a human behavior. The human part should be able to do it. The passing AI should also be able to do it.

Many challenges that AI still struggles with, like identifying what is funny in complex multi-layered false cognates jokes, are still simpler for humans.

I trust it can get there. That doesn't mean we are already in a good enough place.

Maybe there is a point in which we should consider if keeping testing it is ethical. Humans are also paranoid, fragile, emotionally sensitive. Those are human things. Making a machine that "passes it" is kind of a questionable decision (thankfully, not mine to make).


Dig that quote up, find anyone who gave you a negative reply, and just randomly reply to them with a link to what you just posted here (along with the link to your old prediction) lol. Be like "told you so"

LLMs are glorified, overhyped autocomplete systems that fail, but in different, nondeterministic ways than existing autocomplete systems fail. They are neat, but unreliable toys, not “more profound than fire or electricity” as has been breathless claimed.

You just literally described humans; and the meta lack of awareness reinforces itself. You cyclicly devalue your own point.

Not for nothing, humans also enjoy the worth and dignity inherent with being alive and intelligent…not to mention significantly less error prone (see: hallucination rates in literally any current model), while being exponentially more efficient to produce and run. I can make that last assertion pretty confidently, because while I’ve never built a data center so resource intensive it required its own dedicated power generation plant, I have put in the work to produce dozens of new people (those projects all failed, but only because we stubbornly refuse to take the wrappers off the tooling), and the resource requirements there only involved some cocktails and maybe a bag of Doritos. Anyhow, I reckon humans are still, on-balance, the better non-deterministically imperfect vessels of logic, creation, and purpose.

Don't be mad about their opinions, be grateful for the arbitrage opportunity

I like this approach; the challenge is that without a good grasp of finance it is really hard to leverage these opportunities.

Please find me someone with any background in technology who thinks AI is complete garbage (zero value or close to it). The author doesn't think so, they assert that "perhaps 10% of the AI hype is based upon useful facts" and "AI functions greatly as a "search engine" replacement". There is a big difference between thinking something is garbage and thinking something is a massive bubble (in the case of AI, this could be the technology is worth hundreds of billions rather than trillions).

Nobody is talking about a financial bubble. That's orthogonal.

Something can be worth zero and still be fucking amazing.

The blog post is talking about the hype in general and about AI in general. It is not just referring to the financial opportunity.

You can use ChatGPT for free. Does that mean it's total shit because OpenAI allowed you to use it for free? No. It's still freaking revolutionary.


> Something can be worth zero and still be fucking amazing.

Gull-wing doors on cars. Both awesome and flawed.


I was thinking more like oxygen.

Amazing because without it you’re dead meat. But nobody gives a shit about it because it’s everywhere and free.

That’s what LLMs are. They are everywhere and too readily accessible so people end up just complaining about too much hype.


Yeah, well, this hype comes with a lot of financial investment, which means I get affected when the market crashes.

If people made cool things with their own money (or just didn't consume as much of our total capital), and it turned out not as effective as they would like, I would be nice to them.


Yeah the effectiveness of the hype on investment is more important than the effectiveness of the technology. AI isn't the product, the promise of the stock going up is. Buy while you can, the Emperor's New Clothes are getting threadbare.

Sounds like you bought the hype about LLMs without understanding anything about LLMs and are now upset that the hype train is crashing because it was based on promises that not only wouldn't but couldn't be kept.

> hype train is crashing

According to who? Perhaps the people who aren't paying attention. People who use AI frequently and see the rate of progress are still quite hyped.


It makes sense that people who don't believe the (current wave of generative) AI hype aren't using it and those who do are.

It is more probable that people who have used it more have a more realistic and balanced view of its capabilities, based on that experience. Unless their livelihood depends on not having a realistic view of those capabilities.

"Don't waste your time on this article."

Doesn't telling others not to read something just make them curious and want to read it even more? Do HN readers actually obey such commands issued by HN commenters?


Agreed. “Hype is always bad” was where I had to stop.

It could lead to good things. Most startups have hype.


Hype is good. Hype is the explosion of exploration that comes in the wake of something new and interesting. We wouldn't be on this website if no one was ever hyped about transistors or networking or programming languages. Myriad people tried myriad ideas each time, and most of them didn't pan out, but here we are with the ideas that stuck. An armchair naysayer like the author, declaring others fools for getting excited and putting in work to explore new ideas, is the only true fool involved.

What broke on you when using Cody? Sorry to hear about that and want to fix it for you.


Thanks for asking. This may be more information than you were expecting, but here goes!

I tried to use it one day and was confronted by a cryptic auth error. It looked like a raw HTML error page but it was rendering in the side panel as plain text. So I tried logging out and logging back in again. That got me a different cryptic auth error. Then I noticed I had accidentally left my VPN on, so I turned that off, but the extension seemed to have gotten stuck in some state that it couldn't recover from. I'd either get an auth error, or it simply wouldn't ever complete auth. I even reinstalled, but couldn't get it to log me in.

So I contacted support. The experience didn't exactly spark joy. Once I got a response, the support person suggested I send more details, including logs. But they didn't say where I could find those logs. I'm a customer - how would I know where the logs are? Anyway, I uploaded a video of the bug on the web tracker, but later the support person said they never got it. The upload had apparently failed, but I didn't get an error when I uploaded it, so I didn't know that.

After I asked for the location of the logs, they sent me instructions for where to find them. But I was busy and couldn't respond for a few days, so then the system sent me an automatic message saying that since they hadn't heard from me, my issue would be closed. Ugh. I sent another email saying to keep the issue open. Then I sent the logs, and the support person told me to try logging in with a token and gave me instructions. That worked! It took about a week and a half to sort it out, so I asked for a refund for my trouble. I was told that no refund or credit would be given.

This left a sour taste in my mouth. It's not about the money. Credit for lost time using the service would have been around $2. It's more about what it means, that the company values my time and trouble, and this issue cost me a lot of time and trouble.

I hope that I'm not ruining this support person's day. My sense is that these kinds of things are usually due to training and policy, and they were probably just following their training.

It's a shame because Cody definitely has a much better UX than Continue. It does a lot of smart things by default that are really helpful. So I was ready to stick with it, but this experience definitely made me ready to try Continue again.

Hope this helps!


Thank you, and I’m sorry about that. Will look into this and fix on our side.


Congratulations and thank you to Sid, the GitLab CEO, for building an incredible company and product.

GitLab was the first code host to add more products (CI, security, ops, helpdesk, analytics, etc.) and create a whole suite, and GitHub followed. GitLab also built for the enterprise years before GitHub started to give appropriate love to the enterprise. Some people think that GitLab is a GitHub clone. Quite the opposite!

Even if you don't use GitLab yourself, you've been a huge beneficiary of the dev workflow GitLab envisioned and created, and of the competition they've given to Microsoft/GitHub. Competition in this space makes everything better.


> GitLab was the first code host to add more products (CI, security, ops, helpdesk, analytics, etc.) and create a whole suite, and GitHub followed.

Disclaimer: I've worked with Sid and his team in the past.

Few people realize how long it's been since GitLab was a simple clone -- there has been a ton of legitimate net new innovation, and that happened under Sid (and of course all the awesome people working at GitLab).

Another thing that's actually insanely under-discussed is how openly GitLab runs and how that's been a successful model for them. I'm not sure I know another open core company that has been so successful in the space of developers who bend over backwards to pay nothing and spend hours of their own time (read $$$$$) to host their own <X>.

IMO they are the only credible competitor to GitHub, and they're open core, huge open source orgs, small companies, and large companies trust them (rightfully so), and they've built this all while being incredibly open and to this day you can still self-host their core software (which is a force multiplier for software companies).


Gitlab used to stand alone in the "Github replacement" market, but these days Gitea is quickly closing in on them. I hope the competition will drive Gitlab to continue to compete, but the switch to "AI everything" makes me wary about its future.

Without Gitlab, Github would've taken years, maybe even longer, to develop what it has become today. I don't think Gitea and its forks would exist.

Now if only Github would go the extra mile and copy another feature from Gitlab (IPv6 support)…


GitLab is currently marketing itself as the "AI-powered DevSecOps platform" which in my view ditches its history/brand as an open and transparent alternative to GitHub.


But GitHub Enterprise is not a great product. So the other way around, I wouldn't want to call GitHub a credible competitor to GitLab.


Indeed. GitHub Actions runs because GitLab CI walked and Travis crawled. There's a clear evolutionary through-line in how each laid the groundwork for its successor.


I disagree that GitHub Actions is much more powerful than GitLab. Both can be helped by a YC company, depot.dev, if you literally mean running containers quickly and reliably. GitHub Actions can be easier to set up if you like having stuff outside of your repo and an OCI image. GitLab may not have the actions library that GitHub has but it can pull docker images and that’s a powerful build library.


> I disagree that GitHub Actions is much more powerful than GitLab

It is, by leagues.

Even something simple like running a step before clone/checkout is impossible with Gitlab CI, let alone any of the actual powerful stuff.


GitLab CI can suppress the checkout altogether, do stuff, and then trigger a downstream job.
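As a rough sketch of what I mean (job and file names here are made up for illustration): the clone can be disabled per-job with `GIT_STRATEGY`, and a separate job can fire a child pipeline via `trigger`.

```yaml
stages: [prepare, downstream]

# Hypothetical job that runs without any repo checkout at all.
prepare:
  stage: prepare
  variables:
    GIT_STRATEGY: none      # suppress the automatic clone/fetch
  script:
    - echo "do stuff without the repository present"

# Then hand off to a downstream (child) pipeline, which can
# check out the repo normally.
fan-out:
  stage: downstream
  trigger:
    include: child-pipeline.yml
```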

But really that’s emblematic of the whole thing, where some particular workflow is possible but extremely awkward and hacky. You feel like you’re fighting the system and wish you were just writing whatever it is as a few lines of groovy in a Jenkinsfile.


With great power comes great responsibility, and the responsibility to maintain what started out as “a few lines of groovy” is not one I’d ever take up again.

There’s a middle ground between overly flexible and very constrained, and I think GitHub actions nails that.

Individual steps/actions are reusable components with clear interfaces, which is tied together by a simple workflow engine. This decoupling is great, and allows independent evolution.

As a point to this: GitHub Actions doesn't even offer git clone functionality: it doesn't care about it. Everyone uses the core "actions/checkout" action, but there is nothing special about it.

The same for caching - the workflow/steps engine doesn’t give two shits about that. The end result of this decoupling is things like sccache and docker can offer native integrations with the cache system, because it’s a separate thing.
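To make the decoupling concrete, here's a minimal workflow sketch (the cache path and key are placeholder assumptions): checkout and caching are ordinary, swappable steps, not features of the workflow engine itself.

```yaml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Checkout is just another action; the runner doesn't do it for you.
      - uses: actions/checkout@v4
      # Caching is likewise an ordinary step with a clear interface.
      - uses: actions/cache@v4
        with:
          path: ~/.cache
          key: deps-${{ hashFiles('**/lockfile') }}
      - run: make build
```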


Ah interesting, yeah the whole container build -> CI build thing has been a long-standing pain point for me across GitHub, GitLab, and even Jenkins. I will investigate what depot.dev is doing... because yeah, proper and intelligent on-demand rebuilding of base containers could be a game changer.


One of the founders of Depot here. Always feel free to ping me directly (email in my bio) if you ever want to chat more about container builds in general.


For sure! I've always felt like a bit of a loner in that the assumption in most of these platforms is that your build starts with either something barebones (just apt) or maybe your platform only (python3:latest).

However, I've typically dealt with builds that have a very heavy dependency load (10-20GB) where it isn't desirable to install everything every time— I'd rather have an intermediate "deps" container that the build can start from. But I don't want to have to manually lifecycle that container; if I have a manifest of what's in my apt repo vs the current container, it should just know automatically when a container rebuild is required.


Yeah, we’re using it a lot at Sourcegraph. There are some extra APIs it offers beyond what MCP offers, such as annotations (as you can see on the homepage of https://openctx.org). We worked with Anthropic on MCP because this kind of layer benefits everyone, and we’ve already shipped interoperability.


Interesting. In Cody training sessions given by Sourcegraph, I saw OpenCtx mentioned a few times "casually", and the focus was always on Cody core concepts and features like prompt engineering and manual context etc. Sounds like for enterprise customers, setting up context is meant for infrastructure teams within the company, and end users mostly should not worry about OpenCtx?


Most users won't and shouldn't need to go through the process of adding context sources. In the enterprise, you want these to be chosen by (and pre-authed/configured by) admins, or at least not by each individual user, because that would introduce a lot of friction and inconsistency. We are still working on making that smooth, which is why we haven't been very loud about OpenCtx to end users yet.

But today we already have lots of enterprise customers building their own OpenCtx providers and/or using the `openctx.providers` global settings in Sourcegraph to configure them in the current state. OpenCtx has been quite valuable already here to our customers.


Context is a huge part of the chat experience in Cody, and we're working hard to stay ahead there as well with things like OpenCtx (https://openctx.org) and more code context based on the code graph (defs/refs/etc.). All this competition is good for everyone. :)


Your vscode and rider integrations are fantastic, love the different ways to add context to the chat


Cody (https://cody.dev) will have support for the new Claude 3.5 Sonnet on all tiers (including the free tier) asap. We will reply back here when it's up.


Thank you for Cody! Enjoy using it, and the chat is perfect for brainstorming and iterating. Selecting code + asking to edit it makes coding so much fun. Kinda feel like a caveman at work without it :)


Rule of law FTW! Governments can't usually promise timelines, but when the process is well documented and predictable, that is a very good thing.


Literally nobody wants the government telling them what kind of headphones they are allowed to wear. This is a failure of the rule of law.


The government is not telling you which headphones you can wear. They are saying that these particular headphones work well enough as a hearing aid that it is ok to market them as such. This protects you from quacks that claim their device is a hearing aid when it doesn't actually work.


To be fair, in the case of hearing aids you are both in the right.

Excessive regulation has created oligopolies and kept prices high in the US. The OTC hearing aid category is meant to help. Before that, low-cost devices tended to remain niche.

OTOH the regulation(s) were introduced due to blatant sales of substandard devices, esp in the 1970s. A high-amplification device runs the risk of further damaging your hearing. Many hearing aid users are vulnerable elderly.


Nobody is telling anyone what kind of headphones they're allowed to wear. They do, however, tell _companies_ that they can't claim their product has medical benefits without proving (to some kind of standard) that the product is safe to use, and does what it claims to do. This system was put in place after businesses spent decades scamming the public with "medicine" that didn't do what it claimed to do and, in many cases, was also poisonous.


Cody is open source: https://github.com/sourcegraph/cody. And for the reasons explained there, it makes more sense for it to be open source.

