Hacker News | tastyminerals2's comments

Nope. I am somewhat in disbelief at the degree to which people here downgrade the recent Bing upgrade. The chatbot feature is such a time saver for jobs where you need to write tons of abstract surface-level content. Just yesterday, I tried compiling a monthly teachers' working plan for a local school in Ukrainian. What it produced in a minute would have been at least a day of typing work for a human. This is especially useful when you don't know the formal register of the language. And the quality was good enough for bootstrapping.


At this point, we don't know how useful it will turn out to be. While we are trading anecdotes, here's mine. I asked ChatGPT who the exec team of a medium-sized company I knew was. It confidently stated seven names. It turns out three of them were listed under the wrong title, and another three never even worked at the company! I would hardly call this a bootstrappable result.

In your example, if I told you that roughly 30% of the results were made up at random, you would not consider that a time saver. In fact, it would be a total time waster, since you would have to vet every single entry. People think that since 70% is accurate, only 30% of the work remains, but that doesn't hold if you don't know which 30% is bogus. You would need to check the entire output by conventional means, including perhaps a "regular" search engine.


You're not using it right. LLMs fundamentally don't know s*t about this company's exec team. Maybe some names are statistically close to the company name in vector space, but there's no guarantee (as you discovered).

LLMs won't revolutionize search as it exists today for factual queries. They're Clippy 2.0. It's great that people are finding uses for the models, but I wish this search story would be balanced out a bit.

I was laid off recently, and I'm using an LLM to write a bunch of cover letters. I give it my resume, a blurb about the company and job, and a bit about what I like about the work, and it outputs a cover letter. I don't like writing BS cover letters where I pretend I majored in the company mission and that my whole life has been teaching me their values. GPT can do that for me, though. And yes, I fact-check, but I'm fact-checking against my resume and personal opinions, which I obviously know quite well.
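A workflow like this is easy to script. Below is a hypothetical sketch of how the prompt might be assembled; the function name and wording are made up for illustration, and it doesn't call any particular API:

```python
def cover_letter_prompt(resume: str, company_blurb: str, likes: str) -> str:
    # Assemble the pieces described above into a single LLM prompt.
    return (
        "Write a one-page cover letter.\n\n"
        f"My resume:\n{resume}\n\n"
        f"About the company and role:\n{company_blurb}\n\n"
        f"What I like about this kind of work:\n{likes}\n\n"
        "Tie my experience to the role; keep the tone professional."
    )

prompt = cover_letter_prompt(
    resume="8 years of backend development, Python and Go.",
    company_blurb="Acme Corp builds logistics software for ports.",
    likes="I enjoy untangling messy distributed systems.",
)
print(prompt.splitlines()[0])  # Write a one-page cover letter.
```

Because the prompt is assembled from your own resume and opinions, fact-checking the output reduces to checking it against inputs you already know well.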


But the whole point of these debates is a discussion about whether Bing is going to eat Google's lunch in search, which very much is about finding out about things like companies' exec teams.

The ability of an LLM to generate decent content (provided you're an attentive editor or the users of the content aren't too discerning) could be huge for Office 365, but it's irrelevant to any potential threat to Google: Docs is of very little importance to Google's revenues and strategy in a market where Office is completely dominant and has always had the more full-featured product.


> But the whole point ... is ... in _search_ (not content generation)

True. And also, keep in mind ... it doesn't truly have to be _better_ than Google search. You just need to start and maintain a _social trend_ so that the mainstream public _chooses_ it over Google. People use Google because it's the first and only option that comes to mind -- they haven't actually compared its accuracy to anything else in a long time (the audience of Hacker News is of course an exception).


> whether Bing is going to eat Google's lunch in search,

There are a few types of search queries people seem to make: factual lookups ("who is the exec of abc?"), but people also generally treat the search engine as the entryway to the internet ("I need a teaching plan about Ukraine"). We'll see LLMs fall flat on facts (assuming people care), but they can supplant some of the general traffic. Realistically, it's a bad fact-search replacement, but it could be a great tool to put next to a search bar, making a better "starting place for accessing the internet".

With the teaching plan example, the original user was probably going to query for a template (or five), then copy and paste, then do 10-100 queries learning all about Ukrainian history and culture, then rewrite that into the template, edit it down to a manageable size, then send it to peers to edit and review, then format it for distribution. That could be dozens of Google searches. Now it's one or two AI queries, and they have a template and basic written text, and can focus on a couple of queries for fact checking. Oh, and since they used Bing to do the AI part, they may just stick with Bing for the fact-checking part. Google was irrelevant in that whole flow instead of getting dozens of queries over a day, but if that feature were moved to Office 365, they might never have used Bing for search while still killing a chunk of Google's traffic.

The danger to Google is not equal to the opportunity for Bing. If 5-10% of traffic never reaches a Google search, that's a huge chunk of Google's revenue, even if it doesn't translate to searches on a different engine. Think of the potential impact an AI code generator could have on Stack Overflow. When I need to pick up a new language, I often query "how to append to an array in python" in a search engine, but an LLM (or large code model) built into my IDE could supplant that query entirely.
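For what it's worth, the example query above has a one-line answer, which is exactly why an in-IDE model could absorb it entirely:

```python
fruits = ["apple", "banana"]
fruits.append("cherry")   # list.append adds a single element in place
print(fruits)             # ['apple', 'banana', 'cherry']
```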


An analogue of Jevons' paradox applies here.

I doubt you hand wrote cover letters by making dozens of search queries (similarly, I doubt people devise teaching curricula by learning the history of Ukraine through a series of Google queries). But when you weren't taking the time to write them yourself, I bet you had more time free to search for jobs, or do general internet browsing using Google as your gateway to the internet...

People having more time free to browse the internet is unlikely to be a threat to Google's business, even in the highly unlikely scenario that Google is incapable of advancing its existing AI products beyond their current state.


I think of these tools as "first draft writers". You can't rely on OpenAI's GPT models to do your research for you or replace knowledge of a particular domain, but they significantly accelerate the initial content drafting process. Then you edit and fact-check and adapt, as you would anyway, but you've cut out much of the time-consuming grind of getting words on the page.


There is something that I find deeply unsatisfying about this. For me, the first draft is as much about working through the problem space and considering possibilities. If I rely on a chatbot, then I am more apt to become anchored to whatever the chatbot spits back at me. Even if what it produces is good enough, I do not benefit from the drafting process the way I would if I did it myself. Sometimes this may be a good enough shortcut, but I generally don't believe in shortcuts.


Actually, it works not as the first draft or the final draft; it is the middle draft. In stage one you just drop a bunch of bullet points, ideas, and short notes. In stage two the model writes your article or paper. In stage three you fix it.

Almost certainly the next iteration will sport a fact checker, powerful style and format controls, and a much larger context. The development of advanced fact checkers will have a big impact on everything propagated online.


It's more like a pre-draft from what I've seen. But for an area I know, I can absolutely see something like ChatGPT throwing 500 words down on some topic and generating some explanatory boilerplate about something like what a service mesh is. I'll take out some things that I don't quite agree with, give it more of a "voice," maybe add some data/quotes/links/etc.

It's not going to write me something I'll hand to an editor. But for certain things, it could definitely give me a head start relative to a blank sheet of paper.


The first draft does set the direction.


In my experience a domain expert (myself in my domain) can quickly validate the answers. It can make a multi-hour task a multi-minute task.


I also got false positives when I asked for facts. That is what classical search is for. But what the chat feature in Bing is, is a context-aware, coherent text-generation machine with an amazing interactive ability to modify and bend its content to your wishes. It's also not bad at summarizing articles. But hey, if this is just the beginning, I bet in a couple of years it will match your standards as well.


With LangChain and tooling, these problems will likely shrink quickly.

Imagine where we'll be just two papers down the line!


Everyone in this thread replying to npalli gets it. I am getting more and more skilled at using it every day. I feel like I have a third lobe of my brain.

There are largely three groups of people:

1) ChatWhat?

2) It only makes bullshit!

3) OMG, this is amazing, and scary, and amazing, and useful. Oh wow...


I think there's also a pretty big group of us who find that it is today a moderately useful tool for certain types of things but isn't really transformative in general.


You’re not using ChatGPT correctly.


As the saying goes, you’re holding it wrong.


In the context of replacing a search engine, it is the correct use.


"abstract surface-level content"

aka "bullshit"


Bullshit to you but full time work for plenty...


Well, they think they will be paid 10 times more now that they can produce bullshit 10x faster ... in reality, they will be fired.


> Well, they think they will be paid 10 times more now that they can produce bullshit 10x faster … in reality, they will be fired.

…and replaced by b.s.-chatbot wranglers, who will be paid much more, who will produce more total output, and who will be selected preferentially from among the people who best understand the work the chatbots are doing. So, yeah, in lots of cases, the people who were writing bullshit will end up wrangling chatbots, in jobs that bring in more money for their employer, and probably at higher pay (though, by historical trends of automation, a lower share of the generated value).

This is even more clearly the case for people who have writing bullshit as an incidental part of their job rather than a core part, since the incidental part will consume less time, increasing productivity without eliminating the need for the core job. So it's not even a "lose one job but move to the replacement job" situation; it's just a "be more valuable in your existing job" one.


Relative to what, the "high quality" content you'd spend more time dredging up on Google?


Maybe someone who admits to not knowing the Ukrainian language shouldn't volunteer to produce an entire teaching plan in that language from a chatbot.


Fair, but the question isn't whether it is useful but whether it will replace traditional search.


It doesn't need to replace it, though. If Bing has a good ChatGPT that is useful for some non-trivial share of searches, then people will go there instead of Google, and then run all their other searches there as well.

It just needs to be useful enough to dislodge the google monopoly.


If the search product still isn't good enough, why would a ChatGPT panel make the difference? You can always pull up ChatGPT in another tab.


'in another tab' == never for 98% of the population.


I doubt most people measure search quality in any useful way.

People are lazy and are creatures of habit. Give them a single place to talk to ChatGPT and to search, and they'll take it.


Yes, it will replace search, the same way that cars replaced horses and planes replaced zeppelins: they had a significant speed advantage and allowed humans to spend time on more productive activities that can't be automated.

Then traditional search is going to become "raw index search" that you query by writing something like intitle:"carbonara" source:"google_index", or "give me all webpages containing carbonara in the title".


And how Google Glass replaced smartphones, bitcoin replaced USD, Juicero replaced juicers, Segways replaced walking, and Second Life replaced life.


I think it is something to reconsider, especially if you genuinely don't perceive the difference in value between Juicero and a product that was adopted by 100 million users in its first two months.

Really, you can believe me: outside Silicon Valley, nobody cares or cared about Juicero.

This is very different for ChatGPT, and I'm sure that if you take an interest in it you'll find interesting uses that fit your daily workflow (or are just fun, like with image generation models).


The ChatGPT frenzy goes all the way to China. They are reacting with "huge excitement", but also realising how far behind they are.


I stopped considering ChatGPT a niche/nerdy phenomenon after I overheard, less than two weeks after release, some random Polish commentary youtubers (the YouTube equivalent of mass-market celebrity gossip, except more self-referential) showcasing the chatbot and voicing their opinions about large language models.

One useful thing I learned from this, though, is that ChatGPT can handle Polish just fine. It never even occurred to me to try it - I incorrectly assumed the model was trained on English text only. I suspect that being multilingual from day 1 was a huge factor in ChatGPT's sudden and extreme user growth.


It all sounds good as a consumer but there are a few questions which are by no means clear to me:

* How does getting recommendations from a chatbot (what TV to buy) play with websites that produce such content (TV reviews)?

* How does it play with websites that rely on ad impressions?

* How can you monetize a chatbot? (There's an easy way: free tier + monthly subscription.)

* How do you reduce the massive compute cost of a good chatbot without making it bad? (This also seems more straightforward.)


Apparently Google is capable of doing cool things. DeepMind's speculative sampling achieves 2–2.5x decoding speedups in LLMs. That brings costs down significantly, without degradation in quality.
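For the curious, the core accept/reject trick of speculative sampling can be sketched in a few lines. This is a toy sketch with made-up draft and target distributions, not DeepMind's implementation, and it omits the "bonus" token sampled when every draft token is accepted:

```python
import random

def sample(dist):
    """Draw one token index from a categorical distribution."""
    r, acc = random.random(), 0.0
    for tok, p in enumerate(dist):
        acc += p
        if r < acc:
            return tok
    return len(dist) - 1

def speculative_step(prefix, draft_dist, target_dist, k=3):
    """Propose k tokens with the cheap draft model, then accept or reject
    them so the result is distributed as if sampled from the target model."""
    ctx, proposed = list(prefix), []
    for _ in range(k):
        tok = sample(draft_dist(ctx))
        proposed.append(tok)
        ctx.append(tok)
    ctx, accepted = list(prefix), []
    for tok in proposed:
        p, q = target_dist(ctx), draft_dist(ctx)
        if random.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)   # keep the draft token
            ctx.append(tok)
        else:
            # Rejected: resample from the residual max(0, p - q), renormalized.
            residual = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
            z = sum(residual) or 1.0
            accepted.append(sample([r / z for r in residual]))
            break
    return accepted

# Toy models over a 4-token vocabulary (the draft is deliberately skewed).
draft = lambda ctx: [0.4, 0.3, 0.2, 0.1]
target = lambda ctx: [0.25, 0.25, 0.25, 0.25]
print(speculative_step([], draft, target, k=3))
```

The speedup comes from the expensive target model scoring all k proposed tokens in one forward pass instead of k sequential ones, while the accept/reject step keeps the output distribution identical to the target's.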


Yikes, I don't have access to the Bing chatbot, but Google Translate for Ukrainian is not great compared to Polish or Russian. I would not rely on it for any work.


Did you try DeepL? It has been blowing Google Translate out of the water for years now.


How is that possible? Google has the research power, the TPUs, the data, everything. But Translate is not as good as a tool made by a much smaller company.

This is also a pattern. Their voices are not better than some paid services (NaturalReader, for example). Their OCR and document understanding are inferior to Amazon Textract. Even in speech recognition there is the excellent Whisper from OpenAI, doing just as well or better. Google's generative image models are not the best, and are locked away for good measure. I think SD and MJ rule.

Google's AI was cool in 2000 for search and in 2016 for games. But now the best people are leaving them: almost the whole team that invented transformers has left to found their own startups.

I also think maybe, just maybe, their TPUs are bad and they can't scale high-quality models to the public. Maybe they lost the race because GPUs were better in the end. Maybe it's stupid, but how else can we explain the lack of advanced AI? The other explanation is that they won't mess with something that makes them so much money (the current search/ad model).


I've found similar... it did a day's work for me the other day in about 20 seconds.

But is this related to search? I'm a "pro ChatGPT" user (i.e., I pay) and I don't use it for anything search-related. It's entirely unrelated.


That's really nice, but how will you monetize it? In its chat form, that's going to be very difficult. Anyone can train and launch their own LLM; there's no monopoly or differentiator. Sure, it's possible that over time it will uplift search engines, but it won't create a new industry vertical like search, mobile, social media, or cloud did. I think of it more as a feature/augmentation than its own thing.


Services. Inevitably some of the queries involve recommending a service, and when you have two equal substitute services in a market, both will pay a certain price to be recommended over the other.


That's a very highfalutin way of saying "ads", which feels like it still has one foot planted squarely in the monetization thinking of yesterday. As lots of other people have pointed out, ads in the midst of a blob of chat output don't work the same way as they do on a page of search results. Search results are impersonal and so putting promoted content in them is less of a personal affront. But if the interface is a "chat session" with something that's designed to feel human-like in its responses, the interleaving of paid content produces a completely different psychological response in users. It's more insulting and undermines trust.

To put it another way: the main value proposition of using something like ChatGPT to navigate the internet is that you're putting your trust in it to filter out the noise on your behalf. If you can't trust it to actually do that (there's still ad noise in what you get back), then what's the point?

Either people will pay a subscription fee to unlock the utility of an information-distilling agent, or they won't. Trying to sidechain ad revenue into that equation is self-defeating.


Search is ad-influenced already, but there's still signal under the noise.

Adding a chat front end is just going to lower the SNR, because ChatGPT has no idea what facts are or how to check them.

Unfortunately it's also the main attraction for corporate revenue generation. You can sell stuff conversationally. Woo hoo. These systems are going to turn into automated used car sales bots which use persuasion techniques to steer users towards a sale.

From the user POV the main attraction is the prospect of a kind of universal summarising WikiBot and bureaucratic paperwork automator.

Those are fundamentally different domains.

Users have been pretty relaxed about being manipulated and distracted by social media and covert PR/sales/influencer operations, so there's going to be a huge market for the bad stuff.

But it's just corporate noise, as it always is. The real value will come from processed search in the sense of automated teaching and intelligence augmentation.

Unfortunately, that's not where most of the research will go. It's not going to become common until LLMs are taught to fact-check with high reliability, and the cost of entry is low enough for that to be offered as a service.

Meanwhile - yes, exactly: ads disguised as search results.


>Either people will pay a subscription fee to unlock the utility of an information-distilling agent, or they won't.

This feels a bit like projection though. People in general are trained to tolerate ads for most freemium services, such as social media, search, etc., and chat is no different.

For any market involving human attention, there's a portion willing to pay money for the service, but a significantly larger portion willing to trade attention time (e.g. ad impressions) for a free service instead.


> the main value proposition of using something like ChatGPT to navigate the internet is that you're putting your trust in it to filter out the noise on your behalf.

Right. That's a transient state, unfortunately - we can trust ChatGPT now because we know OpenAI had neither the time nor resources nor a reason to make their tool biased for commercial purposes (they're busy biasing and constraining it so it doesn't generate too much bad press, but this doesn't affect the trustworthiness of responses to typical queries). A model like this obviously won't be allowed to gain widespread adoption as a search proxy - it's destructive to commercial interests.

> If you can't trust it to actually do that (there's still ad noise in what you get back), then what's the point?

Exactly. The problem is, as users, we have no say in it. If Microsoft and Google decide that conversational interfaces are the future, then we'll be doing searches via ChatGPT-derived sales bots. End of story. Google and Microsoft each have enough clout to unilaterally change how computing works for everyone. And if they both decide to compete on quality of their ML search chatbots, there's no force on Earth that could stop it. Short to mid term, if they want it, we have no choice but to use it (long-term this might create an opening for a competitor to claw back some of the search market with a chatbot-free experience).

> Either people will pay a subscription fee to unlock the utility of an information-distilling agent, or they won't. Trying to sidechain ad revenue into that equation is self-defeating.

This, unfortunately, has been proven false again and again. Newspapers. Radio. Broadcast TV. Cable TV. Music streaming. Video streaming. On-line news and article publishing. And so on.

Advertising is a disease, a cancer that infects and slowly consumes every medium and form of communication we create. Often enough, creation of a new medium is driven by the desire for an alternative, after the old medium became thoroughly consumed by advertising and seems to be reaching terminal stage.

Side-chaining ads into a chatbot interface is going to be even more powerful than ads in normal search results: not only can you tweak the order of recommendations like search engines do today, you can also tweak the tone and language used in the conversational aspects, effectively turning the bot into a sneaky salesman.


Agreed. The level of naysaying on Bing + GPT (along with ChatGPT) is absurd.


Is this materially better than standard "fill in the blank" boilerplate? I would guess it is in terms of variety, but in content?


I played with the dev Edge version, which was updated today with a chat feature. I was impressed by how well it can write abstract stuff or summarize data by making bullet points. Try drilling down to concrete facts or details, though, and it struggles and mistakes appear. So we don't go there.

On the bright side, asking it for salmon steak sauce recipes is not a bad experience at all. It creates a list, filters it, and then can help you pick out the best recipe. And this is probably the most frequent use case for me on a daily basis.


When someone says that they used this or that technique to increase their research productivity, I always think about TRIZ and the fact that so few people know about it.


It helps to provide at a minimum some context:

TRIZ (/ˈtriːz/; Russian: теория решения изобретательских задач, teoriya resheniya izobretatelskikh zadach), literally: "theory of inventive problem solving", is “the next evolutionary step in creating an organized and systematic approach to problem solving. The development and improvement of products and technologies according to TRIZ are guided by the objective Laws of Engineering System Evolution. TRIZ Problem Solving Tools and Methods are based on them.”[1] In another description, TRIZ is "a problem-solving, analysis and forecasting tool derived from the study of patterns of invention in the global patent literature".[2] It was developed by the Soviet inventor and science-fiction author Genrich Altshuller (1926-1998) and his colleagues, beginning in 1946. In English the name is typically rendered as the theory of inventive problem solving,[3][4] and occasionally goes by the English acronym TIPS.

https://en.wikipedia.org/wiki/TRIZ


TRIZ is extremely hard to apply beyond engineering areas. I tried to bend it to software development with no luck.


Can you imagine a football team half remote / half on the field? It can only be one or the other (say, when they all play it co-op via a game console). And yes, the specifics of our work are different (software development happens in the mind), but the underlying principles of human communication are the same. Fully remote communication works, but worse than on-site, unless you are outsourced or one of 1,000 employees whose productivity is measured by lines of committed code. Seriously, this road of thinking is popularised by the latter type of employee, because they are the majority, I am afraid.


>Can you imagine a football team half remote / half on the field?

On gameday several members of the coaching staff for football teams are in skyboxes in the stadium removed from the players and coaches on the field. That is because they prefer to have an aerial view that allows them to see the game better. It works better for them and therefore it works better for the team. Your analogy doesn't provide the obvious answer you think it does.


Because not everyone is productive? Also, I want statistically significant proof of the correlation between working from home and company profits. I doubt there is such research, because it is nearly impossible to prove due to many other factors. For example, my company's profits also increased, but not because of home office; rather, because of people doing more stuff online. Why? Because 2 out of 3 colleagues complained about being unproductive and demotivated at home.


What about letting each person choose for themselves? If I want to work remote 100% of the time, I can. If I want to work only in the office, I can. If I want to work remote 3 days and in the office 2 days, I can. If I want to work 10 hours for 4 days and take the 5th off, I can.


> I want statistically significant proof of the correlation between working from home and company profits.

Let's start with statistically significant proof of the correlation between working from the open office and company profits. That game cuts both ways.

Or simply let everyone work as and where they wish and apply the same evaluation and retention criteria to everyone. No need to discriminate against one group.


So "because not everyone is productive", everyone should be mandated to go back into the office?


No, but I am in favour of communicating this beforehand, during the interview. There are on-site companies, companies who allow remote work, and companies working only remotely. Also, I am in favour of making exceptions if you can prove, over the course of several months, that you are more productive while completely remote. You can tell that I am suspicious of the remote == productivity mantra. That's because we work in teams (unless you're a freelancer or a one-man-project guy, which in a well-managed company is a rare thing), and as human beings we need eye-to-eye communication to work efficiently.


Why isn't the past year and a half of ratings while working remotely enough "proof" of productivity? Really, why is my opinion as an employee about how I work best not enough? You are exhibiting the same issue described elsewhere in this thread: you do not have enough trust in your employees to allow remote work.


Because it is always more about the team dynamics and chemistry than about you as an individual. Being a lone wolf and a do-it-all guy doesn't mean you will be valuable in a team or well connected with your colleagues, or, on the other hand, that you will outperform several people. At the end of the day, the company size, business domain, etc. have more impact on whether you work remotely or not, and have to be evaluated and assessed. We allow remote work, but these are exceptions for those who cannot move easily.


So explain my company. My team is split into two primary locations. The HQ, where I work is a typical multistory office building, while the remote site is more stripped down. The remote site has no managers for my team, our managers are all at the HQ.

So essentially, the workers at the remote site have been working (tada) remotely for over 13 years. And they do a great job. We communicate via email, by Teams/Webex, and there's no real difference.

And then COVID comes along and we all WFH for the last 18 months. Now when things appear to be getting better, mgmt is all in a rush to "RTO."

So not only have we proven that we can work well remotely in non-COVID times, but under the immense pressure of COVID, we had record profits.

So now we're going with the worst of all worlds for my team: hybrid. We're allowed to work 1-3 days a week remotely, but the other days we'll be in the office at desk hotels. Sounds like so much fun.

It really boils down to old school VPs/Managers who think butts in seats is how you get work done. And I say this as someone who's closer to retirement age than most.


Why should "more productive" be the standard?

If a person is equally productive but happier, why not let 'em be happier?


It's weirdly saddening that these are the things people need a whole list for, and need to keep being reminded of. Yet it is very often the case. The vast majority of engineers don't even consider these to be of big importance. I wonder if this is something specific to our profession or to the personalities of the people who follow it.


In the past, for me, the word "should" was the blocker to improving on some of these. E.g., senior manager X should know how this works. Programmer Y should be promoted. Technical debt Z should be paid down. I used to get stuck in this dismissive stance. It was the idealist in me saying we shouldn't have to waste energy on this stuff because it should be common sense. I was seriously lacking compassion for some situations. I think learning to synergise my empathetic side and my logical side has allowed me to step away from what "should" be toward what "can" be.


In my case, growing up in front of the computer "helped" me avoid all social situations.

This made me a programmer, but oblivious to all sorts of things about how the world really works.

I needed years of conscious effort to bring myself up to a basic level, which everyone around me seems to have mastered yeeears ago.

But it's well worth it. You just need self-awareness; then realizing and admitting "I'm not good at X, but I will learn" is the lifelong ultimate meta-skill.


A strange jump from one new language to another that is intended to solve a different set of problems. Why not Nim (which actually has some data libs to compare against)? Performance-wise, Julia and Rust are not a fair comparison either once you're out of Julia's comfort zone. So, is the actual reason the author decided to try one over the other, and not look at anything else, simply exposure? Julia could also learn a lot from Ada about safety. It could learn tons from APL about managing multidimensionality in a concise manner. And even though I believe Julia and Rust are both great languages everyone should learn about, this kind of choice between one new shiny thing and another new shiny thing feels rushed and therefore amateurish.


no u


I like D and could pick it up easily. It feels a lot like C, but without the memory-management overhead, with Ada-inspired safety, and with lots of productivity features. However, where D really shines is how little time passes between learning the language and starting to do something useful in it.


Definitely. If you come from a C-family background it feels quite familiar.


Plenty of great answers above already. What I also think is a valid argument is that in order to appreciate the language, one has to be a seasoned developer, hence the many heated discussions on HN or Reddit. It reminds me of Lua, where at least one person would mention something about 1-indexed arrays and nil-element behaviour.


I hypothesise that the majority of active SO users are beginner or intermediate programmers. I used SO quite frequently when I started learning how to program. Now I don't have the need, nor do I participate in their surveys. My senior colleague doesn't use it either; moreover, he doesn't even have time for that. You can occasionally look up a few things, but that's that. In comparison, just three or four years ago it was my go-to website. What you see in the results of their survey is a sample of the active users, who are predominantly between this and that age, nothing more and nothing less.


As an individual contributor over 35, I land on Slack Overflow about once a month, I'd guess (it feels like even less, though it may well be more if I do it without consciously remembering). The problems I need to solve have become so specialized and niche that it's often not worth googling. They mostly require reading existing code or talking to people.

That’s not to say that I know everything after those decades. But for the things that are more “I forgot the details, yet again” (or “I didn’t really know that detail but now it has become important”), such as for example “what are the exact semantics of that uncommon assembly instruction” or “what does the C standard define here”, it’s well advised to just look into an official source instead.


Oops, that “Slack Overflow” typo was unintentional, I swear. (But I missed the window where I could still edit it.)


Also, I suspect that younger people are more likely to have time to fill out a survey as requested by some company/website/organization.

With older people having less time and seeing less point in doing it.

