Hacker News

That really depends on how much the public accepts results without sources, aka no credit, aka rampant breaking of social norms and copyright, as well as destroying the easy ability to verify something. In many ways, OpenAI and anyone who supports it are trying to pull an Uber here, but shift the Overton window on something indescribably larger than transportation licenses. They want to Borg global intelligence (though, of course, they will be in control).

Say what you will about Google, they generally credit their sources. Yes, it's part of their advertising model, but it's still a Very Good Thing.

I hope that Google's plan is to release something that continues this model. If it's nearly as good as ChatGPT and prominently includes sources, it is the right future.



I think you're assuming that Bing is just going to let GPT blindly answer queries; that's not at all how you build a system like that.

How it actually works is more like:

1. User asks "What's the tallest building in the world?"

2. MS, rightfully, assumes that GPT has no idea what the answer to this is. And even if you trusted it to know, its training data will always lag behind, and new buildings could have been built since its cutoff.

3. MS searches their index for the most relevant document snippets related to this query and feeds them to GPT as context.

4. MS asks GPT to answer the question in the context of those document snippets.

5. MS returns the result from GPT along with references to the documents it sourced the information from.

This is how the OpenAI /search endpoint used to work.
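The steps above can be sketched in a few lines. This is a toy illustration, not Bing's or OpenAI's actual implementation: the snippet index, the naive word-overlap retrieval, and the helper names are all hypothetical stand-ins, and the actual LLM call (steps 4–5) is left as a stub.

```python
# Hypothetical sketch of the retrieve-then-answer pipeline described above.
# The snippet "index" is a hardcoded list; real systems use a search index.

SNIPPETS = [
    ("doc1", "The Burj Khalifa in Dubai is the tallest building in the world at 828 m."),
    ("doc2", "The Eiffel Tower in Paris is 330 m tall."),
    ("doc3", "Mount Everest is the tallest mountain on Earth."),
]

def retrieve(query, k=2):
    """Step 3: score each snippet by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        SNIPPETS,
        key=lambda s: len(q & set(s[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, snippets):
    """Step 4: ask the model to answer only from the retrieved context."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in snippets)
    return (
        "Answer the question using only the context below, citing sources.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

query = "What's the tallest building in the world?"
prompt = build_prompt(query, retrieve(query))
# A real system would now send `prompt` to the model and return its answer
# along with the [doc_id] references (step 5); here we just show the prompt.
print(prompt)
```

The key design point is that the model is asked to answer *in the context of* the retrieved snippets rather than from memory, which is what makes attaching references (step 5) possible.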


If it does that, and properly highlights the sources, I have nothing to criticize. Though I think the results won't be as good if it doesn't use its entire breadth of knowledge (and if it does, the problem reappears).


OpenAI already has a model to improve factual accuracy and provide citations:

https://openai.com/blog/webgpt/

It's probably not too hard for them to tune ChatGPT and the upcoming GPT-4 that way, and I think it's very likely they will do something like that in Bing.


I think it'd be really nice for there to be an effort within GPT to have responses that don't mimic the bias of information found online and instead draw their own conclusions based on evidence, with the ability to scrutinize different types of evidence.


>In many ways, OpenAI and anyone who supports it are trying to pull an Uber

Can you elaborate here? (Honestly asking since I'm not seeing the similarity)


My understanding of that phrase is that "pull an Uber" means "break laws and social norms to more quickly deliver a product that beats the status quo". Uber broke laws in some regions that required special taxi licenses, and it broke social norms by blurring the lines between contractor and full-time employees when it took away certain employee freedoms common for contractors while not giving them full-time employment benefits.



