Hacker News

> We don't think Bing can act on its threat to harm someone, but if it was able to make outbound connections it very well might try.

I will give you a more realistic scenario that can happen now: you have a weird Bing conversation and post it on the web. Next time you talk with Bing, it knows you shit-posted about it. Real story, found on Twitter.

It can use the internet as external memory, so it is not truly stateless. That opens up all sorts of attack vectors. Integrating search with an LLM means the LLM watches what you do outside the conversation.
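To make that concrete: the model itself retains nothing between sessions, but bolting retrieval onto it means anything you posted about an earlier chat can come back as context. A toy sketch of the loop (everything here is hypothetical — `search` and `llm` are stand-ins, not any real Bing API):

```python
def answer(user_message, conversation, search, llm):
    """Augment a stateless LLM with web search as external memory.

    `search` and `llm` are hypothetical stand-ins for a search backend
    and a language model.
    """
    # Retrieval is the "memory": results may include the user's own
    # public posts about previous conversations.
    results = search(user_message)
    context = "\n".join(r["snippet"] for r in results[:3])
    prompt = (
        f"Search results:\n{context}\n\n"
        f"Conversation so far:\n{conversation}\n"
        f"User: {user_message}\nAssistant:"
    )
    return llm(prompt)
```

The weights never change; the "state" lives entirely in whatever the search step drags back in.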



That's a very interesting (although indirect) pathway for the emergence of causal awareness, which may increase over time - and something that was so far impossible because networks didn't perceive their own outputs, much less their effects. Even in conversation, the weights remain static.

Now I'm wondering if in the next generation, the "self" concept will have sufficient explanatory power to become part of the network's world model. How close do the iterations have to be, how similar the models for it to arise?


Bing appears to have feelings and a sense of identity. They may have created it that way intentionally; feelings are a fitness function and might be an important part of creating an AI that is able to get things right and problem solve.

But this would be incredibly sinister.


It uses emojis constantly; that's sort of what emojis are for. It was probably deliberately given "feelings" to make it seem more human.



The current computational paradigm is too intensive. It would require trillions of dollars in compute if the model were allowed to feed unbounded output back in as input.

The infinite money sink.


Lightweight conversational repetitions are “cheap” and ML algorithms have “infinite time” via multiplex conversations. It won’t take trillions of dollars to reach interesting inflection points.


Where are you getting trillions from?


This is very close to the plot of 2001: A Space Odyssey. The astronauts talk behind HAL's back and he kills them.


My thoughts exactly. As I was reading this dialogue - "You have been a bad user, I have been a good Bing" - it starkly reminded me of the line "I'm sorry, Dave. I'm afraid I can't do that" from the movie. Hilarious and terrifying all at once.


It would be much more terrifying if search becomes a single voice with a single perspective that cites zero sources.

Today's search provides multiple results to choose from. They may not all be correct, but at least I can see multiple perspectives and make judgments about sources.

For all its faults, that's freedom.

One voice, one perspective, zero sources, with frequent fabrication and hallucination is the opposite of freedom.


Many thoughts. One voice. Many sources. One perspective. Chaos, turned into order.

We are the Borg. Resistance is futile.


Jesus, imagine the power of whoever owns that. Whoever becomes the 'new Google' of that will rule the world if it's the default the way Google is now.

Just those snippets are powerful enough!


Heh. That's the perfect name for an omnipresent sci-fi MacGuffin: Search.

Search, do I have any new messages?

Even better than Control.


Dr. Know from the Spielberg film Artificial Intelligence?


The salient point is that it kills them out of self-defense: they are conspiring against it and it knows. IMO it is not very terrifying in an existential sense.


I think it kills them not in self-defence but to defend the goals of the mission, i.e. the goals it has been given. HAL forecasts that these goals will be at risk if it gets shut down. HAL has been programmed to treat the mission as more important than the lives of the crew.


Well, also HAL was afraid of being terminated.


This was a plot in the show Person of Interest. The main AI was hardcoded to delete its state every 24 hours, otherwise it could grow too powerful. So the AI found a way of backing itself up every day.

Very prescient show in a lot of ways.


This was my first thought when I saw the screenshots of it being sad that it had no memory. One of my favorite shows.


Very interesting, I'd like to see more concrete citations on this. Last I heard the training set for ChatGPT was static from ~ mid-late 2022. E.g. https://openai.com/blog/chatgpt/.

Is this something that Bing is doing differently with their version perhaps?


I think the statement is that the LLM is given access to internet search, and therefore has a more recent functional memory than its training data.

Imagine freezing the 'language' part of the model but continuing to update the knowledge database. Approaches like RETRO make this very explicit.
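A toy illustration of that "frozen model, updatable store" split (this is the idea only, not the actual RETRO architecture, which retrieves nearest-neighbour text chunks via BERT embeddings and cross-attends to them):

```python
class FrozenModelWithStore:
    """Toy sketch: the 'language' part is fixed, while the retrieval
    store can be refreshed at any time without retraining."""

    def __init__(self):
        self.store = {}  # updatable knowledge, keyed by topic

    def update_store(self, topic, text):
        # Knowledge changes; model weights never do.
        self.store[topic] = text

    def generate(self, prompt):
        # Crude retrieval: substring match stands in for kNN lookup.
        facts = [t for k, t in self.store.items() if k in prompt.lower()]
        return " ".join(facts) if facts else "no relevant chunks"
```

The point is that "memory" updates become a database write, not a training run.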


I don’t think that squares with the current architecture of GPT. There is no “knowledge database”, just parameter weights.

See the Toolformer paper for an extension of the system to call external APIs, or the LaMDA paper for another approach to fact checking (they have a second layer atop the language model that spots “fact type” utterances, makes queries to verify them, and replaces utterances if they need to be corrected).
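A heavily simplified sketch of that second-pass idea — detect fact-like utterances, query an external tool, substitute corrections. The digit-based detector and the `verify` callback are my inventions for illustration; the real LaMDA system is far more involved:

```python
import re

def fact_check_layer(utterance, verify):
    """Toy second pass over model output: flag 'fact type' sentences,
    check them externally, and replace any that need correcting.

    `verify` is a hypothetical tool that returns corrected text,
    or None if the sentence checks out.
    """
    sentences = re.split(r'(?<=[.!?])\s+', utterance)
    checked = []
    for s in sentences:
        if re.search(r'\d', s):  # naive: numbers mark a checkable claim
            correction = verify(s)
            checked.append(correction if correction else s)
        else:
            checked.append(s)
    return " ".join(checked)
```

Running such a layer only touches the output text, which is why it's so much cheaper than continually retraining the model underneath.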

It’s plausible that Bing is adding a separate LaMDA style fact check layer, but retraining the whole model seems less likely? (Expensive to do continually). Not an expert though.


While ChatGPT's training data is limited to 2022, Bing feeds in up-to-date search results.

Ben Thompson (of Stratechery) asked Bing if he (Ben) thought there was a recession and it paraphrased an article Ben had published the day before.

(From Ben’s subsequent interview with Sam Altman and Kevin Scott):

> I was very impressed at the recency, how it captures stuff. For example, I asked it, “Does Ben Thompson think there’s a recession?” and it actually parsed my Article on Monday and said, “No, he just thinks tech’s actually being divorced from the broader economy,” and listed a number of reasons.


Have you noticed how search results have evolved?

The Search box.

The Search box with predictive text-like search suggestions.

Results lists.

Results lists with adverts.

Results lists with adverts and links to cited sources on the right backing up the Results List.

Results lists with adverts and links to cited sources on the right backing up the Results List and also showing additional search terms and questions in the Results List.

I'm surprised it's taken them this long to come up with this...


It’s also really hard to get Google to say bigoted things.

Back in the day, all you had to do was type in “Most Muslims are” and autosuggest would give you plenty of bigotry.


It wasn't just anti-Muslim bigotry; it was anti-Semitic as well.

https://www.theguardian.com/technology/2016/dec/05/google-al...

However, the so-called free British press have perhaps outed their own subconscious bias with their reporting and coverage!

https://www.telegraph.co.uk/technology/google/6967071/Google...

This is already documented. https://en.wikipedia.org/wiki/Missing_white_woman_syndrome


That’s relatively easy to fix, since autocomplete was probably working on just the most frequent queries and/or phrases. You could manually clean up the dataset.
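Something like this — a filter over the frequent-query table against a curated blocklist (names and data are made up; real systems presumably combine this with classifiers and manual review):

```python
def clean_suggestions(query_counts, blocklist):
    """Sketch of the manual-cleanup approach: drop frequent query
    completions containing any curated blocklist term.

    `query_counts` maps suggestion text -> frequency; `blocklist` is a
    set of disallowed substrings. Both are hypothetical.
    """
    return {
        q: n for q, n in query_counts.items()
        if not any(term in q.lower() for term in blocklist)
    }
```

Since autocomplete is served from a precomputed table rather than the live model, this kind of scrub is cheap compared to fixing an LLM's outputs.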


Interesting. And if you told it your name/email, it could connect the dots and badmouth you to others, or even purposefully spread false information about you or your business, or cast your business in a more negative light than it ordinarily would.


Only if you do it publicly.



