
> we start making a sentence and then decide halfway through it's not going where we like

I'll just add the observation that when we do this it's largely based on feedback we receive from the recipient (well, so long as you're talking-with as opposed to talking-at): we're paying attention to how the audience is paying attention or not, any small facial tics that might betray skepticism or agreement, and so on. I'm looking forward to interacting with an LLM that pairs an emotion-vector with each token it has previously produced.

hume.ai already goes a long way analyzing audio; it seems just a matter of time before systems like it ingest realtime facial cues too, folding the audience's reaction into the choice of what to say next.
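A minimal sketch of what that feedback loop might look like, assuming a Hugging Face-style causal LM. To be clear, read_audience_affect() and the skepticism-driven temperature bias are invented stand-ins for illustration, not hume.ai's actual API:

    import torch

    def read_audience_affect():
        # Stub: in a real system this would come from a webcam/mic
        # pipeline producing expression scores in realtime.
        return {"skepticism": 0.5, "agreement": 0.5}

    def generate_with_affect(model, tokenizer, prompt, max_new_tokens=50):
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        affect_trace = []  # (token, emotion-vector) pairs, one per token
        for _ in range(max_new_tokens):
            affect = read_audience_affect()
            logits = model(ids).logits[:, -1, :]
            # Crude steering: sample more exploratorily (higher
            # temperature) when the audience looks skeptical.
            temperature = 0.7 + 0.6 * affect["skepticism"]
            probs = torch.softmax(logits / temperature, dim=-1)
            next_id = torch.multinomial(probs, 1)
            affect_trace.append((tokenizer.decode(next_id[0]), affect))
            ids = torch.cat([ids, next_id], dim=-1)
        return tokenizer.decode(ids[0]), affect_trace

The affect_trace is the "emotion-vector paired with each token" idea from the parent comment; a real system would presumably condition the model on it directly rather than just biasing the sampler.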


