You fear that over time, artificially intelligent systems will suffer from increasingly harmful variance as the quality of their training data deteriorates, until there is total model collapse or some other systemic real-world harm.

Personally, I believe we will eventually discover mathematical structures, a sieve of sorts, which can reliably extract objective truth, at least in terms of internal consistency and relationships between objects.

It's unclear philosophically whether this is possible in general, but it might be possible within specific constraints. For example, we could solve the accuracy of dates specifically through some sort of automated reference-checking system, and possibly generalize the approach to entire classes of problems. We might even be able to encode this behavior directly into a model. Dates, at least, are purely empirical data: there is an objective moment, or a statistically likely range of time, recorded somewhere. The problem then becomes one of locating, identifying, and analyzing the correct sources for a given piece of information, and having a provably sound method of checking the work, likely not through an LLM.
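
To make that concrete, here is a minimal sketch in Python of what checking a date outside an LLM could look like. Everything here is hypothetical: the source names and data are made up, and a real system would query curated databases or archives rather than an in-memory dict. The point is just that a claim is accepted only when a quorum of independent sources agrees.

    from collections import Counter
    from datetime import date

    # Toy stand-ins for independent structured references. In a real
    # system these would be curated databases or archives, not an LLM.
    SOURCES = {
        "archive_a": {"moon landing": date(1969, 7, 20)},
        "archive_b": {"moon landing": date(1969, 7, 20)},
        "archive_c": {"moon landing": date(1969, 7, 21)},  # a noisy source
    }

    def verify_date(event, quorum=2):
        """Accept a date only when at least `quorum` independent sources
        agree on the same value; otherwise report no consensus."""
        reported = [
            d for claims in SOURCES.values()
            if (d := claims.get(event)) is not None
        ]
        if not reported:
            return None
        value, count = Counter(reported).most_common(1)[0]
        return value if count >= quorum else None

    print(verify_date("moon landing"))  # two of three sources agree

The verification step is deterministic and auditable, which is what makes its output more trustworthy than a model's unchecked claim.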

I think we will come to discover that LLMs/transformers are a highly generalizable and integral component of a complete artificial brain, but that we will soon uncover better metamodels exhibiting true executive functioning and self-referential loops, which would allow them to be trusted for critical tasks: leveraging the transformer architecture where it helps, while employing redundancy and other techniques to increase confidence.
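
As a toy illustration of the redundancy idea, here is a sketch, assuming a hypothetical ask_model stand-in rather than any real API: the same question goes to several independent instances, and an answer is trusted only when a supermajority agrees.

    import random

    def ask_model(seed, question):
        # Toy stand-in for an independent model instance: usually
        # right, occasionally wrong. Not a real API.
        rng = random.Random(seed)
        return "1969" if rng.random() < 0.9 else "1968"

    def redundant_answer(question, n=5, threshold=0.8):
        """Trust an answer only when a supermajority of independent
        instances agrees on it; otherwise abstain."""
        answers = [ask_model(seed, question) for seed in range(n)]
        best = max(set(answers), key=answers.count)
        agreement = answers.count(best) / n
        return best if agreement >= threshold else None

    print(redundant_answer("In what year did the moon landing occur?"))

Abstaining on disagreement is the key design choice: for critical tasks, no answer is safer than a low-confidence one.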



