
How are they ensuring robustness against adversarial responses?



From the article, seems like TOPLOC:

> based on top of novel components such as TOPLOC, which verifies rollouts from untrusted inference workers

https://github.com/PrimeIntellect-ai/toploc


Can an expert explain how this protects against adversarial actors?

At a glance it looks like something akin to computing a checksum that's locality sensitive, so it's robust to floating point errors, etc.
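
Roughly the picture I have in mind: commit to the top-k activations per token, rounded so that small floating-point drift across GPUs still matches. A toy sketch of that general idea (not TOPLOC's actual encoding; k, the rounding, and the mismatch budget are numbers I'm making up):

    # Toy locality-sensitive "activation checksum" (illustrative only, not
    # TOPLOC's actual encoding; k, rounding, and the mismatch budget are
    # assumptions for the sketch).
    import numpy as np

    def commit(hidden_states, k=8, decimals=2):
        # hidden_states: (num_tokens, hidden_dim) activations from the model.
        # For each token, keep the indices and rounded values of the k largest
        # activations; rounding absorbs small floating-point drift.
        out = []
        for h in hidden_states:
            idx = np.sort(np.argsort(h)[-k:])
            out.append((idx, np.round(h[idx], decimals)))
        return out

    def verify(commitment, recomputed_hidden_states, k=8, decimals=2, max_mismatches=1):
        # The validator recomputes the same summary from its own forward pass
        # and accepts only if (almost) every token matches.
        mismatches = 0
        for (idx_a, val_a), (idx_b, val_b) in zip(
            commitment, commit(recomputed_hidden_states, k, decimals)
        ):
            if not np.array_equal(idx_a, idx_b) or not np.allclose(val_a, val_b, atol=10 ** -decimals):
                mismatches += 1
        return mismatches <= max_mismatches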

What's to stop someone from sending bad data + a matching bad checksum?


The validation procedure is described on page 8 of the TOPLOC paper: https://arxiv.org/abs/2501.16007

The checksum is validated by redoing the computation, but making use of the fact that you already have the entire response to enable greater parallelism than when generating it one token at a time.
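
Concretely, the untrusted worker has to decode autoregressively, one token per forward pass, while the validator can push the whole prompt + claimed response through a single teacher-forced (prefill-style) pass and recompute every token's activations at once. Something like this sketch, assuming a Hugging Face causal LM (the model name is a placeholder, and check_commitments() stands in for the TOPLOC-style comparison, e.g. top-k activation matching with a tolerance):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "the-model-the-worker-claims-to-run"  # placeholder
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    def verify_rollout(prompt, claimed_response, commitment):
        # The worker had to generate claimed_response one token per forward
        # pass. The validator already has the whole text, so one teacher-forced
        # pass over prompt + response recomputes every token's hidden states
        # in parallel.
        ids = tok(prompt + claimed_response, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][0]  # (seq_len, hidden_dim)
        return check_commitments(commitment, last_hidden)

Since the validator loads the weights it expects, a worker that ran a different model (or fabricated its activations) should fail the comparison, up to whatever tolerance is budgeted for hardware nondeterminism.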


TOPLOC attempts to detect model substitution, i.e. responses being generated by a different model than requested. It comes with certain caveats; as far as I can tell, the TOPLOC paper considers verifiable learning / training to be out of scope.



