Zero-trust AI APIs serving Llama 2 70B inside enclaves (mithrilsecurity.io)
35 points by DanyWin on Sept 14, 2023 | 8 comments



This seems likely to be extremely weak. A TPM is somewhat useful for verifying that hardware you own, or can specify fairly tightly, is running what you think it is, as long as no moderately sophisticated physical attacker is around. But for servers in the cloud? How are you even supposed to know what firmware, etc., you're verifying?

Actual confidential computing systems (TDX, fancy variants of SEV, etc) are meant to address this type of use case. The TPM isn’t.
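
To make that concrete, here's a rough, hypothetical sketch of what the verifier side of attestation boils down to (placeholder names and digests, quote signature check omitted). The point is that the check is only meaningful if you already hold trustworthy reference measurements for the machine, which you generally don't for someone else's cloud server:

    import hashlib

    # Golden values the client must know in advance (firmware, bootloader,
    # kernel, application stack). For an arbitrary cloud server these are
    # exactly the values an outside user has no good way to pin down.
    EXPECTED_MEASUREMENTS = {
        "firmware": hashlib.sha256(b"vendor firmware build 1.2.3").hexdigest(),
        "kernel": hashlib.sha256(b"kernel 6.1.0 + config").hexdigest(),
        "app_stack": hashlib.sha256(b"inference server image").hexdigest(),
    }

    def verify_quote(reported: dict, quote_nonce: str, my_nonce: bytes) -> bool:
        """Compare reported measurements against pinned reference values.

        Assumes the quote's signature has already been verified against a key
        chaining back to a hardware root of trust (TPM EK, or the CPU vendor's
        keys for TDX/SEV-SNP); that step is omitted here.
        """
        # Freshness: the quote must cover the nonce we challenged with.
        if quote_nonce != hashlib.sha256(my_nonce).hexdigest():
            return False
        # The crux: every reported value must match a reference we trust.
        return all(reported.get(k) == v for k, v in EXPECTED_MEASUREMENTS.items())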


I've been thinking for a while about this privacy model: an E2E encrypted channel where the AI processing happens inside before the result is sent back.
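
Rough sketch of what I mean, assuming a session key that is only ever released to an enclave that has passed attestation; uses Python's cryptography package and a stubbed-out model call, not anything from Mithril's actual stack:

    from cryptography.fernet import Fernet

    def run_llm(prompt: bytes) -> bytes:
        # Placeholder for the actual Llama 2 70B call inside the enclave.
        return b"(model output for: " + prompt[:20] + b"...)"

    # In a real deployment this key exchange would be bound to attestation,
    # so only code inside the verified enclave can hold the key.
    session_key = Fernet.generate_key()

    # --- client side ---------------------------------------------------
    client = Fernet(session_key)
    encrypted_prompt = client.encrypt(b"Summarize this confidential document ...")

    # --- inside the enclave ----------------------------------------------
    enclave = Fernet(session_key)
    prompt = enclave.decrypt(encrypted_prompt)
    encrypted_completion = enclave.encrypt(run_llm(prompt))

    # --- back on the client ----------------------------------------------
    print(client.decrypt(encrypted_completion).decode())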

Awesome to see people doing it and wish you the best of luck.


How “secure” is this zero-trust? Are there cryptographic guarantees?


Is the model being served using confidential GPUs?


brilliant! very excited to use this.

what will pricing look like (tried CMF-F)?


Hi there,

Thanks for your question and sorry for the delay in getting back to you!

The pricing information is available here: https://www.mithrilsecurity.io/pricing


Also, can you make your whitepaper accessible without the login/email hoops?


Hi, we've added a copy directly to our GitHub repo:

https://github.com/mithril-security/blind_llama/blob/main/do...



