Hacker News
qeternity | 76 days ago | on: TPUs vs. GPUs and why Google is positioned to win ...
This is not the case for LLMs. FP16/BF16 training precision is standard, with FP8 inference very common. But labs are moving to FP8 training and even FP4.
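The trade-off behind these formats is how 16 bits get split between exponent and mantissa: BF16 keeps FP32's 8-bit exponent (so it rarely overflows) at the cost of only 7 mantissa bits, while FP16 spends 10 bits on the mantissa but only 5 on the exponent, capping it at ~65504. A minimal stdlib-only sketch (illustrative helpers, not anything from the comment) that emulates bfloat16 by truncating the fp32 bit pattern:

```python
import struct

def to_bf16(x: float) -> float:
    """Round-trip x through an emulated bfloat16 (8 exponent / 7 mantissa bits)
    by truncating the low 16 bits of its fp32 encoding."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

def to_fp16(x: float) -> float:
    """Round-trip x through IEEE fp16 (5 exponent / 10 mantissa bits),
    using struct's native half-precision 'e' format."""
    return struct.unpack(">e", struct.pack(">e", x))[0]

# fp16's extra mantissa bits preserve fine-grained steps that bf16 drops:
assert to_fp16(1 + 2**-10) == 1 + 2**-10   # exactly representable in fp16
assert to_bf16(1 + 2**-10) == 1.0          # truncated away in bf16

# ...but bf16 inherits fp32's exponent range, so magnitudes that overflow
# fp16 (max ~65504) survive; to_fp16(1e5) would raise OverflowError.
big = to_bf16(1e5)
```

One simplification: real bfloat16 hardware rounds to nearest-even, whereas this sketch truncates toward zero, so the emulated values can be slightly lower than hardware would produce.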