baq | 3 months ago | on: AI 2027
If you bake the model onto the chip itself, which is what should eventually happen for local LLMs once a good-enough one is trained, you’ll be looking at an orders-of-magnitude reduction in power consumption at constant inference speed.
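
A rough back-of-envelope sketch of where that saving would come from, assuming the dominant cost today is streaming weights from off-chip DRAM/HBM on every decoded token and that keeping weights on-chip (or hardwired) cuts that per-byte energy by roughly 100x. The model size, precision, and per-byte energy figures below are illustrative assumptions, not numbers from this comment:

  # Back-of-envelope: per-token energy spent just moving weights for a
  # hypothetical 7B-parameter int8 model. All figures are rough assumptions.
  PARAMS = 7e9                  # assumed model size, 1 byte per weight (int8)
  BYTES_PER_TOKEN = PARAMS      # dense decode reads every weight once per token

  ENERGY_PJ_PER_BYTE = {
      "off-chip DRAM/HBM fetch": 100.0,  # assumption: on the order of 100 pJ/byte
      "weights baked on-chip":     1.0,  # assumption: on the order of 1 pJ/byte
  }

  for label, pj in ENERGY_PJ_PER_BYTE.items():
      joules_per_token = BYTES_PER_TOKEN * pj * 1e-12
      print(f"{label:24s} ~{joules_per_token:.2f} J/token for weight movement")

Under these assumptions the weight-movement term drops by roughly two orders of magnitude per token, while the arithmetic itself is unchanged, which is why inference speed can stay constant.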