Hacker News

Not sure where else to post this, but when I call any of the Gemini 2.5 models via the API, I get an "empty content" response about 50% of the time. To be clear, the API call itself succeeds, but the `content` returned by the LLM is just an empty string.

Has anyone here had any luck working around this problem?



What finish reason are you getting? Perhaps your code sets a low max_tokens, so the generation stops while the model is still thinking, without giving any actual output.
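Something like this is how I'd sanity-check it. A small sketch over a parsed response dict; the field names (`choices[n].finish_reason`, `choices[n].message.content`) follow the OpenAI-compatible response shape, so adjust if you're hitting the native endpoint:

```python
def diagnose(response: dict) -> str:
    """Classify an empty-content chat response by its finish reason."""
    choice = response["choices"][0]
    content = (choice.get("message") or {}).get("content") or ""
    reason = choice.get("finish_reason")
    if content.strip():
        return "ok"
    if reason == "length":
        return "hit max_tokens (possibly all spent on thinking)"
    if reason == "content_filter":
        return "blocked by a safety filter"
    return f"empty content, finish_reason={reason!r}"

# Mocked response matching the symptom described above:
resp = {"choices": [{"finish_reason": "length",
                     "message": {"role": "assistant", "content": ""}}]}
print(diagnose(resp))  # hit max_tokens (possibly all spent on thinking)
```

If it reports `length` with empty content, the whole budget likely went to hidden reasoning tokens before any visible text was produced.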


The finish reason is `length`. I've tried minimal thinking budgets, very small prompts, and max_tokens values from 100 to 4000, and nothing makes a consistent dent in the behavior.
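One thing worth double-checking is that the thinking budget is actually being applied. A sketch of a raw request body for the native `generateContent` endpoint, with thinking capped via `generationConfig.thinkingConfig.thinkingBudget` so the token budget goes to visible output; the field names are my best reading of the current API docs, so verify them before relying on this:

```python
def build_request(prompt: str, max_output_tokens: int = 2048,
                  thinking_budget: int = 0) -> dict:
    """Build a generateContent request body with an explicit thinking cap."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "maxOutputTokens": max_output_tokens,
            # 0 disables thinking on models that allow it; as I understand it,
            # 2.5 Pro enforces a minimum, so use a small positive value there.
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }

body = build_request("What is 2 + 2?")
print(body["generationConfig"]["thinkingConfig"]["thinkingBudget"])  # 0
```

If you're going through an OpenAI-compatible proxy, a plain `max_tokens` won't necessarily map onto the thinking budget, which could explain why varying it made no difference.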


This can happen if the prompt or response is blocked by a safety filter. Check some of the other fields in the response.
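A quick way to check is to look at the safety fields on the parsed response. Field names here (`promptFeedback.blockReason`, candidate `finishReason == "SAFETY"`) follow the native REST response shape as I understand it, so treat them as assumptions:

```python
def safety_check(response: dict) -> str:
    """Report whether a generateContent response was blocked for safety."""
    feedback = response.get("promptFeedback", {})
    if feedback.get("blockReason"):
        # The prompt itself was blocked; no candidates are returned.
        return f"prompt blocked: {feedback['blockReason']}"
    for cand in response.get("candidates", []):
        if cand.get("finishReason") == "SAFETY":
            return "response blocked by safety filter"
    return "no safety block recorded"

resp = {"promptFeedback": {"blockReason": "SAFETY"}, "candidates": []}
print(safety_check(resp))  # prompt blocked: SAFETY
```

Given the parent reports a `length` finish reason rather than a safety one, this probably isn't the cause here, but it rules out one class of empty responses.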





