Hi all, Ishaan from LiteLLM here (LiteLLM maintainer)
The compromised PyPI packages were litellm==1.82.7 and litellm==1.82.8. Those packages have now been removed from PyPI.
We have confirmed that the compromise originated from the Trivy dependency used in our CI/CD security scanning workflow.
All maintainer accounts have been rotated. The new maintainer accounts are @krrish-berri-2 and @ishaan-berri.
Customers running the official LiteLLM Proxy Docker image were not impacted. That deployment path pins dependencies in requirements.txt and does not rely on the compromised PyPI packages.
We are pausing new LiteLLM releases until we complete a broader supply-chain review and confirm the release path is safe.
From a customer exposure standpoint, the key distinction is deployment path. Customers running the standard LiteLLM Proxy Docker deployment path were not impacted by the compromised PyPI packages.
The primary risk is to any environment that installed the LiteLLM Python package directly from PyPI during the affected window, particularly versions 1.82.7 or 1.82.8. Any customer with an internal workflow that performs a direct or unpinned pip install litellm should review that path immediately.
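If you want to audit that path, a minimal sketch (this is my illustration, not an official LiteLLM tool; the version set comes from the incident notice above):

```python
# Illustrative audit helper: flag the two compromised releases named in
# this incident so an internal script can act on installed environments.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(installed_version: str) -> bool:
    """Return True if this litellm version is one of the compromised releases."""
    return installed_version in COMPROMISED_VERSIONS
```

You could pair this with `importlib.metadata.version("litellm")` to read the version actually installed in each environment.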
We are actively investigating full scope and blast radius. Our immediate next steps include:
- reviewing all BerriAI repositories for impact,
- scanning CircleCI builds to understand blast radius and mitigate it,
- hardening release and publishing controls, including maintainership and credential governance,
- and strengthening our incident communication process for enterprise customers.
We have also engaged Google’s Mandiant security team and are actively working with them on the investigation and remediation.
Maybe the news got distorted a bit after crossing the Atlantic, but weren't there substantial outcries after the bits that couldn't be touched had in fact been touched?
It would be nice to aggregate all that and put it under a "profile". Kind of like facebook, but your entire profile feed is just the long list of court records, assholery and screw overs for other people. I actually saw a version that someone did for Jack (Twitter's ex founder) a few years ago and it was hilarious but cleverly informative. That's honestly where I got this idea from.
Two things are holding back current LLM-style AI from being of value here:
* Latency. LLM responses are measured on the order of 1000s of milliseconds, whereas this project targets 10s of milliseconds; that's off by almost two orders of magnitude.
* Determinism. LLMs are inherently non-deterministic. Even with temperature=0, slight variations of the input lead to major changes in output. You really don't want your DB to be non-deterministic, ever.
From what I understand, in practice it often is true[1]:
Matrix multiplication should be “independent” along every element in the batch — neither the other elements in the batch nor how large the batch is should affect the computation results of a specific element in the batch. However, as we can observe empirically, this isn’t true.
In other words, the primary reason nearly all LLM inference endpoints are nondeterministic is that the load (and thus batch-size) nondeterministically varies! This nondeterminism is not unique to GPUs — LLM inference endpoints served from CPUs or TPUs will also have this source of nondeterminism.
"But why aren’t LLM inference engines deterministic? One common hypothesis is that some combination of floating-point non-associativity and concurrent execution leads to nondeterminism based on which concurrent core finishes first."
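The floating-point non-associativity in that hypothesis is easy to demonstrate directly. A minimal sketch (the batch-size effect itself depends on kernel internals and is harder to reproduce on demand, but this is the underlying mechanism):

```python
# Float addition is not associative: summing the same three numbers in a
# different order gives a different answer, because 0.1 is far below the
# rounding granularity (ulp) of 1e16 and gets absorbed.
a, b, c = 0.1, 1e16, -1e16

left = (a + b) + c   # 0.1 is absorbed into 1e16, then cancelled -> 0.0
right = a + (b + c)  # b and c cancel first, leaving 0.1 intact

print(left, right)   # 0.0 0.1
```

Any change in reduction order, such as a different batch size changing how a kernel tiles its sums, can shift results in exactly this way.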
How do you propose we measure signal? Lines of code is renowned for being a very bad measure of anything, and I really can't come up with anything better.
The OP said that they kept what they liked and discarded the rest. I think that's a reasonable definition for signal; so, the signal-to-token ratio would be a simple ratio of (tokens committed)/(tokens purchased). You could argue that any tokens spent exploring options or refining things could be signal and I would agree, but that's harder to measure after the fact. We could give them a flat 10x multiplier to capture this part if you want.
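As a toy calculation of that proposal (the function name and the 10x figure are just the ones floated above, nothing rigorous):

```python
# Toy version of the proposed metric:
#   signal-to-token ratio = tokens committed / tokens purchased,
# optionally crediting exploration tokens with a flat multiplier.
def signal_to_token_ratio(tokens_committed: int,
                          tokens_purchased: int,
                          exploration_multiplier: float = 1.0) -> float:
    return (tokens_committed * exploration_multiplier) / tokens_purchased

print(signal_to_token_ratio(2_000, 100_000))        # 0.02
print(signal_to_token_ratio(2_000, 100_000, 10.0))  # 0.2
```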
I personally discard code for the tiniest of reasons. If something feels off moments after I open the PR, it gets deleted. The reason we still have 1.2K open PRs is because we can't review all of them in time.
The most likely solution is to delete all of them after a month or two. By that time the open PRs on this project alone will be at least 10-20 more.
Doesn't seem like a very efficient process, no? Seems to me that investing in better output quality is exactly what's needed here, wouldn't you agree?
I feel they sit on the opposite end from the OP here. One side wants to write out specs that control the agent's implementation and achieve a one-shot execution. The other side says: let's not waste humans' time writing anything.
I’m personally torn. A lot of the spec talk, and now here in combination with TDD etc., feels like the pipe dreams of the mid-2000s. There was this idea of the Architect role who writes UML and specs, and a normal engineer just fills in the gaps. Then there was TDD. Nothing against it personally, but trying to write code test-first when you don’t really have a clue how a specific platform/system/library works had tons of overhead. It also had the side effect of code written in the way most convenient to test, not to execute. All in all, to throw these ideas together for AI now…
But throwing tokens out of the window and hoping the token lottery generates the best PR is also not the right direction in my book. Still, somebody needs to investigate both extremes, I say.
Actually, nobody said the spec needs to be written by humans.
My personal opinion: with today's LLMs, the spec should be steered by a human because its quality is proportional to result quality. Human interaction is much cheaper at that stage — it's all natural language that makes sense. Later, reasoning about the code itself will be harder.
In general, any non-trivial, valuable output must be based on some verification loop. A spec is just one way to express verification (natural language — a bit fuzzy, but still counts). Others are typecheckers, tests, and linters (especially when linter rules relate to correctness, not just cosmetics).
Personally, on non-trivial tasks, I see very good results with iterative, interactive, verifiable loops:
- Start with a task
- Write spec in e.g. SPEC.md → "ask question" until answer is "ok"/proceed
- Write implementation PLAN.md — topologically sorted list of steps, possibly with substeps → ask question
- For each step: implement, write tests, verify (step isn't done until tests pass, typecheck passes, etc.); update SPEC/PLAN as needed → ask question
- When done, convert SPEC.md and PLAN.md into PR description (summary) and discard
("Ask question" means an interactive prompt that appears for the user. Each step is gated by this prompt — it holds off further progress, giving you a chance to review and modify the result in small bits you can actually reason about.)
The workflow: you accept all changes before confirming the next step. This way you get code deltas that make sense. You can review and understand them, and if something's wrong you can modify by hand (especially renames, which editors like VS Code handle nicely) or prompt for a change. The LLM is instructed to proceed only when the re-asked answer is "ok".
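The gated loop above can be sketched like this (function names are mine; `implement`, `verify`, and `ask` stand in for the agent call, the tests-plus-typecheck run, and the interactive "ask question" prompt):

```python
# Sketch of the gated step loop: each PLAN.md step is implemented and
# verified, then held at an interactive gate until the user answers "ok".
def run_plan(steps, implement, verify, ask):
    for step in steps:
        while True:
            implement(step)          # agent writes code + tests for this step
            if not verify(step):     # step isn't done until checks pass
                continue
            if ask(step) == "ok":    # gate: user reviews the delta, approves
                break                # proceed to the next step
```

The point of the structure is that nothing advances past a step until both the mechanical checks and the human review have passed.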
This works with systems like VSCode Copilot, not so much with CC cli.
I'm looking forward to an automated setup where the "human" is replaced by an "LLM judge" — I think you could already design a fairly efficient system like this, but for my work LLMs aren't quite there yet.
That said, there's an aspect that shouldn't be forgotten: this interactive approach keeps you in the driving seat and you know what's happening with the codebase, especially if you're running many of these loops per day. Fully automated solutions leave you outside the picture. You'll quickly get disconnected from what's going on — it'll feel more like a project run by another team where you kind of know what it does on the surface but have no idea how. IMO this is dangerous for long-term, sustainable development.
From my experience, LLMs understand prompts just fine, even with substantial typos or severe grammatical errors.
I feel that prompting them with poor language makes them respond more casually. That might be confirmation bias on my end, but research does show that prompt language affects LLM behavior, even when the meaning doesn't change.
Infinite scrolling is only mentioned in the title. The actual legislation focuses on addictive patterns, of which infinite scroll is just one. The exact formulation will of course matter a lot, but it won't simply ban infinite scroll, as that would be trivial to circumvent.
The "lethal trifecta" is a limited view of security, as it's mostly concerned with leaking data. This solution focuses on a different aspect: preventing rogue actions (rather than rogue communications, per #3).
Not updating your DL after changing your address is a crime* in all US states. I'm not as familiar with the law elsewhere, but I'd be surprised if that weren't true in most other places.
*There are exceptions for active-duty military personnel and other limited cases.
It is a law, but rarely enforced. Also, some places like Washington are primarily digital, meaning you update your DL address online, but they don’t print a new ID unless you request it or your DL has expired.
Unless you’re wild camping, campsites have addresses. So do marinas where a ship would need to be docked more or less regularly to establish residency.
As for being a nomad, you don’t need a driver’s license or any kind of ID to wander if you’re willing to sleep rough. If you want to drive on public roadways, though, you’d better have a primary address where the courts can send someone if you kill someone in a traffic accident and bail.
Docking is expensive, so no. It's also only needed once per 5 years or so for maintenance.
The government issuing you a ticket doesn't mean your address has to be on the driver's license. They could register the number plate to an SSN, for instance.
Did you skip my last sentence? A traffic ticket is not the worst thing you can do in an automobile. And not everyone eligible for a drivers license will have an SSN.
The laws of a government can't override the laws of physics. If you don't have a place where you can receive mail, do they just arrest you or what? Do they assign a PO box to you?
This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.
If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.
And
> Dropped you a mail from [email]
I don't think there is any indication of a compromise, they are just offering help.