kpw94's comments

The options from big companies to run untrusted open source code are:

1) À la Google: build everything from source. The source is mirrored/copied over from the public repo. (Audit/trust the source every time.)

2) Only allow imports from a company-managed mirror. All imported packages need to be signed in some way.

Here, only (1) would be safe. (2) would only be safe if it's not updating the dependencies too aggressively and/or internal automated or manual scanning on version bumps catches the issue.

For small shops & individuals: kind of out of luck. The best mitigation is to pin/lock dependencies and wait long enough, hoping folks like Fibonar catch the attack...

Bazel would be one way to do (1), but realistically, if you don't have the bandwidth to build everything from source, you'd rely on external sources via rules_jvm_external, or rules_python locked to specific pip versions, so if the specific packages you depend on are affected, you're out of luck.


The autoboxing example IMO is a case of "Java isn't so fast". Why can't this be optimized behind the scenes by the compiler?

The rest of the advice is great: things compilers can't really catch but a good code reviewer should point out.


javac, for better or worse, is aggressively against doing optimizations, to the point of producing ridiculously bad code. The belief tends to be that the JIT will do a better job fixing it if it has bytecode that's as close as possible to the original code. But this only helps if a) the code ever gets JIT'd at all (rarely true for e.g. class initializers), and b) the JIT has the budget to do that optimization. Although JITs have the advantage of runtime information, they are also under immense pressure to produce any optimizations as fast as possible. So they rarely do the level of deep optimization an offline compiler does.

Why should the compiler optimize obviously dumb code? If the developer wants to create billions of heap objects, the compiler should respect that. Optimizing dumb code is what made C++ unbearable: you write one thing and the compiler generates completely different code.

The problem is rather that Java doesn't have generics over primitives, or structs, so you're kind of forced to box things or can't use collections.

No, in the example they provided, the programmer wrote obviously stupid code. It has nothing to do with necessity:

    Long sum = 0L;
    for (Long value : values) {
        sum += value;
    }
I also want to highlight that there are plenty of collections utilizing primitive types. They're not generic but they do the job, so if you have a bottleneck, you can solve it.

That said, TBH I think that adding autoboxing to the language was an error. It makes bad code look too innocent. Without autoboxing, this code would look like a mess and probably would have been caught earlier.
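To make the boxing visible, here's a rough sketch (class and method names are mine, not from the article) of what that loop does once the autoboxing sugar is removed, next to the primitive-long version that avoids it:

```java
import java.util.List;

public class BoxingDemo {
    // Roughly what `sum += value` means without autoboxing sugar:
    // unbox both operands, add, then box the result again (a fresh
    // Long allocation whenever the value falls outside the small
    // -128..127 cache).
    static Long boxedSum(List<Long> values) {
        Long sum = Long.valueOf(0L);
        for (Long value : values) {
            sum = Long.valueOf(sum.longValue() + value.longValue());
        }
        return sum;
    }

    // Primitive accumulator: same result, no per-iteration boxing.
    static long primitiveSum(List<Long> values) {
        long sum = 0L;
        for (Long value : values) {
            sum += value; // only unboxes `value`; `sum` stays primitive
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Long> values = List.of(1L, 2L, 3L);
        System.out.println(boxedSum(values));     // 6
        System.out.println(primitiveSum(values)); // 6
    }
}
```

Spelled out like that, the allocation in the loop body is hard to miss, which is exactly the point about autoboxing making bad code look innocent.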


> They're not generic but they do the job, so if you have a bottleneck, you can solve it.

But that's the thing, in other languages you don't need a workaround to work on primitives directly.


People complaining about how hard it is to get a simple answer don't appreciate the complexity of figuring out optimal models...

There are so many knobs to tweak; it's a non-trivial problem:

- Average/median length of your prompts

- Prompt eval speed (tok/s)

- Token generation speed (tok/s)

- Image/media encoding speed for vision tasks

- Total amount of RAM

- Max RAM bandwidth (DDR4, DDR5, etc.)

- Total amount of VRAM

- "-ngl" (number of layers offloaded to the GPU)

- Context size needed (you may need sub-16k for OCR tasks, for instance)

- Model size (billions of parameters)

- Active parameter count, for MoE models

- Acceptable level of perplexity for your use case(s)

- How aggressive a quantization you're willing to accept (to maintain low enough perplexity)

- Even finer-grained knobs: temperature, penalties, etc.

Also, tok/s as a metric isn't enough, because there's:

- thinking vs non-thinking: which mode do you need?

- Models that are much more "chatty" than others in the same area (I remember testing a few models that max out my modest desktop specs; Qwen 2.5 non-thinking was so much faster than an equivalent Ministral non-thinking even though they had equivalent tok/s... Qwen would get to the point quickly)

In the end, the final questions are: are you satisfied with how long getting an answer took, and was the answer good enough?

The same exercise exists with paid APIs too. There are obviously fewer knobs, but depending on your use case there are still differences between providers and models. You can abstract away a lot of the knobs; just add "are you satisfied with how much it cost?" on top of the other two questions.
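As a toy illustration of why tok/s alone doesn't answer "how long did getting an answer take", here's a back-of-envelope sketch (all speeds and token counts are made-up example numbers) combining prompt eval speed, generation speed, and how chatty the model is:

```java
public class LatencyEstimate {
    // Wall-clock seconds for one answer: time to ingest the prompt
    // plus time to generate the output tokens.
    static double secondsForAnswer(int promptTokens, double promptEvalTokPerSec,
                                   int outputTokens, double genTokPerSec) {
        return promptTokens / promptEvalTokPerSec + outputTokens / genTokPerSec;
    }

    public static void main(String[] args) {
        // Same hardware, same tok/s; the "chatty" model just emits
        // 3x more tokens for an equivalent answer.
        double concise = secondsForAnswer(2000, 400.0, 300, 25.0); // 5s + 12s
        double chatty  = secondsForAnswer(2000, 400.0, 900, 25.0); // 5s + 36s
        System.out.printf("concise: %.0fs, chatty: %.0fs%n", concise, chatty);
    }
}
```

Identical tok/s, very different time-to-answer, which is why verbosity matters as much as raw speed.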


That's a big flaw of LLMs, not limited to RAG: they lack the fundamental understanding of "good and bad", as Richard Sutton said on that Dwarkesh podcast.

So if you flood the Internet with "of course the moon landing didn't happen" or "of course the earth is flat" or "of course <latest 'scientific fact' lacking verifiable, definitive proof> is true", you then get a model that's repeating you the same lies.

This makes curating the input data extremely important, but it remains an unsolved problem for topics where there's no consensus.


> That's a big flaw of LLMs, not limited to RAG: they lack the fundamental understanding of "good and bad", as Richard Sutton said on that Dwarkesh podcast.

After participating in social media since the beginning, I think this problem is not limited to LLMs.

There are certain things we can debunk all day every day and the only outcome is it happens again the next day. This has been a problem since long before AI, and I personally think it started before social media as well.


> After participating in social media since the beginning, I think this problem is not limited to LLMs.

Yup, but for LLMs the problem is worse... many more people trust LLMs and their output much more than they trust Infowars. And with basic media literacy education, you can fix people trusting bad sources... but you fundamentally can't fix an LLM, it cannot use preexisting knowledge (e.g. "Infowars = untrustworthy") or cues (domain recently registered, no imprint, bad English) on its own, neither during training nor during inference.


So true.


"water is wet" kind of study, as tariffs are precisely supposed to increase price for consumers for imported goods... But the last 3 paragraphs are interesting:

- Importers raised prices more than needed (i.e., blamed tariffs to increase their profit margins)

- Price increases took one year to fully reach customers, and persisted nearly a year after the tariffs expired

- Chicken-tax-like loopholes were implemented wherever possible (for wine, apparently it's raising the ABV to more than 14%)


You remind me of the fact that humans do not, in fact, have sensors in the skin that specifically detect wetness.

I think, given the amount of ideas floating around, it is occasionally good to revisit things that are "known", just in case some underlying assumption changed, especially in economics, which is harder to get right as it deals a lot with what humans want and do.


I can't see how anyone can think "the exporters pay the tariff" makes any sense. TBH, we'll never know how many people thought it made sense because it didn't matter.


In the end, money moves around. If, for example, the government just gave citizens the tariff money in equal shares (not that I suggest they would, but it's technically possible), it would be like taking from the citizens who consume more and giving to the citizens who consume less.

So, yes, it is correct in a practical immediate sense that "the exporters pay the tariff", but that excludes many relevant issues, like how prices evolve (they are paid by consumers), what the government does with the money (it could share it or not), and what others decide to produce (to avoid tariffs). But definitely many people didn't think of all that...


Your first 2 points make me extra bitter about COVID.

Less store hours. Higher prices. Inflation. People in school got a terrible education and it affected my workforce. (But hey 1% of people died, as predicted if we did nothing at all... )

It only reinforces the importance of competition over protectionism.

I used to be a Walmart fan, but my local store is cheaper now. I didn't bother to look at prices until things were getting silly.


> (But hey 1% of people died, as predicted if we did nothing at all... )

You're at a football stadium with 100k people. A thousand of them die suddenly. Do you feel safe?

> Less store hours. Higher prices. Inflation.

At this point, that's just greed. They figured out what the market would bear.


> But hey 1% of people died, as predicted if we did nothing at all

Nope. Compare the death rates of Sweden vs its neighbours in the Nordics (the closest comparisons we have, with similar weather/culture/etc.). Or, if you don't care about minimising variables, between US states that did lockdowns and mask mandates and those that didn't. In every comparable (e.g. excluding rural vs urban) case, there were more deaths in "doing nothing" than in implementing the same basic public health axioms that have held true for centuries.

> Inflation

That was also helped by Russia invading Ukraine, which increased global prices of multiple important raw materials. But yes, inflation after a period of deflation/economic contraction/restricted travel and consumption was to be expected.

> People in school got a terrible education and it affected my workforce

It's definitely a bigger issue for them than it is for you. And yeah, it sucks for them. Would have been pretty terrible to tell teachers (who overwhelmingly skew older) they should risk their lives just to keep kids occupied too.

> It only reinforces the importance of competition over protectionism.

What has that got to do with COVID?


The thing too many forget is that if we didn't flatten the curve our entire medical system was going to collapse. It's insane that people don't yet understand this concept and can't even empathize with medical professionals. Yes, we all struggled, but try talking to medical professionals to see how they did.

When something doesn't happen because enough measures were taken, then it wasn't worth it because it didn't happen?


> The thing too many forget is that if we didn't flatten the curve our entire medical system was going to collapse

Yep, if things were going well there wouldn't have been makeshift morgues with refrigerated trucks, sick people having to be moved around to different countries, the military deploying field hospitals, corpses piling in the streets. Those examples are from a variety of countries, which shows how bad the situation was globally.


> Compare the death rates of Sweden

As a New Zealander, I like to trot out our achievement of a negative death rate. Covid lockdowns resulted in fewer New Zealanders dying than usual.

But, like elsewhere, economic and social harm were both high.


You had 6 weeks of staying at home, and then quarantines for international travellers after that. In return, you had no COVID-19 at all for several years. Seems a fair trade.


> negative death rate.

Norway had that too; without lockdown. Curfews would require a change in the constitution and the last time they happened was during WWII which makes them doubly unpopular.


Sweden all-cause mortality was indeed higher if an immediate pre-pandemic year is taken as a base. However, pre-pandemic years in Sweden show a substantial dip in all-cause mortality, something that neighboring countries did not see. It is not that simple.


I mean sure more people died than were necessary, but think of the shareholder value that was created!


On my 32GB Ryzen desktop (recently upgraded from 16GB, before RAM prices went up another +40%), I did the same llama.cpp setup (with the extra Vulkan steps) and also converged on Qwen3-Coder-30B-A3B-Instruct (also Q4_K_M quantization).

On the model choice: I've tried the latest Gemma, Ministral, and a bunch of others, but Qwen was definitely the most impressive (and much faster at inference thanks to its MoE architecture), so I can't wait to try Qwen3.5-35B-A3B if it fits.

I've no clue which quantization to pick though... I picked Q4_K_M at random; was your choice of quantization more educated?


Quant choice depends on your VRAM, use case, need for speed, etc. For coding I would not go below Q4_K_M (though for Q4, unsloth XL or ik_llama IQ quants are usually better at the same size). Preferably Q5 or even Q6.


> Basically, I was told to make it so that my phone's camera could see something on the screen and my desk at the same time without washing out

+1. The low-tech version of this I've heard and I've been doing is:

Hold a printed white sheet of paper right next to your monitor, and adjust the monitor's brightness until it matches the sheet.

This of course requires good overall room lighting, where the printed paper would be pleasant to read in the first place, whether it's daytime or evening/night.


I think this was what I was told the first time. The advantage of taking a picture with my phone's camera is it kind of made it obvious just how much brighter the screen was than the paper.

Which, fair, it may be obvious to others to just scan their eyes from screen to paper. I've been surprised by how readily people will just accept the time their eyes take to adjust to a super bright screen. Almost like it doesn't register with them.


There's some overlap with bias lighting here - good overall room lighting works if you've got good daylight, but it's much easier to get bright bias lighting at night than to light up the entire room.


Per https://github.com/QwenLM/Qwen3.5, more are coming:

> News

> 2026-02-16: More sizes are coming & Happy Chinese New Year!


> However Germany and it's infrastructure can not be compared to the Netherlands. I refuse to take trains through that country anymore.

In which country are the trains bad, the Netherlands or Germany? Do you care to elaborate why? Is it punctuality, strikes, decaying infrastructure?


Yeah I see now how that was unclear.

I was talking about Germany's infrastructure. Last year I had 3 separate trips turn into chaos because of how broken their system is. Broken trains, broken track infrastructure, etc. Think multiple hours on each trip rather than just a 10-minute delay.

The Dutch system is very reliable in contrast.


Very cool! And important for sure, thank you.

A few questions:

- Is the stack for indexing those open source?

- Are there standardized APIs each municipality provides, or do you go through the tedious task of building per-municipality crawling tools?

- How often do you refresh the data? I checked a city; it has meeting minutes up to 6/17, but the official website has more recent minutes (up to 12/2 at least)


Thanks for asking!

- The framework for crawling is open-source. https://github.com/civicband

- There is absolutely not a standardized API for nearly any of this. I build generalized crawlers when I can, and then build custom crawlers when I need.

- Can you let me know which city? The crawlers run for every municipality at least once every day, so that's probably a bug

