projectileboy's comments | Hacker News

Frankly, this doesn’t matter. What matters is that an unofficial agency of the executive branch is deciding unilaterally - with no oversight - to stop payments that were voted on by Congress. Even if Musk and team were geniuses and doing brilliant work, it would be outside the rule of law.




source?




This feels willfully ignorant. They're not talking about reallocating USAID's budget, they're talking about shutting it down entirely to save money. This is not compatible with claims that they are merely overriding where the money goes.

https://www.cnn.com/2025/02/02/politics/usaid-officials-leav...


If DOGE is going to spend the USAID-allocated billions on some other foreign development aid, they are saving zero money. Their claims of cost saving contradict a defense that they aren’t violating the spending laws passed by Congress.


> Congress is well within its rights to delegate such authority.

> That would of course never make it through Congress.

Counselor, you know better than to beg the question like that. It's disappointing to see such an intelligent person resort to such fallacious arguments.


> Congress is well within its rights to delegate such authority.

Right, Congress told the executive “here’s three billion dollars, spend it on foreign development.” That means the executive decides how that money will be spent. It is entirely within its rights to cancel particular grants. Though eventually it will have to seek rescission as to the $3 billion if it doesn’t use all the money. That’s a long ways away.

And I’m quite confident I’m not going out on a limb when I say line items for “DEI in Serbia” would never make it through Congress.


Yes, the executive branch can decide how that money is spent. But it was intended to be spent. DOGE is simply arbitraging the rescission period for political capital.

> And I’m quite confident I’m not going out on a limb when I say line items for “DEI in Serbia” would never make it through Congress.

Why not? You might (or might not) recall that the US became heavily involved in a war in the Balkans 30 years ago, to the point of carrying out an extensive bombing campaign. To the extent that the US has an interest in that region being peaceful and the countries there staying or becoming more aligned with US geopolitical and economic interests, a small subsidy to that end may make strategic sense.

Bombing, peacekeeping, and reconstruction costs for the Balkan war ran into the tens of billions. A few million a year to nurture a more pluralistic civil society in the region (and thereby increase trade flows with the US) seems cheap by comparison.

https://www.theguardian.com/world/1999/oct/15/balkans

https://oec.world/en/profile/bilateral-country/usa/partner/s...


It still matters. In addition to just being criminals, now we know that they are also incompetent.


People already knew Trump was incompetent; other people just called it "Trump Derangement Syndrome."

People already knew Musk was incompetent; other people just called it "Musk Derangement Syndrome."

People were saying DOGE was full of incompetents prior to this, simply judging by their methods (or lack thereof). They were dismissed as partisans, or with whatever thought-terminating cliché was on hand.

Knowing this doesn't mean a thing; no one is doing anything about it.


Hacker News tries to avoid political news in general, but it seems warranted in this case because Musk is a major figure in the tech industry, and also because it appears he’s using a small number of young engineers from his companies to gain access to and control over systems at OPM.


> it appears he’s using a small number of young engineers from his companies to gain access to and control over systems at OPM.

https://www.wired.com/story/elon-musk-government-young-engin...


This is a weird hill to die on for a billionaire. Is wokeness a problem? If I recast it as an assault on free speech, sure. But exactly how bad is this assault? I sure hear a lot of really rich people talk about wokeness, despite the proclaimed suppression of their speech. And is it as much of a problem as racism, sexism, homophobia or other forms of bigotry endemic in our society?


I just picked this one at random today; took about a minute to find something: https://www.startribune.com/mom-ids-son-as-teen-left-with-br...

I’m relieved to read that racism isn’t as bad as I think it is.


> This is a weird hill to die on for a billionaire

But that's the thing: it's not a hill to die on for him. This is simply 'anti-woke' virtue signalling, intended to show his alignment with growing right-wing sentiment that seems to be a backlash against certain perceptions of the American left, without really contributing anything novel to the discourse. To me, this 'anti-woke' sentiment is as much of a mind-virus as 'wokeness' supposedly is, and it's a convenient distraction from many of the underlying issues that the 'woke left' actually care about.


I live in Minnesota. I hear what you’re saying, but I would say two things: in general, teenagers here are not being somehow denied farm work that they really want but is unavailable; and there’s a pretty big difference between Minnesota/Iowa summer heat and picking cantaloupes in inland California. The big difference being about 20 degrees on average, with more sun.


Hotter in California perhaps, but far more humid in the Midwest.

85F / 90% RH is probably more taxing than 100F / 30% RH.
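
A quick sanity check with the NWS Rothfusz heat-index regression (the standard approximation behind the NWS heat-index chart; the numbers below are back-of-the-envelope, not a measurement):

    # Rothfusz regression behind the NWS heat-index chart (T in deg F, RH in %).
    def heat_index(t, rh):
        return (-42.379 + 2.04901523 * t + 10.14333127 * rh
                - 0.22475541 * t * rh - 6.83783e-3 * t * t
                - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
                + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

    print(round(heat_index(85, 90)))    # ~102 F "feels like"
    print(round(heat_index(100, 30)))   # ~102 F as well

By that measure the two land within about a degree of each other, and the regression assumes shade, so direct sun shifts the comparison.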


I apologize in advance for not offering something constructive to say. I just wish anyone here who is younger could see the difference between what the promise of the web was in ‘95 and what it has become. Such a burning pile of trash, it’s heartbreaking.


Is there some reason why Edward Tufte isn’t being properly credited anywhere? This is copyrighted material.


They're educational materials that were originally linked with a proper citation/credit acknowledgement, but that's how hyperlinks work -- it's not like someone stole the only copy.


Hear, hear. The design is subordinated to clear presentation of static content. Most newspapers and magazines would benefit from a similar approach.


Would you be willing to elaborate on the ways in which the $200/month subscription is better than the $20/month subscription? I’m genuinely curious - not averse to paying more, depending on the value.


Here's why I think it's worth it:

1. Larger Context Window (128K)

With Pro-Mode, I can paste an entire firmware file—thousands of lines of code—along with detailed hardware references, and the model can actually process it. This isn’t possible with the smaller context window on the $20 tier. On the Pro plan, I’ve pasted 30+ pages of MCU datasheet information plus multiple header files in a single go. The model is then reasonably capable of producing accurate, bit-twiddled code, many times on the first try. Does it always work on the first go? Sometimes, but often there's still debugging, and I don't expect people who haven't actually tried doing this without AI could do it effectively. However, I can diff the code with a tool like Beyond Compare (necessary for this workflow) to find bugs and/or explain to Pro-Mode what happened, perhaps with a top-level nudge toward a strategy to fix it, and generally 2-3 tries later we've made progress.

2. Deeper Understanding, Real Solutions

When I describe a complex hardware/software setup, like the power optimization for the product (a LiPo-rechargeable fan/flashlight), the Pro-Mode model can understand the entire system better and synthesize troubleshooting approaches into a near-finished solution, with 95–100% usable results. By contrast, the non-Pro plan can give good suggestions in smaller chunks, but it can’t grasp the entire system context due to its limited memory.

3. Practical Engineering Impact

I’m working on essentially the fourth generation of a LiPo-battery hardware product. Since upgrading, the Pro-Mode model has helped us pinpoint power issues and extend standby battery life from 20 days to over a year. Just this week it guided me to discover a stealth 800 µA draw from the fan itself when the device was supposed to be in deep sleep. We were consuming ~1000 µA when it should have been about ~200 µA. We finally isolated the fan issue and measured 190 µA with it out of the system, so the path forward is to add a load switch so the MCU can isolate it before it sleeps. With that, we go from a dead battery in ~70 days (firmware changes alone had already taken us from 20 days to 70) to roughly a year before it drains. That's the difference between end users opening the box to zero charge and being able to use the product immediately.
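
For anyone checking the math, a rough sketch; the pack capacity isn't stated above, so the ~1680 mAh figure is an assumption backed out of the 70-day number:

    # Implied pack capacity from "dead battery in ~70 days at ~1000 uA".
    capacity_mah = 1.0 * 70 * 24            # ~1680 mAh (assumed, not stated)

    def standby_days(draw_ma):
        # Idealized runtime at a constant sleep current; self-discharge ignored.
        return capacity_mah / draw_ma / 24

    print(standby_days(1.00))   # 70 days at 1000 uA (with the stealth fan draw)
    print(standby_days(0.19))   # ~368 days at 190 uA, i.e. about a year

The numbers hang together: cutting ~1000 µA to 190 µA is a ~5x stretch, which is exactly 70 days to about a year.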

4. Value vs. Traditional Consulting

I’ve hired $20K short-term consultants who didn’t deliver half the insights I’ve gotten in a single subscription month. It might sound like an overstatement, but Pro-Mode has been the best $200 I’ve spent—especially given how quickly it has helped resolve engineering hurdles.

In short: Probably the biggest advantage is the vastly higher context window, which allows the model to handle large, interrelated hardware/software details all at once. If you work on complex firmware or detailed electronics designs, Pro-Mode can feel like an invaluable engineering partner.


How much context do you get on the $20 plan? I run llama3 at home, which technically does 128k, but that eats VRAM like crazy, so I can't go further than 80k before I fill it (and that's with the KV cache already quantized to 8-bit).

I've been thinking of using another service for bigger contexts. But this may not make sense then.
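
Rough math on why the cache fills up, assuming Llama 3 8B's published shape (32 layers, 8 KV heads via grouped-query attention, head dim 128); a sketch, not a measurement:

    # Per-token KV cost: K and V each hold layers * kv_heads * head_dim values;
    # 8-bit quantization means 1 byte per value.
    layers, kv_heads, head_dim = 32, 8, 128
    per_token_bytes = 2 * layers * kv_heads * head_dim * 1    # 65536 B = 64 KiB

    tokens = 80_000
    print(per_token_bytes * tokens / 2**30)   # ~4.9 GiB of KV cache at 80k tokens

Stack ~5 GiB of cache on top of the weights and activation scratch, and an 80k ceiling on a consumer card sounds about right.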


The sales page shows the $20 Plus plan has a 32K context window.


Ah ok, thanks. That's not much! But I know from my own system that context massively increases processing time (and memory too, though at the scale of a GPT model that's not as much of a factor). I guess this is why.

I only use GPT via the API anyway, so it's pay-as-you-go. But as far as I remember there are limits there too; only big spenders get access to the top-shelf stuff. I only spend a couple dollars a month because I use my llama server most of the time. It's not as good as ChatGPT, obviously, but it's mine and doesn't leak my conversations.


My 2 cents on long context (I haven't used Pro mode, but I have used older long-context models):

- With a statically typed language and a compiler, it's quite easy to automatically assemble a meaningful context: do 1-2 nested rounds of recursive 'Go To Definition' and include the source from each hit (rough sketch after this list). You can prune with various heuristics, from either compile time or runtime. It's quite easy to implement; we did this for older, non-AI tooling a while ago, to figure out the impact of code changes. If you have a compiler running, I'm pretty sure you could do this in a couple of days. This makes the long context not super necessary.

- In my experience, long-context models can't really use their contexts that well. They were trained to do well on 'needle-in-the-haystack' benchmarks, that is, to retrieve information that might be scattered anywhere in the context, which might be good enough here; but asking complex questions that require understanding the entire context trips the models up. I tried some fiction writing with long-context models, and I often found that they forgot things and messed up cause and effect. Not sure if this applies to current state-of-the-art models, but I bet it does, since sequencing and theory-of-mind (it's established in the story that Alice is the killer, but Bob doesn't know that yet; models often mess this up and assume he does) are still active research topics, and current models kinda suck at them.

For writing fiction, I found that the sliding window of short-context models was much better, with long-context ones often bringing up irrelevant details, and ignoring newer, more relevant ones.

Again, not sure how this affects the business of writing firmware code, but limitations do exist.
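
A minimal sketch of the first bullet's idea, in case it helps; definitions_of() stands in for whatever compiler/LSP hook you have, so it's a hypothetical API, not a real library call:

    # Breadth-limited context assembly: follow 'Go To Definition' a couple of
    # levels out from the symbols you're asking about, then paste the collected
    # source in as the model's context.
    def assemble_context(root_symbols, definitions_of, max_depth=2):
        seen, sources, frontier = set(), [], list(root_symbols)
        for _ in range(max_depth):
            next_frontier = []
            for sym in frontier:
                if sym in seen:
                    continue
                seen.add(sym)
                defn = definitions_of(sym)    # source text + symbols it references
                sources.append(defn.source)
                next_frontier.extend(defn.references)
            frontier = next_frontier
        return "\n\n".join(sources)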


I don't have the Pro plan, so can anyone compare it to the results from the new Google models with huge context windows (available in AI Studio)? I was playing around with them and they were able to consume some medium (even large, by some standards) codebases completely and offer me diffs for changes I wanted to implement - not the most successful ones, but good attempts.


"Like, this week it guided me to discover a stealth 800 µA draw from the fan itself when the device was supposed to be in deep sleep."

Was this context across a single datasheet or was Pro-Mode able to deduce from how multiple parts were connected/programmed? Did it identify the problem, or just suggest where to look?


How do you input/upload an engineering schematic or CAD file into ChatGPT Pro-Mode? Even with a higher context window, how does the context of your project get into ChatGPT?


#4 is the best imo; it's like having a very smart personal assistant that can meet you at your level (and sometimes exceed it) on any topic.


I'm confused why you had ChatGPT rewrite your post. How much time did you save, versus knowing that it's off-putting for people to read?



The post was definitely not AI-sourced; the underlying thoughts are original, though it's possibly been touched up afterwards. But this is 100% the style of ChatGPT; I would bet a lot on it.

It wasn’t an accusation (I don’t think it actually matters in the end) so much as an attempt to understand why one would do it. In a post about ChatGPT usage, it helps to understand the context: if OP values using it for stuff I wouldn’t value using it for, for example, that changes the variables.


Ha, I also got that feeling. It's the weird lists with a summary fragment after the bullet. ChatGPT loves framing arguments like this, but I almost never see people write this way naturally, except in, like, listicles and other spam-adjacent writing.


I know of a few people whose writing style was similar to ChatGPT's before ChatGPT was a thing. That could be the case here too; keep that in mind.

(It also sucks for non-native speakers, or even speakers of other dialects; 'delve', for instance, is apparently a common word in Nigerian English.)


When I get stuck, or have a larger task or refactor, I'll paste in multiple files. So at $20/mo you get rate-limited pretty quickly. I made a tool to easily copy files: https://pypi.org/project/ggrab/


Have you tried using Cursor? I’m using it with Claude models but it works with ChatGPT ones too. It’s a fork of VSCode with an AI chat sidebar and you can easily include multiple files from the codebase you have open.

Not sure if it’d work for your workflow, but it’s really nice if it does.


No limit on the number of prompts.

No worries that you'll run out of prompts for o1, which allows for more experimentation and creativity.


I was looking at the Team plan ($25/mo) last week and it mentioned priority access, but that language is gone; instead I see "Team data excluded from training by default." It seemed worth the difference, but it's less clear now with the changes in the description. Basically I just want to know whether it's strictly better (a superset) or has tradeoffs.


The difference between what cocoa farmers get paid vs what chocolate companies make selling their product is so enormous that I find this article incredibly offensive. But the chocolate industry has a very long history of this kind of shameless exploitation.


Is this true of farming in general? How does the cost of a loaf of bread compare to the price paid to the farmer for the wheat?


I just learned that for milk in Germany, around 30% of the retail price goes to the farmers, which seemed surprisingly good.


To be fair, milk doesn't undergo that much processing compared to bread or chocolate.


Always has been. Middlemen eat most of the margin. A bushel of wheat is cheaper than a fancy loaf of bread.


I’d be surprised if they had, working on what you work on! I’ll bet you would find them interesting in other ways, though. I’ve had a ton of success using them as study guides in other areas (e.g., biology).

