Would Microsoft not exist without open source? Microsoft was founded in 1975, but the GPL only appeared in 1989, and the BSD licenses appeared at roughly the same time, largely as a result of the Unix Wars.
Big tech companies can easily hire manpower to make proprietary versions of software, or just pay licensing fees for other proprietary software. They don’t rely on open source. Microsoft bought 86-DOS to produce MS-DOS; Microsoft paid the Unix license to produce Xenix; and when Microsoft hired former DEC people to make NT, it later paid DEC.
Instead, modern startups wouldn’t exist without open source.
Indeed, open source exists despite Microsoft trying its hardest to kill it. Microsoft was (and still is) a ruthless, savage competitor. Their image has softened as of late but I'll never forget the BS they did under Bill Gates and Steve Ballmer.
I think they would due to massive financial incentive. On the other hand, a lot more developers might actually be getting compensated for their work, instead of putting their code on the internet for free and then complaining on social media that they feel exploited.
Bezos is making a lot of money, but that doesn't mean he makes the world better. Prime or AWS could still work fine without Bezos making tons of money.
Agentic coding is bringing new people to coding. But instead of reading some books about coding or looking at its history, they face the same problems as before, go through the same struggles, and re-invent the same solutions.
I am waiting for the vibe-coding expert posts that will tell us that lines of code are not a good measure, that code is a liability, and that you should instruct your agent to write less of it ...
If you can't see this by working with Claude Code for a few weeks, I don't want to go to greater lengths than writing a blog post to convince you. It's not my mission. I just want to communicate with the people who are open enough to challenge their ideas and willing to see firsthand what is happening. Also, if you tried and failed, it means that either AI is not good enough for your domain, or you are not able to extract the value. The fact is, this doesn't matter: a growing percentage of programmers is using AI successfully every day, and as it progresses this will happen more, and in more diverse programming fields and tasks. If you disagree and are happy to avoid LLMs, well, that's ok as well.
okay, but again: if you say in your blog that those are "facts", then... show us the facts?
You can't just hand-wavily say "a bigger percentage of programmers is using AI with success every day" and not give a link to a study that shows it's true
as a matter of fact, we know that a lot of companies have fired people under the pretense that they are no longer needed in the age of AI... only to re-hire offshore workers for much cheaper
for now, there hasn't been a documented sudden increase in velocity / robustness of code; a few anecdotal cases, sure
I use it myself, and I admit it saves some time to develop some basic stuff and get a few ideas, but so far nothing revolutionary. So let's take it at face value:
- a tech which helps slightly with some tasks (basically "in-painting code" once you've defined the "border constraints" sufficiently well)
- a tech which might cause massive disruption of people's livelihoods (and safety) if used incorrectly, which might FAR OUTWEIGH the small benefits and be a good enough reason for people to fight against AI
- a tech which emits CO2, increases inequality, depends on the quasi-slave labor of annotators in third-world countries, etc
so you can talk all day long about not dismissing AI, but you should also take it with everything that comes with it
1. If you can't convince yourself, after downloading Claude Code or Codex and playing with them for a week, that programming is completely revolutionized, there is nothing I can do: you have it at your fingertips, yet you ask for facts I should hand you.
2. US air conditioning alone uses around 4 times the energy / CO2 of all the world's data centers (not just AI) combined. AI is 10% of data center usage, so AC alone is 40 times that.
I enjoyed your blog post, but I was curious about the claim in point 2 above. I asked Claude, and it seems the claim is false:
# Fact-Checking This Climate Impact Claim
Let me break down this claim with actual data:
## The Numbers
*US Air Conditioning:*
- US A/C uses approximately *220-240 TWh/year* (2020 EIA data)
- This represents about 6% of total US electricity consumption
*Global Data Centers:*
- Estimated *240-340 TWh/year globally* (IEA 2022 reports)
- Some estimates go to 460 TWh including cryptocurrency
*AI's Share:*
- AI represents roughly *10-15%* of data center energy (IEA estimates this is growing rapidly)
## Verdict: *The claim is FALSE*
The math doesn't support a 4:1 ratio. US A/C and global data centers use *roughly comparable* amounts of energy—somewhere between 1:1 and 1:1.5, not 4:1.
The "40 times AI" conclusion would only work if the 4x premise were true.
## Important Caveats
1. *Measurement uncertainty*: Data center energy use is notoriously difficult to measure accurately
2. *Rapid growth*: AI energy use is growing much faster than A/C
3. *Geographic variation*: This compares one country's A/C to global data centers (apples to oranges)
## Reliable Sources
- US EIA (Energy Information Administration) for A/C data
- IEA (International Energy Agency) for data center estimates
- Lawrence Berkeley National Laboratory studies
The quote significantly overstates the disparity, though both are indeed major energy consumers.
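The arithmetic behind that verdict can be sanity-checked with a few lines, using the midpoints of the ranges cited above (the figures are the quoted estimates, not independently sourced):

```python
# Sanity-checking the ratio with midpoints of the ranges cited in the fact-check.
# All figures are the quoted estimates (TWh/year), not fresh data.
us_ac = (220 + 240) / 2        # US air conditioning, per the EIA figure above
datacenters = (240 + 340) / 2  # global data centers, per the IEA figure above
ai_share = 0.10                # low end of AI's share of data center energy

print(f"US A/C vs global data centers: {us_ac / datacenters:.2f}x")
# -> 0.79x, nowhere near the claimed 4x
print(f"US A/C vs AI's share alone: {us_ac / (datacenters * ai_share):.1f}x")
# -> 7.9x, nowhere near the claimed 40x
```

Even granting the lowest data-center and AI-share estimates, the ratios stay far below the 4x and 40x figures in the original claim.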
I tried Claude on a project where I'd gotten stuck trying to use some macOS media APIs in a Rust app.
It just went in circles between something that wouldn't compile, and a "solution" that compiled but didn't work despite the output insisting it worked. Anything it said that wasn't already in the (admittedly crap) Apple documentation was just hallucination.
A bit like we should trust RFK on how "vaccines don't work" thanks to his wide experience?
The idea here is not to say that antirez has no knowledge about coding or software engineering. The idea is that if he says "hey, we have the facts", and then when people ask "okay, show us the facts" he answers "just download Claude Code and play with it for an hour and you'll have the facts", we don't trust that. That's not science.
That's a great example in support of my argument here, because RFK Jr clearly has no relevant experience at all - so "figuring out, based on prior reputation and performance, who you should trust" should lead you to not listen to a word he says.
Well, guess what: a lot of people will "trust him" because he is a "figure of power" (he's a secretary in the current administration). So that's exactly why "authority arguments" are bad... and why we should rely on science and studies.
1. "if you can't convince yourself by playing anecdotically" is NOT "facts"
2. the US being incredibly bad at energy spending on AC does not somehow justify adding another, mostly unnecessary, polluting source, even if it's slightly smaller. ACs have existed for decades; AI has been exploding for only a few years, so we may well see it go way, way past AC usage
there's also the idea of "accelerationism". Why do we need all this tech? What good does it do to have 10 more silly AI slop videos and disinformation campaigns during elections? Just so that antirez can be a little bit faster at writing his code... that's not what the world is about.
Our world should be about humans, connecting together (more slowly, not "faster"), about having meaningful work, and caring about planetary resources
The exact opposite of what capitalistic accelerationism / AI is trying to sell us
Sure, but I wasn't the one pretending to have "facts" on AI...
> Slightly odd question to be asking here on Hacker News!
It's absolutely not? The first questions to ask when you work in a domain SHOULD BE "why am I doing this" and "what is the impact of my work on others"
> If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.
That doesn't mean that we have to accept claims that LLMs drastically increase productivity without good evidence (or in the presence of evidence to the contrary). If anything, it means the opposite.
At this point the best evidence we have is a large volume of extremely experienced programmers - like antirez - saying "this stuff is amazing for coding productivity".
My own personal experience supports that too.
If you're determined to say "I refuse to accept appeal to authority here, I demand a solution to the measuring productivity problem first" then you're probably in for a long wait.
> At this point the best evidence we have is a large volume of extremely experienced programmers - like antirez - saying "this stuff is amazing for coding productivity".
The problem is that we know that developers' - including experienced developers' - subjective impressions of whether LLMs increase their productivity at all are unreliable and biased towards overestimation. Similarly, we know that earlier claims of massive productivity gains were false (no reputable study showed even a 50% improvement, let alone the 2x, 5x, 10x, etc. that some were claiming; indicators of actual projects shipped were flat; etc.). People have been making the same claims for years at this point, and every time we actually were able to check, it turned out they were wrong. Further, while we can't check the productivity claims (yet) because that takes time, we can check other claims (e.g. the assertion that a model produces code that no longer needs to be reviewed by a human), and those claims do turn out to be false.
> If you're determined to say "I refuse to accept appeal to authority here, I demand a solution to the measuring productivity problem first" then you're probably in for a long wait.
Maybe, but my point still stands. In the absence of actual measurement and evidence, claims of massive productivity gains do not win by default.
If a bunch of people say "it's impossible to go to the moon, nobody has done it" and Buzz Aldrin says "I have been to the moon, here are the photos/video/NASA archives to prove it", who do you believe?
The equivalent of "we've been to the moon" in the case of LLMs would be:
"Hey Claude, generate a full Linux kernel from scratch for me, go on the web to find protocol definitions, it should handle Wifi, USB, Bluetooth, and have WebGL-backed window server"
And then have it run in a couple of hours/days to deliver, without touching it.
If a bunch of people say "there are no cafes in this town that serve brunch on a Sunday" and then Buzz Aldrin says "I just had a great brunch in the cafe over there, here's a photo", who would you listen to?
Many issues have been pointed out in the comments, in particular the fact that most of what antirez talks about is how "LLMs make it easy to fill in code for stuff he already knows how to do"
And indeed, in this case, "LLM code in-painting" (e.g. let the user define the constraints, then act as a "code filler") works relatively nicely... BECAUSE the user knows how it should work, and directs the LLM to do what he needs
But this is just, e.g., a 2x/3x acceleration of coding tasks for already-good coders; it is neither 100x, nor is it reachable for beginner coders.
Because what we see is that LLMs (for good reasons!!) *can't be trusted*, so you bear the burden of checking their code every time
So 100x productivity IS NOT POSSIBLE, simply because it would take too long (and frankly be too boring) for a human to check the output of 100 normal engineers (unless you spend 1000 hours upfront encoding your whole domain in a theorem-proving language like Lean and then check the implementation against it... which would be so costly that the "100x gains" would already have disappeared)
Nobody is saying we want to "turn it down" (although there would be a pros/cons discussion to have if the boost is "only" 2x and the cons include "this tech leads to authoritarian regimes everywhere")
What we are discussing here is whether this is a true step-change for coding, or merely a "coding improvement tool"
This is obviously a collision between our human culture and the machine culture, and on the surface its intent is evil, as many have guessed already. But what it also does is it separates the two sides cleanly, as they want to pursue different and wildly incompatible futures. Some want to herd sheep, others want to unite with tech, and the two can't live under one sky. The AI wedge is a necessity in this sense.
I continue to hope that we see the opposite effect: the drop of cost in software development drives massively increased demand for both software and our services.
Why do you care so much to write a blog post? Like, if it's such a big advantage, why not stay quiet and exploit it? Why not write anti-AI blog posts to gain even more of an advantage?
One of the big red flags I see on the pro-AI side is this constant desire to promote the technology. At least the anti-AI side is only reacting.
It seems quite profitable nowadays to position yourself as [insert currently overhyped technology] GURU to generate clicks/views. Just look at the amount of comments in this thread.
I am waiting for people to commit their prompt/agent setup instead of the code before I call this a paradigm change. So far it is "just" a machine generating code, and generating code doesn't solve all of software's problems (but yeah, they're getting pretty good at generating code)
* you add an HTTP header saying "I am a kid"
* porn web servers read and handle this header
* if they don't (easy to test), they get fined
It is easy to implement, easy to monitor, and will probably just work if the government makes the effort to monitor and enforce it. If not, it will just be another DNT header.
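The steps above imply a very small server-side check. A minimal sketch, noting that the header name `X-User-Is-Minor` is invented here for illustration; no such standard currently exists:

```python
# Hypothetical sketch of the proposed scheme. The header name "X-User-Is-Minor"
# is made up; a real mandate would have to standardize the name and values.
def should_block_adult_content(headers: dict) -> bool:
    """Return True when the client has declared itself a minor's device."""
    value = headers.get("X-User-Is-Minor", "").strip().lower()
    return value in ("1", "true", "yes")

# A compliant adult site would check the flag before serving anything:
if should_block_adult_content({"X-User-Is-Minor": "1"}):
    print("403 Forbidden: device flagged as a minor's")
```

The "easy to test" part follows from the same simplicity: a regulator can send a request with the flag set and check the response code.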
You add draconian client-side enforcement via parental controls. You can even mandate that stores ask whom the device is for and provision it accordingly, with the flag being automatically removed once the person comes of age.
I don't think it'll need draconian enforcement; parental controls on iOS and Android can co-exist with Linux. Having the option to enable specific filters client-side, and requiring a passcode or OS-level permissions to change them, seems like a realistic way to tackle this that doesn't end in a dangerous concentration of government power.
As always with security, perfect is the enemy of good. A good set of hard to change - for children - client-side filters would do wonders in terms of real improvement. As much as I'm tired of the LLM hype, they might actually be a good fit for such tasks.
I don't even understand what you're getting at; as if RBAC is an alien concept? Do you think everyone should have root access to any machine they touch?
It's the parent's computer, and they have a right to put a password on the BIOS and a child lock on the system that forces these types of headers, with no bypass available to the child account. Or, if they please, have the router block any website outside of a whitelist without a password.
The security argument is the best one for pushing all these monopoly practices, but I doubt there is real proof of it anywhere. These days, I have more trust in a small app developed by someone in a garage than in something produced by Meta or Google.
The existence of shady and criminal apps on both the Apple and Android app stores proves that their policies are definitely not sufficient and should be the first thing to focus on.
The thing people need to understand here is that the annoyance is not due to a lack of technical solutions, or of regulations forcing something. It is explicitly wanted by the industry so they can maximize the consent rate. The browser solution is probably the best technical/user-friendly one, but the ad tech / data-gathering industry wouldn't get any consent that way. As they control most of the web, they will never do it.
It was implemented in browsers and ignored by sites. Chrome help says:
Turn "Do Not Track" on or off
When you browse the web on computers or Android devices, you can send a request to websites not to collect or track your browsing data. It's turned off by default.
However, what happens to your data depends on how a website responds to the request. Many websites will still collect and use your browsing data to improve security, provide content, services, ads and recommendations on their websites, and generate reporting statistics.
Most websites and web services, including Google's, don't change their behavior when they receive a Do Not Track request. Chrome doesn't provide details of which websites and web services respect Do Not Track requests and how websites interpret them.[1]
About the best we have browser side is a mode where all cookies are cleared at browser exit.
That's not an implementation. That's a request to sites that you visit to comply willingly. An implementation would be defensive.
It's what you would do if you had the crazy idea that a browser should be a client for the user, and only a client for the user. It should do nothing that a user wouldn't want done. The measure of a client's functionality is indistinguishable from the user's ability to make it conform to their desires.
It's not realistic to completely prevent tracking solely on the client side. Every time you interact with a server is an opportunity to track you. You can't prevent it unless you completely stop interacting with the server.
> About the best we have browser side is a mode where all cookies are cleared at browser exit.
No. The best we have are adblockers and scripts like consent-o-matic.
Clearing cookies does just that: it clears cookies. Tracking goes far beyond cookies. Clearing cookies has always been a red herring, enabling adtech submarines like "I don't care about cookies".
Didn't Manifest V3 kind of void all that for Chrome-based browsers? Even Brave's time on Manifest V2 is limited. For that reason I have switched to Firefox.
Correct. Age verification and privacy consent belong in the browser. The issue is that in the browser things work a bit too well (remember https://en.wikipedia.org/wiki/P3P ?), so the big players are incentivized to ignore browser-based mechanisms completely, and to say/do nothing whenever they see lawmakers going in a dumb direction that puts the onus on individual website operators (risking fines is a reasonable price to pay in order to kill adoption of an actual browser/OS-based control that would put a dent in their tracking operations).
I believe Medium's DNT implementation showed a little confirmation button on embedded Youtube players. That's the kind of consent screen you may still need with proper DNT handling.
None of those cookie popups, though. That's all malicious compliance.
I don't think this is true. DNT being absent or set to consenting is not enough to infer the user has given specific and informed consent under the GDPR.
> Explicit consent: Under the GDPR and similar laws, consent must be specific, informed, and an unambiguous, affirmative action from the user. Consent cannot be assumed by a user's continued browsing or inaction, which is what DNT would require.
if DNT is absent you could show a GDPR-compliant consent screen (of course, it would still need to be actually compliant, i.e. with a "reject all" button front and center)
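A minimal sketch of that flow, assuming the header semantics of the old W3C Tracking Preference Expression draft ("1" = do not track, "0" = tracking permitted, absent = no preference); the function name is my own:

```python
# Sketch of the commenter's logic: honor DNT=1 as a "reject all", and fall back
# to a GDPR-compliant consent screen whenever no affirmative refusal was sent.
def needs_consent_banner(headers: dict) -> bool:
    if headers.get("DNT") == "1":
        # Affirmative "do not track": don't track, and no banner is needed.
        return False
    # Absent or "0": GDPR still requires a specific, informed opt-in,
    # so a compliant consent screen (with "reject all" up front) is shown.
    return True

print(needs_consent_banner({"DNT": "1"}))  # False
print(needs_consent_banner({}))            # True
```

Note that `DNT: 0` still triggers the banner here, matching the point above that DNT cannot substitute for explicit GDPR consent; it can only substitute for refusal.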
US browsers account for more than 90% of the market share though. The main non-US browsers are Opera (Norwegian company majority owned by a Chinese company) and Samsung browser (South Korean company). Both Opera and Samsung are based on an American controlled open source project (Chromium, controlled by Google).
So for the vast majority of people in the world government ownership of their browser would mean American political control. The current administration has expressed both in theory and practice a willingness to directly exert influence over organizations even when they are supposed to be independent of government control. That would be a concern for much of the world even outside the US.
I worked in the ad tech industry for almost 15 years, and big corps like Google/FB scam their users:
- they don't allow double tracking, so you have to trust their numbers
- if you look at IP from their "clicks", you see often a FB/Google datacenter IP range
- and for most of the traffic they might send you, they did just clever algorithm and heavy profiling to stole your organic traffic. So they get this "amazing" performance by claiming credit for people who would have bought on your site anyway
I have seen and worked in companies trying to do impact metrics well, but these are outliers.
- websites showing ads annoy their users and get no benefit from it
- stores/brands/people that want to advertise pay a big chunk of money for nothing
- only the middlemen benefit
It's even worse, because it's Google who built a high platform in front of your store in the first place, with comfy chairs for your competitors to sit on.
Very true. So you end up paying for a lot of clicks from existing customers who type your product name into Google and click on the first thing that comes up, which is increasingly an ad.
I used to advertise in Adwords on the name of my products, but I no longer bother.
Google ads are bad, but wait till you reach the ads on all the streaming platforms. Watching the same ad roll for the 20th time in one evening is like the door-to-door salesman who rang your doorbell for the 20th time and kept loudly screaming about their product while you're trying to enjoy a peaceful meal. If anything, I'm going to start actively hating your brand or product.
It's entirely intentional. Companies would rather you "hate" their brand than forget their brand.
The reality is that a shockingly high percentage of people just don't get upset about that. Pay attention to anyone using a web browser without an ad blocker. They are so deep in the learned helplessness hole they don't even complain as their web page takes ages to load and reflow while they want to read, and they just silently click skip on all three youtube ads.
The ads work great on them. Think of how many people bought HeadOn.
And if you don't pay them, the spots in front of your store are instead filled by people handing out flyers about your competitor down the street.
They don't just take credit for people that were coming to your site anyway, they actively steer them away with competing ads if you aren't paying enough.
> they don't allow double tracking, so you have to trust their numbers
Facebook Ads and SA360 both allow 3rd party impression and click tracking. Not to mention the myriad of 3rd party analytics tools you can use to track the website.
> if you look at IP from their "clicks", you see often a FB/Google datacenter IP range
I've never heard this before and seems hilariously simplistic. These megacorps don't have VPNs?
> they did just clever algorithm and heavy profiling to stole your organic traffic
Not sure if you had a stroke while writing this, but this makes no sense.
As someone who has also been in the industry for 10+ years, none of what you said passes the smell test. Maybe after 15 years you still never understood the industry or technology?
>> they did just clever algorithm and heavy profiling to stole your organic traffic
>Not sure if you had a stroke while writing this, but this makes no sense.
I have only ever been a buyer; so my technical understanding of the back end implementation is incomplete.
However, I think the point OP is trying to make is that the back end of the ad targeting infrastructure ends up attributing "spend" to folks who would otherwise have been organic traffic and found your site anyway, i.e. placing your ad in effective organic pathways and/or in front of well-targeted users.
This of course makes sense. A well targeted ad is going to be presented to lots of people who would have otherwise been organic traffic.
This is just a problem with measuring ROI on ads in general, though. I think what has changed is that improved attribution of digital ads has confused people. They see 10 clicks, $5 spend, 1 sale @ $10 and think that's a 200% ROAS.
In the old days (and still sometimes for non-digital media) ad effectiveness was measured as lift over baseline. Different media had different decay rates.
Depending on your digital property, a similar model may need to be applied to your digital campaigns. As I understand it, though, this is harder to do in modern times with digital ads.
> Depending on your digital property, a similar model may need to be applied to your digital campaigns. As I understand it, though, this is harder to do in modern times with digital ads.
This is insanely important to do now, and the people who know how to successfully do it are old and retiring.
But nobody really wants to know the number of advertising dollars spent to get customers who were already going to spend on your product to ... spend on your product.
No, I'm with him: it still makes no sense to me. There's a massive assumption that because you fit the profile, you'd have heard of the service even if there were no advertising. A major part of advertising is finding people who would like your product. You advertise to let people know about your product and keep it in their minds. Lift over baseline is relevant for ROI, yes, but it doesn't imply the service is worthless!
I left the industry 4 years ago, but at the time it was impossible to track impressions on FB ads. So when FB tells you "we displayed your ads 1000 times", you just have to believe them.
As for the IPs of the clicks, I saw it with my own eyes in 2013, from Facebook. As for analytics tools, I'll let you check who the leader in that market is, and how independent they are in the click market... If you have a website pointed to by a FB ad campaign, just look at the IPs of the incoming clicks.
> Not sure if you had a stroke while writing this, but this makes no sense.
Well, that's what happens in practice. It is an attribution game: ad companies try to have the best conversion numbers more than they try to increase a store's overall sales.
I worked at a drive-to-(physical)-store company; we measured performance with an exposed group plus a control group, and we showed a 3% increase in store visits during a campaign, while our competitors claimed around 10%. We knew their tracking technology amounted to "we showed an ad, they visited the store, so it was because of us".
I'm going to stop going back and forth on this. But let me just say this: verifying paid traffic, within the ad platforms as well as on your website is totally possible.
If you want to second guess the psychology of whether or not the ad "drove" the user to make a purchase or not, that is up for discussion. Or whether Google/META do shady shit in their bidding environment. Absolutely.
But this idea that you can't track everything is bullshit.
“It's possible that [being a VC] is quite literally timeless, and when the AIs are doing everything else, that may be one of the last remaining fields that people are still doing.”
Interestingly, his justification is not that VCs are measurably good at what they do, but rather that they appear to be so bad at what they do:
“Every great venture capitalist in the last 70 years has missed most of the great companies of his generation... if it was a science, you could eventually dial it in and have somebody who gets 8 out of 10 [right]. There's an intangibility to it, there's a taste aspect, the human relationship aspect, the psychology — by the way a lot of it is psychological analysis."
(Personally, I’m not quite sure he actually believes this - but watching him is a certain kind of masterclass in using spicy takes to generate publicity / awareness / buzz. And by talking about him I’m participating in his clever scheme.)
Even his justification for why AI can't become a VC sounds like you could just pick at random and have the same chance of success, which means even the personal touch he advocates is useless. A monkey could do his job.
The way I’d read that take is that being a “good” VC is about having enough money to spread around and enough networking connections to generate the right leads. After that pretty much any idiot can do the job.
TL;DR: AI can replace labor but not capital. More news at 11.