Yes, this is correct. Google pays Mozilla hundreds of millions of dollars annually to be the default search engine. This makes up the vast majority of Mozilla Corporation's revenue. It's somewhere in the ballpark of 85% of all their annual revenue last I heard.
They've tried hard in recent years to get out from under Google by diversifying into other areas. For example, they have a VPN service that is a wrapper around Mullvad, and they've made some privacy tools that you can pay to use, also largely wrappers around other companies' tools.
I was an employee of Mozilla Corporation and saw first-hand the effort they were making. In my opinion, it's been a pretty abysmal failure so far. Pulling Google funding would effectively hamstring Mozilla Corp.
I upvoted this post and would not have commented if not for yours, so maybe my reason for (almost) abstaining is similar to that of other people who upvoted. What follows might be a ramble and is just my anecdotal experience on HN:
I've been an HN user for years, and I've found it somewhat hard to comment on anything related to economics or non-technical / pop-cultural topics. Many HN users are experts in their technical fields, and they seem to think this automatically translates to expertise in political science, sociology, psychology, and all the other fields of endeavor where we can't just point to source code to justify our positions. I mostly find that HN commenters are a thoughtful bunch. But, there's a small, noisy group of armchair experts waiting to swoop in and correct your grammar or disagree on some technicality over social issues like this.
Since HN is tech-focused, even posting something not directly related to technology can get your post flagged and taken down as irrelevant. So in a way, discussing these things is disincentivized by the site's purpose. I get that WaPo is an online platform and therefore in-scope, but it's "scarier" to comment on because it's more social than technical.
In part, the act of being a thoughtful commenter also means steering well clear of any flame wars (that aren't related to NixOS, Rust, or LLMs). I.e., it's like jazz in that it's about the notes you don't play— it's the comments you don't make that foster a good online experience.
This is also a US-specific article, and many Americans are overwhelmed by the onslaught of post-election political news; so, this might be a cultural thing in that people are not commenting as much because they're dealing with a big inbox of emotions to sort through.
I still take active interest in political posts like this and personally think Bezos is a modern-day robber baron in the new Gilded Age. But, I'll seldom say so, opting instead to upvote so others can see the post and then move on silently.
It's exciting to see an OpenWRT router where compatibility is guaranteed! I've been running OpenWRT at home for years, and whenever it comes time to upgrade, it's always a deep dive into their Table of Hardware [1]. Many of the newest routers with an absurd number of antennae that you might see at big-box stores like Costco have incompatible chipsets, so usually I have to buy something a bit older.
Most recently I bought a couple of Belkin AX3200 routers because they support WiFi6 and are only about $50 USD. The annoying part is that they're a Walmart exclusive, but they have worked flawlessly so far. Still, I'd rather have the new, officially-endorsed one.
BTW, none of the online store links on the OpenWrt pages currently work; everything goes 404 for me.
The only online store I found that will actually sell me the device is an AliExpress shop I located via shopping.google.com, and its list price is $116. (Now marvel at Walmart's pricing power.)
I've been living with the same router/modem combo (Fritzbox) for around 11 years, and I don't plan to replace it before I get FTTH. (Otherwise I would have.)
Nothing against OpenWRT, I have used it in the past, but I doubt I would have switched routers more often if I was still using it...
I wanted to leave this comment, but now I’m going to have to leave a helpful correction to your comment instead: prescribed is closer to “forced”, or “made the rule”, than to “recommended”. :)
Once I saw that the headline image was AI-generated, I skimmed the first paragraph and didn't find a lot of meaning in it. The dearth of content combined with an AI image made me suspect that the article itself might be AI-generated.
As a litmus test, I decided to check for the word "delve" to see whether it appeared in the text. According to an article I read in The Guardian[1], this word is more likely to appear in AI-generated responses to prompts. Sure enough, "delve" was right there in the second paragraph.
Of course, these two things combined aren't exactly a "smoking gun" proving that the whole thing is AI blog-spam, but I would bet it is (as first mentioned in another comment here). It's pretty wild to be living in a time where we have to be so wary of an entire article being prompt-engineered into existence by a lazy "author" eager for clicks.
I think this may have less to do with "python-brain" and more with "data-science brain". If a person is well-versed in data science concepts and has been trained to use Pandas DataFrames and Series for everything, that's what they'll lean on. After all, it's some kind of in-memory object that can hold many values and has a way to label them with column labels and indices.
Chances are somewhat good that these people weren't computer science majors to begin with. For example, math or biology majors who have moved away from R to Python might know a great deal about data but not much about compsci.
For people who use Python in a DevOps context, they'll likely be exposed to more OOP concepts and lean more heavily on classes.
Yeah, I've seen a lot of people write absurd dataframe monstrosities that ended up being slower than the naive Python loop-over-a-list solution, never mind being 10X the code. But I've also seen plenty of non-data-science examples.
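To make the comparison concrete (the data and field names here are hypothetical, just for illustration), the kind of task I mean, say, averaging one field per group, is often only a few lines of plain Python with no dataframe library at all:

```python
from collections import defaultdict

# Hypothetical records; in the dataframe version this would be a DataFrame.
records = [
    {"city": "Oslo", "temp": 3},
    {"city": "Oslo", "temp": 5},
    {"city": "Lima", "temp": 19},
]

# Naive loop-over-a-list: accumulate a running sum and count per city.
totals = defaultdict(lambda: [0, 0])  # city -> [sum, count]
for r in records:
    totals[r["city"]][0] += r["temp"]
    totals[r["city"]][1] += 1

averages = {city: s / n for city, (s, n) in totals.items()}
# averages == {"Oslo": 4.0, "Lima": 19.0}
```

For small in-memory data like this, the loop avoids the fixed overhead of constructing and indexing a DataFrame, which is where the "slower than the naive solution" observation usually comes from.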
Agreed, most or all shots of the Planet Express building and Planet Express ship are 3D renderings, even in the original first few seasons. Beyond that, even some shots of Bender in space are 3D renderings, especially in cases where a complex and continuous shift in perspective is required.
Non-photo-realistic (NPR) 3D art goes back a surprisingly long way in animations. I rewatched the 1988 Disney cartoon "Oliver and Company" recently, and I was surprised to see that the cars and buildings were "cel-shaded" 3D models. I assumed that the movie had been remastered, but when I looked it up, I found out that it was the first Disney movie ever to make heavy use of CGI[0] and that what I was seeing was in the original. The page I found says:
"This was the first Disney movie to make heavy use of computer animation. CGI effects were used for making the skyscrapers, the cars, trains, Fagin's scooter-cart and the climactic Subway chase. It was also the first Disney film to have a department created specifically for computer animation."
I guess it depends on the definition of "heavy use." I know in Tron a few scenes were CG, and there were a few CG+live-action bits, but the majority was filmed on normal physical sets in high-contrast, then painstakingly hand-processed[1] to add the neon "glow".
From your link:
>The 1982 Disney movie is privy to a remarkable number of firsts: the first feature-length film to combine CGI and live-action; the first talking and moving CGI character; the first film to combine a CGI character and a live-action one; the first fully CGI backgrounds… The list goes on and on.
>Eleven minutes of the film used "computer-assisted imagery" such as the skyscrapers, the taxi cabs, trains, Fagin's scooter-cart, and the climactic subway chase
But Disney financed and distributed Tron. It wasn't made by a Disney Studio, and most of the animation was outsourced to a Taiwanese studio because Disney wouldn't lend any of their own talent. So I think it's fair to say that Oliver & Company is the first Disney-made film to use CGI.
The Great Mouse Detective (1986) was earlier and the ending sequence is CG (printed out and traced onto cels so traditional 2D characters could be drawn on top).
That's a good point. What's funny is that "The Great Mouse Detective" was actually the film I was thinking of this whole time - I believe the ending sequence took place in Big Ben, and it looks quite good by 2024 standards. But I forgot the name of the movie and assumed it was "Oliver & Company" because Oliver is a plausible name for an English mouse :)
Kind of, though it hasn't replaced anyone. 3DCG simply became a good-enough basis for artists to build on, which is exactly what AI boosters have been fantasizing about and advocating for a couple of years now, only to be completely ignored and mocked.
Which suggests that AI hatred doesn't necessarily come from where pro-AI people think it comes from; people may simply find AI art rage-inducing.
It's not as if some specific technical aspect of AI is bad or could use improvement. It just sits on the wrong side of the uncanny valley, and the arguments clump around that.
For a more long-form answer to your question, I recommend checking out "Plunder - Private Equity's Plan to Pillage America" by Brendan Ballou [0]. He gives some insights into the tactics used by private equity firms to acquire, gut, and destroy existing companies and profit by doing so.
I feel your pain on having to manage so many dependencies. I write primarily in Python, and the various pip / Pipenv / pipx / PDM / Poetry dependency managers drive me pretty crazy. That's not even accounting for the multiple Python versions I need!
That said, I'm surprised that you're trying to _alleviate_ this by implementing your FP language in Python. The Python ecosystem is full of half-documented config files, incompatible dependency trees, etc.
Have you considered implementing it in any other languages after the Python one proves its worth? For example, if the language becomes strong enough, would you consider writing a scrapscript compiler in scrapscript, itself?
One thing I think we all agree on is that the implementations should be simple enough to be easily ported to other languages. For example, one could probably port the existing scrapscript.py to Rust or JavaScript using GPT in a single weekend.
Some languages like Rust and Go put a lot of weight on the "official" implementation. I think scrapscript can be more like Lisp/Json where the spec guides parallel implementations. There are obvious downsides to this in general, but I think that content-addressability makes some of those problems moot.
None of these config/dependency problems are present in scrapscript.py because it has no external dependencies and is written in one file. This is intentional!