I took meds for depression a few years ago. I don't know that they did anything other than signal to myself that I wasn't ready to give up. They may have served as a kind of "Dumbo's feather" that helped me get through a rough patch. Exercise might be similar. People who choose to exercise make the statement to themselves that they are worth doing something positive for. Some mental health problems resolve with time and without medication, and in those cases, exercise might be a great way to address them. But if you're struggling, call your doctor and make an appointment. Medication is sometimes the answer.
> Carbohydrate overfeeding produced progressive increases in carbohydrate oxidation and total energy expenditure resulting in 75-85% of excess energy being stored. Alternatively, fat overfeeding had minimal effects on fat oxidation and total energy expenditure, leading to storage of 90-95% of excess energy.
It seems like the author is more focused on database features than user base. Every metric I can find online says that MySQL/MariaDB is more popular than PostgreSQL. PostgreSQL seems "better" (more features, better standards compliance) but MySQL/MariaDB works fine for many people. Am I living in a bubble?
Popularity can mean multiple things. Are we talking about how frequently a database is used or how frequently a database is chosen for new projects? MySQL will always be very popular because some very popular things, like WordPress, use it.
It does feel like a lot of the momentum has shifted to PostgreSQL recently. You even see it in terms of what companies are choosing for compatibility. Google has a lot more MySQL work historically, but when they created a compatibility interface for Cloud Spanner, they went with PostgreSQL. ClickHouse went with PostgreSQL. More that I'm forgetting at the moment. It used to be that everyone tried for MySQL wire compatibility, but that doesn't feel like what's happening now.
If MySQL is making you happy, great. But there has certainly been a shift toward PostgreSQL. MySQL will continue to be one of the most used databases just as PHP will remain one of the most used programming languages. There's a lot of stuff already built with those things. I think most metrics would say that PHP is more widely deployed than NodeJS, but I think it'd be hard to argue that PHP is what the developer community is excited about.
Even search here on HN. In the past year, 4 MySQL stories with over 100 points compared to 28 PostgreSQL stories with over 100 points (and zero MariaDB stories above 100 points, and 42 SQLite stories). What are we talking about here on HN? Not MySQL nearly as often - we're talking about SQLite and PostgreSQL. That's not to say that MySQL doesn't work great for you or that it doesn't have a large installed base, but it isn't where our mindshare is about the future.
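For anyone who wants to reproduce counts like these, here's a rough sketch against the public HN Algolia search API (the query terms and one-year cutoff below are my own choices, not necessarily the exact searches behind the numbers above):

    # Python sketch; pip install requests
    import time
    import requests

    def count_big_stories(query, since_epoch):
        """Count HN stories matching `query` with >100 points since `since_epoch`."""
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search",
            params={
                "query": query,
                "tags": "story",
                "numericFilters": f"points>100,created_at_i>{since_epoch}",
                "hitsPerPage": 0,  # we only need the total count (nbHits)
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["nbHits"]

    year_ago = int(time.time()) - 365 * 24 * 3600
    for term in ("mysql", "postgresql", "mariadb", "sqlite"):
        print(term, count_big_stories(term, year_ago))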
What do you mean by this? AFAIK they added MySQL wire protocol compatibility long before they added Postgres. And meanwhile their cloud offering still doesn't support Postgres wire protocol today, but it does support MySQL wire protocol.
> Even search here on HN.
fwiw MySQL has been extremely unpopular on HN for a decade or more, even back when MySQL was a more common choice for startups. So there's a bit of a self-fulfilling prophecy where MySQL ecosystem folks mostly stopped submitting stories here because they never got enough upvotes to rank high enough to get eyeballs and discussion.
That all said, I do agree with your overall thesis.
> Every metric I can find online says that MySQL/MariaDB is more popular than PostgreSQL
What are those metrics? If you're talking about things like db-engines rankings, those are heavily skewed by non-production workloads. For example, MySQL still being the database for WordPress means it will forever have a high number of installations and of developers using it and asking Stack Overflow questions. But when a new or established company is deciding which database to use for a new custom application, MySQL is seldom in the running like it was 8-10 years ago.
There are rumblings that the MySQL project is rudderless after Oracle fired the team working on the open-source project in September 2025. Oracle is putting all its energy in its closed-source MySQL Heatwave product. There is a new company that is looking to take over leadership of open-source MySQL but I can't talk about them yet.
The MariaDB Corporation financial problems have also spooked companies and so more of them are looking to switch to Postgres.
> There are rumblings that the MySQL project is rudderless after Oracle fired the team working on the open-source project in September 2025.
Not just the open-source project; 80%+ (depending a bit on when you start counting) of the MySQL team as a whole was let go, and the SVP in charge of MySQL was, eh, “moving to another part of the org to spend more time with his family”. There was never really a separate “MySQL Community Edition team” that you could fire, although of course there were teams that worked mostly or entirely on projects that were not open-sourced.
Wouldn't Oracle need those 80%+ devs if they wanted to shift their efforts into Heatwave? That percentage sounds too huge to me, and if it's true, I believe they won't be making any larger investments into Heatwave either. There are several core teams in MySQL, and if you let those people go ... I don't know what to make of it other than that Oracle is completely moving away from MySQL as a strategic component of its business.
So, AI ate the cake ... I always thought that the investment Oracle needs to make in MySQL is peanuts compared to Oracle's total revenue and the revenue MySQL is generating. Perhaps the latter is not so true anymore.
I'm slowly de-Microsofting my computing. I've traded OneDrive for Syncthing. I ditched one PC for a Mac. I have the technical skills to run Linux effectively, but the biggest obstacle for my Linux adoption is distro fatigue. Run Ubuntu? Debian? Fedora? PopOS? Kubuntu? Arch? The article introduced yet another one to consider--Bazzite.
The Linux world is amazing for its experimentation and collaboration. But the fragmentation makes it hard for even technical people like me who just want to get work done to embrace it for the desktop.
Ubuntu LTS is probably the right choice. But it's just one more thing I have to go research.
Pick a popular distro, and during installation, put your /home directory on its own partition. This way, you won't have much to reconfigure if you ever have a reason to switch distros. (You might not ever have a reason; they're all pretty capable.)
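If it helps, the end result is just an extra line in /etc/fstab, roughly like this (the UUIDs and filesystem type are placeholders; the installer writes the real ones for you):

    # /etc/fstab (example): root and /home on separate partitions
    UUID=<root-partition-uuid>  /      ext4  errors=remount-ro  0  1
    UUID=<home-partition-uuid>  /home  ext4  defaults           0  2

When you later install a different distro, you tell its installer to reuse the /home partition without formatting it, and your dotfiles and data survive the switch.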
Just use Debian. If you have technical skills, run Debian Testing. If not, run Debian Stable, or something that repackages Debian Testing such as Mint or (as you mentioned) Ubuntu LTS.
Debian Testing will sometimes break, so technical skills are necessary if you want to be sure you can always get back up and running. Otherwise, something may not work for a few days to a week - CUPS (printing), for example. 99.9% of the time it won't be your networking or something super-important, but it could be. When you update, read the list of changes and make absolutely sure you know what is being uninstalled and whether you can do without it for a few days. Check the internet to see when packages have been removed from testing (and why) or will be moved into testing from unstable. Don't forget that you can use LLMs now when you have a problem.
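In practice that just means reading apt's summary before you confirm (assuming you update from the command line rather than a GUI updater):

    sudo apt update
    apt list --upgradable     # preview what's about to change
    sudo apt full-upgrade     # read the "The following packages will be REMOVED"
                              # section carefully, and answer 'n' if something you
                              # can't live without (cups, network-manager, ...) is listed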
Once you've been able to handle Debian Testing for a while, especially through a couple of breakages, you'll probably be confident enough and knowledgeable enough to know if you want to go to another distro. I personally don't need anything other than testing for my desktops, and stable for my servers.
edit: Debian Testing gets software that has worked smoothly on Debian Unstable for a two-week period. Sometimes things get missed during those two weeks, and sometimes Debian decides to reorganize packages radically in a way that takes more than one update. One thing to remember is that urgent bugfixes to Debian Stable might bypass testing altogether, and might actually arrive later to testing than everywhere else. You'll probably hear about those on the news or on HN, and you might want to manually install those fixes before they actually hit testing.
As a beginner, just pick Ubuntu and get on with your life imo. Switching distros isn't that big of a lift later on and pretty much everything you learn carries over from one to the other. It's much more worthwhile to just pick _something_ and learn some basics and become comfortable with the OS imo.
Don't go for Ubuntu LTS. There are many choices that would work equally well for you. I'd go for the Fedora KDE edition (but it could easily be EndeavourOS or openSUSE Tumbleweed).
Ubuntu stopped caring about the desktop experience when they switched to GNOME. Now they have annoying snaps. They are a business, and they are going to continue enshittifying it.
I haven't tried Bazzite because I'm not into gaming but Linux Mint is working very well for a lot of people coming from Windows. It just works and has great defaults. Windows users seem to pick it up pretty easily.
Also, Linux Mint upgrades very well. I've had a lot of success upgrading to new versions without needing to reinstall everything. Ubuntu and other distros I've tried have often failed during upgrades, and I had to reinstall.
I think fragmentation is the wrong way to look at it; they're all basically compatible at the end of the day. It's more like an endless list of people who want to min-max.
Any reasonably popular distro will have enough other users that you can find resources for fixing hitches. The deciding factor that made me go with EndeavourOS was that their website had cool pictures of space on it. If you don't already care then the criteria don't need to be any deeper than that.
Once you use it enough to develop opinions, the huge list of options will thin itself out.
I don't know that I'd trust IBM when they are pitching their own stuff. But if anybody has experience with the difficulty of making money off of cutting-edge technology, it's IBM. They were early to AI, early to cloud computing, etc. And yet they failed to capture market share and grow revenues sufficiently in those areas. Cool tech demos (like Watson on Jeopardy!) mirror some AI demos today (6-second videos). Yeah, it's cool tech, but what's the product that people will actually pay money for?
I attended a presentation in the early 2000s where an IBM executive was trying to explain to us how big software-as-a-service was going to be and how IBM was investing hundreds of millions into it. IBM was right, but it just wasn't IBM's software that people ended up buying.
Xerox was also famously early with a lot of things but failed to create proper products out of them.
Google falls somewhere in the middle. They have great R&D but just can't make products. It took OpenAI to show them how to do it, and they managed to catch up fast.
"They have great R&D but just can’t make products"
Is this just something you repeat without thinking? It seems to be a popular sentiment here on Hacker News, but really makes no sense if you think about it.
Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
So many widely adopted products. How many other companies can say the same?
I don't think Google is bad at building products. They definitely are excellent at scaling products.
But I reckon part of the sentiment stems from many of the more famous Google products being acquisitions originally (Android, YouTube, Maps, Docs, Sheets, DeepMind) or originally built by individual contributors internally (Gmail).
Then there were also several times when Google came out with multiple products with similar names replacing each other, like when they had who-knows-how-many variants of chat and meeting apps replacing each other in a short period of time. And now the same thing is happening with all the different, confusing Gemini offerings. Which leads to the impression that they don't know what they are doing product-wise.
Starting with an acquisition is a cheap way of accelerating once your company reaches a certain size.
Look at Microsoft - PowerPoint was an acquisition. They hired most of the team that designed and built Windows NT away from DEC. FrontPage was an acquisition, and Azure came after AWS and was led by a series of people brought in through acquisitions (Ray Ozzie, Mark Russinovich, etc.). It's how things happen when you're that big.
Because those were "free time" projects. The company didn't direct them; somebody at the company, on their flex time, just thought it was a good idea and did it. Googlers don't get this benefit any more for some reason.
Leadership's direction at the time was to use 20% of your time on unstructured exploration and cool ideas like that, though the other poster makes a good point that this is no longer a policy.
Those are all free products, some of them are pretty good. But free is the best business strategy to get a product to the top of the market. Are others better, are you willing to spend money to find out? Clearly, most people are not interested. The fact that they can destroy the market for many different types of software by giving it away and still stay profitable is amazing. But that's all they are doing. If they started charging for everything there would be better competition and innovation. You could move a whole lot of okay-but-not-great cars, top every market segment you want, if you gave them away for free. Only enthusiasts would remain to pay for slightly more interesting and specific features. Literally no business model can survive when their primary product is competing with good-enough free products.
They come up with tons and tons of products like Google Glass and Google+ and so on and immediately abandon them. It is easy to see that there is no real vision. They make money off AdSense and their cloud services. That's about it.
Google does abandon a lot of stuff, but their core technologies usually make their way into other, more profitable things (collaborative editing from Wave into Docs; loads of stuff from Google+; tagging and categorizing in Photos from Picasa (I'm guessing); etc)
It annoyed me recently that they dropped support for some Nest/Google Home thermostats. Of course, they politely offered to let me buy a replacement for $150.
> Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
Many of those are acquisitions. In-house developed ones tend to be the most marginal on that list, and many of their most visibly high-effort in-house products have been dramatic failures (e.g. Google+, Glass, Fiber).
I was extremely surprised that Google+ didn't catch on. The week before Google+ launched, me and all my friends agreed that Facebook is toast, Google will do the same thing but better, and everyone has a Gmail account so there will be basically zero barrier to entry. Obviously, we were wrong; Google+ managed to snatch defeat out of the jaws of victory, Google+ never got significant traction, and Facebook managed to keep growing and now they're yet another Big Evil Tech Corporation.
Honestly, I still don't really know how Google managed to mess that up.
I got early access to Google+ because of where I worked at the time. The invite-only thing had worked great for GMail but unfortunately a social network is useless if no-one else is on it. Then the real names thing and the resulting drumbeat of horror stories like "Google doxxed me to my violent ex-husband" killed what little momentum they had stone dead. I still don't know why they went so hard on that, honestly.
I think the sentiment is usually paired with discussion about those products as long-lasting, revenue-generating things. Many of those ended up feeding back into Search and Ads. As an exercise, out of the list you described, how many of those are meaningfully-revenue-generating, without ads?
A phrasing I've heard is "Google regularly kills billion-dollar businesses because that doesn't move the needle compared to an extra 1% of revenue on ads."
And, to be super pedantic about it, Android and YouTube were not products that Google built but acquired.
They bought YouTube, but you have to give Google a hell of a lot of credit for turning it into what it is today. Taking ownership of YouTube at the time was seen by many as taking ownership of an endless string of copyright lawsuits that would sue them into oblivion.
YouTube maintains a campus separate from the Google/Alphabet mothership. I'm curious how much direction they get, as (outwardly, at least) they appear to run semi-autonomously.
Before Google touched Android it was a cool concept but not what we think of today. Apparently it didn't even run on Linux. That concept came after the acquisition.
Notably all other than Gemini are from a decade or more ago. They used to know how to make products, but then they apparently took an arrow in the knee.
Search was the only mostly original product. With the exception of YouTube (which was a purchase), Android, and ChromeOS, all the other products were initially clones.
Google had less incentive. Their incentive was to keep the AI bottled up and brewing as long as possible so their existing moats in Search and YouTube could extend into other areas. With OpenAI they are forced to compete or perish.
Even with Gemini in the lead, it's only until they extinguish ChatGPT or make it unviable as a business for OpenAI. OpenAI may lose the talent war and cease to be the leader in this domain against Google (or Facebook), but in the longer term their incentive to break fresh ground aligns with average user requirements. With Chinese AI just behind, maybe Google/Microsoft have no choice either.
Google was especially well positioned to catch up because they have a lot of the hardware and expertise and they have a captive audience in gsuite and at google.com.
The original statistical machine translation models of the 90's, which were still used well into the 2010's, were famously called the "IBM models" https://en.wikipedia.org/wiki/IBM_alignment_models These were not just cool tech demos, they were the state of the art for decades. (They just didn't make IBM any money.)
Neither cloud computing nor AI are good long term businesses. Yes, there's money to be made in the short term but only because there's more demand than there is supply for high-end chips and bleeding edge AI models. Once supply chains catch up and the open models get good enough to do everything we need them for, everyone will be able to afford to compute on prem. It could be well over a decade before that happens but it won't be forever.
This is my thinking too. Local is going to be huge when it happens.
Once we have sufficient VRAM and speed, we're going to fly - not run - to a whole new class of applications. Things that just don't work in the cloud for one reason or another.
- The true power of a "World Model" like Genie 2 will never happen with latency. That will have to run locally. We want local AI game engines [1] we can step into like holodecks.
- Nobody is going to want to call OpenAI or Grok with personal matters. People want a local AI "girlfriend" or whatever. That shit needs to stay private for people.
- Image and video gen is a never ending cycle of "Our Content Filters Have Detected Harmful Prompts". You can't make totally safe for work images or videos of kids, men in atypical roles (men with their children = abuse!), women in atypical roles (woman in danger = abuse!), LGBT relationships, world leaders, celebs, popular IPs, etc. Everyone I interact with constantly brings these issues up.
- Robots will have to be local. You can't solve 6+DOF, dance routines, cutting food, etc. with 500ms latency.
- The RIAA is going door to door taking down each major music AI service. Suno just recently had two Billboard chart-topping songs? Congrats - now the RIAA lawyers have sued them and reached a settlement. Suno now won't let you download the music you create. They're going to remove the existing models and replace them with "officially licensed" musicians like Katy Perry® and Travis Scott™. You won't retain rights to anything you mix. This totally sucks and music models need to be 100% local and outside of their reach.
It is very misleading or outright perverse to write "they were selling software as a service in the IBM 360 days" when there was no public network that could be used to deliver the service. (There were wide-area networks, but each one was used by a single organization and possibly a few of its most important customers and suppliers, hence the qualifier "public" above.)
But anyways, my question to you is, was there any software that IBM charged money for as opposed to providing the software at no additional cost with the purchase or rental of a computer?
I do know that no one sold software (i.e., commercial off-the-shelf software) in the 1960s: the legal framework that allowed software owners to bring lawsuits for copyright violations appeared in the early 1980s.
There was an organization named SHARE composed of customers of IBM whereby one customer could obtain software written by other customers (much like the open-source ecosystem), but I don't recall money ever changing hands for any of this software except a very minimal fee (orders of magnitude lower than the rental or purchase price of a System/360, which started at about $660,000 in 2025 dollars).
Also, IIUC most owners or renters of a System/360 had to employ programmers to adapt the software IBM provided. There is software like that these days, too (e.g., ERP software for large enterprises), but no one calls that software as a service.
> but it just wasn't IBM's software that people ended up buying.
Well, I mean, WebSphere was pretty big at the time; and IBM VisualAge became Eclipse.
And I know there were a bunch of LoB applications built on AS/400 (now called "System i") that had "real" web-frontends (though in practice, they were only suitable for LAN and VPN access, not public web; and were absolutely horrible on the inside, e.g. Progress OpenEdge).
...had IBM kept up the pretense of investment, and offered a real migration path to Java instead of a rewrite, then perhaps today might be slightly different?
Markdown is the minimum viable product. It’s easy to learn and still readable if not rendered in an alternate format. It’s great.
For making PDFs, I’ve recently moved from AsciiDoc to Typst. I couldn’t find a good way to get AsciiDoc to make accessible PDFs, and I found myself struggling to control the output. Typst solves all of AsciiDoc’s problems for me.
But in the end, no markup language will make you write better. It’s kind of like saying that ballpoint pens are limiting your writing, so you should switch to mechanical pencils.
Yes, the author conflates two different use-cases.
Markdown is the answer for "how do we enable people that don't want to invest a lot of time into producing content that's somewhat better than plain text?".
It's not trying to solve the problem of "how do we enable people who are willing to invest time into learning to produce the best and most structured content possible?" and I doubt there will be a language that serves both of those use-cases very well.
The problem in practice is that quickly one merges into the other. You start with a markdown readme, then you have markdown documentation for a small project. But then one day you need full documentation for your project with cross links, translations, accessibility. With Markdown you end up bolting these things on and each flavor does it a bit differently.
Perhaps some of the blame can be laid on the poor UX of technically superior systems. reStructuredText (apart from the terrible name) built with Sphinx can do impressive things but becomes a huge pain to configure. All the XML-based tools like DocBook are very complete, but try to get started actually building something - apart from having to author them in XML (which is already a kind of punishment), you have to figure out XSLT stylesheets and 2000s-era Java tools for processing them. And just look at the DocBook landing page! AsciiDoc has improved its onboarding recently but does have the issue of feeling like a markdown-ish alternative that's just a bit different for no clear reason.
One downside here is that as more and more tools focus on the first use-case, people start using those tools by default when they actually fall into the second use-case. And there's often a pretty high barrier to switching once you've produced a lot of content, so a bunch of projects are using the wrong one long-term.
Arguably having a ton of hard-to-write, hard-to-maintain docs is waaay worse than Markdown that gets attention in PRs (MRs).
Especially since the things in the article seem irrelevant compared to actually adding and handling non-text content, IMHO (Mermaid diagrams, for example).
Sure a validator would be nice, but that's why a simple preview is available in most collaboration platforms.
Typst looks interesting -- but how are you writing it? From what I've seen, there's an official web editor and a VS Code plugin with limited support. This feels pretty limited, as someone who came in expecting something like Obsidian.
> I'm not aware of any limitations in the Tinymist plugin.
I looked into this a while ago, and couldn't find a workflow I could live with. Have things improved? What's the workflow like for working on an image in, say, OmniGraffle to include in the document? Does text search in embedded PDFs work these days? LinkBack so I can edit the images easily inline?
Typst really does look good. Can one get an editor with live PDF preview? It would be useful mainly for immediate feedback on markup correctness; then an HTML output ought to be "close enough".
Tinymist in VS Code does this out of the box (and looks like it can be set up in other editors). That or you can configure it to save out a new PDF automatically on save or as you edit the document and just open it in a PDF viewer that'll reload when the file changes.
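For what it's worth, the Typst CLI can also do the recompile-on-save loop by itself if you'd rather not depend on an editor plugin (file names here are just examples):

    typst watch notes.typ notes.pdf    # rebuilds the PDF on every save
    typst compile notes.typ            # one-off build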
I always find these discussions about AWS NAT gateways interesting because I recall, way back in the day before AWS had a managed NAT gateway, the recommendation was to roll your own anyway. Or at least that's what I heard. I took an A Cloud Guru course, and one of the first EC2 lessons was to create a simple NAT gateway in your VPC so that your other instances could reach the Internet.
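The roll-your-own version was basically just a stock Linux instance doing masquerading. A minimal sketch of the moving parts (the interface name and instance ID are placeholders, and this ignores high availability, which is the main thing the managed gateway buys you):

    # on the NAT instance itself
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # eth0 = the public-facing interface

    # EC2 drops traffic not addressed to the instance unless you disable the source/dest check
    aws ec2 modify-instance-attribute --instance-id <nat-instance-id> --no-source-dest-check

Then you point the private subnets' default route (0.0.0.0/0) at that instance and you're done.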
People might be fleeing public schooling because lawmakers are dictating what happens in the classroom. There are lots of good teachers who struggle with the resources given to them and the constraints imposed on them.
At home, parents can be flexible. They can let their kids use AI when appropriate or discourage its use. They don't have to wait for legislators to get involved. If there is a great math book, parents can just buy it instead of waiting for some committee to evaluate it.
> If there is a great math book, parents can just buy it
How do you know the math book is great if there hasn't been consensus about it? The problem isn't the committee; that will always be there in some form. The problem is the politics the committee is used for. If the committee were to prioritize and offload their specific requirements for review instead of requiring substantial analysis twice, then the school system would be just as quick.
> IBM anticipates that the first cases of verified quantum advantage will be confirmed by the wider community by the end of 2026.
In 2019, Google claimed quantum supremacy [1]. I'm truly confused about what quantum computing can do today, or what it's likely to be able to do in the next decade.
There's legitimately interesting research in using it to accelerate certain calculations. For example, usually you see a few talks at chemistry conferences on how it's gotten marginally faster at (very basic) electronic structure calculations. Also some neat stuff in the optimization space. Stuff you keep your eye on hoping it's useful in 10 years.
The most similar comparison is AI stuff, except even that has found some practical applications. Unlike AI, there isn't really much practicality for quantum computers right now beyond bumping up your h-index.
Well, maybe there is one. As a joke with some friends after a particularly bad string of natural 1's in D&D, I used IBM's free tier (IIRC it's 10 minutes per month) and wrote a dice roller to achieve maximum randomness.
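For anyone curious, a minimal sketch of what that looks like in Qiskit (this version runs on the local Aer simulator rather than IBM's free-tier hardware, and the rejection-sampling detail for a d20 is my own; I don't remember exactly how my original handled the unusable outcomes):

    # pip install qiskit qiskit-aer
    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    def roll_d20():
        """Measure 5 qubits in uniform superposition; reject the 12 values >= 20."""
        qc = QuantumCircuit(5)
        qc.h(range(5))        # 32 equally likely bitstrings
        qc.measure_all()
        sim = AerSimulator()
        while True:
            counts = sim.run(transpile(qc, sim), shots=1).result().get_counts()
            value = int(next(iter(counts)), 2)   # the single measured bitstring as an int 0..31
            if value < 20:
                return value + 1                 # map 0..19 to 1..20

    print(roll_d20())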
that was my understanding too - in the fields of chemistry, materials science, pharmaceutical development, etc... quantum tech is somewhat promising and might be pretty viable in those specific niche fields within the decade.
A decade from now quantum computing will be in the same place it was a decade ago: on the cusp of proving a quantum advantage for tailor-made problems in comparison to generally available supercomputers. Classical compute will advance in that time period to keep the quantum computers always on the cusp.
The major non-compute-related engineering breakthroughs needed for quantum computing to actually be advantageous in a revolutionary way are themselves so revolutionary that the advances in quantum computing would be vastly overshadowed. Again, it's a case where those breakthroughs would so greatly enhance classical compute, in terms of processing power and reduced costs, that it still probably wouldn't be economically viable to produce general-purpose quantum computers.
The trouble with quantum supremacy results is they disappear as soon as you observe them (carefully).
Sorry for that, but seriously, I'd treat this kind of claim like any other putative breakthrough (room-temperature superconductors spring to mind), until it's independently verified it's worthless. The punishment for crying wolf is minimal and by the time you're shown to be bullshitting the headlines have moved on.
The other method, of course, is to just obsessively check Scott Aaronson's blog.
IBM challenged that the 2019 case could be handled by a supercomputer [1].
The main issue is that these algorithms where today's early quantum computers have an advantage were specifically designed to be demonstration problems. All of the tasks that people previously wanted a quantum computer to do are still impractical with today's hardware.
There is currently no support for:
- Paid tiers with guaranteed quotas and rate limits
- Bring-your-own-key or bring-your-own-endpoint for additional rate limits
- Organizational tiers (self-serve or via contract)
So basically just another case of vendor lock-in. No matter whether the IDE is any good - this kills it for me.