I have used Gemini 2.5 Pro's Deep Research probably about 10 times. I love it. Most recently I was reviewing PhD programs in my area, then deep diving into faculty research areas.
I just made the switch. I had been developing on Windows for the last couple of years, mostly to get used to the ecosystem. I wanted to be able to write C and C++ like I do on Linux, without an IDE and with the native toolchain (i.e. no cygwin). On top of that, I play Overwatch every night.
Windows just seems to have zero focus on performance though. React-based start menu with visible lag, file Explorer (buggily) parsing files to display metadata before listing them, mysterious memory leaks not reflected in Task Manager processes.
I installed Linux Mint. While it didn't just work (TM), and I had to go into recovery mode to install Nvidia drivers, it worked well enough. I can run Overwatch via Steam and pull comparable FPS to Windows (500 FPS on a 3090 with dips into the 400s). Memory usage is stable and at a very low baseline.
It is nice to come back to Linux, and with games I don't really have a need to run Windows anymore.
The only thing Windows has focused on has been dark patterns to force users towards the cloud and figuring out more and more ways to collect data to sell ads.
I'm not naive; I know a ton of huge enterprises still run huge fleets of Windows "servers", but I still find it hilarious that a supposedly serious server OS would default to showing you the weather and ads in the start menu.
> The only thing Windows has focused on has been dark patterns to force users towards the cloud and figuring out more and more ways to collect data to sell ads.
And backwards compatibility.
They're really good at it. And I'd say that's the reason Windows is still dominant. There's this unfathomably long tail of niche software that people need or want to run.
Windows has changed the kernel interface more often than Linux.
This fact alone throws this commonly held belief to the wind.
glibc provides backwards binary compatibility in newer versions too.
Shims exist in both: the Windows compatibility layer, for example, while on the Linux side PulseAudio can emulate ALSA and PipeWire can emulate both PulseAudio and ALSA.
It's actually a quagmire, but I would contend that either has a solid story for backwards compatibility, depending on the exact lens you're looking through. Microsoft is worse than Linux in many ways.
Microsoft really only wins the closed-source, "run this arbitrary binary" race, and only if you totally ignore the Windows 10/11 UWP migration that killed a lot of Win32 applications. Drivers for older hardware, meanwhile, are much longer lived under Linux.
By binary applications I do not include drivers. I only mean applications; drivers do not transfer cleanly between versions of Windows.
To answer your other question though: any GDI functionality that is not accessible through DirectX, the Contacts API, the Timers API, BITS (Background Intelligent Transfer Service), the inbound HTTP server API, NDF (Network Diagnostic Framework), SNMP.
AllocConsole and ReadConsole are gone, named pipes (something I used to use extensively) are gone, the Toolbar and Statusbar APIs are gone, and so are the direct manipulation APIs for the Desktop.
You are describing limitations on sandboxed UWP apps, but Windows still supports regular Win32 just fine, and everything that you describe is available there.
I still run 30 year old games on Windows and write new software using WPF and WinForms even, and it all "just works", much more so than similar attempts at software archeology on Linux.
It's really too bad that Microsoft is hell bent on shoving ads, AI, and dark patterns everywhere in what could otherwise be a decent boring "it just works" OS.
A surprising number of drivers do transfer between versions of Windows, even if not officially supported. But yes, most break at some point.
I'm able to run binaries compiled over 20 years ago on the latest version of Windows most of the time. They do require enabling compatibility mode and sometimes installing legacy features.
I don't know if the APIs you mentioned are available in compatibility modes, but at least named pipes can still be enabled.
But Windows is going downhill lately, so backwards compatibility isn't what it used to be. Improving backwards compatibility for running old binaries would make Linux adoption easier. I hope that Linux PC market share keeps improving and crosses the threshold where it becomes an economically viable platform for most commercial software.
> Windows has changed the kernel interface more often than Linux. This fact alone throws this commonly held belief to the wind.
Every Windows release I compile code straight from a Windows programming book from the '90s. The only changes I made last time were a few include statements and one define.
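For anyone curious what that looks like in practice, here is a minimal Win32 program in the style of those '90s books (my own illustrative snippet, not a listing from any particular book); it still builds unmodified with a current MSVC or MinGW toolchain:

    /* Minimal '90s-style Win32 program; illustrative only.
       Build with e.g.:  cl hello.c user32.lib          (MSVC)
                    or:  gcc hello.c -o hello.exe -mwindows   (MinGW) */
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        /* The ANSI MessageBox call has kept the same signature since Win32 was introduced. */
        MessageBoxA(NULL, "Hello from 1990s-style Win32 code.", "Hello", MB_OK);
        return 0;
    }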
They are getting worse at this. I bought a Surface Laptop Studio 2 two years ago. Windows Mail and Windows Calendar, two nice minimalist programs from Microsoft, were actively killed in this time. If you open them, it will redirect you to a new ad-laden Outlook app. If you somehow get a workaround going through the registry, they still fuck with it because the (incredibly simple) UI somehow has network dependencies.
I use MailSpring for email and no longer have a native calendar on my fairly expensive laptop from Microsoft. This is actually what drove me over the edge to switch to Linux for my workstation. Unclear exactly what I'll do for my next laptop but it won't be from MS.
That's not a lack of backwards compatibility, that's an app purposely self-destructing!
What I'm talking about is, if your widget factory uses some app to calibrate all the widgets which was written by a contractor in 2005, it probably still works fine on Windows 11.
I used some software called Project 5 from Cakewalk back in 2006, as well as VST plugins. I can still install it and use it on Windows 11. Meanwhile, basic plugins from that time stopped working on Mac OS X Lion.
That detail is definitely true; I just think that in practice the frustration with behavior like this from MS will trickle down (/up/whatever direction). The benefit of Windows as a regular user or power user was also that, after the pain of dealing with whatever shit MS decided, you could configure it more or less however you wanted and it would not change. It will be delayed in the corporate world, but it will happen.
Since M$ is doing away with simple free apps (such as Mail) and forcing users to move to expensive cloud-based apps, you can use FOSS (Free and Open Source) alternatives -- especially the portable ones (e.g., apps from PortableApps.com) that don't need an install: they can run off a USB drive, and app + user data can be easily backed up without fuss.
I tried Thunderbird first, but unfortunately it was kinda heavy and was fairly unreliable, which kinda tracks with my experience before (at least on Windows). Mailspring works fine and is also open source.
Couldn't find a decent minimalist calendar program that integrated well with Windows. People say they like OneCalendar but I refuse to use the Windows Store, I even got WSL set up without it lol
Try Vivaldi. It's a "kitchen sink" browser in the same vein as Opera used to be back in its days of glory, so it comes with an email and calendar client that can be optionally turned off.
Vivaldi's email client is kinda clunky as well and has no way to show just my "inbox" (mail without any labels) from Google as far as I can tell, just one big unread chunk. And the calendar seems to just be a column on the left of the browser.
Either way, MailSpring works fine for email, and I've recently discovered Fantastical for a straightforward calendar program.
But it's absurd that I have to do this at all. At a minimum, if I buy a laptop, Microsoft should not be able to actively break it without refunding me 100% of the purchase price.
Yep! I can compile a program on Windows and expect it to work on any Windows OS from the past ~15 years that has the same CPU architecture. Linux? Each binary is more provincial. I want to try some of the tricks like musl though; I haven't explored the space beyond default compiler options.
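If it helps anyone exploring that space: the usual trick for a "runs anywhere" Linux binary is static linking against musl. A minimal sketch, assuming the musl-gcc wrapper from your distro's musl package is installed:

    /* hello.c - trivial program to demonstrate a fully static build.
       Build (assumes musl-gcc is installed):
           musl-gcc -static -O2 hello.c -o hello
       The result doesn't depend on the host's glibc version, so it should run
       on most Linux systems with the same CPU architecture. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello, portable Linux binary\n");
        return 0;
    }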
Linux also doesn't have as good hardware support. While Linux will probably run on most hardware, it doesn't run well. You may just immediately give up half or more of your laptop battery life if you switch from Windows to Linux on a particular machine, even if you use a lightweight and up-to-date environment and use TLP and whatever else to tweak kernel settings. I used Linux on my personal laptops for many years. No amount of tweaking could make it perfectly smooth with comparable battery life and cooling.
New Apple Silicon MacBooks also get such good battery life and performance now that if you are switching from Windows to a Unix-y personal computer, it is increasingly hard not to say that you should go to a Mac.
> Linux also doesn't have as good hardware support.
I once had to patch the UVC driver to support a webcam that wouldn't work natively on Linux. It would advertise one version of the API but implement another. That didn't affect Windows, which probably already knew about it and had properly patched drivers for it.
We can only wonder why, but my guess isn't that there is some sloppy dev there and Windows is just making up for it. It all seems very deliberate, to undermine Linux. And it's plausible given Microsoft's bottomless pockets.
So it wouldn't surprise me that these companies are actively hindering Linux compatibility. So much for a free market with open competition.
I believe that Linux is just a low-priority target. There are so few users on Linux that it's not worth investing in Linux support unless you specifically target Linux crowds.
If you start thinking about a conspiracy, the first thing you should do is ask yourself how much effort it would take to keep it under the lid without anyone leaking.
> Linux also doesn't have as good hardware support
My experience has been that I can generally just install Linux on a machine and pretty much everything will just work straight away, but with Windows, I have to go and find the relevant Windows drivers to get things like iSCSI working.
"I had to patch drivers to get the dot-matrix printer working, and it didn't play nice with the PS/2 used by my mouse (the big one that goes on the nice mousepad)"
I have plenty of printers that have stopped working on Windows over the years; my current Brother laser doesn't have drivers that Windows will allow to be installed anymore. It's fine with Linux, so I just share it as a generic printer so the Windows clients can connect.
My favorite has to be the Windows 8 era UI disaster.
How do most people log into a server? With a high-res physical touchscreen, or remote desktop?
So let's make a whole bunch of functionality impossible to access, because you have to bump up against a non-existent edge of a windowed remote screen, and literally make the UI not fit into common server screen resolutions at the time. I don't remember if 1024x768 was the minimum resolution that worked, or the maximum resolution that still didn't work. But it was an absolute comedy case.
I want to say that with only the basic VGA display drivers installed, screen resolution was too small to even get to the settings to fix it, but it's been a while and I can't find the info to prove it.
I wonder if it was losing Jim Allchin that did it. He retired after Vista and I'd say he was in charge of Windows during its golden age. 7 was basically Vista SP3, and then things took a different direction.
But, to every coin there are two sides:
"I consider this cross-platform idea a disease within Microsoft. We are determined to put a gun to our head and pull the trigger."
I'm curious how profitable it has been for Microsoft so far. Are they making billions and billions from these dark patterns? I feel like they'd have to be making a fortune for it to be worth throwing their brand in the gutter like they have been doing.
Everything I’ve seen suggests that Microsoft has entered the metaphorical private equity phase of investment in Windows. They’ve already given up any expectation of it being a viable competitor long-term and are purely focused on milking as much short-term revenue from the product as possible before it dies.
I’m sure windows will continue to exist and maybe be relevant for at least a decade. But it will be in zombie/revenue-extraction mode from here on.
My tech friends always joke that pretty soon we’re going to see “the year of the Linux Windows”, where windows will just be an OS on top of the Linux kernel.
I think we’re only half joking though, I could see it happening.
> "My tech friends always joke that pretty soon we’re going to see “the year of the Linux Windows”, where windows will just be an OS on top of the Linux kernel."
There's no need because the Year Of Linux On The Desktop™ already happened and it's called WSL2. Meanwhile, the opposite has also already actually happened: SteamOS + Proton is a distro whose main purpose is to be a launcher for Windows apps on a Linux kernel.
Jokes aside, this chest-thumping is incredibly ironic for those of us who lived through the 1990s-2000s. First it was, "FOSS will eliminate all proprietary software and M$ (sic) will be crushed and Bill Gates will go to the poorhouse. Hooray!" Later, it became "Well, we haven't killed proprietary software but at least Linux / LAMP and Firefox are succeeding at taking down Windows and Internet Explorer. Hooray!" Now it's "Maybe Microsoft will consider switching its kernel to Linux. Probably. Someday. Hooray?" What's the backpedaling of the 2030s going to be?
Linux has won on phones (Android) and on the server side. I don't think Windows Server is seriously used for anything but Exchange/AD these days, outside of hosting specialized or legacy apps.
Windows also comprehensively lost the "exclusivity" moat. Most popular apps are now cross-platform, because they need to run on Android/iOS/macOS. So desktop Linux is often an easy addition: Slack, Discord, all the messengers, Zoom, various IDEs, etc.
So Linux indeed won to a large extent. Just not in the way people expected it.
Even if you consider running on tightly locked down devices to support a monopoly a win, the adoption of the Linux kernel for Android has the same basis as it does for server adoption: people love getting the hard work of others for free. It's basically buying market share. I mean, if Microsoft also started giving away Windows for free and took a bunch of market share away, would you consider that a legitimate win for them?
There was also the whole "web apps are coming and they run everywhere" thing. Which actually did work out exactly as people expected it to, although it took longer than most predicted - but your average casual PC user spends most of their time in the browser these days.
However, while those web apps might run on Linux (or not, if it uses DRM like all those streaming providers), they increasingly only run in Chrome.
I don't see that making much sense, honestly. The Windows kernel is super solid and well architected. There are thousands of drivers for every peripheral on Earth. And I don't believe that Microsoft spends that much on kernel development to be incentivised to cut it.
If anything, they invested into the opposite: possibility to run Linux binaries on top of Windows kernel.
I disagree. I think the end of the “world revolves around Windows” era of Microsoft has been hugely beneficial to the OS. Microsoft is way less hostile to other platforms now that their main revenue source is Azure, not Windows, Visual Studio, and SQL Server licenses.
It seems like the Windows team has been freed to add features that they want rather than adding features that fit into a narrative.
WSL, pre-installing git, adding POSIX aliases to PowerShell, iPhone/Android integration, PowerShell/.net/VSCode/Edge on Mac/Linux, not making Office on Mac complete afterthought shit on purpose, etc.
I disagree that Microsoft benefits the end user. Windows IoT, which took over from the Embedded edition of Windows, is completely bloated in 10 and higher. Version 7 allowed installing only the necessities, whereas its successors force Xbox and other built-in features. Windows 11 IoT also forces the creation of a Microsoft account instead of allowing a local account. IoT / Embedded does not mean the machine is connected; it is often air-gapped. These machines are also often used to host products and should not have a Microsoft account assigned.
Microsoft's standards for quality keep going downhill. Windows 11 does not even allow moving the taskbar from the bottom of the screen. Microsoft is end-user hostile, just like Google.
Niche distribution has nothing to do with the quality of the distribution. The user base is passive users versus active users like daily office and game users.
The quality has gone downhill. Windows Embedded / IoT is often used to run your ATMs or some form of industrial automation. Windows actually has a real-time OS (RTOS) mode for just this.
The company I work for has planned to replace Windows with Linux for future products, and is even moving active products to support both Windows and Linux during the transition. The only products that will stay on Windows will be legacy ones near EOL.
Personally, I would never use Windows OS for future products and solutions in these environments. Nor would I use it for network / server based solutions.
> now that their main revenue source is Azure, not Windows, Visual Studio, and SQL Server licenses.
Funnily enough, opening their stack to Linux probably made it easier to sell licenses for everything except Windows, since now you don't have to commit to a potentially unfamiliar hosting environment. Even SQL Server runs on Linux now.
One would assume so, but I do wonder how much long-term damage they are doing for short-term gains with this drive.
I'm not a believer in "the year of the Linux desktop!?!!?" and all that, but it achieved a level of robustness about 5-10 years ago such that I openly encourage non-technical users to give it a try. The few people who actually did try stuck with it.
At this point it is Microsoft's position to lose through quality degradation rather than Linux's to win by outwitting them. There is still a long way to go, and MS could turn the boat around, but they would have to stop chasing this data-scraping scheme of theirs to begin with. But how addicted are they to that cash flow? They are probably far more interested in keeping shareholders happy short term than customers happy long term, and that is not a brilliant strategy if you want a lifetime of decades.
I don’t much like MS, but in their defense they are trying to sell operating systems in a market where the going out-of-pocket price is $0. The development of their competition is ad supported, community supported, or built into the price of hardware.
Turn the boat around? To where? Nobody would be willing to pay for their product even if they were to start trying to make it appealing.
> I don’t much like MS, but in their defense they are trying to sell operating systems in a market where the going out-of-pocket price is $0.
The price of the windows license has been included in the price of PCs for literally decades now. Every computer you buy with windows preinstalled nets Microsoft a couple dozen dollars.
None of their products have a decent moat left, and all are heavily competed. Focusing on making azure competitive while accepting it is a commodity industry with commodity margins is how they stick around. But they will be a value stock, not a growth stock. That is ok, as long as you know that is what you are.
Perhaps the aims of these dark patterns were not to benefit Microsoft overall, but perhaps an individual or a team? For example, produce good numbers for particular KPIs at the expense of unmeasured or unmeasurable aspects.
Using Windows as a server feels like using your lounge room as a commercial kitchen. I can never shake the feeling that this isn't a serious place to do business.
I have this impression from years of using both Windows and linux servers in prod.
While I agree that Microsoft has not been the greatest at delivering customer-friendly stuff, and has built a lot of revenue streams targeting their (mostly not-paying) users, like Bing and cloud upsells, I think that your take is overly cynical about the software.
Windows 11 has some legitimate improvements that make it a really solid OS.
It’s not surprising that Microsoft isn’t focusing on Windows as a server OS as they don’t expect anyone to deploy it in a new environment. They know it has already lost to Linux and that’s why .NET Core is on Linux and Mac, why WSL exists, etc. Azure is how Microsoft makes revenue from servers, Windows Server is a legacy product.
The whole "server OS has the weather app installed" thing is pretty irrelevant since enterprises have their own customized image building processes and don't ever run the default payload. It's really not worth Microsoft's time to customize the server version knowing that their enterprise customers already have.
Microsoft knows the strength of Windows lies in the desktop environment for workstations, casual laptop use, and gaming systems, and it is excellent at all those things. They’ve delivered a whole lot of really nice and generally innovative features to those spaces. Windows has really nice gaming features, smartphone integrations including with iPhones, even doing some long-overdue work on small details like notepad and the command line.
I don’t find that windows has forced me to cloud or done anything like that.
> Microsoft knows the strength of Windows lies in the desktop environment for workstations, casual laptop use, and gaming systems, and it is excellent at all those things.
Sure, Microsoft seems to have some great developers behind Windows and those developers are improving the underlying operating system. The trouble is that Microsoft is also using Windows to push their other products. Coming from a Linux environment, I find that pushiness unbearably crass.
On top of that, Windows' main strength has always been application support. I don't even know if that is relevant anymore with commercial developers shifting to subscription models (for native applications) and web based applications (for everything else). The latter makes Windows nearly irrelevant. The former makes open source more desirable to at least some people.
I've also noticed that things appear to be flipping when comparing Linux to Windows. I can take a distribution that is intended for desktops, install it, and expect almost everything to work out of the box. It doesn't seem to matter whether it is printer or video drivers or pre-installed applications. Meanwhile, I'm finding that I have to copy drivers to a USB drive and drop to the command line to get something as simple as a trackpad or touchscreen to work under Windows. Worse yet, I've had something similar happen with network adapters. Short of bypassing the OOBE, a Windows installation will not complete without a working network adapter and Internet connection. Similar tales can be told for applications: there is a never-ending stream of barriers to climb to get software to install ("look, we care about privacy since we are asking you half a dozen questions about what you're willing to share," while ignoring dozens of other settings that affect your privacy) or prevent advertising from popping up. You don't deal with that nonsense under Linux.
I don't know what the future of Windows is. I don't much care, as long as I get to use the operating system I want to use in peace. That seems to be much more true today than it did 20 years ago.
It was interesting to read your comment and find myself disagreeing with every single point you made. I'm not invested enough to argue about any of it; it's really just a meta observation that stood out to me: obviously it's still possible to have substantially different points of view on even the most basic aspects. I guess that's a good thing; at least it feels kinda reassuring to me. We could both be right, and the truth is probably somewhere in the middle.
> I don’t find that windows has forced me to cloud
Have you tried performing a fresh Home install recently without command line hacks? It's now impossible for a normal person to set up Windows without creating a MS account, forcing them to dip a toe into their cloud service connectivity and facilitate taking the next step towards paying them. They don't "force" you, but they sure will nag you incessantly about it, plopping that shit in Explorer, the Start Menu, tossing One Drive in the menubar at startup, shoving it in your face on login after a big update, etc. It's a pathetic cash grab everywhere you look.
A lot of this isn't very relevant to my personal use case and/or has not been my experience.
- I have had my Microsoft account connected since early in the Windows 10 days so that I can use my Xbox library. For my personal use case it doesn't really bother me that I have to log in. Sure, most competing commercial OSes don't straight up force you to log in, but as an example I never really used my Mac laptop without the Apple ID logged in because it has some pretty clear benefits and essentially no discernible downsides for me; the downsides mostly boil down to what-if scenarios and thought experiments. To me, Microsoft forcing you to log in with an account is not a big deal in the context of commercial software with a paid license. I can certainly understand why it might be a big deal in a different context. I can certainly see why my own Linux laptop is more appealing in not having this requirement. However, I specifically use Windows for a lot of commercial stuff - Steam, Xbox, etc. Being logged in was going to happen anyway, at least for me.
- As far as being nagged to pay, use Edge/Bing, or buy cloud stuff from Microsoft, all of that has been extremely easy to dismiss permanently. I have not needed to use any power user tools or scripts.
- It's an outdated notion that OneDrive is tossed in the menu bar forever. In Windows 11, OneDrive can be uninstalled entirely like a standard app. When I open my Start Menu and search for "OneDrive," nothing comes up besides an obscure tangentially-related system setting. It's literally not there.
- Sure, various new things have been presented to me along with new updates, like Copilot and the like, but I have been forced into none of it. When I visit Settings > Apps > AI Components, nothing is installed. When I type "Copilot" into the Start Menu, nothing comes up besides Windows Store search suggestions (apps I have not installed) and a keyboard key customization setting. Copilot is literally not there.
- I think there's actually a good argument that upsells like OneDrive/Copilot (again, in my experience easy to dismiss once a year and uninstall permanently) that solve complicated problems for the median user (secure backups, document storage, AI assistant) are a decently tasteful way to fund a commercial operating system. All of that stuff is optional, and I can just say no, while paying for annual point releases (e.g. Mac OS X) kinda sucked.
Goodness, the file save dialog(s) on Windows - it makes it so hard to save a file into my personal space. It's unintuitive, and you need to click through, I think, a couple of dialog boxes before you get to 'Your Documents'.
Two things can be bad! But the GTK file picker has improved and now has thumbnails, while you can't really trust MS not to continue to damage its file picker
Office has a particularly annoying dark pattern when saving a file. It hides the regular save dialog behind a tiny button in a confusing UI embedded in the main window that is designed to misdirect the user into saving files on OneDrive.
Many other programs do still open the standard file dialog directly, but even there, the local drive and directory hierarchy is hidden behind a folded "This Computer" node in the tree view that is itself below the fold most of the time.
Yeah, this is the only Microsoft application I am aware of that does this, and I actually think that most Office users want to save to OneDrive and that it makes sense in this context.
The median Office user is using it at work and your employer doesn’t want you saving documents in places you will lose them.
Ditto for universities and schools that provide 365.
I just installed Windows on a new laptop and somehow my user directory was set up in a OneDrive subfolder and backed up to their cloud. Between that, Microsoft basically demanding I use their online account to log in, Windows harassing me to finish setting up my computer every time I turn it on because they want me to change my default browser and buy subscriptions, and the random forced update restarts I can't seem to fully disable, I've had it. So I finally made the full-time switch to Kubuntu. Also, it's a brand new $1k laptop with 16GB of RAM and Windows uses half of it. I'm closing apps to save the RAM. Kubuntu uses 2GB.
I wonder how much of it is to collect data and sell ads compared to just getting people to start utilizing what is now Microsoft's core resource, which is cloud services.
For them, getting you to use OneDrive is a (huge) step towards getting you to pay them for more storage on OneDrive, and it also allows them to use their advantage as the OS provider to get you using features that both keep you from moving away from Windows and keep you from moving to Dropbox or another cloud competitor that normal consumers commonly use. For example, OneDrive desktop sync tied to your Microsoft login, so you can log into a new system and have it put your preferences and files in place.
Having more data with which to monetize people is useful, but I would bet that they value the lock-in of integrated services far more, as that's where they can possibly grow (by offering more services once you're less likely to leave), and growth is king.
It's the same thing Google does (and Samsung also attempts to do with their custom apps and store) with Android, but at the desktop level. Apple is able to do it for both desktop and mobile.
> but I still find it hilarious that a supposedly serious server OS would default to showing you the weather and ads in the start menu.
In my experience that's just not true. Microsoft's client OSes like Win 11 and 10 include these consumer-oriented "features" [1], but they're not present on server versions of Windows.
[1] I agree that the weather widget etc is annoying, even though it is easy to disable.
I don't think Windows Server has ads in the menu by default (I don't remember about the weather, though); the defaults are pretty sensible there, since it's a minority OS that has to compete, while desktop Windows is a monopoly free to inflict whatever it wants onto users without having to fear any kind of consequence.
One thing to remember is that Windows servers are deployed with GPOs pre-configured, so you don't usually see these unless admins leave them at their defaults. Plus, Enterprise/Education editions can turn off tracking using the same mechanisms.
I switched a couple months ago. This is my third time trying to switch to desktop Linux, and things are very different this time.
I installed CachyOS and all of my hardware just worked, including NVIDIA/Wayland. No real bugs beyond incorrect monitor positioning, and some tinkering needed for Diablo 4/Battle.net.
The Diablo 4 issue is present on Windows as well, and ironically, there isn't a fix on Windows for those affected. On Linux, a DXVK config change solves the bug.
It really is hard to overstate just how much progress there's been in the past few years. I first started using Linux in late 2012 (with Ubuntu 12.10 being the first version that actually came with my laptop's wifi firmware in the default installation; when I first tried 12.04 I had to plug it into ethernet just to download it). By that point, graphical stuff mostly worked without needing a ton of manual work, and it was past the era where I would have had to compile a custom kernel or something (although a few years later I did learn how to do that just for the fun of tinkering, when I got a macbook with a wifi driver that wasn't released in a stable kernel for another few months). But when I started getting into gaming in the later part of the decade, I had to spend a decent bit of time learning about Wine, Crossover, Lutris, etc. Over the course of the next few years I started playing around with Proton in Steam, even for games that aren't released on Steam, and nowadays I don't even have Lutris or Crossover installed; I can't remember the last time I tried to play a game that Proton couldn't run.
At this point, Valve has done enough to make Linux gaming viable that they might have permanently bought my goodwill. Right now I mostly play on my Steam Deck an equal mix of games that are and aren't from Steam (streamed from my desktop with Moonlight, which itself is a third-party app rather than from Steam), but even if they started trying to lock things down more, I'm not sure I'd be able to get mad at them. So much of the investment they've made into the ecosystem has been in the tooling itself that isn't exclusive to them, ostensibly for the purpose of entering the "handheld desktop" gaming market (not sure what exactly to call it, but playing the same PC games on handhelds is demonstrably different from a handheld console with a separate catalog), but they did it in a way that benefited a lot more than just that. I don't pretend they're a perfect company, because those don't exist, but as far as companies go, this might be the first time I actually identify as a fan of one.
> No real bugs beyond incorrect monitor positioning
Windows really needs to catch up with this. Multiple monitors have been a thing in Linux pretty much since the beginning of X.
Why can't I plug a Windows laptop into a docking station, and expect the screens to come up in the same order they were in last time? Why is it so hard?
> Why can't I plug a Windows laptop into a docking station, and expect the screens to come up in the same order they were in last time? Why is it so hard?
I regularly move my work Win 11 Pro laptop between three different multi-monitor (hdmi) setups, and it works flawlessly. I don't recall any problems with Win10 over many years either.
My last 2 laptops have really struggled with Win 10/11 multi-monitor support. Explorer would often crash, taskbars would not populate, behaviors weren't consistent, taskbars would reset themselves, settings would change randomly after reboots, not to mention updates resetting all my settings and there being no real way to disable updates cuz Windows would re-enable them.
Did I mention Explorer would crash pretty often? Like, half the time I plugged in a docking station it would crash Explorer. That then reset all the settings. lol, just a mess.
Pop!_OS is simple plug-and-play on any setup I've tried it on, over USB 3 or HDMI/DP. Works great.
Multi-monitor mostly works fine for me in Windows 11, but I do have consistent issues when dragging windows between displays with different dpi scaling. They have the same resolution but the dpi scaling on the laptop display doesn't match that on the two 27" displays next to it.
Adobe Acrobat in particular takes multiple seconds to drag a window from the laptop screen to one of the attached displays when a PDF is open. Now, this is on a 6 year old laptop due to be replaced, but it was fairly high spec when it was purchased (64 GB, RTX 2060, NVMe SSD). It really shouldn't be making me wait on 2d rendering of a document.
My non-corporate desktop has 3 screens, screen#2 is shared with a corporate laptop (via a KVM, but the issues happen without it as well).
If I switch that monitor to the other machine, Windows re-arranges ALL windows to appear on the new "primary" portrait-oriented screen#1, some maximised to fill the screen, some not. They stay there after the other screen is reconnected.
Possibly because the screen being switched is the "primary" screen? At least it's consistent behaviour between both Win10 AND Win11, which is nice.
Why can't I lock my computer and have Windows turn off my displayport monitor without having it turn itself on and off every few minutes until I log back in?
Why can't I turn off the power button on my monitor and then turn it back on to keep using it again without having to shut down my PC, turn off the PSU switch, press the power button to fully power it down, then bring everything back up? I just want my monitors off when I'm in bed...
> Why can't I plug a Windows laptop into a docking station, and expect the screens to come up in the same order they were in last time? Why is it so hard?
I've never seen this work correctly. My work dock breaks monitor ordering on MacOS reliably and Gnome+Wayland frequently. I don't remember if it broke for Xorg. My home monitor setup breaks mouse behavior in borderless fullscreen and libreoffice scaling on KDE+Wayland.
I don't do a ton of multi-monitor stuff with Windows these days, but I certainly have done a good bit of it. It worked OK on my desktop at home with three screens, and when I sometimes plug an extra monitor into my laptop at work it seems to put things back how they were last time.
But I don't recall a time when Bluetooth was "good" on Windows -- like, at all. I've spent somewhere in the realm of 20 years now dinking around with it. As far as I can tell, it has always been a miserable experience.
> and some tinkering needed for Diablo 4/Battle.net
Funnily enough, this is the same thing I tried to do just last month. I installed CachyOS after not having Linux on my desktop for a very long time, tried installing Battle.net, ran into too many issues, and haven't come back yet (to be honest, I didn't try too many avenues to fix it).
If you don't mind me asking what was the tinkering you had to do to make this work? Thanks!
CreateProcessA() on Windows is very slow. A significant portion of the perceived speedup for development tasks is that fork() takes on the order of microseconds, but creating a Windows process takes ~50ms, sometimes several times that if DEP is enabled. This is VERY painful if you try to use fork-based multiprocessing programs directly.
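For the curious, here's a rough way to see the Linux-side number yourself. This is just an illustrative sketch I'm adding (POSIX only, no warm-up, no statistics), not a rigorous benchmark:

    /* Times fork() + exec of /bin/true, averaged over a fixed number of runs.
       Illustrative only: real workloads exec larger binaries and load more libraries. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const int runs = 100;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < runs; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                execl("/bin/true", "true", (char *)NULL);
                _exit(127);               /* only reached if exec fails */
            }
            waitpid(pid, NULL, 0);        /* reap so children don't pile up */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("avg fork+exec+wait: %.3f ms\n", ms / runs);
        return 0;
    }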
I recently converted a large SVN repository to Git using git-svn.
Started on Windows. After five days it failed for some reason, so I had to rerun it (I forgot an author or something along those lines; trivial fix). Meanwhile I looked into why it was so slow, and saw that git-svn spun up Perl commands like crazy.
Decided to spin up a Linux VM. After fixing the trivial issue it completed in literally a couple of hours.
Interesting, I wonder why DEP would degrade process creation performance. My understanding is it's just a flag in page table entries to forbid execution, I am not sure how this could impact performance so much (except that data and code now have to be mapped separately).
Uh, AMD drivers have most assuredly not always just worked. They do now, and they have for something like 10 years, but before that they were a steaming pile of locked-in garbage.
not to split hairs, but I think the parent is justified in saying they “always worked” if they’ve been this good for a decade.
If I was 10 years younger than I am today, my perspective would have been that it “always worked” and at some point we have to acknowledge that there has been good work done and things are quite stable in the modern day. 10y is not a small amount of time to prove it out.
Clicking through to their source, it seems true enough despite their protests.
It's not entirely built with React Native, but React Native does seem to be responsible for at least one element of the start menu that appears initially when the menu is presented.
I never understood why file search is SOOO bad on Windows (Mac too). It's so damn slow, and even feature-wise I never figured out why it was so difficult to just search for files in a given directory.
It regressed compared to Windows 10 too. I have a folder with photos, normally sorted by date taken. On Windows 10 I would open the folder and they were always sorted correctly the moment I opened it. Maybe there was a point in time at the start where the system had to sort them for the first time, but ever since, they were always shown correctly the second I opened the folder. On Windows 11? Every single time it opens unsorted, the photos in some random god-knows-what order, and literally 10 seconds(!!!!) later they suddenly move themselves to the correct position. Every single time. That's with maybe 200 photos, on a machine with 16 cores and 64GB of RAM. People coding on 16kHz chips decades ago could do this faster than whatever Microsoft is doing.
"Everything" is another that puts the default search to shame. I've also seen people who just have a script that pumps all new files into a txt file every so often and runs bruteforce ripgrep on it, which gives instant interactive results. It's really hard to imagine coming up with a search routine that is as slow and unreliable as what ships with mainstream OS file managers.
File search can also be the fastest thing among all 3 OSes, it's not even funny: just use the Everything search app and a good file manager that can integrate with Everything.
> file Explorer (buggily) parsing files to display metadata before listing them
It's crazy: open a directory full of .mp4s and sometimes the list briefly appears but then goes completely blank, only to start listing them again one by one, taking about one second per entry, while being unresponsive to input.
I have this exact same problem with OGG files. Either their parser has some insane bugs or they are starting an isolation VM per file to run the parse. Either way, unusable.
I did the same, though I jumped to Pop!_OS instead, which is also Ubuntu-based. Then a year back I got into EndeavourOS, an Arch-based distro, and have not looked back since. I use it on everything I can put Linux on.
For me it wasn't driver issues, more so "you need glibc to be a version that Debian / Ubuntu can't upgrade without it being potentially problematic" just to run 3D printer slicer software. I said screw this, give me bleeding edge and give it to me now. I had been itching to try something slightly more bleeding edge; Fedora was on my mind but it did not work out as I wanted it to.
Out of curiosity, why are such high fps numbers desirable? Maybe I don't understand how displays work, but how does having fps > refresh rate work? Aren't many of those frames just wasted?
If you have a 60Hz display and the game is locked to 60fps, when you take an action it may take up to 16.67 milliseconds for that action to register. If the game is running at 500fps, it registers within 2 milliseconds, even though you won't see the action for up to 16.67 milliseconds later. At extremely high levels of competition, this matters.
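Just to make that arithmetic explicit, here is a throwaway snippet of my own (it assumes, as the comment above does, that input is polled once per rendered frame):

    /* Worst-case gap between pressing a key and the game sampling it,
       assuming the game polls input once per rendered frame. */
    #include <stdio.h>

    int main(void)
    {
        const double fps[] = { 60.0, 500.0 };
        for (int i = 0; i < 2; i++) {
            printf("%3.0f fps -> input sampled at most %.2f ms late\n",
                   fps[i], 1000.0 / fps[i]);   /* 16.67 ms at 60 fps, 2.00 ms at 500 fps */
        }
        return 0;
    }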
> even though you won't see the action for up to 16.67 milliseconds later
Note that this is only the case if you have vsync enabled. Without vsync you will see the action (or some reaction anyway) +2ms later instead of +16.67ms, just not the full frame. This will manifest as screen tearing though if the screen changes are big - though it is up to personal preference if it bothers you or not.
Personally I always disable vsync, even on my high refresh rate monitor, as I like having the fastest feedback possible (I do not even run a desktop compositor because of that), and I do not mind screen tearing (though tearing is much less visible on a high refresh monitor than a 60Hz one).
> If the game is running at 500fps, it registers within 2 milliseconds, even though you won't see the action for up to 16.67 milliseconds later.
Okay I think I follow this, but I think I'd frame it a little differently. I guess it makes more sense to me if I think about your statement as "the frame I'm seeing is only 2ms old, instead of 16.67ms old". I'm still not seeing the action for 16.67ms since the last frame I saw, but I'm seeing a frame that was produced _much_ more recently than 16.67ms ago.
This is mostly like high fidelity audio equipment, or extreme coffee preparation. Waste of time for most people.
I used to play CS:Go at a pretty high level (MGE - LE depending on free time), putting me in the top 10%. Same with Overwatch.
Most of the time you're not dying in a clutch, both-of-you-pulling-the-trigger situation. You missed, they didn't; that is what usually happens.
I never bothered with any of that stuff; it doesn't make a meaningful difference unless you're in the top 1%.
But there's a huge number of people who play these games who THINK it does. The reason they're losing isn't because of 2ms command registrations, it's because they made a mistake and want to blame something else.
I'm sure that's true, but low latency can just plain feel good. I don't play FPSes at all, and I can totally understand how low latency helps the feeling of being in control. Chasing high refresh rates and low latency seems a lot more reasonable to me than chasing high resolution.
That's correct, and the most competitive multiplayer games tend to have fixed tick rates on the server, but the higher FPS is still beneficial (again, theoretically for all but the highest level of competition) because your client side inputs are sampled more frequently and your rendered frames are at most a couple ms old.
I think you're missing the point. The game could be processing input and doing a state update at 1000Hz, while still rendering a mere 60fps. There doesn't have to be any correlation whatsoever between frame rate and input processing. Furthermore, this would actually have less latency because there won't be a pipeline of frame buffers being worked on.
Tying the input loop to the render loop is a totally arbitrary decision that the game industry is needlessly perpetuating.
No, I'm explaining how most games work in practice.
You're right a game could be made that works that way. I'm not aware of one, but I don't have exhaustive knowledge and it wouldn't surprise me if examples exist, but that was not the question.
I would not at all be surprised that there are examples out there, although I don't know of them. Tying the game state to the render loop is a decision made very deep in the game engine, so you'd have to do extensive modifications to change any of the mainstream engines to do something else. Not worth the effort.
But greenfield code shouldn't be perpetuating this mistake.
On most modern engines there is already a fixed-step that runs at a fixed speed to make physics calculation deterministic, so this independence is possible.
However, while it is technically possible to run the state updates at a higher frequency, this isn't done in practice because the rendering part wouldn't be able to consume that extra precision anyway.
That's mainly because the game state kinda needs to remain locked: 1) while rendering a frame, to avoid visual artifacts (e.g. the character and its weapon rendered at different places because the weapon started rendering after a state change) or even crashes (due to reading partially modified data); 2) while fixed-step physics updates are being applied; and 3) if there's any kind of work happening in different threads (common in high-FPS games).
You could technically copy the game state functional-style when it needs to be used, but the benefits would be minimal: input/state changes are extremely fast compared to anything else. Doing this "too early" can even cause input lag. So the simple solution is just to do the state change at the beginning of the while loop, at the last possible moment before this data is processed.
Source: worked professionally with games in a past life and been in a lot of those discussions!
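For anyone who hasn't seen what that decoupling looks like, here's a generic fixed-timestep loop sketch (my own toy example, not taken from any particular engine): simulation ticks at a fixed rate while rendering runs as often as it can and interpolates with the leftover fraction. The stubs are no-ops just so it compiles and runs.

    /* Toy fixed-timestep loop: simulation advances in fixed TICK_SECONDS steps,
       rendering happens as often as possible and interpolates between states. */
    #include <stdio.h>
    #include <time.h>

    #define TICK_SECONDS (1.0 / 120.0)   /* simulation rate, independent of FPS */

    static double now_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* No-op stubs standing in for a real engine's input, physics and renderer. */
    static void poll_input(void) {}
    static void update_simulation(double dt) { (void)dt; }
    static void render(double alpha) { (void)alpha; }   /* alpha in [0,1): blend factor */

    int main(void)
    {
        double previous = now_seconds();
        double accumulator = 0.0;
        double end = previous + 1.0;     /* run the demo for ~1 second */
        long ticks = 0, frames = 0;

        while (now_seconds() < end) {
            double current = now_seconds();
            accumulator += current - previous;
            previous = current;

            /* Run zero or more fixed-size simulation steps to catch up. */
            while (accumulator >= TICK_SECONDS) {
                poll_input();             /* input sampled per tick, not per frame */
                update_simulation(TICK_SECONDS);
                accumulator -= TICK_SECONDS;
                ticks++;
            }

            /* Render whenever we get here; the leftover fraction lets the
               renderer interpolate so motion stays smooth at any frame rate. */
            render(accumulator / TICK_SECONDS);
            frames++;
        }
        printf("%ld simulation ticks, %ld rendered frames in ~1s\n", ticks, frames);
        return 0;
    }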
I can give an example. I'd heard that Super Meat Boy was hard, and it was, but it turned out that if you ran it at the 60Hz it was designed for instead of 75Hz, it was considerably easier. At 120Hz it was unplayable.
You kind of understand how the game loop is tied to the refresh rate in games like this, though. Practicing "pixel perfect" jumps must be challenging if the engine updates aren't necessarily in sync with what goes on on screen. And in the really old days (when platformers were invented!) there was no real alternative to having the engine in sync with the screen.
In the model I am describing there would be whole game state updates on every tick cycle, completely decoupling the frame rate from the response latency and prediction steps.
Doing that will increase input latency, not decrease it.
There are many tick rates that happen at the same time in a game, but generally grabbing the latest input at the last possible moment before updating the camera position/rotation is the best way to reduce latency.
It doesn't matter if you're processing input at 1000Hz if the rendered output is going to have 16ms of latency embedded in it. If you can render the game in 1ms then the image generated has 1ms of latency embedded into it.
In a magical ideal world if you know how long a frame is going to take to render, you could schedule it to execute at a specific time to minimise input latency, but it introduces a lot of other problems like both being very vulnerable to jitter and also software scheduling is jittery.
The game has to process the input, but it also has to update the "world" (which might also involve separate processing like physics) and then also render it, both visually and in audio. With network and server updates in between, things get even more complex. Input-to-screen lag and latency is a hardcore topic; I've been diving into it on and off for the past few years. One thing that would be really sweet of the hardware/OS/driver guys would be info on when the frame was actually displayed. There's no such thing available yet, to my knowledge.
It doesn't and well programmed games won't be tied to fps that way. I'm not sure anything past 300 fps plausibly matters for overwatch even with the best monitor available.
You want your minimum FPS to be your refresh rate. You won't notice when you're over it, but you likely will if you go below it.
In Counter-Strike, smoke grenades used to (and still do, to an extent) dip your FPS into a slideshow. You want to ensure your opponent can't exploit these things.
Not OP but got quite a bit of experience with this playing competitive FPS for a decade. You're right that refresh rate sets the physical truth of it, e.g. 180 FPS on a 160 Hz monitor won't give you much advantage over 160 FPS if at all. However reaching full multiples of your refresh rate in FPS – 320 in this instance, 480, and so on – will, and not only in theory but you'll feel it subjectively too. I get ~500-600 FPS in counter-strike and I have my FPS capped to 480 to get the most of my current hardware (160 Hz). Getting a 240 Hz monitor would make it smoother. Upgrading the PC to get more multiples would also.
If you're not using V-sync, if a new frame is rendered while the previous one wasn't fully displayed yet, it gets swapped to the fresher one half-way through. This causes ugly screen tearing, but makes the game more responsive. You won't see the whole screen update at once, but like 1/5th of it will react instantly.
I used to do that until I switched to Wayland which forces vsync. It felt so unresponsive that I bought a 165hz display as a solution to that.
To a certain extent, for online games it can be an advantage (at least it feels like it to me). AFAIK the server updates state between players at some (tick) rate; when you have FPS above the tick rate, the game interpolates between the states. The issue is that frames and networking might not be constantly synced, so you are juggling fps, screen refresh rate, ping and tick rate. In other words, the more frames you have, the higher the chance you will "get lucky" with the latency of the game.
> Out of curiosity, why are such high fps numbers desirable? Maybe I don't understand how displays work, but how does having fps > refresh rate work? Aren't many of those frames just wasted?
I just quote the central relevant sentences of this section:
"For frames that are completed much faster than interval between refreshes, it is possible to replace a back buffers' frames with newer iterations multiple times before copying. This means frames may be written to the back buffer that are never used at all before being overwritten by successive frames."
Tying the input and simulation rates to the screen refresh rate is an old "best practice" that is still used in some games. In fact, a long time ago it was even an actual good practice.
I think it was just to show that the performance is comparable to Windows, implying that it also will be fine for games/settings where fps is in the range that does matter.
osu! (a music beat-clicking game) has a built-in screen frequency A/B test, and despite running on a 60Hz screen I can reliably pass that test up to 240Hz. It's not just having 60 frames ready per second, it's what's in those frames.
I don't understand how this works, I guess? If your screen is 60Hz, you're drawing four frames for every one that ends up getting displayed. You won't even see the other three, right? If you can't see the frames, what difference does what's in them make?
[E] Answered my own question elsewhere: the difference is the "freshness" of the frame. Higher frame rates mean the frame you do end up seeing was produced more recently than the last frame you actually saw
An aside based on what you have mentioned: what the heck happened to the Windows file manager? It used to be that Windows was rock solid while Linux variants had various parsing performance/stability issues. Now it feels like it's the complete opposite.
In Win 11 I am constantly finding the whole of Explorer locking up just copying files via USB, for reasons unknown. Whereas on my Linux machines, I have absolute faith that it will just handle it, or at the very least not just stop spinning in the background in zombie land, not dead enough to die but not alive enough to do anything. Windows is in a very unfortunate place right now. I do hope they will wake up and try to get things back on the road, but I am very doubtful considering the leadership they have nowadays.
If you haven't enabled the checkbox for starting Explorer in a new process (which isn't super easy to find), it will basically be one process running most of the Windows UI, which means that when they write shoddy code, the UI tends to hang.
Does anybody have security concerns about running games with Proton/Wine? Games already have a massive attack surface and I can imagine there are some nasty bugs lurking in the compat layer that would enable RCEs not possible on Windows. This is kind of holding me back from making the jump.
You can trivially sandbox your Steam installation with pretty much zero performance overhead, if you install it through Flatpak. Using an app like Flatseal, you can then configure Steam to only have access to a designated drive with next to no further contact to your PC. You can individually disable access to networking, audio, D-Bus, USB devices, Bluetooth, shared memory and even the GPU itself if you're really freaked out. No command line needed.
That being said, I just run Steam natively on NixOS and have never seen any issues. The biggest RCEs I'm worried about are Ring 0 anticheat nuking my desktop like CrowdStrike.
>Steam installation with pretty much zero performance overhead, if you install it through Flatpak.
In reality that isn't true. Flatpak steam runs like poo for a lot of people. Really, flatpak should be avoided if there are other installation methods, in general.
There are. But there are many more such bugs in DirectX on Windows, and it’s a much bigger target. If a national intelligence organization wants to burn a Proton zero-day on my Steam Deck, cool!
People oversell how much windows just works. It only does so because it comes pre installed. I regularly reinstall my wife's and it's always more of a pain in the ass than Linux.
> Can I transfer my Overwatch battlenet account to steam? I really want to jump ship too.
Seems like you can just keep using the Battle.net account on GNU/Linux. You just add the Battle.net installer as a "non-steam game" (bottom left of the games list). Then, you start it, add your account, install the game and it "just works". I used it on the steam deck to play D4 beta and D2R on my CachyOS desktop.
> How is Proton with nVidia drivers? I have a 3080.
My battle-hardened GTX 1060 served me for years. I recently upgraded my whole rig from Debian + Intel + Nvidia to full AMD, and the RX 9070 XT works very well, with the caveat that I had to switch to a newer kernel on CachyOS to support it. That was 4 months ago and the situation should be resolved by now, so you can prob use any old normie distro.
On Wayland with GNOME/Plasma I've had great luck with games, Firefox is almost there with some bugginess, and video-playing apps that use mpv, like Plex, work great. It's definitely not perfect and you may have to dive into configuring per-app flags to make them use HDR, but the easy stuff generally works.
I am working on this for a different definition of the term "dataset". I started learning deep learning, which led me to start building datasets.
Wanting to store versions of the datasets efficiently, I started building a version control system for them. It tracks objects and annotations and can roll back to any point in time. It helps answer questions like what has changed since the last release and which user made which changes.
Still working on the core library but I'm excited for it.
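Nothing of the core library is public yet, so just to make the idea concrete, here is a rough sketch of the kind of append-only change log I mean (all names here are hypothetical, not the actual API):

    # Hypothetical sketch of an append-only change log for a dataset VCS.
    # Every edit is an immutable event, so any past state can be rebuilt by
    # replaying events up to a timestamp, and "what changed since release X"
    # falls out of comparing replayed states.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class Change:
        object_id: str            # image/sample the annotation belongs to
        annotation: dict | None   # None means the annotation was deleted
        user: str                 # who made the change, for attribution
        at: datetime              # when it happened

    @dataclass
    class DatasetLog:
        changes: list[Change] = field(default_factory=list)

        def record(self, object_id: str, annotation: dict | None, user: str) -> None:
            self.changes.append(
                Change(object_id, annotation, user, datetime.now(timezone.utc))
            )

        def state_at(self, when: datetime) -> dict[str, dict]:
            """Replay the log to reconstruct the dataset as of `when`."""
            state: dict[str, dict] = {}
            for c in self.changes:  # changes are recorded in time order
                if c.at > when:
                    break
                if c.annotation is None:
                    state.pop(c.object_id, None)
                else:
                    state[c.object_id] = c.annotation
            return state

        def changed_since(self, when: datetime) -> set[str]:
            """Object ids touched after `when`, e.g. since the last release."""
            return {c.object_id for c in self.changes if c.at > when}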
Thanks for the suggestion. I have glanced through the docs in the past but haven't tried it. I am trying to do a bit more than what git can offer.
First the good. Git LFS solves the issue of checking out a massive repository in whole.
Git can work pretty well if your annotations are in a text based format and stored one annotation per file. That makes it easy to track and attribute annotation changes.
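To make that concrete, a minimal sketch (a hypothetical layout, not any particular tool's format): one small JSON file per annotation, written with a stable key order so git diff and blame stay per-annotation:

    # Hypothetical layout: one small JSON file per annotation so that git
    # diff/blame attribute changes per annotation rather than per dataset.
    import json
    from pathlib import Path

    def write_annotation(root: Path, image_id: str, ann_id: str, ann: dict) -> None:
        path = root / image_id / f"{ann_id}.json"
        path.parent.mkdir(parents=True, exist_ok=True)
        # sort_keys + trailing newline keep files byte-stable across tools,
        # so untouched annotations never show up as spurious diffs.
        path.write_text(json.dumps(ann, sort_keys=True, indent=2) + "\n")

    write_annotation(Path("annotations"), "img_0001", "box_01",
                     {"label": "car", "bbox": [10, 20, 200, 120]})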
What I'm building can serve as a backend to labeling. There is a built-in workflow for reviewing changes, objects have different statuses (in annotation, included in a release, etc.), releases are reproducible, things like that.
It is really designed for collaboration with untrusted third parties. Imagine someone making a pull request for a binary annotation format. To review it you would have to clone it, load it in an annotation tool, then go and tie what you saw to what is in the pull request. What do you do if, say, 90% of the annotations are correct? Reject everything? Very tough, and it also assumes your annotator can make a pull request.
Mine will still require you to bring your own annotation tool, but makes it much easier to integrate the review process.
I started drinking milk a year or two ago. Most milks have vitamin D added. It has made a stark difference in my life and it wasn't until I was looking back saying "I'm a lot happier than I was this time last year" that I figured out what changed.
Highly recommend supplementing with D. If you can tolerate lactose, milk is a nice pathway for it.
This is why, even though as a programmer I'm congenitally unfit for it, I go outside sometimes. Sunlight also gives you vitamin D, and has all sorts of other benefits. As I don't go outside often, I'm not particularly concerned about skin cancer, and I'm generally skeptical of the idea that terrestrial life is maladapted to the Sun. I buy that overexposure has bad long-term effects, but the current view seems to be that you need to wear sunblock for any level of exposure. That just seems wildly unlikely, and the fact that the body needs sunlight to produce essential stuff like vitamin D further reinforces my belief that we are overcorrecting into unhealthy territory.
As a person not from America, the fact that this is the mainstream view there is worrying to me. Over here in the UK it's 100% normal to just use the rule of thumb that as long as you try not to get burnt, it's healthy to get some sun. No sunblock needed unless you know you'll be out in it for many hours without shade, like a trip to the beach or a long walk.
In general, northern countries need more protection from the sun, not less. The rates of skin cancer go up significantly as you move north. My guess is that it's partly cultural, but I don't know the cause for sure. When it is dark all winter, people want to spend as much time in the sun as they can during summer, perhaps. The fact that you also get more natural summer exposure due to longer days is probably part of it.
This is a very common attitude in northern Europe, but it's dangerous. I recently saw a dermatologist; I live at approximately the same latitude as the UK, just further east. He strongly advised sunblock, especially for children, yet you very rarely see children wearing it during the summer. You are really playing with your health if you ignore these dangers.
Ozone: hey, at least we managed to ban CFCs internationally.
If we can do that, it's confusing that we can't incrementally tackle climate change with selective international agreements targeting the changes that deliver the most impact for the least socio-economic burden.
The high cholesterol level is likely correlated with vitamin D.
A lot of these studies came out of heart disease investigations within the black community. In the US there is simply more heart disease than in West Africa, and after controlling for weight, income, diet, etc., researchers guessed the difference was likely sunlight and outdoor time.
Follow-up studies found similar results, albeit less pronounced, in white and Middle Eastern people.
> Sunscreen also blocks our skin from making vitamin D, but that’s OK, says the American Academy of Dermatology, which takes a zero-tolerance stance[1] on sun exposure: “You need to protect your skin from the sun every day, even when it’s cloudy,” it advises on its website. Better to slather on sunblock, we’ve all been told, and compensate with vitamin D pills.
The vast, vast majority of Google results do not support your claims. I’m glad you found one that did, and I’m all for more studies, but “it’s really not that hard to google” is both condescending and doesn’t move the argument forward. You are the one making an argument against the established widespread medical opinion; the onus is on you to prove your argument, not on me to prove your argument for you.
“ There is little evidence that sunscreen decreases 25(OH)D concentration when used in real-life settings, suggesting that concerns about vitamin D should not negate skin cancer prevention advice. However, there have been no trials of the high-SPF sunscreens that are now widely recommended. What's already known about this topic? Previous experimental studies suggest that sunscreen can block vitamin D production in the skin but use artificially generated ultraviolet radiation with a spectral output unlike that seen in terrestrial sunlight. Nonsystematic reviews of observational studies suggest that use in real life does not cause vitamin D deficiency. What does this study add? This study systematically reviewed all experimental studies, field trials and observational studies for the first time. While the experimental studies support the theoretical risk that sunscreen use may affect vitamin D, the weight of evidence from field trials and observational studies suggests that the risk is low. We highlight the lack of adequate evidence regarding use of the very high sun protection factor sunscreens that are now recommended and widely used.”
What exactly do you think my claims are? I'm not the one who made the original claim; I only supported the claim that American dermatologists have a zero-tolerance policy for sun exposure.
Either you think there's a more representative voice for American dermatologists than the AAD, or, what seems more likely, you don't understand the argument I was supporting.
The claim I actually wanted evidence for was the original commenter's assertion that sunblock somehow raises cholesterol to unhealthy levels.
The quote you chose from the article (which I did read, for what it’s worth, but was also very light on sources) strongly suggested that sunblock blocks Vitamin D production. The science on that is unclear, but prior research suggests it doesn’t; that said, it warrants more research. I took your choice of that quote specifically to mean that was a claim you were making. If that wasn’t the case and you simply meant to show that the AAD suggested not being in the sun without sunblock, then I agree.
The science on melanoma being very bad is pretty cut and dry, on the other hand.
I never disagreed that American dermatologists tend to follow a “zero tolerance” policy for sun exposure without sunblock. They would very much like you to get sun with sunblock, though.
The cholesterol thing sounded like sarcasm to my ears.
Apart from that, zero tolerance makes no sense - do they really recommend wearing sunscreen on a cloudy winter day? - but it's fair to say that some people still underestimate the destructive effects of overexposure.
And the elephant in the room is of course your skin type.
A couple of my friends have a low Fitzpatrick skin type, with reddish hair, light skin, and many dark spots.
For them sunscreen is a must, whereas I wouldn't even think about it.
Another interesting tangent: wasn't there a somewhat potent carcinogen in most sunscreen products?
I didn’t sense sarcasm. I could’ve misinterpreted.
Cloudy days still have plenty of UV, depending on where you are (especially near the equator).
Skin type matters, of course, and I would probably be less heavy handed with the recommendation, but the “zero tolerance” issue wasn’t the original point as I understood it.
[edit] Quote from the very article you posted:
“It's important to note that these results are from one study (Valisure), which hasn't yet been validated. Strangely, they also detected benzene in blank test tubes (no sunscreen), leaving some to question if the testing methods contributed to the levels detected.
Toxicologists note that even if you applied the worst sunscreen on the Valisure list to your entire body, you'd be exposed to less than half the amount of benzene you breathe in normal city air in a day. Benzene is also very unstable, so it's unclear how much would be absorbed through the skin.
Don’t let this study convince you to skip sunscreen altogether. Although benzene is a potential risk, it pales in comparison to the known, real risk of UV radiation. Instead, take the time to check that your preferred sunscreen isn’t on the list of contaminated products.”
I just criticized someone for not doing their own simple google search and now you're asking me to google for you as well? I'm really not sure what result you're hoping for here.
You really shouldn't be so sure of anything you're too lazy to validate yourself. If you're too lazy now, chances are you were too lazy to validate it when you formed the opinion to begin with.
“My thing is true, even though the vast majority of medical professionals and societies disagree with me. And I don’t have to prove it to you, as that is best left as an exercise to the reader” is lazy, and a terrible argument.
Even if you didn’t validate the established guidelines, that doesn’t actually make you lazy; as humans, we cannot possibly hope to empirically validate every single thing we are told, as that would be madness. We often rely on various sources to validate claims for us by running solid, peer-reviewed studies and then we read those, and the vast majority of those studies do not agree with you, though more studies definitely need to be run, particularly with higher SPF sunblocks and mineral sunblocks.
There are a lot of doctor cartels in the US that emphatically push singular ideologies: babies sleeping on their bellies die instantly, mothers who can't nurse are creating sickly autistic monsters, everyone must take statins, etc. See the parallel comment for a source, or visit a dermatologist.
There is no cartel that says babies sleeping on their bellies die instantly, or that mothers who don’t breastfeed are creating sickly autistic monsters. Literally not a single doctor who should be allowed to practice has said any of that, as it is far too extreme and one sided.
Investigating those issues? Sure. Possibly even believing it’s safer to sleep a baby not on their belly, or that mothers should breastfeed if they can because it is likely to be healthier for the baby? Absolutely.
But almost the entirety of your comment is an appeal to extremes, which is a logical fallacy.
This is not an American centric view. Dermatologists in many countries throughout Europe, especially northern countries, strongly advise the use of sunblock.
I live in NZ. The rest of the planet doesn't live this close to the ozone hole. We have the highest rates of skin cancer in the world. We need to use sun protection at all times.
My understanding is that the main correlation between sunlight and skin cancer is for sudden exposure to strong sun. If you go outside all year without sunblock (as people used to do back in the day), and thus build up and maintain a natural tan, the cancer risk is very low. But if you sit inside all year and then suddenly go to the beach in July, the risk increases.
I do not know a single person who died from skin cancer. And only a few who had skin cancer. The prevalence cannot be sufficiently high to justify a hysteria.
UV-A and UV-B are bad in different ways: sunburn is driven mostly by UV-B, skin aging mostly by UV-A, and both contribute to cancer risk, while UV-C is almost entirely absorbed by the atmosphere before it reaches us. Vitamin D synthesis results from UV-B exposure.
Sure, but as discussed elsewhere in this thread, the studies (at present, which doesn’t mean we won’t learn more in the future) do not support the assertion that sunblock prevents Vitamin D synthesis in any meaningful way, despite what a random blog post might tell you.
Amen. :) You're preaching to the choir. Health and wellness blogs generally service the theater of the placebo and hivemind ignorance bordering on mysticism.
Milk consumption is pretty horrible for the environment (and I'm not even mentioning what happens to animals on milk farms). I say that as a (full-of-shame) milk drinker, but recommending milk because of vitamin D is pretty ridiculous to me.
There are so many vitamin D pills that are much cheaper than milk, and much better for the environment while having the same function.
Medicine, computers, and building materials for EVs and well insulated homes are also "horrible" for the environment. We gladly make that sacrifice because we're happy with the trade. I understand you aren't, and that's fine.
The point being, advocating for abstention is rarely a winning strategy. Instead we should use technology and policy to improve our production methods. Then we can save the environment and continue to enjoy products we consider to be important to our lifestyles. Don't let perfect be the enemy of good.
Human civilization is pretty horrible for the environment in its current form. Trying to break that down and pin it on individual personal choices is an exercise in diversion. We need deep structural changes to how we source energy, how we solve logistics, and how we manage labor. Arguing about diet choices or the length of showers is just a way to keep us from tackling what really matters.
If most people were willing to change their diets overnight, maybe. Mathematically or statistically it might work, but it's irrelevant once you take real human beings into account. It's as disingenuous as saying "if everyone were nice, Earth would be paradise".
That doesn't sound much like a dietary 'choice' to me. I hope pricing can force the necessary changes in time. Honestly, I fully expect disaster-level sea level rise within my lifetime...
"many peer-reviewed studies, ... put livestock emissions at between 14.5 percent and 19.6 percent of the world’s total"
"... it doesn’t factor in the significant climate benefits we’d get if we freed up some of the land now dedicated to livestock farming and allowed forests to return, unlocking their potential as “carbon sinks” that absorb and sequester greenhouse gases from the air.
Scientists call this the opportunity cost of animal agriculture’s land use. Because animal farming takes up so much land — nearly 40 percent of the planet’s habitable land area — that opportunity cost is massive ...
"One study found that ending meat and dairy production could cancel out emissions from all other industries combined over the next 30 to 50 years."
> we can save the environment and continue to enjoy products we consider to be important to our lifestyles
No, we can't.
Without Changing Diets, Agriculture Alone Could Produce Enough Emissions to Surpass 1.5°C of Global Warming (2018)
We ought to worry about meat agriculture before cow milk consumption. Meat ag presents multiple existential threats greater than diabetes, cancer, or obesity:
- Cramming thousands of animals together with their excrement and humans who work with it creates a convenient pandemic pathogen bioreactor
- Antibiotic resistance, from abusing the same substances (just via a different supply chain) to make animals grow faster, at the expense of antibiotics ceasing to work in people
- Climate change, contributing roughly 10% of emissions
- Pollution of air, water, and soil (Ever see what pig farmers do with shit? They liquify it and spray it in the air in shit lakes.)
- Inefficient use of agricultural land and resources that could feed more people and more cheaply
Depends heavily on dairy farming practice. Open-pasture dairy (which is a minority practice) is not bad for the environment: low energy use for high calorie production, vastly reduced methane emissions, excellent soil management, humane treatment of the cows. And the milk tastes amazing.
Industrial agriculture is pretty horrible for the environment and also unsustainable for long term soil management. But it is what we need at the moment to feed the world's population.
>But it is what we need at the moment to feed the world's population.
Not true. The American food system is incredibly wasteful, and people are getting sick from diseases associated with western diets and overeating, like diabetes and heart disease. People eating traditional eastern diets use fewer resources and have fewer diet-related illnesses.
Industrial farming is not helpful or needed. The only thing it's good for is putting money in some people's pockets.
I started taking vitamin D supplements (D3 pills) after getting a blood test with below norm vitamin D level. Saw an immediate improvement in concentration, mood and energy throughout the day. (this is merely anecdotal, of course)
Couldn't agree more. Once I started prioritizing vitamin D in my diet and getting hours of sun when possible, it was like a 180 in my mental health. Eggs are a great source as well.
You don't need much fat to absorb vitamin D, and the amount of fat you eat with the supplement doesn't really affect the level in your bloodstream: https://pubmed.ncbi.nlm.nih.gov/23427007/
“ We conclude that absorption was increased when a 50,000 IU dose of vitamin D was taken with a low-fat meal, compared with a high-fat meal and no meal, but that the greater absorption did not result in higher plasma 25(OH)D levels in the low-fat meal group.”
Did you take before/after blood tests to confirm you had low blood levels and that drinking milk helped? A cup of milk is roughly 150 IU, while recommended supplements range from 600 to 2000 IU. Also, lactose-free milk is common nowadays.
Even better, eat fatty fish! This may sound weird but adding consistent sardines/herring/mackerel/salmon to your diet can be life changing. They are nutrient powerhouses, they’re the best dietary sources of both vitamin D and DHA/EPA fatty acids. Of course, they’re also good sources of protein.
Most people I've mentioned this to will say something like "but they're so fishy" or think that sardines are disgusting cat food. At least for canned sardines, what a lot of people don't know is that there is a wide range of quality in both the packing and the actual fish. Fishy odors come from trimethylamine, a byproduct of decomposition, and anecdotally it's more common in the cheap brands. There are brands like Matiz and Nuri that have much higher quality sardines; they're also physically much larger than what you might think sardines are supposed to look like, with only 3 or so fish per tin. I also like the newer brand Minna for a slightly cheaper option.
Small fish like sardines have very low amounts of heavy metals to the point it’s negligible. I couldn’t confidently tell you anything about microplastics though
You can charge up mushrooms with vitamin D (D2) by placing them in direct sunlight for 30 minutes. Any ordinary mushrooms. So salmon (D3) and charged mushrooms would do just as well.
A study back in 1974 reported that vit. D in milk was unaffected by pasteurization, boiling, or sterilization.
Hartman, A. M., and L. P. Dryden. 1974. Vitamins in milk and milk products. Pages 325–401 in Fundamentals of Dairy Chemistry, 2nd ed. B. H. Webb, A. H. Johnson, and J. A. Alford, ed. AVI/Van Nostrand Reinhold, New York, NY.
The effect of cooking on nutrients varies widely, depending on the specific cooking process and the nutrient in question.
"Nutrients" are typically defined as proteins, carbohydrates, lipids (fats), vitamins, and minerals.
Vitamin C is not very heat-stable, so you generally need to get that from raw fruits and vegetables (unless the food has been supplemented with it after cooking).
Vitamin D, by contrast, is pretty heat-stable.
Some proteins are rendered much more digestible by heat, so cooking actually improves the nutritional value of the food, in some cases by a great deal.
Lipids aren't generally affected much, though again some are more heat-stable than others. This is why some fats and oils are a better choice for deep-frying.
Carbohydrates are generally rendered more digestible by cooking, if anything (as long as they don't get so hot they start burning).
Minerals are mostly unaffected by heat. They can leach into the cooking water, so you'll lose some minerals if the cooking water is discarded. If it's something like soup, where the liquid is consumed, there's little or no impact.
There are even some commonly-consumed foods that are actually toxic unless they are cooked or otherwise processed. Cassava and some types of beans fall into this category.
I don't want to make a blanket statement, but I'd reckon that overall cooking helps more than it harms. Note that cooking is nearly universal across human cultures. Some cultures eat a lot more raw foods than others, true, but even the most raw-loving groups generally have some foods they cook (or otherwise process to break down, for instance, by fermentation).
Of course it does. Pasteurization involves rapidly heating and cooling the milk to kill off bacteria and viruses. That's going to denature a lot of enzymes as well as change the structure of a lot of proteins. To say that no vitamins or nutrition are affected is an incredibly false claim to make here.
> This is my 19 video [YouTube] playlist against drinking cow's milk from numerous perspectives
I’ve pretty much stopped trusting YouTube as a source for scientific evidence.
The motivation for YouTube content creators is views — much like news media, which I trust about as much.
Sure, there are individual creators that are probably fine, but I’d have to already recognize them to trust them. Otherwise it will be hard for me to not have some skepticism for random videos shared by others.
Otherwise, I view YouTube consumption as mostly entertainment, or DIY.
> Sure, there are individual creators that are probably fine, but I’d have to already recognize them to trust them.
There is also an unfortunate inverse correlation between subscribers/views/production quality and trustworthiness.
A channel can't succeed unless it optimizes for entertainment value and clickbait, after all.
So the videos with very high informational value generally have no discernible differences from completely insane conspiracy theory rants... well, aside from the words they're saying.
Okay, that's an exaggeration. The conspiracy theorists will likely have lots of videos, while the person with the good educational content will likely have uploaded <10 videos over several years
I didn't read the first link, but looking at the second one, this is a typical nutritional-pseudoscience misrepresentation of evidence. The nutritionstudies.org author states:
"A large observational cohort study[1] in Sweden found that women consuming more than 3 glasses of milk a day had almost twice the mortality over 20 years compared to those women consuming less than one glass a day. In addition, the high milk-drinkers did not have improved bone health. In fact, they had more fractures, particularly hip fractures."
Which is a misleadingly confident interpretation of the cited paper which concludes with:
"High milk intake was associated with higher mortality in one cohort of women and in another cohort of men, and with higher fracture incidence in women. Given the observational study designs with the inherent possibility of residual confounding and reverse causation phenomena, a cautious interpretation of the results is recommended."
Beyond correlation =/= causation, note the trepidation in the referenced authors' conclusion. Furthermore, some of this has been subsequently contradicted by other studies[0] (probably why the Swedish authors hedged). [1] provides a more detailed explanation of why the originally referenced study has limited interpretability.
Avoid milk all you want but suggesting "milk is bad for you" is supported by evidence is very misleading, there is conflicting poorly controlled observational evidence on both sides of the discussion. If you like milk, drink it within reason and don't feel bad (for your health).
> The China–Cornell–Oxford Project, short for the "China-Oxford-Cornell Study on Dietary, Lifestyle and Disease Mortality Characteristics in 65 Rural Chinese Counties," was a large observational study conducted throughout the 1980s in rural China, a partnership between Cornell University, the University of Oxford, and the government of China.[1] The study compared the health consequences of diets rich in animal-based foods to diets rich in plant-based foods among people who were genetically similar. In May 1990, The New York Times termed the study "the Grand Prix of epidemiology".
Didn’t watch that playlist but in general the arguments against cows milk are:
- hormones from the cows and their supplements are in the milk and impact our hormone system in negative ways
- antibiotics used excessively in cows are in the milk, have negative effects at an individual level, and might also contribute to the antibiotic-resistance arms race
- saturated fats are generally bad and should be minimized in the human diet. Milk is full of them and they are direct causes of heart disease and other top killer health issues for people
- sugar argument similar to saturated fats but for diabetes
- milk production is generally inhumane in its treatment of animals and it’s on a pretty big scale
> saturated fats are generally bad and should be minimized in the human diet.
About 30 years out of date, based on corrupt fraudulent industry "research", completely ignoring recent studies over the past 20 years which have debunked all of that. We need saturated fat. It is essential. Animal fats are loaded with fat soluble vitamins you won't get from industrial seed oil. Vegetable oils are toxic rancid garbage loaded with Omega-6 and 100% deficient in fat soluble nutrients.
> - saturated fats are generally bad and should be minimized in the human diet. Milk is full of them and they are direct causes of heart disease and other top killer health issues for people
Of course, that's why a human mother's milk is 50-60% saturated fat, right? Per-capita saturated fat consumption has basically remained steady, or risen slightly, for the last 120 years, including the decades before heart disease started to surge right around the time Crisco was introduced into the food supply in the 1920s.
Let's look at the data since 1900. We were told to replace saturated fat with polyunsaturated fat. Let's see how that turned out:
The thing that always bothers me about 'x is bad for you' arguments about food is: what is the alternative food that provides similar positive things without the supposed harms?
I'm assuming the case here is against dairy in general, which can provide easily digestible protein, a mix of fats, and B vitamins with a minimal amount of carbohydrates. Besides lean meats and eggs, you aren't going to find other sources of those things in similar ratios in easy to consume quantities.
The other thing with "[specific food/drink] is [good/bad] for you" is that it's nearly impossible to study at baseline with a million confounders so it's all hypothetical pseudoscience at best.
Living a life of generally avoiding processed foods and sugar as well as emphasizing lean meats/protein and vegetables is probably the best thing any of us can do for ourselves whatever that combination may look like for an individual.
Anyone who makes a claim that anything specific is beneficial is almost certainly talking out of their ass or selling a product.
Recall the food pyramid, the greatest corporate pseudoscience scam ever pulled. There was also a generation that was told "butter is bad for your health".
"- hormones from the cows and their supplements is in the milk and impacts our hormone system in negative ways
- antibiotics used excessively in cows are in the milk and have negative affects on an individual level and might also contribute to the bacteria antibiotic arms race
- saturated fats are generally bad and should be minimized in the human diet. Milk is full of them and they are direct causes of heart disease and other top killer health issues for people"
None of this is supported by evidence, picking the last argument as an example:
> Multiple reviews of the evidence have demonstrated that a recommendation to limit consumption of saturated fats to no more than 10% of total calories is not supported by rigorous scientific studies. Importantly, neither this guideline, nor that for replacing saturated fats with polyunsaturated fats, considers the central issue of the health effects of differing food sources of these fats. The 2020 DGAC review that recommends continuing these recommendations has, in our view, not met the standard of “the preponderance of the evidence” for this decision."
I would add that energy balance (i.e. don't get fat) is probably the most important thing, assuming a reasonably healthy diet. Conversely, there's no diet that will save you if you're carrying excess pounds and/or are gaining weight beyond what is the ideal body composition.
Many of the diet studies, as poor as they are, that show beneficial changes due to diet almost always involve fat loss from baseline. Whatever diet can satisfy you and keep the weight down appears to be the local optimum.
What about "organic" milk? By that I mean milk that is grown by my small local farm, humane conditions for the cows, no antibiotics or supplements for them.
You'd still get pus, blood, endotoxins, hormones (like estrogen), pesticides/herbicides (organic farms still usually use them), etc.
Only 40% of consumers in the UK [0] know that a cow has to give birth to a calf to be able to give milk. Male calves are usually killed immediately these days, or sold for meat within a few months (together with 25?% of female calves). In the dairy industry calves are removed from their mothers the day they're born (only 27% of consumers know this); in the beef industry they're usually kept together.
The saddest story I've seen is a mother cow who gave birth to two calves. Because she was not a first-time mother, she had prepared. One calf was immediately taken away; the other she managed to hide somewhere in the fields. Of course, when the farmer found out about it (from the insufficient milk output), he located the calf and took it away. I can't find that story anymore, but here is a similar one. [1]
All dairy cows are forcibly impregnated every year, are spent after 5-6 years to the point where they often can no longer walk [2], and instead of a normal lifespan, which would be 20-48? years (the upper number being the record), they're taken to the slaughterhouse [3].
> humane conditions for the cows
That doesn't exist, not even on small local farms. Humane? It's an oxymoron.
There are farms like this [0] which are certainly humane. And the farms suggested by Dr Temple Grandin also qualify as such, although I'm not aware of any farm actually implementing her methods.
90+% (IIRC) of slaughtered animals in US are from CAFOs.
Any kind of slaughter is inhumane when there's no NEED to eat meat. The clean process you may have seen on TV is different from reality (see the recent CO2 chamber revelations [0]).
Taking away a mother's young and taking the milk she produces for her calf (the males are usually killed immediately) is inhumane. [1]
Etc.
I've seen Dr Temple Grandin's "Glass Walls" ... she is not the right person for the job of representing "humane animal treatment." Yes, she says what you want to hear. But not the right person for the animals.
> 90+% (IIRC) of slaughtered animals in US are from CAFOs.
Sure. That's an argument for reformation though.
> Any kind of slaughter is inhumane when there's no NEED to eat meat.
That simply isn't true. Humane simply means inflicting as little suffering as possible. That's it.
> The clean process you may have seen in TV is different from reality
I'm well versed. I've been arguing against veganism for the last few years.
> I've seen Dr Temple Grandin's "Glass Walls" ... she is not the right person for the job of representing "humane animal treatment." Yes, she says what you want to hear. But not the right person for the animals.
She is very well respected in her field and her work is solid. If it makes things better for animals, why resist it?
"While some plant foods may be contaminated, animal food intake is the biggest source of certain pesticide exposure for both adults and children. Pesticides, as well as antibiotics, manure, pus cells, cholesterol, and saturated fat have all been found in milk. Factory farmed fish have higher levels of DDT and other banned pesticides than wild-caught fish, and even fish oil supplements may be contaminated with PCBs and insecticides. Many pesticides take a long time to degrade – the U.S. made arsenic-based pesticides illegal years ago, but they still persist in the soil. Similarly, though DDT was banned in the U.S. for agricultural use in 1972, people may still be exposed to the pesticide through contaminated dairy products and meat."
"Overall, those eating plant-based diets have been found to have a lower levels of pesticides than omnivores. Rinsing produce in a salt water solution may be an effective way to reduce pesticide residues on produce."
That addresses the hormone and animal welfare concerns, at least. Issues with allergies, sugar, and saturated fat remain. But hopefully less is also consumed this way. Part of the problem is pushing people to consume large quantities via celebrity advertising, USDA guidelines, etc.
How about kefir? It has much lower lactose and the benefit is all the probiotics. I drink a little every day and my gut is liking it. I don’t drink any milk at all
Sure, I've been meaning to review them and make a super-compilation... though I have a small backlog of projects... But the above reasons are the shortlist of health ones: saturated fat, sugar, hormones, T1D, and Parkinson's. There are additional arguments along the lines of economics (subsidies that don't benefit large populations of minorities), animal welfare, osteoporosis, casein addiction in cheese, and similar. I've been wanting to collect, validate, and index all the references... but for now one has to get them from the videos. And some of them are actually in favor of milk, by the way... but generally the ones in favor are single sources and/or by the dairy industry. But if you want strong bones and muscles, eat what strong animals like gorillas and bulls eat.
> if you want strong bones and muscles, eat what strong animals like gorillas and bulls eat.
This is a really silly comparison, to compare humans to animals with different genetics, digestion traits and hormones. Male gorillas have MASSIVE amounts of testosterone and minimal myostatin. Their body doesn’t break down muscle. Humans are not like this at all, and if you disagree then please show me a vegetarian body builder, or even vegetarian elite strength athletes.
Bill Pearl, a multiple-time Mr. Universe winner, is a very famous one. There are other successful bodybuilders who are vegan. Pretty trivial to google, so I'm not sure why you issued such a challenge, but there ya go.
"Meat is definitely not the secret to bodybuilding,” Bill Pearl later said.
Bill Pearl became a vegetarian at age 39, at the end of his bodybuilding career. And he ate eggs and dairy products. And he was a bodybuilder in the '50s and '60s. But other than that... great example!
Some humans still have the ability, but most have lost it. It depends on your microbiome. But we still don't have to consume meat, not in this day and age, with our supermarkets and online recipes.
> I'll take the diet and strength of a wolf
"Wolves are known to scavenge and consume dead or rotten meat when they come across it. Wolves have a remarkable ability to tolerate and digest decaying flesh. Scavenging on carcasses can be an important source of food for wolves, especially during times when hunting is challenging or prey is scarce."
I'd like to see it... please find some friends and, without weapons, hunt down an elk or something using only your teeth and nails. Then eat it raw.
And if you're not successful, find a carcass and enjoy! Remember to start from the anus, where it's easier to tear.
More like a pile of clickbait and quackery. The video from a holistic dentist is enough to invalidate the entire list in my eyes. Holistic dentistry suggests that oil pulling can repair cavities, that cavities disrupt "meridians" in the body and can then cause cancer and other wild ideas. Not to mention the fact that having mercury amalgam fillings removed (a big topic in holistic dentistry) is far more dangerous than having them in your mouth.
You can find papers on anything if you put in the keywords. Researchers also study bogus new-age things, just to try to settle the actual science (and in some cases the papers will be from subpar journals that still get indexed). PubMed has a big warning label that a paper being indexed there doesn't mean the contents are endorsed by the NIH.
In this case, the list is low-tier papers and journals, or systematic-survey-type papers (or low-quality first-order papers) that still don't reach any conclusions.
That's a terrible analogy, but sure - if I were evaluating whether "SQLite is good" based on a playlist of programmers, and one of them was by someone whose occupation is based entirely around pseudoscientific beliefs, some of which are actively harmful, I would probably discount the rest of the videos in the playlist. Especially when many of the other videos are from creators who mainly post clickbait and more pseudoscience.
Nice troll account you've got there. Pity if someone were to have saved the results from the recent alias-account finder and unmasked you...
Anyway, as I said above, I have been wanting to go through all of the dozens of references, but I have other projects, a long drive to work, and little desire to please others. I have taken the step of standing on others' shoulders and collecting their work. Spoon-feeding it to people who have made up their minds not to consider the science has lately sagged on my list of desires.
Yeah, take some random powder made by a lab that put some random "US SAFETY" trademark on it. Much better than getting some organic milk from a farmer. Thanks.
I recently pulled DuckDB out of a project after hitting a memory corruption issue in regular usage. Upon investigating, they had an extremely long list of fuzzer-found issues. I just don't understand why someone would start something in a memory unsafe language these days. I cannot in good conscience put that on a customer's machine.
We ended up rewriting a component to drop support for Parquet and to just use SQLite instead. I love the idea of being able to query Parquet files locally and then just ship them up to S3 and continue to use them with something like Athena.
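The appeal, roughly, looks like this (a sketch using DuckDB's Python API; the file, column, and bucket names are just placeholders):

    # Sketch of the local-Parquet workflow: query the file in place with
    # DuckDB, then ship the same file to S3 where Athena can query it
    # unchanged. Paths, columns, and the bucket name are placeholders.
    import duckdb
    import boto3

    con = duckdb.connect()  # in-memory database

    # Query the Parquet file directly; no import step needed.
    rows = con.execute(
        "SELECT label, count(*) AS n "
        "FROM read_parquet('annotations.parquet') "
        "GROUP BY label ORDER BY n DESC"
    ).fetchall()
    print(rows)

    # Push the very same file to S3; Athena can then query it through an
    # external table over the bucket.
    boto3.client("s3").upload_file(
        "annotations.parquet", "my-bucket", "datasets/annotations.parquet"
    )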
The other thing that rubbed me the wrong way was that rather than fix the issue, they just removed functionality. DuckDB (unironically) needs a rewrite in Rust or a lot more fuzzing hours before I come back to it. While SQLite is not written in a memory safe language, it is probably one of the most fuzzed targets in the world.
Starting in this release, the DuckDB team invested significantly in adding memory safety throughout catalog operations. There is more on the roadmap, but I would expect this release and all following to have improved stability!
That said, at my primary company, we have used it in production for years now with great success!
> why someone would start something in a memory unsafe language these days
You might like what we (Splitgraph) are building with Seafowl [0], a new database which is written in Rust and based on Datafusion and delta-rs [1]. It's optimized for running at the edge and responding to queries via HTTP with cache-friendly semantics.
Hmm... thanks. Maybe it was a hiccup? Does it happen when you click these links in my comment? We haven't been able to replicate on Firefox mobile, but we do have an issue with 500 errors in Firefox when double clicking links in the sidebar of the docs (I know, I know...)
>The other thing that rubbed me the wrong way was that rather than fix the issue, they just removed functionality.
It is a limited team size. If they feel a feature is causing too much grief, I would rather they drop it than post a, "Here be dragons" sign and let users pick up the pieces.
Edit: missed an obvious opportunity to take a shot at MySQL
I think the critique is that not that they should have left the thing broken, but that a limited team should limit the work to match the team size so that they do not release broken things in the first place.
I wouldn't consider any of those written in a memory safe language. Although SQLite has been battle hardened over many years, while DuckDB is a relatively new project.
That being said, there have been efforts to reimplement SQLite in a more memory-safe language like Rust.
At the level of engineering of SQLite, the choice of language is almost immaterial. Suggesting a low effort transpilation is a competitive peer seems unserious and vaguely disrespectful.
>Finally, one of the best written software paired with one of the best writable programming language‽ Fearless and memory safe, since the uncountable amount of unsafe {} blocks makes you not care anymore.
Plus it seems the project is a parody of the RiiR trend.
The CVE list would dispute that assertion. There's a reason Microsoft is rewriting parts of the Windows kernel in Rust, and it isn't trendiness or because the kernel is at a trivial level of engineering.
It's the same reason Torvalds refused to have C++ anywhere near the Linux kernel but is now accepting patches in Rust. The advantage of C is its transparency and simplicity, but its safety has always been a thorn in the industry's side.
RiiR has become a bit of a parody of itself, but there is a large grain of truth from where that sentiment was born.
Life is too short for segfaults and buffer overruns.
In most C++ environments you will have std::string, STL vectors, unique_ptr, and RAII generally. Cleaning up memory via RAII discipline is standard programming practice. Manual frees are not typical these days. std::string manages its own memory, and isn't vulnerable to the same buffer overflow/null terminator safety issues that C-style strings are.
While in C you will be probably using null-terminated strings and probably your own hand-rolled linked list and vectors. You will not have RAII or destructors, so you will have manual frees all over.
Perhaps the big difference is that due to the nature of the language, C developers on the whole are probably more careful.
I started coding professionally in C++ back around 2000. Many things have improved in C++ such as the items you list above, but C++ remains a viciously complicated language that requires constant vigilance while coding in it, especially when more than one thread is involved.
CloudFlare is not lacking in good engineers with tons of both C and C++ experience. They still chose Rust for their replacement of Nginx. Now their crashes are so few and far between, they uncover kernel bugs rather than app-level bugs.
> Since Pingora's inception we’ve served a few hundred trillion requests and have yet to crash due to our service code.
I have never heard anything close to that level of reliability from a C or C++ codebase. Never. And I've worked with truly great programmers before on modern C++ projects. C++ may not have limits, but humans writing C++ provably do.
I'm not sure who you're arguing against, but it's not me, or it's off topic completely. The discussion and my reply was not between C/C++ and Rust, but between C and C++.
I'm a full-time Rust developer FWIW. But I also did C++ for 10 years prior and worked in the language on and off since the mid-90s. Nobody is arguing in this sub-thread about Rust vs C++, nor am I interested in getting into your religious war.
If you are looking for a query engine implemented in a safe language (Rust) I definitely suggest checking out DataFusion. It is comparable to DuckDB in performance, has all the standard built in SQL functionality, and is extensible in pretty much all areas (query language, data formats, catalogs, user defined functions, etc)
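To give a rough feel for it, a minimal sketch, assuming the datafusion Python bindings expose the usual SessionContext API (the table and file names are placeholders):

    # Minimal sketch with the DataFusion Python bindings (pip install datafusion).
    # The Parquet path and table name are placeholders.
    from datafusion import SessionContext

    ctx = SessionContext()
    ctx.register_parquet("events", "events.parquet")

    df = ctx.sql("SELECT count(*) AS n FROM events")
    print(df.collect())  # list of Arrow record batches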
On the other hand, there are garbage collectors available for C and C++ programs (they are not part of their standard libraries so you have to choose whether you use them or not). C++ standard library has had smart pointers for some time, they existed in Boost library beforehand and RAII pattern is even older.
Don't put all the blame for memory bugs on languages. C and C++ programs are more prone to memory leaks than programs written in "memory-safe" languages, but those languages are not immune to memory bugs either.
Disclaimer: I like C (plain C, not C++, though C++ is not as bad as many people claim) and I hate soydevs.
Memory leaks are annoying and, yes, you can get them in memory safe languages.
But they are way less severe than memory corruption. Memory unsafe languages are liable to undefined behaviour, which is actively dangerous, both in theory and practice.
> I just don't understand why someone would start something in a memory unsafe language these days. I cannot in good conscience put that on a customer's machine.
> We ended up rewriting a component to drop support for Parquet and to just use SQLite instead.
I am not sure that you realize that SQLite is written entirely in C -- a quintessential memory unsafe language. I guess quality of software depends on many things besides a choice of language.
SQLite also lives under a literal mountain of automated tests, an engineering effort I'm not sure I've ever seen elsewhere. The library code is absolutely dwarfed by the test code.
...and CVEs still pop up occasionally. The point about memory-safe languages still holds, but it can be mostly mitigated if you throw enough tests at the problem.
Blob to bitstring type casting for Parquet. They were doing a straight reinterpret cast on it which was causing an allocation of 18446744073709551503 bytes.
I was wanting to take a blob from Parquet and bitwise-and it against a bitstring in memory.
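For what it's worth, that allocation size is suspiciously close to 2^64, which is exactly what you'd see if a small negative length were reinterpreted as an unsigned 64-bit integer; I haven't confirmed that's the actual bug, but the arithmetic fits -113 on the nose:

    # 18446744073709551503 == 2**64 - 113: the bit pattern of a signed -113
    # read back as an unsigned 64-bit integer, the classic shape of a
    # reinterpret-cast / underflow bug in an allocation size.
    import struct

    as_unsigned = struct.unpack("<Q", struct.pack("<q", -113))[0]
    print(as_unsigned)            # 18446744073709551503
    print(2**64 - 113)            # 18446744073709551503
    print(as_unsigned == 18446744073709551503)  # True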
A serious and curious question: are we close to the point with LLMs where we can just point to the source of something like DuckDB and its suite of tests and say "rewrite this in Rust with this set of libraries, and make sure all these tests pass"?
Even if the result is not 100% complete and the code isn't idiomatic, could it work?
I don’t even have access to regular Claude so can’t confirm this but the 100K token model they released should in theory be able to handle this to a certain degree.
I haven't tried Claude but I have been tinkering with a lot of this in my home lab and there are various theories I have:
- GPT4 is not a model, it's a platform. I believe the platform picks the best model for your query in the background and this is part of the magic behind it.
- The platform will also query multiple data sources depending on your prompt if necessary. OpenAI is just now opening up this plugin architecture to the masses but I would think they have been running versions of this internally since last year.
- There is also some sort of feedback loop that occurs before the platform gives you a response.
This is why we can have two different entities use the same open source model yet the quality of the experience can vary significantly. Better models will produce better outputs "by default", but the tooling and process built around it is what will matter more in the future when we may or may not hit some sort of plateau. At some point we're going to have a model trained on all human knowledge current as of Now. It's inevitable right? After that, platform architecture is what will determine who competes.
Interesting speculation, but I don't think GPT-4 chooses among models; I'm pretty sure it's just how good that one model is. I've played with a lot of local models, but the reality is that, even with Wizard-Vicuna, we're at least an order of magnitude away from the size of GPT-4.
It is open source; you can do with it whatever you want, like fix it or rewrite it in Rust. I don't see the point of complaining that it doesn't work for you because it is not in Rust while not doing anything about it.
> The other thing that rubbed me the wrong way was that rather than fix the issue, they just removed functionality.
Yeah, DuckDB has some very cool features, but I wish the community were less abrasive. I remember someone asking for ORC columnar format support, and DuckDB replied "that is not as popular as Parquet so we're not doing it, issue closed". Same story with Delta vs Iceberg.
Meanwhile ClickHouse supports both, and if you ask for things they might say "that is low priority but we'll take a look". clickhouse-local can also work as a CLI (though not in-process) DuckDB alternative.
> DuckDB has some very cool features, but I wish the community were less abrasive.
I’m on the Discord and the community in my experience has been anything but “abrasive”. I’m just a random guy and yet I’ve received stellar and patient help for many of my naive questions. Saying they are abrasive because they’re not willing to build something seems so entitled to me.
Focused engineering teams have to be willing to say no in order to achieve excellence with limited bandwidth. I’m glad they said no when they did so they could deliver quality on DuckDB.
I certainly think ORC is a good thing to say no to - in my years of working in this space I've only rarely encountered ORC files (technically ORC is superior to Parquet in some ways, but adoption has never been high).
Also realize that the team is not being paid by the people who ask for new features. If you’re willing to pay them on a retainer through DuckDB labs then you can expect priority, otherwise the sentiment expressed in your comment just seems so uncalled for.
I have a low volume of this work available. If your wife wants to help beta test a site where annotators bid their own rates I would be thrilled. Contact in bio.
Immediately thought of these when seeing the post. If you're in Los Angeles there is the Bikerowave [1], Bicycle Kitchen [2], and Bike Oven [3]. I'm about to donate two bikes and a ton of parts to Bikerowave this week. Really great places.
I've built a site to pay people for images and annotations. [1] I'm trying to onboard my first paid users right now. The plan is to build out a high quality 50k image license plate recognition dataset as a proof of concept.
Right now we own all of the datasets on the site and the idea is to license them out to companies while making them available to researchers under a non-commercial license. The market might take it a different direction to be more of a marketplace or Github style hosting. Email in bio if anyone wants to chat about this.
Also, if anyone wants to get paid 10 cents an image to take pictures of North American license plates, get in touch. Need about 1000 from each state. It's probably below most people's pay grade on here but there is a whole reverse bidding system, so you can always bid higher than 10 cents. Some user studies with a shared screen would be super helpful as well.
"Also, if anyone wants to get paid 10 cents an image to take pictures of North American license plates, get in touch."
Your blatant disregard for privacy is shocking, but perhaps unsurprising in the field. I guess you also didn't think through the enormous risks for the photographer.
Wow, interesting project. Not sure how many people you can entice into providing 10k photos of license plates, firearms, and children in pools. I kid, but are you building an alert system for a superhero?
Ha! It is a bit public safety focused right now. The first two I will really build out (read: pay people to help with) are license plate recognition and vehicle make/model/year identification. I think those have a decent market.
The firearms one is tricky and I'm not sure it will ever be licensed commercially since we don't own the footage that the images are taken from. Valuable dataset in terms of what it could provide though, like an early warning system for active shooters.
Some day I'll have enough free time and just make a billiards dataset to help me find shots.
Have a good look at the privacy laws in the states/countries of your users; in some, things like this may be against the law or subject to restrictions. See:
Unattended security gates at a local campground are being used to gather data. To get access to the campground, you have to provide your plate number, vehicle make & model, and vehicle color. The gate only checks for the plate number, but the images captured can be used to build out a vehicle make/model ML model.
Maybe partner with such a system to gather your data?
I have several thousand images of vehicles on the roadway and parking lots that I used to understand the backend ML of an ALPR system. Selecting cameras was the most difficult part of the project. I did not attempt to determine the make/model. My patience for labeling had worn too thin.
Overall, as invasive as ALPR systems are, the cat has long left the bag. I doubt the cat will ever return.
Hey there, just checked out the site and signed up. I’m not seeing anything about how to get paid for uploads. Can you provide some direction here? Thanks!
It is a little hidden still. On your profile page there should be a sign up link that will take you to Stripe. Feel free to email me. I will post two solicitations tomorrow, one for annotation work and one for plate photographs.
It is restricted to US only right now (Stripe setting) but I will change that if there is non-US interest and the country is supported.
I've only run through the whole process once so hopefully there are no hiccups. Thank you!
We haven't figured out commercial pricing yet. It may be a scale based on company revenue. Targeting bigger businesses though, it will be less than it would cost to develop the model independently, but likely trying to target low 5 figures.
The datasets are also available under a non-commercial Creative Commons license. You can pay $5 for a download and rehost it elsewhere, or just wait for me to upload it to Kaggle or Hugging Face.
I'm building something right now that is a reverse bidding platform for image acquisition and labeling (company makes a solicitation, workers bid) where you get paid per accepted image and label. It will open to US labor first and might take up to a year to expand to India (assuming you're there).
If you want to be an early India user (paid) put an email in your bio and I'll get in touch. If you can program though data labeling might not be the best use of your time.
Wrapping up a system where I can pay people per image and annotation for building computer vision data sets. Applying for my federal firearm manufacturer license so I can 3d print guns in California. Hopefully landing SBIR funding to go full time on my projects.