MrPowerGamerBR's comments | Hacker News

I haven't checked Hetzner's prices in a while, but OVHcloud has dedicated servers and they do have dedicated servers in the US and in Canada (I've been using their dedicated servers for years already and they are pretty dang good)


Seems to be broadly the same sadly, but thanks, it's interesting to see they're all hovering quite close to each other.


While true, at least with open source you can actually go into the code and try to fix the code if you really want to.

With a closed source business you are at the mercy of them to decide if they really want to fix your issue, even if you are a paid customer.


If I had to guess, they were talking about the DeepSeek iOS app: https://apps.apple.com/br/app/deepseek-assistente-de-ia/id67...


Except that if you need anything GPU-related (gaming, Adobe suite apps, etc.) you'll need a secondary GPU to pass through to the VM, which is not something everyone has.

So, if you don't have a secondary GPU, you'll have to live without graphics acceleration in the VM... and for a lot of people the "oh, you just need to use a VM!" solution is not feasible, because most of the software people want to use that does not run under WINE does require graphics acceleration.

I tried running Photoshop under a VM, but the performance of the QEMU QXL driver is bad, and VirGL does not support Windows guests yet.

VMware and VirtualBox have better graphics drivers that do support Windows. I tried VMware and the performance was "ok", but still nowhere near the performance of Photoshop on "bare metal".
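For anyone considering the passthrough route instead: the first thing worth checking is whether the GPU can be isolated at all. A minimal sketch using only the standard sysfs paths (nothing here is specific to a distro or to any particular setup), assuming IOMMU support was enabled in firmware and on the kernel command line:

```shell
#!/bin/sh
# Quick sanity check before attempting VFIO passthrough: list the IOMMU
# groups. For a clean passthrough, the GPU (and its HDMI audio function)
# should sit in a group of its own.
list_iommu_groups() {
  if [ -d /sys/kernel/iommu_groups ]; then
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        echo "  ${d##*/}"   # PCI address; feed to 'lspci -nns' for the device name
      done
    done
  else
    echo "No IOMMU groups: enable VT-d/AMD-Vi in firmware and add"
    echo "intel_iommu=on (or amd_iommu=on) to the kernel command line."
  fi
}
list_iommu_groups
```

If the GPU shares a group with other devices, passthrough gets much messier (ACS quirks and so on), which is part of why "just pass through a GPU" is easier said than done.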


People throw around the ideas of VMs or WINE like it's trivial. It's really not.


On Linux it's quite trivial. KVM is part of the kernel, and installing libvirt and virt-manager makes it really easy to create VMs.

I'd say even passing through a GPU is not that hard these days though maybe that depends on hardware configuration more.
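To illustrate how little ceremony the libvirt route involves, here is a sketch using `virt-install` (the CLI that ships alongside virt-manager). The VM name, memory/disk sizes, and ISO path are all placeholders, not recommendations:

```shell
#!/bin/sh
# Sketch: create a Windows guest from the command line with libvirt's
# virt-install. Adjust the placeholders for your machine, then remove the
# echo guard to actually create the VM.
VM_CMD="virt-install \
  --name win11-test \
  --memory 8192 --vcpus 4 \
  --disk size=80 \
  --cdrom $HOME/isos/Win11.iso \
  --os-variant win11"
if command -v virt-install >/dev/null 2>&1; then
  echo "Would run: $VM_CMD"
else
  echo "virt-install not found; install your distro's virt-install package"
fi
```

That really is the whole setup for a plain (non-passthrough) VM; virt-manager offers the same options behind a GUI wizard.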


“On Linux it’s quite trivial…” giving big

“For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.”[1] vibes.

Convenience features in software are huge. Even if a system is well designed, a system that abstracts it all away and does it for you is easier, and most new users want that, so it often wins. Worse is better, etc.

[1] https://news.ycombinator.com/item?id=9224
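For reference, the quoted recipe really was only a couple of commands; a sketch with placeholder host, credentials, and paths:

```shell
#!/bin/sh
# The 2007 "trivial Dropbox" recipe from the quote, spelled out.
# curlftpfs and subversion must be installed for the real thing, so the
# commands only run when the tools are present.
MNT="$HOME/ftpmount"
if command -v curlftpfs >/dev/null 2>&1; then
  mkdir -p "$MNT"
  curlftpfs ftp://user:password@ftp.example.com "$MNT"  # mount the FTP account as a filesystem
  svn checkout "file://$MNT/repo" "$HOME/work"          # keep a versioned working copy on top of it
else
  echo "curlftpfs not installed; the above is the whole recipe"
fi
```

Which is exactly the point: two commands is "trivial" to a power user and a complete non-starter for everyone else.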


The comment you linked is one of the most misunderstood comments on this site, which makes sense because it's one of the most cited comments on this site.

https://news.ycombinator.com/item?id=23229275

This probably isn't even the best dang comment about the situation, it's just the one I could find quickly.


Perhaps I should have put a larger explanation around it, but I am mocking neither sureglymop nor BrandonM; we can still learn lessons from hindsight.

Sure, it’s trivial to set the virtualisation switch in the BIOS and download a couple of libraries, but people like computers doing things for us. We like abstractions, even if they sacrifice flexibility, because they facilitate whatever real-world application we are attempting.

I think power users of any technology will generally overvalue things that 80% to 95% of the user base simply don’t care about.

I admit that, having touched Windows twice in the last 5 years, I wouldn't know, but I would be willing to wager that WSL has very few drawbacks or shortcomings in the minds of most of its users.


Also, sometimes the harder approach is not as capable as some people make it out to be, and there are some unsolved caveats.


I don't see what's misunderstood about it, but also it's not right to make fun of the user for it.


Because it only sounds silly in hindsight. With today's context of file sync applications being a huge industry, that comment seems silly. But that was the prevailing opinion at the time. Check out this blog post: https://www.joelonsoftware.com/2008/05/01/architecture-astro...

>Jeez, we’ve had that forever. When did the first sync web sites start coming out? 1999? There were a million versions. xdrive, mydrive, idrive, youdrive, wealldrive for ice cream. Nobody cared then and nobody cares now, because synchronizing files is just not a killer application. I’m sorry. It seems like it should be. But it’s not.

That's just what a lot of competent people thought back then. It seems hilariously out of touch now.


But it wasn't my opinion at the time, and I didn't hear from those people. I was in middle school, kids were commonly frustrated syncing their homework to/from a flash drive, family members wanted to sync photos, and everyone wanted something like this.

Before Dropbox, the closest thing we had was "the dropbox," a default network-shared write-only folder on Mac. Of course you could port-forward to a computer at home that never sleeps, but I knew that wasn't a common solution. I started using Dropbox the same month it came out.


I'm happy for you :)


The future is rarely made by people who are comfortable with the status quo. That’s the only thing we can get from this.


His comment appears to me to say "please don't bother my friend". Him saying that file sync "wasn't common knowledge at the time"...ok? It is much easier than the solution the commenter proposed. In this thread, it's the same, people are proposing a complex solution as if it's trivial just because it is trivial to them.


Even the described FTP-based Dropbox replacement is easier than getting a VM to work properly with DRM'd software and/or GPU acceleration.


Really? With GNOME Boxes it's pretty straightforward. I hear KDE is getting an equivalent soon, too.


You can do GPU passthrough in GNOME Boxes, as in, your VM can see the host's GPU (let's say Nvidia) and it works exactly the same as on the host? Or, as another metric, can you run Photoshop in a VM with full hardware acceleration? I haven't tried GNOME Boxes in particular, but this isn't what I'm seeing when I search.


Ah, yeah, seems like I was mistaken and maybe Red Hat's virt-manager was what I was thinking of.

virt-manager is a bit more involved than GNOME's Boxes, I'm not sure I could recommend it to someone that doesn't know what they're doing.


Yeah, reading your original comment I was about to go off until I saw GPU pass through with DRM software. Highly cursed.


Yep, regular VMs where you basically only care about the CPU and RAM are easy, provided nothing in the VM is actively trying not to run in a VM. USB and network emulation used to be jagged edges, but that was fixed. VirtualBox was my go-to; it never had great GPU support, but the rest was easy.

I'm pretty sure there are solutions to assign an entire GPU to a VM, which of course is only useful if you have more than one. But those are specialized.


Yeah! Even as a dev who can navigate vim, I absolutely don't want to do that on a daily basis. Give me pretty GUIs and some automation!


Not even close. I mentioned a software package that literally offers a full GUI for all your virtualization needs... how is that comparable to the things mentioned in that comment?


That really depends on what you want to run. Dipping into a Linux laptop lately (Mint), I've found there are things, old things (think 1996-1999), that somehow "just work" out of the box on Windows 10, but configuring them to work under WINE is a huge PITA, coming with loads of caveats, workarounds, and silent crashes.


The silent crashes get me. Also, running one exe spawns a lot of wine/wineserver/wine-preloader processes.


Tried doing 3d modeling in a Windows VM - couldn't get acceleration to pass through.


What 3D modelling were you doing that couldn't be done on linux?


Fusion 360 doesn't work on Linux. Or at least I tried multiple times and couldn't get it to work.


Really? I recall installing it 3 years ago, and aside from some oddities with popups, it worked just fine. I think it was this script [0]. I don't know if they broke it, I switched to OpenSCAD, which meets my needs.

[0] https://github.com/cryinkfly/Autodesk-Fusion-360-for-Linux


Mostly having software better than FreeCAD, AKA everything that exists on Windows and macOS.


I needed to use Rhino 3D specifically because it had an environmental simulation plugin.


None of your business.


I'm hoping that IOMMU capability will be included in consumer graphics cards soon, which would help with this. IIRC there are rumors of upcoming Intel and AMD cards including it.


Quite a lot of people have both integrated Intel graphics and a discrete AMD/NVidia card.


Sadly I'm not one of those people, because I have a desktop with an AMD Ryzen 7 5800X3D, which does not have integrated graphics.

However now that AMD is including integrated GPUs on every AM5 consumer CPU (if I'm not mistaken?), maybe VMs with passthrough will be more common, without requiring people to spend a lot of money buying a secondary GPU.


Yes, my Ryzen 7600 has its integrated GPU enabled. AMD's iGPUs are really impressive and powerful, but I have no idea what to do with it, and despite moving to an Nvidia GPU (after 20 years of fanboyism) specifically because I was tired of AMD drivers being terrible on Windows, I STILL have to deal with AMD drivers because of that damn iGPU.

I could disable it I guess. It could provide 0.05% faster rendering if I ever get back into blender.


AMD has SR-IOV on the roadmap for consumer GPUs, which hopefully makes things easier in the future for GPU-accelerated VMs.

https://www.phoronix.com/news/AMD-GIM-Open-Source

Windows can run GPU accelerated Windows VMs with paravirtualization. But I have no use case for two Windows machines sharing a GPU.


There is also native context for VirtIO, but for now Windows support is still not planned.

Also note that some brave soul implemented 3D support on KVM for Windows. It's still in the works, and WinUI apps crash for some reason.


Anything GPU related isn't great in WSL either.


True, but I don't have the need to run applications that require GPU under WSL, while I do need to run applications that require the GPU under my current host OS. (and those applications do not run under Linux)


I don’t know why there aren’t full-fledged computers in a GPU-sized package. Just run Windows on your GPU, Linux on your main CPU. There are some challenges to overcome, but I think it would be nice to be able to extend your ARM PC with an x86 expansion, or your x86 PC with an ARM one. Ditto for graphics or other hardware accelerators.


There are computers that size, but I guess you mean with a male PCIe plug on them?

If the card is running its own OS, what's the benefit of combining them that way? A high speed networking link will get you similar results and is flexible and cheap.

If the card isn't running its own OS, it's much easier to put all the CPU cores in the same socket. And the demand for both x86 and Arm cores at the same time is not very high.


Yes, with pci-e fingers on the ‘motherboard’ of the daughter computer. Like a pci-e carrier for the RPI compute.

Good point about high speed networking. I guess that’s a lot more straightforward.


You may be interested in SmartNICs/DPUs. They're essentially NICs with an on-board full computer. NVIDIA makes an ARM DPU line, and you can pick up the older gen BlueField 2's on eBay for about $400.


> full fledged computers in a GPU sized package

... isn't this just a laptop or a NUC? Isn't there a massive disadvantage in having to share a case, or god forbid a PCIe bus, with another computer?


There is ongoing work on supporting paravirtualized GPUs with Windows drivers. This is not hardware-based GPU virtualization, and it supports Vulkan in the host and guest, not just OpenGL; the host-based side is already supported within QEMU.


I completely gave up on WINEing Adobe software but I didn't know about the second GPU thing, I thought it was totally impossible. Thank you!

I will do anything to avoid Windows but I miss Premiere.


If it is how I think it is, then yes, it is a proof that you attended the event.

I'm not sure how it is in other countries, but in some countries (example: Brazil) some courses (like Computer Science) require you to have "additional hours", where these hours can be courses, lectures, etc related to the course.

To prove to the university that you did these courses, you need a certificate "proving" that you participated. Most of the time they are a PDF file with the name of the event, the date and your name in it.


> One thing I (in general) miss from those days, was how easy it was to get into modding. Whether that be to make your own maps, or more involved game mods.

Another game from that time that was also easy to mod was The Sims 1.

For a bit of context, EA/Maxis released modding tools BEFORE the game shipped, to let players create custom content (like walls and floors) before the game was even out!

And installing custom content was also easy: just drag and drop files into the folders related to what you downloaded, and that's it.

Imagine any game nowadays doing that? Most games nowadays don't support modding out of the box. There are exceptions, of course, like Minecraft resource packs/data packs.

I don't think Fortnite and Roblox fit the "modding a game" description, because you aren't really modding a game, you are creating your own game inside Fortnite/Roblox! Sometimes you don't want to play a new game inside your game; you just want to add mods to enhance your experience or make it more fun. There isn't a "base game Roblox", and while there is a "base game Fortnite" (Battle Royale... or any of the other game modes, like Fortnite Festival or LEGO Fortnite), Epic does not let you create mods for the Battle Royale game. You can create your own Battle Royale map, but you can't create "the [insert season here] Battle Royale map & gameplay, but with a twist!".

Of course, sadly EA/Maxis didn't release all of the modding tools they could have (there isn't an official custom object making tool, for example, or an official way of editing the behavior of custom objects), but they still released way more modding tools than what current games release.

I think most modern games don't support that ease of modding because the games themselves are complex. As an example, The Sims 1's walls are just three sprites, so you can generate a wall easily with a bit of programming skill, and the skin format is plain text, in a format similar to ".obj", so on and so forth.

Lately I've been trying to create my own modding tools for The Sims 1, and it is funny when you are reading a page talking about the technical aspects of the game file formats and the author writes "well this field is used for xyzabc because Don Hopkins said so".


Factorio has an extensive modding community - one of the community mods was adopted and became an official expansion.


The official toolkit for modding Baldur’s Gate 3 is extremely extensive. You can make an entirely different game on top of the game if you wanted


Really? Officially they don't even support modifying or creating new levels: https://baldursgate3.game/news/community-update-27-official-.... You are mostly limited to visual changes, UI mods and balance / tuning changes.


That seems an easy fix until one gets nervous about working a month just to release a single-purchase game engine. That's a journey that you can take, but you might at least consider a license.


> If I'm reading a blog or a discussion forum, it's because I want to see writing by humans. I don't want to read a wall of copy+pasted LLM slop posted under a human's name.

This reminds me of the time around ChatGPT 3's release, when Hacker News's comments were filled with users saying "Here's what ChatGPT has to say about this".


Pepperidge Farm remembers a time when ChatGPT 2 made no claims about being a useful information lookup tool, but was a toy used to write sonnets, poems, and speeches "in the style of X"...


> Resource packs can change the music played by discs. The duration of the music disc stays fixed even if the audio is replaced

You can change a music disc's duration with data packs since Minecraft: Java Edition 1.21; you can even add new music disc definitions without replacing any of the vanilla music discs.

I know that one of the rules was "no data packs", but hey, it is a cool thing if someone doesn't know about it. (also, in my opinion this wouldn't break the "no data packs" rule, because the "no data packs" rule seems mostly related to not using data packs to set blocks in the world)


My use case was a bit different: I was trying to use Chromium Headless in Playwright as a simple way to render a element on a page, I experienced tons of random "Page crashed" and "Timed out after 30s" from Playwright.

Switched to Firefox Headless and these issues stopped happening. In fact, switching to Firefox made the renderer ~3x FASTER than Chromium Headless!

The Blitz project seems very interesting and is actually what I need, because I'm using a headless browser as an alternative to rendering everything manually with Java Graphics2D, which would be a pain: the thing I'm rendering has a fairly complex layout, and I really didn't want to reinvent the wheel by creating my own layout engine.
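For anyone wanting to try the same switch: with Playwright's bundled CLI the browser engine is just a flag. A sketch with a placeholder URL and output path (the library API accepts the same `firefox` browser type):

```shell
#!/bin/sh
# Render a page with headless Firefox instead of Chromium via the Playwright
# CLI. With an npm-local install this is `npx playwright ...`; the commands
# only run when the CLI is actually on PATH.
URL="https://example.com"
OUT="render.png"
if command -v playwright >/dev/null 2>&1; then
  playwright screenshot --browser=firefox "$URL" "$OUT"
else
  echo "Playwright CLI not on PATH; with npm: npx playwright screenshot --browser=firefox $URL $OUT"
fi
```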


While it isn't a "leak", the reason it has popped up recently is because people found out that there are beta versions of various apps and games in the dump.

As far as I know, at the time, no one had made a list of what apps were included, or whether the apps included were prototype versions of famous apps (Angry Birds, Cut the Rope, etc.), which is what people are doing right now. So now people are scraping and tracking which apps are in the dumps and whether the apps are special (prototype, unreleased, etc.).

Dismissing the project as "straight stupid" is dumb. If you think about it, is the Archive Team also "straight stupid" because they didn't extract the dump and check that it had prototype and unreleased versions of apps? I don't think so.

