Websites are SEO-ing Reddit now (shit things like "Top ten air fryers recommended by Reddit"), so appending "reddit" is not enough anymore. In the very near future
we are going to have to type the whole site:reddit.com.
Well, I already do that today. It probably won't help too much, though, because AI will generate some legit-looking posts/comments and you will have a hard time figuring out whether they were written by a bot or a person.
As a buyitforlife moderator I appreciate that. My current plan is to hope that the LLM-using astroturfers don't target us, because I have very few tools to deal with that kind of thing right now.
Exactly. I keep coming back to the fact that, with all the different ways AI can generate content, the entire user-contributed internet is about to be completely flooded with amicable and true-sounding nonsense, with a peppering of whatever viewpoint the bot-runner wants to push.
I honestly don't see any way out besides Digital ID -- if a person has to provide the host with information about who they really are, then once they're discovered the host can actually prevent further abuse. Otherwise, a single person can create infinite bots to shill whatever product or viewpoint they want.
Which is why I have conspiracy theories about the conspiracy theories about digital ID. The same actors who use the internet to push misinformation benefit from anonymity.
It's almost like the Matrix but in cyberspace where humans are constantly trying to escape the bots, except instead of mining the humans for energy, the bots are mining data.
> the entire user-contributed internet is about to be completely flooded with amicable and true-sounding nonsense, with a peppering of whatever viewpoint the bot-runner wants to push.
Feels like that's already happened before ChatGPT.
One use case that I can think of is monitoring something. Let's say you have tmux open fullscreen, you're working in one of the panes, and you have an auto-updating graph in one of the corners to check the value of your stocks (?).
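Something like this is what I had in mind -- a rough, untested sketch where get_price() is just a placeholder for whatever quote source you'd actually use:

    #!/usr/bin/env python3
    # Rough sketch of a monitor to leave running in a spare tmux pane.
    # get_price() is a placeholder random walk; swap in a real quote API.
    import random
    import time

    def get_price(symbol: str) -> float:
        get_price.last = getattr(get_price, "last", 100.0) + random.uniform(-1, 1)
        return get_price.last

    def sparkline(values, width=40):
        # Draw the most recent values as a crude one-line bar graph.
        bars = "▁▂▃▄▅▆▇█"
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return "".join(bars[int((v - lo) / span * (len(bars) - 1))] for v in values[-width:])

    history = []
    while True:
        history.append(get_price("AAPL"))
        print(f"\r{history[-1]:8.2f}  {sparkline(history)}", end="", flush=True)
        time.sleep(5)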
I wish distros would stop making "docker" an alias for "podman"; they are not the same thing, and it breaks all lightweight-k8s implementations. (Looking at you, Red Hat.)
I've encountered cases where the podman CLI does not match Docker's, specifically for network creation with IPv6 -- the commands are different. What are you experiencing?
The CLI is not my issue: k3s and kind won't work with podman (or any rootless container, for that matter) out of the box. In both, you need to do some non-trivial cgroups configuration on the OS to make it work (and in k3s this mode is experimental).
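For reference, this is roughly the pre-flight check I end up doing before even trying rootless kind/k3s (it assumes the usual systemd cgroup v2 layout; adjust the path if your distro differs):

    #!/usr/bin/env python3
    # Quick check: are the cgroup v2 controllers that rootless kind/k3s need
    # delegated to the systemd user session? (Path assumes the standard systemd layout.)
    import os

    uid = os.getuid()
    path = f"/sys/fs/cgroup/user.slice/user-{uid}.slice/user@{uid}.service/cgroup.controllers"
    needed = {"cpu", "cpuset", "io", "memory", "pids"}

    try:
        delegated = set(open(path).read().split())
    except FileNotFoundError:
        print("no cgroup v2 user slice found -- is this a unified-hierarchy (cgroup v2) host?")
    else:
        missing = needed - delegated
        if missing:
            print("controllers not delegated:", ", ".join(sorted(missing)))
        else:
            print("all needed controllers are delegated")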
I didn't like this much, but having ARM in the hands of a financial institution that has no motivation to invest in the technology (and probably doesn't understand or care about it) doesn't seem any better. On the other hand, Nvidia has a good track record of creating good products.
> Nvidia has a good track record of creating good products.
What incentive would Nvidia have to develop new ARM features that would assist competitors in competing with them?
Fujitsu's A64FX processor, for example, somewhat competes with Nvidia's datacenter GPUs. What incentive would Nvidia have to continue to develop the Arm Scalable Vector Extension?
The only good thing from this sale would be the pivot to RISC-V.
Money? Like Samsung selling their chips to Apple. Nvidia has a lot of proprietary stuff which allows them to get away even with inferior hardware. Nvidia will probably lose the AI datacenter market anyway; custom ASICs will just use RISC-V if Nvidia tries to stop them.
Nvidia doesn't care about secondary money. They're all about integrating stuff and denying others the ability to do the same.
Nvidia is a castle because of the moats they've built. Not because they're leaps and bounds ahead in terms of silicon and technical wizardry. They shunned OpenCL, they'll probably shun Vulkan for GPGPU and make sure that people are locked to CUDA and its ecosystem. They have no intention to play fair.
We're going to see how Mellanox will evolve under their command.
Wasn't OpenCL a mess even on AMD cards, and isn't it almost deprecated? And Nvidia's OpenGL drivers on Windows are way better than AMD's. I don't like Nvidia, especially after all the error 43 bullshit, but they are competent. Now that ARM is going public, won't that make things even worse?
it would be absolutely insane not to support either of those features. An absolute shit-ton of games use Vulkan these days, there is zero chance of that being deprecated in any timespan not measured in decades. Everyone is still supporting OpenGL after all, and that's like what, an early 90s API?
There is a larger point to be made here about some of the reflexively negative responses people have whenever NVIDIA is brought up. There are certainly criticisms you can make of NVIDIA, but people have this intense, reflexive emotional response that tends to slide these discussions into flamewar territory.
I know I'm about to hear about how Linus gave them the finger (because he's a totally balanced and wholesome person), but providing a closed-source Linux driver and not supporting Wayland (which I think they did eventually anyway?) aren't the end of the world people make them out to be. Pick the product that fits your needs; if it doesn't, move on. You don't need to become emotionally attached, and you can keep your positions fact-based.
> this comment is factually untrue and anyone can check it for themselves.
I agree that it's my mistake; however, there was a gap between these support periods and levels, and I've been burned by nVidia's own driver on an nVidia 800 series card. It's not out of spite or prejudice. I stand corrected, but my experience also stands. Also, my point about nVidia's practices toward technologies they don't like is still valid.
Also, being compatible doesn't mean that these technologies are first-class citizens at the driver/hardware level, and they may not be allowed to use the card to its full potential.
> it would be absolutely insane not to support either of those features. An absolute shit-ton of games use Vulkan these days, there is zero chance of that being deprecated in any timespan not measured in decades. Everyone is still supporting OpenGL after all, and that's like what, an early 90s API?
There's support and there's support. I'm not talking about games and graphics stuff. I was doing some research about nVidia's "Vulkan Compute" support, and saw this:
"Vulkan compute on AMD and Intel might be okay but we already know of ways Nvidia puts the clamp on Vulkan compute (doesn't make use of multiple DMAs for transfer) and given their history with OpenCL and the vested interest they have in developers preferring CUDA, they're really not to be trusted." [0][1]
The guy writing it seems to know what he's talking about (reading the history gives a lot of nice low-level details about how these things work), and this is exactly what I was saying about "ditching Vulkan Compute in favor of CUDA, like OpenCL": "Yeah, it's supported, but it might not be fast, sorry."
> There is a larger point to be made here about some of the reflexively negative responses people have whenever NVIDIA is brought up. There are certainly criticisms you can make of NVIDIA, but people have this intense, reflexive emotional response that tends to slide these discussions into flamewar territory.
I'm personally no fan of flamewars. I also don't like to knowingly disseminate wrong information (see above). Also, I work at an HPC center where we use nVidia hardware, and I personally develop scientific software, so I'm not a stranger to either tier of nVidia hardware. So maybe not prejudging the person you're replying to the first time around is a good thing, no?
> I know I'm about to hear about how Linus gave them the finger (because he's a totally balanced and wholesome person)...
No, you're not.
TL;DR: You can't write multi-vendor GPU acceleration code that uses every card to its highest potential, even though standards exist. You need to write at least two copies of the code, one of them being CUDA. Why? nVidia doesn't want other technologies working as fast on their cards. That's it.
> What incentive would Nvidia have to develop new ARM features that would assist competitors in competing with them?
> Fujitsu's A64FX processor, for example, somewhat competes with Nvidia's datacenter GPUs. What incentive would Nvidia have to continue to develop the Arm Scalable Vector Extension?
"the enemy of my enemy is my friend".
The more R&D that goes into the ARM ecosystem, the tougher things get for x86. At the end of the day, all that R&D spending is making NVIDIA's position stronger; it's a force multiplier for their own efforts.
I fundamentally don't understand why anyone would think the ARM acquisition was about anything other than making Jensen kingmaker over one of the two most important processor IPs in the world - and he gains absolutely nothing by becoming king and then slaying all his subjects. That would be an amazingly shortsighted decision from one of the most far-sighted tech CEOs in the business.
Selling some more Tegras is peanuts in comparison and that bump would never last in the long term. The money is in Qualcomm and Fujitsu and IBM's R&D budgets working in synergy with your own, and in the ability to leverage the ARM licensing model to push CUDA into the last places it hasn't reached.
The fact that NVIDIA was even making this offer at all pretty much means they were looking at loosening up their IP licensing IMO. As a black box, sure, but I don't see a world in which NVIDIA would buy ARM and either not license GeForce as the graphics IP, or would choose not to license ARM at all ("selling a few more Tegras is not worth $40b"). If you accept those two givens, then NVIDIA would have had to provide GeForce IP as a black-box SIP license.
The A64FX has way worse performance per watt than an AMD EPYC plus an A100. The A64FX's density per U is higher, so you can get a lot more cores in the same datacenter, even if the FLOPS per watt are far worse.
For example, if you look at the TOP500, #1 (A64FX) gets about 14.78 GFLOPS/watt, while #5 (EPYC/A100) gets 27.37 GFLOPS/watt. Most of the A100-based systems have similar power efficiency. Nvidia is already beating ARM on power efficiency. This looks closer to the 3dfx acquisition from Nvidia's perspective.
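Back-of-the-envelope for that first number (Rmax and power assumed from Fugaku's published TOP500 entry):

    # Efficiency is just Rmax divided by power draw.
    # TFLOP/s per kW is numerically the same as GFLOP/s per watt.
    rmax_tflops = 442_010   # assumed Rmax for Fugaku, TFLOP/s
    power_kw = 29_899       # assumed power draw, kW
    print(rmax_tflops / power_kw)  # ~14.78 GFLOPS/watt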
NVidia has a track record of creating good products, but all proprietary and with closed-source firmware/drivers. And lots of arbitrary software-locking of features to extract more money from different market segments.
> And lots of arbitrary software-locking of features to extract more money from different market segments.
This is the cultural difference between ARM and Nvidia.
ARM cares enough about its IP that you'll need to pay them for the effort of developing it, but beyond that they will not restrict you unless it would put ARM into legal trouble (basically export restrictions). Hey, you can even get everything with a one-time license payment if you are willing to do that. It's not FSF-open, but it's certainly better than a lot of hardware companies.
Nvidia tries to extract everything from each pipeline. You want to use their hardware? You have to agree to use it only under precise conditions that exist purely for market segmentation, which conflicts with ARM's culture. Compound this with the fact that almost every more-open acquisition Nvidia made got closed down (try talking to Nvidia about Mellanox), and if the ARM acquisition had proceeded, it would have been a bad day for everyone else.
That is good for Samsung/Qualcomm/Mediatek/Apple, but let's be honest, it is a failure as a business. Despite being used in pretty much 100% of all smartphone SoCs, they barely make any money. Also, their designs are lagging behind; the M1 and probably Nuvia are examples of that.
> but let's be honest, it is a failure as a business.
Obviously not true, but I can see how some might think this if their main comparators are companies that get to $1trn valuations in 30 years. Not every company can or should be a FAANG.
I think their designs are meant to be lagging. The M1 isn't "just better"; it's a lot more about media encoding and packing more types of cores together, which, while innovative, is not what ARM should be doing.
ARM makes prototype designs that you can extend, and the M1 is an absolute success of that model.
Regardless: you say they've failed as a business, and your metric is how much they earn from being nearly everywhere.
I would pointedly disagree: they're everywhere, and that's success; they're everywhere and profitable. There's absolutely no need to rent-seek... this is unsustainable capitalist thinking.
They're able to innovate and turn a profit with their "modest" income, why do you need them to do more than that? How do you know they would do better with more money?
Intel (yes, apples to oranges) has many orders of magnitude more money, but most of it goes into bureaucracy.
IMO the poor state of GPU market competition is the only reason NVidia is able to get away with this; otherwise competitors would just eat their lunch by doing exactly the same thing without the arbitrary market segmentation.
I'd definitely agree on that, though. Nvidia is really good at what they're doing; I'm just disappointed that they don't view the open-source community as a positive.
F*** Oracle, definitely; they can't even be bothered to ship a proper systemd service on systems that use systemd, relying on scripts instead, which definitely doesn't redeem them.
> And lots of arbitrary software-locking of features to extract more money from different market segments.
To be fair, AMD does super annoying lock-in/binning things too, though not with software locking.
For example, there was a Radeon that would have been a crazy awesome rack-mounted server GPU for deep learning 4U blades. You couldn't put it in a server because it was mounted on a chassis that was 4mm too large for the PCI spec.
(Btw I hate hate hate Nvidia so this is not pointless whataboutism)
AMD is also moving towards proprietary interconnects for their add-on cards that provide higher performance links than their competitors are allowed to use. And they won't license them to their competitors of course - it's a "set bonus" if you buy all AMD. It's funny because people were panicking about Intel doing exactly that, and when AMD does it, crickets.
They have also moved to locking their CPUs to motherboards, to restrict second-hand sales of surplus CPUs and the like. So far it's only locked to the brand, so you can put it in any Dell system but only a Dell system, but it's still enough to cause a huge amount of headaches for the surplus sector. I was looking at some cheap EPYC 7402Ps on eBay until I saw... "locked to Dell, only runs on Dell". Welp. And the other cheap listing, is that one locked too? Who knows, and that's the point AMD is going for, they just have to inject enough uncertainty to change consumer behavior. Most people aren't going to be willing to trace the provenance of a used CPU; they will just pay 20% more for a new one. A limited number will be sold as complete systems, but as the market shrinks and people stop considering secondhand purchases, eventually they will just be thrown in the trash. Mission accomplished.
And that's coming for consumer CPUs too. It's already here for AMD's "Ryzen Pro" desktop line; it won't be long until it hits Walmart prebuilts too.
> Who knows, and that's the point AMD is going for, they just have to inject enough uncertainty to change consumer behavior.
That's not "the point". The purpose of the feature is legitimate, but you're right that the consequences suck hard for the second hand market.
The thing you're talking about is a security feature where the CPU would be locked to only boot when the firmware is signed by a particular signing key. If you're a high-value target such as a cloud vendor or a government / military, attacks on the firmware are a legitimate threat, and this ensures that the firmware cannot be silently tampered with. This is a big deal to them and that's the reason why AMD implemented this.
But yes, most people don't need or want this. It's entirely optional, it's supposed to be something the customer configures, but Lenovo turned it on by default. Pressure needs to be put on Lenovo to stop doing so.
I don't see the benefits, tbh. As much as people here dislike them, a web interface would be better than a dedicated desktop application for managing your homelab backups from different computers.
Also, the "Donation" edition is smelly; just be honest, call it a "Pro" version, and give the buyer consumer rights.