But that's exactly the market I thought AI would eat first. Blog posts need some sort of picture to represent the general idea of a high-tech server. Nobody cares which exact thing it is or whether it even exists. In many respects a non-existent one may be better: it won't get obviously dated.
So why pay money for a stock picture when you could have a passable substitute for free?
There's common wisdom that blog posts need an image, more or less any image. Larger publications probably have stock image contracts. Smaller ones use Creative Commons--maybe honoring non-commercial/non-derivative clauses, maybe not--or just grab anything they can. Generative AI seems tailor-made for "need an image, any image." I imagine I'll use it myself.
Here, have an upvote. The cover picture is there to represent the article and to invite potential readers to dig into it. I believe it is as important as the article text. If the cover picture, which takes up a significant part of my screen when the page loads, looks like this, I already have a bad impression of the article itself. Sorry, I am not using a plain-text browser.
Linux distros and essential libraries have been more or less ported. There are some rough edges here and there, but on the whole all the pieces are there (software and hardware)... and yet the uptake has been slow. Turns out, while cool to hear about, most people at the end of the day don't really care about what architecture they're running.
> the uptake has been slow. Turns out, while cool to hear about, most people at the end of the day don't really care about what architecture they're running.
Price is a factor. €140 for the Lichee, which "nearly catches up with standard ARM board performance like Raspberry Pi 4" (their own words from the AliBaba page), is hard to justify for most, especially since it also comes with less mature ecosystem support. The $90 Pine64 is cheaper but also slower than an RPi 4, and Pine64 is a "by enthusiasts, for enthusiasts" type of deal, which means stuff like a 30-day warranty (which is fine, just not a good fit for many uses). There is no price for the Asus one yet, but it's a lot slower than an RPi, and judging from their other boards I bet it's going to be a lot more expensive.
> Looking at how open source has eaten the proprietary software vendors
It has, but mainly only in areas where adopting it allows cutting costs and is not the end product itself. Not sure why/how that could work for chips. Why would you give out your core designs for free to anyone?
> It has, but mainly only in areas where adopting it allows cutting costs and is not the end product itself.
That's a lot of areas for RISC-V.
It doesn't matter to the end user if the end product (Phones, Tablets, TVs, routers, Home automation gadgets, etc) has ARM or RISC-V.
On the desktop, the end-user may care about whether their software still works, and as such the ISA matters.
In appliances, which is where ARM is dominating, end users don't care about whether they can run their existing spreadsheets, or PowerPoint, or games, or other software that won't be ported. They only care about whether the device is still as usable as their previous device.
If end users cared at all about the ISA, ARM would never have taken off for phones, tablets, etc.
In this space (that ARM is dominant in), the ISA isn't the end-product; the device is.
> It doesn't matter to the end user if the end product (Phones, Tablets, TVs, routers, Home automation gadgets, etc) has ARM or RISC-V.
Yes, but it's not about the end users; it matters for the producers. I mean, if your core designs are 'open' and anyone can make chips just as good as yours, what are you competing on? It means you have no margins since you're selling a commodity, which means there is no incentive to invest anything in R&D (unless it allows you to cut production costs).
> In this space (that ARM is dominant in), the ISA isn't the end-product; the device is.
So all new chips will have to be designed in-house by the companies which make these devices (effectively eliminating 'middle-men' like Qualcomm, Intel, AMD, etc.), so nobody is really competing on CPUs, since core design can no longer be part of your core business (the same way everyone is using Linux, Android, Blink/Chromium)? Wouldn't that lead to stagnation?
I don't really see this happening with hardware, though. Especially not with high-end chips, as long as you can gain a significant advantage by keeping your designs proprietary and the 'open' cores are not protected by something equivalent to the GPL (which realistically would be very hard to enforce).
> Lots of riscv implementations are closed source though
Exactly. Which potentially can make it worse than ARM, making it impossible for new players to enter the market at some point. Catching up would require massive investment, and you can't just buy a competitive design off the shelf (looking at ARM's business model, licensing your designs to others is just not a good deal, and they are effectively still a monopoly in certain segments).
Also see the XuanTie C910: open source yet competitive with the Cortex-A73, and available for years.
Meanwhile, Eben Upton has been falsely claiming that no such cores are available for licensing.
A Raspberry Pi 5 could already be out with RISC-V and higher specs than the Pi 4/Pi 400 if he hadn't ignored the C910, or had licensed any of the many competitive cores on the market that have been licensable for years now.
The price will take time and volume. To me RISC-V is interesting largely because of how well thought out its vector instruction set is. It really shows the advantage you get when you are the last one to implement a feature and can see how everyone else messed it up.
Next year should bring Tenstorrent's (CEO: Jim Keller) Ascalon, led by Wei-han Lien, who also led the M1 project at Apple; it's expected to have IPC similar to the projected Zen 5 (also due in 2024) while using significantly less power.
It has a range of smaller variants, potentially able to cover a range of products including servers, laptops and smartphones.
The provenance of the ASUS Tinker V units is Taiwan and Japan?
I wonder whether they'll embrace open firmware and mainline Linux (including all drivers) for the entire board.
That's what I really want from RISC-V offerings. (Though I understand the current motivation of many with RISC-V is simply to avoid ARM licensing fees, and that having trustworthy and long-term sustainable hardware usually isn't a requirement in IoT.)
Don't hold your breath. They've had an incredibly disappointing track record with Tinker Boards. Taiwan and Japan... have a horrible record with open source as far as I can tell? I can't think of any counterexamples right now - and they don't have Silicon Valley/Shenzhen-style hacker cultural hubs to push things along externally.
It's based on a bad chip that violates the privileged spec by having a hardware-pinned TLB entry at a really bad memory location, which breaks standard Linux userspace.
It will never be able to run standard distributions, only builds bespoke to the device.
Categorically avoid, and hope the next generation Renesas chip is less dumb.
The SoC IP used in ASUS's Tinker V deliberately violates the virtual memory spec in a way that affects userspace, rendering certain virtual addresses unusable. This region overlaps with the default base address of position-dependent executables, so those cannot and will not run on it unless rebuilt with a different explicit base address.
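For the curious, it's easy to check where a given binary actually lands. This is just an illustrative sketch (plain Linux/procfs, nothing Tinker V specific, and it doesn't know the SoC's reserved range -- you'd compare the output against that yourself):

    /* Prints this process's first mapping; for a position-dependent
     * (non-PIE) executable that is normally the text segment, mapped
     * at its fixed link-time base address. */
    #include <stdio.h>

    int main(void)
    {
        FILE *maps = fopen("/proc/self/maps", "r");
        char line[256];

        if (!maps)
            return 1;
        if (fgets(line, sizeof line, maps))
            printf("first mapping: %s", line);
        fclose(maps);
        return 0;
    }

If a binary does sit in the dead range, the "rebuilt with a different explicit base address" workaround amounts to relinking it as a PIE (-pie) or with an explicit -Wl,-Ttext-segment=<some other address>; both are standard GCC/binutils knobs, but you'd have to do that for every affected package.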
I'm more optimistic for RISC-V. All the devs who worked on it here (at https://vates.tech) told me it's very easy to work with, since it's close to many Arm design principles.
That's why I believe it's important to prepare the platform today for those future machines. I think it's a great opportunity not only to get an alternative to both x86 and Arm, but also to really open up the choice of design, letting new players master both hardware and software (I have to admit that's something I'm considering for my business at some point).
I think a lot of the open source community, and even ARM itself, assumed there would be some inflection point at which everyone would jump ship to RISC-V. I could be wrong, but at this point it seems more likely to be a gradual change unless there is some key game-changing piece of hardware like the M1 chip.
Most of the big moves in the RISC-V space seem to be coming from the low end and from China. They might have trouble making the jump to a true x64/M1-like competitor given the geopolitics around cutting-edge chip manufacturing (TSMC etc.).
I think other players are going to have funding issues taking on the big players, as the open nature of RISC-V seems to mean it's harder to build an IP moat. I've noticed a lot of the Chinese chips come with stuff like NPUs to set themselves apart and presumably get some lock-in on their "platforms". But that's just my naive impression from reading some blogs and looking at releases. (i.e. I have no idea what I'm talking about)
I don't think it will be "instant change". I agree on the gradual result, but I think it might be faster than we think. Yes, it also depends on the ecosystem, but I think the world is more ready for ISA diversity than ever.
>I think a lot of the open source community, and even ARM itself, assumed there would be some inflection point at which everyone would jump ship to RISC-V. I could be wrong, but at this point it seems more likely to be a gradual change unless there is some key game-changing piece of hardware like the M1 chip.
I don't know why. It's not inherently better than any of the current architectures in any meaningful way; it just doesn't require a license.
IIRC the royalty per ARM chip was somewhere in the single-digit percentages, which is basically meaningless for everyone but the chip company, or a few big companies making super-low-margin stuff.
The expensive part of an ARM license isn't the money, it's the fact that you have to deal with it (and be highly reliant on a 3rd party that doesn't share your interests).
Unless you design the RISC-V core yourself, that doesn't change with RV. You still need to find someone who will license you a core (remember, the ISA is the free part, not the implementation).
I guess if using one of the open cores is enough for your use case, that would make it easier.
It also means you know you can switch vendors without rewriting code. You need to license the core, but you have options on whom to license it from.
I'm interested to hear what you think the structure of the market for licensed RISC-V cores will be. How many vendors will there be, and where will the revenue to support all these vendors come from? At the moment licensing cores is not a massively lucrative business.
I don't know, but it seems likely to me that it should be able to sustain a few companies. All the really high end stuff will probably be first party developed (e.g. by NVidia) and not sold to 3rd parties, but there are a lot of places where flexibility is more important than raw performance. It's possible that market will be fulfilled by open source cores, but I wouldn't be surprised if there are some medium sized companies that make a business out of 3rd party custom chip design.
QEMU is probably the most used, most powerful non-embedded RISC-V platform today. If a customer really wants RISC-V for server applications, it's probably the fastest way to get there today -- it seems we're still pretty far from server-grade RISC-V silicon.
It still feels like a broken mess on my StarFive VisionFive 2, but things are improving. I just think I'm not developer-brained enough to fix the rough edges. Debian's repos are still missing a lot of RISC-V-ported software, but that takes time.
That is actually starting to change and, per Eben Upton's very recent interview with Jeff Geerling, should be getting back to normal for hobbyists in the coming months.
I’ve checked rpilocator a few times this week to find 4Bs available every single time. Not tons and not everywhere, but they’re starting to have availability again.
There have been way better options than the RPi, with its weird VideoCore monster, in terms of design/architecture/board design since almost the beginning. Nobody has displaced them in like a decade. I doubt a change of architecture is the final push needed.
ISA will not matter. Features and how well it works out of the box will.
The RPi has hundreds of competitors and still sells by far the most SBCs, because "it just works", with many things supporting it out of the box.
Xen provides a great security design and a hypercall protocol that makes sense (unlike KVM+virtio, which is DMA all the way, with all the pluses in terms of simplicity but the downsides on the isolation front).
If I wanted to caricature the situation: KVM is simpler to work with in terms of dev (you get results fast), but kind of "fuck security".
Xen is hard from the dev perspective, because it's more of a microkernel by itself, and you can't cheat to get access to the memory; you have to use grant tables (see https://xcp-ng.org/blog/2022/07/27/grant-table-in-xen/ ).
So if a part of the industry took a shortcut, that doesn't mean Xen isn't still relevant :)
I wish I could find a nice explainer on this. I heard this too, but not sure why.
If I understood correctly: Xen is a more secure design and has had a lot of work put into it (e.g. why Qubes uses Xen vs KVM), but KVM is faster, in the kernel, and getting a lot more attention generally, so in the future anything it's worse at now could get better because of the investment it is getting.
All the above is literally me guessing, which is why I'd love someone to actually write a proper explanation as to why "KVM >>> Xen".
Which industry? There are several products that use Xen under the hood, like AWS, Citrix, Qubes, Linode, etc. Why would you install an entire fully-fledged OS like Linux to manage a company-owned Windows machine that a data analyst uses, when you can use a Type 1 hypervisor instead?
Both Linode and AWS either migrated off Xen or are in the process of it. Qubes (last I checked) wants to move to a VMM-independent design.
You can build a very minimal host image based on Alpine or something similar to reduce the surface. I'm not sure how this compares to Xen these days though.
At one point I managed to convince KVM to work with Linode-style disks where /dev/vda is the actual filesystem, no partition table, but it took a fair amount of fiddling.
I'm not sure why people don't do that more often; it's just a lot more comfortable when you can expand a host-side LV as needed, and fsck and mount from the outside without having to deal with the embedded partition table.
>At one point I managed to convince KVM to work with Linode-style disks where /dev/vda is the actual filesystem, no partition table, but it took a fair amount of fiddling.
On standard KVM, if you make a VM and give it /dev/vgname/fedora, inside you get /dev/vda as your main disk, and then you have this:
/dev/vda1: /boot/efi
/dev/vda2: /boot
/dev/vda3: LVM
What I want is this:
Host side -> VM side
/dev/vgname/fedora-boot -> /dev/vda
/dev/vgname/fedora-root -> /dev/vdb
/dev/vgname/fedora-swap -> /dev/vdc
LVM on the VM side is superfluous when you're the owner of everything.
This way I don't have to deal with the whole rigamarole of /dev/vgname/fedora having a partition table inside. I can just resize/snapshot/mount/fsck every partition from the host directly.
I just find it odd that this seems like an unusual configuration and that pretty much no distro seems to want to work this way.
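For what it's worth, the host-side wiring isn't the hard part. A minimal sketch of the disk stanzas, assuming libvirt/QEMU and the LV names from my example above (the guest then puts filesystems straight on /dev/vda and /dev/vdb and swap on /dev/vdc):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/vgname/fedora-boot'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/vgname/fedora-root'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/vgname/fedora-swap'/>
      <target dev='vdc' bus='virtio'/>
    </disk>

The sticking point is booting: with bare filesystems there's no partition table or ESP for the firmware to find, so you either boot the kernel directly from the host side or keep one small conventional boot disk -- which is presumably why installers never set things up this way.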
Oh, I know what you mean, I just value "it booting on anything" over wasting a few hundred megs on a /boot or EFI partition.
We went through that phase with Xen and it just made it super annoying to migrate to anything else, because for that to work you need hypervisor-specific assistance to boot the kernel instead of a generic bootloader.
The previous admin also had the "genius" idea of putting kernels on the hypervisor side, which made all kinds of messes, as the hypervisor then had to keep a local copy of every kernel needed by every machine that might run on it...
> LVM on the VM side is superfluous when you're the owner of everything.
I mean, if you IDGAF how disks fill up, sure. I like to split it so that, for example, an app filling its data dir does not stop /var/log from logging. I did write a script that auto-resizes LVs and filesystems (up to given limits), so I just use that. That's on my multi-purpose VPS, at least.
In my day job we generally don't do LVM for smaller single-purpose VMs, but we do when there is, say, a database plus significant app data, because an app filling the disk so the DB can't work properly is more annoying to fix than an app where the upload button just stopped working. Database servers usually get "vdc for the database, vda for everything else", because that's nice and easy when it comes to giving it more space.