Rust codebases scale to larger sizes and larger teams fundamentally better than C++/C. Rust offers more leverage in building ambitious system software.
Since you mention Qt, imagine writing all of Qt in x86 assembly vs. C++. "There's no particular reason this isn't doable." C++ to Rust is a similar jump. No silver bullets; just leverage.
Cross-platform toolkits — especially those aiming to abstract over native UI/UX patterns — are an ambitious, if not Sisyphean domain. Qt was about the best we could do in the C++ era, but a new era has dawned.
> Qt was about the best we could do in the C++ era, but a new era has dawned.
Based on the way the Rust community has been spinning its wheels for years trying to get something even within a light year of feature parity with Qt, if a new era has truly dawned, you might need to wait for the next one.
> Rust codebases scale to larger sizes and larger teams fundamentally better than C++/C. Rust offers more leverage in building ambitious system software.
I like Rust - but this is classic RESF/RIIR copypasta.
> And you can non-ironically say the same about non-Qt toolkits written in C++.
Maybe. Gtk and its related libraries don’t cover everything Qt does, but they’re not small either. It’s also fair to include the native toolkits themselves on their respective platforms, which are written in a mixture of C++/C/Objective-C.
I'll also point out that one of the cross-platform toolkits with some traction in the Rust world is, amusingly, fltk.
> What grandparent probably means is something that leverages parallelism and/or GPU acceleration.
You mean like QtQuick/QML, Skia (effectively the underpinning of Electron and Flutter), or Dear ImGui?
There are a handful of widely used GPU-based GUI libraries.
The above examples are all C++.
> leverages parallelism
The memory model of Rust is still whatever C++ does. I get that Rust has some nice features and that C++ makes it easy to fuck yourself, but people have been doing large-scale parallel software development in C++ for years.
> Qt is also old as Jesus, and making a cross OS GUI is extremely hard.
The claim was that we are in a new era of “leverage” that will make the very hard easy (that’s what it sounded like, at least). I found the claim at best vague and low on specifics or evidence - hence the mention of the RESF.
> Maybe. Gtk and its related libraries don’t cover everything Qt does
And therein lies the problem. Qt is semi-open (the parent company tried to close-source it[1]). If even a commercial company has little interest in maintaining it, there is even less hope for other open-source approaches.
The best cross-OS systems are almost always backed by a commercial supporter. See Skia - Google, Java Swing - Oracle, Qt - The Qt Company, etc.
OSS offerings were always runners-up (e.g. Gtk - Gnome).
> The memory model of Rust is still whatever C++ does. I get that Rust has some nice features and that C++ makes it easy to fuck yourself, but people have been doing large-scale parallel software development in C++ for years.
Rust's memory model is undefined[2]. It might end up being whatever Rust does to accommodate C++ bindings, but I don't think they have really settled on one.
I'd like to add: people have been doing large-scale parallel software development in C++ for years, in spite of C++. What is a comment line in C++ is a type-system constraint in Rust.
It's the difference between wearing a seatbelt (Rust) and holding a piece of a seatbelt (C++).
Rust was literally made to address C++'s shortcomings when it comes to parallelism.
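To make that concrete, here's a minimal sketch (the textbook Rc-vs-Arc example, nothing from this thread): the "don't share this across threads" rule that a C++ codebase encodes as a comment is a compile error in Rust, because thread safety is tracked by the Send/Sync traits.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc<T> uses non-atomic reference counting, so it is deliberately
    // not Send: sharing it across threads would be a data race.
    let local = Rc::new(42);

    // Uncommenting this fails to compile with
    // "`Rc<i32>` cannot be sent between threads safely":
    // thread::spawn(move || println!("{}", local));
    drop(local);

    // Arc<T> uses atomic reference counting and is Send, so the same
    // code type-checks. The rule lives in the type system, not a comment.
    let shared = Arc::new(42);
    let handle = thread::spawn(move || println!("{}", shared));
    handle.join().unwrap();
}
```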
> You mean like QtQuick/QML, Skia (effectively the underpinning of Electron and Flutter), or Dear ImGui? There are a handful of widely used GPU-based GUI libraries. The above examples are all C++.
No. I mean like WebRender[3] or Lyon[4]. Most things should be parallelized and done on the GPU or with SIMD: layout, font shaping, etc.
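As a trivial illustration of the "parallelize layout" idea - a hypothetical sketch, not how WebRender or Lyon actually work; it assumes the rayon crate, and measure_paragraph with its fake 7.2px advance is made up:

```rust
use rayon::prelude::*;

// Stand-in for a real shaping/measurement call (HarfBuzz etc.);
// this name and the fixed advance are invented for illustration.
fn measure_paragraph(text: &str) -> f32 {
    text.chars().count() as f32 * 7.2 // pretend every glyph is 7.2px wide
}

fn main() {
    let paragraphs = vec!["hello world", "the quick brown fox", "lorem ipsum"];

    // Rust's Send/Sync guarantees make this data-parallel map safe by
    // construction; the same pattern in C++ relies on programmer discipline.
    let widths: Vec<f32> = paragraphs
        .par_iter()
        .map(|p| measure_paragraph(p))
        .collect();

    println!("{:?}", widths);
}
```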
Also, neither of your examples does any text shaping on the GPU. Lyon doesn’t do text, and WebRender (which depends on FreeType) does regular old glyph-cache rendering, with glyphs rasterized on the CPU into a texture. Neither involves GPU shaping.
Still trying to understand what earth-shattering, "leverage"-delivering feature this brings compared to Skia.
> Rust's memory model is undefined[2]. It might end up being whatever Rust does to accommodate C++ bindings, but I don't think they have really settled on one.
> I'd like to add: people have been doing large-scale parallel software development in C++ for years, in spite of C++.
Do you not see at least some level of contradiction between these statements?
This is probably the more relevant explanation:
https://doc.rust-lang.org/nomicon/atomics.html
"At very least, we can benefit from existing tooling and research around the C/C++ memory model."
> Sorry, font rendering. I thought font shaping was part of it. Pathfinder and Lyon were the libs.
They don't do GPU font rendering either.
Pathfinder isn't widely used -- see my original post about "spinning wheels".
GPU font rendering hasn't demonstrated much real-world value for GUIs. Where it does get used a bit is games - an industry that is overwhelmingly C++ for the foreseeable future.
> No? They are discussing SIMD and generic interactions.
I was responding to this: "I'd like to add: people have been doing large-scale parallel software development in C++ for years, in spite of C++."
Regardless, Rust still depends entirely on years of C++ research and development for its memory model, which is a core underpinning of parallel programming.
Ofc. Servo was shut down before anything could happen with it. Making production-ready GPU font rendering is hard.
It's not spinning wheels any more than the OSS life cycle is spinning wheels (author needs functionality X -> author makes a useful lib for X -> it becomes popular -> the amount of work increases -> due to pressure/changes in life, the author abandons the lib -> another author needs functionality X -> ...)
> I was responding to this: "I'd like to add: people have been doing large-scale parallel software development in C++ for years, in spite of C++."
Saying Rust has issues doesn't negate that C++ has massive issues too - along with a bigger mindshare.
> Rust codebases scale to larger sizes and larger teams fundamentally better than C++/C
There are about 3-4 orders of magnitude more large C++ projects developed by large teams compared to Rust projects as of now.
This might change in the future, but your claim seems a bit premature.
In a previous company I worked at, we were forced to rewrite a greenfield project in C++ because it was going too slowly in Rust. The team managed to ship the thing in a few months, compared to spending a month getting nowhere with Rust.
Sounds a lot like the effect of venture capital in the present day.
> First of all, the best scientists would be removed from their laboratories and kept busy on committees passing on applications for funds
Convince talented technical innovators that the best way they can apply themselves is to become a 'business person'; then talk to VCs and feel important by spending a lot of money instead of building something.
> Secondly, the scientific workers in need of funds would concentrate on problems which were considered promising and were pretty certain to lead to publishable results
Do whatever VCs think is hot
> For a few years there might be a great increase in scientific output
The last 15 years?
> There would be fashions. Those who followed the fashion would get grants. Those who wouldn’t would not, and pretty soon they would learn to follow the fashion, too.
Except VCs lose all their money pretty quickly if they never get it right, and don't get rewarded (to the same extent) for following along with whatever is in fashion. So there is another, more fundamental factor there: one that drives VCs to invest in things that are thought to be under-funded but have potential.
Honestly, it seems like trends for VCs cycle at around the same frequency as paradigms in most academic fields (on the order of a decade). Unless the VC gets caught with their pants down right at the end of the transition, they'll probably turn a profit.
> Unless the VC gets caught with their pants down right at the end of the transition, they'll probably turn a profit.
Sure, it's like investing in stocks. But the incentives for VCs and academics are totally different. In academia, volume is a big part of prestige (not for everyone, of course, and to many people's dismay), whereas VCs are looking for that one huge disruptor and are often willing to miss on hundreds to hit it.
Makes you wonder whether all of this makes any sense, since the end result isn't humanity ending up with great tools to assist human lives, but a select few with capital getting even more capital, to the detriment of the former goal.
> It's that Adobe was likely seeing subscription revenue take a hit from customers
While being pummeled by public markets, and being forced to make a move that might keep shareholders from calling for blood.
This is certainly not the first time that Adobe has presented a number to Figma's board — but it has to be the biggest number yet, by far.
From Figma's position: take your chances on an IPO while the Fed is cracking skulls around inflation — or flip the bit on that liability, and cash out to a desperate Adobe?
Another interesting layer to this is that Adobe only has $5B in cash according to its balance sheet, so the overwhelming majority of this deal is probably in Adobe stock with a long vesting period. Also, the deal being done in a downturn means that the difference between this and an IPO is academic, in my view.
Figma was never on track to change the world. They were an Adobe clone from the beginning, out-executing them, but fundamentally exactly as anti-innovative.
Not that $20B is anything to shake a stick at — but real innovation in this market will be worth one to two orders of magnitude more. Figma was scratching at this with their "whole org collab" vision and FigJam, but they lacked the vision to crack it, and their execution has been faltering since their early talent started jumping ship. Selling to a desperate Adobe, distressed by public markets, is the perfect chance to "fail up."
Why am I disappointed in Figma? Because they could have been so much more. Because in effect, they have held the creative world back by doubling down on & cashing in on Adobe's corruption of design tooling. Play Adobe games, win Adobe prizes. It's just a shit game, and peanuts compared to the latent opportunity in this space.
Figma literally changed my world, so I couldn't disagree more.
Today I have 800+ users and 100+ editors in my Figma system; copywriters, UX, UI, UR, PMs, analysts, biz - everyone is collaborating like I have never seen in any Adobe setup.
Adobe hasn't even been a contender; meanwhile, Figma won over Sketch and InVision as well. So while I agree that they are still missing some features, especially for shared design systems, I don't understand your view - care to elaborate?
Figma is all about collaborating on software — yet they don't touch software.
They have been incrementally innovative in involving design-adjacent stakeholders in the design process, but the elephant in the room is "how do we collaborate on _software itself_," rather than on pictures of software?
When you have copywriters, designers, managers, and biz making pull requests to a GitHub repo via a design tool — that will be world-changing. This COULD have been our world already, but this paradigm is against Adobe's entrenched interests. Figma just played Adobe's game, and netted a cool $20B for it.
When Adobe acquired Macromedia, they extinguished an entire paradigm of design tool: "design tools that create software." Back in the booming '90s, this paradigm was _the future_.
Through that acquisition, Adobe shoe-horned the world into a paradigm of "hand-offs," and Figma's leadership (namely, Sho) doubled down on the "hand-off" vision. "Play Adobe games."
The future for collaborating on software design & build looks more like Flash, and less like Photoshop (though obviously, not quite like either.) "Exactly as anti-innovative."
"Hand-off" describes a paradigm of designer/developer collaboration where a designer creates a mock-up or prototype, then "hands off" that picture to a developer who then creates the "real thing."
Hand-offs are wildly inefficient, and the fidelity of creativity & artistic expression gets largely butchered on its way into the final medium.
Contrast with a design tool like Webflow, Flash, HyperCard, or Visual Basic, where the product of the design process is production software. Figma could have gone down this route — a harder route, admittedly, but ripe for innovation — and they chose not to.[1]
Good for them: $20B. Bad for the world.
--
[1] Sho's vision is that design _should_ live in a separate world, and that hand-offs are the ideal form of collaboration — because they enable design to be unfettered by the constraints of production software. I would call this a failure of imagination: it is quite possible to explore free-form design ideas within and around production software: c.f. Macromedia Flash.
Sure, $2T — a fundamental innovation in how we "design and build" software has implications as far-reaching as the World Wide Web itself. Google IS the World Wide Web — they have previously broken a $2T market cap.