Sustainability with Rust (amazon.com)
137 points by littlestymaar on Feb 12, 2022 | 96 comments


Ah yes, we'll save the planet by writing web services in Rust, which serve inefficient, bloated webapps to billions of people, with browsers constantly spinning the CPU near thermal shutdown...

If you want to save energy, write small software. Small, local, self-contained software. Not everything has to be a webapp. In fact, most things shouldn't.


WASM is more efficient than JS...


I think they were just pointing out that software architecture will usually make a much bigger impact on efficiency and energy usage than language choice.

A bloated app is wasteful no matter what language it's in.


Ah, okay, that makes sense :D I agree


It highly depends on what the use case is.

For most use cases JS is still more performant than WASM, unless you're doing a lot of very heavy number crunching, which the vast majority of webapps aren't.

Browser vendors have done a ton to make JS fast.


> WASM is more efficient than JS...

Last time I checked it wasn't. That's a few years back. I would be really happy if it is now.


WASM does not have native access to the DOM, for starters.


> Not everything has to be a webapp. In fact, most things shouldn't.

And be at the mercy of monopolies.


Exactly. Or consume fewer products.


Getting more intrigued by Rust by the day. Mostly I use Java, Python, JS, some C#. Some golang. But Rust looks like it may actually be a language of the future.


It's a nice language: the first where, having learned it, I really wanted to rewrite my existing code in it.

If you enjoy learning languages (you listed five), it isn't hard to play around with some Rust and see if you like it. The entry "price" in terms of time and effort is minimal; the tools even generate a Hello, World example by default, so you don't have that intro-to-Java moment where there are like eight lines of inexplicable boilerplate to copy from the slides before you can print a word on the screen (what is "public static void", and what are "String[] args"?).
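
For reference, this is the whole src/main.rs that `cargo new` generates (in current toolchains, at least):

    fn main() {
        println!("Hello, world!");
    }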

But it isn't a panacea. All the languages you list are garbage collected, so for you Rust would be a crash course in ownership and lifetimes. If you write enough of the languages you mentioned you might have touched on this: a database handle, for example, might have a distinct lifetime, and the IDisposable interface in C# is in this ballpark, so you may have some clue. In Rust this gets real immediately for even mundane-seeming objects like a String (the owned, growable string type), and you may find that it's more productive to do work where you're unclear about the object lifetimes in a garbage collected language where it isn't your problem. Or you might find it all makes perfect sense and you wonder why anybody wasted their time with these GC languages. People vary.
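
To make "gets real immediately" concrete, here is a minimal sketch (my own illustration, nothing official) of ownership biting on a plain String:

    fn main() {
        let s = String::from("hello");
        let t = s; // ownership of the heap buffer moves from s to t
        // println!("{}", s); // compile error: borrow of moved value: `s`
        println!("{}", t); // fine: t is the owner now

        // Borrowing avoids the move, so the original stays usable:
        let u = String::from("world");
        let r = &u; // shared borrow, no transfer of ownership
        println!("{} {}", u, r);
    }

The GC languages above never make you think about the commented-out line; Rust makes you decide ownership up front.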


Thanks for a comprehensive comparison.

> you may find that it's more productive to do work where you're unclear about the object lifetimes in a garbage collected language where it isn't your problem.

This is exactly why I haven't ventured into Rust yet.

But lately I've taken an interest in WASM and HW programming. I write in Go-WASM, but have realized that the concurrency prowess of Go is irrelevant in WASM, as threads are not implemented yet. Moreover, it's a recipe for getting the UI thread blocked.

And GC languages are generally a pain on HW with low memory.

Is Rust worth the effort for writing WASM, thereby gaining the necessary skills to apply to HW systems programming?


Any analysis that seems to show C++ as 34% or 56% less energy efficient than C only demonstrates its own failure. And any analysis that relies on such results is likewise a failure. We need read no further.


I'm not sure how I feel about the placement of Java and C# relative to golang with regard to energy efficiency. I get that they have optimizing JIT compilers and golang is not going to get any better after the compile step, but this seems odd nonetheless. Anyone have more insight?


You can't really trust the Computer Language Benchmarks Game. Many of the solutions there have code that's as far from idiomatic as possible (nobody would write code like that in real production applications).

That said, two things:

1. Go is not particularly efficient as a language and its compiler doesn't optimize much compared to the other "efficient" language compilers (even the JIT ones).

2. These small benchmarks benefit JIT languages quite a bit, since the JIT cost is barely noticeable. In large real-world apps, constantly running a compiler in the background is not exactly efficient. It'll pay for itself vs interpreted languages, but never against AOT-compiled ones.


> Go is not particularly efficient as a language and its compiler doesn't optimize much compared to the other "efficient" language compilers (even the JIT ones).

I mean, relative to many other mainstream languages including many of the JIT languages, Go is very efficient, but a lot of that efficiency comes from the language design (e.g., idiomatic value types) such that the compiler and runtime don’t need to be as smart.

Agreed that the benchmarks game is a pretty crummy measure of performance, but it typically imposes strange restrictions on Go that prevent it from showing its idiomatic performance (never mind its ceiling). For example, languages with garbage collectors aren't allowed to preallocate memory for certain benchmarks, even though that's absolutely idiomatic in Go (because Go has value types). So you have languages with GCs optimized for everything-is-an-allocation workloads, which perform well; languages like C and Rust, which are allowed to preallocate because they don't have a GC (a weird criterion); and then Go, which appears to be very slow because it's forced to allocate in a tight loop.
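
To make the preallocation idea concrete: Go's `make([]T, 0, n)` and Rust's `Vec::with_capacity(n)` express the same idiom. A sketch in Rust (my own illustration, not code from the benchmarks game) of the two allocation patterns being compared:

    fn main() {
        // Allocating as you go: the vector reallocates and copies
        // its contents whenever the current capacity is exceeded.
        let mut grown: Vec<u64> = Vec::new();
        for i in 0..1_000_000 {
            grown.push(i);
        }

        // Preallocating once up front: pushes never reallocate.
        let mut prealloc: Vec<u64> = Vec::with_capacity(1_000_000);
        for i in 0..1_000_000 {
            prealloc.push(i);
        }

        assert_eq!(grown.len(), prealloc.len());
    }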


Yes, Go's design is not bad as far as efficiency goes, but it's not amazing either.

From a language design perspective (when it comes to "ease of optimization") it's much better than Java and comparable to C#.

But compiling C# with CoreRT (which is basically .NET Core's RyuJIT configured as an AOT compiler) shows just how little Go's compiler optimizes; Go would need to be even more "easy to optimize" than it is to make up the difference. Rust and C++ are both easier to optimize and their compilers are in a whole other league entirely, and it shows in their horrific compilation speeds.

I'm not sure what the right way to do these "benchmarks" is. Some of the code I've seen there was more low-level and convoluted than stuff I'd write in C. Not even remotely representative of how the languages are used. Stupid rules like the one you mentioned also don't help, since some languages get to preallocate and others don't.


Agreed, Go isn't in the top tier of languages with respect to efficiency; the languages in that tier trade off a lot of other facets to get high performance, and those tradeoffs don't align well with Go's goals. That said, I would be curious what sorts of optimizations Go could add that would pay for themselves while keeping compile times fast. I don't know a lot about compiler optimizations, but I find the topic very interesting, at least as it pertains to languages I use (I really don't care much about Haskell or OCaml or any of the other PLT darlings).


> I'm not sure what the right way to do these…

In which case, maybe what you see is as good as you'll get.

> … some languages get to preallocate…

Here's Go using a library pool —

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


The sync library is part of Go's standard library and is meant to allow safe concurrent access by multiple goroutines. It's pretty idiomatic Go!


Yes — so "allowed to preallocate memory" ?


> … aren’t allowed to preallocate memory…

Because as-you-know the whole point is to "allocate zillions of short-lived nodes" —

https://news.ycombinator.com/item?id=29323468


The C++ and Swift implementations use a node pool from the Apache Portable Runtime. This benchmark would be a lot more interesting if we got to see the price of C++'s RAII or Swift's reference counting.

The Swift code looks like something you'd write in C.


> … the price of C++'s RAII…

Like this? (There are 8 other C++ binary trees programs.)

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

> … or Swift's reference counting.

Like this? (There are 4 other Swift binary trees programs.)

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


You could ask the authors to update their study.

You could look at the different binary trees programs shown on the current benchmarks game.


“It’s not a bad benchmark, it’s just showcasing a contrived scenario with different sets of rules for different kinds of languages”.

Most people aren't building applications whose business requirements preclude preallocating memory except in languages that lack garbage collectors.


So those are scare quotes, not an actual quotation?

If we don't experience a performance problem, we don't need a workaround.


No, that’s not an actual quotation. I was having a laugh at the silly logic which attempts to justify the benchmark. And preallocating isn’t a workaround, it’s idiomatic in Go.


That is not a contradiction: it can be both idiomatic in Go and a workaround to avoid garbage collection.


And that’s a valid point for the set of applications whose purpose is generating garbage. But overwhelmingly this isn’t interesting and people mistake this benchmark for a useful indicator of programming language performance in the general case.


And the set of applications that will inevitably generate garbage, which might be left to automatic garbage collection.

Errare humanum est.

"Energy Efficiency across Programming Languages" not "programming language performance in the general case".


It’s not a very compelling defense of a benchmark to argue that it measures something different for some languages than others—in this case for Go the benchmark measures poorly/carelessly written code while for other languages it measures meticulously optimized code.

I maintain a contrived benchmark is not a useful benchmark.


In this case for Go, the same as other languages that provide automatic garbage collection.

In this case for Go, the same as other languages that provide a library pool implementation.

Apparently the study authors found a use.


> Many of the solutions there…

You are looking in the wrong place.

Here are the 4 year old programs used in the "Energy Efficiency across Programming Languages" study —

https://github.com/greensoftwarelab/Energy-Languages

> … nobody would write code like that…

When you tell us you wouldn't, we should all believe you.

When you tell us nobody would, we should ask how you know.


> You are looking in the wrong place.

Well, I randomly went and opened the C# binary tree example. I see a TreeNode struct with a private Next class inside of it. Can't think of any reason to write a BinaryTree like that in C# beyond trying to game some benchmarks.

The C++ implementation uses a node pool from the Apache Portable Runtime (a C library). Also not idiomatic (how did they get away with using a library?!). I particularly love that it doesn't even use std::string for the output text.

The Swift implementation also uses the Apache Portable Runtime node pool, completely non-idiomatic (even less justifiable than the C++ one), don't need to go into detail on this one.

The Go and Java implementations look perfectly normal for the respective languages. The Dart one too, but that one doesn't even use any concurrency so it's starting from a losing position.

This is not a fair comparison.


> … beyond trying to game some benchmarks…

Did it work?

If it wasn't massively more performant than other ways to write that program in C#, then maybe it's just fine for the purposes of "Energy Efficiency across Programming Languages".

> … but that one doesn't even use any concurrency…

What is the relationship between multicore and energy efficiency?

> This is not a fair comparison.

The comparison is energy efficiency not elapsed seconds, yes?


Comparing the energy efficiency of code that's not even doing the same thing is not a useful comparison. If you're running these benchmarks on a server that's always running full throttle then using more cores will use less power because the software finishes faster. If you're running on a laptop the relationship between core usage, elapsed seconds, and power usage is very non-linear.


So finally the greening of programming languages!

Kind of hard to argue with the article's premise, though all languages aren't equal for all tasks. For example, it's hard to use Rust for web projects because the ecosystem isn't mature enough. Maybe there should be investment in the Rust ecosystem from pro-environment groups.

PS: that Go memory usage is incredible. Why is it so low compared with even C?


> Maybe there should be investment in the Rust ecosystem from pro-environment groups.

Meh, I think they have better ways to spend their limited resources than putting them into something Amazon is also investing in.

> PS: that Go memory usage is incredible. Why is it so low compared with even C?

That study has been posted here and a bunch of other places, and it just isn't that reliable. It's based on The Computer Language Benchmarks Game. Some of the implementations are painstakingly optimized with SIMD and sophisticated algorithms; some aren't. It's benchmarking N different programs written in M languages (where M<N) that satisfy a given challenge, not the same program ported to N languages. Which languages have high-quality implementations might say more about how obsessed a given language's community is with this particular set of benchmarks and rules than about the actual efficiency of the language in question.

Take a look at the dramatic difference between JavaScript and TypeScript. Then realize that TypeScript is literally the same language. Basically, I think someone wrote slow-solution.ts, then someone else wrote fast-solution.js, and no one bothered to cp fast-solution.js fast-solution.ts.

Edit: that said, I do think Rust is genuinely a great language when you need security and efficiency together. And Go is a great language when efficiency is not quite as important as productivity.


> And Go is a great language when efficiency is not quite as important as productivity.

The Go problem is not so much that it's inefficient; it's that the lack of compile-time checks encourages bug-prone cowboy coding, which is often mistaken for "higher productivity" because no one's measuring the very real productivity impact of easily preventable software defects.


> It's benchmarking N different programs written in M languages (where M<N) that satisfy a given challenge, not the same program ported to N languages.

And using K different standard libraries. Otherwise I can't explain why they put C in a different position than C++, considering they are often the same compiler (and not just the same backend). Probably it is benchmarking stdio vs iostreams or the like, which is interesting.


Yes, IIRC the Game's maintainer has some rules about idiomatic code. They might mean that if a language has a hash map built into its std library, you can't supply your own. C++'s std::unordered_map is infamously bad (compared to e.g. absl::flat_hash_map), so that hurts. C has almost no useful standard library or idioms. In normal circumstances I consider that a disadvantage, but in this case it may mean you write your own more optimal code and achieve a better score...


> might mean

None of the current 3 C++ programs use `std::unordered_map`; they use `assoc_container.hpp`.

The current C program uses `khash.h`.

Someone who was interested in evaluating "Energy Efficiency across Programming Languages" would have to look at the authors' public code archive and understand what they did.


> Otherwise I can't explain…

They provided the source code of their programs and makefiles. There's no need for guessing games —

https://github.com/greensoftwarelab/Energy-Languages


I still haven't seen a proper benchmark of performance across programming languages that properly accounts for the spectrum of implementation ability that a particular engineer might have. If Leetcode published some of its statistics, though, it might be pretty good.


> I still haven't seen…

Why do you think that is?

Perhaps it's because "proper" is ill-defined and very much a matter of individual opinion.


> And Go is a great language when efficiency is not quite as important as productivity.

Which still significantly lags Java and C# when it comes to productivity.


> It's based on The Computer Language Benchmarks Game…

It's based on these 4 year old programs —

https://github.com/greensoftwarelab/Energy-Languages

> … the same program ported to N languages…

You can write Fortran in any language ?

----

> … no one bothered to cp fast-solution.js fast-solution.ts.

Perhaps figuring out correct type annotations was too much of a barrier.


> > It's based on The Computer Language Benchmarks Game…

> It's based on these 4 year old programs — https://github.com/greensoftwarelab/Energy-Languages

Yes, that repo's blurb confirms what I said: "The complete set of tools for energy consumption analysis of programming languages, using Computer Language Benchmark Game"

If you were making some other point, I missed it. As for the programs being four years old, the paper was published in 2017, so that's exactly as expected...

> > … no one bothered to cp fast-solution.js fast-solution.ts.

> Perhaps figuring out correct type annotations was too much of a barrier.

Sure, the details don't interest me. The point is that these two languages with identical compilers (TypeScript gets translated to JavaScript, discarding your annotations) show wildly different scores, which makes it impossible to take any of these numbers too seriously.


> … I missed it.

Your "painstakingly optimized with SIMD" doesn't apply much to those 4 year old programs.

> … which makes it impossible to take any of these numbers too seriously.

Never throw the baby out with the bath water.

Never take any numbers too seriously, just seriously enough.


> Your "painstakingly optimized with SIMD" doesn't apply much to those 4 year old programs.

What's this then? https://github.com/greensoftwarelab/Energy-Languages/blob/ma... There are some explicit SIMD instructions, and some programs written to allow autovectorization.

> Never throw the baby out with the bath water.

/eyeroll

There's no baby. There's no significant investment in applying equal levels of optimizations to the programs across languages. TypeScript vs JavaScript is a telling example but hardly the only one.


> There are some explicit SIMD instructions, and some programs written to allow autovectorization.

Yes there were some way back then.

Many more have been contributed since then.

> … equal levels of optimizations to the programs across languages.

How do you propose to measure that exactly?

Or will we, like Goldilocks, know it when we see it?

Like these —

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Oh, it just clicked for me that you are the Game's maintainer.

> How do you propose to measure that exactly?

I don't have a proposal, other than to avoid overselling. I like that the Game's website has the text "If the fastest programs are manually vectorized SIMD, does the host language matter? You might be more interested in less optimised programs." The "Energy Efficiency across Programming Languages" paper, though, is overselling its rigor.


Yeah, sorry about that, people's comments usually seem more open before they realize.

It's difficult to find a balance that works — on the one hand, for some, hints give context; on the other, for others, hints give stones to throw ;-)


> For example, its hard to use Rust for web projects because the ecosystem isn't mature enough.

Actix-web is finally getting a stable, production-usable release. That alone ought to amount to a pretty sizeable boost for the ecosystem.


What do you mean? actix-web hit 1.0 a long time ago. If that 1.0 wasn't actually stable and usable in production, or if there's another upcoming milestone, can you please provide a link?


It was unmaintained for quite some time, but it's been picked up again and is pushing out new releases.


Rust has a mature web ecosystem with actix web. It is almost at a 4.0 release.


When I looked into Rust for web development, I looked at Rocket and Actix web. Rocket seems to be on some kind of hiatus and hasn't had any development in months, so that seems like a risky bet. Actix web had some drama two years ago when the founder and lead developer moved the repository and quit. He later appointed a new lead, and it seems to have done well since then, but it does give cause for concern. Can you build on top of frameworks that can so easily go stale or be taken down? Are all of them maintained by single unpaid open source developers? I think Rust looks like a great fit for efficient back end programs, so I want it to succeed.


Actix had drama and has been community-managed for a long time since then. Last time I looked, Rocket was working toward stability but wasn't there yet. It's more of a batteries-included solution anyway, so maybe better suited to small projects than production stuff; Actix has some good flexibility for those.


Ok, so Actix could be considered a good option for a new project?

I liked Rocket's syntax; the request guards [0] in particular seemed like a good idea. But I haven't really looked that closely at either, since I never started my project.

[0] https://rocket.rs/v0.5-rc/guide/requests/#request-guards


Yes, I'd say Actix is an excellent option. It doesn't have as much built in as Django or Rails or others, but if you are not looking for that kind of framework it is a good option, mature and stable between releases.


Sure, the base is there, but, for example, there's nothing like Java's iText. That is what devs mean by "immature ecosystem".


Is there something as full-featured as Rails, Django, or even Elixir's Phoenix?


Tangent:

> Energy Efficiency in the Cloud

I recently moved the infrastructure for one of my company's products, from ~10 smallish EC2 VMs, plus durable storage in RDS, EFS, and S3, to a single dedicated box at OVH with local storage and backups in S3 Glacier and rsync.net. The application stack is the same in both cases. I wonder which is more energy efficient. The OVH box has a 6-core CPU, 32 GB of RAM, two 512 GB SSDs, and two 6 TB hard drives. I suppose AWS, because of their massive scale and use of virtualization, can be more energy-efficient than OVH's comparatively low-capacity dedicated servers; notice how absurdly high the specs are on all of the EC2 bare-metal instances.

On the OVH box, I set the Linux CPU frequency governor to "performance", to make sure all requests get the fastest possible response even though the box is ~98% idle. I wonder how much energy that's actually wasting.

The main reason for the move was to save money. And yes, since we moved to one box, we decided we could tolerate some reduction in uptime (this is a consumer app). But I wonder what the relative costs would look like if the externalities of having that relatively low-capacity, mostly idle box to ourselves, as opposed to using several small EC2 VMs (also mostly idle), were factored in. Maybe the only really fair answer is for everyone who doesn't actually own and operate their own servers to go all serverless and be nickel-and-dimed for every little thing.

Edit: And FWIW, no, we're not using energy-efficient programming languages at the application layer. Mostly CPython. I guess I could experiment with PyPy, and/or rewrite some of the smaller services in Rust.


It sounds like you had invested in a high-redundancy deployment in AWS, and switched to a single host with a single DB at a colo facility, given that you can tolerate some downtime and have the people-time to remediate an outage.

I'm curious: what made you choose to also move to a colo facility vs. allocating a single EC2 instance to act as a VPS? I'd imagine the costs would have been comparable, and you could still get automated backups/recovery with EBS + snapshots.


Fair point. I was just thinking about that myself. One big reason is to avoid being charged for outgoing data transfer; with OVH, that's unmetered, with a gigabit port. Also, the OVH box, with the specs I laid out above, costs ~$150 per month. For roughly that same price on EC2, we could get an m5ad.xlarge on-demand instance, with 2 cores, 16 GB of RAM, and a 150 GB SSD. We'd have to add EBS and/or S3 (non-Glacier) storage on top of that, and as noted above, outgoing data transfer.


> On the OVH box, I set the Linux CPU frequency governor to "performance", to make sure all requests get the fastest possible response

I'm not sure that'll meaningfully improve latency given how much more relevant other factors can be. Have you measured a real difference? If not, you might be burning CPU-cycles for nothing.


Some things felt noticeably faster to me, but that could of course be the placebo effect. More importantly, the CPU usage percentages reported by ps have fallen (those are actually large enough to measure in a few processes, because "100" means a single CPU thread and not the whole CPU). That implies that the CPU is in fact spending less time on some things.


> More importantly, the CPU usage percentages reported by ps have fallen

That's not a meaningful comparison when the CPU itself is running at a higher frequency. Lower frequency enables the CPU to waste less effort, e.g. while waiting on memory or I/O.


I don't know if the same holds true for servers, but on mobile you want to race to idle. In other words, consistent high CPU usage at a low frequency is worse than spiking the frequency up to get CPU usage down to 0. I would imagine that holds here too, although it's more complicated because a server's usage model is different (perhaps a constant baseload to begin with changes the calculus).


Race to idle is good in a compute heavy workload (up to thermal limits), but I'm not sure it is when the CPU is mostly waiting for memory. Also IIRC, the point of using "performance" in the first place is that it keeps the CPU at its design frequency even when idle - so it will be less able to boost further when needed.


https://en.m.wikipedia.org/wiki/Jevons_paradox

The more efficient you make it, the cheaper it is to use; the cheaper it is to use, the more people use it. Efficiency just creates more consumption.

You can’t “efficiency” your way out of climate change. Sorry.


I see the Jevons effect touted around a lot as if it's a law of nature, when in reality it's just a name given to an effect observed in very specific instances.

Even within economics it's treated as a rare market effect and not as a given eventuality.


It's literally just a special case of price elasticity, a pretty core and uncontroversial idea in economics. For many (but not all) goods, if the price goes up, demand goes down. If the price goes down (due to efficiency improvements), demand goes up.

If overall resource consumption stays the same or increases, then boom: Jevons paradox. And experience shows me that's the norm, not the exception.
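
A toy worked example (numbers invented purely for illustration): suppose an efficiency gain cuts the energy per request from 1 J to 0.5 J, and the resulting price drop grows demand from 1M to 2.5M requests. Total energy goes from 1 MJ to 1.25 MJ: each request got twice as efficient, yet overall consumption still rose 25%.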


Have you heard of computers? Do you consider the development of computing power per watt vs. the total power consumed for computing over the last 70 years an example of this rare market effect?


I agree that it's not a law of nature or economics, but it is a trap we can fall into, and that is why we have to be sceptical when we talk about technological solutions. Any effort to make a harmful process less harmful can produce a worse end result, because the improved process will be used more than before thanks to the efficiency gains.


Sounds like the same story as risk compensation. "If you let people wear seat belts, they'll drive recklessly and die more."

I don't believe it, for the same reasons. Though I agree that what we actually need for climate change is a seemingly impossible amount of political coordination: a global pollution tax (not just on CO2) that no country or state can race to the bottom to escape.

So many people are categorically "anti-globalist" (I don't know what this means, but that's the sentiment I feel from them) that I have no idea if we can pull it off.


A pollution tax is a tool to increase the price in order to decrease consumption. Literally an attempt to artificially cause the same effect as if polluting were LESS efficient.

I'm not saying efficiency isn't valuable. I like an efficient process as much as the next nerd. What I am saying is that we can't consume our way out of global warming. The total carbon cost of a Tesla over its lifespan is of the same order of magnitude as that of a regular car (i.e. about half). The only choice that has a significant impact on climate change is to "consume less" driving, i.e. bikes, public transit, etc. Those are 10x or 100x choices.

But if there isn't good bike or transit infrastructure where you live, i.e. it's not an available choice, then an electric car is better than nothing.


I don't think it contradicts the article. They wrote that the energy consumption graph of data centers is basically flat. That's unexpected, because big data, machine learning, etc. have gotten popular in the last decade (or so). The way I interpreted this is that by improving efficiency, we kept total energy usage close to constant.

I'm not saying that if you write Rust you save the planet, but in certain scenarios, the energy you save by having some parts of your services written in Rust can be significant.


> If you look at the graph of energy consumption, the top line is basically flat going back as far as 2010.

> There have been too many data center efficiency improvements to list

This seems consistent with Jevons paradox. Efficiency improvements have not reduced overall consumption.

Unless you believe that efficiency improvements have just randomly coincided with growth and they happen to cancel each other out… but it seems more likely to me that cheap compute contributed to the huge uptick in machine learning / big data etc.


You can't efficiency your way out of climate change, but you can use efficiency to ensure people's living standards don't drop as a result of climate change mitigations.


Ah yes, the myriad ways cloud computing contributes to my standard of living /s

I wonder how much of that 200 TWh is burnt trying to profile me, track me, and sell me stuff that has little or negative impact on my standard of living. I'd be willing to bet well over half. What's Facebook's portion of that 200 TWh?


OK.


I hope someday in the near future AWS Lambda supports Rust as a first class language.


You can already do this, but I'd think WASM is a better fit for Lambda than a native compilation target: it means AWS can run your code on ARM or x86 depending on available capacity.
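
For the curious, a minimal sketch of what a native Rust handler looks like on Lambda's custom runtime, using the community `lambda_runtime` crate (assuming `tokio` and `serde_json` as dependencies; treat the details as an outline, since the crate's API has shifted between versions):

    use lambda_runtime::{service_fn, Error, LambdaEvent};
    use serde_json::{json, Value};

    // Receives the invocation payload as JSON and returns JSON.
    async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
        let name = event.payload["name"].as_str().unwrap_or("world");
        Ok(json!({ "message": format!("Hello, {}!", name) }))
    }

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        lambda_runtime::run(service_fn(handler)).await
    }

The binary gets compiled for the Lambda target, named `bootstrap`, and deployed on the `provided.al2` custom runtime.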


Didn't know that, thank you. Will check it out.


In the first chart in the article, JavaScript has an energy efficiency of 4.45 (impressive) but TypeScript's efficiency is 21.50?

Is it transpiling on every run or something? Anyone have some insight on this?


It looks like the research paper referenced in the article isn't fair. D probably fits in the top 5 languages for performance and efficiency, but it's not even in the list of languages studied.


I really wish D were used more widely by companies. Out of all the "new C" propositions, this is the one that appeals to me the most.

But it seems a bit counter-productive to commit yourself to it, since it's not gonna help you get a good job.


What about the carbon footprint of development time? Or maybe it is no longer an issue due to stagnated salaries and skyrocketed profits?


Interesting question, what about it? Would the incremental cost of more people learning the language mean that there'd be less output in the short term, thus offsetting the benefit of "RIIR"?

Rust can be a difficult language, but I think that for most common usecases (web), we could reach reasonable proficiency in a few months/years.

There are also other interesting aspects, like more tooling written in Rust benefiting some of the existing languages, with a positive effect on development time. If there are, say, 1,000,000 JS CI jobs running at any time, and compile/transpile times are reduced from, say, 3 minutes to 30 seconds, would developers be more productive and thus produce more (higher energy), or complete work quicker (less energy)?


> we could reach reasonable proficiency in a few months/years.

This is silly. I hear Rust called hard to learn, but I picked it up in about 2 weeks of hacking around a bit on a side project and was quite proficient with working in web. By 1 month or so, maybe 1.5k LOC, I was good with it. The only thing is that there are still sometimes interesting features or better ways to do things that I find.


There is also the compilation time. So it depends whether the program is run about as many times as it is compiled (during development), or compiled once and run millions of times.


Are there any use cases where it's beneficial to compile frequently?


> Are there any use cases where it's beneficial to compile frequently?

Writing software?

It's very useful to check your work. If you can only verify your work slowly, then you will verify less frequently. That means you will work on a large feature but only check it at the very end. This can be very inefficient; checking for mistakes as you write the code is much more efficient.



