Energy Efficiency across Programming Languages [pdf] (uminho.pt)
192 points by Malfunction92 on Sept 30, 2020 | 158 comments



Lisp (Common Lisp) beats all the other dynamic languages by a considerable margin. This is why I am developing Clasp - a Common Lisp implementation based on LLVM that interoperates with C++/C (https://github.com/clasp-developers/clasp.git) for scientific programming.

With Clasp, we get the best of multiple worlds. We get a dynamic language (Common Lisp) with automatic memory management and enormous expressive power that can directly use powerful C and C++ libraries. All three of these languages are "long-lived" languages in that code that was written 10 and 20 years ago still works.

Performance is really important to me and I have written a lot of code over the past four decades. I won't develop meaningful code in any language that falls below Racket in table 4, because those language implementations are too inefficient. I furthermore want to keep using my code over the years and decades, so I won't develop meaningful code in any language where someone else can break my code by changing the standard. My program "leap" was written 27 years ago in C and it is still being used daily by thousands of computational chemists. But it's really hard to improve leap, because the code is brittle, largely due to malloc/free-style memory management (brrr). For a compiled, high-performance, standardized language with proven staying power, Common Lisp is the best choice.


Have you had a look at Pliant? It is basically a Lisp with a modified syntax and has a really nice C FFI implementation. I wrote an Ncurses wrapper for it a decade or two back and remember it being one of the easiest FFI/wrappers I'd ever done.

http://www.fullpliant.org/


No, I just found out about Pliant from your post - but it doesn't make sense for developing large, stable codebases because it's not a standard language. (Sorry Pliant developers - I love your can-do attitude and I'd love to buy you a beer or a coffee sometime and talk about Sisyphean task management.) But Pliant is a reference implementation of a custom language. Programming language design is really, really hard - I wouldn't dare try it, and so I chose to go with a language that has literally hundreds of person-years of design and testing underpinning it. Regarding FFIs - my approach is the same as the very clever pybind11, luabind and the older boost::python libraries. It works by using C++ template programming and letting the C++ compiler do the heavy lifting of generating wrappers at compile time. I recently updated our binding library to use C++17 std::apply with tuples. Freakin' finally! C++ has an apply construct that can be applied to a heterogeneous list of objects - wow - Lisp has only had it for 50 years! My point is that only recently has C++ developed the introspective capabilities to implement really powerful FFIs. Also - you have to use C++ exception handling for stack unwinding or you will break C++ RAII all the time.
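
Roughly the shape of that trick, as a minimal sketch (this is not Clasp's actual binding code; from_dynamic is a hypothetical stand-in for the real value converters):

  // Minimal sketch of the C++17 std::apply binding pattern.
  // The compiler generates the argument-unpacking glue from the
  // wrapped function's signature at compile time.
  #include <cstdio>
  #include <tuple>

  double from_dynamic(double v) { return v; }   // stand-in converter

  template <typename Ret, typename... Args>
  Ret call_wrapped(Ret (*fn)(Args...), Args... raw) {
      auto converted = std::make_tuple(from_dynamic(raw)...);
      return std::apply(fn, converted);   // unpack the tuple into the call
  }

  double bond_energy(double r, double k) { return 0.5 * k * r * r; }

  int main() {
      std::printf("%g\n", call_wrapped(bond_energy, 1.5, 100.0));  // 112.5
  }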


"because it's not a standard language" Every language was a "non-standard language" at it's beginning... Including C, C++ and Lisp...


What 'drmeister means is that CL has an actual standard, an ANSI standard. Ditto for C++ (an ISO standard). This means the language has a clear target for implementations to conform to, one that isn't vulnerable to "moving fast and breaking things".


Is better C++ interop the motivation for Clasp over Julia (which has lispy roots and CLOS-like open multimethods, though not lispy syntax)? Julia is conspicuously missing in the benchmark, but should do pretty well if you don't include JIT/start-up time.


If the question is "why not use Julia?" - the answer is: for several reasons. I started implementing Clasp before Julia was a thing. The Julia language keeps changing, and it doesn't have a standard like Common Lisp, C++ and C do. I need tight C++ interoperation, and Clasp does C++ interoperation like no other language I've seen. Clasp uses C++ exception handling and calling conventions, and this allows us to compile and link C++ code with Common Lisp - it's all LLVM under the hood. Clasp stack frames and C++ stack frames are interleaved, and try/catch, RAII, Common Lisp unwind-protect and dynamic variable binding all work perfectly between C++ and Common Lisp code.
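
The RAII point is easy to see in miniature (a generic C++ illustration, not Clasp code): any non-local exit that bypasses the C++ unwinder - longjmp, say - would skip the destructor below, while a thrown exception runs it.

  #include <cstdio>

  struct Guard {                      // RAII: release in the destructor
      ~Guard() { std::puts("resource released"); }
  };

  void inner() {
      Guard g;
      throw 42;   // unwinding via the C++ mechanism runs ~Guard();
  }               // a longjmp out of here would silently skip it

  int main() {
      try { inner(); } catch (int) { std::puts("caught"); }
  }

That is why a Lisp unwind interleaved with C++ frames has to ride on the same exception machinery.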


Didn't you do a talk on YouTube? I recall watching it - you went over the computational requirements of doing something with molecules, maybe? I don't work in that field, but I remember it being interesting enough to watch the full thing.


I do have a couple of talks on YouTube and thank you (the most recent one: https://www.youtube.com/watch?v=mbdXeRBbgDM). Yes - we are building molecules using our code. I have started a company and we are several months into developing a new technology for warding off future pandemics (maybe even doing something about the current one). We literally started compiling the first molecules yesterday, using an application I implemented in Cando (Clasp + computational chemistry).


Your Google talk was one of my top 10 ever, it blew me away.

It's not uncommon in the tech industry to build some mad-scientist solution - pardon the expression - because we're trying to do something pedestrian but have painted ourselves into a corner, e.g. HipHop VM.

Doing it to help with cutting edge science is genuinely exciting.

https://www.youtube.com/watch?v=0rSMt1pAlbE


Thank you!


That is really cool and I feel your pain. I've mostly been disappointed in numerical computing stacks. You've got C/C++/Fortran, which are blazing fast but very cumbersome to use. They are nice in that you can distribute an executable. Python and Julia are good at finding a sweet spot where your scripting language is fast and user-friendly, but distribution is a pain, I guess, unless you have a SaaS product. Matlab, Mathematica, GNU Octave, Scilab, etc. have their own problems, like cost or performance, and they also make distribution painful.

I looked at SBCL, but didn't see good ways to use existing numerical libraries. I'll have to look at Clasp again.


It would be good to make the slides available.


Depending upon how you define "dynamic" Java wins by a small margin.

If we treat "dynamic" as a spectrum though, Lisp still gets you a lot more dynamism for a relatively small increase in power consumption (and a somewhat larger gap in performance).


Hey - props to the developers of Java for bringing automatic memory management into the mainstream. But it's not quite what I'm looking for when I want to do exploratory programming.


Indeed, the lack of a real "REPL" experience is quite limiting with the Java language (you can kinda sorta simulate it with bytecode manipulation, but please don't). However, you can get the full REPL experience on the JVM, along with a GC, if you run Clojure! (Which is itself a bit of an oddball language, since it's a dynamic-code, immutable-data Lisp written for a VM that likes static code and mutable data...)


Question: did you ever try the jshell that ships with Java 9+? Curious what a REPL power user thinks about it.


Haven't used it, but I can't imagine it would be great. Java's issue is that you can't do anything outside of a class. So there really is no outside context in which a REPL makes conceptual sense.


Well, it is not that rigid. You can write code outside of a class/method:

  jshell> var x=8
  x ==> 8

  jshell> x*3
  $2 ==> 24
etc.

But I haven't used it either as I never worked with a REPL other than SQL clients.


The REPL in CL allows you to redefine running code, even code compiled at high optimization levels. This is a feature of the language rather than of the REPL, but it's in the REPL that such features shine. In the best case in Java you can redefine methods when running under a debugger, using a framework such as JRebel, or leveraging an application server/framework (Play, for example). When changes are too radical you have to recompile and redeploy. In CL this is only rarely needed. In short, a REPL in Java will never be able to reach the level of interactivity provided by a CL REPL, simply because interactive development is not built into the language as it is in the case of CL.


Agreed - on both fronts.


Given that most of the computationally expensive work in Python is usually offloaded to C extensions (and JIT compilation is an option with Python), is the cost savings in electricity that significant? A desktop PC probably only costs about 10 cents an hour to run at most.

Perhaps your interest in CL is because you can recall using it in its prime. Nostalgia, or just novelty, is certainly a valid motivation.


We are doing computational chemistry, simulating molecular structure and designing molecules, and we want to use thousands of cores and get as much performance as possible. If my AWS bills are any measure - then yes - the cost savings in electricity and computing resources are very significant.

Also, developing and maintaining Python/C++ bindings for complex libraries is very painful and frustrating. I wrote Python bindings for years using boost::python and, earlier, SWIG, and keeping the bindings working while dealing with the different memory management approaches of Python and C++... bleh - it's a nightmare. At the same time, Python changed from version 2 to 3.x, and libraries I depended on and my own Python code were being broken and becoming outdated in ways that I had no control over. It was like trying to build a house out of sand.

I've only been using Common Lisp for the past 6 years - after three decades of writing in other languages including Basic, Pascal, Smalltalk, C, Fortran, Python, PHP, Forth, Prolog... Common Lisp feels great, it feels powerful and every function I write I know will compile and run in 20 years. Common Lisp has real macros (programs that write programs! implemented in one language), dynamic variables, generic functions, the Common Lisp Object System, conditions and restarts... There are many features that haven't made it into other languages. Common Lisp makes programming interesting again.


Would you indulge me with an order-of-magnitude estimate of your hosting costs for this use case?


I remember when the OP came into the CL community; it was not long after I did, and we both came long after its heyday.

I dare say that not many in the community are from some bygone time.

The fact is CL has far and away a more sophisticated runtime environment than the vast majority of dynamic languages, with Smalltalk being a standout exception.

Some of the language aesthetics have not aged well - not talking about parentheses, but rather the hideously long symbol names.


Agreed! There are a couple of really clever xkcd comics about this (google: xkcd lisp). And I also would rather use PATH than default-pathname-defaults, and logical pathnames are, uh, crufty. But I see these as minor blemishes on what I think is as close to a perfect programming language as I have seen. Clasp is an implementation of Common Lisp, and any difference between what it does and what the standard says is a bug in Clasp that we need to fix. But Cando, Clasp + computational chemistry code, is a superset of Common Lisp and we are adding things to make life a bit more convenient. We even added optional infix arithmetic as a standard part of Cando (I know! I'm going to burn in hell between the ninth nested set of parentheses as a heretic).


The last sentence put a big smile on my face.


> Given that most of the computationally expensive work in Python is usually offloaded to C extensions

That only works if your application's hotspots are in a few, decoupled parts of your code – good examples are FFTs, data compression, or encryption. It doesn't work if you can't cleanly separate your hotspots from the rest of your logic. E.g., if you write a parser and analyzer for a programming language, what part do you want to offload to C? Even if you could identify a small part that takes up the majority of execution time, it would have a complex interface, and it would take a lot of work implementing and testing that interface.


I'm often surprised at how SBCL is benchmark-competitive with other native languages (C, Ada, Pascal).


SBCL is an amazing implementation and it has an amazing compiler. I would argue that it is one of the best compilers around. Javascript compilers in browsers are pretty impressive - but given the far fewer resources that SBCL development has had, SBCL is remarkable. It's a fast compiler that generates fast code. I would be using it (and do for some applications) if I didn't also need my own large C++ computational chemistry code written over decades.


Yes, SBCL is "only" a tiny implementation. Makes me wonder how far ahead Franz or others might be... you know how Chez Scheme was leagues above the other implementations while nobody knew, until it was open-sourced.


The smallness of SBCL helps speed its continued development. Compiling it from scratch on my laptop and running the smoke tests takes about 1 minute.


Would it have been feasible to improve the FFI of existing Common Lisp implementations? The last talk of yours that I watched showed Clasp to be slower, relative to C, than other Common Lisp implementations.


There was a time when I asked myself that every day (sigh). But I don't think so. The details involved in interoperating with C++ are so intricate and involved. I don't think I could have gotten to the same place. Meanwhile - we are improving our compiler performance and the performance of generated code. Maybe in the far future I'll start a couple of ancestor simulations and do the experiment...


> The details involved in interoperating with C++ are so intricate and involved.

Yes in general, but you can pick a subset useful enough in practice.

Take a look: https://github.com/Const-me/ComLightInterop


Why would I use Clasp and not haskell?


Why would I eat and not drink?


Something I've been thinking a lot about lately is environmental friendliness in software, given that data centers contribute 3% of global greenhouse emissions (the same amount as the entire airline industry).

I'm thinking along the lines of using interpreted languages less on the server side because of efficiency, but also relying on JS less on the client side and using WASM where it makes sense.

This has stemmed from me learning Go last year and being struck by how much faster it actually is than Node for my use cases (API development and data processing).

Where I am curious to see the total impact is how we can take advantage of economies of scale to save money and increase efficiency. I'm thinking along the lines of scale to zero, event driven architectures.

Google Cloud, for example, claims they operate their data centers with industry leading efficiency while also being powered by renewable energy. At scale, do services like Cloud Run or Cloud Functions actually make a difference?


Your introductory point about the share of global emissions is something I think we as an industry don't really appreciate yet. So, to give some more precise numbers about that…

> data centers contribute 3% of global greenhouse emissions

Not true according to my research [0] — more like _tech in general_ contributes ~4% of global GHG emissions. Within that, datacenters represent only ~20%. (The rest is 15% for networks, 20% for consumer devices' power consumption, and the remaining 45% for manufacturing of equipment.)

And also:

> (the same amount as the entire airline industry).

Air traffic accounts for ~2% of global GHG emissions [1], so tech is actually twice as bad as air traffic there. Other ways to put it: as much as all the trucks on the planet, or 4x the emissions of a country like France.

[0]: https://theshiftproject.org/wp-content/uploads/2018/11/Rappo... (FR) (p18, p20)

[1]: https://www.atag.org/facts-figures.html


When I look at other industries I see efforts to reduce energy consumption. A lot of programmers, in contrast, couldn't care less. The most popular languages are the least energy efficient and most resource intensive. But they make life easier for developers, or as programmers love to say 'more productive'.

When performance is an issue in running programs, a common response is: hardware is cheap, just add another energy-guzzling server or use a more powerful computer.

This attitude is embarrassing when you consider that in every other industry there is a push for reduced resource usage and lower energy consumption. The programming field is the exception.

On the other hand, when it's programmers who are on the receiving end of slow, resource intensive apps, they'll complain loudly. This industry is rife with hypocrisy.


IT-centric orgs definitely care. In my second (post-college, full-time) job I got to see the effect of this directly, as truckloads of old server equipment were replaced with about 1/10th the hardware. In their case, predominantly by making good use of virtualization so that applications could share hardware. They cut their power consumption by more than 90% by upgrading to newer machines as well (which were more efficient, while still being faster than the old ones).

This broad-brush view that the IT industry doesn't care is absurd; they have to pay bills too, and efficiency reduces those. They may not be motivated by environmental concerns, but the costs are very much obvious to them, and they do address them with improved efficiencies where possible.


> A lot of programmers, in contrast, couldn't care less.

You really think the problem is the programmers?

Here's a homework assignment for you: approach the product managers in any given software company and pitch a new development practice, which will improve UX through superior performance, reduce AWS bills, and save the planet, all for the small cost of doubling all product release timelines. Then come back here and report their response.

Just try not to take it personally when they laugh you out of the room.


There is absolutely an objective to reduce the use of compute resources, especially in the current era of AWS/Google Cloud, where you pay $50 per core per month.

Companies care very much about resource usage past some number of machines. They don't think in terms of power consumption or environmental footprint, though, but in real dollars (the cost of hardware/cloud), which are highly correlated with it.

Kubernetes is the latest trend in spite of being an overcomplicated mess - precisely because it's an overcomplicated mess that can deploy and pack services more efficiently onto fewer resources.


> The most popular languages are the least energy efficient and most resource intensive.

This is mostly incorrect. Of the top 10 programming languages on GitHub [1], only Python, Ruby and PHP are commonly used with an interpreter. The rest are all AOT- or JIT-compiled. I also suspect a large fraction of the Python projects are data science / ML projects that heavily use packages like NumPy and TensorFlow, which offload most of the work to highly optimized math libraries.

I also suspect if you were to look into the programming languages used by the companies with the most servers, they would skew more towards languages like Java and C++, or custom things like Facebook's Hack / HHVM.

[1] https://madnight.github.io/githut/#/pull_requests/2020/2


I’d be interested to know at what point the energy required for me to write in C (with a significantly longer development cycle, and thus more commutes, more time powering the office, more time running test harnesses, etc) would be outweighed by the gain in its energy efficiency over <insert other language>.


A lot of "I'm not productive in $LANGUAGE" is really "I don't know $LANGUAGE very well", not "$LANGUAGE is inherently worse". However, if you subscribe to the idea of blub programming and such, take a look at C++ or Rust.


"It takes me longer in $LANGUAGE" != "I'm not productive in $LANGUAGE"

Some things are just faster to develop in some languages because they have different baseline capabilities. Try parsing and processing a lot of text input in C versus the same task in Perl. Assuming similar competency in both languages (hell, you don't even have to be fully competent in Perl for this, just not a total novice), the Perl solution will come out faster - unless you've already spent a lot of time doing specifically text parsing and processing in C (which is not its primary use case for many, if not most, day-to-day C programmers).


> with a significantly longer development cycle,

Is that the case, though? I used to work at a company where we developed some apps in C++/Qt/QML and others in Electron, and for similar apps the development effort was pretty much the same.


There seems to be a trend toward using MORE computing power, if it saves on programmers. Continuous integration, fuzzing, other high volume automated testing techniques, all the various applications of machine learning.

If a programmer costs $200K/year, that salary could support about 300 kW of continuous power use at average US industrial electricity rates (about $0.07/kWh). So if you could spend (say) 20 kW to increase programmer productivity by (say) 10%, you'd be coming out ahead.
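
Checking the arithmetic (at 8,760 hours per year):

  $200,000/yr ÷ $0.07/kWh ≈ 2,860,000 kWh/yr
  2,860,000 kWh ÷ 8,760 h ≈ 326 kW continuous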


Application code appears to be a small fraction compared to storage, databases, and various operational overhead: https://m.signalvnoise.com/only-15-of-the-basecamp-operation...


Kubernetes is popular in data centres, but just take a look at how much CPU a fresh install of microk8s uses with nothing running on it (I often see ~4%). The various monitoring systems and addons can only make it worse. It puzzles me why optimising such things isn't more of a priority, given that in the cloud resources cost money - not to mention the environmental unfriendliness...


People still religiously believe that development time is always more costly than computer time, in every situation, often misquoting Knuth.

Often the layers add a lot of overhead, and these days not many developers have an understanding of the underlying layers.


The k3s folks have wondered aloud how high a couple of their components would scale.


I did some research on this topic in university, and our consistent result for CPU-based programs was: if it finishes faster, it uses less energy, and vice versa.

So it's no surprise to see that VM-based programs use more energy; they're slower.


People like to argue this point because there are rare exceptions where it isn't true, but yes, generally the faster the thing finishes, the less energy you use. It is a fine rule of thumb in the absence of direct power-draw measurements.


In SDK docs for the GameBoy Advance there was a note along the lines of

"Even if your game is so simple that it does not require the full speed of the (16.78 MHz) machine, please put effort into optimizing it anyway so that it can spend more time sleeping in the low-power PAUSE instruction. Your players will notice the difference in battery drain and they will tell their friends."


Memory usage matters too! Languages lacking shared-memory multiprocessing will use more energy even if they're equally fast.


That's a good point. DRAM only sucks power when you flip the bits, right?


Well, it has to be constantly refreshed (I think?) if there's something important in it. But I was thinking more that you need more memory to support less-efficient memory usage in the first place - or, if the amount of memory is fixed, less of it would be available for things like the disk cache.


So, the Steve Jobs rumor about only allowing compiled programs on the original iPhone was right? There is about a 4x energy increase going to a VM'ed language, and about a 19x increase going to a fully interpreted one, over using a natively compiled language.

So, the energy efficiency is actually worse than the perf loss in general.


It doesn't just save battery; it also leads to a more responsive interface, which people noticed.


> There is about a 4x energy increase going to a VM'ed language

That's not what this data shows. Java is at 1.98x, ahead of Swift (2.79x), Pascal (2.14x) and Fortran (2.52x).


Look at the summary section, where they provide overall performance per class. As you point out, some languages are better than others within a class. But you're comparing the best in one class with some of the worst in others.

That is disingenuous. I might buy comparing the best in each class, but the results are much the same as the overall comparison.


I am comparing with the best in class of non-VM languages (C). That's what all the multiples mean.

What Java shows (and a comparison of averages per class wouldn't show) is that there is not necessarily a 4x decrease in efficiency as a result of using a VM. It depends on the implementation. And there are quite a few other VM languages that are doing far better than 4x.

Swift clearly demonstrates that native AOT compilation is no guarantee of efficiency. Swift may well have become faster since this study was run (same goes for other languages), but using reference counting for garbage collection will make it very hard to catch up to the best.


I was really interested in how they measured power because there is a ton of nuance there.

They used the metric reported by a tool that limits average power via a programmable power limiter in hardware, which is an interesting way to do it. Totally valid, but I really wish they provided more detail here. For example, did all workloads run at the limit all the time? Presumably they did. Limit-based throttling is a form of hysteretic control, so the penalty part will be critical. How often and when the limit is hit will be critical too.
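
(For context: the paper's measurements come from Intel's RAPL interface, which exposes both power limiting and a cumulative energy counter. A minimal sketch of reading that counter via the Linux powercap sysfs, assuming an intel-rapl:0 package domain exists on the machine and you have permission to read it:)

  // Read the cumulative package-energy counter (microjoules).
  // The path varies by machine and the counter wraps periodically.
  #include <cstdint>
  #include <fstream>
  #include <iostream>

  uint64_t read_energy_uj() {
      std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
      uint64_t uj = 0;
      f >> uj;
      return uj;
  }

  int main() {
      uint64_t before = read_energy_uj();
      volatile double x = 0;                        // CPU-bound busy work
      for (long i = 0; i < 200000000L; ++i) x += i * 0.5;
      uint64_t after = read_energy_uj();
      std::cout << (after - before) / 1e6 << " J\n";
  }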


> In order to properly compare the languages, we needed to collect the energy consumed by a single execution of a specific solution.

With this, Java ranking in the top 5 is quite impressive, considering that JIT optimisations wouldn't really have kicked in. My hypothesis is that if the Java program were allowed to run a few more times and then compared, it would rank higher.

And along the same lines, couldn't the other VM-based languages (Common Lisp, Racket) be JIT-optimised too?


Funny (old) energy efficiency story that used to be published online, but I can't find it.

It's about the first handheld scanner for a large shipping company. The hardware was engineered and nailed down, and a team was contracted to write the software. They got about halfway through and said the box didn't have enough ROM to handle all the features in the spec. The company contracted Forth Inc. to try to salvage the project, and that was possible because they used their proprietary Forth VM and factored the heck out of the code so they could reuse as much as possible, and got it all to fit. (Common Forth trick.)

Ten years later, a new device was commissioned, and management was smarter! They made sure there was a lot more memory on board, and a new contracted team finished the job. In the field, however, the batteries would not last an entire shift...

Forth Inc. was called again. They made use of their cooperative tasking system to put the machine to sleep at every reasonable opportunity and wake it on user input.

Maybe it ain't the language that matters as much as the skill and imagination of the designers and coders. Just sayin'


Marginal differences should be ignored in this kind of benchmark.

It is usually well accepted that faster execution leads to lower power usage, as long as the CPU is operating in a reasonable thermal envelope.

Nothing new here, except that we can have a better grasp of the different orders of magnitude.


I wonder what the carbon impact the use of those inefficient dynamic languages has had, in both desktop and backend environments?

I imagine it's substantial and worth considering.


Think of how much electricity is spent on cryptocurrency grifting, training state of the art AI so that it can help trick the elderly into clicking on ads, distributing porn, and streaming videos of kids eating tide pods. I think that eclipses the cost of developers using Rails or Django for their server backend.

I don’t think we have a problem with the “how” as much as the “what” or “why”.


Generally agreed but I don't think "distributing porn" compares to the other examples given. There's definitely a subset of porn that treats involved parties fairly and doesn't pose ethical problems. Demonizing porn as a medium only pushes it into a moral grey zone which benefits harmful entities because there's less public awareness of their actions.


> Demonizing porn as a medium only pushes it into a moral grey zone which benefits harmful entities because there's less public awareness of their actions.

ObNitpick: I think porn is a genre rather than a medium.


You're right, I wasn't sure what to call it but "genre" is probably more fitting.


When PHP 7 was released, Rasmus Lerdorf, the creator of PHP, said the performance improvements meant fewer servers, smaller memory use and reduced CPU activity - all of which equalled less power or electricity consumed. (And remember this is an interpreted language.)

When you consider the millions of servers in use, that additional language efficiency adds up to a substantial saving in electricity use. You can watch a segment from his presentation where he talks about this here - and the calculations he made of potential CO2 savings:

https://youtu.be/umxGUWYmiSw?t=15m16s


I bet it is completely insignificant in the great scheme. The average American, for example, uses the equivalent of 4 kW of motor fuels continuously, not even counting electricity consumption. It doesn't amount to a hill of beans if the Dropbox sync client uses an extra joule here and there because it's written in Python instead of C++.


Actually, that's a great example.

The sync client has 10^7 installations or more out there, but it's backed by an engineering team of 10^2 or fewer - maybe much fewer.

That scope of impact is fundamental to the economics of software, and it's why software engineers have so much potential to do good (or ill) for the environment.


> The average American, for example, uses the equivalent of 4 kW of motor fuels continuously

How is that possible? I have a normal house with a few appliances turned on, plus three fairly powerful computers and some musical gear, and I'm at 530 VA right now according to my home's electricity meter.


Have you ever purchased anything from a store? How did that good get to the store?

Have you ever purchased anything online for home delivery? How did that good get to the delivery center, then your home?

Just because you didn’t pump a gallon of gas into your personally owned tank doesn’t mean it wasn’t burned on your behalf.


But surely that is already accounted for in another budget line, right? If you are just dividing the total kWh of a country by households it does not make any sense, as it would also count military stuff, etc.


But how do you move yourself from place to place?

Note that it's not necessary for you to personally burn the three gallons of fuel per day. It's an average.


I drive on average 8k kilometers per year. I doubt that this amounts to as much as all the rest, does it?

A quick computation, given that my car drinks roughly 7 liters of diesel per 100 km if I'm not careful:

- 560 liters of diesel / year

- a liter of diesel is apparently ~10.74 kWh -> 6014 kWh total

- 6014 / 365 -> 16 kWh a day? I don't see how this is getting me any closer to 4,000 kWh per day.


I don't know if the 4 kW statistic is right, but 16 kWh per day is 0.667 kW continuous (16 kWh / 24 hours). They specifically said it's not including electricity use, but I could see that continuous number climbing closer to the 4 kW figure if it did.

People who regularly fly would also likely make this number rise quickly.


By your metric units I see you might not be an American. However, you can readily divide the total motor fuel consumption in the US by the population of the US, and use your figure of energy per liter (which is a bit higher than petrol's, but it doesn't matter).

The point is that the global emissions story boils down to transport fuels, meat, leaky houses, and a long tail of irrelevant things, such as your choice of programming language at small scale.

At large scale the economic incentives alone are enough to encourage huge energy consumers to use a decent language (for example, Google and C++). But the whole information industry taken together is irrelevant to the global emissions story as long as we have an airline industry, cars, and hamburgers.


The US Department of Transportation says the average American drives 13.5k miles per year.

Using an average fuel economy of 21.5 mpg (if the average age of a car is 12 years), this comes to ~628 gallons of gas per year.

The EPA uses 33.7 kWh per gallon, for ~21,160 kWh in a year. Divide by 365 * 24 and you get ~2.4 kW continuous, so the figure seems plausible.

EU numbers based on [1] come out to ~1 kW for driving?

[1] https://www.odyssee-mure.eu/publications/efficiency-by-secto...


> 21.5 mpg

That is more than 11 L/100 km - I don't know anyone who has a car that consumes that much.


Fair - I don't own a car myself, but the US DOT reports 24.4 mpg in 2018 for cars, SUVs, vans, and light trucks shorter than 121 inches. (This is an estimate of vehicles on the road, not the 2018 model year.)

If we use a 2020 model year for passenger cars of ~40 mpg and a more conservative 11,500 miles per year, it comes out to 1.1 kW.


> light trucks shorter than 121 inches.

This definition excludes the best-selling vehicle.


The Ford F-150 is the highest-selling vehicle in North America.

Also worth noting - the US gallon is smaller than the UK one. And anecdotally, the mileage rating in the US actually tends to represent real-world consumption, whereas the EU test - until very recently - did not.


4 kW continuous is 48 kWh per day. 4 kW * 24 h.


My bad, that's much closer.


Umm... I messed up the math. It's 96 kWh. There's enough embarrassment to go around it seems!


It likely only matters if you're computationally bound. Many (most?) of our uses of computers are IO bound, instead. So some degree of inefficiency in the use of CPUs is probably not as bad as things like the unnecessary round-tripping of data (or constant retrieval) over long distances.


These are computationally heavy workloads. Do most HN programmers really work in those domains? Is most computational work done today even in those domains (possibly, given the amount of video streaming, but most of us are not writing video streamers)? Maybe a more interesting workload to test would be parsing medium-to-large random JSONs issued by concurrently connecting clients - and comparing the same setup under a low-workload and a high-workload scenario, possibly also comparing orchestration engines (e.g. Kubernetes autoscaling).

I'd also be curious to probe "worst case" scenarios. Can you cause Kubernetes to thrash, spinning up and killing containers really badly, and how much of an effect does that have on energy consumption?


This appears to have been published in 2017?


Huh - I somehow submitted the same message twice. Hacker News doesn't let me delete it. So I'll edit it down - see the version above about Common Lisp and our implementation of it called Clasp.


You're developing Clasp for use in your matter compiler project, correct?


I did - and we compiled our first batch of molecules using an application implemented in Cando (Clasp + computational chemistry code) yesterday. I'm absolutely serious about this.


Amazing. Have the goals of the project changed over time, or is it still to generate molecules that fit a certain shape/function based on a number of known building blocks?


The goals have not changed. The pace of development (in the chemistry) has accelerated by orders of magnitude in the last year. The software is advancing as well. We are still looking for good developers who want to work with us.


Do you have a public website or anywhere I can read more about your project? And if I'm interested, how can I contact you?


We do have a public website - you can contact me there. We are still running under the radar, so there isn't much detail, but there is a lot in the scientific literature: http://www.thirdlaw.tech and https://www.schafmeistergroup.com/. I also have some talks up on YouTube.


Are you looking for students to work on development over a summer or semester-long break?


Contact me using my temple dot edu address - it's on my website. Hacker News doesn't appear to have a messaging feature.


How far along are you with Clasp?


It's working fine. We have been developing applications within it for the past year. We have been kind of low-key about it because we are using it as part of a much larger project.


It's worth noting that this was published several years ago, and Rust has come a long way since then - it might very well top most of these benchmarks nowadays.


Isn't Rust dependent on LLVM for its optimizations? Also, do you imagine that C++ and Fortran compiler developers have stopped optimizing?


There is still a lot of work that happens before the ball is handed to LLVM; if that part gets optimized, results can improve.

Also, I would expect a more recent language to have a lot more low-hanging fruit than much older and heavily used languages. The more you optimize, the harder it gets to optimize further.


Anything that seems to demonstrate C++ as slower than C is implicitly busted. You could compile the C code with the C++ compiler and get the same speed.

I'm looking at their table 4, with C:1.0, C++:1.56.

This throws the whole paper into doubt. Comparing crappy code in one language with good code in another reveals little of substance.


You have to include the compile time for C++ and the servers for cppreference.com.


This is funny, but you can install documentation for libc++ on Linux systems and access it through man pages.

As for compile times, that stuff is hard. There are caching compilers, and build systems like Bazel. A good build configuration can improve compile times.


C is not a true subset of C++, though several compilers can handle both. (For example, C allows implicit conversion from void*, and C99 restrict has no standard C++ equivalent.)


The difference between C and the C subset of C++ is negligible, for any practical purpose.


Idiomatic C++ has a lot of performance pitfalls.


It would be interesting to see similar research done for distributed systems as well. There, one would have to choose a programming language plus a library or framework for distributed systems development, if the language or its runtime environment does not offer support for it out of the box.


I wonder how these rankings would have changed for C++ and Rust if they included compile times :)


I wouldn't consider compile time to be a useful metric here, as you will generally compile once (for the final release candidate(s)) and execute many times.


True. But maybe during development it could matter.


This is cool.

Now, can we get a comparison of these results vs. LOC?

I feel like almost any assessment of programming languages should have a table weighting the results based on how many lines of code it took to get that result.


That Erlang is relatively power-inefficient doesn't surprise me. I wonder how much of that is due to the "busy wait" it uses to reduce latency in message processing.


The workloads here are numerical, so Erlang has to do a bunch of boxing and unboxing. The "programming languages shootout" is generally not a useful basis for judging anything on the Erlang VM.


Not really a fair comparison, since Erlang is a fully fledged operating system (joke).

In general, functional languages do worse due to the abstraction; VM languages do worse still; and dynamically typed languages are less efficient than statically typed ones. Erlang is all of the above.

F# fits the first two (a functional language on a VM) and has a pretty bad energy rating despite being a fast language.


> Not really a fair comparison, since Erlang is a fully fledged operating system (joke).

Oh, Robert Virding told me once the plan was to make it essentially an OS. So, it's only a joke in the "Ha ha but seriously" sense.

Lisp also fits all those criteria and is quite efficient, but it developed under different design constraints.


Why do you consider TypeScript and JavaScript different languages?


Not OP, and I don't use JS, but generally speaking, with a lot of these languages that transpile into another language (or compile, for those who insist transpile is the wrong word), you end up with a less efficient result than if you had just used the target language directly - or at least a different program. So if I use TypeScript and it gets converted to JavaScript, the end result will be different than if I had written the JavaScript to begin with (not necessarily slower, either).

Also... I guess there are situations where using something higher-level (e.g. C) could lead to faster code than me writing assembly by myself, if the compiler is smarter than I am (GCC knows a lot more about the hardware than I do).


> So if I use TypeScript and it gets converted to JavaScript, the end result will be different than if I had written the JavaScript to begin with

This isn't true for TypeScript. TypeScript is a superset of JavaScript, and running JavaScript through the TypeScript compiler will produce identical output code [1]. If you add type annotations to that JavaScript to make it fully pass the strictest type-checking settings, that will still be true, provided you didn't otherwise rearrange or modify your program.

This makes me doubt their methodology on TypeScript at least, or wonder if they're running a tool like `ts-node` which compiles and runs at the same time, thus counting compile time in their execution time and energy.

1. As long as you're targeting the same language version as the original code was written for. For instance the compiler will downlevel async/await into slower async generators if you're targeting a version of JavaScript which doesn't support it yet.


I once read that coding in TypeScript leads to more monomorphic code, which can be better optimized by the JS VM.


Can you put a program written in TypeScript between <script> tags and have it run natively in the browser?



I would love to see the one where they do CRUD'y things or do data transformations which seem to occupy a large part of compute power.


I'd predict that it would look very similar to the TechEmpower benchmarks.


That's pretty neat. Somewhat reinforces my experience in how things will perform.


For which real-life use cases would Go be better than Java? Go uses less memory but is slower than Java. Which use cases use a lot of memory?


Cloud.


Interesting to see Rust beat out C++ very slightly.


If you look at some of the individual benchmarks, they show that C, C++, Rust, Ada, and Fortran are all over each other.

I expect the difference in this case is due to differences between LLVM and GCC, differences in the standard library implementations, or the fact that Rust's references carry no-alias guarantees by default.


Yup. In theory Rust and Fortran perform better than the others, mostly due to their stronger aliasing guarantees. However, many benchmarks won't benefit from this difference; in practice it depends mostly on the standard library implementation and the "idiomatic" solution in each language. Of course compilers matter as well, and even if they all use LLVM, the maturity of the frontends does matter.
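
To make the aliasing point concrete, here is a generic C++ sketch (__restrict is a widespread compiler extension in GCC/Clang/MSVC, not standard C++):

  // Without a no-alias guarantee the compiler must assume dst and src
  // may overlap, so it reloads src[i] on every iteration and vectorizes
  // less aggressively. Rust references and Fortran dummy arguments give
  // the optimizer this guarantee by default; in C/C++ you must opt in.
  void axpy(double* dst, const double* src, double a, int n) {
      for (int i = 0; i < n; ++i) dst[i] += a * src[i];
  }

  void axpy_noalias(double* __restrict dst, const double* __restrict src,
                    double a, int n) {
      for (int i = 0; i < n; ++i) dst[i] += a * src[i];
  }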


It's probable that they are mostly measuring implementation efficiency, or GCC vs. LLVM, for these languages.


Given that it's from 2017, I'm guessing today's results might be quite different.


Given that C++ is (mostly) a superset of C, it's not simply interesting, it's odd. Did they not simply feed the C program to the C++ compiler? Are they claiming that the C++ compiler compiles C programs to slower, lower-performance binaries?


That's a good point. But then you're not benchmarking C++ as a distinct language. So what would sufficiently distinguish a C++ program from a C program? Let's assume it's not just minor incompatibilities introduced to prevent compilation by a C compiler.

They must have used some definition that is not explicit in the paper, but you can see in this code sample that the author used various C++ standard library types (std::string, std::array), iterators, classes, and concurrency (std::thread). I'm no judge of C++ style, but perhaps it's "C++ as a C++ developer circa 1997 would have written it".

https://github.com/greensoftwarelab/Energy-Languages/blob/ma...


Such a shame they didn't include LuaJIT.


As if there weren't enough reasons to learn Rust already, there's a new one: We owe it to the planet.


No Forth :(


Lua memory consumption is surprisingly very high.


I'm surprised that Rust is ahead of C++ in their ranking, and by quite a bit. I tend to use C++ as "C with the STL and smart pointers", basically (à la Google); I don't see why it'd be any slower or less energy efficient than C.


Agreed. Bad code is slower, news at 11.


Oh heavens, busybodies are gonna ban Python and Ruby under the guise of fighting climate change.


If you could save up to tens of watts by merely tripling your development costs, why wouldn’t you?


I've been fighting to ban use of interpreted code in production because I hate slow stuff, and like money. So you can use those motivations instead if you like. If you love Python and Ruby, make compilation a first class method of running them.


When I looked at the results, it seems Swift and C# are pretty good general-purpose languages. Interesting to see how badly TypeScript compares to JavaScript.


Sadly those two are fairly tied to a specific platform (note: yes, I know they are technically cross-platform).


C# is extremely cross-platform now.


I think the current big deal is GUI support. It seems like they are carefully attacking the problem. If they succeed, React is going to be in for a world of hurt.


The performance ceiling of C# has gone up a lot since that data was collected, too.


How certain are you that the savings will be relevant? Are you sure your bottleneck is computation?


It often is, but nobody seems to acknowledge it.


You can't just assert that without measuring.





