
A.K.A. a hardware workaround for a software limitation. Mac font rendering just sucks.

Horrible IMHO, the old one was much better.

That's where the design is wrong.

If it’s present at all.

Not a problem in practice, as you use val in 99.99% of cases (which shows why immutability should be the default, because that's what's most often needed), and IDEA underlines any mutable references, so they stick out. It also suggests val when a var is never actually mutated.
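A minimal Kotlin sketch of the pattern described (the function and variable names here are made up for illustration):

    fun greet(user: String): String {
        val greeting = "Hello"   // read-only reference: the overwhelmingly common case
        var attempts = 0         // mutable reference: IDEA underlines these so they stick out
        attempts += 1            // actually reassigned, so the `var` is warranted
        return "$greeting, $user (attempt $attempts)"
    }

If `attempts` were never reassigned, the IDE would flag it and offer to convert the var to a val.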

Are ARM processors inherently power efficient? I doubt it.

Performance per watt is increasing due to improvements in lithography.

Also, Jevons paradox.


They aren't inherently power efficient because of technical reasons, but because of design culture reasons.

Traditionally x86 has been built powerful and power hungry and then designers scaled the chips down whereas it's the opposite for ARM.

For whatever reason, this also makes it possible to get much bigger YoY performance gains in ARM. The Apple M4 is a mature design[0], and yet a year later the M5 is +15% CPU, +30% GPU, and +28% memory bandwidth.

The Snapdragon X Elite series is showing a similar trajectory.

So Jim Keller ended up being wrong that ISA doesn't matter. It's just that it's the people in the ISA that matter, not the silicon.

[0] Its design traces all the way back to the A12 from 2018, and in some fundamental ways even to the A10 from 2016.


You’re conveniently skipping the part where x86 can run software from 40 years ago but ARM can drop entire instruction sets no problem (e.g. Jazelle).

Had ARM been so weighed down by backwards compatibility, I doubt it would be as good as it is.

I really think Intel/AMD should draw a line somewhere around the late 2000s and drop compatibility with the stuff that slows down their processors.


> jazelle

That’s a blast from the past; native Java bytecode! Did anyone actually use that? Some J2ME phones maybe? Is there a more relevant example?


> Did anyone actually use that?

AFAIK you needed to pay a license fee to write programs using Jazelle instructions (so you needed to weigh whether the speedup of Jazelle was cheaper than just buying a more powerful CPU), and the instruction set itself was also secret, requiring an NDA to get any documentation (so no open source software could use it, and no open toolchains supported it).

I remember being very disappointed when I found out about that


I generally agree, although one should not forget that x86 (and AMD64) leaves about 3-10% performance on the table due to having a memory model that doesn't allow reordering loads with other loads or stores with other stores.
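A classic message-passing litmus test shows the orderings in question; sketched here in Kotlin purely to illustrate the hardware-level rules (on the JVM you'd need @Volatile or VarHandles for any real guarantee, whatever the CPU):

    var data = 0
    var ready = false

    fun writer() {
        data = 42      // store 1
        ready = true   // store 2
    }

    fun reader(): Int? =
        if (ready) data else null   // load 1 (ready), then load 2 (data)

    // x86/AMD64 (TSO): stores become visible in program order and loads are
    // not reordered with other loads, so observing ready == true implies
    // observing data == 42; the core pays for enforcing that ordering.
    // ARM's weaker model may reorder either pair, so without explicit
    // barriers the reader can see ready == true and still read data == 0.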

As far as I know people aren't part of ISA :)

People are absolutely part of an ISA's ecosystem. The ISA is the interface between code and CPU, but the code is generally emitted by compilers and executed in the context of runtimes and operating systems, all designed by people and ultimately dependent on their knowledge of and engagement with the ISA. And for hot code in high-performance applications, people will still be writing assembler directly to the ISA.

ISA != ISA's ecosystem

ISA is just ISA


But you get the ecosystem for free if you choose the ISA, so ISA => ISA ecosystem. It really does matter when making a decision.

Do you have any actual evidence for that? Intel does care about power efficiency - they've been making mobile CPUs for decades. And I don't think they are lacking intelligent chip designers.

I would need some strong evidence to make me think it isn't the ISA that makes the difference.


https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

Basically, x86 uses op caches and micro-ops, which reduces instruction decoder use; the decoder itself doesn't use significant power, and ARM also uses op caches and micro-ops to improve performance. So there is little effective difference. Micro-ops and branch prediction are where the big wins are, and both ISAs use them extensively.

If the hardware is equal and the designers are equally skilled, yet one ISA consistently pulls ahead, that leads to the likely conclusion that the way the chips get designed must be different for teams using the winning ISA.

For what it's worth, the same is happening in GPU land. Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W (!).

That same M1 also smoked an Intel i9.


ARM doesn't use micro-ops in the same way as x86 does at all. And that's not the only difference, e.g. x86 has TSO.

I'm not saying the skill of the design team makes zero difference, but it's ludicrous to say that the ISA makes no difference at all.

The claims about the M1 Ultra appear to be marketing nonsense:

https://www.reddit.com/r/MachineLearning/comments/tbj4lf/d_a...


> Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W

That's not true.


Isn't Lunar Lake Intel's first mobile chip with a focus on energy efficiency? And it is reasonably efficient.

We will see how big an improvement its successor, Panther Lake, is in January on the 18A node.

> I would need some strong evidence to make me think it isn't the ISA that makes the difference.

It is like saying that Java syntax is faster than C# syntax.

Everything is about the implementation: compiler, JIT, runtime, stdlib, etc.

If you spent decades of effort on performance and GHz, then don't be shocked that someone who spent decades on energy efficiency is better in that category.


> Isn't Lunar Lake Intel's first mobile chip with a focus on energy efficiency?

Not by a long shot.

Over a decade ago, one of my college professors was an ex-intel engineer who worked on Intel's mobile chips. He was even involved in an Intel ARM chip that ultimately never launched (At least I think it never launched. It's been over a decade :D).

The old Conroe processors were based on Intel's mobile chips (Yonah). NetBurst explicitly didn't focus on power efficiency, and that drove Intel into a corner.

Power efficiency is core to CPU design and always has been. It's easy to create a chip that consumes 300W idle. The question is really how far that efficiency is driven. And that may be your point. Lunar Lake certainly looks like Intel deciding to really put a lot of resources into improving power efficiency. But it's not the first time they did that. The Intel Atom is another decades-long series which was specifically created with power in mind (the N150 is its current iteration).


> It is like saying that Java syntax is faster than C# syntax.

Java and C# are very similar, so that analogy might make sense if you were comparing e.g. RISC-V and MIPS. But ARM and x86 are very different, so it's more like saying that Go is faster than JavaScript. Which... surprise surprise, it is (usually)! That's despite the investment into JavaScript implementations dwarfing the investment into Go.


Actually, if you had made an opposite example, it might have gone against your point. ;) C# gives you a lot more control over memory and other low-level aspects, after all.

That’s semantics though, not syntax. What’s holding Java performance back in some areas is its semantics.

It might be the same with x86 and power-efficiency (semantics being the issue), but there doesn’t seem to be a consensus on that.


Yet how much perf in recent .NET versions comes from that, and how much comes from "Span<T>"-ing the whole BCL?

There’s much more to it than just Span<T>. Take a look at the performance improvements in .NET 10: https://devblogs.microsoft.com/dotnet/performance-improvemen.... When it comes to syntax, even something like structs (value types) can be a decisive factor in certain scenarios. C# is fast and with some effort, it can be very fast! Check out the benchmarks here: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
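To make the value-type point concrete, a rough sketch (in Kotlin rather than C#, but the boxed-vs-flat contrast is analogous to class vs struct arrays):

    // Boxed: each element is a heap-allocated Long object, so summing
    // chases a reference per element and creates GC pressure.
    fun sumBoxed(xs: Array<Long>): Long = xs.sum()

    // Flat: LongArray is a contiguous block of 64-bit values with no
    // per-element allocation, similar to an array of C# structs.
    fun sumFlat(xs: LongArray): Long = xs.sum()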

I know that C# is fast (it's my favourite language), but it's honestly hard to say which one is faster.

I love the saying "I don't trust benchmarks I didn't fake myself."


Intel has made SoC designs with power efficiency very, very close to the M series. Look at Lunar Lake and compare it to what was available at the time.

According to an AMD engineer I asked at the time, when they evaluated Ryzen/K12, it was "maybe" a 15% advantage for ARM depending on scenarios.

The efficiency came solely from the frontend, which is a lot heavier on x86 and stays up longer because decoding is way more complex. The execution units were the same (at least mostly, I think, might be misremembering), so once you are past the frontend there's barely any difference in power efficiency.


Aside from lithography there's clever design. I don't think you can quantify that but it's not nothing.

Actually, power efficiency was a side effect of having a straightforward design in the first ARM processor. The BBC needed a cheap (but powerful) processor for the Acorn computer, and a RISC chip was the answer. When ARM started testing their processor, they found out it drew very little power...

... the rest is history.


You're getting your history mixed up.

Acorn won the bid to make the original BBC home computer, with a 6502-based design.

Acorn later designed their own 32-bit chip, the ARM, to try to leapfrog their competitors who were moving to the 68000 or 386, and later spun off ARM as a separate company.


The BBC Micro had a 6502

> Are ARM processors inherently power efficient? I doubt it.

In theory, yes. In practice none of the x86 designs are even close; Lunar Lake at the same wattage barely competes with the M1. And the M1 is one node generation behind.


Yes, they are. The RISC philosophy, beyond the instruction set itself, is also about low gate count (so less energy used).

Nvidia can design a super clean solution from scratch - I can bet $50 that it's going to be more efficient in MIPS/watt.


Most of the gates on a CPU are not for instruction decoding.

Monkey's paw: GPUs will be cheap but there will be no new ones sold for ten years. Basically Mad Max with computer parts.

I would recommend Krita over GIMP.

Aren't they used for slightly different things though? GIMP for image manipulation and Krita for digital drawing?

(Krita is pretty awesome though, it's up there with Blender for me)


Inkscape is also great and sits closer to Krita.

What do you want to do in GIMP that Krita can't do with a better UI?

Skew transform and other transforms.

GIMP also has an excellent print interface. Krita doesn't have one at all.


> Skew transform and other transforms.

Krita has them both destructively, and non-destructively as transform layers. What is it you're missing?


I think I might've got confused with Inkscape. I remember GIMP handles transforms very well.

> What do you want to do in GIMP that Krita can't do with a better UI?

Adjust levels in photos.


Do you mean with the levels filter that Krita has, with the curves filter that Krita has, with the color balance filter that Krita has, with the slope, offset, power filter that Krita has, or with the hue/saturation/luma or red chroma/blue chroma/luma adjustment filter that Krita has?[1]

They are all available as non-destructive filter layers, by the way, and Krita users had access to this way before GIMP 3.0 was released with non-destructive filters.

[1] https://docs.krita.org/en/reference_manual/filters/adjust.ht...


> Do you mean with the levels filter that Krita has, with the curves filter that Krita has, with the color balance filter that Krita has, with the slope, offset, power filter that Krita has, or with the hue/saturation/luma or red chroma/blue chroma/luma adjustment filter that Krita has?

Honestly, I did not know that these existed in Krita (when I used Krita, I did not find them).

However, I still stubbornly maintain that I answered the question sufficiently, which used the qualifier "with a better UI".

Taking a leaf out of my wife's book: "Even when I'm wrong, I'm right!" :-)

(Yeah yeah, I know I was wrong)


Does Krita let you change those black and white icons to something with some colour?


Honourable mention: https://jspaint.app


Why would anybody think it is a real alternative to upload your photos to a website which is running proprietary garbage? Just use Adobe if you are going to do that.

The first feature paragraph on the Photopea landing page:

> There are no uploads. Photopea runs on your device, using your CPU and your GPU. All files open instantly, and never leave your device.


I strongly prefer local software, but as someone coming from Photoshop who now only does the occasional edit (and therefore can't justify the price), I find Photopea to be a good alternative, especially since it closely mimics Photoshop's interface so I don't have to learn a new UI. Also, your images stay local on your computer and aren't uploaded to their servers.

It's developed by a single guy, which I think is very impressive given how much of Photoshop's functionality it has. I just really wish it were open source (and not a web app).


Oh null is fine, but "everything is nullable" is the devil.

But then you are creating references with larger than needed "reachability".

I don't see a problem with that. This code would typically be inside its own function anyway, but regardless, I think your nitpick is less important than the readability benefit.
