
What are the memory and CPU costs of using this in web pages? For example, when a page with 100 images loads, won't decompressing hashes into 100 canvases take a lot of resources and block the page or the entire system for several seconds?

The placeholders are blurred, and I assume that creating blurred images is pretty expensive, taking O(N^2 * M) time, where N is the size of the image and M is the number of points.

Wouldn't it be cheaper to use blocky placeholders that take only O(N^2) time to paint?




> What are the memory and CPU costs of using this in web pages? For example, when a page with 100 images loads, won't decompressing hashes into 100 canvases take a lot of resources and block the page or the entire system for several seconds?

If you're sensible and use small input images, and therefore small hashes, it should be fine. It's not actually doing all that much 'work'. Besides, browsers are much faster than people think they are, especially for things like canvas drawing because that happens on the GPU (even in a 2d context).

> I assume that creating blurred images is pretty expensive.

The input image is converted to a hash. When the placeholder is needed, the hash is converted to a gradient. Essentially it's like picking a few points in an image and then using a gradient function to fill in the spaces between the points. That's something that's easy for a computer to do quickly.
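
A rough sketch of that idea (not the BlurHash library's exact code; the coefficient decoding and colour handling are simplified): each placeholder pixel is just a small weighted sum of cosine terms, so the cost is roughly width * height * number-of-components multiply-adds.

    type Rgb = [number, number, number];

    // Fill an RGBA pixel buffer from a handful of low-frequency components.
    // Assumes the decoded coefficients produce channel values in roughly 0..1.
    function renderPlaceholder(
      coeffs: Rgb[],     // numX * numY low-frequency components
      numX: number,
      numY: number,
      width: number,
      height: number
    ): Uint8ClampedArray {
      const pixels = new Uint8ClampedArray(width * height * 4); // RGBA
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          let r = 0, g = 0, b = 0;
          for (let j = 0; j < numY; j++) {
            for (let i = 0; i < numX; i++) {
              // cosine basis value for this pixel and component
              const basis =
                Math.cos((Math.PI * i * x) / width) *
                Math.cos((Math.PI * j * y) / height);
              const [cr, cg, cb] = coeffs[j * numX + i];
              r += cr * basis;
              g += cg * basis;
              b += cb * basis;
            }
          }
          const p = (y * width + x) * 4;
          pixels[p] = r * 255;     // Uint8ClampedArray clamps to 0..255
          pixels[p + 1] = g * 255;
          pixels[p + 2] = b * 255;
          pixels[p + 3] = 255;     // opaque
        }
      }
      return pixels; // e.g. ctx.putImageData(new ImageData(pixels, width, height), 0, 0)
    }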

Also, as it happens, blurring an image is fast too. You can implement a Gaussian blur as a convolution filter, and that's just a simple matrix of weights.
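
For example, a naive (non-separable) 3x3 Gaussian-ish blur is just this kind of per-pixel weighted sum; real implementations usually split it into two 1D passes, but even this version is cheap at placeholder sizes.

    // Single-pass 3x3 blur over a grayscale buffer, clamping at the edges.
    function blur3x3(src: Float32Array, width: number, height: number): Float32Array {
      const kernel = [1, 2, 1, 2, 4, 2, 1, 2, 1]; // weights sum to 16
      const out = new Float32Array(src.length);
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          let acc = 0;
          for (let ky = -1; ky <= 1; ky++) {
            for (let kx = -1; kx <= 1; kx++) {
              const sx = Math.min(width - 1, Math.max(0, x + kx));
              const sy = Math.min(height - 1, Math.max(0, y + ky));
              acc += src[sy * width + sx] * kernel[(ky + 1) * 3 + (kx + 1)];
            }
          }
          out[y * width + x] = acc / 16;
        }
      }
      return out;
    }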


> browsers are much faster than people think they are

Serious question: my 2017 MacBook Air grinds to a halt on many web pages, especially ones with video animation, to the point where I can't type because it drops multiple characters. I have to use an ad blocker to make pages workable. Is this normal or is there something wrong with my machine?


Occasionally I will accidentally not have an ad blocker, and yes, the amount of CPU required by ads can be enormous.

People seem to just be used to them and claim that their computers are ‘getting slower’, but really they don’t use their computer for anything but web browsing and it’s only slower because ads are getting worse and worse.


> browsers are much faster than people think they are

Does "people" include you?

Also web pages are more bloated than people think they are.


Does "people" include you?

Without wishing to create a paradox, it absolutely does, yes. I've been a web dev since 1997 and I frequently have to remind myself that I don't always need to optimize things, memoize things, or throw features out based on browser performance any more. I constantly underestimate what browsers are capable of. There's a 'pandemic' of over-optimization that heaps complexity (aka bloat) into web apps unnecessarily based on the mostly wrong belief that browsers are slow.

Devs need to be careful, and they need to measure things. They shouldn't start with the assumption that something will be slow.

What's really interesting about this whole question is that HN's least favorite frontend library, React, suffers from this problem. The virtual DOM implementation was necessary a decade ago when React started, but DOM manipulation has since been optimized in browsers, so now the vdom is actually a bit of a hindrance (React has advantages other than speed, so it's still a fine choice). Libraries that HN likes, such as Svelte and Solid, rely on the browser being fast, because the browser is fast.


I don't know which world you live in, but most websites are incredibly unoptimised and over-engineered. They use a huge amount of resources on expensive machines and grind consumer machines to a halt.

I wish we lived in a pandemic of over-optimization.


It's entirely possible for both things to be true at the same time. Tons of developers prematurely optimize things that don't matter because they make assumptions about performance, and those same assumptions can also mean an epidemic of not optimizing things that *do* matter.


> Devs need to be careful, and they need to measure things. They shouldn't start with the assumption that something will be slow.

I disagree. The specs of developers' machines typically so far surpass those of users as to make that assumption valid more often than not.

Further, the fail case for incorrectly assuming that a web page will be fast ranges from it being slow to it being unusable, while the fail case for incorrectly assuming that a web page will be slow is that it is slightly more responsive than expected.


> The specs of developers' machines typically so far surpass those of users as to make that assumption valid more often than not.

Billions of people are using computers that are 100 times slower than your machine by various metrics: 100 times less hard drive space, a 100 times slower hard drive, a 100 times slower processor, 8-16 times less memory, etc.

But then they probably won't pay you anyway, and if you target them you have to make compromises that will affect the overall quality.


The blurring in this case is just a natural consequence of throwing away all the high-frequency DCT components. And since the DCT is also what's used to create JPEGs, highly optimized implementations are available.
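
A toy illustration of that point in 1D: take the DCT of a signal, keep only the first few coefficients, invert, and you get a smoothed version of the original. (This uses a plain O(N^2) DCT for clarity; real codecs use fast variants.)

    // Forward DCT-II of a 1D signal (unnormalized).
    function dct(signal: number[]): number[] {
      const n = signal.length;
      return Array.from({ length: n }, (_, k) =>
        signal.reduce(
          (sum, x, i) => sum + x * Math.cos((Math.PI * k * (i + 0.5)) / n),
          0
        )
      );
    }

    // Inverse DCT using only the first `keep` coefficients.
    function idctTruncated(coeffs: number[], keep: number): number[] {
      const n = coeffs.length;
      return Array.from({ length: n }, (_, i) => {
        let x = coeffs[0] / 2;
        for (let k = 1; k < Math.min(keep, n); k++) {
          x += coeffs[k] * Math.cos((Math.PI * k * (i + 0.5)) / n);
        }
        return (2 / n) * x;
      });
    }

    // A sharp step becomes a gentle ramp when only 3 coefficients survive.
    const step = [0, 0, 0, 0, 1, 1, 1, 1];
    console.log(idctTruncated(dct(step), 3).map(v => v.toFixed(2)));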


The amount of work it takes to decode an image is incredibly tiny by today's standards: https://github.com/woltapp/blurhash/blob/master/C/decode.c
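
Back-of-envelope (assumed sizes, not measured from that code): decoding a 32x32 placeholder from a 4x3-component hash is on the order of ten thousand multiply-adds, so even 100 of them is a trivial amount of work.

    // Hypothetical sizes: 32x32 output pixels, 4x3 components per hash.
    const opsPerPlaceholder = 32 * 32 * 4 * 3; // 12,288 basis evaluations
    console.log(`${opsPerPlaceholder} per placeholder, ${100 * opsPerPlaceholder} for 100 images`);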



