
Old undo steps could be dumped to SSD pretty easily.
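Something like the sketch below would do it (hypothetical Python, just to illustrate; a real editor would spill tiles of pixel data rather than pickled objects): keep the newest steps in RAM and push older ones to a scratch file, reloading them lazily on a deep undo.

    import os, pickle, tempfile

    class SpillingUndoStack:
        """Keep the newest `keep_in_ram` undo steps in memory;
        pickle older ones to scratch files on disk."""
        def __init__(self, keep_in_ram=8):
            self.keep_in_ram = keep_in_ram
            self.ram = []               # newest steps, oldest first
            self.spill_dir = tempfile.mkdtemp(prefix="undo_")
            self.spilled = 0            # number of steps on disk

        def push(self, step):
            self.ram.append(step)
            if len(self.ram) > self.keep_in_ram:
                oldest = self.ram.pop(0)
                path = os.path.join(self.spill_dir, f"{self.spilled}.pkl")
                with open(path, "wb") as f:
                    pickle.dump(oldest, f)
                self.spilled += 1

        def pop(self):
            if self.ram:
                return self.ram.pop()
            if self.spilled:            # reload the newest spilled step
                self.spilled -= 1
                path = os.path.join(self.spill_dir, f"{self.spilled}.pkl")
                with open(path, "rb") as f:
                    step = pickle.load(f)
                os.remove(path)
                return step
            return None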

And while I understand that many people are stuck on Photoshop, I bet it would be easy to beat 800MB by a whole lot. But so I can grasp the situation better: how many non-adjustment layers do those professional photographers use? And of those layers, how many have pixel data that covers more than 10% of the image?




From what I've seen, quite a lot of layers are effectively copies of the original image with global processing applied, e.g. a different color temperature, blur, bloom, flare, HDR tone mapping, a high-pass filter, or local contrast equalization. Those layers are then blended together using opacity masks.

For a model photo shoot retouch, you'd usually have copy layers with fine skin details (to be overlaid on top), and below those, layers with rougher skin texture which you blur.

Also, quite a lot of them have rim lighting painted on by using a copy of the image with remapped colors.

Then there's fake bokeh, local glow for warmth, liquify, etc.

So I would assume that the final file has 10 layers, all roughly 8000x6000px, stored in RGB as float (because you need negative values) and blended together with alpha masks. And I'd estimate that the average layer affects 80%+ of all pixels. So you effectively need to keep all of that in memory: once you modify one of the lower layers (e.g. blur a wrinkle out of the skin), you need all the higher layers to composite the final visible pixel value.
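Rough arithmetic for that stack, as a sketch (assuming float32 per channel and one alpha mask per layer, with the sizes above):

    w, h, layers = 8000, 6000, 10
    bytes_rgb  = w * h * 3 * 4           # RGB, 4-byte float per channel
    bytes_mask = w * h * 4               # one 4-byte float alpha mask
    per_layer  = bytes_rgb + bytes_mask
    print(per_layer / 1e6)               # 768.0 MB per layer
    print(layers * per_layer / 1e9)      # 7.68 GB for the whole stack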


Huh, so a lot of data that could be stored in a compact way but probably won't be for various reasons.

Still, an 8k by 6k layer with 16-bit floats (which are plenty), stored in full, is less than 400MB. You can fit at least eleven into 4GB of memory.
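Checking that with the same back-of-the-envelope sketch (RGB plus an alpha mask, 2 bytes per channel as float16):

    w, h = 8000, 6000
    per_layer = w * h * 4 * 2            # RGBA, 2-byte float16 per channel
    print(per_layer / 1e6)               # 384.0 MB, under 400 MB
    print((4 * 2**30) // per_layer)      # 11 layers fit in 4 GiB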

I'll easily believe that those huge amounts of RAM make things go more smoothly, but it's probably more of a "Photoshop doesn't try very hard to optimize memory use" problem than something inherent to photo editing.


So why are you blaming the end user for needing beefier hardware than you'd prefer, when some third-party software vendor they're beholden to makes inefficient software?

Also, your "could be stored in a compact way" is meaningless. Unless your name is Richard and you've designed middle-out compression, we are where we are as end users. I'd be happy if someone with your genius insights into editing photo/video data would go to work for Adobe and revolutionize the way computers handle all of that data. Clearly, they have been at this too long and cannot learn a new trick. Better yet, form your own startup and compete directly with the behemoth that is Adobe, and unburden all of us who are suffering life with monthly rental software on underspec'd hardware. Please, we're begging.


Where did I blame the end user?

> Also, your "could be stored in a compact way" is meaningless. [...]

That's getting way too personal. What the heck?

I'm not suggesting anything complex, either. If someone copies a layer 5 times and applies a low-CPU-cost filter to each copy, you don't have to store the results, just the original data and the filter parameters. You might be able to get something like this already, but it doesn't happen automatically. There are valid tradeoffs in simplicity vs. speed vs. memory.
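A minimal sketch of that idea (hypothetical classes, nothing like Photoshop's actual internals; scipy's gaussian_filter stands in for the low-cost filter):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    class PixelLayer:
        def __init__(self, pixels):
            self.pixels = pixels          # the only full raster stored
        def render(self):
            return self.pixels

    class FilteredLayer:
        # Stores a source reference plus filter parameters instead of a
        # second full-size raster; pixels are recomputed on demand.
        def __init__(self, source, filter_fn, **params):
            self.source, self.filter_fn, self.params = source, filter_fn, params
        def render(self):
            return self.filter_fn(self.source.render(), **self.params)

    base = PixelLayer(np.random.rand(600, 800, 3).astype(np.float32))
    blurred = FilteredLayer(base, gaussian_filter, sigma=(5, 5, 0))
    # "blurred" costs a few dozen bytes until render() is called:
    composite = 0.5 * base.render() + 0.5 * blurred.render()

The tradeoff is exactly the one mentioned above: you pay CPU at every recomposite instead of RAM for every copy.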

"Could be done differently" is not me insulting everyone that doesn't do it that way!



