1) Your site's burning; pity most people won't see your article/images.
2) The claim of "unlimited everything," repeated throughout the article, as well as "instant" anything, is just hyperbole. There's no such thing in software. It might've been cleverer than Photoshop about things, but unlimited and instant? No.
3) The claim to complete resolution independence is false. You might store editing operations/layers as parameters, but the source material is still resolution bound.
4) 48 bits is honestly not that great. For RGB (no word about alpha) that's 16 bits per channel. If it's normalized integer, that's 64K gradations vs. 256. If it's interpreted as half float, you only get 2 more bits of precision per channel (half floats have a 10-bit significand). That's cool, but not all that cool: either you get a non-HDR format with good channel resolution, or an HDR format with 4x more gradations than 8 bpc. You know what's actually cool? 32-bit single-precision floats: 4 bytes, 32 bits per channel, 128 bits per pixel. You get 23 bits of gradations plus HDR, and desktop graphics cards use this format internally anyway.
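To make the comparison concrete, here's the raw arithmetic on levels per channel. The significand widths are standard IEEE 754 facts; the labels and framing are mine, not from the article:

```python
# Distinguishable levels per channel for common pixel formats.
# "half" and "float32" counts use the *stored* significand bits,
# matching the 10-bit / 23-bit figures in the comment above.

LEVELS_PER_CHANNEL = {
    "uint8":   2 ** 8,    # 256 levels: the classic 8 bpc
    "uint16":  2 ** 16,   # 65,536 levels: 48-bit RGB, but no HDR
    "half":    2 ** 10,   # HDR, but only modest precision
    "float32": 2 ** 23,   # HDR *and* high precision
}

# 4 channels x 4 bytes each = 128 bits per RGBA pixel
bytes_per_pixel_f32_rgba = 4 * 4

for name, levels in LEVELS_PER_CHANNEL.items():
    print(f"{name:8s} ~{levels:>9,} levels per channel")
```

Note that uint16 actually has 64x the uniform levels of half float; what half buys you is dynamic range, not step count.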
As someone who used it: it did indeed have all of those features. Sorry to burst your bubble.
As for resolution independence, what they mean is that all of the operations and manipulations you performed were resolution independent. If you upscaled the image, your manipulations weren't simply upscaled, but re-performed at the new scale, which gave significantly better results.
It was indeed instant. You could load a 500 MB image on a machine with a quarter that much RAM and zoom, pan and mess with it in realtime. It was amazing at the time.
Sorry to burst your bubble, if you think there's such a thing as unlimited and instant you're sorely mistaken.
It might've been fast in simple cases compared to the alternatives, but there's no way around it: all computation takes time. You can make tradeoffs where you cache more and compute less, but that's it.
It's not instant and unlimited. It's limited and fast, or more limited and faster, or more RAM and faster, or more data and slower.
Stop bending semantics to fit your binary viewpoint.
Instant in this case means you move the brush and it distorts as you move the brush. No parallax, no beach ball waiting for it to do the computation. It happened as you moved it. It was instant. Even with brush sizes the size of the image you were using.
It had unlimited undo. It had unlimited layers. That's not an arguable point.
Don't take the words literally. Instant in this context means what jawngee said: it was so responsive that people perceived it as instant. Awesome. Layers were unlimited, meaning you could add as many as you wanted. Undo operations were unlimited, meaning you could work for 10 hours straight and undo back to the point before you started. I believe that's what everyone gets from these words, except you.
It's not like people will think it's unlimited for real, there's no such thing.
It seems to me you are having problems interpreting words in context.
EDIT: I mean, if he said you could have 8K undos, or 8K layers and the response time is <15ms would that make you feel better? Does it make a difference?
We write software for humans. The same humans who can't tell the difference between instantaneous events and events separated by less than a few milliseconds. The same humans who won't be around for long enough to recognize the difference between a really large number of events and an unlimited number of events.
So, by all means, continue to write software for those ideal beings that live in a world of comprehensible instantaneous and unlimited events, but you'll find that most people on this forum are operating under a different assumption.
It's just not a useful part of the conversation whether it was actually instant or just so fast that people felt it was instant. I could spend years getting the frame rate of one of my games from 59 to 60 per second, but it wouldn't be a meaningful difference.
In or out of context, there's no such thing as instant and unlimited. There might be fast, and there might be a lot, and there might be faster and more than you usually need. But I repeat: stop claiming utter nonsense.
I think you don't know what unlimited means. It doesn't mean "infinite"; it means there was no limit placed on it. Sorry, but you didn't understand that word, plain and simple, and now you really have created an ass-thread for yourself.
Second: "instant". It means, quite literally, "an infinitesimal or very short space of time; a moment". So, calling software "instant" - this is perfectly valid.
I think you should go back and re-read your thread with those two definitions in mind, and figure out who you need to apologize to for being ignorant ..
If you ran into a limit, submitted a bug, and they moved the limit out further, and they continued to do this every time you submitted a bug, would that be unlimited?
I'm not suggesting this is what they did, but if it was, would you consider that unlimited?
There are other limits you'll hit long before you hit the computer's raw storage capabilities. CPU <-> RAM access takes time; CPU <-> disk access takes ages by comparison. As your dataset grows you'll exhaust page caches, CPU caches, etc., and things start to slow down disproportionately to access (twice the data is more than twice as slow). All modern computers have more RAM and disk space than you can use simultaneously in a usable (in human terms) time frame.
They're the same image sizes we use today. And it was instantaneous, or relatively so in comparison to Photoshop 3 - which couldn't edit the images at all due to size. We would have to use a plugin to work on parts of an image most times.
I dunno how this particular app worked, but there is this thing called "deferred processing".
The thing to realize is that the max resolution I can ever see on screen is the resolution of that screen I am working on.
Even a retina MacBookPro has only 4 Mpixels on screen, at any time.
Now let's say I am painting a mask on a 30 Mpixel medium-format image with a brush of 500 pixels diameter.
Let's say I am zoomed out, so the image is displayed filling the entire screen. This requires a zoom level of 13% for the retina example.
That means I only have to paint with a brush of 67 pixels diameter on a 4 Mpixel image in realtime and record the stroke!
Because the brush path is recorded, the brush is resolution independent. That is what the claim refers to, not to the source material.
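The idea can be sketched in a few lines. This is a toy illustration; the zoom and brush numbers come from the comment above, and all the function names are mine, not from Live Picture's actual code:

```python
# Toy sketch of screen-resolution brush painting: the stroke is recorded
# in image coordinates, and only rendered at the current zoom level.

def screen_brush_diameter(image_brush_px: float, zoom: float) -> float:
    """Diameter the brush needs on screen at a given linear zoom factor."""
    return image_brush_px * zoom

# The comment's example: a 500 px brush at 13% zoom
assert round(screen_brush_diameter(500, 0.13)) == 65  # the comment says ~67

# A recorded stroke is just (x, y, pressure) samples in *image* space,
# so it can be replayed at any resolution later:
stroke = [(120.0, 340.5, 0.8), (122.4, 341.0, 0.9)]

def replay_at_zoom(stroke, zoom):
    """Scale recorded stroke positions to the current working resolution."""
    return [(x * zoom, y * zoom, p) for x, y, p in stroke]

preview = replay_at_zoom(stroke, 0.13)   # cheap, on-screen, realtime
final   = replay_at_zoom(stroke, 1.0)    # full-res render, done once
```

Because only the path is stored, the same stroke re-renders cleanly at 13%, 100%, or any scale in between, which is exactly the sense in which the brush is resolution independent.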
When I zoom into such a 30 Mpixel image, to better see what I am doing, the area visible on screen remains 4 Mpixel. Many image editing apps (or most) completely ignore this.
What's more, while I am working I do not have to use sub-pixel precision when blitting brushes onto the image (or doing whatever else), because that level of precision is, in general, but specifically when editing, almost always irrelevant at that high a resolution.
Memory-wise, too, I don't ever have to hold (many layers of) 30 Mpixel res in RAM.
When a user loads an image, I build a pyramid (a mip map), dump that to disk and load tiles into a cache as I go.
This never changes. When I have finished my 6-hour editing session, I press 'render' and everything is carried out at 30 Mpixel res and subpixel precision. This render may then take an hour, but I don't care; I can go to bed and deliver the result to the client the next day.
When I do this in Photoshop, everything is always done at full res, with full subpixel precision. OK, for a few years now PS, too, has used a pyramid (mip mapping) in RAM, but it is far from optimal.
That is the reason why Photoshop's speed always sucks (because as hardware specs increase, so do the resolution and the number of layers people use when editing images).
It always uses too much RAM and too much CPU because it carries out a shitload of stuff you can't ever see until the final image is used in print or you zoom in at 1:1 and pan the entire image. Two things that, together, almost never happen in image editing.
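The pyramid-plus-tile-cache scheme described above can be sketched quite compactly. Plain Python lists stand in for real image buffers here; none of this reflects Live Picture's (or Photoshop's) actual implementation:

```python
# Build a mip chain by 2x2 averaging, then fetch tiles lazily through a
# small cache, as the deferred-processing comment describes. In a real
# app the pyramid would be dumped to disk and tiles paged in on demand.
from functools import lru_cache

def downsample(img):
    """Halve a grayscale image (list of rows) by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [
        [(img[2*y][2*x] + img[2*y][2*x+1]
          + img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
         for x in range(w // 2)]
        for y in range(h // 2)
    ]

def build_pyramid(img):
    """Full-res image plus successively halved levels (a mip chain)."""
    levels = [img]
    while len(levels[-1]) >= 2 and len(levels[-1][0]) >= 2:
        levels.append(downsample(levels[-1]))
    return levels

# A synthetic 16x16 "image": levels will be 16, 8, 4, 2, 1 pixels wide.
pyramid = build_pyramid([[float((x + y) % 16) for x in range(16)]
                         for y in range(16)])

@lru_cache(maxsize=64)
def tile(level, ty, tx, size=4):
    """Fetch one size x size tile from a pyramid level, memoized."""
    img = pyramid[level]
    return tuple(
        tuple(img[ty*size + y][tx*size + x] for x in range(size))
        for y in range(size)
    )

coarse = tile(2, 0, 0)   # zoomed out: the 4x4 level fits in one tile
fine   = tile(0, 1, 1)   # zoomed in: one full-res tile under the viewport
```

Zoomed out, the editor only ever touches the coarse levels; zoomed in, it touches only the handful of fine tiles under the viewport, which is why working-set size stays bounded regardless of image size.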
P.S.: the only acceptable minimal bit depth for image editing is 16 bits/component float (linear space). Anything else will compromise quality, one way or the other.
P.P.S.: I was the product manager for Eclipse after Alias/XYVision sold it to Form & Vision in 1998. AMA. :)
Deferred processing is fine, until it isn't. For instance, it's fine for things that don't produce a lot of non-local changes, such as addition, subtraction, multiplication, etc., as in brightness, contrast, compositing and so on.
It breaks down, however, when you have processing that produces nonlocal changes, such as convolution, blurring, sharpening, distortion, etc., as in soft filters, gaussian blur, distortion filters and so on.
You can run these at screen resolution only, but the result you will get can vary wildly from what the ultimate render will be.
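A tiny 1-D example makes the mismatch visible. This is purely illustrative (a box blur on a step edge, not anything app-specific): blurring at full resolution and then reducing for display does not give exactly the same samples as reducing first and blurring with a zoom-scaled radius:

```python
# Why a screen-resolution preview of a nonlocal op only approximates
# the final full-res render, shown with a 1-D box blur on a step edge.

def box_blur(signal, radius):
    """Mean filter with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def downsample(signal):
    """Naive 2:1 decimation (take every other sample)."""
    return signal[::2]

edge = [0.0] * 8 + [10.0] * 8        # a step edge at "full resolution"

# Exact: blur at full res, then reduce for display.
exact = downsample(box_blur(edge, radius=4))

# Preview: reduce first, then blur with a radius scaled by the zoom.
preview = box_blur(downsample(edge), radius=2)

# The two agree roughly, but not exactly, near the edge:
diff = max(abs(a - b) for a, b in zip(exact, preview))
assert diff > 0.0
```

With well-filtered mip levels (rather than naive decimation, as here) the gap shrinks a lot, which is the point the reply below makes; but it never vanishes entirely for nonlocal operations.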
I disagree.
Yes, there are operations whose results will vary.
But many non-local changes will work fine and deliver very close previews when run on properly filtered mip versions of an image.
In particular, the ones you listed work well; none of them is a good example of something that will give you trouble with the approach I described (convolution, blurring, sharpening, distortion).
Try Darktable to see how well these work.
I used to use Live Picture back in the day. It was pretty amazing, but we never really fit it into our workflow because we were a very heavy digital shop and Photoshop still had a leg up in a few areas (this was a digital pre-press and multimedia dev shop back in the early '90s).
Interestingly, we were also one of the first shops to have a digital camera for use in pro photo shoots. It was a Leaf back that fit onto a Hasselblad. It would take one photo for each color plane (R, G, B), and each shot took about 30 seconds. You couldn't photograph people because of how long it took, but it was good for catalog and still-life work, and high resolution enough for print. I think it cost about $10K, if memory serves.
The history of desktop applications is full of examples of clever or unique applications that never took off or failed to gain widespread adoption. Sometimes these applications had a superior interface to the dominant app (and sometimes not).
It would be really useful to see a few side-by-side comparisons of how a task is accomplished in LivePicture compared with Photoshop.
Although Photoshop is powerful and feature rich, I find the interface clumsy and awkward (Illustrator, in my opinion, has an even clunkier interface). Does the lack of serious competition against Photoshop keep Adobe from re-thinking the interface?
Apple's Final Cut Pro clearly had some influence on subsequent releases of Premiere Pro, but there's no serious competitor to Photoshop that I can think of (yes, there are alternatives, but none that are likely to take users away from Photoshop).
What's more, many people simply don't go looking for alternatives. Mastering Photoshop or Illustrator will stand you in good stead in the employment market if you're looking for a visual design job. And if you get stuck with an application task, there's a good chance you'll find a solution by searching for it online. There is an absolutely enormous number of supporting resources around Adobe's Creative Suite of products: tutorials, training, books, discussion sites etc.
All that helps to maintain the status quo and makes it much harder for competing apps to gain attention.
>Does the lack of serious competition against Photoshop keep Adobe from re-thinking the interface?
It's probably more that pros who have invested a great deal of time adapting themselves to Photoshop's workflow and quirks would be really upset if Adobe rethought the interface--however excellent a job they did.
FWIW, Adobe did a really excellent job when they had a clean sheet of paper with Lightroom, although it took a couple iterations to get there. (Mostly--it's more modal than I'd prefer in some respects.)
> The history of desktop applications is full of examples of clever or unique applications that never took off or failed to gain widespread adoption. Sometimes these applications had a superior interface to the dominant app (and sometimes not).
The scary thing is that some of these great features can never be used without fear of a huge legal attack over patents.
EDIT:
>Does the lack of serious competition against Photoshop keep Adobe from re-thinking the interface?
I don't think so. I think the enormous user base prevents them from changing the interface. That's probably a good thing, even if the interface is sub-optimal.
AutoCAD has a very powerful interface, with a mix of WIMP and typed commands, with some scripting.
In theory, GIMP being open source means anyone can give it a new interface. But even though the interface is one of the biggest reasons many people give for not liking GIMP, few projects have actually given it a saner one.
The photoshop competition was all on IRIX, and was a LOT better. Matador and Amazon were great applications, and better performers on 64bit MIPS hardware. Some got ported to Linux, but were then not maintained, or integrated into other codebases.
Old / dead image editors (with deep colour) better than Photoshop at the time:
* IFX Amazon
* Alias Eclipse
* Da Vinci or something? Not the Colour/DI suite.
* Deep Paint
* Avid Matador
* I actually used Combustion for years. You can keep your photo-pap!
Edit: Maybe I'm mistaken and the linked site is talking about _software_ from the 90s? It's down, so I really don't know. All I know is that they need to stop calling that an "app", then.
We are currently working on an application with similar features called Leonardo, although Leonardo is streamlined for digital painting, not photo editing.
Leonardo is still at least 6 months away from release, but all the features like "unlimited canvas", "instant painting" and handling of "huge files" are already in place.
Unfortunately there is no video right now showing the "unlimited canvas" and "instant painting", but I can make one within the next couple of hours... :)
Even apps like Pixelmator don't quite get enough in there to adequately replace Adobe's offerings.
In fact, I can only think of Dreamweaver as the prime example of something that has plenty of superior competition, attacking from a variety of angles.
Indeed, that's a very good example. But still, what we are forgetting is the insane amount of money it costs to buy Adobe software; even if they have a smaller market share, they sure make up for it. Supposedly their customers are all artists, for whom $1000-$2000 is like buying new clothes. This allows Adobe to further seal its position by spending on R&D to pull further ahead of the competition.
So we can say Adobe will not stand challenged until someone can beat them on this front with a proper strategy.
Which is true for its current competitors, all of whom only try to make the best of their current possibilities.
The probable reasons this app failed, and any app that competes with PS will (regardless of how well it tackles the resolution-independence/feedback-speed issue), come down to two things:
1. Workflow must match PS 1:1 for the majority of everyday image editing operations.
People that use PS are mostly creative folks who do not understand image editing from a technical perspective. Solving a problem (a use case), to them, means internalizing a workflow.
Mastering a 'deep' app like PS this way takes years.
If you write a competitor to PS and don't honor this experience, which took your target users years (often over a decade) to acquire, you're shooting yourself in the foot too hard to ever gain enough momentum in a market dominated by PS (resp. its users).
This is imho also the reason why Adobe has never touched basic workflow in PS. If they did, they would risk alienating users and driving them to test a competitor's product.
Recall when Apple 'improved' the UI/workflow of FinalCut Pro? The screams of outrage echoing through the web? :)
2. Feature set must be more or less identical to PS. You can 'plus' in some areas but you can't 'minus'.
If you have a use case that is not covered by your app but is covered by PS, and the average target user needs it even once a week, that will be enough reason for them not to consider your app a worthwhile alternative, even if you do get 1. right.
1. is not too hard to do, engineering wise. But 2. is a huge task. PS simply has a lot of features.
I heard that it was due to the lack of plugins, and that was in turn due to a tricky and complicated API. Live Picture used some interesting and complicated data structures to work its magic, but all of that complication was exposed to and required of plugin authors.
I recall trying Live Picture a few times, back in the early 90s. It was promoted in the same kind of breathless tone as this article. My recollection is that it in no way lived up to the hype. It was slow and crashed a lot.
I set up a workflow for InDesign CS5.0 and everything worked. InDesign CS5.5 came out and they refused to sell me CS5.0. They introduced no new features, but the export as HTML option started producing random crashes. I spoke with their engineers and they said they would fix it in the next version.
We still experience a random crash in about 1 in 30 automated jobs, which requires us to redo a large portion by hand. I don't know if they ever fixed it, but I know they want thousands of dollars to upgrade to CS6.0.
Why did this problem happen? Because they released a new version called CS5.5 that introduced only bugs. Using Adobe software feels like you are being robbed.
Still you're using it, because there is no alternative.
Since it's the best software for what it does, your idea that "Adobe must die" is based solely on the fantasy notion that whatever replaces it won't have bugs and will be all unicorns and love.
Not to mention that the bugs you mention are mostly specific to your workflow (specific automated jobs et al.), and don't mean that the most used software in the industry is problematic in general. I've never been bitten by any bugs in the other parts of 5.5 I use, like PS and Premiere, for example.
I also don't see why you were quick to jump to 5.5 since you "set up a workflow for InDesign CS5.0 and everything worked.".
I set up the workflow and was ready to deploy. When I went to purchase the licenses, the Adobe sales rep said it was their policy to only sell the CS5.5 version.
I feel betrayed because each new version of Adobe's product brings more bugs and no useful features. This is a direct consequence of Adobe's monopoly: they disregard the needs of their users in favor of pushing new versions down our throats. Reminds me of Windows ME.
>I set up the workflow and was ready to deploy. When I went to purchase the licenses, the Adobe sales rep said it was their policy to only sell the CS5.5 version.
That's bad. Adobe does tend to screw their customers in similar business ways.
>I feel betrayed because each new version of Adobe's product brings more bugs and no useful features.
Well, I don't know about InDesign much, but this is not true for: Photoshop, Lightroom and Premiere.
Adobe are the incumbent in this space and, seemingly, will continue to be. There is so much in Photoshop (and other Adobe software) that could be improved from an experience point of view. Simple things like the layers palette need a rethink; there are small inconsistencies with many of the tools; and the software is generally buggy. Often when I get frustrated with the software it's usually over something that one infers is still there due to legacy.
There are a lot of more niche tools coming out which do look really promising, but there are a couple of issues:
1. Professional software is often quite pricey, and understandably so. Even if a company offers a 30 day trial, that sometimes isn't quite enough to evaluate a piece of software.
2. The new tools are rarely compatible with PSDs and other Adobe file formats. I have to sympathise with the software creators here, Adobe file formats are a complete mess, nevertheless it's a required feature for anyone who wants to work with other companies within the industry.
It's a shame really, Adobe's monopoly on creative industry type software is holding everyone back, but no one is really in a position to do anything about it.
I agree, but I feel the wording of the article title is the issue here. While Photoshop is a great piece of software, there is room for improvement; if this program can achieve that, kudos to the developers.
The problem is that nothing open source (or even closed) comes even close to the Photoshop workflow, plus there's the integration with other monopolized software like Illustrator and Fireworks. It's also one of the reasons a lot of people can't switch to Linux; Wine-ing it is just not good enough.
I've worked professionally with Photoshop and After Effects and access to Adobe software is the only reason I still have a Windows computer. I don't even want to try getting it to work in Wine, and dual-booting just to be able to use them seems like a waste of time.
And until professionals use it in large numbers and are comfortable around it and potential employers want to see it on a resume, other solutions aren't going to matter.
This is a problem imho - why should a vendor-specific product be a de facto qualification for a job? I know it is, and plenty of office clerk jobs "require proficiency" in Office/Word/Excel. I say, instead of requiring a product name on your resume, you should instead have to show you can achieve a particular effect (say, describe a procedure to achieve a rock texture in the abstract).
I suppose it can matter if that vendor's product is what they use in house and if whatever the other thing is that you know, isn't. They may be worried about the amount of effort they'd need to put into training you to do things their way.