Question for anyone who happens to be an expert: Is there any way to quantify how much better Webb is independently of the amount of time used to take the exposures? Like, could Hubble achieve the same quality of images as Webb if it was given 100x (or whatever) more time exposure?
I'm trying to understand how much the improvement is "speed of convergence" vs. "quality of asymptotic result". (Though... is that even a valid way of trying to understand things?)
IANA{astrophysicist, space engineer}, but I do follow this closely and have what I'd call a working armchair understanding of this stuff. Anyone from relevant fields is welcome to gently correct any imprecisions. I always want to learn more and will thank you for it.
>could Hubble achieve the same quality of images as Webb if it was given 100x (or whatever) more time exposure?
No, for a different and simpler reason: Hubble isn't as sensitive in the infrared as Webb. A lot of the stars and structure Webb has revealed, especially in the two nebulae, comes from picking up much more of the infrared light to which the gas and dust of the nebulae are essentially transparent. In other words, the data is qualitatively different, in addition to the increased resolution. Webb will also see much older light, which is redshifted (the longer the travel, the greater the shift) out of Hubble's range of sensitivity.
As for the quantitative part, I guess mirror size is what you'd want to look at? Hubble has a single circular primary mirror with a diameter of 2.4 metres.[0]
Webb has 18 hexagonal mirror segments that are combined into the equivalent of a circular mirror with a diameter of 6.5 m. That is ~6.25 times the light collection area of Hubble (25.4 m² vs 4 m²).[1]
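A quick back-of-the-envelope in Python (the diameters are the ones above; treating both mirrors as solid discs is a simplification, which is why the published collecting areas come out lower):

    import math

    hubble_d = 2.4   # Hubble primary mirror diameter, metres
    webb_d = 6.5     # Webb effective primary diameter, metres

    hubble_area = math.pi * (hubble_d / 2) ** 2   # ~4.5 m^2 as a bare disc
    webb_area = math.pi * (webb_d / 2) ** 2       # ~33.2 m^2 as a bare disc

    print(webb_area / hubble_area)   # ~7.3 for ideal discs
    # The published collecting areas (25.4 m^2 vs ~4 m^2) are smaller because of
    # the secondary-mirror obstruction and the gaps between Webb's segments,
    # which is where the oft-quoted ~6.25x figure comes from:
    print(25.4 / 4.0)                # 6.35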
> That is ~6.25 times the light collection area of Hubble (25.4 m² vs 4 m²)
This would have to be scaled by the wavelength being observed, for a resolution comparison. Hubble actually has better absolute resolution when viewing the shorter wavelengths that JWST can't sense (0.05 arcseconds vs JWST's 0.1 arcseconds).
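A rough sketch of that scaling, using the Rayleigh criterion (the comparison wavelengths here are my own picks, not official figures):

    # Smallest resolvable angle: theta ~ 1.22 * lambda / D (Rayleigh criterion)
    ARCSEC_PER_RADIAN = 206265

    def rayleigh_limit_arcsec(wavelength_m, mirror_diameter_m):
        return 1.22 * wavelength_m / mirror_diameter_m * ARCSEC_PER_RADIAN

    print(rayleigh_limit_arcsec(550e-9, 2.4))  # ~0.058": Hubble, visible light
    print(rayleigh_limit_arcsec(2e-6, 2.4))    # ~0.21":  Hubble at 2 microns
    print(rayleigh_limit_arcsec(2e-6, 6.5))    # ~0.077": JWST at 2 microns

So at any wavelength both can see, Webb's bigger mirror wins; Hubble only pulls ahead by retreating to the short wavelengths Webb doesn't cover.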
Right, that didn't occur to me at first, but it's just obviously true when you point it out, thanks. Though I didn't know that Hubble actually has higher resolution in that comparison.
Then, in some sense, the first part of my explanation is most of the story in the case of comparing MIRI (the mid-infrared instrument) to Hubble in the near-infrared.
But in comparing NIRCam to Hubble in the near-infrared, JWST would in fact have greater resolution, no?
Plus, upgrading Hubble wouldn't get us close either. JWST is specifically designed to shield the sensors from IR/heat, and it's 1 million miles from Earth for a similar reason.
Redshift is indeed a matter of speed. But due to the expansion of the universe, relative speed and distance are directly related (Hubble's law).
So farther away means faster relative speed and thus more redshift (Doppler effect).
Farther away also means older light (due to the finite speed of light).
Putting that all together means that to observe old light from the start of the universe we have to look in the IR spectrum.
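A minimal sketch of that chain of reasoning, assuming the standard relation λ_observed = λ_emitted × (1 + z):

    # Rest-frame Lyman-alpha, a workhorse emission line for distant galaxies:
    lyman_alpha_nm = 121.6

    for z in (0, 2, 6, 11):
        print(f"z={z:2d}: observed at {lyman_alpha_nm * (1 + z):6.1f} nm")

    # z= 0:  121.6 nm (ultraviolet)
    # z= 2:  364.8 nm (near-ultraviolet)
    # z= 6:  851.2 nm (edge of the near-infrared)
    # z=11: 1459.2 nm (well into Webb's territory)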
For the wavelengths the telescopes are designed to observe (primarily ultraviolet and visible for Hubble, though it can do a little bit of infrared, while JWST looks at the near- and mid-infrared), resolution is fairly comparable, though JWST has a much wider field of view and doesn't have to sit idle when it passes around the sunward side of the Earth the way Hubble does.
A major issue with Hubble & JWST comparisons is just that they're designed to look at different wavelengths of light. A lot of what JWST will see is completely invisible to Hubble, and no amount of observing time can compensate for that.
A crude analogy: two cameras are pointed towards a wall. Camera #1 is good, but it is blocked by the wall. Camera #2 has a special trick: some magic that lets it look behind the wall.
Both have resolutions and so on. But no matter how high the resolution or how long it stares, cam1 is fundamentally blocked by the wall. It can take extremely high-res photos of the wall itself, but it can never see anything behind it.
Cam2 could deliver infinitely more than cam1, because who knows: there could be a hundred, a thousand, a million, or a never-ending world of things behind that wall that cam1 can never see or capture.
Cam1 is Hubble, cam2 is JWST, and the wall is the gas and dust that block visible light but are essentially transparent at the infrared wavelengths all around us. JWST can peer deeper into the _same area_ of space and see things behind the wall that Hubble never can.
No, exposure time is not enough. Resolution is a function of the size of the primary mirror; exposure time just allows more photons to be captured at that resolution. With the JWST primary mirror dwarfing Hubble's, it will always have better imagery.
Regardless of exposure, you have to consider wavelength. There are some things JWST can see that are completely invisible to Hubble, or, similarly, there are objects that are opaque to Hubble that JWST can see right through. Just look at all the extra stars that appear in the image of the Carina Nebula for an example of that.
Webb's physical dimensions are larger than Hubble's. The "collecting area" of Webb is 273 sq ft to Hubble's 46, per Wikipedia. The two telescopes are sensitive to different (but somewhat overlapping) bands of light. Hubble works through the visible spectrum while JWST is almost exclusively infrared.
To the "can Hubble do anything Webb can do but with more time", the answer is no, due to the lack of mid-infrared sensitivity, among other things like atmosphere.
I worked in astronomy software for a few years for a different telescope, the LSST. I am not an expert, but I was in this world enough to answer.
The short version - it converges faster (probably like 5-10x faster), but also (as everyone else said) works in different wavelengths.
You can think of a telescope as a "photon bucket." The number of photons it collects is proportional to the area of the aperture. Webb's aperture area is 25.4 square meters, while Hubble's is 4 square meters, so roughly speaking JWST will get photons about 6 times quicker than Hubble.
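In those terms, the OP's question has a partial yes, for sources both telescopes can see at all. A sketch (the flux value is made up):

    webb_area = 25.4    # m^2, from above
    hubble_area = 4.0   # m^2, from above

    flux = 1e-3         # photons per m^2 per second from a faint source (made up)
    exposure = 1000.0   # seconds on Webb

    webb_photons = flux * webb_area * exposure
    # Matching the photon count on Hubble just scales time by the area ratio...
    hubble_exposure = exposure * (webb_area / hubble_area)
    print(hubble_exposure)  # ~6350 s, i.e. ~6.35x longer
    # ...but only at wavelengths Hubble can detect at all, and ignoring
    # everything downstream of the mirror.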
But that's only the roughest measure. Once you've got the photons, what do you do with them? You send them to a detector. There's loss in this process - you bounce off of mirrors, with some small loss. You pass through band filters to isolate particular colors, which have more loss. The detector itself has an efficiency; in CCD cameras people speak of "quantum efficiency" - the probability that a photon induces a charge that can be counted when you read out the chip. That quantum efficiency depends on the photon's wavelength.
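As a toy model of that gauntlet (every efficiency number here is invented for illustration, not a real instrument value):

    mirror_reflectivity = 0.98   # per bounce (made up)
    n_bounces = 3                # mirrors in the optical path (made up)
    filter_transmission = 0.80   # band filter (made up)
    quantum_efficiency = 0.85    # detector QE at this wavelength (made up)

    # End-to-end throughput is just the product of the pieces:
    throughput = (mirror_reflectivity ** n_bounces
                  * filter_transmission
                  * quantum_efficiency)
    print(throughput)  # ~0.64 - roughly a third of collected photons never count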
Furthermore - the longer your exposure, the more cosmic rays you get which corrupt pixels. You can flush the CCD more often and detect the cosmic rays and eliminate them, but you'll eventually brush against the CCD's read-out noise, which is a "tax" of noise you get every time you read out data.
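The trade-off looks something like this, with the usual CCD noise budget (all numbers invented):

    import math

    signal_rate = 0.5    # photoelectrons per second from the source (made up)
    read_noise = 4.0     # electrons RMS per readout (made up)
    total_time = 3600.0  # one hour, split into n_reads shorter exposures

    for n_reads in (1, 4, 16, 64):
        signal = signal_rate * total_time
        noise = math.sqrt(signal + n_reads * read_noise ** 2)
        print(f"{n_reads:3d} reads: SNR = {signal / noise:4.1f}")

    # More reads reject cosmic rays better but pay the read-noise tax each time:
    #   1 reads: SNR = 42.2
    #   4 reads: SNR = 41.7
    #  16 reads: SNR = 39.7
    #  64 reads: SNR = 33.9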
So this all gets complicated! People spend many years characterizing the detection capabilities of these instruments, and write many pages on them.
HST's camera is more complicated to characterize, partly because it's older. Radiation has damaged and degraded many of the components so they have a lot of noise. The details of how this works are at the edge of human knowledge, so we don't have a great model for them. From the STIS handbook:
Radiation damage at the altitude of the HST orbit causes the charge transfer efficiency (CTE) of the STIS CCD to degrade with time. The effect of imperfect CTE is the loss of signal when charge is transferred through the CCD chip during the readout process. As the nominal read-out amplifier (Amp D) is situated at the top right corner of the STIS CCD, the CTE problem has two possible observational consequences: (1) making objects at lower row numbers (more pixel-to-pixel charge transfers) appear fainter than they would if they were at high row numbers (since this loss is suffered along the parallel clocking direction, it is referred to as parallel CTE loss); and (2) making objects on the left side of the chip appear fainter than on the right side (referred to as serial CTE loss). In the case of the STIS CCD, the serial CTE loss has been found to be negligible for practical purposes. Hence we will only address parallel CTE loss for the STIS CCD in this Handbook.
The current lack of a comprehensive theoretical understanding of CTE effects introduces an uncertainty for STIS photometry.
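A crude way to picture the parallel CTE loss (purely illustrative; this is not STScI's correction model, and the leak fraction is made up):

    # Each row-to-row transfer toward the readout amplifier leaks a tiny
    # fraction of the charge, so pixels far from the amplifier lose more.
    loss_per_transfer = 1e-4   # made up

    true_signal = 1000.0   # electrons in a star's pixel
    for n_transfers in (10, 500, 1000):
        measured = true_signal * (1 - loss_per_transfer) ** n_transfers
        print(f"{n_transfers:4d} transfers: {measured:6.1f} e-")

    #   10 transfers:  999.0 e- (near the amplifier, almost lossless)
    #  500 transfers:  951.2 e-
    # 1000 transfers:  904.8 e- (far rows read ~10% fainter)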
Now - this was all about how many photons you collect. When humans look at an image, they also care a lot about how fine the detail in it is. This comes down to the resolution of the telescope's imaging systems. Resolution is limited by the diffraction limit of the aperture, by how finely the detector's pixels sample the image, and by the optical train of the telescope - the aberrations and distortions introduced by the mirrors that focus light onto the detector's pixels.
Hubble has a high-res camera, and a separate wide-angle camera. Hubble's high-res camera actually outperforms JWST - it can resolve down to 0.04 arcsec, while JWST's can go to around 0.1 arcsec. But JWST's camera has a much wider field of view.
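To see which limit bites where, compare the diffraction limit against the pixel sampling (a sketch; the pixel scales below are from memory and should be treated as assumptions):

    def limiting_resolution(diffraction_arcsec, pixel_scale_arcsec):
        # A detector Nyquist-samples the optics when pixels are at most half
        # the diffraction limit; coarser pixels set the floor instead.
        return max(diffraction_arcsec, 2 * pixel_scale_arcsec)

    # Hubble in the visible: diffraction ~0.058", pixels ~0.04" (assumed)
    print(limiting_resolution(0.058, 0.04))   # 0.08"  -> pixel-limited
    # JWST NIRCam short-wave: diffraction ~0.077", pixels ~0.031" (assumed)
    print(limiting_resolution(0.077, 0.031))  # 0.077" -> diffraction-limited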
I'm no expert either, but I imagine that long exposure times come with more motion blur. So just cranking up exposure time does not necessarily result in better pictures.
The most pronounced effect might be the parallax of nearby stars, out to a few thousand light-years at the outside. That would be observable in images taken at opposite sides of Earth's orbit around the Sun, a baseline of about 300 million km (186 million miles). Even that will be phenomenally small: too small to observe for most objects within our own galaxy (the Milky Way), let alone the distant objects JWST is most concerned with.
From Wikipedia:
In 1989 the satellite Hipparcos was launched primarily for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy.
The Hubble telescope WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to 3,066 parsecs (10,000 ly) for a small number of stars.[10] This gives more accuracy to the cosmic distance ladder and improves the knowledge of distances in the Universe, based on the dimensions of the Earth's orbit.
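The arithmetic behind that limit is pleasingly simple: distance in parsecs is 1 over the parallax in arcseconds. (The 10-sigma "reliability" threshold below is my assumption, chosen because it lands near the quoted figure.)

    precision_uas = 30.0   # WFC3 precision, within the quoted 20-40 microarcsec
    required_snr = 10.0    # demand parallax >= 10x its uncertainty (my assumption)

    min_parallax_arcsec = required_snr * precision_uas * 1e-6   # 3e-4 arcsec
    max_distance_pc = 1.0 / min_parallax_arcsec
    print(max_distance_pc)         # ~3333 pc, the ballpark of the 3,066 pc quoted
    print(max_distance_pc * 3.26)  # ~10,900 light-years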
JWST's optical acuity is roughly similar to Hubble's --- despite the larger mirror surface, it uses longer wavelengths of electromagnetic radiation, which have lower resolving power.
Movement of the JWST itself is kept to an absolute minimum for obvious reasons. It would simply be unusable as a telescope if this weren't the case.
Absolute motion of the objects being imaged ... also isn't a factor, as even the smallest resolvable detail in a JWST image spans a tremendous distance. It's possible that a nearby (neighbouring-galaxy) nova event might show observable motion over days or weeks, but even that is unlikely. The interesting stuff in such an event is mostly the changes in brightness and the evolution of the light emissions.
In the case of the Carina Nebula image, 8,500 light-years distant (that is, astronomically near), the individual dust structures are light-years in length. The distance from the Earth to the Sun is roughly 1/63,000th of a light-year --- far too small to visualise in those images. The individual stars shown are not dots or disks but points, whose apparent size is a matter of diffraction and saturation effects on the JWST itself.
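To make "too small to visualise" concrete (small-angle arithmetic; the 0.1" resolution figure is borrowed from upthread):

    # By the definition of the parsec, a 1 AU baseline subtends 1/d arcseconds
    # at a distance of d parsecs.
    carina_distance_pc = 8500 / 3.26             # ~2607 pc

    earth_sun_angle = 1.0 / carina_distance_pc   # ~0.00038 arcsec
    jwst_resolution = 0.1                        # arcsec, from upthread

    print(jwst_resolution / earth_sun_angle)  # ~260: the Earth-Sun distance fits
                                              # ~260 times inside one resolution element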
Even where there might be movement, individual images are composed of multiple exposures, "stacked" to take the median observed signal strength. This does, in a way, eliminate motion effects, but the moving entities are cosmic rays, which create random signatures on JWST's sensors, not movement of the telescope or its targets.
Well, I mean, JWST is in orbit both around the L2 point and around the Sun. Its sensitive equipment must also face away from the Sun. So there's a lot of movement going on out there.
I'm sorry but no no no. These telescopes are tracking the objects they are imaging specifically to avoid imaging issues from motion. This isn't some dude in the backyard with an alt-az scope bought from a Sears catalog.
I really hope you were trolling with this response