
Naive question here, but how much more resolution can we ever expect to get out of pictures like these? If it gets 100 times better, will we be able to see what the planets look like?


You may be interested in:

Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission

https://arxiv.org/abs/1802.08421

...and an excellent video someone made about that paper:

https://www.youtube.com/watch?v=NQFqDKRAROI


That was fascinating. I guess problems of scarcity/economics mean we can't just launch a spherical shell of satellites into that ~500 AU orbit of the sun, not to mention the data retention requirements, but man, it would be even more awesome to be able to synthesize an image at any point in the 100 ly range... One can only dream, I guess. But even a single image of a single planet's surface would blow everyone's mind, so string of pearls it is! Please happen.


That's probably a decent next step (I'd estimate two decades out) after the James Webb telescope goes online soon. Hoping for a good launch.


Oh, don't remind me of JWST. I'm actively pretending it doesn't exist until it gets results, because if anything happens... I mean, there are different degrees of disaster, but in terms of... you know, let's just pretend we never mentioned it.


"just don't think about it, Morty"


We'd better hurry that up. 500 AU is about 3.5 times farther than the Voyagers have traveled, and it took them 40+ years to get that far. It would be interesting to fiddle with the fuel/payload/acceleration/speed numbers assuming refueling in LEO.
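
For a rough feel (back-of-envelope, not from the paper, with made-up Isp and mass-ratio numbers), the Tsiolkovsky rocket equation shows why rockets alone, even refueled in LEO, struggle to reach speeds in the ~100 km/s class:

    import math

    G0 = 9.80665  # standard gravity, m/s^2

    def delta_v(isp_s, m_wet, m_dry):
        """Ideal delta-v (m/s): dv = Isp * g0 * ln(m_wet / m_dry)."""
        return isp_s * G0 * math.log(m_wet / m_dry)

    # Hypothetical stages, both at a generous 10:1 wet/dry mass ratio:
    for name, isp in [("chemical, Isp ~450 s", 450), ("ion, Isp ~3000 s", 3000)]:
        print(f"{name}: {delta_v(isp, 10.0, 1.0) / 1000:.1f} km/s")
    # chemical, Isp ~450 s: 10.2 km/s
    # ion, Isp ~3000 s: 67.7 km/s

Even the ion stage falls well short of the ~104 km/s cruise speed mentioned downthread, which is presumably part of why the paper goes with solar sails instead.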


At ~14:20 in the video they start describing the solar sails that get the spacecraft up to 22 AU/year, passing Pluto's orbit in about 2 years (roughly 230,000 mph, or 105,000 m/s, or 0.035% of c).
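
Those numbers check out; a quick unit-conversion sanity check (Python, standard constants):

    AU_M = 1.495978707e11        # meters per AU
    YEAR_S = 365.25 * 86400      # seconds per Julian year
    C = 299_792_458              # speed of light, m/s

    v = 22 * AU_M / YEAR_S       # 22 AU/year in m/s
    print(f"{v:,.0f} m/s")                    # ~104,300 m/s
    print(f"{v * 3600 / 1609.344:,.0f} mph")  # ~233,000 mph
    print(f"{100 * v / C:.3f}% of c")         # 0.035% of c
    print(f"Pluto (~39.5 AU) in {39.5 / 22:.1f} yr")  # ~1.8 years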


I am interested in that. Thanks for sharing!


That is a really interesting video!


Note that in these images the planets are unresolved; they are point sources. A point source still spans multiple pixels because the telescope's diffraction pattern (the point-spread function) is sampled at roughly the Nyquist rate.
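
A toy sketch of my own (numpy, using a Gaussian as a stand-in for the real Airy-pattern PSF) shows why:

    import numpy as np

    fwhm_pix = 2.0                 # Nyquist sampling: ~2 pixels per PSF FWHM
    sigma = fwhm_pix / 2.3548      # convert FWHM to Gaussian sigma
    y, x = np.mgrid[0:9, 0:9]
    psf = np.exp(-((x - 4.3)**2 + (y - 3.8)**2) / (2 * sigma**2))

    print((psf > 0.01 * psf.max()).sum())   # ~20 pixels above 1% of peak

So even a perfectly point-like star lights up roughly 20 pixels.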


Surprisingly, you can get a lot of information out of a system that seems "lost" at first.

https://www.csail.mit.edu/news/imaging-method-uses-shadows-r...

The problem here is the complexity of the "room" in between and the background, i.e. light deformed by gravitational lenses and so on, but in theory a lot more can be extracted from the "noise" in the neighbouring pixels.
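
The neighbouring-pixel point holds even for an ordinary star field. A toy centroiding sketch (mine, not the MIT method, with hypothetical numbers) recovers a source position to a small fraction of a pixel from how its light spills across pixels:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, true_x, true_y = 0.85, 4.32, 3.87   # hypothetical PSF width/position
    y, x = np.mgrid[0:9, 0:9]
    img = np.exp(-((x - true_x)**2 + (y - true_y)**2) / (2 * sigma**2))
    img += rng.normal(0, 0.005, img.shape)     # add a little read noise

    cx = (img * x).sum() / img.sum()           # intensity-weighted centroid
    cy = (img * y).sum() / img.sum()
    print(f"recovered ({cx:.2f}, {cy:.2f}) vs true ({true_x}, {true_y})")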


If I have learned correctly from watching police procedurals on television, it's simply a matter of verbalizing the mystical incantation "zoom. enhance." and typing rapidly on a keyboard, and within seconds you'll be able to see the make of the watch on an alien's left tentacle.

Is it not just that easy?



