
The paper describes a split Alvarez (Lohmann) lens [1,2] with a phase modulator between the two elements. I didn't do the math, but it looks like the phase modulator is optically equivalent to a mechanical shift of the Alvarez lenses over regions of the field of view. Alvarez lenses have higher aberrations and are relatively bulky compared to normal lenses. AR was referenced in the paper, but it will be hard to make this lens compact with good image quality over a large field of view.

1. https://www.laserfocusworld.com/optics/article/16555776/alva...
2. https://pdfs.semanticscholar.org/55af/9b325ba16fa471e55b2e49...


Each pixel of such a scanner would need to somehow scan the spectral content. For example, imagine an array of fibers, each transporting light from an image-plane pixel to a spectrometer (bulky, expensive). Slit (a.k.a. push-broom) scanners take each pixel of the slit and disperse its light perpendicular to the slit onto a 2D sensor array (more compact, but a 1D mechanical scan is required). I also recall seeing spectral (color) filters made from dispersive materials sandwiched between rotating polarizers to filter (scan) the light entering the camera (expensive, compact).


Sounds great, but I often find myself wondering "where's the catch?". There's not enough info in the abstract to judge for myself whether the idea has legs. I'm sure it'll get more press if there's something to it.


An apt username to be commenting on light-related topics.


As I read the article, the tsunami wave (water) displaces air at the surface and creates a sound wave, and gravity waves, that travels to the upper atmosphere. These waves then interact with electrons in the upper atmosphere.


> and gravity waves, that travels to the upper atmosphere

You spoke correctly. But to further clarify, these are gravity waves, not gravitational waves.


The article's `expression problem matrix` section states that the goal is to make it `easy to add ops` and `easy to add types`. My learning of Rust so far indicates Rust satisfies both: traits solve the `ops` side (you define a trait for each op you want to support), and Rust's trait implementations (impl) solve the problem of adding types. Of course, for each new {op, type} combination someone must write the code, but Rust lets you do that with its trait and generics systems. Am I missing something important?
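
To make that concrete, here's a minimal sketch (with made-up names) of what I mean. Adding a new type is just one more impl block; nothing that already compiles has to change:

    // Hypothetical example: one trait (the "op") and two types.
    trait Area {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }

    impl Area for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    // Added later: a new type is one new impl block, and the code above is untouched.
    struct Square { side: f64 }

    impl Area for Square {
        fn area(&self) -> f64 { self.side * self.side }
    }

    fn main() {
        let shapes: Vec<Box<dyn Area>> = vec![Box::new(Circle { r: 1.0 }), Box::new(Square { side: 2.0 })];
        for s in &shapes {
            println!("{}", s.area());
        }
    }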


When people talk about the "expression problem", what they're describing is the fact that (in your example), if you add a new method to the trait, you have to go around and implement that new method on every type that implements the trait.

This is in contrast to if you had used an enum (sum type) instead, wherein adding a new operation is easy and can be done in a single place. But then in exchange, adding new variants requires going around and updating every existing pattern match to support the new variant.
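
Roughly, the enum side of the trade-off looks like this (hypothetical names): each new operation is a single function with one match, but a new variant would touch every one of those matches.

    // Hypothetical example: the enum (sum type) side of the trade-off.
    enum Shape {
        Circle { r: f64 },
        Square { side: f64 },
    }

    // A new operation is written once, in one place...
    fn area(s: &Shape) -> f64 {
        match s {
            Shape::Circle { r } => std::f64::consts::PI * r * r,
            Shape::Square { side } => side * side,
        }
    }

    // ...and so is the next one...
    fn perimeter(s: &Shape) -> f64 {
        match s {
            Shape::Circle { r } => 2.0 * std::f64::consts::PI * r,
            Shape::Square { side } => 4.0 * side,
        }
    }
    // ...but adding a Shape::Triangle variant would force edits to both matches above.

    fn main() {
        println!("{}", area(&Shape::Circle { r: 1.0 }));
        println!("{}", perimeter(&Shape::Square { side: 2.0 }));
    }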


Thanks. I wasn't thinking of enums. To the extent one designs a trait to use an enum type (or the enum to satisfy the trait), one wins. But it seems impossible to avoid code to handle all future {type, op} combinations. The nice thing I've seen with Rust is the ability to add to what's been done before without breaking what already works. I'm thinking of "orphan rules" here.


I'm wondering: didn't inheritance and abstract methods already solve this problem?

I know inheritance has its own severe problems, but adding a generic abstract method at the base class could create reusable code that can be accessed by any new class that inherits from it.

P.S. ah ok, it's mentioned in the article at the Visitor section.


I think the problem is that, at the base class, you don't necessarily know how to handle things that are encountered at the concrete class level, or you don't want to put the logic for all implementations into your base class. That would defeat the purpose of your abstract class and the hierarchy.


If you have to handle specific behavior for the new method in an existing type, of course you will need to add a new implementation of the method for that type.

As I understand the expression problem, the limitation of programming languages is that they force you to modify the previously existing types even when the new behavior is a generic method that works the same for all types, in which case it should be enough to define it once for all classes. A virtual method in the base class should be able to do that.
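
In Rust terms (just a sketch, with made-up names), the closest analogue to that virtual method is a default method on the trait: it's defined once and every implementer picks it up without any extra code.

    // Sketch: a default trait method as the "define it once" case.
    trait Shape {
        fn area(&self) -> f64;

        // Written once; all implementers get it for free unless they override it.
        fn describe(&self) -> String {
            format!("a shape with area {:.2}", self.area())
        }
    }

    struct Circle { r: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    fn main() {
        println!("{}", Circle { r: 1.0 }.describe());
    }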


That only moves the problem to the base class. There is no difference in effort between modifying the base class to foresee every case that could arise in the derived classes and adapting all the derived classes. In fact, you might even end up typing more, because you may need extra "if"s in the base class, since you don't know which specific implementation you are dealing with when you implement the behavior there. You still have to handle the same number of cases. And it is still bad when you need to extend something a library does that you are not maintaining.

This does not solve the issue at its core, I think.


This seems easy to work around: don't modify the existing trait, but define a new trait with your new method and impl it atop the old trait.

That seems like a pretty OK 90% solution, and in a lot of ways a cleaner and better-defined way to grow your types anyhow.
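
For example (made-up names), an extension trait with a blanket impl: the old trait is never touched, and every existing implementer gets the new method for free.

    // Sketch: a new trait layered on top of the old one.
    trait Area {
        fn area(&self) -> f64;
    }

    // The new method lives in its own trait instead of modifying Area.
    trait Describe: Area {
        fn describe(&self) -> String {
            format!("area = {:.2}", self.area())
        }
    }

    // Blanket impl: anything that already implements Area gets Describe.
    impl<T: Area> Describe for T {}

    struct Circle { r: f64 }

    impl Area for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    fn main() {
        println!("{}", Circle { r: 1.0 }.describe());
    }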


I think the alternative is shown in figure 2. SiN (silicon nitride) is different from optical fiber. Both versions shown in figure 2 lack detail, especially in the small boxes labeled GC, Mirror, and PD. Depending on the details, one might put micro optical assemblies between or in those boxes in the figure. In any case, SiN waveguides are small, so you can pack many lanes in a small space.


Can anyone speak to the image processing used? In particular, how is depth inferred from a single 2D image? It seems to me one would need both depth and angle over the field of view to back out the lens focal length. The EXIF format doesn't seem to contain metadata helpful for the focal length calculation.


As @nanoanderson mentioned, we use the "FocalLengthIn35mmFormat" tag from EXIF. Let me explain why: since cameras have different sensor sizes, comparing actual focal lengths directly can be misleading (https://en.wikipedia.org/wiki/Crop_factor#Introduction). By using the 35mm film equivalent, we can compare fields of view across all cameras on the same scale. For instance, a 50mm lens on a crop sensor camera shows a similar field of view to a 75mm lens on a full-frame camera.

We're using the 'exifr' library for EXIF data extraction: https://www.npmjs.com/package/exifr
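
To spell out the arithmetic (sketched here in Rust for illustration; it's not our actual code, which is JavaScript using exifr): the 35mm equivalent is just the real focal length times the sensor's crop factor.

    // Back-of-the-envelope sketch of the 35mm-equivalent conversion.
    const FULL_FRAME_DIAGONAL_MM: f64 = 43.27; // diagonal of a 36 x 24 mm frame

    fn crop_factor(sensor_diagonal_mm: f64) -> f64 {
        FULL_FRAME_DIAGONAL_MM / sensor_diagonal_mm
    }

    fn equivalent_focal_length(focal_mm: f64, crop_factor: f64) -> f64 {
        focal_mm * crop_factor
    }

    fn main() {
        // A 50 mm lens on a ~1.5x crop sensor frames roughly like a 75 mm lens on full frame.
        println!("{:.0} mm", equivalent_focal_length(50.0, 1.5));
        // A typical APS-C diagonal of ~28.4 mm gives a crop factor of about 1.52.
        println!("{:.2}x", crop_factor(28.4));
    }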


There is a FocalLength tag in the EXIF spec. https://exiftool.org/TagNames/EXIF.html


That link made my day! Thank you. From what I read, their builds perform better than the originals, and they even make landscape versions. Woo hoo! RPN rules in my book. I picked up an HP 32 before they were gone forever, but I've been afraid to use it. It's great to know about SwissMicros; they'll get my business soon.


Not a problem.

I have their DM42 and it feels good in the hands. The case is solid metal too and the screen is nice. I believe the performance is better than the originals (not hard as some of these were sold before my birth and I'm not exactly a spring chicken anymore) and they fixed several bugs.


Thanks! I'm sick and couldn't bring myself to do the wavelength calculation. Your comment helped clarify my thinking. I think people working on microwave equipment (frequency counters, ...) work in Hz. That's probably why they used the term.


Neither the article nor the official link gave much optical design detail. Here's my guess: a (substantially) radially symmetric system comprising a wide-angle, positive, short-focal-length first lens followed closely by a negative, short-focal-length lens whose diameter is small (thus covering only a small range of angles, in the center of the field of view, from the first lens). A single central sensor behind the negative element is for telephoto images, while a collection of sensors distributed radially around the first lens and off axis captures wide-angle images.

Imagine a low-index ball lens in contact with a thin high-index negative lens. That's the idea. I'm sure the real design uses multiple surfaces/elements for each lens, and I'm sure it's hyper-optimized. I'm interested to learn how close my guess is to reality.

Apologies for the complex wording to describe geometry.

