You could probably use just an X scanner: instead of a CCD line sensor, use a regular 2D image sensor with a "1 pixel wide" slit aperture that crops the image perpendicular to the direction in which the prism disperses the light. Then instead of dispersing a single pixel at a time, you disperse a whole line at once.

You would reduce the time required by a factor of the square root of the number of pixels you want (assuming a square image).
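A minimal sketch of that pushbroom-style assembly, assuming each 2D frame captured through the slit has axes (spatial y, wavelength) and `capture_frame` is a hypothetical stand-in for the real sensor readout:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n_lam = 100, 80, 31  # scan positions, slit pixels, spectral bands

def capture_frame(x):
    # Stand-in for one sensor readout at slit position x:
    # each frame is (spatial_y, wavelength).
    return rng.random((n_y, n_lam))

# Scanning in x stacks the frames into an (n_x, n_y, n_lambda) cube:
# one full spectrum per (x, y) pixel, acquired in n_x exposures
# instead of n_x * n_y single-point scans.
cube = np.stack([capture_frame(x) for x in range(n_x)], axis=0)
```

So a 100x100-pixel image needs 100 exposures rather than 10,000, which is the square-root speedup mentioned above.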

(This is what we do in momentum-resolved electron energy loss spectroscopy. In that situation we have electromagnetic lenses that focus the electrons that have been dispersed, so we don't have as bad a chromatic aberration problem as the other response mentions).

I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!

reply
> I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!

Here[1] are some 31-band hyperspectral images of butterflies. SciPy can unpack the .mat files, and NumPy/Pillow can turn the bands into normal images. Then perhaps vibecode a slider, or just browse the band images?

[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html (includes 8 butterfly 31-band hyperspectral visible-light images). The butterflies also appear in their VIS-SNIR dataset, among others.
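A sketch of the band-extraction step. The real files would be loaded with `scipy.io.loadmat`; the variable name inside the dataset's .mat files is not specified here, so a synthetic cube stands in to keep the example self-contained:

```python
import numpy as np
from PIL import Image

# Real data: cube = scipy.io.loadmat("butterfly.mat")[...]  # key is an
# assumption; inspect loadmat(...)  to find it. Shape (H, W, 31).
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 31))  # synthetic stand-in

def band_to_image(cube, band):
    """Normalize one spectral band to 8-bit and wrap it as a PIL image."""
    b = cube[:, :, band].astype(np.float64)
    b = (b - b.min()) / max(b.max() - b.min(), 1e-12)
    return Image.fromarray((b * 255).astype(np.uint8))

img = band_to_image(cube, 15)  # roughly the middle of the visible range
img.save("band_15.png")
```

A slider UI would just call `band_to_image(cube, slider_value)` on each change.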

I knew of the site from having explored this question: "First-tier physical-sciences graduate students are often deeply confused about color. Color is commonly taught, starting in K... very very poorly. So can we create K-3 interactive content centered around spectra, and give an actionable understanding of color?"

reply
Very nice idea! That makes it much easier!
reply
A problem for multispectral imagery (even within visible RGB) is that the wavelengths of light are different, so the lens cannot be in focus across the whole spectrum at once. I have tested this with a few of my SLR lenses. If you have the blue channel perfectly in focus, the red channel isn't just slightly out of focus; it is noticeably way out.
reply
This is called chromatic aberration, for those who are intrigued.

Given that regular phone cameras have sensors that detect RGB, I wonder if one could get noticeably sharper images with three camera lenses (and single-color sensors) side by side, each behind an R, G, or B color filter respectively, so that the camera could focus perfectly for each wavelength band.

reply
The next issue would be the perspective distortion in the merged image.
reply
There are lenses out there designed for apochromatic performance across the UV-Vis-IR band, but they tend to be really pricey.

The Coastal Optical 60mm is a frequently cited one. UV in particular is challenging, because glass that works well in the visible range can transmit UV quite poorly. Quartz is better, but it drives up the cost a lot and comes with other tradeoffs.

reply
I've had this problem as well, but it's purely down to the optical properties of the lens and extremely consistent from image to image, so you can calibrate and correct for it, as long as you focus each wavelength and capture the data separately.
reply
I don't think you can properly calibrate for it unless you also move the camera to compensate for focus breathing, and I'm not sure even that would fully account for it. That said, these things are only really noticeable when pixel peeping.
reply
Focus breathing can be compensated for. The "breathing" only changes the effective focal length, not the location of the camera, so you can map the pixels to match where they should be and bilinear/bicubic interpolate appropriately.

Shoot a checkerboard at both wavelengths each focused properly and then compute the mapping.

If you're shooting macro stuff then maybe you are changing the effective location of the camera slightly depending on the exact mechanics of the lens and whether the aperture slides with the focusing, but the couple of mm shift in camera location won't matter for landscapes.
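The remap-and-interpolate step above can be sketched in plain NumPy. The scale factor would come from the checkerboard calibration described earlier; modeling breathing as a pure magnification change about the image center is the assumption stated above:

```python
import numpy as np

def correct_breathing(img, scale):
    """Undo focus breathing modeled as a pure magnification change:
    resample a 2D image about its center by `scale`, using bilinear
    interpolation (`scale` comes from checkerboard calibration)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Source coordinates: where each output pixel came from in the input.
    sy = (yy - cy) / scale + cy
    sx = (xx - cx) / scale + cx
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    fy = np.clip(sy - y0, 0.0, 1.0)
    fx = np.clip(sx - x0, 0.0, 1.0)
    # Bilinear blend of the four neighboring input pixels.
    return (img[y0, x0] * (1 - fy) * (1 - fx)
            + img[y0 + 1, x0] * fy * (1 - fx)
            + img[y0, x0 + 1] * (1 - fy) * fx
            + img[y0 + 1, x0 + 1] * fy * fx)
```

Applying this to one wavelength's frame (with its calibrated scale) before merging aligns it with the other channel; a full homography fit would also absorb any small decentering.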

Alternatively, use cine lenses which are engineered not to breathe, but they are typically more expensive for that reason.

reply