UltravioletPhotography

How to make a TriColour image, my method


Stefano


This discussion became fun.

Stefano, I like how you modeled in the different bandpass filters using various percentages of RGB. I am surprised your stacking software doesn't do alignment. What's the point of it then? Just use GIMP to stack the images in layers, or use the free alignment tools from Hugin or any of the free focus-stacking packages.

 

Andy, I like those papers. Interesting approach.

I wonder if we could just take an image of the standard color checker passport under our favorite UV light source, with a suitable UV filter over our detector (to avoid fluorescence), and model in the shift from the standard D50 reflectance values here:

https://xritephoto.com/ph_product_overview.aspx?ID=820&Action=support&SupportID=5159

 

That should then give you a standard "color" in the UV-illuminated bands.

 

 

 

Link to comment
Alignment is definitely an issue, although sometimes it isn't a big problem if the lens doesn't have a big focus shift.
Link to comment
Andy Perrin
David, for the method to work properly, the color checker samples have to have large variance at each wavelength. Judging by prior pictures that have appeared on the board, this is not the case (for example, there are no false yellows in the color checker). It would actually be best to use a Sparticle with some PTFE behind it or something. Unfortunately I don't have one yet.
Link to comment

Do you think some lavender, yellow and neutral targets would be enough? For UV-yellow, I noticed that some black and yellow fabrics are UV-yellow. Flowers have brighter UV-yellows, but they wilt.

 

I am guessing you want some UV-green if you want more accurate "deep UV" colors?

Link to comment
Andy Perrin
Stefano, what's needed is large variation in the dominant wavelength reflected (short waves, long waves, medium waves...), but unfortunately we also need the spectrum of each sample to be known. If I had a spectroscopy setup I could measure it, but I don't.
Link to comment
So, say the dominant wavelengths are 315, 340 and 380 nm: we need targets that vary a lot there (like some very reflective at 340 nm, some very absorbing)? And with known spectra. Am I understanding correctly?
Link to comment
Andy Perrin

Yep that’s right. You can think of what’s happening here as a form of interpolation between the known spectrums in the color checker. So if there is not enough variation at a particular wavelength then the interpolated value will not be accurate since it will always be within the range of the known reflectances at that wavelength.

 

The nature of the method (Wiener filtering) is statistical, and the underlying assumption is that the spectra of the samples in the color checker are truly random, meaning that all possible spectra will occur if you have enough samples. This is why in the papers they use a very large color checker, much more elaborate than the Passport, with several hundred colors.
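
To make the interpolation idea concrete, here is a minimal numpy sketch of a Wiener-style estimator trained on a color checker. The array names and the random "measurements" are stand-ins purely for illustration, not anything taken from the papers:

```python
import numpy as np

# Synthetic placeholder data: 24 checker squares, 31 wavelength samples, 3 filters.
rng = np.random.default_rng(0)
n_patches, n_wl, n_filters = 24, 31, 3

R = rng.random((n_patches, n_wl))       # known reflectance spectra of the checker squares
C = rng.random((n_patches, n_filters))  # camera responses of the same squares, same light

# Wiener-style matrix mapping camera values to an estimated spectrum:
# W = E[s c^T] (E[c c^T])^-1, approximated from the training set.
W = (R.T @ C) @ np.linalg.inv(C.T @ C)  # shape (n_wl, n_filters)

pixel = rng.random(n_filters)           # 3 filter values for an unknown pixel
spectrum_est = W @ pixel                # estimate: a blend of the training spectra
```

Because the estimate is always built from correlations in the training set, a checker with little variance at some wavelength can never produce an accurate estimate there, which is the point about the missing false yellows.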

Link to comment
But... how can you tell the spectrum of something given its color? More than one spectrum can produce a given color (except for pure primary colors).
Link to comment
Andy Perrin

But... how can you tell the spectrum of something given its color? More than one spectrum can produce a given color (except for pure primary colors).

Two things.

1) You aren't working out the spectrum given JUST the color; you are working it out given the color AND the colors of the color checker squares under the same lighting. That puts a big constraint on possible metamers. On top of that, people usually don't stop with just the three R, G, and B filters; sometimes they include additional filters, which further nails it down.

 

2) At the end of the day it IS just an estimate of the spectrum at each pixel, not a measurement of the spectrum. Specifically, it's the spectrum that minimizes the mean square error between the spectra of all the color checker squares and the pixel you are trying to find the spectrum of. However, there is an error bar on this estimate which depends on things we've already discussed, like how much variance there is in your color checker, and even just how many squares are in the color checker.

--

 

Speaking more broadly than just this particular algorithm, this is an example of what's called an "inverse problem" -- the "forward problem" would be "how do you calculate the color given the spectrum" (you just do the three integrals) and the "inverse problem" is "how do you find the spectrum given the color." Inverse problems as a class suffer from being ill-posed, meaning that the solution may not be unique without making additional assumptions or adding additional data (like the color checker squares).
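
For the curious, the forward problem really is just three integrals. A minimal sketch, with Gaussian stand-ins for the real sensitivity curves (real work would use the CIE color matching functions):

```python
import numpy as np

wl = np.linspace(380.0, 730.0, 176)          # wavelength grid, nm
dwl = wl[1] - wl[0]

def bell(mu, sigma=40.0):
    # Gaussian stand-in for a channel sensitivity curve
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

r_bar, g_bar, b_bar = bell(600.0), bell(550.0), bell(450.0)

illuminant = np.ones_like(wl)                # flat light source, for simplicity
reflectance = 0.5 + 0.4 * np.sin(wl / 50.0)  # some made-up surface spectrum

# channel = integral of radiance(lambda) * sensitivity(lambda) d(lambda)
radiance = illuminant * reflectance
R = np.sum(radiance * r_bar) * dwl
G = np.sum(radiance * g_bar) * dwl
B = np.sum(radiance * b_bar) * dwl
```

Going this direction is unambiguous; it's the reverse direction that needs the extra data.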

Link to comment
Andy, thanks for the explanation. You see the color, but in context. The context is given by the other color squares, and the more you have the better.
Link to comment

Andy, a Sparticle wouldn't work.

One, it's not standard, just a bunch of filter seconds.

Two, it doesn't have a standard known color value.

Three, wouldn't you just be better off photographing a sheet of uniformly lit PTFE with your own filters?

But then you see the problems with a Sparticle.

 

I will have to get around to actually photographing the color checker with various filters and lights and see if there is a difference.

Link to comment
Andy Perrin

Dabateman, a sparticle would work fine. To answer these one by one:

 

One, it's not standard, just a bunch of filter seconds.

 

Nothing has to be standardized about the color checker (or equivalent). The requirement is that the spectra be *known*, whether from correct manufacturer data or by direct measurement. Many of the people with sparticles around here also have spectrometers.

 

Two, it doesn't have a standard known color value.

The color value isn't supposed to be standard; it's got to be MEASURED under the same light source as the image you are trying to analyze. Technically the colors are different every time because the light source is.

 

Three, wouldn't you just be better off photographing a sheet of uniformly lit PTFE with your own filters?

OK, this makes me think you haven't read the paper or haven't understood how the process works. The filter spectra actually do not need to be known for this; they just need to have significant variance across the spectrum from each other (so you can't use the same filter 3 times). The filters don't even need to be bandpass filters! You could use the Rosco catalog in visible light.

Link to comment

Sorry Andy, I didn't mean using a filter three times; I should have written 3, as in my third point. You don't need a Sparticle, just your own filters and a sheet of PTFE.

 

I see in your response to my third point you agree.

Link to comment
Andy Perrin

But I don't agree with you. I'm sorry I also used the number 3, but that was a coincidence, nothing to do with anything you said. The purpose of using the Sparticle here is to have many different filters with known spectra. It is a replacement for the color checker. I don't know what PTFE has to do with anything here.

 

Also no idea what you mean by “my filters.” There are filters that go on the camera and then you also need a color checker or equivalent. I’m just saying replace the color checker with a sparticle.

Link to comment

Since the Sparticle is typically assembled from cheap small second-hand filters, and isn't a standard thing you buy off the shelf, I guess I have to be more open-minded about what those filters could be.

 

You actually had a good point with the Rosco filters. I know Lee filters and have experience using them for lighting plays; I used to do lighting and sound work way back.

 

The advantage of the Lee filters is that they have a known spectrum. They also have known colors under different lights and different light intensities. I also made a list somewhere (I need to find it) of the ones with transmission peaks extending into the UV.

Those would be a better option, or at least a better option to build your Sparticle with.

Link to comment

I read some of this and wanted to add a few comments.

 

Say the dominant wavelengths are 315, 340 and 380 nm (as Stefano suggested); then you need "color" standards that cover each of these wavelengths.

 

Unlike with visible light, though, you have a couple of issues to deal with. Camera sensitivity is very different at 315 nm than at 380 nm, especially with the Bayer filter still present. Also, light intensity tends to vary drastically across the UV waveband, certainly the case with sunlight and flash, which is what most of us use as broadband light sources. A photo with the right exposure for the 380 nm standard in sunlight would show the 315 nm one as black. Likewise, a photo of the 315 nm one which was correctly exposed would have the 380 nm one completely overexposed. Therefore you'd need multiple images at different exposures to capture the different standards.

 

I saw the same issues with some of my Sparticle experiments: https://www.ultravioletphotography.com/content/index.php/topic/2580-build-thread-at-home-measurement-of-camera-uv-spectral-response/page__view__findpost__p__22577

 

Not a simple problem to solve.

Link to comment
Andy Perrin

Oh, I would not even try this with a Bayer camera for UV. It is true the exposure thing is a big issue for tricolors in UV in general. My interest is more on the NIR-SWIR-MWIR part of the spectrum where it won’t be an issue for me since the TriWave and Thermovision have good gain there.

 

That said, exposure can be lumped in with the light source variation in the method of the papers I quoted, so it won’t matter as long as the color checker equivalent is photographed with the SAME exposure settings.

Link to comment

Andy, thanks for all the explanations from me too!

 

(And Tricolor NIR/SWIR/MWIR would be just as interesting as Tricolor UV.)

 

I have a side question which I hope isn't too off-topic.

If more than 3 bandpass filters are used, how do you "stack" those given only 3 channels in our photo apps?

Link to comment

In this topic you can find my attempts at this. I think the best way is to simulate a camera's (or our eye's) response and stack the images so that each one contributes to one or multiple channels at the same time. So if you have the equivalent of a "yellow" wavelength, that image will contribute to both the red and the green channels.
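
As a sketch of that idea (the weights below are illustrative, not calibrated values):

```python
import numpy as np

# `bands` are monochrome float images (same size, values in [0, 1]);
# `weights[i]` gives the (R, G, B) contribution of band i.
def stack_bands(bands, weights):
    rgb = np.zeros(bands[0].shape + (3,))
    for img, w in zip(bands, weights):
        rgb += img[..., None] * np.asarray(w)   # add this band's contribution
    return np.clip(rgb, 0.0, 1.0)

# e.g. four bands, where band 2 is the "yellow" equivalent feeding red AND green:
# out = stack_bands([b1, b2, b3, b4],
#                   [(1, 0, 0), (0.5, 0.5, 0), (0, 1, 0), (0, 0, 1)])
```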

 

Andy perhaps knows a better method.

Link to comment
Andy Perrin

I have a side question which I hope isn't too off-topic.

If more than 3 bandpass filters are used, how do you "stack" those given only 3 channels in our photo apps?

 

Andrea, the method works in 3 steps:

1) Train the algorithm on the color checker using the same light (and camera settings) as the scene.

 

2) Run the algorithm using the training data to estimate hyperspectral images of the scene. In other words, you don't have just three images, you have an image at every wavelength in your training set.

 

3) Using the integrals we discussed a few pages back, calculate SYNTHETIC R, G, and B images using rescaled or shifted versions of the curves Colin posted or similar. Your (purely imaginary) light source spectrum can be anything, but the choice will define the white balance, so obviously using the standard D65 spectrum is a good starting point. You could also use the spectrum of your actual light source if known.

 

Since the number of synthetic channels is totally independent of how many filters you used to make the hyperspectral image, there's no issue of how they should be combined. You could even make photos that imitate another camera by this method if you had data on the quantum yield and Bayer filters for the other camera.
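
In code terms, step 3 is just the forward integrals applied at every pixel of the estimated cube. A minimal sketch, with array names and shapes of my own choosing:

```python
import numpy as np

def synthesize_rgb(cube, wl, light, sens):
    """cube: (H, W, n_wl) estimated hyperspectral image;
    wl: (n_wl,) wavelengths on a uniform grid;
    light: (n_wl,) imaginary illuminant spectrum (e.g. D65 samples);
    sens: (3, n_wl) synthetic R, G, B sensitivity curves."""
    dwl = wl[1] - wl[0]
    radiance = cube * light                          # broadcasts over pixels
    rgb = np.einsum('hwl,cl->hwc', radiance, sens) * dwl
    return rgb / rgb.max()                           # crude normalization
```

Swapping in more synthetic channels, or another camera's curves, only changes the `sens` array, which is why the number of capture filters and the number of output channels are independent.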

Link to comment

If you are using Image Stacker, and you have issues with .tif files, you may have two problems:

- the images you are trying to stack don't have the same size;

- you saved the .tif file with software that causes issues.

 

It happened to me that the dark frame didn't have the same size as my images, because if you save a .CR2 file as .tif you gain resolution in some way (I think cameras cut the borders of the images by 10-20 pixels, and I didn't know that). So make sure the images have the exact same size.

 

Regarding the second problem, .tif files saved in Photo Ninja give problems for odd reasons. Trying to re-save them in other programs as Andrea suggested, I found that IrfanView fixes this. So either re-save them in IrfanView or simply open your raw files there (IrfanView can handle .CR2 files; I don't know about other formats).

 

This way you will be able to use .tif images for better quality. There is still the alignment issue, so Bernard's technique is probably better (at least when merging only three images).

Link to comment

Slightly off topic.

Photoshop Elements can usually be purchased for not too much. It has layer capability for aligning images. I couldn't do without that!!

My version of PSE is old and does not have channels per se but they can be emulated with a couple of tricks.

Link to comment
  • 9 months later...

Here is another example, this time in UV.

 

All images taken under sunlight, with a full-spectrum Canon EOS M and a SvBony 0.5x focal reducer lens, slightly stopped down (f/3.5 or so).

 

Filters:

Blue channel: double 310 nm Chinese bandpass filter;

Green channel: BrightLine 340/26 filter + ZWB1 (2×2 mm);

Red channel: BrightLine 387/11 filter + Chinese BG39 (2 mm);

 

Visible reference: Chinese BG39 (2 mm).

 

As a white reference I used a paper tissue wrapped around a lead block. (The lead is itself wrapped in two layers of paper tissue, then wrapped in black tape, so it's 100% safe to handle.)

 

I usually take at least 5 photos for each band, using different exposure times (from underexposed to overexposed), to make sure I get some correctly-exposed shots.

 

Since the 310 nm filter is too dark to see anything in live view, I have to take test photos (at settings like ISO 25600 and 4-8 s of exposure) to check and adjust focus. For infinity focus I can point the camera directly at the Sun or at bright areas of the sky and focus in live view. You can do this with the lens wide open, as the amount of UVB light hitting the sensor is very low. (Pointing the camera at the Sun with a broader bandpass or longpass filter, such as those for visible and NIR photography, can damage the sensor if the lens is wide open.)

 

Since it is much harder to focus with the 310 nm filter, I start with it and then use the other filters. Being a singlet, my lens has a lot of chromatic aberration, so refocusing is needed for each band. Refocusing with the longer-wavelength filters is possible in live view.

 

Here are three images I took, rendered in raw colors in Photo Ninja (with all adjustment options unticked):

 

310 nm (ISO 25600, 8 s exposure):

[Image: 310raw.jpg]

 

340 nm (ISO 1600, 1/8 s exposure):

[Image: 340raw.jpg]

 

387 nm (ISO 1600, 1/30 s exposure):

[Image: 387raw.jpg]

 

Note: for the final image, I combined two 310 nm exposures to reduce the noise, so I didn't only use the photo shown above.

 

To convert them to B&W, I take the green channel only from the 310 nm image, as most of the signal is there (this helps reduce the noise). I remove the blue channel from the 340 nm image and keep all three channels in the 387 nm one. After doing this, I measure the brightness of the paper tissue in Microsoft Paint and adjust the exposures of the images so that the tissue has the same brightness in all of them.
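
A minimal numpy sketch of that channel selection and white-reference matching; the placeholder images and the `tissue` patch coordinates are made up for illustration:

```python
import numpy as np

# Placeholder images standing in for the three raw conversions (float RGB).
H, W = 480, 640
img310, img340, img387 = (np.random.rand(H, W, 3) for _ in range(3))
tissue = (slice(100, 150), slice(200, 250))    # region covering the paper tissue

mono310 = img310[..., 1]                       # green channel only: most signal
mono340 = img340[..., :2].mean(axis=-1)        # drop blue, average red + green
mono387 = img387.mean(axis=-1)                 # keep all three channels

def match_white(mono, target=0.9):
    # scale so the tissue patch has the same brightness in every band
    return mono * (target / mono[tissue].mean())

mono310, mono340, mono387 = (match_white(m) for m in (mono310, mono340, mono387))
```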

 

Then I colored them red, green and blue, and stacked them in ImageStacker:

[Image: somma700.jpg]

 

The color is balanced, but the images need to be aligned. This is mainly due to refocusing and to the slightly different focal lengths my lens has at different wavelengths: the longer the wavelength, the more "zoomed-in" the image becomes after refocusing.

 

There are many ways to do this; what I usually do is align the blue and green channels first, creating a cyan channel, and then align that against the red channel.

 

In this case, the procedure was the following:

 

- removing 46 pixels from the top, 25 from the bottom, 64 from the left and 51 from the right of the blue channel to make it match the green channel (creating the cyan channel);

- removing 23 pixels from the top, 25 from the bottom, 19 from the left and 34 from the right of the cyan channel to make it match the red channel.

 

All trimmings were done in Microsoft Paint.
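
For anyone who prefers scripting to Paint, the same trims can be done with Pillow. A sketch with made-up filenames, assuming the trimmed image is resized back to the reference channel's size so the layers overlay before stacking:

```python
from PIL import Image

def trim(img, top, bottom, left, right):
    # crop the given number of pixels from each edge
    w, h = img.size
    return img.crop((left, top, w - right, h - bottom))

blue  = Image.open("blue_310.png")    # hypothetical filenames
green = Image.open("green_340.png")
red   = Image.open("red_387.png")

# Numbers from the procedure above: blue -> green, then cyan -> red.
blue_aligned = trim(blue, 46, 25, 64, 51).resize(green.size)
# ...blend blue_aligned with green into the cyan intermediate, then:
# cyan_aligned = trim(cyan, 23, 25, 19, 34).resize(red.size)
```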

 

After all that, this was the final result:

[Image: somma755.jpg]

 

Visible reference (taken under the same conditions). ISO 100, 1/4000 s exposure (the lowest settings on my camera):

[Image: IMG_0353.JPG]

Link to comment
