
How to make a TriColour image, my method

TriColour Multispectral
73 replies to this topic

#21 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 05 June 2021 - 18:24

Well of course in MATLAB or any programming language it would be easy. But a normal white balance is not done by just reweighting channels.

#22 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 05 June 2021 - 19:08

An alternative I can think of is to measure the color of the white standard after the stacking. If it is something like 220, 200, 240, one can then increase the red channel by 35 units, the green by 55 and the blue by 15, to obtain 255, 255, 255. This way the image will be balanced.

The colors can be measured in many programs; Paint can do that, but only for a single pixel as far as I know. An average over an area is better.
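A minimal sketch of this additive correction, assuming the stack is an ordinary 8-bit RGB file; the filenames and patch coordinates are illustrative, not from any specific tool mentioned here (numpy and Pillow):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("stack.png"), dtype=np.float32)

# Average the RGB values over a hand-picked patch of the white standard
# (coordinates are illustrative).
patch = img[100:150, 200:250]
mean_rgb = patch.reshape(-1, 3).mean(axis=0)    # e.g. [220, 200, 240]

# Add per-channel offsets so the white standard becomes 255, 255, 255.
offsets = 255.0 - mean_rgb                      # e.g. [35, 55, 15]
balanced = np.clip(img + offsets, 0, 255).astype(np.uint8)

Image.fromarray(balanced).save("stack_balanced.png")
```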

#23 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 05 June 2021 - 19:24

Stefano, that is a bad idea because it alters the saturation. I don’t think a white balance is necessary until the final step and then it’s perfectly okay to use PN’s WB tool. I don’t know why you don’t like that idea since the colors are false even for a tricolor.

May I suggest an experiment? Let's do our procedure using visible light bandpass filters on a monochrome camera and see what comes closest to reproducing a normal photo. Then we can try it in other bands.

Edited by Andy Perrin, 05 June 2021 - 19:28.


#24 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 05 June 2021 - 19:52

I meant to do the channel correction as a last step, after the stacking, as I said. And I don't understand how it can alter the saturation: it just corrects the exposure of each channel so that a white/gray object (flat reflectance) appears white or gray (one doesn't have to push it to 255, 255, 255, but to whatever values produce a correctly exposed image, as long as the RGB values are all equal).

I don't like the idea of a typical white balance as it isn't just a channel re-weight; it can even create a channel that wasn't there, as we once showed with Birna's help. But that was a quite extreme case.

For mild white balances, the difference between the two algorithms is small, almost zero, so it's not a big problem. But if one wants a scientifically-accurate image (and I am one of those people), then it matters.

Your experiment is nice. I can try it with my LEDs (I think 450, 520 and 635 nm may be good), and it can tell us more about colors.

All this discussion of course doesn't matter if one doesn't care about color accuracy and is fine with visually-appealing images. One can assign the colors in other orders, and manipulate them in any way.

#25 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 05 June 2021 - 20:07

Stefano, adding the same value to all three channels is equivalent to adding gray to whatever the original color was, therefore it decreases saturation.
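A quick numeric check of this claim (standard library only; the sample color is arbitrary):

```python
import colorsys

def saturation(r, g, b):
    # HSV saturation = (max - min) / max
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]

print(saturation(200, 100, 50))   # 0.75  (a saturated orange)
print(saturation(240, 140, 90))   # 0.625 (same color + 40 units of "gray")
```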

Quote

But if one wants a scientifically-accurate image (and I am one of those people), then it matters.
I think you are deceiving yourself there. The issue is the idea that mixing channels when white balancing makes the result scientifically inaccurate: when you have overlapping filters, mixing is exactly the right thing to do to unmix the channels. This is because each wavelength can contribute to all three channels in the general case (such as with a Bayer array that has overlapping spectra), and it's necessary to apply a rotation in color space to unmix them in the overlap regions. If you have bandpass filters that truly have no overlap, then a simple rescaling would suffice, though.
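A small sketch of what such an unmixing step looks like; the mixing matrix below is invented for illustration, not a measured camera response:

```python
import numpy as np

# Invented example: each recorded channel is a mix of the true bands.
M = np.array([[0.80, 0.15, 0.05],    # how the true bands leak into R
              [0.10, 0.80, 0.10],    # ... into G
              [0.05, 0.15, 0.80]])   # ... into B

unmix = np.linalg.inv(M)             # the color-space "rotation"

recorded = np.array([120.0, 200.0, 90.0])   # one pixel's recorded RGB
estimated = unmix @ recorded                # estimated unmixed band values
print(estimated)
```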

Edited by Andy Perrin, 05 June 2021 - 20:16.


#26 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 05 June 2021 - 20:15

No, I don't add the same value to each channel, I add different values to bring the channels to the same level. I'm actually adding a color, more precisely the difference between white (or gray) and the un-white-balanced color of the white reference.

I will now take some images with my LEDs. I plan to use all the wavelengths I have available (on heatsinks) between 400 and 700 nm. Should I post them here or in another topic?

#27 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 05 June 2021 - 20:20

Well you can prove your method works. If it works then great, we have a method. If not we can reevaluate.

Posting here seems fine? It's the same topic, just validating the method.

Quote

No, I don't add the same value to each channel, I add different values to bring the channels to the same level. I'm actually adding a color, more precisely the difference between white (or gray) and the un-white-balanced color of the white reference.
Show me how it works. I think that will affect the brightness and saturation since you are adding onto the values. But if it works, fine. I am willing to accept experimental results.

Edited by Andy Perrin, 05 June 2021 - 20:24.


#28 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 05 June 2021 - 22:12

Andy, I will find a way to show you how it works (provided it works, I too can be proven wrong).

Here's my LED series in visible light. I had some connection problems, so I am a bit late. The wavelengths are not exact: they are either what the seller claimed or what I have measured myself with a diffraction grating, and they can be a bit off.

Same equipment as before: full-spectrum Canon EOS M and Soligor 35 mm f/3.5. No filters in any image. The camera was set to auto exposure, ISO 100 and the lens at f/8. I put a normal paper sheet on the wall and a paper tissue in front. The normal paper fluoresces blue under the violet LEDs, and the paper tissue does not (in any appreciable way). This is why the paper looks brighter than the tissue in the first images. Also, the diffraction grating (included for the colors) has a lot of glare; I didn't notice it in time.

400-405 nm (violet):
Attached Image: 405.JPG

425 nm (violet):
Attached Image: 425.JPG

450 nm (royal blue):
Attached Image: 450.JPG

465 nm (blue):
Attached Image: 465.JPG

500 nm (bluish green):
Attached Image: 500.JPG

525 nm (green):
Attached Image: 525.JPG

587 nm ("sodium" yellow):
Attached Image: 587.JPG

600 nm (orange):
Attached Image: 600.JPG

625 nm (red):
Attached Image: 625.JPG

660 nm (deep red):
Attached Image: 660.JPG

I did a TriColour with my usual technique with the 450 nm, 525 nm and 625 nm images as the blue, green and red channels. Here is the result:
Attached Image: somma 318.jpg
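For anyone who wants to reproduce this basic assembly without Photoshop, a minimal sketch (assuming the three frames are already aligned; filenames are illustrative):

```python
import numpy as np
from PIL import Image

def load_gray(path):
    return np.asarray(Image.open(path).convert("L"))

rgb = np.dstack([load_gray("625nm.jpg"),    # red channel
                 load_gray("525nm.jpg"),    # green channel
                 load_gray("450nm.jpg")])   # blue channel

Image.fromarray(rgb).save("tricolour.png")
```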

Not so pretty, it clearly needs a white balance. I did it on the background paper in IrfanView and in Photo Ninja, and the results are similar, but with some differences:

IrfanView:
Attached Image: somma 318 IV.jpg

Photo Ninja:
Attached Image: somma 318 PN.jpg

I have to say, I like the Photo Ninja image better.

Here's an image taken with a white LED torch, white balanced on the paper in Photo Ninja:
Attached Image: White.jpg

The TriColour image doesn't have accurate colors because, if those LEDs were used to produce white light, the CRI (Color Rendering Index) would be quite low. I already tried to run these LEDs in series at the same time, and the color rendition is awful. But we can't talk about CRI outside the visible spectrum. Notice how the orange square in the Rubik's cube appears red in the TriColour stack. Even the white LED image is a bit off from what I see with my eyes, probably because the camera is picking up some far red from the LED. This is kind of the same issue Andrea wrote about here.

Anyone is free to experiment with the other images.

Edited by Stefano, 05 June 2021 - 23:21.


#29 dabateman

    Da Bateman

  • Members+G
  • 3,004 posts
  • Location: Maryland

Posted 06 June 2021 - 00:12

Interesting that in the tricolor the tape is clear, whereas under the white LED it's yellow. So the colors aren't being captured correctly with this method. I have used a monochrome imager or camera with Wratten #47, #58 and #25 filters and had near-equal representation. Maybe you need more overlapping spectral regions.

But I think you are on to a simpler method. However, I usually have problems at the alignment stage of the photo process.

#30 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 06 June 2021 - 00:59

Try to play with my images if you have time, and see if things improve. For example, the orange exposure should help with the Rubik's cube. I only did a basic TriColour just to test this, but using all images in the appropriate way should improve the colors. I can try this myself tomorrow.

It would have been much better if I also had my 480 nm cyan and 565 nm lime green LEDs, but they are not mounted on heatsinks yet. I really have to finish them.

To answer both Andy and David, yes, using three bands without overlap is not the correct way to reproduce a color image in the visible spectrum. This is especially true when you have materials with sharp absorption peaks. A famous real-life example is neodymium glass under fluorescent lighting vs incandescent/LED.

I don't understand what Andy means by mixing/unmixing the channels, but we both agree that when the bands have no overlap a simple re-weighting is enough. I still want to show an example. I would like to do it in IR, but my 730 nm LED doesn't turn on anymore; I think I killed it. I've already ordered a new one... I must be more careful.

For a UV TriColour image with actual overlap, I thought of the nice bandpass filters Jonathan has, but I don't want to pressure him into doing that experiment, since it is time-consuming and of course I respect that. He did already post a set of images though... https://www.ultravio...ltiple-filters. That can be a starting point.

Edited by Stefano, 06 June 2021 - 01:02.


#31 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 06 June 2021 - 01:16

Actually, in this very specific case where we are using visible light, the best way to stack a series of images is to literally stack the raw versions, in color, and white balance the result. If a camera (meant for visible light) recorded 600 nm as a shade of orange, then that image must contribute to the output with that particular color, neither red nor yellow nor anything else. But this applies only in this case.

#32 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 06 June 2021 - 17:23

Here's what happens when you save the raw LED images as .jpg, stack them (I actually did an average, since plain stacking overexposes the result), and then white balance the result in IrfanView and Photo Ninja (white balance on the paper sheet):

Stack:
Attached Image: somma 321.jpg

IrfanView:
Attached Image: somma 321 IV.jpg

Photo Ninja:
Attached Image: somma 321 PN.jpg

The colors are better since the combined spectrum of all the LEDs is smoother and has fewer gaps. Now the orange square in the Rubik's cube is actually orange, and the paper tape is yellow. Working with .jpgs is not ideal, but I still have to find a fix for that issue.
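The average-instead-of-sum step could be done like this (a sketch with illustrative filenames, assuming equal-sized color frames):

```python
import numpy as np
from PIL import Image

paths = ["405.jpg", "425.jpg", "450.jpg", "465.jpg", "500.jpg",
         "525.jpg", "587.jpg", "600.jpg", "625.jpg", "660.jpg"]

frames = [np.asarray(Image.open(p), dtype=np.float32) for p in paths]
avg = np.mean(frames, axis=0).astype(np.uint8)   # averaging avoids overexposure

Image.fromarray(avg).save("stack_average.png")   # then white balance this
```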

Edited by Stefano, 06 June 2021 - 18:43.


#33 OlDoinyo

    Member

  • Members(+)
  • 887 posts
  • Location: North Carolina

Posted 07 June 2021 - 01:08

If using Photoshop, the preferred method is to paste the constituent images into their respective channels--it is not necessary to invoke layers to do this. It is also not necessary to do greyscale conversion; that happens automatically if a color image is pasted into a single channel.

#34 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 07 June 2021 - 18:01

Jonathan gave me permission to download his images here, made with 7 of his UV bandpass filters, and I stacked them in three different ways. IrfanView was used to "color" the images, and Image Stacker was used to stack them.

Method 1: classic TriColour
In this method I took just the extreme and the middle images (310, 340, 370 nm), and stacked them in the usual way. The result was white balanced on the PTFE target in IrfanView, but the image was already balanced.

Colors:
Red: 370 nm;
Green: 340 nm;
Blue: 310 nm.

Final result:
Attached Image: somma 322 IV.jpg

Color distribution:
Attached Image: Immagine.png

Method 2: TriColour with overlap
Here I stepped things up a bit, using a more "precise" method (even if we are still dealing with false colors), by making use of all 7 images and simulating a bell-curve response for the three channels, with overlap.

Colors:
370 nm = 128, 0, 0;
360 nm: = 255, 0, 0;
350 nm = 128, 128, 0;
340 nm = 0, 255, 0;
330 nm = 0, 128, 128;
320 nm = 0, 0, 255;
310 nm = 0, 0, 128.

Each channel has this sensitivity pattern across five adjacent bands: 0, 128, 255, 128, 0. The peaks are at 360 nm (red), 340 nm (green) and 320 nm (blue). Since each channel's column sums to 511 (256*2 - 1), I did a stack/average combo, stacking the images and dividing the result by 2. The result was already balanced, but as before I still applied a white balance on the PTFE target in IrfanView.
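A sketch of this tint-and-average combination (filenames are illustrative; the weights are the ones listed above):

```python
import numpy as np
from PIL import Image

weights = {                          # band image -> assigned (R, G, B)
    "370nm.jpg": (128, 0, 0),   "360nm.jpg": (255, 0, 0),
    "350nm.jpg": (128, 128, 0), "340nm.jpg": (0, 255, 0),
    "330nm.jpg": (0, 128, 128), "320nm.jpg": (0, 0, 255),
    "310nm.jpg": (0, 0, 128),
}

acc = 0.0
for path, (r, g, b) in weights.items():
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255
    acc = acc + np.dstack([gray * r, gray * g, gray * b])

# Each channel's weights sum to 511, so dividing by 2 restores 0-255.
result = np.clip(acc / 2, 0, 255).astype(np.uint8)
Image.fromarray(result).save("method2.png")
```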

Final result:
Attached Image: somma 323 IV.jpg

Color distribution:
Attached Image: Immagine.png

Method 3: camera channel response "transposed" in UV
Here I complicated things even more. Jonathan has measured the channel response of a stock Canon EOS 5DSR camera, which he posted on his website. I wanted to simulate the colors a normal visible-light camera would see if, instead of 400-700 nm, it saw 305-375 nm, but with the same channel response.

To find the corresponding visible wavelengths given the UV wavelengths, I did these calculations:

305-375 nm is a 70 nm-wide interval;
5 nm is 1/14 of that interval;
400-700 nm is a 300 nm-wide interval;
1/14 of 300 is ~21.43 nm.
So the 10 nm separation between the UV bands is equivalent to a ~42.86 nm separation between 400 and 700 nm.

The wavelength correspondence is the following:

370 nm -> 679 nm;
360 nm -> 636 nm;
350 nm -> 593 nm;
340 nm -> 550 nm;
330 nm -> 507 nm;
320 nm -> 464 nm;
310 nm -> 421 nm.

Note that I chose to use 305-375 nm instead of 310-370 nm as the UV range as I didn't want the extreme wavelengths to have almost zero contribution.
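The mapping is a simple linear rescaling, which can be checked like this:

```python
def uv_to_vis(uv_nm):
    # Linearly map the 305-375 nm interval onto 400-700 nm.
    return 400 + (uv_nm - 305) * (700 - 400) / (375 - 305)

for uv in (370, 360, 350, 340, 330, 320, 310):
    print(f"{uv} nm -> {uv_to_vis(uv):.0f} nm")
# 679, 636, 593, 550, 507, 464, 421 -- matching the list above
```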

I then measured from the graph the sensitivities of the camera at those wavelengths, using the green peak visible in the graph as the 255 value, and using calculations similar to the ones above to find the correspondence between pixels and wavelengths. These are the colors that came out:

370 nm = 13, 2, 1;
360 nm = 53, 8, 2;
350 nm = 93, 80, 5;
340 nm = 39, 206, 24;
330 nm = 15, 251, 128;
320 nm = 2, 55, 206;
310 nm = 0, 10, 79;

Those colors are odd, with almost no red, some blue and mostly green. But I used them anyway.

I colored the images with these colors, did a stack/average combo dividing by 3, and white balanced the result on the PTFE target in Photo Ninja (this time the image was not already balanced, but very green/blue).

Final result:
Attached Image: somma 324 PN.jpg

Color distribution:
Attached Image: Immagine.png

Comments:
- the images clearly need to be aligned, but I don't have the software to do that;
- the colors in the last method are odd, with the vase being black (very dark red) instead of red, and the general color palette being more monochromatic than in the other methods;
- reading the original topic where these images were posted (linked above), I found a post by Bernard where he already had the idea of overlapping the channels. I don't remember reading that post, so we had the same idea at different times. I still want to link his post: https://www.ultravio...dpost__p__30827
And in the post below, Andy describes a nicer method in MATLAB;
- there are surely many more complications one can try, but I will stop here for now.

Also, thanks again Jonathan for letting me use your work.

#35 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 07 June 2021 - 18:19

There is actually a very nice way to combine images from multiple filters based on Wiener filtering that I haven't gotten around to implementing yet, but has strong theoretical justification and should allow correct reconstructions with filter overlap based on multiple individual band photos. It's been used successfully for hyperspectral imaging. Unfortunately, I suspect the mathematics is going to be beyond what most of the people around here are willing to learn, so unless I write some kind of non-MATLAB software to let everyone else do it, it's probably going to be just me using this method:
http://citeseerx.ist...p=rep1&type=pdf
https://link.springe...054661807020101
https://www.ncbi.nlm...les/PMC7041456/
Several of those papers use just RGB data to estimate a hyperspectral response(!) with so-so accuracy, but it's possible to do better if you have 7 or 8 bandpass filters like Jonathan. (It is even possible to use non-bandpass filters with this method, provided that the transmissions vary enough at each wavelength when you consider the whole set of filters.)

Once you have an estimated hyperspectral reflectance image, you can obviously define any kind of synthetic bandpass filters you please (and imaginary light sources too!) and render a tricolor.
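Not Andy's implementation (he says he hasn't written it yet), but the core linear-estimation step in those papers looks roughly like this; the response matrix, prior covariance and noise level below are all invented placeholders:

```python
import numpy as np

n_bands, n_filters = 31, 7        # e.g. a spectrum sampled in 31 steps, 7 filters
rng = np.random.default_rng(0)

M = rng.random((n_filters, n_bands))   # filter spectral responses (placeholder)
K = np.eye(n_bands)                    # prior covariance of spectra (placeholder)
s2 = 1e-3                              # assumed noise variance

# Wiener estimator: r_hat = K M^T (M K M^T + s2 I)^-1 c
W = K @ M.T @ np.linalg.inv(M @ K @ M.T + s2 * np.eye(n_filters))

c = rng.random(n_filters)              # one pixel's readings through the 7 filters
r_hat = W @ c                          # estimated spectrum at 31 wavelengths
print(r_hat.shape)                     # (31,)
```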

Edited by Andy Perrin, 07 June 2021 - 18:31.


#36 Stefano

    Member

  • Members(+)
  • 2,174 posts
  • Location: Italy

Posted 07 June 2021 - 18:44

...sounds like the ultimate method!

Interesting how you can calculate an approximate hyperspectral image given only the RGB channels. You can't create information out of thin air, so this method probably needs additional information to do that. It's probably not about creating information, but more about extracting it in clever ways. It reminds me a bit of this: https://www.ultravio...ing-ad-with-ica

#37 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 07 June 2021 - 18:50

Yes, the additional information comes in the form of KNOWN spectral responses from a color checker under the same light source as the intended hyperspectral image. We don't have that in UV (no UV color checker) but I think it should be possible to rederive a version of the method for known filter responses rather than known color checker reflectivities.

Edited by Andy Perrin, 07 June 2021 - 18:52.


#38 Andrea B.

    Desert Dancer

  • Owner-Administrator
  • 9,115 posts
  • Location: UVP Western Division, Santa Fe, New Mexico

Posted 11 June 2021 - 19:39

Andy, thank you for those links. I've thought before about how one might combine more than just an R, G and B image, and even fussed around in PS Elements with layer opacities of R, G, B together with, say, C, Y, M and orange, cerise, purple and so forth. Nothing satisfactory ever came of it of course, and I always thought that one would have to MATLAB it in some way. So it was interesting to see how these papers were handling this.

Anyway.....an initial question about the integral in those papers.

I get why we would multiply the power of the light (illumination) * filter transmission * sensor response.
But why is there the additional factor of channel response? Isn't that taken care of in the sensor response factor?
Thanks for any light you can shed on this for me. (Pun intended. La!!)




Quote

Yes, the additional information comes in the form of KNOWN spectral responses from a color checker under the same light source as the intended hyperspectral image. We don't have that in UV (no UV color checker) but I think it should be possible to rederive a version of the method for known filter responses rather than known color checker reflectivities.

Couldn't you simply define what the false color spectral responses are based on the wavelength-to-falseColor maps people have made (and shown on UVP)? (Noting that even though I tend to rant about the lack of one-to-oneness in those maps and their over-dependency on what white balance is used, I can still see some use for those maps if used *carefully*.)
Andrea G. Blum
Often found hanging out with flowers & bees.

#39 Andy Perrin

    Member

  • Members+G
  • 4,416 posts
  • Location: United States

Posted 11 June 2021 - 20:53

Quote

I get why we would multiply the power of the light (illumination) * filter transmission * sensor response.
But why is there the additional factor of channel response? Isn't that taken care of in the sensor response factor?

Thanks for any light you can shed on this for me. (Pun intended. La!!)
Not sure which paper you mean, but for example in the skin paper, the equation reads,
Attached Image: Screen Shot 2021-06-11 at 4.34.51 PM.png
So say you want the response of the green channel G to a swatch of blue ColorChecker tile under white light. Then for each wavelength, you multiply together:
- l, the intensity of the white light at that wavelength (probably peaking in the low 500nm's)
- gamma, the reflectivity of the blue color checker sample at that wavelength (probably peaking in the 400nm's)
- f_G, the transmission of the green Bayer filter at that wavelength (probably peaking in the low 500nm's)
- s, the spectral sensitivity of the camera sensor without the Bayer (equivalent to what you'd get in a de-Bayered camera)

The channel response is actually the product f_C*s, but the way they worded it in the description makes it hard to read.
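For anyone who can't view the attachment, the response v_C of channel C presumably has the form below, reconstructed from the factors just listed (notation approximate):

```latex
v_C = \int l(\lambda)\, \gamma(\lambda)\, f_C(\lambda)\, s(\lambda)\, d\lambda
```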

Quote

Yes, the additional information comes in the form of KNOWN spectral responses from a color checker under the same light source as the intended hyperspectral image. We don't have that in UV (no UV color checker) but I think it should be possible to rederive a version of the method for known filter responses rather than known color checker reflectivities.

Couldn't you simply define what the false color spectral responses are based on the wavelength-to-falseColor maps people have made (and shown on UVP)? (Noting that even though I tend to rant about the lack of one-to-oneness in those maps and their over-dependency on what white balance is used, I can still see some use for those maps if used *carefully*.)
If your objective is to see "true" UV colors in the sense that you get false colors that depend on wavelengths in a way that isn't as constrained as what our cameras naturally spit out, then this would be counter-productive? You would just reproduce the same false colors we already get, so things like red-green variation, which we almost NEVER see in our usual UV photos, would continue to not-appear. It would defeat the point of a tricolor.

Also, in the papers above, one gets an approximate hyperspectral image of the scene, and if you just made something up then you wouldn't get that, you'd just get garbage numbers. The point is that the color checker has known (measured) spectral responses in visible light, and you USE those numbers to deduce the visible-light spectral responses of objects in your photo. So if you took a picture of a leaf, in theory you could get the spectrum of chlorophyll!

Edited by Andy Perrin, 11 June 2021 - 20:55.


#40 Andrea B.

    Desert Dancer

  • Owner-Administrator
  • 9,115 posts
  • Location: UVP Western Division, Santa Fe, New Mexico

Posted 11 June 2021 - 20:54

Quote

But if one wants a scientifically-accurate image.....

Then one cannot simply add/subtract n units from each channel to obtain a white balance, because white balance (ideally) is a non-linear thing, in the sense that WB should be applied differently for shadowed areas versus sunlit (illuminated) areas. ((And WB is built around keeping green as the "1" value and adjusting red and blue to that.))

Also note that when converting an image to "black and white" or "greyscale" prior to inserting it into a channel, there are a large number of different algorithms which do that. Even greyscale algorithms are not all the same across the channel tools in various photo editing apps. (Big Photoshop has a notoriously mysterious greyscale conversion.) Which one you use will alter the outcome of your stack.
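For example, two common conversions already disagree on the same pixel (Rec. 601 luma weights versus a plain average):

```python
r, g, b = 200, 100, 50

luma = 0.299 * r + 0.587 * g + 0.114 * b   # weighted (Rec. 601) conversion, ~124.2
mean = (r + g + b) / 3                     # naive average, ~116.7
print(luma, mean)
```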

Then too, many colors are not in the RGB gamut, so no image is ever scientifically accurate. We can only try to get a good approximation. But I am digressing........
Andrea G. Blum
Often found hanging out with flowers & bees.