UltravioletPhotography

How to make a TriColour image, my method


Stefano


No, I don't add the same value to each channel; I add different values to bring the channels to the same level. I'm actually adding a color, more precisely white (or gray): the un-white-balanced color of the white reference.
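As a minimal numpy sketch of that additive levelling (illustrative only; the helper name, array layout, and the idea of sampling a crop of the white reference are my assumptions for this example):

```python
import numpy as np

def additive_white_balance(img, white_patch):
    """Add a per-channel offset so the white reference reaches the same
    level in R, G and B. Arrays are float RGB in [0, 255], shape (..., 3)."""
    levels = white_patch.reshape(-1, 3).mean(axis=0)  # channel levels of the reference
    offsets = levels.max() - levels                   # a different value added per channel
    return np.clip(img + offsets, 0, 255)
```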

 

I will now take some images with my LEDs. I thought I would use all the wavelengths I have available (on heatsinks) between 400 and 700 nm. Should I post them here or in another topic?

Andy Perrin

Well, you can prove your method works. If it works, then great: we have a method. If not, we can reevaluate.

 

Posting here seems fine? It's the same topic, just validating the method.

 

No, I don't add the same value to each channel; I add different values to bring the channels to the same level. I'm actually adding a color, more precisely white (or gray): the un-white-balanced color of the white reference.

Show me how it works. I think that will affect the brightness and saturation since you are adding onto the values. But if it works, fine. I am willing to accept experimental results.


Andy, I will find a way to show you how it works (provided it works, I too can be proven wrong).

 

Here's my LED series in visible light; I had some connection problems, so I am a bit late. The wavelengths are not exact: they are either what the seller claimed or what I measured myself with a diffraction grating, and they can be a bit off.

 

Same equipment as before: full-spectrum Canon EOS M and Soligor 35 mm f/3.5. No filters in any image. The camera was set to auto exposure at ISO 100, with the lens at f/8. I put a normal sheet of paper on the wall and a paper tissue in front of it. The normal paper fluoresces blue under the violet LEDs, and the paper tissue does not (in any appreciable way); this is why the paper looks brighter than the tissue in the first images. Also, the diffraction grating (included for color) has a lot of glare, which I didn't notice in time.

 

400-405 nm (violet):

post-284-0-12123900-1622928875.jpg

 

425 nm (violet):

post-284-0-77518800-1622929247.jpg

 

450 nm (royal blue):

post-284-0-58981500-1622929287.jpg

 

465 nm (blue):

post-284-0-72212000-1622930309.jpg

 

500 nm (bluish green):

post-284-0-40922700-1622930324.jpg

 

525 nm (green):

post-284-0-85807300-1622930341.jpg

 

587 nm ("sodium" yellow):

post-284-0-22888100-1622930363.jpg

 

600 nm (orange):

post-284-0-46297500-1622930392.jpg

 

625 nm (red):

post-284-0-87289100-1622930415.jpg

 

660 nm (deep red):

post-284-0-54786200-1622930442.jpg

 

I did a TriColour with my usual technique with the 450 nm, 525 nm and 625 nm images as the blue, green and red channels. Here is the result:

post-284-0-25644800-1622930510.jpg

 

Not so pretty; it clearly needs a white balance. I did one on the background paper in IrfanView and in Photo Ninja, and the results are similar, but with some differences:

 

IrfanView:

post-284-0-47512400-1622930540.jpg

 

Photo Ninja:

post-284-0-18357300-1622930566.jpg

 

I have to say, I like the Photo Ninja image better.
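(For anyone who wants to script the channel assembly instead of using a photo editor, here is a minimal Pillow sketch of the same merge; the filenames are placeholders of mine:)

```python
from PIL import Image

# The three single-LED exposures, converted to grayscale (filenames are placeholders)
r = Image.open("led_625nm.jpg").convert("L")  # 625 nm -> red channel
g = Image.open("led_525nm.jpg").convert("L")  # 525 nm -> green channel
b = Image.open("led_450nm.jpg").convert("L")  # 450 nm -> blue channel

Image.merge("RGB", (r, g, b)).save("tricolour_raw.jpg")  # then white balance as above
```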

 

Here's an image taken with a white LED torch, white balanced on the paper in Photo Ninja:

post-284-0-38538100-1622930642.jpg

 

The TriColour image doesn't have accurate colors: if those LEDs were used to produce white light, the CRI (Color Rendering Index) would be quite low. I have already tried running these LEDs in series at the same time, and the color rendition is awful. (Though we can't talk about CRI outside the visible spectrum.) Notice how the orange square on the Rubik's cube appears red in the TriColour stack. Even the white LED image is a bit off from what I see with my eyes, probably because the camera is picking up some far red from the LED. This is much the same issue Andrea wrote about here.

 

Anyone is free to experiment with the other images.


Interesting that in the tricolor the tape is clear, whereas under the white LED it's yellow. So the colors aren't being captured correctly with this method. I have used a monochrome imager or camera with Wratten #47, #58 and #25 filters and had near-equal representation. Maybe you need more overlapping spectral regions.

 

But I think you are on to a simpler method. However, I usually have problems at the alignment stage of the photo process.


Try playing with my images if you have time, and see if things improve. For example, the orange exposure should help with the Rubik's cube. I only did a basic TriColour just to test this, but using all the images in the appropriate way should improve the colors. I can try this myself tomorrow.

 

It would have been much better if I also had my 480 nm cyan and 565 nm lime green LEDs, but they are not mounted on heatsinks yet. I really have to finish them.

 

To answer both Andy and David: yes, using three bands without overlap is not the correct way to reproduce a color image in the visible spectrum. This is especially true when you have materials with sharp absorption peaks. A famous real-life example is neodymium glass under fluorescent lighting vs. incandescent/LED lighting.

 

I don't understand what Andy means by mixing/unmixing the channels, but we both agree that when the bands have no overlap, a simple re-weighting is enough. I still want to show an example. I would like to do it in IR, but my 730 nm LED doesn't turn on anymore; I think I killed it. I have already ordered a new one... I must be more careful.

 

For a UV TriColour image with actual overlap, I thought of the nice bandpass filters Jonathan has, but I don't want to press him into doing that experiment, since it is time-consuming and of course I respect that. He did already post a set of images, though: https://www.ultravioletphotography.com/content/index.php/topic/3536-composite-uv-imaging-using-multiple-filters. That can be a starting point.

Actually, in this very specific case where we are using visible light, the best way to stack a series of images is to literally stack the raw versions, in color, and white balance the result. If a camera (meant for visible light) recorded 600 nm as a shade of orange, then that image must contribute that particular color to the output, neither red nor yellow nor anything else. But only in this case.

Here's what happens when you save the raw LED images as .jpg, stack them (I actually did an average, since a straight sum overexposes the result), and then white balance the result in IrfanView and Photo Ninja (white balance on the paper sheet):

 

Stack:

post-284-0-47935200-1623000027.jpg

 

IrfanView:

post-284-0-12835400-1623000037.jpg

 

Photo Ninja:

post-284-0-82276900-1623000047.jpg

 

The colors are better, since the combined spectrum of all the LEDs is better, with fewer gaps. Now the orange square on the Rubik's cube is actually orange, and the paper tape is yellow. Working with .jpgs is not ideal, but I still have to find a fix for that issue.
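(Scripted, the average stack is a few lines of numpy; a sketch with placeholder filenames:)

```python
import numpy as np
from PIL import Image

files = ["led_400nm.jpg", "led_425nm.jpg", "led_450nm.jpg"]  # ...and the other exposures

# Average in float: a straight 8-bit sum would overexpose, as noted above
acc = sum(np.asarray(Image.open(f), dtype=np.float64) for f in files)
Image.fromarray((acc / len(files)).astype(np.uint8)).save("stack_average.jpg")
```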

If using Photoshop, the preferred method is to paste the constituent images into their respective channels; it is not necessary to invoke layers to do this. It is also not necessary to do a greyscale conversion; that happens automatically if a color image is pasted into a single channel.

Jonathan gave me permission to download his images here, made with 7 of his UV bandpass filters, and I stacked them in three different ways. IrfanView was used to "color" the images, and Image Stacker was used to stack them.

 

Method 1: classic TriColour

In this method I took just the extreme and middle images (310, 340, and 370 nm) and stacked them in the usual way. The result was white balanced on the PTFE target in IrfanView, though the image was already balanced.

 

Colors:

Red: 370 nm;

Green: 340 nm;

Blue: 310 nm.

 

Final result:

post-284-0-12810700-1623088669.jpg

 

Color distribution:

post-284-0-00880300-1623088683.png

 

Method 2: TriColour with overlap

Here I stepped things up a bit, using a more "precise" method (even if we are still dealing with false colors): I made use of all 7 images and simulated a bell-curve response for the three channels, with overlap.

 

Colors:

370 nm = 128, 0, 0;

360 nm = 255, 0, 0;

350 nm = 128, 128, 0;

340 nm = 0, 255, 0;

330 nm = 0, 128, 128;

320 nm = 0, 0, 255;

310 nm = 0, 0, 128.

 

Each channel has this sensitivity pattern: 0, 128, 255, 128, 0. The peaks are at 360 nm (red), 340 nm (green) and 320 nm (blue). Since the sum of the columns is 511 (256*2 - 1), I did a stack/average combo, stacking the images and dividing the result by 2. The result was already balanced, but as before I still applied a white balance on the PTFE target in IrfanView.
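(In script form, the whole colorize/stack/divide combo could look like this sketch; weighted_stack is a hypothetical helper of my own and the filenames are placeholders:)

```python
import numpy as np
from PIL import Image

def weighted_stack(files, weights, divisor):
    """Tint each grayscale exposure with an RGB weight (0-255 per channel),
    sum the tinted images, and divide the result."""
    acc = None
    for fname, w in zip(files, weights):
        gray = np.asarray(Image.open(fname).convert("L"), np.float64) / 255.0
        tinted = gray[..., None] * np.asarray(w, np.float64)   # shape (H, W, 3)
        acc = tinted if acc is None else acc + tinted
    return Image.fromarray(np.clip(acc / divisor, 0, 255).astype(np.uint8))

# Bell-curve weights peaking at 360 nm (R), 340 nm (G), 320 nm (B); divide by 2
files = [f"uv_{nm}nm.tif" for nm in (370, 360, 350, 340, 330, 320, 310)]
weights = [(128, 0, 0), (255, 0, 0), (128, 128, 0), (0, 255, 0),
           (0, 128, 128), (0, 0, 255), (0, 0, 128)]
weighted_stack(files, weights, divisor=2).save("tricolour_overlap.png")
```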

 

Final result:

post-284-0-70430600-1623088698.jpg

 

Color distribution:

post-284-0-80385200-1623088708.png

 

Method 3: camera channel response "transposed" in UV

Here I complicated things even more. Jonathan has measured the channel response of a stock Canon EOS 5DSR camera, which he posted on his website. I wanted to simulate the colors a normal visible-light camera would see if, instead of seeing 400-700 nm, it saw 305-375 nm, but with the same channel response.

 

To find the corresponding visible wavelengths given the UV wavelengths, I did these calculations:

 

305-375 nm is a 70 nm-wide interval;

5 nm is 1/14 of that interval;

400-700 nm is a 300 nm-wide interval;

1/14 of 300 is ~21.43 nm.

So the 10 nm separation between the UV bands is equivalent to a ~42.86 nm separation between 400 and 700 nm.
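(The same arithmetic as a quick helper, with names of my own choosing:)

```python
def uv_to_visible(uv_nm, uv_lo=305, uv_hi=375, vis_lo=400, vis_hi=700):
    """Linearly map a UV wavelength onto the visible band."""
    frac = (uv_nm - uv_lo) / (uv_hi - uv_lo)
    return vis_lo + frac * (vis_hi - vis_lo)

for nm in (370, 360, 350, 340, 330, 320, 310):
    print(nm, "->", round(uv_to_visible(nm)))   # 370 -> 679, 360 -> 636, ...
```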

 

The wavelength correspondence is the following:

 

370 nm -> 679 nm;

360 nm -> 636 nm;

350 nm -> 593 nm;

340 nm -> 550 nm;

330 nm -> 507 nm;

320 nm -> 464 nm;

310 nm -> 421 nm.

 

Note that I chose to use 305-375 nm instead of 310-370 nm as the UV range as I didn't want the extreme wavelengths to have almost zero contribution.

 

I then measured from the graph the sensitivities of the camera at those wavelengths, taking the green peak visible in the graph as the 255 value, and using calculations similar to the ones above to find the correspondence between pixels and wavelengths. These are the colors that came out:

 

370 nm = 13, 2, 1;

360 nm = 53, 8, 2;

350 nm = 93, 80, 5;

340 nm = 39, 206, 24;

330 nm = 15, 251, 128;

320 nm = 2, 55, 206;

310 nm = 0, 10, 79.

 

Those colors are odd, with almost no red, some blue and mostly green. But I used them anyway.

 

I colored the images with these colors, did a stack/average combo dividing by 3, and white balanced the result on the PTFE target in Photo Ninja (this time the image was not already balanced, but very green/blue).
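(With the hypothetical weighted_stack helper sketched under Method 2, this step is just different weights and a different divisor:)

```python
weights = [(13, 2, 1), (53, 8, 2), (93, 80, 5), (39, 206, 24),
           (15, 251, 128), (2, 55, 206), (0, 10, 79)]
weighted_stack(files, weights, divisor=3)  # then white balance on the PTFE target
```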

 

Final result:

post-284-0-35565800-1623088723.jpg

 

Color distribution:

post-284-0-31862300-1623088731.png

 

Comments:

- the images clearly need to be aligned, but I don't have the software to do that;

- the colors in the last method are odd, with the vase being black (very dark red) instead of red, and the general color palette being more monochromatic than in the other methods;

- reading the original topic where these images were posted (linked above), I found a post by Bernard where he already had the idea of overlapping the channels. I don't remember reading that post, so we had the same idea at different times. I still want to link his post: https://www.ultravio...dpost__p__30827

And in the post below, Andy describes a nicer method in MATLAB;

- there are surely many more complications one can try, but I will stop here for now.

 

Also, thanks again Jonathan for letting me use your work.

Andy Perrin

There is actually a very nice way to combine images from multiple filters, based on Wiener filtering, that I haven't gotten around to implementing yet, but it has strong theoretical justification and should allow correct reconstructions with filter overlap from multiple individual band photos. It's been used successfully for hyperspectral imaging. Unfortunately, I suspect the mathematics is going to be beyond what most of the people around here are willing to learn, so unless I write some kind of non-MATLAB software to let everyone else do it, it's probably going to be just me using this method:

http://citeseerx.ist...p=rep1&type=pdf

https://link.springe...054661807020101

https://www.ncbi.nlm...les/PMC7041456/

Several of those papers use just RGB data to estimate a hyperspectral response(!) with so-so accuracy, but it's possible to do better if you have 7 or 8 bandpass filters as Jonathan does. (It is even possible to use non-bandpass filters with this method, provided that the transmissions vary enough at each wavelength when you consider the whole set of filters.)

 

Once you have an estimated hyperspectral reflectance image, you can obviously define any kind of synthetic bandpass filters you please (and imaginary light sources too!) and render a tricolor.
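In skeleton form, the estimation step is a regularized least-squares inversion, something like this sketch (my simplification of the general idea, not the papers' exact algorithm; the system matrix is assumed known):

```python
import numpy as np

def estimate_reflectance(M, readings, lam=1e-3):
    """Wiener-style regularized inversion. M is the k x n system matrix
    (light spectrum x filter transmission x sensor response, sampled at n
    wavelengths, one row per filter); readings are the k pixel values."""
    k = M.shape[0]
    return M.T @ np.linalg.solve(M @ M.T + lam * np.eye(k), readings)

def synthetic_tricolor(reflectance, B):
    """Render the estimated spectrum through three synthetic bandpass
    curves B (3 x n) to get an RGB triple."""
    return B @ reflectance
```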


...sounds like the ultimate method!

 

Interesting how you can calculate an approximate hyperspectral image given only the RGB channels. You can't create information out of thin air, so this method probably needs additional information to do that. It's probably not about creating information, but more about extracting it in clever ways. It reminds me a bit of this: https://www.ultravioletphotography.com/content/index.php/topic/2849-revealing-the-faded-text-on-an-old-building-ad-with-ica

Andy Perrin
Yes, the additional information comes in the form of KNOWN spectral responses from a color checker under the same light source as the intended hyperspectral image. We don't have that in UV (no UV color checker) but I think it should be possible to rederive a version of the method for known filter responses rather than known color checker reflectivities.

Andy, thank you for those links. I've thought before about how one might combine more than just an R, G and B image, and have even fussed around in PS Elements with layer opacities of R, G, B together with, say, C, Y, M and orange, cerise, purple and so forth. Nothing satisfactory ever came of it, of course, and I always thought that one would have to MATLAB it in some way. So it was interesting to see how these papers were handling this.

 

Anyway.....an initial question about the integral in those papers.

 

I get why we would multiply the power of the light (illumination) * filter transmission * sensor response.

But why is there the additional factor of channel response? Isn't that taken care of in the sensor response factor?

Thanks for any light you can shed on this for me. (Pun intended. La!!)

 


 

Yes, the additional information comes in the form of KNOWN spectral responses from a color checker under the same light source as the intended hyperspectral image. We don't have that in UV (no UV color checker) but I think it should be possible to rederive a version of the method for known filter responses rather than known color checker reflectivities.

 

Couldn't you simply define what the false-color spectral responses are based on the wavelength-to-falseColor maps people have made (and shown on UVP)? (Noting that even though I tend to rant about the lack of one-to-one-ness in those maps and their over-dependency on what white balance is used, I can still see some use for those maps if used *carefully*.)

Andy Perrin
I get why we would multiply the power of the light (illumination) * filter transmission * sensor response.

But why is there the additional factor of channel response? Isn't that taken care of in the sensor response factor?

 

Thanks for any light you can shed on this for me. (Pun intended. La!!)

Not sure which paper you mean, but for example in the skin paper, the equation reads,

post-94-0-79465500-1623443705.png

So say you want the response of the green channel G to a swatch of blue ColorChecker tile under white light. Then for each wavelength, you multiply together:

- l, the intensity of the white light at that wavelength (probably peaking in the low 500 nm's)

- gamma, the reflectivity of the blue color checker sample at that wavelength (probably peaking in the 400 nm's)

- f_G, the transmission of the green Bayer filter at that wavelength (probably peaking in the low 500 nm's)

- s, the spectral sensitivity of the camera sensor without the Bayer (equivalent to what you'd get in a de-Bayered camera)

 

The channel response is actually the product f_C*s, but the way they worded it in the description makes it hard to read.
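(Spelling that out, the pictured equation is presumably of this form, writing the channel-C response as rho_C; this is my transcription from the description above, in case the image isn't legible:)

```latex
\rho_C \;=\; \int_{\lambda} l(\lambda)\,\gamma(\lambda)\,f_C(\lambda)\,s(\lambda)\,\mathrm{d}\lambda,
\qquad C \in \{R, G, B\}
```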

 

Yes, the additional information comes in the form of KNOWN spectral responses from a color checker under the same light source as the intended hyperspectral image. We don't have that in UV (no UV color checker) but I think it should be possible to rederive a version of the method for known filter responses rather than known color checker reflectivities.

 

Couldn't you simply define what the false-color spectral responses are based on the wavelength-to-falseColor maps people have made (and shown on UVP)? (Noting that even though I tend to rant about the lack of one-to-one-ness in those maps and their over-dependency on what white balance is used, I can still see some use for those maps if used *carefully*.)

If your objective is to see "true" UV colors in the sense that you get false colors that depend on wavelengths in a way that isn't as constrained as what our cameras naturally spit out, then this would be counter-productive? You would just reproduce the same false colors we already get, so things like red-green variation, which we almost NEVER see in our usual UV photos, would continue to not-appear. It would defeat the point of a tricolor.

 

Also, in the papers above, one gets an approximate hyperspectral image of the scene; if you just made something up, you wouldn't get that, you'd just get garbage numbers. The point is that the color checker has known (measured) spectral responses in visible light, and you USE those numbers to deduce the visible-light spectral responses of objects in your photo. So if you took a picture of a leaf, in theory you could get the spectrum of chlorophyll!


But if one wants a scientifically-accurate image.....

 

Then one cannot simply add/subtract n units from each channel to obtain a white balance, because white balance (ideally) is a non-linear thing, in the sense that WB should be applied differently for shadowed areas versus sunlit (illuminated) areas. (And WB is built around keeping green as the "1" value and adjusting red and blue to that.)

 

Also note that when converting an image to "black and white" or "greyscale" prior to inserting it into a channel, remember that there are a large number of different algorithms which do that. Even greyscale algorithms are not all the same across the channel tools in various photo-editing apps. (Big Photoshop has a notoriously mysterious greyscale conversion.) Which one you use will alter the outcome of your stack.
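(To make that concrete, a tiny sketch showing three common conversions disagreeing on the same pixel; the numbers are purely illustrative:)

```python
import numpy as np

rgb = np.array([200.0, 100.0, 50.0])                 # one example pixel

average = rgb.mean()                                 # plain average: ~116.7
luma = float(np.dot([0.2126, 0.7152, 0.0722], rgb))  # Rec. 709 luma: ~117.7
brightest = rgb.max()                                # brightest channel: 200.0

print(average, luma, brightest)                      # three different "greyscale" values
```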

 

Then too, many colors are not in the RGB gamut, so no image is ever scientifically accurate. We can only try to get a good approximation. But I am digressing........


If your objective is to see "true" UV colors in the sense that you get false colors that depend on wavelengths in a way that isn't as constrained as what our cameras naturally spit out, then this would be counter-productive? You would just reproduce the same false colors we already get, so things like red-green variation, which we NEVER see in our usual UV photos, would continue to not-appear. It would defeat the point of a tricolor.

 

I suppose I had been thinking that there was enough variation across the false color maps that this might work over the 300 - 400 nm band. But if not, then nevermind..... :grin:

 

The point remains, though, that you could pre-define a false-color response: scale the 300-400 nm interval to 360 units and apply the color wheel.
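(A sketch of that pre-defined response, with the scaling done in code; the names and the use of colorsys are my choices, not an established mapping:)

```python
import colorsys

def wavelength_to_false_color(nm, lo=300.0, hi=400.0):
    """Scale 300-400 nm onto the 360-unit hue circle and return 8-bit RGB."""
    hue = (nm - lo) / (hi - lo)                  # 0..1, i.e. 0..360 degrees of hue
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

print(wavelength_to_false_color(300))   # (255, 0, 0): red end of the wheel
print(wavelength_to_false_color(350))   # (0, 255, 255): cyan, halfway around
```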

 


So say you want the response of the green channel G to a swatch of blue ColorChecker tile under white light.

 

Yes, I see the answer to my question now in the integral you presented. I was missing the understanding that for each channel we needed to know its response to a particular reflected color, the gamma factor above. Which of course is now so obvious that I'm not sure why I missed it. Isn't that always the way!

 

Mind you, I only partially understand those papers. But I wanted to understand that integral better.

 

Thank you!!

 

 


Andy Perrin
The point remains, though, that you could pre-define a false-color response: scale the 300-400 nm interval to 360 units and apply the color wheel.

 

Yeah, I know, but then you lose the ability to do hyperspectral imaging? You get tri-colors, sure, but not tri-colors that are meaningful and related to physical properties of objects in the scene.

 

Like, hypothetically, suppose we did it in visible light. You could have red trees and green skinned people and blue elephants? But that's not related to the actual spectral responses of those objects in visible light.


....not related to the actual spectral responses of those objects in visible light....

 

But that doesn't really matter in the UV band. There is no "real" way to relate actual spectral responses in UV to a color. It's all false color. So we can go with any application of color we want. Use the camera map. Or roll the color wheel across the 300-400 nm band.

 

 

Like, hypothetically, suppose we did it in visible light. You could have red trees and green skinned people and blue elephants? But that's not related to the actual spectral responses of those objects in visible light.

 

The color wheel is already made for the visible band. So unless you assigned the color wheel "backwards" to 400-700 nm, you aren't going to get blue elephants.

 

 

Are we talking past one another here? I R confuzed.


Let's not dive too deeply into color theory, because it is very very complex. We don't even perceive the same colors at different light intensities (Bezold-Brücke shift)...

 

We can't say that TriColour images are "true color" images, even if they probably are the closest thing we can get to that. If you want really true colors you have to render infrared as monochromatic red and UV as almost monochromatic violet (and out of focus), and there's more to say...

 

Nevertheless, I find these "simulated colors" fascinating. Rendering TriColour images with colors in the "right" order (red for the longest wavelengths, etc.) has two advantages: it makes images intuitive (if you see red, it means an object is emitting/reflecting/transmitting longer wavelengths more than the others, etc.), and the images look "realistic". As Bernard also said, you preserve blue skies, since shorter wavelengths are still scattered more, etc.

 

What I want to do is to find a way to render UV/IR (or any band really) images the way (approximately) our eyes would see them if they saw UV or IR etc. instead of visible light. What Bernard did was great and I really liked his images, but as he also pointed out with the filter set he used there is no channel overlap (link).

 

My last attempt, with the camera channel response mapped into UV, would simulate how a normal camera would see UV if it had the same color response there. Cameras are quite good at reproducing colors, so I thought that was a good method. The colors were odd, and the image looks a bit off, but that was the data I had available.

 

As Andy suggested, we first have to find a method that works in visible light. There we really do have true colors, so we can see how correct TriColour methods are. Only then can we apply them outside visible light.

 


I originally started this topic as a tutorial showing how I stack the channels in a TriColour image, without any concern about "color accuracy", but it has now become a discussion about the best method for taking those images and the correct way to blend them together (my original discussion was more about the best software/techniques for doing that). This is perfectly fine; it would be good to have a long, rich discussion and then write up the conclusions in a well-organized way. Something like this:

 

Method 1: standard three images stack

Suitable software, a suitable "method" (like using .tif instead of .jpg images, taking only the brightest channel from an image for the B&W conversion, etc.), and examples (Bernard's method, my method (to be improved), etc.);

 

Method 2: multispectral stack (using more than three channels...)

...

 

A mention of using different filters (like Bernard) or different light sources (like me), overlap, and so on...

 

And when we eventually do that, should we start a separate topic (even a sticky, why not)?

Andy Perrin

....not related to the actual spectral responses of those objects in visible light....

 

But that doesn't really matter in the UV band. There is no "real" way to relate actual spectral responses in UV to a color. It's all false color. So we can go with any application of color we want. Use the camera map. Or roll the color wheel across the 300-400 nm band.

 

 

Like, hypothetically, suppose we did it in visible light. You could have red trees and green skinned people and blue elephants? But that's not related to the actual spectral responses of those objects in visible light.

 

The color wheel is already made for the visible band. So unless you assigned the color wheel "backwards" to 400-700 nm, you aren't going to get blue elephants.

 

 

Are we talking past one another here? I R confuzed.

 

Yes, and I think even Stefano isn't following me.

 

First let's establish some key facts:

FACT: for visible light, if you specify the:
- scene lighting,
- Bayer filter bandpass distribution,
- camera sensitivity distribution, and
- the reflectance spectrum of some object,
then the color of that object in a photo is nailed down.

In other words, even the white balance is just an effort to correct for the fact that we don't have perfect knowledge of all those distributions, and our monitors aren't accurate color reproduction devices. But assuming a perfect monitor and perfect knowledge, the above would be enough to nail the color down. AND NO COLOR WHEELS ARE NECESSARY (in fact the actual color wheel is derived from the above, not the other way around).

 

FACT: the reflectance spectrum of an object is a reality independent of human beings, and has nothing to do with our eyes, regardless of wavelength.

This should be obvious.

 

My notion of what a "proper" tri-color should do is this:

PROPOSAL: The objective of a tri-color is to take photographs outside the visible spectrum and replace the "Bayer filter bandpass distribution" for R, G, and B with similarly shaped distributions in a spectral region of our choice, preserving the shape of the distributions so as to make an image as much like what our eyes would see if they worked in UV/IR as possible. Which means accepting whatever false colors naturally appear in the scene under this constraint.

As you can see, my proposal does NOT allow for arbitrary colors, except via how we rescale the equivalent of our fake-Bayer distributions to fit in the band of interest.

 

All those weird perceptual color effects (Abney, etc.) will happen automatically by themselves if you get the optics part right.


Isn't this 'correctly' what our cameras are seeing in the out-of-visible spectrum areas: these mixes of RGB, and however they fluoresce (if they do), realising that different brands/models will vary?

 

post-31-0-19821200-1623459744.jpg

Yes, but that's not the point. We can already take UV/IR photos with the colors our cameras see there, just look at the raw colors in a UV image for example. Here we are trying to find a method for making TriColour images in a "correct" way, and I used Jonathan's camera sensitivity graph (like yours above, but for a stock camera) to shift the curves into UV. I pretended the camera saw red at 360 nm instead of 650, and so on.
Andy Perrin

Colin, taking what our cameras actually see out of spectrum (using the built-in Bayer filters) is what we do now, but as you can see from your own image, the red and green channels basically fuse together into one color between 300 and 400 nm, so we aren't seeing all there is to see. On top of that, the R, G, and B channels are not peaked in the 300-400 nm range the way they are in visible light, which is key to mimicking the kind of colors we have in visible light.

 

The ideal situation would be three overlapping bandpass filters that have the same shapes as the ones you displayed, but MOVED and RESCALED (shrunk) into the 300-400 nm range. (Or equivalently, three light sources with those spectra.) Using the method in the article, one can SIMULATE this in the computer mathematically. I'm gonna try to put my money where my mouth is sometime soon and try the article's method in visible light using the TriWave and some Cokin filters I have. Then we'll see if we can detect chlorophyll and stuff like that.
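(In code, that move-and-rescale is just a change of the wavelength axis; a sketch of mine assuming the visible curves are tabulated as arrays:)

```python
import numpy as np

def shift_curve_to_uv(wl_vis, sensitivity, uv_lo=300.0, uv_hi=400.0):
    """Move and shrink a visible-band (400-700 nm) channel curve into
    300-400 nm, preserving its shape; only the wavelength axis changes."""
    wl_uv = uv_lo + (np.asarray(wl_vis) - 400.0) / 300.0 * (uv_hi - uv_lo)
    return wl_uv, np.asarray(sensitivity)
```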


Andy, I follow you. I agree with everything you said above, and your definition of TriColour is very much like mine.

 

Once you obtain a TriColour image in a "correct" way (is there really a correct way?), you can treat it like a normal visible light image, applying the necessary corrections to our imperfect vision.
