UltravioletPhotography

How to make a TriColour image, my method


Stefano


[since a "generally formal presentation" is required, any help from the admins is appreciated]

In a TriColour image we want to construct an image in which each RGB channel represents a certain wavelength band. This is analogous to how our eyes and cameras work in the visible spectrum: red light goes in the red channel, green light in the green channel and blue light in the blue channel. In a TriColour image we do the same, except we don't use red, green and blue light, but other bands, often in the UV or IR spectrum.
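In code terms, the end result is simply three aligned grayscale band images placed into the R, G and B channels of one image. Here is a minimal Python/numpy sketch of that idea (the file names are placeholders, and this shows only the concept, not the IrfanView workflow described next):

```python
# Minimal sketch: a tricolour image is three grayscale band images,
# one per RGB channel. File names are hypothetical placeholders.
import numpy as np
from PIL import Image

r = np.asarray(Image.open("940nm.jpg").convert("L"))  # red channel   <- 940 nm
g = np.asarray(Image.open("850nm.jpg").convert("L"))  # green channel <- 850 nm
b = np.asarray(Image.open("730nm.jpg").convert("L"))  # blue channel  <- 730 nm

tricolour = np.dstack([r, g, b]).astype(np.uint8)     # H x W x 3 array
Image.fromarray(tricolour, mode="RGB").save("tricolour.png")
```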

 

This is how I do it. I used an image I already posted here as my example.

 

To build this particular image I took three photos of the same subject at 730, 850 and 940 nm. If you use different light sources, it is important to place them in exactly the same spot. If you use the same light sources and filter the bands with bandpass filters, you have to be careful not to move your camera when changing filters. As Bernard already said, this technique is only suitable for static subjects. I suggest reading his topic too, where he describes his method.

 

Here are my images, converted to black and white (I took them directly in monochrome in-camera, and I only have the .jpgs). If your images have colors, it is very important to convert them to black and white.
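If you need to do that conversion outside an image editor, a rough Python sketch (file names are placeholders) would be:

```python
# Hedged sketch of the "convert to black and white" step for band images
# that still contain colour. File names are hypothetical placeholders.
from PIL import Image

for name in ("730nm.jpg", "850nm.jpg", "940nm.jpg"):
    grey = Image.open(name).convert("L")      # collapse to a single luminance channel
    grey.convert("RGB").save("bw_" + name)    # re-save as grey RGB (all three channels equal)
```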

 

730 nm:

post-284-0-08469500-1622879552.jpg

 

850 nm:

post-284-0-74079900-1622879559.jpg

 

940 nm:

post-284-0-31522200-1622879567.jpg

 

You can already see differences between the images.

 

Now I will transform these images into "channels".

 

I open them in IrfanView and go to the "Image" menu:

post-284-0-05080400-1622880520.png

 

and then go to "Color corrections..." (I couldn't take a screenshot of the dropdown window). Here you will find RGB sliders in the lower left:

post-284-0-03304800-1622880632.png

 

If you want to make a "red channel", drop the G and B sliders to zero. If you want to make a "green channel", drop the R and B sliders to zero, and for a "blue channel" drop R and G to zero.

 

I converted the images as follows:

 

Red: 940 nm;

Green: 850 nm;

Blue: 730 nm;
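If you would rather script this colouring step than use the sliders, here is a rough Python equivalent (not the IrfanView procedure itself; the file names are placeholders that assume the black-and-white images were saved with a bw_ prefix as in the earlier sketch):

```python
# Equivalent of dropping two sliders to zero: keep one channel, zero the others.
import numpy as np
from PIL import Image

def colorize(path, keep):
    """Return an RGB image that is zero everywhere except channel `keep` (0=R, 1=G, 2=B)."""
    grey = np.asarray(Image.open(path).convert("L"))
    out = np.zeros(grey.shape + (3,), dtype=np.uint8)
    out[..., keep] = grey
    return Image.fromarray(out, mode="RGB")

colorize("bw_940nm.jpg", 0).save("red_940nm.png")    # red channel   <- 940 nm
colorize("bw_850nm.jpg", 1).save("green_850nm.png")  # green channel <- 850 nm
colorize("bw_730nm.jpg", 2).save("blue_730nm.png")   # blue channel  <- 730 nm
```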

 

This is how they should look after the procedure:

 

730 nm:

post-284-0-62079500-1622880995.jpg

 

850 nm:

post-284-0-08164600-1622881005.jpg

 

940 nm:

post-284-0-59365900-1622881016.jpg

 

The last step is the stacking. I use a program called "Image Stacker" for that. If you are going to use the same software, remember to select "Stack":

post-284-0-71222100-1622881132.png

 

and this is the final result:

post-284-0-40513300-1622881173.jpg
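For anyone who prefers to script the stacking, here is a rough Python equivalent, assuming "Stack" simply adds the pixel values of the three coloured images (file names follow the earlier sketch; this is not Image Stacker itself):

```python
# Assumed behaviour of the "Stack" step: per-pixel addition of the three
# single-channel colour images, clipped back to 8 bits.
import numpy as np
from PIL import Image

layers = ["red_940nm.png", "green_850nm.png", "blue_730nm.png"]
acc = np.zeros_like(np.asarray(Image.open(layers[0]).convert("RGB")), dtype=np.uint16)
for name in layers:
    acc += np.asarray(Image.open(name).convert("RGB"), dtype=np.uint16)

result = np.clip(acc, 0, 255).astype(np.uint8)
Image.fromarray(result, mode="RGB").save("tricolour_stacked.png")
```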

 

Placing a neutral target (such as PTFE) in the images can help balance the colors. One advantage of doing the white balance in IrfanView is that it just re-weights the channels, without creating anything that wasn't there.
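A re-weighting-only balance of that kind could also be scripted; a hedged sketch, assuming a rectangular patch of the image covers the PTFE target (the coordinates and file names are placeholders):

```python
# Pure re-weighting white balance: scale each channel so the mean over a
# neutral (PTFE) patch comes out equal. Patch coordinates are hypothetical.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("tricolour_stacked.png").convert("RGB"), dtype=np.float64)

y0, y1, x0, x1 = 100, 150, 200, 250                 # region covering the PTFE target
patch_mean = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

gains = patch_mean.mean() / patch_mean              # per-channel multipliers only
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
Image.fromarray(balanced, mode="RGB").save("tricolour_balanced.png")
```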

 

Other members use different software; you can use whichever you prefer.

Link to comment

Thanks Stefano

If you are using mono images, where does the colour come from when you move the RGB sliders?

Link to comment
They are actually color images with the three channels equal. By removing two of them you are left with one component only.
Link to comment

" Here are my images, converted to black and white (I took them directly in monochrome in-camera, and I only have the .jpgs). If your images have colors, it is very important to convert them to black and white. "

Even though you converted to black & white, is there still colour information?

Link to comment
No, there is no longer any color information. You still have three channels, but they are exactly the same. The only color is grey, lighter or darker.
Link to comment

They are essentially the same black and white images, but "colored" red, green and blue. When you have all three channels, the image looks gray. When you have the red channel only, for example, you have the same exact image, but "black and red" instead of black and white.

 

You have color information when the channels are not the same. For example, a red object in a color image is bright in the red channel and dark in the green and blue channels. Here I intentionally converted the images to black and white to get all the brightness information they have (averaging the channels, in a way) and condense it into one custom colored image.

 

You can see this in two ways:

- these black and white images are actually color images that appear black and white, and I have taken the red, green and blue components;

- I colored the black and white images in red, green and blue.

 

I know it may sound confusing, I am trying my best to explain it.

Link to comment

Yes, it's free for images up to 640x480 and up to 10 images in a single stack. If you buy it there are no restrictions; I used to stack 200 images with my old camera.

 

I am sure you could write code in MATLAB that does the same.

 

That is the same software that gives me issues with .tif files. If I find a fix I will report back.

Link to comment

Oh, I can't write MATLAB; I don't own it and don't know how. But I should learn the free alternative, Octave, one day. Then I could use my Lodestar camera better, or write something to see it as an attached web camera on my Android phone.

 

I now see the difference, though. The Bernard method is to take your images, align them, and then merge them as separate channels in software that allows adding images as separate channels, like GIMP or Photoshop. I can't see how to do it in Affinity Photo, but it might be possible.

 

The Stefano method is to create the red, green and blue images from the source images, then use focus stacking software to blend them together.

 

Any focus stacking or general stacking software could be used. You may even be able to do all of this in ImageJ: a first pass to get your color files, then a second pass to merge them with a plugin.

 

I am not sure which method would be best. You both are running into WB and possible tone issues.

Link to comment

Stefano is creating the actual red, green and blue images in those colors from the images taken with the specific filters. The way he is doing it may not be best.

These are alternatives I see:

 

1. You can take an image with your filter. Process it from raw to a TIFF, getting the white balance and everything else adjusted in whatever photo editor you like, and save it as a .tif file. Open that file in ImageJ and separate out the red, green and blue channels. Save those as individual files labelled with the filter bandpass. Do the exact same thing for the two other bandpass filter images. Open your favorite image stacking software and select just one of the color output files from each image to stack (merge) together. This should do the alignment work for you.

 

2. Process the image and make it just one channel, just as Stefano describes above.

 

3. Use dcraw or 4channels to cut out the individual color response channels from your raw file, with no demosaicing. Then make that image colored in ImageJ, or use the Bernard method. If colored, stack in your favorite stacking software.
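As a rough illustration of option 3 without dcraw/4channels, something like the rawpy Python library can pull a single Bayer colour plane out of a raw file with no interpolation (the file name and the choice of the red plane are placeholders; this is a sketch, not a tested workflow):

```python
# Sketch: extract one Bayer colour plane (quarter resolution, no demosaicing).
# Assumes the rawpy library; file name and channel choice are placeholders.
import numpy as np
import rawpy

with rawpy.imread("IMG_0001.CR2") as raw:
    bayer = raw.raw_image_visible.astype(np.float64)
    pattern = raw.raw_pattern                    # 2x2 tile of colour indices (0=R, 1=G, 2=B, 3=G2)
    row, col = np.argwhere(pattern == 0)[0]      # position of the red photosites in the tile
    red_plane = bayer[row::2, col::2]            # quarter-resolution red plane, no interpolation

# red_plane can then be scaled to 16 bit, coloured and stacked as in the other methods.
```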

 

I may need to play with this. The thing is that certain dyes on our sensors respond best to a specific bandpass filter; the other channels just blur or add noise to the image. Thus, even at only 1/4 resolution, the output from a 4channels method looks better.

Link to comment
Andy Perrin
David, you own a monochrome camera, so I don’t understand why you don’t just use that and eliminate the issues with Bayer filters.
Link to comment

Stefano, thank you for this tutorial. We really like to see this kind of write-up which stimulates discussion and helps newbies learn how to do things.

 

(I will pass along a few minor suggestions to you in a PM. Later when I have a bit of time.)

Link to comment

David, you own a monochrome camera, so I don’t understand why you don’t just use that and eliminate the issues with Bayer filters.

 

Andy, I wanted discussion here for others who don't own a monochrome camera. Also, I like the output from the Em5mk2 the best.

But each camera I own is good at one thing.

Link to comment

In the first method, I don't understand why one should white balance the images. You are going to turn them into channels anyway, so the color of the single exposures doesn't matter.

 

Is it better to extract the brightest channel only from the images? Are you sure you add noise if the other channels are underexposed?

Link to comment
Andy Perrin
I don’t think white balancing the individual channels is a good idea either. I agree it’s best to take the brightest channel in the raw data (with no processing of the colors or anything else). This should all be done with 16 bit channels also (use TIFF or PNG).
Link to comment

I like that we are perfecting my method.

 

Another thing: when the channels need to be calibrated in the final image, so that a block of PTFE appears white for example, we need a way to do this kind of white balance by only re-weighting the channels, and as I already mentioned, IrfanView does that (and it should support .tif files too). Is there other software that does this, in case someone wants to use it instead?

Link to comment
Andy Perrin
Well, of course, in MATLAB or any programming language it would be easy. But normally white balance is not done by just re-weighting channels.
Link to comment

An alternative I can think of is to measure the color of the white standard after the stacking. If it is something like 220, 200, 240, one can then increase the red channel by 35 units, the green by 55 and the blue by 15, to obtain 255, 255, 255. This way the image will be balanced.

 

The colors can be measured in many programs; Paint can do it, but only for a single pixel as far as I know. An average over an area is better.
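Measuring such an average is a one-liner in most scripting environments; for example, a small Python sketch (the file name and region are placeholders):

```python
# Average R, G, B over a rectangular area instead of a single pixel.
# File name and region coordinates are hypothetical placeholders.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("tricolour_stacked.png").convert("RGB"))
region = img[100:150, 200:250]                  # area covering the white standard
print(region.reshape(-1, 3).mean(axis=0))       # mean R, G, B over that area
```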

Link to comment
Andy Perrin

Stefano, that is a bad idea because it alters the saturation. I don’t think a white balance is necessary until the final step and then it’s perfectly okay to use PN’s WB tool. I don’t know why you don’t like that idea since the colors are false even for a tricolor.

 

May I suggest an experiment? Let’s do our procedure using visible light bandpass filters on a monochrome camera and see what comes closest to reproduction of a normal photo. Then we can try in other bands.

Link to comment

I meant to do the channel correction as a last step, after the stacking, as I said. And I don't understand how it can alter the saturation; it just corrects the exposure of each channel so that a white/gray object (flat reflectance) appears white or gray (one doesn't have to push it to 255, 255, 255, just to whatever values produce a correctly exposed image, as long as the RGB values are all equal).

 

I don't like the idea of a typical white balance because it isn't just a channel re-weighting; it can even create a channel that wasn't there, as we once showed with Birna's help. But that was quite an extreme case.

 

For mild white balances, the difference between the two algorithms is small, almost zero, so it's not a big problem. But if one wants a scientifically accurate image (and I am one of those people), then it matters.

 

Your experiment is a nice idea; I can try it with my LEDs (I think 450, 520 and 635 nm may be good), and it can tell us more about colors.

 

All this discussion of course doesn't matter if one doesn't care about color accuracy and is fine with visually-appealing images. One can assign the colors in other orders, and manipulate them in any way.

Link to comment
Andy Perrin

Stefano, adding the same value to all three channels is equivalent to adding gray to whatever the original color was, therefore it decreases saturation.
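A quick numeric check of that claim, using an arbitrary colour and Python's colorsys module:

```python
# Adding the same amount to R, G and B leaves the hue alone but lowers the HSV saturation.
import colorsys

r, g, b = 200, 120, 80                                        # arbitrary starting colour
h0, s0, v0 = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
h1, s1, v1 = colorsys.rgb_to_hsv((r + 40) / 255, (g + 40) / 255, (b + 40) / 255)
print(round(s0, 2), round(s1, 2))                             # 0.6 0.5 -> saturation drops
```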

 

But if one wants a scientifically-accurate image (and I am one of those people), then it matters.

I think you are deceiving yourself there. The issue is the idea that mixing channels when white balancing makes the result scientifically inaccurate: when you have overlapping filters, mixing is exactly the right thing to do to unmix the channels. This is because, in the general case (such as a Bayer array with overlapping spectra), each wavelength can contribute to all three channels, so a rotation in color space is needed to unmix them in the overlap regions. If you have bandpass filters that truly have no overlap, then a simple rescaling would suffice, though.
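A small numeric sketch of that distinction (the matrix values below are invented for illustration, not measured filter data):

```python
# With overlapping filter responses the correction is a full 3x3 unmixing matrix;
# with truly non-overlapping bands it collapses to a diagonal, i.e. a per-channel rescale.
import numpy as np

pixel = np.array([0.60, 0.30, 0.10])              # recorded R, G, B for one pixel (made up)

# Overlapping case: each band leaks into the neighbouring channels.
mixing = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.85, 0.05],
                   [0.05, 0.10, 0.85]])
unmixed = np.linalg.solve(mixing, pixel)          # needs the full matrix (a rotation in colour space)

# Non-overlapping case: the matrix is diagonal, so unmixing is just re-weighting.
diag_response = np.array([0.80, 0.85, 0.85])
rescaled = pixel / diag_response                  # a simple per-channel rescale suffices
print(unmixed, rescaled)
```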

Link to comment
