
How to make a TriColour image, my method

TriColour Multispectral

#1 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 08:26

[Since a "generally formal presentation" is required, any help from the admins is appreciated]
In a TriColour image we want to construct an image in which each RGB channel represents a certain wavelength band. This is analogous to how our eyes and cameras work in the visible spectrum, with red light in the red channel, green light in the green channel and blue light in the blue channel. In a TriColour image we do the same, except we don't use red, green and blue light, but other bands, often in the UV or IR spectrum.
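(For those who prefer scripting, the whole idea boils down to putting each band into one channel of a new image. Here is a minimal Python sketch, assuming NumPy and Pillow and three same-size grayscale files whose names are only placeholders; I do it with IrfanView and Image Stacker myself, as described below.)

import numpy as np
from PIL import Image

# Placeholder file names: one grayscale image per band, all the same size.
band_for_red   = np.array(Image.open("band_red.png").convert("L"))
band_for_green = np.array(Image.open("band_green.png").convert("L"))
band_for_blue  = np.array(Image.open("band_blue.png").convert("L"))

# Each band becomes one RGB channel of the TriColour image.
tricolour = np.dstack([band_for_red, band_for_green, band_for_blue])
Image.fromarray(tricolour).save("tricolour.png")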

This is how I do it. I used an image I already posted here as my example.

To build this particular image I took three photos of the same subject at 730, 850 and 940 nm. If you use different light sources, it is important to place them in the exact same spot. If you use the same light sources and select the bands with bandpass filters, you have to be careful not to move the camera when changing them. As Bernard already said, this technique is only suitable for static subjects. I suggest reading his topic too, where he describes his method.

Here are my images, converted to black and white (I took them directly in monochrome in-camera, and I only have the .jpgs). If your images have color, it is very important to convert them to black and white first.
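(If you would rather script the conversion than use an editor, a Python/Pillow snippet like this does the same thing; the file name is only an example.)

from PIL import Image

img = Image.open("730.JPG")      # example file name
grey = img.convert("L")          # collapse the three channels into one grey channel
grey.save("730_grey.jpg")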

730 nm:
Attached Image: 730.JPG

850 nm:
Attached Image: 850.JPG

940 nm:
Attached Image: 940.JPG

You can already see differences between the images.

Next, I transform these images into "channels":

I open them in IrfanView and go to the "Image" menu:
Attached Image: Cattura7.PNG

and then go to "Color corrections..." (I couldn't take a screenshot of the dropdown window). Here you will find RGB sliders in the lower left:
Attached Image: Cattura2.PNG

If you want to make a "red channel", drop the G and B sliders to zero. If you want to make a "green channel", drop the R and B sliders to zero, and for a "blue channel" drop the R and G sliders to zero.
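(The slider trick can also be written as a couple of lines of Python, if that makes it clearer. This is only an illustration of what the result amounts to, assuming NumPy and Pillow; the file names are examples.)

import numpy as np
from PIL import Image

grey = np.array(Image.open("940_grey.jpg").convert("RGB"))  # example file name

red_channel = grey.copy()
red_channel[..., 1] = 0   # drop green to zero
red_channel[..., 2] = 0   # drop blue to zero
Image.fromarray(red_channel).save("940_red.png")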

I converted the images as follows:

Red: 940 nm;
Green: 850 nm;
Blue: 730 nm;

This is how they should look after the procedure:

730 nm:
Attached Image: 730.JPG

850 nm:
Attached Image: 850.JPG

940 nm:
Attached Image: 940.JPG

The last step is the stacking. I use a program called "Image Stacker" for that. If you are going to use the same software, remember to select "Stack":
Attached Image: Cattura6(2).PNG

and this is the final result:
Attached Image: Somma.jpg
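(As far as I understand it, the "Stack" mode simply adds the images together pixel by pixel, which works here because each image only occupies one channel. A rough Python equivalent, with example file names, would be:)

import numpy as np
from PIL import Image

r = np.array(Image.open("940_red.png"),   dtype=np.uint16)
g = np.array(Image.open("850_green.png"), dtype=np.uint16)
b = np.array(Image.open("730_blue.png"),  dtype=np.uint16)

# Per-pixel sum; each input contributes a different channel, so nothing clips
# unless the originals already did.
total = np.clip(r + g + b, 0, 255).astype(np.uint8)
Image.fromarray(total).save("tricolour_sum.png")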

Placing a neutral target in the images (such as PTFE) can help to balance colors. One advantage of doing the white balance in IrfanView is that it just re-weights the channels, without creating anything that wasn't there.
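(Re-weighting by hand is easy to sketch too: scale each channel so the PTFE patch ends up with equal R, G and B. The patch coordinates below are made up; NumPy and Pillow assumed again.)

import numpy as np
from PIL import Image

img = np.array(Image.open("tricolour_sum.png"), dtype=np.float64)

patch = img[100:150, 200:250, :]            # hypothetical region covering the PTFE block
means = patch.reshape(-1, 3).mean(axis=0)   # average R, G, B over the patch

scale = means.max() / means                 # pure per-channel re-weighting
balanced = np.clip(img * scale, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("tricolour_balanced.png")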

Other members use different software, and you can use whatever you prefer.

Edited by Stefano, 05 June 2021 - 22:14.


#2 colinbm

    Member

  • Members+G
  • 2,625 posts
  • Location: Australia

Posted 05 June 2021 - 09:19

Thanks Stefano
If you are using mono images, where does the colour come from when you move the RGB sliders?

#3 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 09:23

They are actually color images with the three channels equal. By removing two of them you are left with one component only.

#4 colinbm

    Member

  • Members+G
  • 2,625 posts
  • Location: Australia

Posted 05 June 2021 - 09:50

" Here are my images, converted to black and white (I took them directly in monochrome in-camera, and I only have the .jpgs). If your images have colors, it is very important to convert them to black and white. "
Even though you converted to black & white, is there still colour information?

#5 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 10:00

No, there is no longer any color information. You still have three channels, but they are exactly the same. The only color is grey, lighter or darker.

#6 colinbm

    Member

  • Members+G
  • 2,625 posts
  • Location: Australia

Posted 05 June 2021 - 10:10

So where do the RGB images come from?

#7 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 10:27

They are essentially the same black and white images, but "colored" red, green and blue. When you have all three channels, the image looks gray. When you have the red channel only, for example, you have the same exact image, but "black and red" instead of black and white.

You have color information when the channels are not the same. For example, a red object in a color image is bright in the red channel and dark in the green and blue channels. Here I intentionally converted the images to black and white to get all the brightness information they have (averaging the channels, in a way) and condense it into one custom colored image.

You can see this in two ways:
- these black and white images are actually color images that appear black and white, and I have taken the red, green and blue components;
- I colored the black and white images in red, green and blue.

I know it may sound confusing; I am trying my best to explain it.

#8 colinbm

    Member

  • Members+G
  • 2,625 posts
  • Location: Australia

Posted 05 June 2021 - 10:32

No worries, thanks Stefano. I'll get some images and have a go at it.

#9 dabateman

    Da Bateman

  • Members+G
  • 2,933 posts
  • Location: Maryland

Posted 05 June 2021 - 11:49

Stefano is this the software you use:
https://www.tawbaware.com/imgstack.htm

The advantage of the Bernard method is that all the software used is free.


#10 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 12:00

Yes, it's free for images up to 640x480 and up to 10 images in a single stack. If you buy it there are no restrictions; I used to stack 200 images with my old camera.

I am sure you could write code in MATLAB that does the same.

That is the same software that gives me issues with .tif files. If I find a fix I will report back.

Edited by Stefano, 05 June 2021 - 12:05.


#11 dabateman

    Da Bateman

  • Members+G
  • 2,933 posts
  • Location: Maryland

Posted 05 June 2021 - 12:23

Oh, I can't write in MATLAB; I don't own it and don't know how. But I should learn the free alternative, Octave, one day. Then I could use my Lodestar camera better, or write something to use it as an attached web camera on my Android phone.

I now see the difference, though. The Bernard method is to take your images, align them, and then merge them as separate channels in software that allows adding images as separate channels, like Gimp or Photoshop. I can't see how to do it in Affinity Photo, but it might be possible.

The Stefano method is to create the red, green and blue images from the source images, then use focus-stacking software to blend them together.

Any focus stacking or other stacking software could be used. You may even be able to do all of this in ImageJ: a first pass to get your color files, then a second pass to merge them with a plugin.

I am not sure which method would be best. You are both running into white balance and possibly tone issues.

Edited by dabateman, 05 June 2021 - 12:23.


#12 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 12:44

You gave me an idea, I have ImageJ, the stacking can be done there.

#13 colinbm

    Member

  • Members+G
  • 2,625 posts
  • Location: Australia

Posted 05 June 2021 - 12:53

How is this stacking different to 'flattening' in layers in Photoshop Elements ?

#14 dabateman

    Da Bateman

  • Members+G
  • 2,933 posts
  • Location: Maryland

Posted 05 June 2021 - 14:26

Stefano is creating the actual red, green and blue images, colored according to the bandpass each one was taken through. The way he is doing it may not be the best.
These are the alternatives I see:

1. Take an image with your filter and process it from raw to a TIFF: get the white balance and everything else adjusted in whatever photo editor you like, then save it as a .tif file. Open that file in ImageJ and separate out the red, green and blue channels, saving them as individual files labeled with the filter bandpass. Do exactly the same thing for the two other bandpass filter images. Then open your favorite image stacking software and select one color output file from each bandpass to stack (merge) together. This should do the alignment work for you.

2. Process the image and make it just one channel. Just like Stefano indicates above.

3. Use dcraw or 4channels to cut out the individual color response channels from your raw file, with no demosaicing. Then color that image in ImageJ, or use the Bernard method. If colored, stack in your favorite stacking software (a rough sketch of this idea follows below).

I may need to play with this. The thing is that certain dyes on our sensors respond best to a specific bandpass filter; the other channels just blur the image or add noise. So even at only 1/4 resolution, the output from the 4channels method looks better.
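Something along these lines should also work in Python with the rawpy library, as a rough, untested sketch of option 3 (the file name is made up, and the photosite offsets depend on your sensor's Bayer layout):

import numpy as np
import rawpy
from PIL import Image

raw = rawpy.imread("730nm.orf")          # made-up raw file name
mosaic = raw.raw_image_visible           # the Bayer mosaic, no demosaicing

# For an RGGB layout the red photosites sit at even rows and even columns;
# check raw.raw_pattern and adjust the offsets for other layouts.
red_sites = mosaic[0::2, 0::2].astype(np.uint16)
Image.fromarray(red_sites).save("730nm_red_quarter.tif")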

#15 Andy Perrin

    Member

  • Members+G
  • 4,313 posts
  • Location: United States

Posted 05 June 2021 - 15:32

David, you own a monochrome camera, so I don’t understand why you don’t just use that and eliminate the issues with Bayer filters.

#16 Andrea B.

    Desert Dancer

  • Owner-Administrator
  • 8,987 posts
  • Location: UVP Western Division, Santa Fe, New Mexico

Posted 05 June 2021 - 16:44

Stefano, thank you for this tutorial. We really like to see this kind of write-up which stimulates discussion and helps newbies learn how to do things.

(I will pass along a few minor suggestions to you in a PM. Later when I have a bit of time.)
Andrea G. Blum
Often found hanging out with flowers & bees.

#17 dabateman

    Da Bateman

  • Members+G
  • 2,933 posts
  • Location: Maryland

Posted 05 June 2021 - 16:48

Andy Perrin, on 05 June 2021 - 15:32, said:

David, you own a monochrome camera, so I don’t understand why you don’t just use that and eliminate the issues with Bayer filters.

Andy, I wanted discussion here for others who don't own a monochrome camera. Also, I like the output from the E-M5 Mark II the best.
But each camera I own is good at one thing.

#18 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 17:38

With the first method, I don't understand why one should white balance the images. You are going to turn them into channels anyway, so the color of the individual exposures doesn't matter.

Is it better to extract the brightest channel only from the images? Are you sure you add noise if the other channels are underexposed?

#19 Andy Perrin

    Member

  • Members+G
  • 4,313 posts
  • Location: United States

Posted 05 June 2021 - 18:03

I don’t think white balancing the individual channels is a good idea either. I agree it’s best to take the brightest channel from the raw data (with no processing of the colors or anything else). This should all be done with 16-bit channels too (use TIFF or PNG).
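For illustration, picking the brightest channel from a 16-bit TIFF can be sketched like this in Python (assuming the tifffile package and a TIFF exported with no color processing; the file name is just an example):

import tifffile

img = tifffile.imread("850nm.tif")            # 16-bit RGB TIFF, shape (H, W, 3)

means = img.reshape(-1, 3).mean(axis=0)       # average level of each channel
best = int(means.argmax())                    # pick the brightest channel

tifffile.imwrite("850nm_best_channel.tif", img[..., best])   # stays 16-bit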

Edited by Andy Perrin, 05 June 2021 - 18:04.


#20 Stefano

    Member

  • Members(+)
  • 2,080 posts
  • Location: Italy

Posted 05 June 2021 - 18:20

I like that we are perfecting my method.

Another thing: when the channels need to be calibrated in the final image (so that a block of PTFE is white, for example), we need a way to do this kind of white balance by only re-weighting the channels. As I already mentioned, IrfanView does that (and it should support .tif files too). Are there other programs that can do this, in case someone wants to use them instead?