
Discussion on methods for TriColour video


Stefano


The TriColour technique is probably my favourite way of representing false color in invisible-light photos (UV, IR, and in principle any other band of the EM spectrum). False colors will always be false, but I feel this technique produces "truer" false colors, as there is a logical meaning behind them.

 

Traditionally, this is done by taking three photos of the subject at three different wavelengths. The images must be superimposable: the subject must stay still, the lighting must not change, and the images must be taken from the same point of view, otherwise color fringing will occur.
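In software, the technique boils down to stacking the three registered monochrome frames as the R, G and B channels of one image, longest wavelength mapped to red and shortest to blue. Here is a minimal Python/OpenCV sketch (the filenames and the per-channel normalisation are placeholders for whatever your workflow produces):

```python
import cv2
import numpy as np

# Load the three registered grayscale exposures (filenames are placeholders).
long_band  = cv2.imread("band_long.png",  cv2.IMREAD_GRAYSCALE)   # e.g. longest UV band
mid_band   = cv2.imread("band_mid.png",   cv2.IMREAD_GRAYSCALE)   # e.g. middle band
short_band = cv2.imread("band_short.png", cv2.IMREAD_GRAYSCALE)   # e.g. shortest band

def normalise(img):
    """Stretch one channel to the full 0..1 range so no band dominates."""
    img = img.astype(np.float32)
    return (img - img.min()) / max(img.max() - img.min(), 1e-6)

# OpenCV stores channels in B, G, R order: shortest band -> blue, longest -> red.
tricolour = cv2.merge([
    (normalise(short_band) * 255).astype(np.uint8),
    (normalise(mid_band)   * 255).astype(np.uint8),
    (normalise(long_band)  * 255).astype(np.uint8),
])
cv2.imwrite("tricolour.png", tricolour)
```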

 

This is fine if the conditions above are met, and I have taken some images this way with little to basically no color defects. For video, however, things are quite different. Normal color sensors have subpixels for red, green and blue, so every frame captures all three spectral bands at the same time. Outside the visible spectrum, where such sensors are not available, one has to use different strategies.

 

Method 1: naive method

The most obvious approach is to use three cameras mounted as close together as possible, each with a different filter on its lens, taking their frames at the same time. This works fine for faraway subjects, but at close distances parallax becomes obvious.

[Image: Method1.jpg]

 

Pros:

- simple to implement;

 

Cons:

- needs three sensors and three lenses;

- parallax at close distances (a rough estimate is sketched below).
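To put a number on that last point, the standard stereo-disparity formula d = f·B/Z gives the offset at the sensor between two adjacent cameras. A quick sketch, where the baseline, focal length and pixel pitch are all assumptions:

```python
# Rough parallax estimate for Method 1: how far (in pixels) the same point
# lands on two side-by-side cameras. All numbers below are assumptions.
baseline_m    = 0.06      # 6 cm between adjacent lens axes
focal_m       = 0.05      # 50 mm lens
pixel_pitch_m = 4e-6      # 4 um pixels

for distance_m in (0.5, 2.0, 10.0, 100.0):
    # Stereo disparity d = f * B / Z, converted to pixels.
    disparity_px = focal_m * baseline_m / (distance_m * pixel_pitch_m)
    print(f"{distance_m:6.1f} m -> {disparity_px:8.1f} px of fringing")
```

With these numbers the channels are misaligned by roughly 1500 px at half a metre, but under 10 px at 100 m, which is why the method only works for faraway subjects.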

 

Method 2: filter wheel

I discussed this idea here: a spinning filter wheel is placed in front of the lens, and the setup is timed so that the sensor takes a frame every time a new filter is in place. If done quickly enough, this could allow for TriColour video. The problem is that fringing would be visible for fast-moving objects, and the sensor will have different sensitivities at different wavelengths, so ND filters might be needed.

[Image: Method2.jpg]

 

Pros:

- only one sensor and one lens are needed;

 

Cons:

- difficult to build (the sensor and the filter wheel must be synchronized; see the timing sketch below);

- the lens must be corrected for chromatic aberration.
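For a feel of the timing requirements, a quick calculation (the target frame rate and the usable duty cycle per filter are assumptions):

```python
# Timing sketch for Method 2: how fast the wheel must spin and how long
# each exposure window can be. Target numbers are assumptions.
filters_per_wheel = 3      # one filter per spectral band
tricolour_fps     = 24     # desired output frame rate
duty_cycle        = 0.5    # fraction of each slot with the filter fully in place

raw_fps        = tricolour_fps * filters_per_wheel   # sensor frame rate
wheel_rpm      = tricolour_fps * 60                  # one RGB set per revolution
exposure_max_s = duty_cycle / raw_fps                # usable window per filter

print(f"sensor must run at {raw_fps} fps")
print(f"wheel speed: {wheel_rpm:.0f} RPM")
print(f"max exposure per filter: {exposure_max_s * 1e3:.1f} ms")
```

So for 24 fps output the sensor already has to run at 72 fps with at most ~7 ms of exposure per filter, which makes the sensitivity differences between bands even more pressing.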

 

Method 3: dichroic mirrors

To take three images at three different wavelengths at the same time and from the same point of view, dichroic mirrors can be used. They reflect certain wavelengths and transmit others, essentially splitting the image into spectral bands. The biggest downside is that the image plane cannot be close to the rear element, so the lens must either have a long focal length or be strongly retrofocus in design.

[Image: Method3.jpg]

As for the retrofocus lens, here's a very raw attempt, at f/8:

[Image: Screenshot2023-12-18013147.jpg]

 

Pros:

- allows for true simultaneous images without parallax;

- corrects chromatic aberration (by adjusting the position of each sensor individually);

 

Cons:

- requires three sensors;

- for wide angle images, the lens must be strongly retrofocus, which makes it difficult to design;

- dichroic mirrors for UV are not easily available (interference filters tilted at 45° might work instead, although they are usually designed for near-perpendicular light beams).

 

A similar technique has been successfully used here.
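Even with carefully positioned sensors, some residual misalignment between the three channels is likely, and it can be cleaned up in software. A sketch in Python/OpenCV, assuming three already-captured frames; the gain values are placeholders to be measured against a known target, and the sign convention of phaseCorrelate is worth double-checking:

```python
import cv2
import numpy as np

def register_channel(ref, img):
    """Estimate the residual translation between two channel frames with
    phase correlation and shift img onto ref's pixel grid.
    (Verify the sign convention of phaseCorrelate on your OpenCV build.)"""
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(img))
    warp = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(img, warp, (ref.shape[1], ref.shape[0]))

def merge_tricolour(long_f, mid_f, short_f, gains=(1.0, 1.4, 3.0)):
    """Register the mid and short bands onto the long band, apply per-channel
    gains to balance the different sensor sensitivities, and merge to BGR."""
    mid_f   = register_channel(long_f, mid_f)
    short_f = register_channel(long_f, short_f)
    channels = [np.clip(c.astype(np.float32) * g, 0, 255).astype(np.uint8)
                for c, g in ((short_f, gains[2]),   # blue  <- shortest band
                             (mid_f,   gains[1]),   # green <- middle band
                             (long_f,  gains[0]))]  # red   <- longest band
    return cv2.merge(channels)
```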

 

Method 4: dichroic mirrors with image screen

This is a possible improvement on the previous method. The camera is the same as before, but the image is first projected onto a screen by a first lens, and the screen is then imaged by a second lens with a longer focal length. This way a retrofocus lens is not needed. To increase the brightness of the image, the screen could be made with a microlens array or a Fresnel lens, although I doubt it would work much better.

 

Something similar was used by Andy for his early SWIR experiments: https://www.ultravioletphotography.com/content/index.php?/topic/2112-swir-camera-setup-and-some-pics

 

[Image: Method4.jpg]

Pros:

- allows for true simultaneous images without parallax;

- doesn't need a retrofocus lens;

 

Cons:

- requires two lenses;

- the sensitivity is likely lower than in the previous method (a rough estimate follows this list), which is a problem especially for UVB;

- the first lens must be corrected for chromatic aberration.
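To put a rough number on the sensitivity loss, assume the screen behaves as an ideal Lambertian diffuser: the second lens then only collects the fraction of the diffused light falling inside its acceptance cone, about sin²θ ≈ (1/(2N))² for a working f-number N.

```python
# Back-of-envelope light loss at the diffusing screen (ideal Lambertian
# assumption; a real screen, microlens array or Fresnel design will differ).
# A Lambertian surface emits fraction sin^2(theta) of its light into a cone
# of half-angle theta, and the relay lens accepts sin(theta) ~ 1/(2N).
for f_number in (1.4, 2.8, 5.6):
    captured = (1.0 / (2.0 * f_number)) ** 2
    print(f"f/{f_number}: ~{100 * captured:.1f}% of the screen light is collected")
```

Even a fast f/1.4 relay lens collects only about 13% of the light, before the screen's own losses are counted, so the UVB channel in particular would suffer.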

 

To connect multiple sensors, I think a Raspberry Pi or similar could be used.
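As a sketch of what that could look like with the picamera2 library on a Raspberry Pi (camera indices and resolution are assumptions, and truly simultaneous frames would still need hardware triggering):

```python
from picamera2 import Picamera2

# Open two camera modules by index (the Pi 5 has two native CSI ports);
# extend the range for more sensors behind a multiplexer board.
cams = [Picamera2(i) for i in range(2)]
for cam in cams:
    cam.configure(cam.create_video_configuration(main={"size": (1280, 720)}))
    cam.start()

# Grab one frame per camera, as close together in time as software allows.
frames = [cam.capture_array() for cam in cams]

for cam in cams:
    cam.stop()
```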

 

I had other more exotic ideas (like using phosphors excited by different wavelengths), but I don't think they could be practically built. I think method #3 is the most reasonable.


Well, method 3 is what Panasonic did for their 3CCD video cameras. I have one, but it's broken, as it was dropped.

The new Pi 5 boards can natively support two camera sensors without switching. You may be able to adapt one to handle four cameras with switching. Then you could do UV/blue/green/red or blue/green/red/IR.

I built a rig with all HQ cameras, but never played with instantaneous capture on all of them.

There may be more apps and ways to do it now, though.

I would say the Raspberry Pi way would be best, as it's fully open to experiment with.


3CCD cameras achieved color splitting in a more compact way, using a dichroic prism. Such prisms are hard to make DIY, and I don't think they exist pre-made for UV or IR.

 

For UV, at least the UVB sensor must be monochrome for sensitivity, and you need dichroic mirrors that work in UV (for example with cut-offs at 320 nm, 360 nm, etc.; so far I have found this for 350 nm).

 

And the lens would be hard to design. I have some experience using Winlens 3D, and designing lenses is not easy. You often manage to minimise spherical aberration, but end up with a lot of field curvature or barrel distortion. Minimising all aberrations at the same time is quite hard.

