UltravioletPhotography

Bayer filter transmission in UV and IR



I am with Andy :)

 

A further note: If you look at the chart in #30, at wavelengths between 300 and 340 you get almost the same response from all channels, so the colour you get will be very similar as well. Assuming it is green (which is what we seem to get after WB), we cannot distinguish any wavelength; we cannot even tell how far down the response of the sensor goes. Going further down towards 300 might just give a darker green?

Link to comment

As I may have said before (or maybe that was in my post that vanished), you can only do the inversion to at best the same dimensionality as the data. To put it more simply: if you measure in three channels, you can only invert to three parameters. Hue is defined in relation to the three human visual receptors. So for a UV image you could invent three basis functions in the UV range - these could be, say, Gaussian curves covering the range of interest (call them UVs, UVm, UVl), like surrogates for the human "RGB" receptors. With knowledge of the light source spectrum (park the issue of this varying for now) and the spectral response of the detectors, you could then solve for the relative magnitudes of these functions. You could choose other basis functions modelling, say, the reflectance of certain pigments, or the response of a certain animal. We don't know of any animal, though, with a tri-chromatic response purely in the UV, although the bee does have some UV response in each of its three receptors.
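A minimal sketch of that inversion idea: with three invented Gaussian UV basis functions and three camera channels, the problem reduces to a 3x3 linear solve. All spectra below are hypothetical placeholders, not measured data; real use would need the actual light-source spectrum and channel sensitivities.

```python
import numpy as np

wl = np.arange(320, 401, 5)  # wavelength grid, nm (assumed range)

def gaussian(peak, width=15):
    """Unnormalised Gaussian on the wavelength grid."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Three invented UV basis functions (UVs, UVm, UVl)
basis = np.stack([gaussian(335), gaussian(355), gaussian(375)])  # (3, n)

# Hypothetical effective channel sensitivities
# (sensor response already multiplied by the light source spectrum)
channel_sens = np.stack([gaussian(345, 25), gaussian(360, 25), gaussian(370, 25)])

# M[i, j] = response of camera channel i to basis function j
M = channel_sens @ basis.T  # (3, 3)

# Given measured R, G, B values, solve M @ w = rgb for the basis weights
rgb = np.array([0.8, 0.6, 0.4])
w = np.linalg.solve(M, rgb)
```

In practice the heavy overlap between the channels makes `M` poorly conditioned, which is exactly why noise makes this difficult, as noted below.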

 

I suspect that in practice, because of noise and the degree of overlap between the RGB detectors in our UV cameras, actually doing what I suggest would be difficult. What I am looking at is how to convert the camera data to a simple greyscale image that, as a rough approximation, mimics the response of the bee (or bird) UV visual receptor. Even this looks tricky, mainly because the peak of the bee UV receptor is at rather shorter wavelengths (around 340 nm) than our systems' - although when sunlight is taken into account the difference is not as great.

 

I suspect my conclusion is going to be something extremely basic - such as making the greyscale image by simply summing the RGB channels without white balancing (I will post my reasoning separately when I have done this).
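That basic approach would look something like this - a sketch only, assuming linear, demosaiced, un-white-balanced channel data as input:

```python
import numpy as np

def uv_greyscale(raw_rgb):
    """Sum the un-white-balanced R, G, B channels of an (H, W, 3) array
    into a single greyscale plane, normalised to 0..1 for display."""
    grey = raw_rgb.sum(axis=2, dtype=np.float64)
    return grey / grey.max()

# Tiny toy frame: two pixels with channel sums 60 and 15
img = np.array([[[10, 20, 30], [5, 5, 5]]], dtype=np.float64)
out = uv_greyscale(img)  # brightest pixel maps to 1.0
```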

 

From an artistic point of view I think the current camera systems, and the images produced after white balancing, look really good (and I really appreciate all the ingenuity of the members of this forum who got us to this point, and their generosity in sharing their knowledge). In a non-scientific way I think they can represent - in an admittedly ill-defined way - "what the UV world would look like if we could see it", and also reveal something of a world that other animals may be aware of, one that despite all our efforts we have to admit we don't fully understand. (Well, that's how I think of it.) With my artist's hat on, this mystery of a world that we are not aware of but that animals (in some way) are is part of the attraction.

 

I think I better stop typing and spend more time taking pictures ..... :)

Link to comment

Me, myself and I -- we all totally get the non-invertibility of the problem. I have written about it several times on old UV forums, and on this one even several years back, when Dr. Klaus first made a wavelength/colour map with his Lumix; we all pointed out the non-invertibility and other problems, and he got really mad at us. And I've mentioned more than once all those dependencies - illumination, the UV content of sunlight, filters, lens differences, and so on. But I have never yet been able to dissuade anyone from trying to correlate wavelength and colour.

 

So I've decided why not just see what happens if someone tries to make a poor-man's camera spectro? I've provided all the warnings I can make.

 

Let the wavelength-colour Correlator beware the response from all those Physicists who are going to throw some serious Shade at this. Don't ruin a reputation over this!! Just have some fun with the investigation and enjoy where it goes, even if it goes nowhere.

 

Sorry for all those italics. Just attempting to make it clear that I get it. :)


 

I should mention that there is a set of spectral floral references set up by Chittka and colleagues. The few I've checked do seem to correlate with simple wavelength/colour approximations. But as I recall, Bjørn Birna had some serious reservations about how their measurements were done.


 

Jim - I totally agree that even if 'ill defined', our UV/IR photographs indicate the beauties of the hidden world seen by other creatures on this earth and give us surprise and joy.


 

I am thinking that I should perhaps split off some of this from Jonathan's topic, which has been pretty well trampled upon at this point!

Link to comment

Time for me to get back to work then, and try and find out what, if anything, it can tell us about wavelength. As with all techniques, part of the work is knowing the limits. I'm an experimenter at heart, and will always take data above all else.

 

I'll post up anything I find if it's of use.

Link to comment

Jonathan,

Is this post similar to what you were thinking:

http://photographyof...violet.html?m=1

Where we first see the spectrum off the flowers, then the actual flower images?

 

I read things ten times over, and every once in a while I remember what I have read.

 

Thanks Da Bateman. I've been looking for some UV reflection spectra for the flowers. I've not been able to get reliable spectra so far with my little USB spectrometer, although to be fair I've not optimised it. I've also found a couple of papers which have them.

Link to comment

JMC: I've been looking for some UV reflection spectra for the flowers

 

Here is a link to Chittka's Floral Reference Database (FReD) which has what you want.

The Paper explains how the data was collected.

 

LINK to the Paper: http://journals.plos...al.pone.0014287

LINK to the FReD: http://www.reflectance.co.uk/

 


 

Editor's Note: We all need to let Jonathan have his topic back !! We are all so interested in this, but I fear that we have gotten a bit carried away. I don't want Jonathan to think we are trampling on his effort however we might feel about its feasibility. It will be quite interesting to see how it all works out.

Link to comment
Andrea, thanks for the link. The FReD database is excellent and just what I needed. Saves me buying a new integrating sphere for now anyway :)
Link to comment

It's been a warm sunny day here, so I did what any self-respecting Englishman would do - hid away in a room with the blinds down and worked on the computer.

 

Thanks to Andrea's link to the FReD database, I've been thinking more about this problem, so I did the following. First, I wanted to try and predict the camera response in the UV across the 3 channels (R, G, B) when I take an image under sunlight, using the multispectral 5DSR, Rayfact 105mm lens and Baader U filter. To do this I calculated the following in 20 nm intervals, based on a combination of my own data and values from the FReD database, for 3 things I have UV images of (the yellow part of Buttercup petals, the white part of Daisy petals, and the surrounding foliage/grass). It was literally a case of multiplying the following together:

 

Sunlight spectrum * Baader U transmission * Camera sensor response (per channel) * Reflectance from plant
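Numerically this is just an element-wise product over the 20 nm steps. The values below are made-up placeholders for illustration; the real numbers come from measured sunlight, the Baader U curve, the camera calibration and the FReD reflectance data.

```python
import numpy as np

wl = np.arange(320, 421, 20)  # 320-420 nm in 20 nm steps

# Hypothetical relative spectra, one value per 20 nm step
sunlight = np.array([0.2, 0.5, 0.8, 1.0, 1.3, 1.6])     # irradiance
baader_u = np.array([0.5, 0.9, 0.9, 0.3, 0.01, 0.001])  # filter transmission
sensor_r = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5])    # red channel response
reflect = np.array([0.1, 0.1, 0.3, 0.6, 0.8, 0.9])      # plant reflectance

# Predicted red-channel response at each wavelength step,
# then the total signal for that channel
per_wl_r = sunlight * baader_u * sensor_r * reflect
total_r = per_wl_r.sum()
```

Repeating this with the green and blue sensor curves gives the three sets of curves plotted below.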

 

As a result I ended up with 3 sets of curves - Red, Green and Blue responses for the Grass/foliage, Buttercup and Daisy. These are as follows:

post-148-0-72727300-1527448603.jpg

 

post-148-0-62274400-1527448619.jpg

 

post-148-0-00331500-1527448627.jpg

 

These show the predicted response of the camera's colour channels to the light coming from the 3 types of plant. The scales are kept the same between the graphs. As expected, the foliage has the lowest response (lowest reflectance of the 3); the Buttercup gives more of a signal, and relatively more at 360 nm. The Daisy is odd. Its reflectance climbs rapidly with wavelength, so going into the blue it climbs at an enormous rate. Combined with the sunlight spectrum, which is also climbing rapidly with wavelength, this looks as though it would produce a signal at 420 nm in the camera, despite the incredibly low transmission of the filter there. I realise this is controversial; however, as with all my work, I am sharing what I find. And as I shall discuss later, the Daisy in the RAW image does have a distinct blue tinge, which can only really be coming from light above 400 nm.

 

The good thing for me, though, is that the shapes of the curves are different for the 3 plants. As the final colour depends on the amounts of Red, Green and Blue making up the image, this is a good thing - otherwise all our images would be monochrome.

 

What about predicting colour? I took a very simplistic approach: I simply averaged each channel between 340 nm and 420 nm, where light is getting through, giving an average R, G and B value for each of the plants. I then scaled the values for each plant so that the highest was set to 255 while the R : G : B ratios remained as measured.
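The scaling step is simple to sketch. The input averages below are invented for illustration, not the measured values:

```python
def scale_to_255(r, g, b):
    """Scale an (R, G, B) triple so its largest value becomes 255
    while the channel ratios stay as measured."""
    m = max(r, g, b)
    return tuple(round(255 * v / m) for v in (r, g, b))

# e.g. hypothetical channel averages of 90, 60, 30
result = scale_to_255(90, 60, 30)  # -> (255, 170, 85)
```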

 

Up until now this is all predicted data, based on my camera calibration, measured sunlight spectrum, Baader U curves, and the FReD database reflectance curves. Before I get too bogged down in predictions, what does a RAW composite image of a Buttercup look like when taken with the 5DSR, Rayfact lens and Baader U filter under sunlight? Here is one, with no processing:

post-148-0-99018600-1527449506.jpg

 

With my camera the RAW composite has a distinct pinky tinge. However the Buttercup does still look slightly yellow/orange in this image, and importantly looks to be a different colour from the surrounding foliage. I then took Red, Green and Blue values from this RAW file, and from another for the Daisy: for the yellow part of the Buttercup, the white of the Daisy, and the foliage. These values were treated the same way as the predicted ones above - scaled for each plant so that the highest was set to 255 and the R : G : B ratios remained as measured.

 

By now you may be getting bored or lost, but I'm nearly there. I took the RGB values (predicted and measured) and created a set of colour bars in PowerPoint for the 3 plants, varying from RGB 255:255:255 on the left to RGB 0:0:0 on the right, with the measured or predicted value in the centre. This was to give me an idea of the potential range of tones at lower or higher exposure, and this is what those bars look like (screen grab from PPT):
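The same bars can be generated programmatically by linear interpolation from white, through the measured colour, down to black. This is my reconstruction of the idea, not the actual PowerPoint procedure:

```python
def tone_bar(rgb, steps=5):
    """Return a list of (R, G, B) tuples running white -> rgb -> black,
    with `steps` samples on each side (midpoint shared)."""
    bar = []
    # white down to the measured colour
    for i in range(steps):
        t = i / (steps - 1)
        bar.append(tuple(round(255 + t * (c - 255)) for c in rgb))
    # measured colour down to black (skip the duplicate midpoint)
    for i in range(1, steps):
        t = i / (steps - 1)
        bar.append(tuple(round(c * (1 - t)) for c in rgb))
    return bar

bar = tone_bar((255, 120, 75))  # 9 swatches, measured colour in the middle
```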

post-148-0-93380900-1527449913.jpg

 

Overall there is some degree of correlation between my predicted colour ranges and the ones derived straight from the RAW files. While it may not be obvious here, there was some pretty serious deviation in the blue channel between modelled and measured. The RGB ratios, predicted and measured, are given below:

 

Foliage predicted - RGB 1.00 : 0.54 : 0.58

Foliage measured - RGB 1.00 : 0.50 : 0.48

 

Buttercup predicted - RGB 1.00 : 0.55 : 0.49

Buttercup measured - RGB 1.00 : 0.47 : 0.29

 

Daisy predicted - RGB 1.00 : 0.55 : 0.75

Daisy measured - RGB 1.00 : 0.56 : 0.94

 

It looks as though for the Daisy the blue is considerably underestimated by the model (the actual image is much more blue than predicted), while the Buttercup is much less blue than predicted. The Daisy does have a strong purple tinge in the RAW composite image, which suggests to me that there are contributions from the blue end of the spectrum competing with the red tinge typically seen in UV images.

 

Overall, am I happy? To be honest I was glad I had enough to produce some predicted curves, and that these looked different between the 3 plants. My predicted data is certainly not a one-for-one match with reality, but the colour tones produced by the predictions were, I think, a reasonable match to real life, especially given the number of assumptions made to get this far. Can the camera be used as a spectrometer? Certainly not without a great deal more work (and more resolution than I can achieve), and not for things with reflection spectra as complex as flowers'. For something with narrower reflection bands, who knows? Can the image tell us something about the wavelengths of light being reflected? Yes, I think so: the 3 plants here had very different reflection spectra, and both the predicted curves and the RAW composite images differed from each other.

 

Time for a beer now....

Link to comment

 

I forgot to mention the dependence on camera sensor, lens in use, filter in use, and illumination. If sunlight is used as the illumination, will all colour-wavelength maps need to be correlated with time of day, altitude, lat/long, and time of year? Or can that be skipped? I don't really know.

 

Just to add to Andrea's criteria.....UV index....perhaps ?

 

Col

Link to comment
