UltravioletPhotography

Discussion: Simulating Bee Vision


kylenessen


EDITOR'S NOTE: I split Kyle's post from the thread in which it was originally posted so that I could provide some notes about filter stacks and channel stacks.

 

 

Hi all,

 

I'm new to this idea of filter stacking and find it an intriguing alternative to making a composite in post processing. I also appreciate you all spelling things out so thoroughly so that a newcomer like me can make sense of this dense concept. If you'll allow me, however, I'd like to throw some questions out there for clarification.

 

As I understand it now, the idea of a filter stack is to control which wavelengths of light are entering our full spectrum cameras. To simulate bee-vision, we cut out the reds and focus on green to UV. If we include red, we are entering tetrachromatic realms. My issue, however, is that there is no "UV channel" built into the cameras, and instead that UV reflectance information spills into the RGB channels. This undoubtedly must have an effect on how the colors we can see are rendered. Any appearance of red can be interpreted as UV reflectance in the bee-vision scenario, but how do we cope with changes in the blues and greens in the presence of UV light? I think Cadmium's idea on subtracting a UV image is trying to address this issue, but I don't see how it is any advantage over simply combining UV and VIS images in photoshop.

 

On a more practical concern, I am using a Nikon Micro-NIKKOR-P Auto 55mm f/3.5 lens and am wondering if a filter stack approach is even appropriate with this lens? It's certainly capable of taking UV images, but the lighting requirements are so different from the visible (6 stops difference) that I wonder if any UV information would be recorded. I'm, unfortunately, not at the point in my career where I can enjoy a well-made quartz lens :P Any feedback on this would be appreciated.

Hm. A 6 stop difference UV-Vis with a non-specialised lens tells me you don't have a pure UV image in the first place. Even with a UV-Nikkor you get a bigger difference. In daylight, 8 stops or more is common even with a quartz lens.

I'm embarrassed to say that my number could very well be wrong. It's a figure I am remembering from some problem I was trying to overcome and could be what was required to meet a range and not the absolute difference. In any case, I'll double check and come back with a more confident answer tomorrow.

 

I am using a full spectrum Nikon D7000 with a Micro-NIKKOR-P Auto 55mm f/3.5 lens. The Baader Planetarium U-Filter is what I'm using, with an uncoated xenon flash as a light source.

The Baader U should be OK with an uncoated Xenon flash tube and would cut off IR very well. Thus I believe your figure is incorrect.

We have to start with some facts and an opinion or two.

There are lots and lots of other links to scientific papers about ultraviolet vision in bees and other critters which are available in our Lists section.

 

*************************************************

 

Facts of Bee Vision

All facts here should be researched to see what the latest results are.

 

Chittka, L., Shmida, A., Troje, N., & Menzel, R. (1994) Ultraviolet as a component of flower reflections, and the colour perception of Hymenoptera. Vision Research, 34, 1489-1508.

Notes:

  • Bee perceptual colour space is trichromatic
    with peaks of UV @ 345 nm, blue @ 440 nm, green @ 535 nm.
  • Bee colours are: UV, UV-blue, blue, blue-green, green, UV-green.
  • Bees have two opponent channels.
  • Bees see foliage as uncoloured.
  • Bees do not discriminate brightness, i.e., they don't see black & white. [ED NOTE: There may be new thinking on this.]

  • 1063 petals from 573 flower species were studied.
  • 68% of the flowers were UV-absorbing.
  • There are no cyan flowers or UV-reflective black flowers.
  • Green flowers are very rare (i.e., red-absorption is very rare in flowers).
  • Most blue flowers have a red component.
  • Orange or brown flowers were not included in the study.

[Note]

If you wish to photographically simulate bee vision, then you must have a camera/lens/filter which can record around 345 nanometers.

Open to discussion.

[/Note]

 

 

Chittka, L. & Waser, N.M. (1997). Why red flowers are not invisible for bees. Israel Journal of Plant Sciences, 45: 169-183 (with commentary in TREE).

 

Now, here is an opinion about what we like to do photographically - to simulate Bee Vision. This opinion says: you are doin' it wrong! We do it anyway. But - the point is very important - we must consider reflected flower colours in context with habitat and environment.

 

Chittka L & Kevan PG (2005) Flower colour as advertisement. Practical Pollination Biology. pp. 157-158

Full text: http://chittkalab.sb...aKevan05new.PDF

Notes:

Singling out UV reflections is improper in studies of pollination.

Ignoring the nature of the colours of the backgrounds against which flowers are presented is improper.

Floral colours should be considered [together] with:

  • Animal's colour vision system
  • Flower's background colour
  • Flower/background contrast
  • Visual, olfactory or other cues
  • Presence & exposure of rewards
  • Other co-flowering species

*************************************************

 

 

Bees can see iridescence and it enhances target detectability.

So again, iridescence is a contextual factor to be considered when examining flower reflections.

 

Whitney HM, Bennett KM, Dorling M, Sandbach L, Prince D, Chittka L, and Glover BJ (2011) Why do so many petals have conical epidermal cells? Annals of Botany 108: 609-616

Full text: http://chittkalab.sb...20Ann%20Bot.pdf

Whitney, H.M., Kolle, M., Andrew, P., Chittka L., Steiner U. & Glover B.J. (2009). Response to Comment on “Floral Iridescence, Produced by Diffractive Optics, Acts As a Cue for Animal Pollinators” Science, 325: 1072.

Full text: http://chittkalab.sb...eReply_2009.pdf

Whitney, H.M., Kolle, M., Andrew, P., Chittka L., Steiner, U. & Glover B.J. (2009). Floral Iridescence, Produced by Diffractive Optics, Acts As a Cue for Animal Pollinators. Science, 323: 130-133.

Full text: http://chittkalab.sb...cience_2009.pdf

 

 

*************************************************

 

Color

Yes, I'm offering Wikipedia links. So far so good. Books & papers are also available, and you should reference those when writing something up for a scientific publication.

 

Make sure you understand colour.

The Wikipedia article seems to be as good as any: https://en.wikipedia.org/wiki/Color

 

Know that there are spectral colours and that magenta is not one of them.

https://en.wikipedia.../Spectral_color

 

Get a feel for alternate vision by trying the cross-eye test here.

https://en.wikipedia...mpossible_color

 

The eyes need the brain to really "see" a colour.

https://en.wikipedia...pponent_process

 

Illumination can alter the way color is rendered.

https://en.wikipedia...rendering_index

 

 

*************************************************

 

 

The Bees' 345/440/535 visual receptor peaks converted to RGB values.

The UV 345nm colour is, of course, an attempt to render the human-invisible UV peak at 345nm into a reasonable false colour. This colour is actually closer to a 380nm low-violet/high-UV look. So take this translation with a grain of salt, please.

 

Here is Bruton's algorithm for converting wavelengths to rgb code.

http://www.physics.s...or/spectra.html

 

More about this conversion.

Note in particular that: There is no unique one-to-one mapping between wavelength and RGB values.

http://www.efg2.com/...ing/Spectra.htm

 

Some code for the algorithm:

There are conversions using Python, Javascript or whatever if you search around.

The one linked here goes down to 350 nm and uses slightly different ranges for the intensity calculation than some other versions do. Ultraviolet is mapped to magenta (purple in some vocabularies) with decreasing intensity (gets darker) as the wavelengths get shorter.

 

UV 345-350 nm -> (25,0,25), a very dark magenta, almost black, with 100% saturation and 10% brightness. I'll take this for UV 345 nm, although I'm not at all sure how useful this dark colour would be in modeling anyone's vision.

 

B 440 nm -> (0, 0, 255).

This is a 100% saturated and 100% pure, bright blue.

 

G 535 nm -> (91, 255, 0).

This is a 100% saturated and 100% bright green-yellow. Green is moving towards yellow and has reached about 36% of the 'distance'.
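The three conversions above can be reproduced with a short Python sketch of a Bruton-style mapping. The visible-range breakpoints follow Bruton's published algorithm with a linear (no gamma) intensity, which matches the values quoted above; the magenta ramp below 380 nm is my own assumption, chosen to give the dark, fading UV colour described.

```python
def wavelength_to_rgb(w):
    """Bruton-style wavelength (nm) -> (R, G, B), linear intensity.
    The 350-380 nm magenta ramp is a guess, not part of Bruton's original."""
    if 350 <= w < 380:            # UV extension: dark magenta, fading to black
        r, g, b = 1.0, 0.0, 1.0
        f = 0.1 + 0.2 * (w - 350) / 30
    elif 380 <= w < 440:
        r, g, b = (440 - w) / 60, 0.0, 1.0
        f = 0.3 + 0.7 * (w - 380) / 40
    elif 440 <= w < 490:
        r, g, b = 0.0, (w - 440) / 50, 1.0
        f = 1.0
    elif 490 <= w < 510:
        r, g, b = 0.0, 1.0, (510 - w) / 20
        f = 1.0
    elif 510 <= w < 580:
        r, g, b = (w - 510) / 70, 1.0, 0.0
        f = 1.0
    elif 580 <= w < 645:
        r, g, b = 1.0, (645 - w) / 65, 0.0
        f = 1.0
    elif 645 <= w <= 780:
        r, g, b = 1.0, 0.0, 0.0
        f = 1.0 if w < 700 else 0.3 + 0.7 * (780 - w) / 80
    else:                         # outside 350-780: black
        r = g = b = f = 0.0
    return tuple(int(c * f * 255) for c in (r, g, b))
```

With this sketch, 440 nm gives (0, 0, 255), 535 nm gives (91, 255, 0), and 350 nm gives (25, 0, 25); the bee UV peak at 345 nm has to be clamped to 350 nm, as was done above.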

 

Bee visual receptor peaks mapped to RGB values.

BeeColorPeaks.jpg

 

Chittka and Kevan (1978) give 340 nm, 435 nm and 560 nm as bee receptor peaks. Not too much difference.

Bee visual receptor peaks mapped to RGB values.

BeeColorPeaksChittka.jpg

 

*************************************************

 

And for the record, here are the human visual cone peaks rendered into RGB colours. I used the midpoint of the range for the cone peaks. This has surprised me because I tend to think of human cones as RGB. It would seem like they are somewhat blue-violet, green-yellow and yellow. My goodness.

 

B 400 - 420 nm -> 410 nm, midpoint -> (74, 0, 222).

This is 100% saturated and 87% bright. It might be called a blue-violet if you take violet to be something like (128, 0, 255). Your call.

 

G 534 - 555 nm -> 544 nm, midpoint -> (124, 255, 0)

 

R 564 - 580 nm -> 572 nm, midpoint -> (226, 255, 0)

 

Human cone peaks mapped to RGB values.

HumanConePeaks.jpg

 

*************************************************

 

No brief review of bee vision facts is complete without a reference to Horridge's work.

 

The Anti-Intuitive Visual System of the Honey Bee by A. Horridge

 

In this paper, Horridge observes that honeybees really do not discriminate colour as we humans do.

So our colour models of bee vision are a bit off the mark if the bee cannot "see" large colour shapes. :D

 

The neuron properties are more suitable to detect line-labelled modulation than a tonic, maintained photon flux. None of that seems to be adapted to the discrimination of colour as a palette of persistent sensations. Moreover, within the optic lobe there is no sign of the colour triangle, or of neurons that could discriminate between colours. The spatial fields for colour-coded neurons are large, which implies that they are adapted to quantitative measurement of a summed mixture, region by region, but not the identification of the original inputs. The new results imply that much old work will have to be revised, for example the effect of the colour of the illumination upon another colour, and the discrimination of large areas of colour. The part played by the UV receptors is little understood.

 

Abundant positive evidence that shapes and local layout are not recognized.

 

line-labelled = signal from stimulation of a particular receptor is transmitted over particular pathways to particular parts of the nervous system

modulation = process of facilitating the transmission of signals over a transmission pathway (carrier)

More specifically,

modulation = conversion of lightwave/intensity into nerve impulses; neurotransmitters released enable cortex identification.

signal = lightwave/intensity

carrier = optic nerves

tonic = describes a maintained response to constant photon flux

 

For Bees: Modulation is a measure of the flicker induced at the eye by motion of the bee relative to the total contrast in a local region of the eye. Modulation is the highest priority cue.

 

 

How Animals See the World: Comparative Behavior, Biology and Evolution of Vision

Edited by Lazareva, Shimizu, Wasserman. Oxford University Press, 2012.

Chapter 10: Visual Discrimination by the Honeybee (Apis mellifera)

By Adrian Horridge

 

Far from being a pattern perception device, bee vision destroys the pattern in the image and replaces it by the layout of a few labels. This is the sparse code for a small brain.

 

Bee vision is a set of coincidences.......

(Bee) vision is not a separated modality, as it is in humans, for there are neurons that respond to other modalities in the bee optic lobe, and the visual cues are linked to odors and the time of day.

 

Orientation detectors: detect contrast and respond to edges of a particular orientation. At least 3 types, which are colorblind, green-sensitive, and do not distinguish between black-white and white-black edges.

 

Modulation detectors: receive excitation from both blue and green receptors; have better resolution than orientation detectors.

 

Tonic color channels: peaks in UV, blue and green. Measure areas and intensities of color.

 

Local motion detectors: respond to successive modulation of 2 or more adjacent receptors; detect direction of motion of contrasts; green-sensitive, colorblind.


Does anyone still want to attempt to model Bee Vision photographically given that it will never be a reasonable simulation of how this insect really sees? :D :D :D

 

 

Please do ask yourself what might be gained by way of good scientific knowledge with photographic simulations of bee vision. What are you aiming for? What can these photographs tell us?

 

I really don't have answers to these questions for myself. Mostly because I have other UV fish to fry, so I'm not sure how far I want to pursue simulating bee vision.

 

Sometimes we do not know where a model is going to take us. A good model gives insight into the modeled topic and predictions about how it might behave. An attempted model may also give us nothing useful, in which case it's not a good model. I do not wish to say that you will have wasted your time if your model turns out to be un-useful, because we always learn from failed attempts.

 

**************

 

How Does a Digicam Record UV?

Let's look briefly at how UV is recorded in one digital camera, my D600-broadband with its typical Bayer filter. (Nothing I write about this applies to Foveon Sigmas or to the Xtrans Fujis.) My main point here is to show you one way to look at the actual raw data of a UV photograph so that you can determine what your camera is really recording.

 

Recall that recorded UV depends on which camera sensor, camera Bayer array, lens and filter(s) are used together with the illumination shining on the subject. So there are at least five input variables which go into making a UV photograph. Using apps like Dcraw or RawDigger, we can discard any effects from white balance and just look at the raw record.

 

Bayer Filters: https://en.wikipedia...ki/Bayer_filter

Raw Digger: http://www.rawdigger.com/

 

I think we have had more elaborate discussions about the factors affecting how we record UV photographs. Look around when you get a moment.

 

UV photographs are recorded in all 3 channels in the typical DSLR or mirrorless converted camera. Thus far our experience has been that UV is recorded mostly in the Red channel for typical Bayer-filtered cameras. But there is also information in the Green and Blue channels.
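If you want to check where UV lands on your own camera, the per-channel means can be computed directly from the Bayer data. Here is a minimal Python sketch using a synthetic mosaic as a stand-in; with the rawpy library, `raw.raw_image_visible` and `raw.raw_colors_visible` from your own raw file would supply the real arrays.

```python
import numpy as np

def bayer_channel_means(raw, cfa):
    """Mean raw level per CFA channel index (here: 0=R, 1=G1, 2=B, 3=G2)."""
    return {int(c): float(raw[cfa == c].mean()) for c in np.unique(cfa)}

# Synthetic 4x4 RGGB mosaic standing in for a real file's Bayer planes.
cfa = np.tile(np.array([[0, 1], [3, 2]]), (2, 2))
raw = np.where(cfa == 0, 900, 100)  # mimics UV landing mostly in red
means = bayer_channel_means(raw, cfa)
```

A strongly red-weighted mean with smaller but nonzero green and blue means would match the behaviour described above for typical Bayer-filtered cameras.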

 

Here is an example UV photograph together with its raw composite and RGB channels. The red gerbera in these fotos is a flower I'm using to test filter stacks, so it does appear in another thread.

 

Equipment[ D600-broadband + UV-Nikkor 105/4.5 + Sunlight ]

It was breezy, so I stayed between ISO 200-400 and set aperture a little more open at f/8 than usual for my work.

 

Visible Reference[ f/8 for 1/1250" @ ISO-400 with Baader UVIR-Cut Filter ]

I had a terrible time with the conversion of this very red Gerbera even though I shot white standards and a CCPassport for colour profiling. Either the red was too orange or not dark enough. I made tweaks to the foto in both Photo Ninja and in Capture NX2. The foto is close-enough at this point.

 

Red Gerbera in Visible Light

gerbera_visSun_2015.10.23wf_40894pf.jpg

 

 

Ultraviolet[ f/8 for 1/1.6" @ ISO-400 ]

The conversions are showing a bit of false colour difference. That's OK - there is no 'right' conversion. I did profile visible colour and shoot standards for the UV white balance. In Capture NX2, I have no way to restore the colour changes caused by removal of the internal filters on the D600 like I can do in Photo Ninja. However, who's to say that is even necessary for a UV shot? (Bjørn and I have been back & forth on this topic a few times. On both sides of it.)

 

Red Gerbera in UV Light: Conversion in Photo Ninja

gerbera_uvBadSun_2015.10.23wf_40916pn.jpg

 

Red Gerbera in UV Light: Conversion in Capture NX2

gerbera_uvBadSun_2015.10.23wf_40916nx2v1.jpg

 

 

**************

 

So now let's look at where that UV light is being recorded with the equipment listed above by examining the raw composite of this file.

 

Ultraviolet Raw Composite[ Raw Digger 1.1.2 ]

This raw composite was created with a gamma 2.2 curve, 16-bit autoscaling (display range is aligned with data range, else we might not see much), and a 3-channel RGB output (G1 and G2 averaged because both are the same).
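The steps just described (average G1/G2, autoscale to the data range, apply a gamma curve) can be sketched in a few lines of Python. This is my own illustration of the idea, not RawDigger's actual code:

```python
import numpy as np

def raw_composite(r, g1, g2, b, gamma=2.2):
    """3-channel composite from Bayer planes: G1 and G2 averaged, data
    autoscaled so display range matches data range, then a gamma curve."""
    g = (g1.astype(np.float64) + g2) / 2.0
    rgb = np.stack([r.astype(np.float64), g, b.astype(np.float64)], axis=-1)
    rgb -= rgb.min()                 # autoscale: align display with data range
    if rgb.max() > 0:
        rgb /= rgb.max()
    return (255 * rgb ** (1.0 / gamma)).astype(np.uint8)
```

Without the autoscaling step, a dark UV exposure would render as a nearly black composite, which is exactly why RawDigger's alignment option matters here.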

 

Resizing and stuffing the Raw Composite into a displayable sRGB Jpg probably causes a bit of damage. But this is close enough.

Raw Composite

gerbera_uvBadSun_2015.10.23wf_40916rawCompV1.jpg

 

Just for grins here is the straight-out-of-camera Jpg that I shot. Seems like I have managed to concoct an in-camera white balance preset which produces something quite close to the raw composite. Very interesting.

SOOC

gerbera_uvBadSun_2015.10.23wf_40916_sooc.jpg

 

The Raw Data

I used a Raw Digger sampling square to create my histogram of raw data so that it would correspond to the cropped version of the Red Gerbera as I am showing you above. Here is a screen shot of that sample area followed by the histogram. BTW, Raw Digger shows that this photo has a .5% underexposure in the Blue channel, which corresponds to the shadowed areas under some petal edges, so no biggie.

 

Sample Area

gerbera_uvBadSun_2015.10.23wf_40916rawDigSampBigSquare.jpg

 

Histogram of the Sample Area

Really, I am somewhat struggling to figure out what this histogram is telling us about that UV-absorbing Red Gerbera? The equipment has captured a great deal of UV reflection, else we would not have a UV photograph, would we? I suppose we have to conclude that the Red Gerbera reflects some UV even though it is mostly UV-absorbing? Of course there is UV reflecting off objects in the remainder of the scene.

 

In this histogram, I chose an EV scale along the x-axis with increments of 1/3 stop. Bright tones are counted to the right of EV0, and dark tones to the left of EV0. Midtones are of course clustered around EV0. The y-axis of counts is linear.
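For anyone wanting to reproduce this style of histogram outside Raw Digger, here is a small Python sketch that bins raw values on an EV axis in 1/3-stop increments relative to a midtone level (the midtone value you pick is an assumption, not something Raw Digger publishes):

```python
import numpy as np

def ev_histogram(values, midtone, steps_per_stop=3):
    """Counts of raw values on an EV axis: EV = log2(value / midtone),
    snapped to 1/3-stop bins. Returns (bin indices in 1/3 stops, counts)."""
    v = np.asarray(values, dtype=np.float64)
    ev = np.log2(np.clip(v, 1, None) / midtone)   # clip avoids log2(0)
    idx = np.round(ev * steps_per_stop).astype(int)
    bins, counts = np.unique(idx, return_counts=True)
    return bins, counts

# One stop above midtone lands at bin +3, one stop below at bin -3.
bins, counts = ev_histogram([1000, 2000, 2000, 500], midtone=1000)
```

Bright tones then count to the right of bin 0 and dark tones to the left, just as in the Raw Digger histograms shown here.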

 

This histogram certainly tells me that my UV photograph had a LOT of dark tones (no surprise). Yet the photo is not really underexposed. The histogram shows me that the brighter tones and midtones are mostly red (again, no surprise).

But what are the green and blue channel counts in this histogram telling me?

gerbera_uvBadSun_2015.10.23wf_40916rawDigSampBigSquareHisto.jpg

 

Red, Green 1, Green 2, Blue Channels

Here are the extracts of each of the four channels. Clearly the red channel is brightest, as the histogram also shows.

If you expand your browser, these will appear as a 2x2 display.

gerbera_uvBadSun_2015.10.23wf_40916rawDigRchan.jpggerbera_uvBadSun_2015.10.23wf_40916rawDigG1chan.jpg

gerbera_uvBadSun_2015.10.23wf_40916rawDigG2chan.jpggerbera_uvBadSun_2015.10.23wf_40916rawDigBchan.jpg

 

Histograms of Small Sampled Areas on the Flower

I made two samples on the center disk of the Gerbera: one in the center and one on the disk florets. I made a sample on a fairly uniformly toned ray on the right. And I made a large sample on the left covering some brighter (possibly iridescent?) areas and some shadowed areas as well as the rays.

 

As before, the x-axis of each histogram is shown in 1/3 EV stops and the y-axis of counts is linear. I did not make the y-axis quite tall enough, so there are some count truncations.

 

Sampled Areas

gerbera_uvBadSun_2015.10.23wf_40916rawDigSamp4Small.jpg

 

Sampled Areas Data Summary

gerbera_uvBadSun_2015.10.23wf_40916rawDigSamp4SmallData.jpg

 

1. Histogram of Center Disk Sample

gerbera_uvBadSun_2015.10.23wf_40916rawDigSamp1DiskCenter.jpg

 

2. Histogram of Disk Florets Sample

gerbera_uvBadSun_2015.10.23wf_40916rawDigSamp2DiskFlorets.jpg

 

3. Histogram of Right Ray

gerbera_uvBadSun_2015.10.23wf_40916rawDigSamp3RightRay.jpg

 

4. Histogram of Left Rays

gerbera_uvBadSun_2015.10.23wf_40916rawDigSamp4LargeLeftRays.jpg

 

 

****

 

CONTINUED IN POST #15 BELOW


Saying that bees cannot perceive brightness seems logically tantamount to saying that they cannot distinguish bright light from total darkness; but this sounds nonsensical. Is it rather true that their vision merely has less output dynamic range than that of mammals?

 

It also strikes me that precision optics probably produce a far more detailed image than that generated by a compound eye with only a few thousand ommatidia. Are images produced with cameras and precision lenses not then misleading in this regard?


Clark, the Chittka research says that bees do not discriminate brightness. This means that they perceive their world at a uniform level of brightness rather than seeing some objects as brighter or darker than other objects. It was thought that bees did not have a black/white opponent channel in the brain which enables discrimination of tones. As noted above, other researchers may have a different idea about this.

 

I am not sure how closely we can equate an object's reflected light with its photographic brightness as measured in our photo editors. I tend to assume they are correlated, at least. The actual definitions are complex and technical. The point being, I have not yet gone as far as giving my bee vision photos a uniform brightness. And I have not seen anyone else doing that either.

 

...precision optics probably produce a far more detailed image than that generated by a compound eye with only a few thousand ommatidia.

Likely so. However, bees are able to see what they need to see. The biological need for detail is more of a human thing I suppose. I think that we can all agree that our bee vision photographic images are also likely to be misleading w.r.t. detail.

 

There are so many variables (inputs) to consider when attempting photographic bee vision simulation !! This is why I raised the question of whether we should even attempt it.


...There are so many variables (inputs) to consider when attempting photographic bee vision simulation !! This is why I raised the question of whether we should even attempt it.

 

I was struggling with the same question this morning. The fact that we cannot see into UV (let alone have a compound eye) calls into question the whole approach. I think attempting multispectral images can offer insight, however, and at the very least corroborate findings from other papers. For example, I've noticed a tendency toward greens and blue-greens in my multispec images where nectar guides are found. This, of course, was exciting, but even more so to find similar results in the literature using different approaches (the Chittka article in particular). Here is another paper that shows a similar pattern. In this way, the model is predictive to some degree.

 

In the Chittka article, there was one line that has stuck with me all day. "Generally, no flowers, aside from the green foliage type functions, absorb in the red domain of the spectrum." This has had me wondering if the presence of red light allows the bee to discriminate between foliage and the flower? Has anyone found any literature on the topic? If not, it would make a great project for my class :D


By the way, Andrea, thank you for putting together such a great summary of bee vision!! I wish I came here sooner! So much easier when you talk it out!

 

Bjørn, I went back and checked my gear today, and came away with an 8-stop difference. Cause for concern?

 

Thanks,

Kyle


Kyle: "Generally, no flowers, aside from the green foliage type functions, absorb in the red domain of the spectrum." This has had me wondering if the presence of red light allows the bee to discriminate between foliage and the flower?

 

Kyle, if you look at the end of Post #6 above, you can read my notes from a paper and book by Horridge who writes about how bees use multiple cues to determine what they are "seeing": motion, orientation, time-of-day, odors, and more.

Some of these cues were also covered by Chittka and Kevan, 2005. (Linked also in Post #6 above.)

 

In one of the referenced papers above, we learn that most foliage reflects smidgens (not a scientific word!) of UV and blue in addition to its green reflection. So the bee receives only a low-frequency stimulation of its visual receptors from leaves as it is moving over flowers and their foliage. Think of the foliage as being "bee-grey". Thus the flower stands out in the bee's vision. Note that motion cues are working with visual cues to produce this discrimination between flower and foliage.

 

***********

 

In the summaries above, one topic I haven't mentioned is the pigments in flowers and whether they are UV-absorbing/reflecting. Pigmentation is a fascinating, complex and very biochemical topic. :-)

 

 

***********

 

Informal Observation ---- Bees go to almost all flowers regardless of UV-signature?

 

When outdoors looking at bees and flowers, I have pretty much seen bees on every flower regardless of whether its UV-signature is dark, bright or 'patterned'. I have pretty much given up trying to figure out why some flowers have central bullseyes (as just one example), given that bees go to all flowers. There are even types of bees who know how to drill a hole in long, trumpet-shaped, human-red, hummingbird-pollinated flowers in order to find the nectar. IIRC, it's called "nectar robbing".

 

There is evolutionary advantage to UV-signatures in some way or they would not exist. I wonder if we will ever figure it out.

 

***********

 

Another Informal Observation -- Where did that bee come from??

 

Within the last week or so I have been working on making some filter stack studies of red flowers. I set up my flowers and photo equipment in the large sunny portion of my front lawn. It is autumn, a little chilly, and there are only a few scraggly flowers left growing in the neighborhood. We haven't had a freeze yet, but we have had some nights in the 30s (F).

 

Inevitably about 30 minutes or so after setting up in the front yard, here comes a bee of one kind or another to buzz around my red gerbera or red hibiscus !! How did these bees know there was a red flower in my front yard which they should come to investigate?? It's a puzzlement.


I would like to be clear on two things. :D So this is a little digression.

 

Anonymous 'you': When I use the 2nd person 'you', whether singular or plural, I am attempting to NOT refer to any specific person, OK? I am using an 'anonymous you'. When I want to use 'you' specifically to respond to a particular person, I usually try to preface the remark with a specific name.

For all its utility, English can be an impossible language at times.

 

I am not an expert on bees or bee vision. I only know what I've read so far in various published papers and books. I do no formal research in bee vision. I do not wish to portray myself as an expert in any way on the general topic of insect vision. Everything I have learned is freely available to everyone else. I attempt to keep posts honest by supplying references or links to lists of references. I am trying only to aggregate basic information so that folks wishing to attempt photographic bee vision simulations have some facts and techniques available to inform their work.

 

Everyone, please do feel free at any time to correct errors or question comments. When I get something wrong, I go back and cross out the error and replace it with the correction. [There are days when I do far too much of that!]

 

:D :D ;)

 

OK, I'm now going back to finish Post #6.


I know very little about insect vision either, nor do I have any bespoke bandpass filters; but I decided to have a little Photoshop fun...

 

I had no suitable images of my own for this exercise, so I took the liberty of borrowing a pair of Andrea's, the Echinacea UV and visible pair (I hope you do not mind, A.)

 

Guessing from the tables in the literature cited, the L channel peaks between 435 and 605 nm, whereas the M channel peaks between 325 and 490 nm, dropping steeply off after 490. The S channel is pretty pure UV.

 

Starting from the initial registered image pair using the channel mixer, I constructed a surrogate L channel file out of 70% green and 30% red (rough guess.) I constructed a surrogate M channel out of roughly equal proportions of green and blue with about 10% UV thrown in. I then cross-mapped LMU-->RGB.
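That mixing step can be sketched numerically with numpy. The weights are Clark's own rough guesses from above; the array layout (visible RGB and UV planes as floats in [0,1]) is my assumption:

```python
import numpy as np

def lmu_to_rgb(vis_rgb, uv):
    """Surrogate bee channels from registered VIS + UV frames:
    L = 0.3R + 0.7G, M = roughly equal G and B with 10% UV thrown in,
    S = pure UV, then cross-mapped LMU -> RGB."""
    r, g, b = vis_rgb[..., 0], vis_rgb[..., 1], vis_rgb[..., 2]
    L = 0.3 * r + 0.7 * g                 # surrogate long channel
    M = 0.45 * g + 0.45 * b + 0.10 * uv   # surrogate medium channel
    S = uv                                # short channel ~ pure UV
    return np.clip(np.stack([L, M, S], axis=-1), 0.0, 1.0)

# A single pure-red visible pixel with 50% UV reflectance:
out = lmu_to_rgb(np.array([[[1.0, 0.0, 0.0]]]), np.array([[0.5]]))
```

The red visible pixel ends up dim in the surrogate L and M channels but bright in S, which is the basic behaviour being aimed for.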

 

Changing to Lab mode, I isolated the luminance channel. Separately, I took the UV image and applied Edge Detect to find abrupt brightness changes (scintillations, specularity, etc.) Inverting this image, I massaged contrast and then pasted it into the luminance channel. A bit of color enhancement and contrast and baseline massaging gave this final result:

 

post-66-0-05959300-1445978262.jpg

 

The hue and chrominance information is thus mapped [RG][G'BU]U'-->RGB. The luminance is derived entirely from the UV channel, which shows the relevant features most strongly. Whether this image tells us anything whatsoever about how an insect might have perceived the scene, I cannot say; but it would at least be possible to locate the relevant parts of a flower using this crude image. Perhaps this could at least serve as a starting point for further speculation...


I will read this over with interest, Clark. Thanks !!!

This is exactly the kind of thing I hope to see more of.

I will be coming back later to think about it in more detail.


Emulating Bee Vision via Channel Stacks in the Editor (Photoshop Difference Layers)

 

How can we merge our Visible and corresponding UV fotos to emulate the UvBG of Bee Vision? Several ways - all of which make sense.

You have a lot of control when making layered channel stacks in an editor. However, it can become quite tedious. There have been a couple of automation tools discussed previously.

 

Let me first mention one historical approach to Bee-to-Human colour mapping so that we can keep in mind that we are not the first guys with cameras trying to do this.

Kevan in 1978 proposed a wavelength-preserving mapping as follows. Bee colours are on the left. Human colours on the right. I'm using 'bee' as a label prefix for bee colours (no prefix for human colour references) so that we keep it clear whose colour we are referring to. I do not know whether Kevan actually constructed any film examples of this kind of stack. It is certainly much easier to accomplish with digital frames than it would have been with film.

  • short bee-UV to blue
  • short bee-UV-blue to blue-green
  • medium bee-blue to green
  • medium bee-blue-green to yellow
  • long bee-green to red
  • long bee-UV-green to purple (i.e., magenta)
  • mixed bee-uncoloured to white

Let's review the building blocks we have and try out a Kevan Channel Stack.

 

Here are the raw R, G, and B channels of the Visible photograph of the Red Gerbera.

If browser is expanded, these will display in one row.

gerbera_visSun_2015.10.23wf_40894pfBlueChan.jpggerbera_visSun_2015.10.23wf_40894pfGreenChan.jpggerbera_visSun_2015.10.23wf_40894pfRedChan.jpg

 

Here are the raw R, G, and B channels of the UV photograph of the Red Gerbera.

I'm using the SOOC version of the photograph.

If browser is expanded, these will display in one row.

gerbera_uvBadSun_2015.10.23wf_40916soocBlueChan.jpggerbera_uvBadSun_2015.10.23wf_40916soocGreenChan.jpggerbera_uvBadSun_2015.10.23wf_40916soocRedChan.jpg

 

To make the Kevan stack, I will need first to convert the Visible blue channel to a green channel and to convert the Visible green channel to a red channel.

The blue channel from the UV shot stays as it is.
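The channel conversions described here, plus the two variant stacks shown further down, can be sketched in a few lines of numpy. The arrays below are tiny random stand-ins for the matched photo pair (in practice you would load the aligned Visible and UV frames); the function names are mine.

```python
import numpy as np

# Toy stand-ins for aligned Visible and UV frames (H x W x RGB, uint8).
# In practice these would be the matched photo pair, loaded from disk.
rng = np.random.default_rng(0)
vis = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
uv  = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

R, G, B = 0, 1, 2  # channel indices

def kevan_stack(vis, uv):
    """Buv + G(Bvis) + R(Gvis): wavelength order preserved.
    Output R <- Visible G, output G <- Visible B, output B <- UV B."""
    return np.stack([vis[..., G], vis[..., B], uv[..., B]], axis=-1)

def bvis_gvis_ruv_stack(vis, uv):
    """Bvis + Gvis + Ruv: the UV frame forced into the red channel.
    Here the UV blue channel stands in for 'the UV frame'."""
    return np.stack([uv[..., B], vis[..., G], vis[..., B]], axis=-1)

def buv_gvis_rvis_stack(vis, uv):
    """Buv + Gvis + Rvis: visible R and G kept, UV into blue."""
    return np.stack([vis[..., R], vis[..., G], uv[..., B]], axis=-1)

out = kevan_stack(vis, uv)
```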

Here are the input frames after channel conversion.

If browser is expanded, these will display in one row.

gerbera_uvBadSun_2015.10.23wf_40916soocBlueChan.jpggerbera_visSun_2015.10.23wf_40894pfBlueChan01toGreen.jpggerbera_visSun_2015.10.23wf_40894pfGreenChan01toRed.jpg

 

Channel Stack which preserves increasing wavelength order with components Buv + G(Bvis) + R(Gvis)

It is a bit dark, so the 2nd version has a reset white point.

gerbera_uvBadSun_2015.10.23wf_40916stack1v1.jpg

 

A brightened version of that stack.

gerbera_uvBadSun_2015.10.23wf_40916stack1v2.jpg

 

 

Channel Stack which preserves visible blue and visible green with components Bvis + Gvis + Ruv

This forces the UV frame to be placed into the red channel.

gerbera_visSun_2015.10.23wf_40894stackBvisGvisRuv.jpg

 

Out of curiosity, I used auto-levels on that stack and rather like the result.

gerbera_visSun_2015.10.23wf_40894stackBvisGvisRuvLeveled.jpg

 

 

 

Channel Stack with components Buv + Gvis + Rvis

This is a very pretty stack. It does not always resemble the visible flower like this.

gerbera_visSun_2015.10.23wf_40894_StackBuvGvisRvis.jpg

 

 

 

Channel Stack which attempts to use the translated Bee peak values.

I couldn't make this work.

 

 

Some drawbacks of Channel Stacks in the Editor

The UV and Visible photos need to match as closely as possible. If there is any breeze outdoors, your shots won't work well for this kind of stacking. Shoot indoors with UV flash to avoid such matching problems. You can see breeze artifacts in the preceding examples due to mismatch. I did not clean all of them up.

You can also get mismatches in stacks because flowers are living, responsive organisms. The flower wilts or droops gradually while you are shooting. Sometimes a flower will exhibit phototropism if outdoors, and you will detect motion of the capitula as you move through the frames.

Then there is always the happy bee who lands on the flower for a tasty snack and completely disturbs your set-up with its industrious foraging.

 

These stacks do not look very Bee Vision-ish to me.

This is partly because red flowers are really difficult in bee vision simulations. If you look through our posts here on UVP, you will find lots of examples where the channel stack or filter stack does give a nicer simulation of Bee Vision.

 

 

(Notes to self: Next we'll look at how a bee simulation filter stack records the UV, B and G mix, and address the issue of possible "separation" of the UV from the B and/or G. Bee vision via channel stacking in the editor. Use of RGB versus other schema. Examples. Bee vision via UvBG filtering. Which wavelength predominates. Raw values. White-balanced values. Examples.)

Link to comment

Simulation of Bee Vision with UvBG Filter Stacks

 

We have had several discussions here on UVP about using UvBG filters which pass ultraviolet, blue and green light.

Here is a tiny sampling of that:

 

Discussion: UG5 + IR-Cut Filter Stack, Part 1

BUG Impression of Marsh Marigold

Thinking about Tones in Simulated Bee Vision

 

[if someone has some time on their hands, please make a list of all Bee Vision topics. We will post it in Lists. Thank you! I always worry about leaving someone out of little lists like that one.]

 

Several UV photographers have tried UvBG filters and related stacks, and I do not know the complete history of who might have been first(1). Steve at UvirOptics has been instrumental over the last couple of years in making good combinations of filter glass available for UvBG work(2) and in providing excellent transmission charts of various filter combinations.

 

Key to the UvBG effort is knowing, from perusal of transmission charts, that glass like Schott UG5 or Hoya U-330 passes some blue and green as well as some UV. When such a filter is stacked with an IR-blocker, it becomes useful for UvBG work. Here is one chart of UG5 transmission so you can see the blue and green transmission bumps. [Yes, I have a plan to obtain a more relevant transmission chart. Just no time currently. :D]

 

BugFilterWithColor.jpg

 

 

My own opinion is that UvBG filter stacks seem to do a fairly good job of mapping bee-colours to a reasonable RGB model. That is not to say that the result is necessarily the best photographic simulation of bee vision. Bee vision is very complex, and I think we would all agree that simulation of all aspects of that complexity cannot be done with a photograph. (I'd be most happy to be proved wrong.) Currently, we are just trying to simulate the colours.

 

Do look at Clark's very interesting edit of a bee vision photo above as an example of what might be accomplished with editing. Edits might possibly enable an illustration of other aspects of bee vision which our photos do not address - like brightness levels, or how edges are perceived and so forth.

 

Anyway, the techniques of shooting with a UvBG filter stack are well-known here on UVP. What I want to do in this post is to look at RGB charts from UvBG fotos to see how these filters record, and then to compare them to RGB charts from a corresponding UV foto. The question has been asked whether there is any way to "subtract" the UV from the UvBG frame. I think not, but I gotta run some experiments to show that.

 

 

Bee Vision Example

 

Equipment: Nikon D600-Broadband + 105/4.5 UV-Nikkor + Sunlight

Exposure: f/8 for 1/10" @ ISO-400

Lots of strong sunshine.

Filter: H330 (1.5 mm) + S8612 (1.75 mm) + S8612 (2.0 mm)

This filter stack was reasonably effective in cutting red - at least after white balancing - but I prolly need a thicker H330 or UG5.

There is still red recorded in the raw file.

 

Conversion: Photo Ninja

Using pre-sets and colour profiles, it took less than a minute. I'm not saying it doesn't need more work. :-)

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989pn.jpg

 

Bee Vision Foto: Raw Composite & Histogram

 

Raw composite as cropped foto.

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989rawDigger.jpg

 

Raw composite after resetting black/white points. This is rather fetching.

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989rawDigLchLumOnly.jpg

 

RawDigger View :: The bright blue areas mark a 3% red and a 1% green underexposure. But it occurs in the shadows made by the flower rays so I'm disregarding it.

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015-20151028-133744-RawDigger-ScreenShot.jpg

 

RawDigger Histogram for the cropped foto.

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015_histo.jpg

 

 

 

Corresponding UV Foto

Equipment: Nikon D600-Broadband + 105/4.5 UV-Nikkor + BaaderU UV-Pass Filter + Sunlight

Exposure: f/8 for 1/1.6" @ ISO-400

 

Conversion: Capture NX2

gerbera_uvBadSun_2015.10.23wf_40916nx2v1.jpg

 

UV Foto: Raw Composite & Histogram

 

Raw composite as cropped foto.

gerbera_uvBadSun_2015.10.23wf_40916rawCompV101.jpg

 

Raw composite after NX2 Levels&Curves tool.

And so there is some new Secret Sauce: you can get a quite nice UV foto by taking its RawDigger raw composite and brightening it up a bit. I added one more step - a bit of local contrast sharpening. No other edits.

gerbera_uvBadSun_2015.10.23wf_40916rawCompV1LevCrvAuto.jpg

 

RawDigger View

gerbera_uvBadSun_2015-20151028-142021-RawDigger-ScreenShot.jpg

 

RawDigger histogram for the cropped foto.

gerbera_uvBadSun_2015-uvhisto.jpg

 

 

 

********************************************************************

 

RawDigger R, G and B channels for the UvBG and UV Raw Composites

If browser is expanded, this will display as two rows of 3 fotos each, total 750 pixels wide.

The Blue channels are on the left. The Red channels are on the right.

The Green channel is in the middle. I'm presenting only the G1 channel. The G2 channel is identical.

 

1st Row: UvBG Raw Composite Channels

2nd Row: UV Raw Composite Channels

 

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989_BchanX.jpggerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989_G1chanX.jpggerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989_RchanX.jpg

gerbera_uvBadSun_2015.10.23wf_40916rawDigBchan01xyz.jpggerbera_uvBadSun_2015.10.23wf_40916rawDigG1chan01xyz.jpggerbera_uvBadSun_2015.10.23wf_40916rawDigRchan01xyz.jpg

 

Those raw composites are so dark! Let me make them look like real fotos by pulling in the right side of the curve.

1st Row: UvBG Raw Composite Channels

2nd Row: UV Raw Composite Channels

gerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989_BchanWpt.jpggerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989_G1chanWpt.jpggerbera_ug330_150_s8612_175_s8612_200_Sun_2015.10.23wf_40989_RchanWpt.jpg

gerbera_uvBadSun_2015.10.23wf_40916rawDigBchanLev.jpggerbera_uvBadSun_2015.10.23wf_40916rawDigG1chanLev.jpggerbera_uvBadSun_2015.10.23wf_40916rawDigRchanLev.jpg

 

 

********************************************************************

 

OK, so look at all that technical data and try to figure out whether we can extract the UV from the UvBG.

I can't think how.

The UV histograms and channel extracts show data in all three channels.

So how would you separate out UV data from a UvBG file which also has recorded data in all three channels?
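As a toy illustration of why this is hard: if each raw channel of the UvBG foto is some mix of a "pure UV" plane and a "pure blue-green" plane, then pulling the UV back out is an unmixing problem, and it can only be solved if the per-channel mixing coefficients are known. Those coefficients depend on the exact filter glass and the sensor's spectral response, and cannot be read off the photo itself. A minimal numpy sketch, with made-up coefficients:

```python
import numpy as np

# Toy model: two unknown source planes (pure UV and pure blue-green);
# each raw channel records a different mix of the two.
rng = np.random.default_rng(1)
n = 16                      # pixels in a tiny test patch
uv_true = rng.random(n)     # "pure UV" plane
bg_true = rng.random(n)     # "pure blue-green" plane

# Made-up per-channel mixing coefficients (rows: R, G, B). In reality
# these depend on the filter glass and sensor response, and are NOT
# knowable from the photo alone.
M = np.array([[0.9, 0.1],
              [0.3, 0.7],
              [0.5, 0.6]])

channels = M @ np.vstack([uv_true, bg_true])   # 3 x n raw channels

# With M known, least squares recovers the two source planes exactly...
recovered, *_ = np.linalg.lstsq(M, channels, rcond=None)
# ...but without M there are infinitely many valid decompositions,
# which is why a simple per-channel subtraction of a separate UV foto
# cannot cleanly remove the UV from a UvBG file.
```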

 

 

********************************************************************

 

FOOTNOTES:

 

(1) Please remember that if you want to be declared 'first' in some scientific effort, then you must write up your results for submission to a peer-reviewed, scientific journal for formal publication. This website is not a scientific journal. We have a little peer-reviewing going on, but our peers are other UV photographers. :D

 

(2) UV photography is a very small world. We have two filter vendors as members here (and we are happy to have them). The 'rule' is that no direct solicitations or advertising is permitted in forum posts. References to vendors of various UV related equipment are listed in our Stickies. See this section for the Stickies: Lists

Link to comment

As I understand it now, the idea of a filter stack is to control which wavelengths of light are entering our full spectrum cameras. To simulate bee-vision, we cut out the reds and focus on green to UV. If we include red, we are entering tetrachromatic realms. My issue, however, is that there is no "UV channel" built into the cameras, and instead that UV reflectance information spills into the RGB channels. This undoubtedly must have an effect on how the colors we can see are rendered. Any appearance of red can be interpreted as UV reflectance in the bee-vision scenario, but how do we cope with changes in the blues and greens in the presence of UV light? I think Cadmium's idea on subtracting a UV image is trying to address this issue, but I don't see how it is any advantage over simply combining UV and VIS images in photoshop.

 

 

The UvBG filter stacks made with a UG5 or a U-330 so far seem to consistently produce the bee-vision colours we would expect based on a flower's colour and its UV-signature. For example, a yellow, UV-absorbing flower, which stimulates a bee's green receptor, photographs as green under a UvBG filter stack after white balance. So, in practice, UvBG filter stacks are proving their utility.

 

When using a UvBG filter, the fact that some of the UV light is recorded in either the blue or the green channel does not seem to cause any apparent loss or damage to the UvBG photograph. Given that the UG5 and U-330 pass so very little blue and green, a small boost in those channels might even help the foto?

Anyway, I don't think you need to be concerned about separating or subtracting the UV out of the UvBG foto. Besides which, I don't think there is any way that can be done!

 

Channel stacks are also useful. Both channel stacks and filter stacks have their pros and cons. So make some channel stacks and some fotos with UvBG filter stacks, compare them and see which type best represents your idea of how bees perceive colour.

 

My personal favorite, currently, is the UvBG filter stack. I have still a few things to work out, but I've made progress!

Link to comment

Re simulating the output of a compound eye:

 

http://andygiger.com...e/beyehome.html (spectral issues not taken into account, but fun interactive site.)

 

http://www.smithsoni...8582355/?no-ist (Hardware photos, but no actual output.)

 

These are two different approaches. The first takes an ordinary image and transforms it into a compound simulation via mathematical transformations. The second, more difficult route is to build an actual hardware analogue of a compound eye. For now, the first approach seems the more accessible of the two, and avoids the issues associated with spectral transmission properties of actual hardware.

 

Thoughts re channel stacks: for the long, medium, and short wavelength channels (L, M, S), filters must obviously be used. For the S channel, this is no big deal, as a Baader filter is a reasonable approximation. For the other channels, things look more complex, at least for Apis mellifera. I do not think that the camera's G and B channels are good surrogates for the L and M channels, even though I have seen GBU images naively touted as "bee vision." Perhaps there exist filter packs whose transmission curves convolve with sensor response in such a manner as to approximate these spectral curves; otherwise, one may need to build up the separate channel images out of multiple images taken with narrower bandpass filters, mixed with the appropriate weighting factors. The interpolation exercise from an unfiltered image in which I engaged in post #13 should not be taken as a very accurate likeness either, though it may be marginally better than merely using the camera's native channels.
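The filter-times-sensor idea above can be sketched numerically. The curves below are synthetic Gaussians standing in for real datasheet curves (only the ~544 nm peak is the commonly cited value for the honeybee green receptor; the widths and the camera/filter shapes here are invented). The effective response of a filtered channel is the pointwise product of filter transmission and sensor response, and a normalised overlap gives a rough score for how well it matches a target receptor curve:

```python
import numpy as np

wl = np.arange(300.0, 701.0, 1.0)       # wavelength grid, nm

def gaussian(center, width):
    """Synthetic bell-shaped spectral curve on the grid wl."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Invented stand-ins -- NOT real datasheet curves.
sensor_green = gaussian(530, 50)        # camera G channel response
filter_tx    = gaussian(540, 40)        # some bandpass filter
bee_m        = gaussian(544, 30)        # honeybee green receptor, ~544 nm peak

# Effective response of the filtered channel: pointwise product.
effective = sensor_green * filter_tx

def overlap(a, b):
    """Normalised overlap (cosine similarity) of two spectral curves;
    1.0 would be a perfect shape match."""
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

score = overlap(effective, bee_m)
```

The same machinery would score a weighted sum of several narrowband captures against the target receptor: just replace `effective` with the weighted combination and search for weights that maximise the overlap.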

Link to comment
