UltravioletPhotography

Here's something to discuss


Andrea B.


I'm going to copy a comment I made elsewhere because I'd like to know what members think about it.

 


A photograph made with a lens which has a range restricted to the Upper Quarter of 375-400 nm is a legitimate UV photograph. There seems to be a perception that a UV-capable lens should reach deeper than that. I don't quite understand that.

 

Of course, out of simple curiosity, we all want to try shooting below 350 nm. I do grant that. But nothing I've seen photographed below 350 nm in the last 10 years is all that different from what is photographed above 350 nm, except for some false colours after white balancing. Of course, even though the differences may be small, that doesn't mean they are not scientifically meaningful. But anyone performing scientific or professional reflected-UV photography already owns a corrected UV-Nikkor/Rayfact or a Coastal Optics 60 or 105.


 

So, why do so many dislike or discount that Upper Quarter 375-400 nm?

Why does everyone want UV "reach"?

Why is UV "reach" more important than other optical qualities in UV light

like corrections for chromatic or spherical aberrations or sharpness, for example?

 

Added: I should add that the lack of differences above and below 350 nm which I mentioned above is noted only when shooting with the typical broadband filter or filter stack. If one were using tight, narrowband UV-pass filtration, then it is possible you might catch some shoulder or foot where UV reflectivity cuts in or drops off. But the whole shootable waveband is only 100 nm wide, so it would likely be a rather strange photographic subject which has a dramatically steep slope in its UV reflectivity. With narrowband filtration one mostly catches a slow fade or slow increase of reflectivity. And even to do that, you need to be shooting with good calibrated standards so that processing doesn't inadvertently wipe away some small reflectivity difference between, say, 320 nm and 380 nm.

 


Link to comment

P.S. My job here is to stimulate discussion and to keep the conversation going.

The preceding discussion topic is most certainly not intended to be perceived as a rant or a stance.

:grin: :bee:

 

 

 

Please think before you write.

Address the questions.

Stay on topic.

ha-ha.....I sound like a teacher.

Link to comment

Well, I was involved in the other discussion that spawned this, so I've already made some input, but let me give my thoughts on your specific questions in this thread.

 

So, why do so many dislike or discount that Upper Quarter 375-400 nm?

I don't dislike or discount that Upper Quarter - I just don't want to be limited to it. I want UV reach.

 

Why does everyone want UV "reach"?

I'm interested to see what you get when you photograph in other regions of the spectrum. I'm also interested in seeing what the NIR "looks" like, and I'd love to photograph in the further reaches of the spectrum but don't have the cash. So it's all about inquisitiveness.

 

Why is UV "reach" more important than other optical qualities in UV light like corrections for chromatic or spherical aberrations or sharpness, for example?

I'd like both UV reach and high optical quality. But if, like me, you want to have fun playing in the UV but don't have $000's to spend on that area, you have to make a choice. Either I use modern high-quality optics designed for the visible region and limit myself to the Upper Quarter, or I go for UV reach and am limited to a few mediocre (sorry, wasn't supposed to be using that word), poorly coated 1960s triplet lenses which are UV-friendly but are never going to win any awards for optical quality. I lust after UV reach, as I've explained, so I have no choice but to forego optical excellence.

Link to comment

I will answer your question with some background. I started in UV photography back in 2008, after seeing some great flower images by Birna.

 

I got the Baader Venus-U filter and flipped it in a holder with 3 step-down rings: a 52mm-to-48mm for my Olympus 35mm macro, a 55mm-to-48mm for my YUS 135mm f2.8 (which has 4 lens elements), and a 58mm-to-48mm for my other lenses, which at that time I had standardized to 58mm.

 

I was using my Olympus E3, then in 2009 got a used full-spectrum-converted E510 for an amazing price. I was surprised it was only 1 stop faster for UV with my best lens at the time, the 35mm f3.5 macro.

I didn't know it at the time, but my 135mm YUS had a loose element group and just needed to be tightened to get sharp images. I took it apart in 2017 and discovered this.

I then still wanted faster shutter speeds, so I got an SD14. But I had a hard time using it, as the only lens I had that would adapt was the Tamron 90mm f2.8 Adaptall, which is horrible for UV.

 

So I was blissfully happy, randomly taking odd UV flower photos from 2008 until 2017 with my stock E3, as its high-ISO performance was better than the E510's, which I used mainly for IR.

 

Then in 2017 I realized that my Pentacon Six 80mm f2.8 had 5 lens elements and could be adapted to all the cameras I had. I discovered this when my Zeiss 120mm f2.8 needed an aperture repair and I took it apart.

So I posted this comparison:

 

https://www.dpreview.../thread/4145287

 

Because others, and even I, based on my experience with the Adaptall 90mm, didn't think the SD14 was good for UV.

 

But then I wanted a better lens. I discovered this website and read everything. I then got a igoriginal 35mm f3.5, a Steinheil 50mm f2.8 and a Canon 199A flash.

Then I joined this site in 2018, to start to share my experience and help others.

Then I got a series of UV bandpass filters and became a UV snob:

390bp25, 370bp15, 340bp10, 313bp25, 405bp10 and 335bp10.

Using these filters on a flower, I saw distinct differences in the UV signature of a dandelion between the 390bp25 and the 370bp15. Lower-wavelength UV did look different.

But if you are happy with a broadband filter like the Baader Venus filter, then you will not see it. The difference is only there with tight-bandwidth filters.

 

So that's my answer: there is a difference between 370nm and 390nm, but you will almost never see it.

I also posted a recent flower that had a strong difference between 313nm UVB and UVA.

 

There was also the aircraft canopy photo Jonathan took with the Baader Venus filter, which was completely black, and the SEUmk2, where you could see the pilot inside. So depending on your image, it may make a difference.

Link to comment

This is great stuff! And I thank you both for your input.

 

Bernard, I have no objection to your use of the word 'mediocre'. You should call it as you see it. In the other topic I simply made an observation that some folks are happy with the Upper Quarter and some aren't. I make no judgement myself. I'm not here for that kind of thing. But I am curious.

 

David, very cool to be reminded of your finding of a dandelion difference under narrowbands. As I mentioned somewhere today, I have always wanted to shoot narrowband UV, but my first attempts have been so frustrating. I *will* try again someday after we have gotten resettled.

Link to comment

As for the reasons of others, I shall leave that to them.

 

For myself, why do I want 'reach'?

 

Curiosity is one reason. The further the reach, the further away it is from visible light, and I'd like to know what the world looks like there (or at least how the camera interprets it). I have seen some differences down at 300ish nm compared to 350nm and upwards.

 

Work is another reason. I am imaging skin for part of my work, and that includes imaging sunscreens. As such I need ways to image the skin and sunscreens in the wavelengths they can be exposed to on a daily basis, so between 300nm and 400nm.

 

I admire quirky and unusual engineering. It becomes very hard, very quickly to see deeper into the UV. Many specialist UV lenses are unique or interesting designs, and as such they interest me.

 

And I suppose to give a mix of 'colours'. By having a range of wavelengths that can be observed, a wider colour palette is available for the image.

 

Other attributes are of course important. If the lens transmits to 200nm but has huge chromatic aberration then that can obviously be an issue, depending on the style of image you want.

 

I don't dislike the 375-400nm region, but for me, I'd like to see further as well.

Link to comment

For me UV "reach" is not most important.

A lens's good transmission, sharpness and low focus shift is for me much more important than any really deep UV-reach.

 

For wanting some more UV-reach I second almost all Jonathan's points above except for the work thing.

(For me this is purely a hobby driven by curiosity.)

 

I think that deep inside of me there might be a bit of an artistic soul.

I like to discover the wonders and beauty of nature's creations that normally are overlooked or invisible.

That is the reason for me exploring macro and then mainly flowers and their hidden colours.

 

The need for deeper UV-reach than 375nm is there for expanding the colour palette and keeping the exposure times as short as possible.

However, the needed UV-reach that actually affects images visibly, when using the sun as light source, is not as deep as commonly believed.

 

Only if you are into narrowband UV with special illumination like David, or working with specially converted or monochrome cameras like Jonathan, does deeper UV reach become very important.

Otherwise the combination of the sensor's sensitivity spectrum and the available light's spectrum normally makes any attempt to gain anything from the shorter UV-A spectrum futile.


There is a difficulty in the original question: how shall the shorter 375nm limit be interpreted?

I answer as if it means a loss of 1 stop. That is technically logical.

However, sometimes, as in the Lens Sticky, a 3-stop limit is used.
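For anyone unfamiliar with the stop arithmetic behind these criteria, the conversion from a lens's transmission fraction at a given wavelength to stops lost is just a base-2 logarithm. A minimal sketch (the cutoff fractions are the standard definitions of a stop, not measurements of any particular lens):

```python
import math

def transmission_loss_stops(transmission_fraction):
    """Photographic stops of light lost at a wavelength where the lens
    transmits the given fraction (1 stop lost = half the light)."""
    return -math.log2(transmission_fraction)

# A "1-stop" cutoff criterion means the lens still passes 50% of the
# light at that wavelength; a "3-stop" criterion accepts as little as 12.5%.
one_stop = transmission_loss_stops(0.5)      # 1.0
three_stop = transmission_loss_stops(0.125)  # 3.0
```

So a lens rated to 375nm on a 3-stop criterion passes far less light there than one rated on a 1-stop criterion, which is why the two conventions can give very different-sounding limits for the same glass.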

Link to comment

I care about "reach" up to a certain limit for ordinary wideband UV photos. As you said, Andrea, much can be done by just changing the processing if you are taking wideband UV photos. If this is the kind of imaging you restrict yourself to, then you probably only need to get to 375nm for some false yellows. I definitely care about image quality also, but I think it's not the dominant thing for me since I'm in this to experiment at least as much as to make art photos.

 

One thing that I don't believe has been pointed out much on this board is that if you transform the images to Lab color space, nearly ALL the image information is in the blue-yellow ("b") channel and very little or none in the red-green ("a") channel. (You can see this in the relative widths of the histograms for the "a" and "b" channels.) So our UV false color is essentially just one channel of information, not two, which means we are missing some important wavelength-dependent information in our pictures. This is why I'm very much looking forward to Bernard Foot's results (or other people's, preferably using monochrome sensors) using narrowband imaging: combining the results of three bandpass filters will not only correct for the sensor's gain decrease by using different exposures, it should actually give us the full two channels' worth of color information. You can't do that without enough "reach"!
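This Lab-space check can be reproduced without any image-editing software. The sketch below uses the standard CIE sRGB-to-L*a*b* formulas with the D65 white point (it is not code from this thread, just an illustration of the transform being discussed):

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE L*a*b* (D65 reference white)."""
    def lin(c):
        # Undo the sRGB gamma curve to get linear light.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65).
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    def f(t):
        # CIE nonlinearity, with the linear toe for small t.
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx = f(x / 0.95047)  # normalize by the D65 white point
    fy = f(y / 1.00000)
    fz = f(z / 1.08883)
    L = 116 * fy - 16         # lightness
    a = 500 * (fx - fy)       # red-green axis
    b_star = 200 * (fy - fz)  # blue-yellow axis
    return L, a, b_star
```

Running every pixel of a white-balanced UV photo through this and histogramming the a* and b* outputs should show the narrow a* spread described above: a saturated yellow like (255, 255, 0) lands near b* ≈ +94, while neutral greys sit near (0, 0).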

Link to comment

Andy,

I don't understand your last point.

Look at this image:

 

https://www.ultravioletphotography.com/content/index.php/topic/3229-freeware-em1-dye-channel-images-using-bandpass-filters/page__view__findpost__p__26949

 

These are specific-wavelength filters teased out for the individual color channels with an Olympus camera.

 

Now look at this image by Andrea for a Nikon camera:

 

https://www.ultravioletphotography.com/content/index.php/topic/3471-coastal-optics-10545-diffraction-grating/page__view__findpost__p__29741

 

The blue channel is mostly lost by 370nm.

Just green and red signal. How would merging my images regain information if it's lost below 370nm?

Link to comment

One thing that I don't believe has been pointed out much on this board is that if you transform the images to Lab color space, nearly ALL the image information is in the blue-yellow channel ("b") and very little or none in the red-green "a" channel. (You can see this in the relative widths of the histograms for the "a" and "b" channels.) So our UV false color is essentially just one channel of information, not two.

 

I don't know much about Lab color space, so these next two questions are not meant to be challenges. I'm asking because I don't know.

 

Why can't we say that white-balanced reflected UV images are represented by two L*a*b* channels: L* and b* ?? The white balanced false UV-colors are white/grey/black and yellow/blue.

 

Also, the blue is never a "pure" blue. It is sometimes violet-ish blue or purple-ish blue. Where does that red component go in Lab color space?

 

The raw colours vary amongst violet, violet-blue, magenta, red, orange. Where would those fall in Lab color space?

Link to comment
Why can't we say that white-balanced reflected UV images are represented by two L*a*b* channels: L* and b* ?? The white balanced false UV-colors are white/grey/black and yellow/blue.

Sure, you can, but the L is just the monochrome part of the image. It's not really color information, and it's unrelated to wavelength (more or less). If you kept just the L, you'd have a nice grayscale photo. The Lab space is in fact intended to represent human vision, which apparently works on blue-yellow and red-green axes.

 

Also, the blue is never a "pure" blue. It is sometimes violet-ish blue or purple-ish blue. Where does that red component go in Lab color space?

In the "a" channel. I'm not saying that the "a" channel has zero histogram, just that it's pretty narrow compared to the "b" so TO FIRST APPROXIMATION it is being ignored.

 

The raw colours vary amongst violet, violet-blue, magenta, red, orange. Where would those fall in Lab color space?

Raw colors are not really relevant to the point here. I mean, you can do the same thing with the raw colors if you want; the point is being able to tell that you are in a subspace of RGB, not taking up all of it. Using the Lab space was just my way of explaining that fact; it's true regardless of color space, and even regardless of white balance. If you were to plot all the colors, whether raw, white balanced, or whatever, in a 3D red-green-blue cube, you would see that you don't fill the whole cube; instead you lie on some kind of surface within the cube that has a smallish thickness. I may try plotting it for you tomorrow or something.

 

[OR I could be totally wrong about all of this, just to cover my bases, but I'll try getting you some more convincing evidence soon.]

Link to comment

Andy has me wondering if some changes I see in floral patterns may be due to loss in channel response.

What would be great is if Jonathan could image a lily like this one:

https://www.ultravioletphotography.com/content/index.php/topic/3385-a-striking-flower-in-uvb/page__view__findpost__p__28645

 

First, image it with his multispectral Canon camera, once with a 308nm filter and once with the Baader Venus filter, using a 302nm light source.

Then image the same flower with the same filters and light source, but using his monochrome Canon camera.

I wonder if the style would be black in the monochrome camera's image using the 308nm filter, but grey using the Baader Venus filter.

 

It would also get at this thread's point of reach and curiosity.

 

 

Link to comment

Ok, I promised I'd give you some evidence, so here you go. What I did was take two photos of the same scene through a quartz lens (Resolve 60mm), one using the 330WB80 filter and the other through a stack of S8612 1.75mm + DB850 filters (should be 400-650nm for the stack). That gives one visible-light photo and one UV photo. They have been white balanced off PTFE in both cases.

 

Then I converted each of them to L*a*b* in MATLAB and plotted the b* channel versus the a* channel in each case. The points are their actual colors in the image (including brightness).

 

 

Visible photo:

post-94-0-40583100-1567461242.jpg

 

UV photo:

post-94-0-13919700-1567461251.jpg

 

post-94-0-20003300-1567461276.png

 

You can see that the "effective gamut" in the UV graph is very skinny and almost one-dimensional compared to the fatter one in the visible-light photo. My conclusion is that the Bayer dyes are NOT giving us the full range of possible colors we could be seeing in our UV photos. By using bandpass filters and a monochrome camera, it should be possible to bypass this limitation.

---

 

An amusing thing to do is to squeeze the visible-light colors into that narrow UV "effective gamut", just to see what a visible-light photo would look like if our Bayer dyes behaved the same way in visible light as they do in UV. By doing a simple least-squares fit to the b* vs a* UV graph above, I found a formula that lets you calculate a new b* value for each a* in the visible-light image. If I do that and transform back to RGB, I get the following results:

 

Unmodified visible on left, and with the remapped b* channel on the right:

post-94-0-52333000-1567485246.jpg
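For anyone curious what "a simple least squares fit" looks like in practice, here is a minimal ordinary-least-squares line fit. The a*/b* pairs below are made-up toy values, not data from these images, and the actual fit used for the pictures above may well have been a higher-order curve:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y ~ m*x + c, in closed form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    m = sxy / sxx
    c = mean_y - m * mean_x
    return m, c

# Toy (a*, b*) pairs lying exactly on b* = 3*a* + 1:
a_star = [-10, -5, 0, 5, 10]
b_star = [-29, -14, 1, 16, 31]
m, c = fit_line(a_star, b_star)  # m = 3.0, c = 1.0
```

With m and c in hand, the remap is just b_new = m * a + c for every pixel's a* value, followed by the Lab-to-RGB transform back.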

Link to comment
enricosavazzi

My take is similar to others'. I am not aware of substantial differences in the UV reflectance of common natural subjects across the UVA range. I am aware, however, that numerous chemicals display significant absorbance changes across this range, so the potential cannot be discounted. Only weeks ago I posted samples of plastics, most of them with rather uniform UVA reflectance, but one with a rather sharp cutoff at around 380 nm. Not to speak of polycarbonate, which is VIS-transparent but UV-black.

 

How would I know this firsthand without "more reach" into the UV? No way, other than using a spectroscope. So I do need at least one lens with "more reach". I don't need 5 or 10 of these, however; just a few of different focal lengths with "enough reach" to produce two or three false colors, so that my pictures are not boringly monochromatic.

 

Good image quality, on the other hand, is a must for me. More than UV reach.

Link to comment

So I do need at least one lens with "more reach". I don't need 5 or 10 of these, however; just a few of different focal lengths with "enough reach" to produce two or three false colors, so that my pictures are not boringly monochromatic.

 

Good image quality, on the other hand, is a must for me. More than UV reach.

 

Sounds like an argument to stop gear acquisition syndrome. As if your wife is looking over your shoulder.

 

However, I do think I may agree. You need one perfect lens, a couple of excellent filters, a great modified camera and then good lights. Once you work that out, that really should be it.

 

But then why do I still look at cameras, filters and lights? Oh well the grass may just seem greener on the other side.

 

 

Link to comment

Andy, your a* b* plots remind me of Starling murmurations.

 

Building on the idea of differences between UVA and UVB, I did this a while ago with my monochrome camera:

 

https://www.ultravioletphotography.com/content/index.php/topic/3170-uva-and-uvb-polariser-moxtek-uvd260a/page__view__findpost__p__26168

 

In the polarised images especially, the middle part of the flower looked very different in UVA and UVB, although the difference was much more subtle without the polarisers, due to the shine.

Link to comment
  • 3 years later...

Well, I am four years late to this party, but there is another reason for wanting "reach", a reason no one mentioned (though I know Jonathan cares about this too). Theoretical maximum resolution of a photo is inversely proportional to wavelength. 200nm light can give up to twice the resolution of 400nm light. So if we want the highest possible resolution and we cannot afford an electron microscope, deeper UV imaging is the only thing we can do. Microscopists knew this a hundred years ago, and built deep UV quartz microscope objectives to do the job. In modern times, electron microscopes have made these obsolete for university researchers, but deep UV imaging is still the only option for regular folks like most of us. Jonathan's blog and his posts on this forum show some good examples of the advantages of UV-B and UV-C microscopy, especially for diatoms (which are also my reason for starting down this path). This resolution advantage also applies to regular macro photos, and the semiconductor industry takes advantage of this fact to make ever-smaller integrated circuits using shorter and shorter wavelengths.
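The scaling claimed above follows directly from the Abbe diffraction limit, d = λ / (2·NA). A quick sanity check (the NA of 0.95 is just an illustrative high-end dry-objective value, not tied to any lens in this thread):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation d = wavelength / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

# The same NA 0.95 objective, visible vs deep-UV illumination:
d_400 = abbe_limit_nm(400, 0.95)  # ~210.5 nm
d_200 = abbe_limit_nm(200, 0.95)  # ~105.3 nm, i.e. exactly twice the resolution
```

Halving the wavelength halves the smallest resolvable feature, which is the whole appeal of deep-UV microscopy for subjects like diatoms.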

 

Edit: Another reason to look for more UV "reach" is that some difficult-to-photograph materials, like the silica frustules of diatoms, become more opaque in shorter-wavelength UV, making them much easier to photograph.

 

 

Link to comment
On 9/2/2019 at 12:23 AM, dabateman said:

 

I don't have a D610 + CO105/4.5 + BaaderU, which is marked HWB 325-329 nm.

On this site I have seen many yellow flowers (like dandelions) and many blue-violet flowers (like daisies).

I have never seen green flowers (~325nm) like your equipment can see.


These days I did some tests with my modest equipment:
a full-spectrum Sony A7 with a Meritar 50mm f2.9 (a good triplet with little focus shift, which sees green and therefore also 325 nm).
With the diffraction grating and the flash light, only with ZWB2 do I see green, but there is also a lot of IR.

As soon as I put on IR-cut filters, the green disappears:
ZWB2 + TNS575 (2 mm t)
ZWB2 + BG18 (Jena glass, 2 mm t)
ZWB2 + QB39 + QB39 (1.5 mm t)

With ZWB2 + only one QB39 (1.5 mm t) there is IR contamination and some green is visible.

I'd love to try a Baader-U2 filter one day, but I think things won't change... and neither would a Rayfact.

I think the only way to have an accurate view is without the Bayer filter, using a monochrome sensor like @JMC or @lukaszgryglicki, and using narrow filters.

Only by allocating RGB to narrow frequency bands will it be possible to obtain images that are truly different from those altered by the Bayer filter.
This is the same thing the Hubble Space Telescope does with a narrow UV+VIS filter wheel, or Webb with a VIS+IR filter wheel.

Link to comment
