UltravioletPhotography

Sharpening Images


Adrian


Has anyone done any tests regarding sharpening UV (or IR) images? The usual reason cited for sharpening images is the presence of the anti-aliasing filter, which slightly blurs the image to minimise moiré and other effects.

If we are using converted cameras without the AA filter, how much effect does sharpening have on UV images? Personally I find the judicious use of Unsharp Masking still beneficial, particularly when making inkjet prints, and to compensate for the perhaps slightly inferior sharpness of the EL-Nikkor lenses I use for UV.
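For anyone curious what Unsharp Masking actually does, here is a minimal numpy sketch (my own illustration, not any particular editor's implementation): blur the image, then add back a fraction of the difference between the original and the blur.

```python
import numpy as np

def unsharp_mask(img, amount=0.8, passes=1):
    """Sharpen by adding back a fraction of (original - blurred).

    A 1-2-1 binomial kernel stands in for the Gaussian blur a real
    editor would use; `passes` controls the blur radius and `amount`
    the strength. Output is clipped to the 0-255 range.
    """
    img = np.asarray(img, dtype=np.float64)
    blur1d = lambda r: np.convolve(np.pad(r, 1, mode='edge'),
                                   np.array([1.0, 2.0, 1.0]) / 4.0,
                                   mode='valid')
    blurred = img.copy()
    for _ in range(passes):
        blurred = np.apply_along_axis(blur1d, 0, blurred)  # blur columns
        blurred = np.apply_along_axis(blur1d, 1, blurred)  # blur rows
    return np.clip(img + amount * (img - blurred), 0, 255)
```

On a step edge this produces the familiar overshoot/undershoot halos, which is exactly why heavy-handed settings look "crunchy" on detail-rich UV shots.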

I use SmartDeblur 2.3. It allows you to directly edit the blur kernel, so quite reasonable results can be obtained if you are willing to put in the time (and you have a high-quality image to begin with). In general, you can do somewhat better than Photoshop but not nearly as well as on the CSI: Las Vegas show. It helps if lens blur is the ONLY source of blur in your image, the noise is low, and you denoise in software beforehand. (I also tend to take the sharpened image and recombine it in Photoshop with the original image, so that bokeh and so on can be preserved.)
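SmartDeblur's internals aren't public, but editing a blur kernel and inverting it is essentially deconvolution. A minimal Wiener-deconvolution sketch in numpy (my illustration, not SmartDeblur's actual method) shows why noise handling matters so much: the noise constant `k` is the only thing stopping frequencies where the kernel response is tiny from amplifying noise without bound.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Frequency-domain Wiener deconvolution.

    blurred : 2-D image degraded by convolution with `kernel`
    kernel  : point-spread function (smaller than the image)
    k       : noise-to-signal constant; larger k suppresses amplified noise
    """
    # zero-pad the kernel to image size, centered on (0, 0) for the FFT
    psf = np.zeros_like(blurred, dtype=np.float64)
    kh, kw = kernel.shape
    psf[:kh, :kw] = kernel
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf)
    G = np.fft.fft2(np.asarray(blurred, dtype=np.float64))
    # Wiener filter: H* / (|H|^2 + k), applied to the blurred spectrum
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

With a noisy input you must raise `k`, which trades restored detail for noise suppression -- hence the advice to denoise first.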

Like any other images, UV images can benefit from judicious sharpening and/or detail enhancement. Given that the short UV wavelengths bring out masses of surface detail, sharpening techniques must take that into account, because it becomes very easy to over-sharpen UV images. In other words, UV images typically require a bit less sharpening or detail enhancement than a corresponding Visible image would. As always, the photo subject plays a role here. As does one's artistic intent.

 

IR images - in the long-wavelength opposite direction - are inherently "soft" and require a great deal of detail enhancement and sharpening, imho. Again, I find IR to be quite tricky sometimes. The typical lens shows diffraction "earlier" when shooting in IR. But to grab more detail in IR it is sometimes necessary to stop way down, let more diffraction happen, and then sort it all out in the converter/editor with local-contrast techniques.

 

I've never tested any particular techniques for sharpening in UV or IR that are different from whatever we normally use in Visible images. It is simply that the usual sharpening and detail techniques are applied in different amounts than one would use for Visible photos. And, of course, with different settings in particular tools.

 

I particularly like Photo Ninja's Detail Slider for use on IR images. But because PN only performs global edits, I currently must create a detail-enhanced copy and a softer copy from PN and then layer them in Photoshop Elements to brush in the areas to which the detail enhancement applies.

 

Like Adrian, before printing, I find it necessary to sharpen even further than what I might do for display here.

 

Andy, your CSI reference made me laugh. :D Those shows have some amazing software. If only it were real!


I have used unsharp-masking quite a bit, although one must be careful not to overdo it; it works best with images whose blurring is very slight.

 

I have played with Smart Deblur, but it seems finicky and prone to generate ugly artifacts. If one is using it for forensic purposes, such as to render blurred text legible, this is irrelevant; but such artifacts are ruinous for artistic work.

 

I do not find IR images to be inherently soft--some of the sharpest images I have ever taken were IR. Some lenses, however, may suffer excessive CA in the IR, just as they can in the UV. IR can be more prone to halation, but that is not a sharpness issue per se.



Finicky, I'll give you, but whether you get artifacts or not is partly a function of what kernel you use, and whether you edit it. It is also a strong function of noise. Deblurring in general just doesn't deal with noise very well.

--

 

I agree about the IR not necessarily being soft. IR only has inherent softness issues if your optics are diffraction-limited, and most of us are not anywhere close to that situation.


Just referring to the physics of the light. :)

The longer the wavelength, the larger the diameter of the Airy disk. And so on.
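That point can be put into numbers. For a circular aperture the first-minimum diameter of the Airy disk is d = 2.44 λN (λ the wavelength, N the f-number); f/8 below is just an arbitrary example value.

```python
def airy_diameter_um(wavelength_nm, f_number):
    """Airy disk diameter (first minimum): d = 2.44 * wavelength * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number  # micrometres

# UV, green, and NIR at f/8
for wl in (365, 550, 850):
    print(wl, "nm ->", round(airy_diameter_um(wl, 8), 2), "um")
```

At f/8 this gives roughly 7.1 µm at 365 nm, 10.7 µm at 550 nm and 16.6 µm at 850 nm -- the blur spot more than doubles going from UV to NIR, all else equal.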

 

But it is certainly true that IR images can appear to be bitingly sharp with proper conversion/editing.

I've made a few myself (....unfortunately languishing in my long untouched digital albums.....).

 

****

 

Question about Adrian's original post -- has anyone ever seen much moiré with their converted cameras in UV or in IR?

I haven't found that to be any more than a very occasional occurrence in UV shots.

 

Although my typical subject matter could play a role in that observation because I don't seem to be shooting many subjects which have any high frequency patterns. [Flowers are so rarely plaid or tightly striped... :lol: ...little joke there...]


Greetings, friends!

 

Been away for a while, but just getting back to UVP again. I thought I'd start by randomly diving back into a thread that compels me to participate.

 


 

I believe that your eyes may be mistaking the exceptional contrast that IR images can impart for "sharpness". As it turns out, what IR typically lacks in sharpness, it significantly makes up for by often rendering very contrasty imagery. Especially as one incorporates IR-pass optical glass with increasing wavelength cut-offs towards the deeper IR direction. (800 nm will be more contrasty than 700 nm, 900 nm will be more contrasty than 800 nm, and so on; all other factors remaining equal.)

 

Indeed, we may forget that a bump up in contrast can also impart a sense of "sharpness", even though we are technically dealing with lower-energy wavelengths.

 

Of course, this is not to say that IR images cannot be made sharper through other forms of compensation. (We can partially compensate in other ways, such as the type of lens we use, the coatings on the lens, its optical configuration, the aperture setting we use, the speed of the shutter, the stabilization of the lens/camera body, the time of day/direction of light, the use or elimination of the camera's AA filter, post-photo software manipulation, etc.) But, the point remains (just as Andrea already pointed out), that with all other physical factors remaining equal, higher-energy (shorter) wavelengths will render sharper images than lower-energy (longer) wavelengths. That's solid physics, and that is a fact.

 

Anyway, hello all! It is good to be back. :)



Again, most of our cameras are not actually at the limit defined by the diffraction of light, which represents the best possible scenario. Saying it's basic physics ignores the reality that we just don't have such perfectly aligned setups (although Andrea may be close -- those Coastal Optics lenses are amazing). You are right that contrast and sharpness seem to go hand-in-hand for our eyes. This is what the Modulation Transfer Function tells us, isn't it?
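The MTF angle can also be quantified. A diffraction-limited (ideal) lens has an MTF cutoff of 1/(λN); comparing that against the sensor's Nyquist limit shows how far from the diffraction limit a typical rig sits. The 4.9 µm pixel pitch below is a made-up example value, not any specific camera.

```python
def diffraction_cutoff_lp_mm(wavelength_nm, f_number):
    """MTF cutoff of an ideal (diffraction-limited) lens, in line pairs/mm."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_number)

def sensor_nyquist_lp_mm(pixel_pitch_um):
    """Nyquist limit of the sensor: one line pair per two pixels."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# example: f/8 with a hypothetical 4.9 um pixel pitch
for wl in (365, 550, 850):
    print(wl, "nm cutoff:", round(diffraction_cutoff_lp_mm(wl, 8)), "lp/mm")
print("sensor Nyquist:", round(sensor_nyquist_lp_mm(4.9)), "lp/mm")
```

For these numbers the ideal-lens cutoffs come out to roughly 342 lp/mm (365 nm), 227 lp/mm (550 nm) and 147 lp/mm (850 nm) at f/8 -- all above the sensor's ~102 lp/mm Nyquist limit, which is the point being made: such a rig is sensor-limited, not diffraction-limited, even in IR.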



 

I understand what you are saying, to be clear. I have no disagreement with regard to those additional points. But the fact remains that the differences in wavelength between UV and IR still introduce noticeable differences in sharpness between comparative UV and IR shots, even when not hitting the limitations of which you speak. Especially when one photographs a test target with closely spaced, repeating patterns in both UV and IR.

 

Even when using non-dedicated ("accidental") UV lenses (those made of conventional optical glass), such as the popular Kyoei / Kuribayashi / Petri Orikkor 35mm f/3.5 lens, I have noticed a significant drop in sharpness of the same target patterns when doing a UV and IR comparison. Especially when you crop the image, or even get into "pixel peeping" analysis of the two images.

 

Granted, the human eye will not be able to discern such differences when looking at web-published (downsized) images viewed on an average-sized computer monitor. And such differences become even harder to see, considering that the majority of web-sized images undergo even further loss of quality when converted to lossy JPEGs.

 

But the fact remains that the underlying physics is there, whether it asserts itself more noticeably or less noticeably. And I have seen with my own eyes (through extended cropping and/or pixel peeping) that yes, UV images are sharper than IR images (again, all other attributes and settings remaining equal). I notice these differences whether using more budget-level smaller-sensor cameras or more pro-minded larger-sensor cameras. The phenomenon is still present, asserts itself more or less strongly, and can be seen if the examination is thorough. Even if one doesn't hit those limits of which you speak.

 

In fact, one doesn't even have to do a UV-IR comparison test. Even in UV-VIS image comparisons, the physics can notably assert itself (if more subtly), because VISIBLE wavelengths are still longer than UV wavelengths. Not as long as IR, of course, but still lower-energy enough to show a subtle loss in sharpness / resolving ability / detail on the smaller level ("micro resolution", as some people colloquially refer to it).

 

But, anyway. Maybe when this weather clears up (it's been raining non-stop for four days, now), I can go outside and do a photo comparison test of closely-spaced patterns/lines in UV/IR, and even UV/VIS, and then report back and show you the differences in resolving ability when doing extreme cropping. It is there, and it does exert its effects. Hence, my overall position still stands ... although I am certainly not rebutting your other points, to be clear (which also remain valid).


I think we must take a step back here and remember that when we categorize an image as sharp, we are using the word in a very broad sense. The way the eye (i.e., the brain) actually perceives this quality of sharpness in a photograph is dependent upon several inputs: edge acutance, diffraction, resolution, presence/absence of fine detail, local contrast, overall contrast, noise, viewing distance and viewing medium.

 

I'm sure I've either missed something or listed something-dependent-upon-another-list-item, but I'm sure you get my drift. If not, the drift is this: sharpness, as a photographic quality, is very complex. :D So, what I'm trying to get to is this: What are you seeing when you say an image is sharp? Are you seeing lots of detail? Lots of sharp edges? Lots of local contrast?

 

It's instructive to make a diffraction series if you've never done that. Set sharpening to 0, set contrast to "normal", pick a lens and shoot a detailed subject from f/2.8 through f/22. Convert the images, preserving the 0 sharpening and normal contrast. At what point does diffraction begin to decrease local contrast or start to blur detail? How much of the loss due to diffraction can be overcome with good editing using your favorite sharpener and/or detail enhancer, or other tools? Repeat the entire exercise under a UV-pass filter and then under an IR-pass filter. I predict (....just kidding...) that you will see delayed diffraction in the UV series and early-onset diffraction in the IR series as compared to the first visible series.
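That prediction can even be roughed out in advance. Using the rule of thumb (my assumption, not a standard) that diffraction softening becomes visible once the Airy disk spans about two pixels, the predicted onset aperture scales inversely with wavelength:

```python
def onset_f_number(wavelength_nm, pixel_pitch_um, pixels_per_airy=2.0):
    """f-number at which the Airy disk (2.44 * wavelength * N) spans
    `pixels_per_airy` pixels -- a rough rule of thumb for the aperture
    where diffraction softening starts to show."""
    return (pixels_per_airy * pixel_pitch_um) / (2.44 * wavelength_nm / 1000.0)

# hypothetical 4.9 um pixel pitch
for wl in (365, 550, 850):
    print(wl, "nm -> onset near f/", round(onset_f_number(wl, 4.9), 1))
```

For a hypothetical 4.9 µm pitch this lands near f/11 at 365 nm, f/7.3 at 550 nm and f/4.7 at 850 nm -- delayed onset in UV and early onset in IR, matching the prediction above.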

 

And, while you are at it, look for any moiré artifacts due to the missing anti-alias filter which was removed from your converted camera. :D (She tries to steer back to some of the original post.)

 

P.S. I've never yet seen a photograph which looked good at 100%, but you need to be at 100% to properly pixel peep for diffraction effects.

 

P.P.S. Don't forget that some lenses have aperture focus shift. You might need to refocus with each aperture change?


As an example of an IR image which I would consider reasonably sharp, let me adduce the following (via link; Sony A900, Zomei 850 filter):

 

https://www.flickr.com/photos/ol_doinyo/30677421836/sizes/o/

 

Even though this is the JPEG knockdown rather than the lossless original, one can discern individual grains on the rock monument and, in some cases, vein structure in the small leaves on the overhead branches.

 

Some of my IR images are less crisp than this one, to be sure, but that was almost inevitably due to slight focusing or motion faults rather than diffraction issues. I cannot honestly say that my visible or UV images are consistently any sharper than this one, and the blurring is barely perceptible (to me) when the image is viewed full-frame. The theoretical arguments bruited above are correct in principle, but I, at least, seem to be operating well short of the diffraction limit. The only glaring exception I have noticed is with the pinhole photos--but that is a radically different optical regime.


One additional thought, while we are on the subject of image sharpness and infrared images:

 

In the far end of the NIR, toward 1000nm, the Bayer array is transparent. It may be possible to reprocess IR photos to produce a higher resolution monochrome image by combining all of the subpixel values in one channel. Whether this will look sharper would depend on the issues already discussed...


Again, we should distinguish between the presence of small details and the presence of edge contrast when discussing "sharpness". It is truly a loaded term. But I agree that the principle is not always obvious in practice, simply because there are so very many inputs into the making of a UV or an IR photo.

 

It would be interesting to see side-by-side comparisons between the vis, uv and ir versions of a particular subject.

 

.......

 

Andy, I'm not sure I quite understand what you mean by combining all subpixel values into one channel? Do you mean skipping the demosaicing step?

Andrea, yeah, pretty much, or at least doing the demosaicing differently -- my idea would be that since all the "R," "G," and "B" sensels are exposed the same way (because the Bayer is transparent), then in principle you could use them as separate pixels in one monochrome channel.
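That idea could be sketched like this in numpy (my own illustration, not tested against a real camera file): treat the raw sensel array as one grayscale image, and optionally rescale each of the four Bayer positions to a common mean to mop up any residual per-dye sensitivity difference.

```python
import numpy as np

def bayer_to_mono(raw, equalize=True):
    """Reinterpret a raw Bayer mosaic as one full-resolution mono image.

    Only sensible when all filter dyes transmit (nearly) equally, as in
    the deep NIR toward 1000 nm. `equalize` rescales each of the four
    Bayer positions to the global mean, removing any residual
    per-position sensitivity 'grid'.
    """
    mono = np.asarray(raw, dtype=np.float64).copy()
    if equalize:
        target = mono.mean()
        for dy in (0, 1):
            for dx in (0, 1):
                site = mono[dy::2, dx::2]      # one of the four positions
                site *= target / site.mean()   # in-place gain correction
    return mono
```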

While there may not be colour differences, I wonder if there would perhaps be tonal differences due to the different chemicals?

 

It would be easy enough to test this out by using Dcraw or Raw Digger and extracting the un-demosaiced photo. (Would that be the "mosaiced" photo?)
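For anyone wanting to try: Dcraw's document mode writes out exactly this un-demosaiced pixel map (the filename below is a placeholder).

```shell
# -D: document mode, no demosaicing or scaling; -4: 16-bit linear; -T: write TIFF
dcraw -D -4 -T IMG_0001.NEF
# result: a TIFF with one grey value per sensel
```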

The version of the image before the demosaicing step may not be stored in the output file--the process could be handled internally, on the fly, by the camera's electronics. But it would be interesting if that proved not to be the case.

A raw image file contains only the raw sensor data together with metadata and a small JPEG preview.

So it is indeed possible to extract a pixel map from the raw file.

 

Here are two sets from a Visible file and from an Infrared file.

 

Visible Photo: Lichens on red rocks in Valley of Fire State Park in Nevada.

lichen_visible_20160225valleyOfFireStateParkNV_45289crop.jpg

 

Visible Raw Composite: Minimal autoscaling and gamma has been applied.

Raw colors are used (no white balance). Both green channels have been combined.

lichen_visible_20160225valleyOfFireStateParkNV_45289rawComposite.jpg

 

Visible Pixel Map: This is the greyscale version produced by Raw Digger.

Raw colors could be used, but we are interested in looking at tones in the Infrared case, so I'll show that for the Visible case also.

lichen_visible_20160225valleyOfFireStateParkNV_45289pixelMap.jpg

 

Visible Unresized Extract from Pixel Map: The squares are obvious if you

magnify your browser as needed to enlarge the tiny grid. That's Cmd + on a Mac.

lichen_visible_20160225valleyOfFireStateParkNV_45289pixelMapCrop.jpg

 

 

Infrared Photo: Made using B+W 093 IR-pass filter. IR only from about 820 nm. (? verify.)

I know Andy is interested in a 1000 nm cut-in, but this is what I have available tonight.

I don't have much yet from my 1000 nm IR-pass filter because I only just got it very recently.

lichen_093IR_20160225valleyOfFireStateParkNV_45293crop.jpg

 

Infrared Raw Composite

lichen_093IR_20160225valleyOfFireStateParkNV_45293rawComposite.jpg

 

Infrared Pixel Map

lichen_093IR_20160225valleyOfFireStateParkNV_45291pixelMap.jpg

 

Infrared Unresized Extract from Pixel Map: The grid pattern is much, much less obvious in this IR photo
than in the corresponding Visible photo. So Andy's speculation that a pixel map could be used for a 1000 nm IR-pass filter might be true.

lichen_093IR_20160225valleyOfFireStateParkNV_45291pixelMapCrop.jpg


I made an unresized crop from both pixel maps and then enlarged them to better see the Bayer grid. In the enlargement I can now find remnants of gridding in the IR photo.

 

Visible enlarged excerpt from Pixel Map

lichen_visible_20160225valleyOfFireStateParkNV_45289pixelMap01.jpg

 

Infrared 093 enlarged excerpt from Pixel Map

lichen_093IR_20160225valleyOfFireStateParkNV_45291pixelMap01.jpg


Yes, this is actually looking quite promising. The remaining part of the grid can be removed with a custom MATLAB script.

 

Edit: Andrea, did you make the IR results monochrome? If you can give me the color version, I'll see what I can do about removing the remnants of the grid.
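The same grid removal can be sketched without MATLAB -- here is a numpy equivalent (my own illustration): a pattern repeating every two pixels concentrates all of its energy into three Nyquist-frequency FFT bins, so notching those bins out removes the remnant grid.

```python
import numpy as np

def remove_bayer_grid(mono):
    """Suppress a residual 2-pixel-period Bayer grid with an FFT notch.

    Zeroes only the three Nyquist bins where a 2x2-periodic pattern
    lives; the rest of the spectrum is untouched. Caveat: any genuine
    image content at exactly those frequencies is lost too.
    """
    F = np.fft.fft2(np.asarray(mono, dtype=np.float64))
    h, w = F.shape
    F[h // 2, 0] = 0        # component varying vertically, period 2
    F[0, w // 2] = 0        # component varying horizontally, period 2
    F[h // 2, w // 2] = 0   # checkerboard component
    return np.real(np.fft.ifft2(F))
```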


Using Raw Digger there is no choice except greyscale when making a Pixel Map.

With Dcraw I think a color grid can be extracted.

 

But remember that the B+W 093 filter can sometimes still produce an IR photograph with a tiny bit of colour.

 

I meant to say: the B+W 093 filter produces an IR photo which is not entirely monochrome.

The base raw colour is a pale lavender, almost mono-lavender, but not quite.

 

Let me dig around for an IR photo made with the 1000 nm filter.


Here are some extractions of the undemosaiced raw data from Dcraw and from Raw Therapee.

I'm using again the IR photo previously shown.

 

 

The Dcraw app creates only a Greyscale pixel map.

But the grid pattern is very much more evident in the Dcraw version than in the Raw Digger version.

I wonder why that is?

Of course you can't see the Bayer grid at all in this resized version, so an enlarged screen shot excerpt follows.

lichen_093IR_20160225valleyOfFireStateParkNV_45293-d.jpg

 

Screen Shot 2016-12-12 at 2.29.46 PM.jpg

 

 

Using Raw Therapee I could create a color pixel map.

Funny though, the blue Bayer squares look black.

Again, an enlarged screen shot excerpt follows.

lichen_093IR_20160225valleyOfFireStateParkNV_45293.jpg

 

Screen Shot 2016-12-12 at 2.29.10 PM.jpg

 

 

Andy, at this point it isn't clear to me what can be gained by creating a pixel map, given the way these look?

