UltravioletPhotography

Basic Computational Averaging


Nemo Andrea


Since acquiring my full-spectrum G7, I have been playing around with ways to get shorter exposure times for UV photos. Naturally the best way is to either have a better lens or better illumination, but I was thinking about another approach that might work in some situations. I'll post some of the results here, as you might be interested in seeing them.

 

So the idea here is to use a modern (mirrorless) camera's high burst rate. Similarly to how modern smartphones achieve very nice results by combining many fast (and noisy) exposures, I figured something similar might work for UV, as our big sensors are dealing with the same kind of problem in the UV. Naturally smartphones have a lot of smart HDR exposure and clever raw image alignment behind them, but some basic tests are described below. Of course, as cameras get better, these burst rates will continue to improve, so this should only get more useful as time goes on*.
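The core of the idea can be sketched numerically: for purely random noise, averaging N frames cuts the noise standard deviation by roughly sqrt(N). A minimal simulation (synthetic flat scene and Gaussian noise, all numbers illustrative, not measured from the G7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the burst: a flat grey scene plus independent
# per-frame sensor noise (sigma = 0.1); 30 frames = 1 s of 4K burst at 30 Hz.
scene = np.full((100, 100), 0.5)
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(30)]

average = np.mean(frames, axis=0)

# Averaging 30 frames should reduce random noise by about sqrt(30) ~ 5.5x.
print(np.std(frames[0] - scene))  # per-frame noise, close to 0.1
print(np.std(average - scene))    # roughly 0.1 / sqrt(30) ~ 0.018
```

Real bursts are harder (camera shake, JPG compression, non-Gaussian read noise), but this is the mechanism the smartphone pipelines exploit.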

 

This could then allow for handheld UV photography at reasonable sharpness, as it should eliminate camera shake while allowing longer effective exposures (think 0.5-1 sec).

 

My Panasonic G7 has 2 burst modes:

1. A traditional burst mode, shooting RAW+JPG at about 10 Hz

2. A 4K photo mode, where the camera shoots a series of JPGs at 30 Hz**, which is essentially just 4K video, but in any aspect ratio :)

 

For the first test I decided to do a basic handheld test using option (2). The 4K images are 8 MP, which is just about acceptable if you get the framing right on location. Another note: the camera unfortunately saves the frames as an MP4, so I fear some more detail is lost in that step compared to a series of JPGs, but it is what it is. Perhaps they use a higher-bitrate format for this mode, as they do allow JPG extraction in camera.

 

So, the preliminary results. Images taken on my G7 with a Kolari Vision UV-pass filter at ISO 6400, 1/80 s, with a Nikon 50mm f/1.8D (at f/2.8).

 

1. First, the out-of-camera JPG. Lots of noise, as expected at that ISO on MFT. Crops are about 900x900 px.

post-261-0-78574600-1574520387.jpg

2. The processed RAW file. Processed using DxO PhotoLab with the PRIME denoising option (which usually works really well for low-colour-saturation images).

post-261-0-43776700-1574520396.jpg

3. The averaged 4K video frames. Frames extracted using Fiji, then aligned and averaged in Affinity Photo (paid software, but Hugin or similar will do the trick too). 30 frames averaged (i.e. 1 sec of "video" under ideal conditions; likely a bit longer).

post-261-0-25688500-1574520434.png

 

! Some items that should be mentioned: the images were taken handheld, and as such the framing is not exactly the same between shots. Still, it's reasonably close, I'd say.
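For reference, the align-then-average step can be sketched in a few lines, assuming integer pixel shifts only (real tools like Affinity or Hugin also handle sub-pixel shifts and rotation). This uses FFT phase correlation; the function name is my own:

```python
import numpy as np

def align_shift(ref, img):
    """Estimate the integer (dy, dx) shift that aligns img to ref via FFT
    phase correlation; np.roll(img, (dy, dx), axis=(0, 1)) matches ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

With the shift estimated for each frame, the frames would be rolled into place and the stack averaged with `np.mean` as above.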

 

So when might this be useful?

I think this might be handy in situations where you are shooting handheld and as such limited to e.g. 1/50 s, but want a longer exposure to use a lower ISO or stop down the lens for improved sharpness. Even if your scene is not perfectly static (e.g. a person posing), you can probably get away with 8 exposures of 1/50 s (i.e. the scene needs to be static for only ~1/4 s). Assuming that averaging noisy images works comparably to an equivalent reduction in ISO (to be tested at a later date), that is a full 3 stops compared to the single-image case.
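A quick sanity check of that stop arithmetic, under the stated assumption that averaging N frames is equivalent to an N-times-longer exposure:

```python
import math

# Light gathered scales with the number of averaged frames, so the gain in
# stops over a single frame is log2(N) under the assumption above.
def stops_gained(n_frames):
    return math.log2(n_frames)

print(stops_gained(8))  # 3.0 stops for 8 x 1/50 s, as claimed
print(8 * (1 / 50))     # 0.16 s of actual exposure inside the ~1/4 s window
```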

 

Limitations:

The camera's burst rate and frame rate directly limit how much information you can gather per unit of shooting time. For example, if you need 1/80 s to eliminate motion blur, the burst is still limited to 30 Hz, and as such 62.5% of the time between frames is not used to capture the scene, making 1 s of shooting equivalent to (at most) 0.375 s of exposure. If your camera can do 60 Hz, this improves to only 25% wasted exposure time.
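The duty-cycle numbers above can be checked directly; this ignores readout overhead, per the footnote:

```python
# Fraction of wall-clock time the sensor is actually exposing during a burst.
def captured_fraction(shutter_s, burst_hz):
    return shutter_s * burst_hz

print(captured_fraction(1 / 80, 30))  # 0.375 -> 62.5% of the time wasted
print(captured_fraction(1 / 80, 60))  # 0.75  -> only 25% wasted
```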

 

Anyway, I'm happy enough with this as a proof of concept; further tests to be explored in the future.

 

Todo: test option 1 (to see what is possible when retaining RAW images and full sensor resolution).

 

* As an example, the Panasonic G9, the newer and more pro-oriented MFT camera from Panasonic, can do 6K burst at 30 Hz and 4K burst at 60 Hz, which is already much nicer.

** The camera has negligible readout and clearing time between frames.

You also have compression artifacts with this method (using 4K, anyway). I think I will stick with just taking a longer exposure. I am somewhat interested in super-resolution images, but so far my results with that method have not been impressive enough to be worth posting about.

I think instead of using the rapid 4K photo mode, you would be better off with the regular 7 frames per second, but underexposing the shots.

Then align using the Hugin method Bernard posted.

Then use the Sum function in either Affinity or Fiji to get back to normal exposure.

In Affinity this may also remove some noise, or not; I can't remember if it focuses on the static content increasing in amplitude or just boosts all signal.

There may be a sum+mean function that does that in Fiji; I used ImageJ more.

But that will improve quality and signal, though you will have to merge many images.


I've done some trials with this in the past, and while I might try again, I found it rather problematic, as the dynamic range is affected tremendously. Underexposing of course uses only a tiny fraction of the available dynamic range, which cannot be recovered. So it'll get the exposure right but look terrible with colours and such; at least that's my experience. I do think the RAW images will be most promising, but I'd like to see how far the 4K burst can be pushed.
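A back-of-the-envelope sketch of that dynamic-range concern (the bit depth and stop count here are hypothetical, not specific to any camera in the thread):

```python
# Underexposing by k stops before summing leaves only 1/2**k of the ADC range
# per frame, so each frame's tonal steps get coarser, and summing the frames
# cannot put back levels that were never recorded.
bit_depth = 12          # hypothetical 12-bit RAW
stops_under = 3
levels_total = 2**bit_depth
levels_used = levels_total // 2**stops_under

print(levels_total, levels_used)  # 4096 levels shrink to 512 per frame
```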


Alright, so a small comparison of noise performance: averaging vs raising ISO. Here the JPGs were averaged, as Affinity seemed to produce a lot of chroma noise when aligning/averaging RAW files, but I think it's still representative (if anything, averaging should be worse for JPGs, as more detail is probably lost compressing noisy images). This was all on a tripod, and hence just serves to answer the question of low ISO vs high-ISO averaging.

 

First test: (1) 4 sec at ISO 800 vs (2) 8 images at ISO 6400 and 1/2 s.

Process: align the JPGs in Affinity, average, and sharpen. Sharpening was done because I noticed the same process seems to happen in camera to produce the low-ISO JPG; in my RAW editor I have to sharpen a fair amount before I match the out-of-camera JPG.

Image order: a single image of (2); the low-ISO image (1); the averaged images of (2), sharpened a bit.

post-261-0-32694800-1574606681.jpgpost-261-0-79629000-1574606672.jpgpost-261-0-18835600-1574606667.png

 

 

Second test: (1) 4 sec at ISO 200 vs (2) 8 images at ISO 1600 and 1/2 s.

Process: same as above.

Image order: a single image of (2); the low-ISO image (1); the averaged images of (2), sharpened a bit.

post-261-0-46634200-1574606879.jpgpost-261-0-11199000-1574606875.jpgpost-261-0-17415700-1574606859.png

 

The unsharpened images (RAW export without sharpening, and the unsharpened average) look fairly comparable as well. One interesting thing is that colour rendering seems to change the most between ISO settings, even in the JPGs. I cannot yet say whether this affects the ability to capture colour detail. Upping the saturation a bit seems to bring the averaged images to the colours of the lower-ISO equivalent.

 

So I suppose averaging is not noticeably worse than raising the ISO, which I guess is to be expected, and nice to know.

 

Next up I'll have a look at the full-sensor-readout burst (10 Hz) approach.


Your subject is monochrome and smooth.

Do you have a color checker?

If not, a box of crayons?

I would need to see different colors and some textures to compare sharpness. The markings on the color checker or the fine paper fibers of a crayon wrapper help evaluate that.

Also, my idea was to underexpose by holding at base ISO. If you capture an image at ISO 1600, f/8, 2 seconds, try capturing 5 images at ISO 200, f/8, 1/2 second, and sum or sum+mean merge them.

You may get a better result.

Just do the stop math, obviously, if your ideal exposure is different from my made-up one.
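Doing that stop math explicitly: a summed stack reaches the same total light as the single exposure when frames x shutter x ISO balances on both sides (numbers below are the made-up ones from the post):

```python
# Frames needed so that n * t_frame * iso_stack == t_single * iso_single,
# i.e. the summed stack matches the single shot's brightness exactly.
def frames_to_match(t_single, iso_single, t_frame, iso_stack):
    return (t_single * iso_single) / (t_frame * iso_stack)

print(frames_to_match(2, 1600, 0.5, 200))  # 32.0 frames for an exact match
```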

 

It may not be possible in Affinity Photo, but try the Total operator to see the result. Here are the options:

https://affinity.help/photo/en-US.lproj/index.html?page=pages/Stacking/stacks.html?title=Image%20stacks

