UltravioletPhotography

Tree stump UVIVF with contaminating light removal by image subtraction


Andy Perrin


A while ago I found that it was possible to remove visible-light contamination in UVIVF photography by subtracting an (averaged) image taken without the torch from an image illuminated with the torch. This method was used to excellent effect with the Queen Anne's Lace that I showed some time ago. The secret is to subtract only 16-bit linear images, which can be obtained from PhotoNinja by turning off everything except the white balance. Do not try image subtraction on JPEGs! You won't get nice results.
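A quick numeric sketch (toy numbers, not from the post) of why the subtraction only works on linear data: light adds linearly in the scene, so linear pixel values subtract cleanly, while gamma-encoded JPEG values do not.

```python
# Toy numbers: why subtraction needs linear data.
background = 0.20                # streetlight contribution, linear scale
signal = 0.05                    # fluorescence contribution, linear scale
combined = background + signal   # light adds linearly in the scene

# On linear values the subtraction recovers the signal exactly:
recovered_linear = combined - background          # == 0.05

# On gamma-encoded values (as in a JPEG) it does not:
gamma = 1 / 2.2
recovered_gamma = combined ** gamma - background ** gamma
true_encoded_signal = signal ** gamma
# recovered_gamma is ~0.05, far below the ~0.26 the encoded signal should be
```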

 

The filters used were BG38 2mm + Tiffen Haze 2E, and the contaminating background light is streetlights. The white balance and color correction were taken from the profile I made for the gourd photos the other day.

 

The torch was the Nemo.

 

The procedure was to take 30 photos with the torch off using the built-in intervalometer in my Sony A7S, and then repeat the process with 30 more photos while light painting with the Nemo. I then took the median of the no-torch photos to get a combined no-torch image, and took the MAXIMUM of the light-painted images to get a combined with-torch image. Then I subtracted the streetlight-only image from the streetlights+UV image in Photoshop and adjusted the contrast on the result.
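The median / max / subtract procedure above can be sketched in a few lines of NumPy. This is only illustrative (shapes and values are made up; the real work was done on 16-bit TIFFs in dedicated software):

```python
import numpy as np

# Fabricate two stacks of 30 tiny 16-bit "linear" frames for illustration.
rng = np.random.default_rng(0)
no_torch = rng.integers(0, 1000, size=(30, 4, 4), dtype=np.uint16)              # streetlights only
with_torch = no_torch + rng.integers(0, 5000, size=(30, 4, 4), dtype=np.uint16) # + light painting

# Median of the no-torch frames -> combined streetlight-only image.
streetlights = np.median(no_torch, axis=0)

# Per-pixel MAXIMUM of the light-painted frames -> combined with-torch image.
painted = np.max(with_torch, axis=0)

# Subtract the contamination, clipping so no pixel goes negative.
uvivf = np.clip(painted.astype(np.float64) - streetlights, 0, None)
```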

 

Final result:

post-94-0-03966800-1606100145.jpg

 

Image with torch + streetlights:

post-94-0-81768800-1606100428.jpg

 

Image with streetlights only:

post-94-0-46266600-1606100452.jpg


That worked wonderfully! You really cleaned up your image; this may have interesting applications.

 

A question: Why did you take the maximum of the torch images? Does it work better than the average/median* by experience, or is there a deeper reason?

 

*are the average and the median the same here?


Thanks, Colin!

 

Stefano, median is not the same as average -- see the wiki article here:

https://en.wikipedia.org/wiki/Median

 

The reason for taking the median of the streetlight images is that medians *reject outliers* very well, and in this case that means it removes the headlights of passing cars! Typically if you take 30 images at least one or two will contain headlights but by taking the median those won't be included in the combined image.
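A toy example of this outlier rejection, with made-up numbers:

```python
import numpy as np

# One pixel observed across 30 frames, two of which were
# contaminated by passing headlights.
frames = np.full(30, 100.0)   # steady streetlight level
frames[[7, 19]] = 4000.0      # two headlight-contaminated frames

mean_value = np.mean(frames)      # 360.0 -- dragged far upward by the outliers
median_value = np.median(frames)  # 100.0 -- the outliers are simply ignored
```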

 

The reason for taking the max of the torch images is that you want to get the brightest pixels of the set, since the brightest pixels have the most UVIVF signal. (No cars passed by while taking the UVIVF ones, or I would have had to redo it.)


I know that a geometric mean works better than the arithmetic one (the typical average) at removing outliers, since it uses more "powerful" operations to calculate it. But I guess you don't have the tools to calculate it on a series of images, and maybe the median is just better anyway.

 

Can you pick the dimmest pixels to remove "positive" spikes in brightness (for the no-torch image)? That should work too.

Stefano, yes, the minimum would have worked on the no-torch images. I prefer the median: most software doesn't do geometric means, and the median works very well for the purpose.
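A quick numeric comparison (illustrative values only) of the three estimators discussed here, minimum, median, and geometric mean, on one pixel hit by two positive headlight spikes:

```python
import numpy as np

# 30 frames of noisy streetlight level plus two positive headlight spikes.
rng = np.random.default_rng(1)
frames = 100.0 + rng.normal(0.0, 5.0, 30)   # streetlight level with sensor noise
frames[[3, 22]] = 4000.0                    # headlight spikes

minimum = np.min(frames)                  # rejects the spikes, but sits at the
                                          # bottom of the noise, so it's biased low
median = np.median(frames)                # rejects the spikes and stays near 100
gmean = np.exp(np.mean(np.log(frames)))   # geometric mean: still pulled up by the spikes
```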

David, I have no idea. It could be a piece of trash I missed or something?

 

Thanks Gary. Not as much work as you would think. The photos took about 15 min and the processing another 20! I guess to people who like “straight out of camera” as the standard then it might seem like a lot?

Now that I look closely it might be a leaf that blew in between the shots. Thus not subtracted out.


In this case, if you used the median for the torch image, you should have removed it, since it is an outlier (and quite a strong one). But you used the maximum for a good reason that you have already explained, so you caught it. I think the only solution sometimes is just to do a "manual" check and remove defective images.

 

Was it windy when you took the images?


Andy,

Do you have Sony PlayMemories installed on your A7S?

The light painting, smooth reflection, or light trails apps, $5 each, would let you do this live in camera. I was just reading about that last night, as the cost of a brand new A7Rii is $1200, which is tempting, and it's one of the last to support the apps, which do useful stuff that Sony never added. There is also a focus bracketing app or hacked app out there as well. It looks like with OpenMemories you can write your own to get the camera to do whatever you want.


Dabateman, yes, I do have it installed, but it's not that simple. When I did the above, I used the PlayMemories Time-lapse app as my intervalometer to get the shots. Then I did the stacking and alignment (the median and max) on the 16-bit TIFFs using Long Exposure Stacker on my Mac (a tool from the astro community). The results are almost certainly better than what could be achieved in-camera using an app, which would probably use JPEG and do no alignment.

 

I honestly don't understand why people are so set on doing things in-camera with shitty quality when you can do better with a custom program on the computer. Convenience isn't worth the price you pay in quality.


Actually it depends on the app. The camera processor is specialized for image editing, so it can be faster and cleaner with the right software. This is why recent Fuji and Olympus cameras have the option to process images on the camera while tethered to the computer: the camera processor is just better.

But the software is lacking. I like the OpenMemories concept, and people have added stuff, but there still isn't much available.

 

I don't know what these apps do or how they handle the data. But might be worth at least researching.

The camera processor is the same as any other computer. At best it might be faster, but the end results will either be the same (if it uses RAW) or inferior (JPEG). It may depend on the app. What is absolutely certain is that there is less flexibility, and it is less clear what is being done to the image unless the app is open source. Sony in particular is very vague about anything to do with software.
  • 6 months later...

I honestly don't understand why people are so set on doing things in-camera with sh***y quality when you can do better with a custom program on the computer. Convenience isn't worth the price you pay in quality.

 

Agree strongly!!


 

Question:

Does your method work for different colors of fluorescence and different colors of subject?

....not sure how to word that.....

For example, could you retrieve the red fluorescence of chlorophyll from a "contaminated" photo?

Would it matter what color background the green plant was photographed against?

Thank you.

 

Off topic:

Have you considered switching to one of the newer Sonys?

The A in particular seems to be a real gem.

Andy Perrin

Andrea, it should work for any color of fluorescence as long as it's bright enough. In recent experiments I've been trying this method with a laser to induce the fluorescence, but if I am too far from the subject, the laser is too dim to produce enough fluorescence and the streetlights drown out the image (even with the subtraction).

 

The worst-case scenario is when the fluorescence happens to be the same color as the streetlights, and then you are trying to separate out a tiny difference in brightness, which is hard.

