UltravioletPhotography

Filtering by image subtraction



Andy Perrin

(continued from the monochrome image thread)

 

The question, proposed by Cadmium, is whether out-of-band light can be subtracted off somehow. Having played with this for the EIR-type images, I think the answer is yes, but it has to be done very carefully on the un-white-balanced TIFF generated straight from the RAW file; otherwise the JPEG algorithm potentially messes with it. There is also the issue of how many bits per channel: 256 levels per channel is not enough when you subtract, because subtraction amplifies the relative noise, and small values can get rounded to zero.

 

For example, consider these two monochrome, one-dimensional "images" meant to represent gray with a few spots of noise:

 

image1 = [254 254 256 254]

image2 = [253 252 253 252]

 

image1 has an average value of 254.5 with roughly ±1.5 of noise, so the noise is 1.5/254.5 ≈ 0.6% of the mean

image2 has an average value of 252.5 with roughly ±0.5 of noise, so the noise is 0.5/252.5 ≈ 0.2% of the mean

So there is a good signal-to-noise ratio in the original images.

 

If we subtract them,

image1-image2 = [1 2 3 2]

and the result has an average value of 2 with roughly ±1 of noise, so the noise in the subtracted image is 50% of the mean!

 

When you have more levels of gray (and the image is properly exposed), you are less likely to end up in this situation. For a 16-bit image, the difference between two adjacent gray levels is 1/65536 of full scale instead of 1/256, so you have more wiggle room. The out-of-band image is going to have a lot of very small values, which will get rounded to zero in an 8-bit image (and likely also by the JPEG algorithm), and that will mess things up as well.
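Here is the same arithmetic as a quick numpy check (a sketch; it uses the standard deviation as the noise estimate, so the percentages come out a bit lower than the rough ± figures above, but the jump from well under 1% to tens of percent is the same):

import numpy as np

# The toy "images" from above, kept in a wide signed type so the
# subtraction cannot clip at zero or wrap around the way 8-bit unsigned
# math would.
image1 = np.array([254, 254, 256, 254], dtype=np.int32)
image2 = np.array([253, 252, 253, 252], dtype=np.int32)
diff = image1 - image2

for name, im in (("image1", image1), ("image2", image2), ("difference", diff)):
    m, s = im.mean(), im.std()
    print(f"{name}: mean {m:.1f}, noise {s:.2f} ({100 * s / m:.1f}% of mean)")
# Relative noise jumps from a fraction of a percent in the originals to ~35%
# in the difference.

The same idea applies to a real workflow: convert both frames to a signed or floating-point array before subtracting, and only scale back down to a display format at the very end.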

----

 

Next, software.

Funny, the first software that came to my mind was ImageJ. I guess that shows more of my background. I still want to try this with a couple subjects and see if there are differences.

Even if I had Photoshop, I wouldn't know how to do that subtraction.

The free ImageJ is better by far, insert smiley face here.

I have heard good things about ImageJ but have never used it before. The main issue with just using Photoshop is that we don't know what it's doing behind the scenes. For instance, it has both a "Subtract" and a "Difference" option for layers, and I've never been quite sure what the, er, difference is between them. If you code in MATLAB or another computer language, you can do the subtraction directly, and then you don't have to worry so much.
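For what it's worth, as far as I know Photoshop's "Subtract" clips negative results to black while "Difference" takes the absolute value. Written out directly, so nothing is hidden behind the scenes (a numpy sketch with made-up 8-bit values; a stands for the stack image, b for the out-of-band one):

import numpy as np

a = np.array([[10, 200], [5, 120]], dtype=np.int32)
b = np.array([[12, 180], [3, 130]], dtype=np.int32)

subtract   = np.clip(a - b, 0, 255)  # "Subtract": negatives are clipped to 0
difference = np.abs(a - b)           # "Difference": absolute value of a - b

print(subtract)    # [[ 0 20] [ 2  0]]
print(difference)  # [[ 2 20] [ 2 10]]

Doing it this way in MATLAB, numpy, or ImageJ means you know exactly which of the two operations you are getting.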

----

 

Finally, there is the issue of whether the RAW converter is doing anything to the image. Potentially the RAW converter might apply a curve to the image values, but I haven't seen anything like that in PhotoNinja's output, as long as white balance is turned off. I think the best approach would be to do the subtraction on the RAW subpixel values themselves, before demosaicing, but I'm not sure how to do that yet.

----

I would say yes, the raw converter is definitely doing something. This is why RawDigger is so popular here, since it doesn't alter the raw data.

Again, try ImageJ; after you install it, go to the plugin page and get the dcraw plugin so you can open raw camera images. It hasn't been updated in a while, so it may not support the newest cameras, but at least dcraw allows some control over how the raw file is processed into ImageJ.

 

 

http://ij-plugins.so.../plugins/dcraw/

 

Updated just to say that this uses dcraw v9.20, which was last updated in January 2014, so it is quite old.

----

Yes, raw converters can each present slightly different conversions even before a white balance is applied, because there are different demosaicing algorithms in use. Most differences having to do with the Bayer decoding are somewhat subtle, but the scaling of the linear data and the application of a "gamma" curve vary as well. Then too, how you are viewing the raw conversion depends on what default color space is set in your raw converter. What Photo Ninja considers the raw image to be differs from what RawDigger considers the raw image to be.
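As a small illustration of why the curve matters for this thread: once a gamma curve has been applied, subtracting the two frames no longer gives the same answer as subtracting the linear data and encoding afterwards (toy numbers, nothing camera-specific):

# Linear, normalized sensor values for one pixel in the two frames.
a, b = 0.30, 0.25
g = 1 / 2.2  # a generic gamma encoding

linear_then_encode = (a - b) ** g       # subtract linear data, then encode: ~0.26
encode_then_subtract = a ** g - b ** g  # encode first, then subtract: ~0.05
print(linear_then_encode, encode_then_subtract)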

 

As David has mentioned, if you use dcraw from the command line in a Terminal window, then you have some control over the demosaicing choices.
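For example (a sketch, with a hypothetical filename, assuming dcraw is installed and on the PATH), you can drive it from Python so the whole subtraction pipeline lives in one script:

import subprocess

# -D: document mode (no demosaicing, no scaling), -4: linear 16-bit,
# -T: write a TIFF next to the input (IMG_0001.tiff in this case).
subprocess.run(["dcraw", "-D", "-4", "-T", "IMG_0001.NEF"], check=True)

# To let dcraw demosaic but keep the data linear and un-brightened, replace -D
# with, e.g., -q 3 (AHD interpolation), -o 0 (raw color space) and -W (no
# auto-brightening).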

----
Andy Perrin
Yeah, you don't want any curving of the linear data (or scaling either, unless it's the same for both images). Ideally you subtract each subpixel (RGGB) and demosaic afterwards. I have to say, PN may be curving things, but it may not have mattered as much for my EIR images. Those would be a lot less sensitive, because I'm subtracting the blue channel from the others, and the blue channel is well exposed, so it's unlikely to be curved as much. It's the deep shadows that I would expect to be curved most, and in the case of out-of-band stuff, those deep shadows are the whole image. So we need to do this before the demosaicing.
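Here is a rough sketch of that idea in Python using the rawpy library (a libraw wrapper); untested, and the two filenames are hypothetical stand-ins for a stacked-filter shot and a matching out-of-band-only shot taken with identical settings:

import numpy as np
import rawpy

with rawpy.imread("stack.NEF") as full, rawpy.imread("oob.NEF") as oob:
    # Undemosaiced Bayer data (the RGGB subpixels), promoted to a signed
    # type so the subtraction cannot wrap around.
    a = full.raw_image_visible.astype(np.int32)
    b = oob.raw_image_visible.astype(np.int32)
    diff = np.clip(a - b, 0, None)  # negative leftovers are just noise

    # One way to demosaic the result: write it back into the first file's
    # buffer and let libraw interpolate, with no white balance, curve, or
    # auto-brightening applied.
    full.raw_image_visible[:] = diff.astype(full.raw_image_visible.dtype)
    rgb16 = full.postprocess(gamma=(1, 1), no_auto_bright=True,
                             use_camera_wb=False, output_bps=16)

Whether the two exposures match well enough for a straight subtraction is a separate question, but at least everything here happens before demosaicing and before any curve is applied.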