UltravioletPhotography

Showing results for tags 'Processing'.


  1. Last night I took some photos of this flower, which looks something like a coneflower. [Edit: It is a zinnia. Thanks, Andrea!] I tried a new method of processing. My previous attempts at shooting flowers in situ have suffered from flower movement in the wind, so this time I tried taking multiple images at high ISO with short exposures to maximize the chance of getting sharp frames, then throwing away the blurry ones and stacking the rest to reduce noise.

First, a visible photo. This was taken with the converted Sony A7S, a Hoya UV/IR cut (which passes down to about 380-390nm) and the LED on my iPhone (which probably doesn't emit much UV). I then corrected the colors using a ColorChecker Passport in Photo Ninja. The lens was the metal EL-Nikkor 80mm/5.6.

Visible reflectance photo: F/8, ISO1600, 1/100"

--

Onward to the UVIVF. These used the camera's Daylight white balance setting, chosen simply because other people on here have used it and I don't know what else to do. The torch was the new 15W torch from eBay (we NEED a better name for that thing!). I forgot to remove the glowing ring, although I don't think it affected the results much because the torch was ~0.5-1 meter from the flower. The torch was unmodified.

Next, 30 photos were taken with the torch on, of which 16 were usable (sharp). I also took another 30 "dark frame" images so that the stacking software would remove any remaining visible light from the scene. (The flower was in near-total darkness, so these frames appeared black.) The images were taken automatically using the Sony TimeLapse camera app, sold in Sony's app store for US$12; it is essentially a built-in intervalometer for the camera. The images were then stacked using the Mac program Starry Sky Stacker, intended for astrophotography but well suited to dealing with moving blossoms too.

I chose the arithmetic mean of the images, as opposed to the median, 60th percentile, or maximum value, which are the other options in that software. Final post-processing involved a small amount of denoising with the Neat Image plugin for PS, and sharpening with SmartDeblur applied to the center of the flower. Only the disc/cone in the center was sharpened, leaving the petals untouched.

UVIVF, whole frame (reduced size): F/8, ISO1600, 1/10" x 16 images

1:1 crop [ETA: the color profile was messed up on this one; a reupload with the proper profile is below]: F/8, ISO1600, 1/10" x 16 images

ETA: here is a reupload using the original color profile.

Overall I would describe this as a very successful experiment, and I definitely recommend the stacking method for dealing with flower motion.
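The mean-stack-plus-dark-frames procedure described above can be sketched in a few lines of numpy. This is a minimal illustration, not the author's actual workflow (the post used Starry Sky Stacker); the arrays here are synthetic stand-ins for loaded frames.

```python
# Sketch of the stacking approach: average the sharp "light" frames,
# subtract the averaged dark frames, and clip negatives. Assumes all
# frames are float arrays of identical shape; the synthetic data below
# is only a stand-in for real images.
import numpy as np

def stack_with_darks(light_frames, dark_frames):
    """Arithmetic-mean stack with master-dark subtraction."""
    light = np.mean(np.stack(light_frames), axis=0)        # mean of sharp frames
    master_dark = np.mean(np.stack(dark_frames), axis=0)   # mean of dark frames
    result = light - master_dark                           # remove stray visible light
    return np.clip(result, 0.0, None)                      # no negative pixels

# Toy example: 16 noisy "light" frames around 0.5, 30 darks around 0.02
rng = np.random.default_rng(0)
lights = [0.5 + 0.05 * rng.standard_normal((4, 4)) for _ in range(16)]
darks = [0.02 + 0.005 * rng.standard_normal((4, 4)) for _ in range(30)]
stacked = stack_with_darks(lights, darks)
```

Averaging N frames reduces random noise by roughly a factor of sqrt(N), which is why 16 short, noisy exposures can beat one long, blurred one.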
  2. I have had occasion to notice that a number of flowers start to show color again in SWIR, and it turns out that the tiny flowers (florets? I don't have the vocabulary down, though I'm sure Birna could help) on Queen Anne's Lace have dark centers at 1500nm. The darkening starts before then and is also visible in the longer-wavelength parts of NIR.

Using the TriWave (which has a germanium-on-silicon sensor with a 350-1600nm range), I made the following "true color" IR image from two Omega bandpass filters and a Thorlabs longpass (effectively a bandpass, since my sensor's range ends at 1600nm):

- 1500nm hard-coated premium edgepass from Thorlabs (blocked OD5+ down to 200nm, which is rather important with the TriWave since it is much more sensitive in the visible than at 1500nm+)
- 1064BP25 Omega
- 780BP30 Omega

These were placed in the R, G, and B channels respectively to produce the following image:

The image has been processed by registering the channels, adjusting contrast, reducing noise, sharpening, and boosting saturation.

Original images:

1500nm-1600nm (but probably mostly 1500-1550nm, because the TriWave's gain falls quickly in that range):

1064nm:

780nm:

This result is startling given that in visible and in UV the flower has a uniformly light or dark appearance, with the flower centers indistinguishable from the petals.

Visible:

UV (S8612 1.75mm + UG11 2mm):

--

Edited to add a large pano of the whole head at 1500nm. 58 images.
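The channel assembly described above (1500nm into red, 1064nm into green, 780nm into blue) amounts to stacking three registered monochrome frames into one RGB array, then balancing the channels. A minimal numpy sketch, with tiny synthetic arrays standing in for the real frames:

```python
# Compose a false-colour image from three registered narrowband
# monochrome frames. Each channel is normalised independently so that
# differing filter transmissions / sensor gain don't dominate the
# colour balance. Array contents here are illustrative only.
import numpy as np

def compose_false_color(r_band, g_band, b_band):
    """Stack three registered monochrome frames into an RGB image."""
    rgb = np.stack([r_band, g_band, b_band], axis=-1).astype(float)
    for c in range(3):
        ch = rgb[..., c]
        span = ch.max() - ch.min()
        if span > 0:
            rgb[..., c] = (ch - ch.min()) / span   # stretch channel to [0, 1]
    return rgb

band_1500 = np.array([[0.1, 0.9], [0.5, 0.3]])   # -> red channel
band_1064 = np.array([[0.2, 0.8], [0.6, 0.4]])   # -> green channel
band_780  = np.array([[0.3, 0.7], [0.5, 0.5]])   # -> blue channel
img = compose_false_color(band_1500, band_1064, band_780)
```

In practice the frames must be registered first (the TriWave channels shift slightly between filter changes); any alignment tool will do before this step.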
  3. Andy Perrin

    House boat, UV HDR

    There's a new house boat on the Annisquam River, where my mother lives in Gloucester, Massachusetts. I used it as a subject for a UV HDR shoot, using 3 photos. The camera was the Sony A7S; the lens was the metal EL-Nikkor 80mm/5.6. Filters were S8612 1.75mm + UG11 2mm. The exposures were F/22, 1.6", at ISO 320, 640, and 1250. WB was in-camera off PTFE (but done several days ago, so the light may have shifted... I meant to WB again but forgot). Raw conversion was in Photo Ninja. The HDR program was Aurora HDR (which I particularly like for its ability to adjust the sky and ground independently). Further adjustments were made in Photoshop. The three individual frames that were merged (and then further tweaked in Photoshop) are here:
  4. This information has been posted in the bowels of another post, but is repeated here for ease of reference.

The problem of sensor dust becomes acute as magnification in macrophotography increases above about 2:1. Dust is particularly problematic when using focus stacking software, because even otherwise unnoticeable spots become highly visible as streaks. For users of Zerene stacking software there is a new utility (at the time of writing, available only in the latest downloadable beta version) which is very effective in overcoming this dust problem. These notes, using extensive input from Zerene, explain how to use the utility, and in particular how to create the dust masks that the utility needs.

1. At the Zerene End

You will need Beta Version T2020-06-01-1033-beta or later, or a full release version dated later than July 2020, to get this capability. To turn the feature on, go to Options > Preferences > Preprocessing and check the "Use dust and defects mask" box. Select the file that contains the mask (see below). Make sure that either "In-Fill before aligning" or "Explicitly propagate good pixels" is checked: the former is probably better in most cases. To use this feature you will need a Pro licence for Zerene. With Student and Consumer licences you can trial the feature for 30 days.

NOTE: if you upgrade to a Pro version, you may get out-of-memory errors when using the dust removal utility. If this happens, go to Options > Preferences > Memory Usage and increase the amount of memory allocated to Zerene. A value of 4,000 is definitely too low; a setting of 5,000 seems to work OK, but you may wish to build in a greater margin.

2. Make a Grey Background Image (GBI)

You will need an image which contains your current dust spots. You could use one of the images that you are going to stack, but it is easier to make an image (referred to as the GBI here) with a plain grey background, using the same lens, aperture, and bellows/tube extension (if relevant) that will be used for your real images.

IMPORTANT: make sure the GBI is the same image type (e.g. JPEG) as the images you are going to stack. If you create, for example, a TIFF version of the GBI from your camera's RAW output, it will probably have slightly different pixel dimensions from a JPEG output by your camera.

IMPORTANT: Zerene cannot detect image orientation, so the orientation of the GBI (and the mask created from it) must be the same as that of the images being stacked. If, for example, you made the GBI in landscape but your stacking images are in portrait, you will have to rotate either the GBI or the stacking images.

3. Making a Dust Mask Manually

This description uses GIMP, but a similar process must be possible in Photoshop.

· Ensure that Preferences > Image Import & Export > "Promote imported images to floating point precision" is off.
· Load the GBI into GIMP.
· Create a 2nd layer, pure white.
· Move the GBI to the top and set its Opacity to about 90%.
· Activate the lower (white) layer.
· Select the Pencil tool and set a size big enough to cover typical dust spots – 20 pixels worked well for me. Set the pencil colour to black.
· Zoom the display to 200% or more.
· Scroll around the image and click on the centre of each dust spot. This creates a black disk under the dust spot on the lower white layer; with the GBI layer at 90% opacity, you can faintly see the disk.
· For larger blemishes, either increase the Pencil size, or keep the left mouse button down and paint over the blemish.
· When you have done all the dust spots, delete the top, GBI layer.
· Flatten the image (right-click on the white layer in the Layers panel).
· Save the white layer as your manual mask.

IMPORTANT: save this as an uncompressed TIFF.

4. Making a Dust Mask Automatically

There are 2 stages to this: capture the dust spots, then expand them.

4.1 Capture the Dust Spots

There will be multiple ways of achieving this, but this approach uses GIMP and is easy.

· Load the GBI into GIMP.
· Set sharpening to maximum (Filters > Enhance > Sharpen (Unsharp Mask)).
· Set the image to greyscale (Image > Mode > Grayscale).
· Set Contrast to maximum (Colors > Brightness-Contrast).
· Set Brightness to the minimum that does not cause noise dots to appear (best to view the image at 200-400% so you can see this).
· You should now have a pure white field with pure black dust spots. Save this interim mask as a TIFF file.

4.2 Expand the Dust Spots

This ensures that the dust spots in the mask are (1) slightly larger than the dust spots on the target images, and (2) solid. (In macrophotography, the dust spots may appear as concentric light and dark rings because of diffraction.)

4.2.1 Using Photoshop Elements

· Load the interim mask from 4.1 into Photoshop.
· Select the Magic Wand tool.
· Make sure the tool property Contiguous is unchecked.
· Enlarge the image (Ctrl +) so that the dots are a good working size on screen.
· Click on any black dot. This will select all black dots in the image. (If you look carefully, you can see shimmering outlines.)
· Go to Select > Modify > Contract and set Contract By to 2 pixels (you can play around with this setting later if you want). This causes very small dots to be ignored.
· Go to Select > Modify > Expand and set Expand By to 5 pixels (again, adjustable later). This enlarges the dust spots that have not been ignored.
· Go to Edit > Fill Selection and select Contents Use = Black and Opacity = 100%.
· Save the resulting image as a TIFF – this is your Dust Mask.

4.2.2 Using GIMP

· Load the interim mask into GIMP (or just continue from stage 4.1).
· Select the Select By Colour tool (not the Magic Wand / Fuzzy Select tool).
· Enlarge the image to perhaps 400% so that the dots are a good working size on screen.
· Click on any black dot. This will select all black dots in the image. (If you look carefully, you can see shimmering outlines.)
· Go to Select > Shrink and set "Shrink selection by" to 2 pixels (adjustable later). This causes very small dots to be ignored.
· Go to Select > Grow and set "Grow selection by" to 5 pixels (adjustable later). This enlarges the dust spots that have not been ignored.
· Go to Edit > Fill with FG Color (assuming your foreground colour is set to black; if not, use BG colour if that is black, or use the Colour Picker to set the colour to black).
· Click on Select > Invert.
· Go to Edit > Fill with BG Color (assuming your background colour is set to white; if not, use FG colour if that is white, or use the Colour Picker to set the colour to white).
· Save the resulting image as a TIFF – this is your Dust Mask.

5. Updating the Dust Mask

The Dust Mask file can be retained for use in subsequent sessions. However, new dust spots are likely to appear and cause new streaks. You can update the Dust Mask manually:

· Load the Dust Mask into GIMP.
· Load one of your images into GIMP as a top layer, and set its opacity to 90%.
· Expand the view to 200%.
· Set up the Pencil tool as described above for manual masks.
· Find the dust spot(s) that are causing the streak and click on them.
· Delete the top layer and save the lower layer as a TIFF as your updated Dust Mask.
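For readers comfortable with scripting, the mask-building steps above (threshold the GBI, discard tiny specks, grow the survivors) can also be approximated in Python. This is a hypothetical alternative using scipy, not part of the Zerene workflow; the threshold and pixel sizes are illustrative:

```python
# Approximate script equivalent of the GIMP/Photoshop steps: threshold
# a grey background image to find dark dust spots, drop specks smaller
# than min_size pixels (the Contract/Shrink step), and dilate the rest
# (the Expand/Grow step) so the mask covers diffraction rings.
import numpy as np
from scipy import ndimage

def make_dust_mask(gbi, dark_thresh=0.8, min_size=3, grow_px=5):
    """Return a white mask (1.0) with black regions (0.0) over dust spots."""
    spots = gbi < dark_thresh                          # dark pixels = candidate dust
    labels, n = ndimage.label(spots)                   # connected components
    sizes = ndimage.sum(spots, labels, range(1, n + 1))
    keep = np.isin(labels, np.where(sizes >= min_size)[0] + 1)
    grown = ndimage.binary_dilation(keep, iterations=grow_px)
    return np.where(grown, 0.0, 1.0)                   # black spots on white

# Toy GBI: bright field with one 2x2 dust spot and one single-pixel speck
gbi = np.ones((12, 12))
gbi[4:6, 4:6] = 0.2    # real dust spot (4 px: kept and grown)
gbi[9, 9] = 0.2        # lone speck (1 px: discarded)
mask = make_dust_mask(gbi)
```

Save the result as an uncompressed TIFF, as with the GIMP route, before pointing Zerene at it.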
  5. This is a rainbow, shot with my "single shot IRG" method using a Tiffen #12 + DB850 filter. The combination transmits roughly 550-650nm plus 800-900nm, omitting the 700-800nm region. That region otherwise contaminates the IRG, since that part of the IR contributes unequally to the three channels (unlike the 800-900nm band) and so cannot be subtracted off by removing the blue channel without supplying unknown multipliers. Here is my estimate of the spectrum, based on the Tiffen #12 and DB-850 data supplied by the manufacturers:

The processing has already been documented extensively by me on this board, so I will just present the final image. (Unfortunately I did not record shooting data, and in any case this is a panorama, so I can't give the exposure.) The lens was the metal EL-Nikkor 80mm/5.6.

Interestingly, some inner bows can be seen. Now compare to the visible photo, taken with an iPhone XS Max:
  6. I got a near-infrared polarizer! It was on eBay for a very cheap price (it is quite old), so I snapped it up and tried it out today. The wavelength range is 780-1250nm, the extinction ratio is 1:1000, and the peak transmission is 20% or so (bleh). But it doesn't have much wavefront error, so it does okay for imaging, unlike the Thorlabs ones.

Here we have a favorite test subject, St. John's Seminary and Chandler Pond. The filter is the 980BP10, for the dark-water effect. I stacked it with the IR polarizer and stuck the whole thing on an old Tiffen polarizer ring, to allow rotation. The camera was the converted Sony A7S; the lens was the metal EL-Nikkor 80mm/5.6. Exposure was F/11, 2.5", ISO100.

I took three images at 0, 45, and 90 degrees (approximately) and put them in the red, green, and blue channels after alignment in Photoshop. I white balanced the result and adjusted the saturation a bit by transforming to the L*a*b color space, then transformed back to RGB.

"Red"

"Green"

"Blue"

False color
  7. Took a day trip down to Niagara Falls today, and shot with the Hrommagicus filter. It's quite surprising how much colour I can tease out of the raw ARW file. All of this was done in Lightroom.

Post-processing screen-grab:

Yes, the original is over-exposed. I had exposure comp set to +1 in error, so I also had a slower shutter speed than expected.
  8. I have been noticing for some time now that when I take a photo with the Tiffen #12 filter and try to process it with the (nominally) IRG-like transformation

new Red = Blue
new Green = Red - Blue
new Blue = Green - Blue

followed by white balance (which allegedly works because the blue channel consists entirely of IR, and the red and green channels contain the same amount of IR as the blue channel), I do not get a photo that exactly matches what one would arrive at by combining infrared, red, and green photos shot separately.

It occurred to me that one reason for this may be that the 700-800nm region contains light that slightly pollutes the blue channel and the others unequally. I happened to stumble on the following Midwest Optics dual-band visible+IR filter the other day, and it hit me that this could be a remedy:

This filter is missing the region from 650nm to 800nm, exactly the troublesome region if my guess about the problem is correct. Combined with the Tiffen #12 filter, one gets a combined spectrum that looks (very approximately) like the black line here:

(That graph is based on hand-digitizing the Midwest Optics curve and the Tiffen #12 curve and multiplying them. Some extrapolation was also done where I had no data, so it should not be taken as numerically accurate; rather, it is meant to convey the idea of what I was trying to do.)

I bought one, and it came yesterday, so I did a quick experiment with a leaf: first I assembled an IRG image by hand, and then with the DB850 + Tiffen #12 combination.

IRG "by hand," using an Omega 830DF30 for the infrared, and the red and green channels from a Hoya UV/IR cut + BG38 2mm stack:

IRG with the DB850 + Tiffen #12 stack (one shot):

The colors looked reasonable (in fact, the "by hand" shot is the weaker of the two, due to misalignments), so I went off to the reservoir and downtown to see what I could see. Here are some samples. Aside from the above transformation, I did no processing other than white balancing and exposure/contrast adjustments.

Waterworks

Gas station, showing some red-to-yellow color shifts

Street scene
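The transformation above is easy to express in code. A minimal numpy sketch (the image array and its pixel values are illustrative, not from the actual shot):

```python
# Single-shot IRG channel transformation as described in the post:
# new_R = B, new_G = R - B, new_B = G - B, assuming the blue channel
# is (nominally) pure IR and the red/green channels carry an equal
# IR contribution that subtraction removes.
import numpy as np

def irg_transform(img):
    """Apply the IRG-like channel transformation to a float RGB array."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    out = np.empty_like(img)
    out[..., 0] = b          # new red   = IR (blue channel)
    out[..., 1] = r - b      # new green = visible red minus IR
    out[..., 2] = g - b      # new blue  = visible green minus IR
    return np.clip(out, 0.0, None)

# A single illustrative pixel: R=0.7, G=0.5, B=0.3 (IR contribution)
pixel = np.array([[[0.7, 0.5, 0.3]]])
irg = irg_transform(pixel)
```

White balance would follow this step; the subtraction only works cleanly if the 700-800nm region is excluded, which is the point of adding the dual-band filter.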
  9. A classic sunflower. The SWIR is the major contribution here, because many sunflowers have been imaged on this forum before. I made a 530-image panorama to get adequate resolution, and the results look nice. It took all night to stitch the images, and I went through 5 software packages before I found one that didn't bog down under the sheer number of images. Most stitching programs are geared toward a small number of images with a large number of pixels, rather than the opposite situation, which is what I have with the TriWave!

Vis sunshine: Resolve 60mm lens, F8, iso640, 0.04" (DB850 + S8612 1.75mm)
UV sunshine: Resolve 60mm lens, F8, iso2500, 2"
UVIVF: ConvoyS2+, Sony FE 55mm lens, F8, iso1000, 30"
IRG sunshine: Resolve 60mm lens, F8, iso400, 0.005" (DB850 + Tiffen #12)
Near infrared (830-870nm) sunshine: Resolve 60mm lens, F8, iso400, 0.05" (DB850 + Hoya R72)
SWIR (1500-1600nm) halogen: Thorlabs 50mm achromatic doublet, F10-ish (not sure how to quantify the rest of the exposure info), Thorlabs 1500nm long pass filter.

The 534-image panorama was stitched with Panorama Stitcher, a Mac app available in the Mac App Store. This was the program that finally worked after trying Hugin, PTGui, Photoshop CS6, Panoweaver, and Autopano Giga. Autopano Giga's free trial actually worked, but when I went to buy it, I discovered that Kolor, the company that made it, had been bought by GoPro, which then discontinued the product! With no way to unlock the software, I had to find another program.

Due to stitching errors near the edges, I was forced to crop this more than I would have liked. The results are still pretty nice, though, and I found that the sunflower (as with other members of the aster family) has a dark center in the SWIR 1500-1600nm band, despite being fairly uniformly white in the 850nm NIR range.
  10. Bernard: This old thread seemed the most appropriate to add my post to without starting up yet another one...

Editor: It is quite OK to start new topics. Your IR Overlay with Channel Swap is interesting in its own right, so I have split it off into its own topic.

I have to say I am not an Aerochrome enthusiast: I find the subject quite interesting from a technical viewpoint and enjoy seeing the images posted on the forum, but it seems a one-trick pony to me. But then I don't get abstract or "modern" art either, so this probably says more about my philistinism quotient than anything else. Anyway, I was interested in trying something out.

The original Aerochrome/EIR, when used with a yellow (minus-blue) filter, rendered IR as red, visible red as green, and visible green as blue, with visible blue being lost altogether. So one standard way to simulate this very closely is to overlay an IR image on a visible image and use channel swapping to achieve the Aerochrome effect. This is done in the 2nd image below (the 1st is a normal visible shot). For the IR shot I used a Midwest Optical BP850, partly to go a bit deeper into the IR than the R72 would go, and partly because, having bought the BP850 in error, I needed to find a purpose for it to save face.

So far, this is all standard stuff. But what about the point that Aerochrome with a yellow filter completely ignores the visible blue region? How about changing the channel swapping so that IR goes to red, red and green each contribute 50% to the green channel, and blue goes to blue? The third shot below tries this out.

And finally (and I know this is drifting off topic), what if a similar thing is done with UV? So UV goes to blue, blue and green each contribute 50% to green, and red goes to red? The result of that is in the fourth shot below. (Baader U used for UV; all lighting by flash.)

And to round the whole thing off: a pan-spectral image with UV providing the blue channel, visible the green, and IR the red. This is the last shot below, and takes us part of the way back towards Aerochromism.

Visible
IR-->Red, R/2+G/2-->Green, Blue-->Blue
Red-->Red, B/2+G/2-->Green, UV-->Blue
IR-->Red, Visible-->Green, UV-->Blue
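The modified swap described above (IR to red, a 50/50 red+green mix to green, visible blue preserved) can be sketched in a few lines. A hypothetical numpy illustration, assuming a registered visible RGB image and a matching monochrome IR frame:

```python
# Blue-preserving IR channel mix: IR -> red, mean(vis R, vis G) -> green,
# vis B -> blue. `vis` and `ir_mono` stand in for registered images;
# the pixel values are illustrative only.
import numpy as np

def ir_overlay_mix(vis, ir_mono):
    """IR->R, (vis R + vis G)/2 -> G, vis B -> B."""
    out = np.empty_like(vis)
    out[..., 0] = ir_mono                            # IR into the red channel
    out[..., 1] = 0.5 * (vis[..., 0] + vis[..., 1])  # 50/50 red+green mix
    out[..., 2] = vis[..., 2]                        # visible blue preserved
    return out

vis = np.array([[[0.8, 0.4, 0.2]]])   # one visible RGB pixel
ir = np.array([[0.9]])                # matching IR pixel
mixed = ir_overlay_mix(vis, ir)
```

The UV variant in the post is the mirror image of this function: UV into blue, (B + G)/2 into green, red preserved.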
  11. Not the first time this has been tried, but I attempted to make some wavelength-dependent false colors by taking three photos with 780BP30, 830BP40, and 1064BP25 filters and putting the resulting images in the blue, green, and red channels respectively. The camera was the TriWave, which is monochrome and has no Bayer filter to distort the results. Lighting was a halogen light with some kind of shield over it.

The physical setup looked like this. I have the TriWave attached to an iris, followed by a 100mm lens with a NIR/SWIR AR coating from Thorlabs, and then a sliding filter holder that lets me easily swap filters without disturbing the image.

The experimental subject was this jalapeño:

The filter spectra (supplied by Omega with the filters) were:

The three unaltered images came out like this. The camera's gain falls over this range, so I increased the exposure time by roughly one stop for each image.

780BP30 (blue channel)

830BP40 (green channel)

1064BP25 (red channel)

I took one additional image with a 1500nm long pass (the end of the camera's range). This wasn't used for anything; I was just curious, and it was the last filter in the filter holder, so I took it "while I was there anyway."

I put the images in the three channels and got this:

Then I white balanced off the paper in the background using Photo Ninja and trimmed the histogram for better contrast:
  12. STICKY LIST

Sticky :: SWIR Photography: Cams, Mods, Lenses, Lights, Links (You are here.)
Sticky :: UV-Capable Lenses
Sticky :: UV/IR Books
Sticky :: UV/Vis/IR Filters
Sticky :: UV Induced Visible Fluorescence
Sticky :: UV Photography: Cams, Mods, Lights, Links
Sticky :: White Balance in UV/IR Photography
Sticky :: IR Photography: Cams, Mods, Lenses, Lights, Links

by Andy Perrin for UltravioletPhotography.com
Started: 28 June 2019. Edited: 26 November 2020.

Note from the author: To paraphrase Andrea, "This is a joint effort by the members who enjoy [shortwave] infrared photography. Thanks to everyone for their suggestions, comments, proofreading, lists, links, measurements, experiments and all round good fellowship." Please PM Andy Perrin on UltravioletPhotography.com with any corrections, additions or suggestions.

Abbreviations:
IR = infrared (taken here to mean the entire band from NIR-LWIR)
UV = ultraviolet
NIR = near infrared
SWIR = shortwave infrared
MWIR = mediumwave or midwave infrared
LWIR = longwave infrared

Quoted prices are in US dollars and are only meant to give a rough idea.

INTRODUCTION

As another well-known Guide once put it, the infrared is big. It is, in fact, so hugely mind-bogglingly big that it can't be properly treated as a single band for many purposes. It has thus been divided into a handful of sub-bands according to several different, mutually inconsistent schemes. Wikipedia (as of June 28, 2019) lists five different schemes for dividing up the infrared, several of them with overlapping nomenclature for different wavelength cutoffs, adding to the confusion.

For this guide, we will use the scheme termed the "Sensor response division scheme" on Wikipedia: Near-Infrared (NIR) from 700nm-1000nm, which is where silicon sensors cut off; Shortwave-Infrared (SWIR) from 1000nm to 3000nm; Midwave-Infrared (MWIR) from 3000-5000nm; Longwave-Infrared (LWIR) from 8000-12000nm (or 7000-14000nm); and Very-Longwave Infrared from 12000-30000nm. The interested reader can consult Wikipedia for the other schemes, while the alert reader is left to ponder what became of the gap from 5000-8000nm.

Unfortunately, this mess has real consequences for those trying to purchase shortwave infrared camera equipment, because it may not be listed as "shortwave infrared" on eBay or other sites. Equipment may be listed by the type of sensor it is compatible with (usually InGaAs), or as SWIR, shortwave, the generic "IR," or even NIR (which is sometimes considered to extend as far out as 5 microns!). The searcher is advised to try variants on all of these, or else miss out on deals.

SHORTWAVE INFRARED PHOTOGRAPHY

SWIR CAMERAS

Using the classification above, SWIR starts at 1000nm. Silicon still has some sensitivity in the 1000-1100nm region, and for this part of the band one can use an ordinary converted silicon camera. However, while some interesting SWIR effects start here, such as water becoming light-absorbent, most of the differences from NIR don't become significant until past 1100nm.

Silicon sensors can be made to exhibit some sensitivity to SWIR from 1460-1600nm by coating them with an up-converting phosphor material. These coated sensors use the anti-Stokes effect, in which two SWIR photons hit the material and a single NIR photon is emitted, which is then captured by the silicon sensor. Edmund Optics sells these. They are intended for calibrating telecom lasers and have not been tested for imaging purposes. Typical new cost is in the $2000-3000 range.
The author cautiously warns against purchasing one, unless you find a really outstanding deal, for reasons described below.

A second (bad) option is to use an up-converting phosphor screen in conjunction with a converted silicon camera. The screens are available from Edmund, and under other brand names for much less. The author purchased one of these on eBay for ~$200 and ran a series of experiments with it. The phosphor material turned out to be extremely weak, rendering the apparatus as a whole quite insensitive: it was necessary to use a very intense light source, nearly to the point of setting fire to the scene. In addition, the screen itself was granular and did not provide good resolution or contrast. Because the same phosphor materials are involved in the coated sensors of the last paragraph, the author doubts that the coated sensors will work well for general-purpose photography. However, someone would need to acquire one to test that hypothesis, which is a lot of money for something expected not to work well.

A relatively low-cost option providing SWIR coverage up to 1550nm is the Find-R-Scope family of vidicon tubes. These are analog devices, but could potentially be coupled to an ordinary camera. They are relatively easy to find on eBay for under $1000. Buyer beware: not all of them go out to 1550nm. Read listings carefully, and if a listing does not say the device is 1550nm-capable, assume it is not. Vidicon tubes are rated for only a certain number of hours before they wear out, so the age of the device may also be a consideration. The author has not tested any of these, but this may be the most cost-effective entry into SWIR.

The next step up in quality from the vidicon tube imagers is the most traditional means of SWIR imaging: Indium Gallium Arsenide (InGaAs) cameras. These are usually digital cameras with resolutions of 320x240 or 640x480 (higher now, but at a cost) and excellent sensitivity from 900nm-1700nm.
The cameras are generally machine vision cameras for industrial, medical, art conservation, or espionage use, so they need to be attached to a computer to take and store photos. Conceivably some kind of portable apparatus could be rigged up using a tablet as the computer. eBay prices are generally $5000-$10000, but I have seen as low as $3000. Typical brand names are Goodrich Sensors Unlimited, FLIR Tau SWIR, Allied Vision, and Princeton Instruments (far from an exhaustive list). Good questions to ask a seller include whether the camera has a lens, whether it has the driver software needed to run, and whether it comes with a power supply.

NOTE: Beware of line scan cameras, which capture only a single line of pixels rather than a full frame; they rely on object or camera motion to sweep out a picture. These are frequently available on eBay for much less than normal cameras, making them look like a tempting bargain. Avoid the temptation.

A rarely seen alternative to InGaAs is the germanium-on-CMOS imagers made by the defunct NoblePeak Technologies. These cameras, collectively called the TriWave, ranged from 320x240 up to some higher resolution limit unknown to this author. They are sensitive from 300-1600nm. A high-resolution example was demonstrated by Nick Spiker on this forum here. The author owns an analog-output 640x480 version and paid $3000 for it in "new-unused-but-opened" condition (still in the original plastic bags) with all accessories except the lens. The TriWave cameras were in active development at the time of the company's demise, so it is likely that individual TriWaves have slightly different capabilities, depending on when in the development cycle they were sold. Later cameras probably had digital output via a USB port, as shown in TriWave datasheets.
These cameras are cooled by a built-in thermoelectric cooling system and take about 60 seconds to "boot up" while the chip cools to -80C. The camera will not produce an image until it reaches about -70C, with optimal results at -80C or lower. As with InGaAs, when buying used ask whether the camera has its original lens, whether it has the driver software needed to run, and whether it comes with a power supply.

(Updated Dec 15, 2019) A new germanium camera, the BeyonSense 1, is available as of late 2019: 128x128 pixels in a portable Bluetooth camera format that works with a phone app for Android or iOS. The iOS app was confirmed to be in the iOS App Store as of Dec 14, 2019, and the camera itself was being sold on eBay for $1138. While the sensor is currently low resolution, it is hoped that future versions will improve on this and make SWIR more accessible.

Another new option, from the company SWIR Vision Systems, is the Acuros line of quantum dot cameras, which are quite high resolution, ranging from 640x512 up to 1920x1080 pixels as of 6/28/2019. The company claims the price is lower than InGaAs, but the author does not know the actual prices, which are not posted. Because the technology is new, there are no used cameras available yet. Based on the published quantum efficiency chart, the sensor seems particularly sensitive at the blue end of the visible spectrum (with unknown but likely high UV sensitivity). The sensitivity goes to zero by 1700nm. These are digital machine vision cameras, so like the InGaAs and Ge-CMOS cameras they need a computer to operate.

SWIR FILTERS

For all of the above cameras, appropriate filtration is necessary to block non-SWIR wavelengths and to narrow the piece of the SWIR spectrum the photographer wishes to look at.
It is worth noting that some material properties change significantly with wavelength in SWIR, so the 1000-1100nm band is different from 1200-1300nm, which is different from 1500-1600nm, in terms of what one will see. In particular, sugar and water both vary rapidly across SWIR, so objects of biological origin like flowers or people are likely to show interesting effects.

Different levels of blocking will be needed depending on the sensor technology. InGaAs, in particular, is not sensitive below 900nm, so it does not require any visible or UV blocking. The TriWave Ge-CMOS camera, on the other hand, is sensitive from 300-1600nm, which means 300-1100nm (or higher) needs to be blocked well to see any SWIR. The photographer will need to evaluate their blocking needs based on the camera and light source in use.

The two best-priced options the author has identified so far are Thorlabs and Omega Optical (in particular, Omega's eBay site for out-of-spec filters is full of good deals). The author owns two SWIR Thorlabs filters, an FEL1500 (1500nm long pass) and an FELH1200 (1200nm premium long pass), and has noticed no out-of-band signals with the light sources available for testing. Visible contamination is a particular worry with the author's TriWave camera, but using the FEL1500, objects that reflect visible and NIR light extremely well but 1500nm SWIR poorly (e.g. human skin) show as black, indicating no contamination. For the 1400-1600nm band, the author feels this "skin test" is an easy way to check for poor filter blocking, much as dandelions are used to check for NIR leaks in UV. Note that skin is not dark in SWIR until 1400nm or so, so this is not a useful test in the 1000-1400nm range. Unfortunately, the author does not know of any SWIR absorption glass filters available for sale except the 1000nm longpass kind, which leak a bit of NIR.
SWIR LENSES

The lens situation in SWIR is similar to UV, and the issues are the same. Most ordinary lenses will pass at least a bit of SWIR, sometimes as much as 50%. Multicoated lenses and lenses with many elements are bad news, and chromatic aberration and focus shift can cause problems. Because of the similarity in the underlying issues, UV-capable lenses probably make good candidates for SWIR testing also. One catch is that most SWIR cameras use C-mount lenses, so it will be necessary to use an adapter if one can be found. Another issue to keep in mind is that focal lengths are given in absolute numbers, but most SWIR cameras have tiny "cropped" sensors, so a 50mm lens on a TriWave with a 1/2" sensor, which has a crop factor of 5.4 relative to a 35mm sensor, will behave with an effective focal length of 50 x 5.4 = 270mm!

An alternative to accidental SWIR lenses is to buy a lens intended for SWIR imaging. These use special glass types that pass SWIR better than ordinary lens glass, and they have broadband anti-reflective (BBAR) coatings designed for the SWIR region. The author has tried two of these. The first is a simple 50mm achromatic doublet from Thorlabs, which was purchased on eBay for less than the list price. (The photographer is advised to always check eBay for Thorlabs equipment first, by entering the desired model number directly into the search box, because many bargains are available. Thorlabs also sells SM1 to C-mount adapters.) The second is a 12.5mm/F1.4 Kowa SWIR lens, also purchased on eBay, which performed significantly better than the doublet in sharpness and contrast, even on a 640x480 sensor. While the Thorlabs lens was much cheaper, the Kowa's performance was so much better that it is the recommended lens of the two. On the TriWave, this 12.5mm lens has an effective focal length of 67.5mm, so it performs as a telephoto, not a wide-angle as one might naively expect.

SWIR LIGHTING

First, there is natural lighting.
The sun produces abundant SWIR light. A very interesting effect is that the clear night sky also produces SWIR via an effect called airglow, which is emission from chemical reactions in the atmosphere. The author has not been able to detect this effect yet, possibly due to light pollution.

For artificial lighting, any incandescent source should work. Halogen lights are well-known to produce abundant SWIR. Unfortunately for SWIR photographers, while halogen room lights and desk lamps are still abundant at the time of writing (mid-2019), it is already clear that LED lights, which are more energy efficient and less likely to start fires, are dominating the market. In several more years, halogen lamps may become specialist items with the expected price increases. SWIR LEDs and laser diodes exist but are expensive; Thorlabs sells the parts. The author has no experience with these, nor is he aware of any torches for sale.

(Updated Dec 19, 2019) Another option is the Arcadia Reptile 'Deep Heat Projector', which has a steeply rising output in the range 1000-2000nm. If uneven spectral distribution is not an issue, this might be a possibility.

SWIR LINKS

CAMERAS

Phosphor/Silicon:
- Phosphor-coated CMOS camera from Edmund (not recommended, quite insensitive)
- Another phosphor-coated CMOS camera from Edmund (not recommended, quite insensitive)

Vidicon Tube:
- Find-R-Scope from Edmund (less pricey than InGaAs or Ge-CMOS but has limited life). These can generally be found much more cheaply on eBay. Be careful: not all go to 1550nm.

InGaAs: The traditional SWIR imager; excellent sensitivity, pricey, requires lens, power supply and software driver.
- Goodrich Sensors Unlimited
- FLIR Tau SWIR
- Allied Vision
- Princeton Instruments

NOTE: Beware of line scan cameras, which capture only a single line of pixels rather than a full image; object or camera motion is required to digitize a picture.
These are frequently available on eBay much more cheaply than normal cameras, making them look like a tempting bargain. Avoid the temptation.

Ge-CMOS:
- TriWave (excellent sensitivity, expensive, might be difficult to find)
- BeyonSense 1 (unknown company, prototype product, although versions were seen on eBay; unknown sensitivity)

Quantum Dot: The most recent SWIR imager; high resolution, not much known yet about these.
- SWIR Vision Systems

SWIR FILTERS
- Thorlabs filters
- Omega Optical Filters
- Omega eBay site for out-of-spec or batch over-run filters

SWIR LENSES
- Check eBay first for used copies, always.
- Remember to account for the crop factor of your sensor when choosing focal length.
- Keep in mind that it can be hard to find step rings for certain filter diameters.

Designed for SWIR:
- Thorlabs Achromatic Doublets
- Kowa SWIR lenses

Accidental SWIR Lenses:
- Wollensak 1 inch f/1.5 Cine Velostigmat C mount (suggested by dabateman; tested by Andy Perrin)
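The crop-factor arithmetic from the lens section above can be sketched in a few lines. This is a minimal illustration, not from the original post; the 1/2" sensor dimensions are nominal values, so check your camera's datasheet for the real numbers.

```python
# Sketch of the crop-factor arithmetic used in the lens section above.
import math

FULL_FRAME_DIAG = math.hypot(36.0, 24.0)  # 35mm sensor diagonal, ~43.3mm

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Crop factor relative to a 35mm full-frame sensor (diagonal ratio)."""
    return FULL_FRAME_DIAG / math.hypot(sensor_w_mm, sensor_h_mm)

def effective_focal_length(lens_mm, factor):
    """35mm-equivalent focal length for a given crop factor."""
    return lens_mm * factor

# A nominal 1/2" sensor is about 6.4 x 4.8 mm (8 mm diagonal),
# which gives a crop factor close to the 5.4x quoted above.
cf = crop_factor(6.4, 4.8)
print(round(cf, 1))                                # 5.4
print(round(effective_focal_length(50, 5.4)))      # 270  (the 50mm example)
print(round(effective_focal_length(12.5, 5.4), 1)) # 67.5 (the Kowa example)
```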
13. Since acquiring my full spectrum G7, I have been playing around with ways to get shorter exposure times with UV photos. Naturally the best way is to either have a better lens or better illumination, but I was thinking about another way that might work in some situations. I'll post some of the results here, as you might be interested in seeing them.

The idea is to use a modern (mirrorless) camera's high burst rate. Similarly to how modern smartphones achieve very nice results by combining many fast (and noisy) exposures, I figured something similar might work for UV, as our big sensors are dealing with the same kind of problem in the UV. Naturally smartphones have a lot of smart HDR exposure and clever raw image alignment behind them, but some basic tests are described below. Of course, as cameras get better, these burst rates will continue to improve, so this should get more useful as time goes on*. This could allow for handheld UV photography at reasonable sharpness, as it should eliminate camera shake while allowing longer total exposures (think 0.5-1 sec).

My Panasonic G7 has 2 burst modes:
1. A traditional burst mode, shooting RAW+JPG at about 10Hz
2. 4K video mode, where the camera shoots a series of JPGs at 30Hz**, which is essentially just 4K video, but in any aspect ratio :)

For the first test I decided to do a basic handheld test using option (2). The 4K images are 8MP, which is just about acceptable if you get the framing right on site. Another note is that the camera unfortunately saves the images as MP4, so I fear some more detail is lost in that step compared to a series of JPGs, but it is what it is. Perhaps they use a higher bitrate format for this mode, as they do allow JPG extraction in camera.

So, the preliminary results. Images taken on my G7 with a Kolari Vision UV pass filter at ISO 6400, 1/80s, with a Nikon 50mm f/1.8D (@f/2.8).

1. First, the out-of-camera JPG. Lots of noise, as expected from that ISO on MFT.
Crops are about 900x900 px.

2. The processed RAW file. Processed using DxO PhotoLab with the PRIME denoising option (which usually works really well for low colour saturation images).

3. The averaged 4K video frames. Frames extracted using Fiji, then aligned and averaged in Affinity Photo (paid software, but Hugin or similar will do the trick too). 30 frames averaged (i.e. 1 sec of "video" under ideal conditions; likely a bit longer).

Some items that should be mentioned: the images were taken handheld, and as such the framing is not exactly the same between shots. Still, it's reasonably close, I'd say.

So when might this be useful? I think this might be handy in situations where you are shooting handheld and as such are limited to e.g. 1/50s, but want a longer exposure to use lower ISO or to stop down the lens for improved sharpness. Even if your scene is not static (e.g. a person posing), you can probably get away with 8 exposures of 1/50s (i.e. the scene needs to be static enough for ~1/4s), which, assuming that averaging noisy images works comparably to an equivalent reduction in ISO (to be tested at a later date), is a full 3 stops compared to the single image case.

Limitations: the camera burst and frame rates are a direct limit on how much information you can get per exposure time. For example, if you need 1/80s to eliminate motion blur, the burst is still limited to 30Hz, and as such 62.5% of the time between frames is not used to capture the scene, making 1s of wall-clock time equivalent to (at most) 0.375s of exposure. If your camera can do 60Hz, this already improves to only 25% wasted exposure time.

Anyway, I'm happy enough with this as a proof of concept; further tests to be explored in the future. Todo: test option 1, to see what is possible when retaining RAW images and full sensor resolution.

* As an example, the Panasonic G9, which is the newer and more pro-oriented MFT camera from Panasonic, can do 6K burst at 30Hz and 4K burst at 60Hz, which is already much nicer.
** The camera has negligible readout and clearing time between frames.
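The averaging idea in the post above can be checked with a toy simulation: stacking N frames of independent random noise should reduce the noise by roughly sqrt(N). This is a stdlib-only sketch with made-up numbers (flat gray "frames" of simulated Gaussian sensor noise); real burst frames would also need alignment first.

```python
# Toy demonstration: averaging 30 noisy frames (e.g. 1 second of 30Hz
# 4K burst) reduces random noise by roughly sqrt(30) ~ 5.5x.
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # the "scene" (a flat gray patch)
NOISE_SIGMA = 10.0   # per-frame sensor noise
N_FRAMES = 30
N_PIXELS = 1000

frames = [
    [TRUE_VALUE + random.gauss(0, NOISE_SIGMA) for _ in range(N_PIXELS)]
    for _ in range(N_FRAMES)
]

# Average the stack pixel-by-pixel.
averaged = [statistics.mean(f[i] for f in frames) for i in range(N_PIXELS)]

noise_single = statistics.stdev(frames[0])
noise_stacked = statistics.stdev(averaged)
print(f"single frame noise:     {noise_single:.2f}")
print(f"30-frame average noise: {noise_stacked:.2f}")  # ~ sigma / sqrt(30)
```

This is consistent with the "3 stops from 8 frames" estimate in the post, since 8 frames halve the noise amplitude while collecting 8x the light.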
  14. Chin Peng

    Hello, from Singapore

    Hello to the Administrator: thank you for signing me up to the forum promptly, despite the fact that we are on opposite sides of the earth. I have just started my journey into UV, IR and multispectral photography. I converted a Canon 60D to wide spectrum, got myself a couple of UV bandpass filters, and took some photos. Now I am trying to figure out whether what I have captured is correctly done. I hope to be able to use the technique in my work in heritage conservation, particularly related to buildings and artifacts. Looking forward to sharing some photos, and any advice related to the matter is welcome. Does anyone know of any good training program for this technique? Regards, Chin Peng
15. What is the best program for processing Nikon raw files for UV shooting? I use Capture One now, but I heard that PhotoNinja is better. I have installed PhotoNinja, but I don't know how to change the white balance in this program. Where in the PhotoNinja interface can I change the white balance? Could you help me?
16. The Starlight Xpress Lodestar X2 color is an unusual camera. It's designed to be a guide camera for telescopes. It's very small and is just a sensor, a guide port and a USB port, so you need a computer to run it, and the computer supplies the power. What makes it interesting, though, is the sensor in the color version: a Sony ICX829AKA sensor with a CYGM color filter grid, NOT the typical RGGB pattern, and not a typical array layout either, with an odd pattern. The pixel size is huge at 8.4 um, with only 752 x 580 pixels on the chip, which is 1/2" size, so the crop factor is 5.4x. It's also a CCD, not a CMOS sensor.

No modification is needed, and this camera is a CS mount with a back sensor distance of 12.5mm, so any lens will mount to it. Using the KSS 60mm F3.5 quartz C-mount lens, I was able to test its UV sensitivity using the StarLight Live software. There is no gain or ISO adjustment for these CCD astro sensors, so you just get shutter speed, plus the aperture on the lens. All were shot with the KSS set to F4 (which is really F8). All images are saved as PNG files in the software; I then saved them as JPEG 75% in IrfanView to upload here.

Flower in visible:

Flower using 313BP25 with 330WB80 improved filter, two UVB lights and a shutter speed of 5 seconds:

Flower using 303BP10 with 330WB80 improved filter, a single 8W 302nm light and a shutter speed of 60 seconds:

The software to run it is fun. It's designed for astrophotography, but allows you to collect dark frames and stack them with images. You can live stack images in sum, mean or median mode to get better images. It also seems to be the best at interpreting the CYGM color array, as the developer worked with Starlight Xpress to get the true pixel pattern. In it you can adjust the color any way you want. It changes the way I look at UV false color, and you can push it almost anywhere you want. It's fun.

Flower in visible with Wollensak 25mm lens at F4:
17. So, I did some very light editing on this infrared test photo. It is obviously not an award-winning composition and framing, but I am curious about everyone's overall impression of the colors and white balance. Thanks!
  18. Last night I discovered the Sony Imaging Edge app (for Mac and Windows), which allows you to see your camera's Live View on your computer screen. This makes it vastly easier to focus, since my computer screen is 27 inches, while my camera screen is maybe 2.5 inches. (I don't know how they advertise computer screens overseas, but here they are in inches. Anyway, you take my meaning.) The app is confusingly named "Remote.app" on the Mac, not "Imaging Edge." Here is the download link if you are a Sony user: https://support.d-im...imagingedge/en/ Please note that there is a bug that prevents the app from working if DropBox is on, so you have to turn that off before hooking up the camera. I would imagine that other companies provide something similar.
  19. This is not strictly a UV/IR-related post. It is about a free suite of Photoshop filters/actions that (collectively) allow repeating patterns to be removed from images. Things like that autofocus bug in the Nikon Z6, for example. But it can also be used for removing any kind of repeating pattern (e.g. screen doors, bricks, half-tone patterns...). The suite is available here: http://ft.rognemedia.no In order to use it, the YouTube video on that site is REQUIRED VIEWING. (You will not be able to figure this one out by trial and error...trust me.) But the results look like magic.
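The internals of that filter suite aren't documented here, but the general idea behind repeating-pattern removal can be sketched in the frequency domain: a periodic pattern concentrates its energy in a few frequency bins, so zeroing those bins (a "notch" filter) removes the pattern while leaving the rest of the signal nearly untouched. This is a 1-D stdlib-only toy, not the suite's actual method; real tools work in 2-D on images.

```python
# 1-D sketch of frequency-domain pattern removal via a notch filter.
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for tiny toy signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning real values."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

N = 16
base = [50.0] * N                                          # the "real" content
pattern = [10 * math.cos(2 * math.pi * 4 * t / N) for t in range(N)]
signal = [b + p for b, p in zip(base, pattern)]            # content + banding

X = dft(signal)
X[4] = 0j        # notch out the pattern frequency...
X[N - 4] = 0j    # ...and its conjugate bin

cleaned = idft(X)
residual = max(abs(c - b) for c, b in zip(cleaned, base))
print(f"max residual after notch: {residual:.6f}")  # essentially zero
```

The hard parts in practice, which the linked suite handles, are finding the pattern's frequencies automatically and avoiding damage to image content that shares those frequencies.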
  20. I took a photo of a coneflower several days ago, and people were curious about what would happen if I repeated the photos with the dried flower, because water strongly absorbs SWIR light, so the pattern may simply reflect the water distribution in the flower. The flower was picked on July 12, at 12am and imaged immediately after picking. Because this was not a planned experiment, I did not write down all the exact photographic settings that I used in this image set, so the followup may have slightly different contrast. I don't think it will matter for the purpose, since we are not trying to extract quantitative information. The flower was reimaged just now (July 15, 3am), 75 hours later. In order to make a valid comparison, I think I need to show how the images look straight out of camera, with no processing, in both sets. Then I will show the processed results. This is SOC, except for resizing and labeling. After subtracting off the pattern noise of the sensor, here are the results: And with sharpening and local contrast adjustment: There is clearly no significant change in "SWIR signature" of this flower after it is thoroughly dried.
21. The two purposes of this mini-project were, first, to see if the usual trend of patinas on old books becoming more transparent as one goes deeper into the near infrared continues into the shortwave region, and second, to see how far it is possible to push the TriWave camera's output quality, and whether one can obtain high quality photos with it at all, given the resolution limitation of analog NTSC video. The book in question is "Adventures of a French Soldier: Exemplifying the Evil, Crime and Sufferings of War, with Reflections" (1831).

Summary

First, the main results. The SWIR photo (1500-1600nm) does seem more legible than the NIR photos. I did not keep as close an eye on exposure times as I wish I had, so there will be some variation due to unequal exposure, but I did my best to correct for that in post processing. It is very hard to avoid in any case, given that the images were taken with different cameras, and even different types of camera. In addition, the SWIR image is a panorama. By a procedure described below, it is possible to greatly improve the output of the SWIR camera through a "white frame subtraction." This was done on the SWIR images prior to building the panorama. The final output quality was only slightly inferior to the Sony A7S. The large versions now follow, along with shooting details.

UV (Sony A7S, S8612 1.75mm + UG11 2mm, with a Convoy S2+ torch, F/16 ISO3200 10")

Visible (Sony A7S, BG38 2mm, halogen bulb, F/16 ISO320 0.25")

NIR 720nm long pass (Sony A7S, Hoya R72, halogen bulb, F/16 ISO250 0.25")

NIR 1000nm long pass (Sony A7S, unknown 1000nm eBay filter, halogen bulb, F/16 ISO2500 0.25")

SWIR 1500nm long pass (TriWave, Thorlabs FEL1500, halogen bulb, F/4, analog gain=1, 15fps, 405 lines of integration per frame, no gamma curve, digital gain=1, digital offset=0, with dark frame subtraction on). This is a panorama of 46 images stitched in Photoshop, then sharpened in Smart Deblur.
----------------------

Process for Construction of the SWIR Panorama

Next, I will discuss the process flow for the construction of the SWIR panorama. To begin with, a typical image from the camera looked like this (unprocessed in any way, original size):

Looking carefully, one can see there are a lot of artifacts, some from the sensor and some from a dichroic reflection (which I plan to take care of eventually by finding a different filter attachment method, and maybe a lens hood). My next step was to remove the dichroic reflection and the sensor glitches by taking a "white frame" and combining it with each image from the panorama in MATLAB. The white frame looked like this:

By fiddling in Photoshop, I discovered that inverting the white frame, doing a 50% opacity "Darker Color" blend in Layers, flattening the image, and adjusting the contrast would eliminate the ring. I then replicated this procedure in a MATLAB script and ran it on every image in a batch. (I could probably have made a PS action to do this, but I chose not to, because I would rather keep my workflow in MATLAB as much as possible.) After this procedure, the image looks like this:

At this point all the images were combined into a panorama in Photoshop, which was then sharpened in Smart Deblur.

---

Conclusions

My conclusion is that the output image quality is acceptable, especially when tiled into a panorama with the white frame subtraction method. Here is a second, more dramatic example of the difference the white frame method makes, but on a different photographic subject:

This made such a difference to the final results that it will be used in all further work with this camera.
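The white-frame recipe above (invert the white frame, blend it over the image in "Darker Color" mode at 50% opacity, then stretch contrast) can be sketched outside Photoshop. This is a stdlib-only toy with made-up pixel values, not the author's MATLAB script; 50% opacity compositing is approximated here as a 50/50 mix of the base pixel and the blended pixel. When the inverted white frame is darker than the image everywhere, the additive artifact recorded in the white frame cancels exactly.

```python
# Sketch of white-frame correction: invert, "Darker Color" blend at 50%
# opacity, then a linear contrast stretch. Images are nested lists of
# 8-bit gray values; a real script would run on full-size frames.

def white_frame_correct(image, white_frame):
    inverted = [[255 - w for w in row] for row in white_frame]
    blended = [
        [0.5 * p + 0.5 * min(p, inv)      # 50% opacity Darker Color blend
         for p, inv in zip(img_row, inv_row)]
        for img_row, inv_row in zip(image, inverted)
    ]
    # Simple linear contrast stretch back to 0-255.
    flat = [p for row in blended for p in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 1.0
    return [[round((p - lo) * scale) for p in row] for row in blended]

# Toy example: the scene is a gradient [150, 180, 210], and an artifact
# ring adds 40 to the middle pixel of both the image and the white frame.
image = [[150, 220, 180 + 30]]        # scene + artifact in middle
white = [[200, 240, 200]]             # flat illumination + same artifact
print(white_frame_correct(image, white))  # [[0, 128, 255]] -- artifact gone
```

After correction the output follows the scene gradient, with the middle-pixel artifact cancelled, which matches the ring removal the post describes.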
22. I came across this old site which looked at 28 cameras using a spectrometer. They claim you can use their MATLAB script to calculate the spectral response of a camera using a simple color chart. I am not sure if this would work, as I don't have MATLAB. However, if it does work between 400nm and 720nm, which is where they set their limits, I wonder if this could be expanded for a full spectrum converted camera. Here is the site link describing their research: http://www.gujinwei.org/research/camspec/ Here is the site link to their MATLAB script: http://www.gujinwei.org/research/camspec/db.html I guess the MATLAB experts here can better address whether this can be used for broader spectrum determination. Since the color checker seems to have a near standard UV signature, it may be possible to drop the lower threshold at least into the UV range.
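The color-chart approach linked above boils down to a linear inversion: each chart patch gives one equation, (patch reflectance spectrum) x (camera sensitivity) = (measured camera response), and with more patches than wavelength samples the sensitivity can be estimated by least squares. This stdlib-only toy uses 2 wavelength bands, 3 patches, and made-up numbers; it is not the camspec script, which solves the same kind of system with many bands plus smoothing constraints.

```python
# Toy spectral-sensitivity recovery by least squares (normal equations).

def solve_least_squares_2(R, c):
    """Solve min ||R s - c|| for a 2-column matrix R via normal equations."""
    # Build R^T R (2x2) and R^T c (2-vector), then invert the 2x2 by hand.
    a = sum(r[0] * r[0] for r in R)
    b = sum(r[0] * r[1] for r in R)
    d = sum(r[1] * r[1] for r in R)
    p = sum(r[0] * ci for r, ci in zip(R, c))
    q = sum(r[1] * ci for r, ci in zip(R, c))
    det = a * d - b * b
    return [(d * p - b * q) / det, (a * q - b * p) / det]

# Rows = patches, columns = reflectance in two wavelength bands, and the
# camera's response to each patch in one color channel (made-up numbers).
R = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
c = [0.3, 0.5, 0.8]

sensitivity = solve_least_squares_2(R, c)
print([round(s, 6) for s in sensitivity])  # [0.3, 0.5]
```

Extending the method below 400nm would "just" mean knowing the chart patches' UV reflectances, which is the point made in the post about the ColorChecker's near-standard UV signature.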
23. During last June's open-door helicopter flight (written up here), I had the opportunity to fly over a field that I'd walked across the previous week. From the air, I took aerochrome-style red/green/infrared shots using a Tiffen #12 filter and the NEX-7 camera, as documented at the link. Quoting the relevant parts from that writeup, the processing steps were: Following that, I used Independent Component Analysis (written up here, using faded ads on brick walls as an example) to bring out the hidden patterns in the fields. I visualize the three ICA components by putting them in the channels of an Lab colorspace image and adjusting the contrast to align the peaks of the histograms of the a and b channels. The history of the fields is itself fascinating; they were once the Great Cedar Swamp, until Cumberland Farms filled in the land in the 1970s. You can read the whole sad story at this link, but I will excerpt some pieces so you can interpret the photos below appropriately.

---

#1a: #1b: #2a: #2b: #3a: #3b:

In the following two photos, the original flow of the river through the swamp shows up dramatically.

#4a: #4b: #5a: #5b:

In this pic and the ICA below it, there is a rectangular patch that must have been cleared at some point by one of the companies involved.

#6a: #6b: #7a: #7b:

---------

Finally, I'll end with some visible light photos of what it looks like on the ground. I walked along the white mud path (which was littered with living snails!) that runs through the center of the former swamp. It was beautiful and filled with birds and wildflowers. The Audubon Society is trying to raise money to buy the land and save it from developers. This is the path shown in photos 4(a, b) and 6(a, b):
24. (Continued from the monochrome image thread.) The question, proposed by Cadmium, is whether out-of-band light can be subtracted off somehow. Having played with this for the EIR-type images, I think the answer is yes, but it has to be done very carefully on the un-white-balanced TIFF generated straight from RAW; otherwise the JPG algorithm potentially messes with it. There is also the issue of how many bits per channel: 256 levels/channel is not enough when you subtract, because subtraction amplifies noise, and small values can get rounded to zero.

For example, consider these two monochrome one-dimensional "images" meant to represent gray with a few spots of noise:

image1 = [254 254 256 254]
image2 = [253 252 253 252]

image1 has an average value of 254.5 ± 1.5 of noise or so, so the noise is 1.5/254.5 = 0.6% of the mean.
image2 has an average value of 252.5 ± 0.5 of noise or so, so the noise is 0.5/252.5 = 0.2% of the mean.

So there is a good signal to noise ratio in the original images. If we subtract them,

image1 - image2 = [1 2 3 2]

and it has an average value of 2 ± 1, so the noise in the subtracted image is 50% of the mean! When you have more levels of gray (and the image is properly exposed), you are less likely to end up in this situation. For a 16 bit image, the difference between two gray levels is 1/65536 instead of 1/256, so you have more wiggle room. The out-of-band image is going to have a lot of very small values which will get rounded to zero in an 8 bit situation (and likely also by the JPG algorithm), and that will mess things up also.

----

Next, software. I have heard good things about ImageJ but have never used it before. The main issue with just using Photoshop is that we don't know what it's doing behind the scenes. For instance, it has both a "Subtract" and a "Difference" option for layers, and I've never been quite sure what the, er, difference is between them.
If you code in MATLAB or another computer language, you can do the subtraction directly, and then you don't have to worry so much.

----

Finally, there is the issue of whether the RAW converter is doing anything to the image. Potentially the RAW converter might apply a curve to the image values, but I haven't seen anything like that in PhotoNinja's output, if white balance is turned off. I think the best approach would be doing the subtraction on the RAW subpixel values themselves, before demosaicing, but I'm not sure how to do that yet.
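The noise-amplification arithmetic above can be checked directly with the same toy "images" (here using the population standard deviation as the noise measure, rather than the eyeballed ± spread in the post, so the exact percentages differ slightly while the conclusion is the same):

```python
# The 8-bit subtraction example above, run numerically: the relative noise
# of the difference image explodes even though both inputs look clean.
from statistics import mean, pstdev

image1 = [254, 254, 256, 254]
image2 = [253, 252, 253, 252]

def relative_noise(img):
    return pstdev(img) / mean(img)

diff = [a - b for a, b in zip(image1, image2)]
print(diff)                                    # [1, 2, 3, 2]
print(f"image1: {relative_noise(image1):.1%}")
print(f"image2: {relative_noise(image2):.1%}")
print(f"diff:   {relative_noise(diff):.1%}")   # tens of percent
```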