UltravioletPhotography

Showing results for tags 'Processing'.

  1. I mostly process my images in Darktable, which has many input color profiles to pick from. I am asking mostly because I noticed a peculiar problem with the standard color matrix and I don't really get why it happens. Basically, the standard color matrix sometimes seems to make areas of certain channels completely black to achieve saturation. Normal photo with standard color matrix: Blue channel isolated with the color calibration tab: This is very annoying to me, since it is data loss/distortion. I have lately been splitting images into channels, subtracting channels from one another, etc. I figure that there is no way the blue photosites on the sensor actually captured an image this dark and contrasty. If I switch to Lab or linear Rec709/Rec2020, I don't encounter this problem anymore, but then for some reason the colors shift. Yellow becomes orange, blue becomes cyan, etc. Again, isolated blue channel: My theory here is that Lab actually shows you how the sensor itself sees the world, and image processing programs are just designed to automatically shift the colors back to where they should be and clip certain channels (such as with the dandelion) to achieve saturation akin to what you see in real life. But I am not sure. What I know is that I would like to see my data represented as objectively as possible. Here's another example. I got myself a diffraction grating from AliExpress yesterday; it has 600 lines/mm and actually works very well, even though I've been told here it's not enough. I attached my Rowi GO-2 orange filter to the lens, color balanced on PTFE and then tilted the camera away from the light source so that the pattern would show. I used a halogen bulb. Standard color matrix full color and blue channel isolated: If you look carefully, you notice that towards the shortwave end there is a completely black spot. It makes zero sense for it to be there optically. Lab: Lab has no such thing, and the blue channel also seems to be working as some sort of bizarre triple bandpass (any explanation, anyone?), which might be why my single-shot Aerochrome attempts have been failing. Looking at this, I am very annoyed that I don't have RawDigger and can't really afford it. I am even more annoyed at the fact that something as simple as OBJECTIVELY splitting a raw file into the four channels (red, blue, green #1 and #2) without any color profiles or demosaicing is not available in any free software. It would be priceless to be able to do that so that I could analyse what's going on with my images. As things stand, I am not even sure whether the triple-bandpass effect is just some artifact of the software combining the channels together to achieve some visual standard. Makes me wonder how well cameras really see. Here's the RAW file if anyone wanted to investigate it themselves. DSC03329_halogen_rowiGO2.ARW
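For reference, the four-photosite split described above can be attempted with free tools. The sketch below uses the rawpy (LibRaw wrapper) and tifffile Python packages, which are my own suggestion and not mentioned in the post, and it assumes an RGGB Bayer layout; check raw.raw_pattern for the actual arrangement in the linked ARW file.

```python
# Minimal sketch: split a Bayer raw into its four photosite planes (R, G1, G2, B)
# with no demosaicing and no color profile applied. rawpy and tifffile are assumed installed.
import rawpy
import numpy as np
import tifffile

with rawpy.imread("DSC03329_halogen_rowiGO2.ARW") as raw:
    bayer = raw.raw_image_visible.astype(np.float32)
    black = float(np.mean(raw.black_level_per_channel))
    bayer = np.clip(bayer - black, 0, None)          # subtract the black level only

    # Assuming an RGGB 2x2 tile: R G / G B (verify with raw.raw_pattern)
    planes = {"R":  bayer[0::2, 0::2],
              "G1": bayer[0::2, 1::2],
              "G2": bayer[1::2, 0::2],
              "B":  bayer[1::2, 1::2]}

    for name, plane in planes.items():
        out = (plane / plane.max() * 65535).astype(np.uint16)
        tifffile.imwrite(f"channel_{name}.tif", out)  # one grayscale TIFF per channel
```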
  2. I have been taking a lot of photos lately. This time I decided to investigate a pair of cucumbers. I illuminated them with a halogen spotlight, which emits enough light for IR, visible and UV. The IR tri-color uses my GRB3 method; UV is taken with a ZWB2+QB39 stack (which surprisingly does not leak significantly, even with halogen). I will also include the individual pictures in case anyone else wants to take a shot at processing them (please do post). I'd especially appreciate it if someone managed to stack all 7 channels into one continuous composite; I could only stack them by binning the visible and the IR parts of the spectrum together. The pictures are in full HD, so if you still have a 1080p display, you might want to enlarge them. Visible; IR tri-color 850nm+720nm+red; Aerochrome simulation; GBUV; full spectrum; individual color channels: 950nm longpass, ~850nm band, ~720nm band, red band, green band, blue band, UV band (400-350nm). I also decided to stack the images in Photoshop using the range stack mode, and I got interesting results. All of the bands stacked: All of the bands except for UV stacked: IR only stacked: IR only stacked (normalized): Bonus: IR stacked, normalized and processed with Topaz Denoise AI: Here's the IR range mapped onto the visible image: Here's the range between the 720 and 850 bands mapped onto the visible image:
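As a rough illustration of how such a tri-color composite can be assembled outside Photoshop, the snippet below maps three monochrome band images onto the R, G and B channels. This is a generic sketch of the idea, not the poster's GRB3 workflow, and the file names are placeholders.

```python
# Sketch: build a false-color composite by mapping three band images to R, G, B.
import numpy as np
from PIL import Image

def load_gray(path):
    return np.asarray(Image.open(path).convert("F"))   # 32-bit float grayscale

r = load_gray("band_850nm.tif")    # longest band  -> red channel
g = load_gray("band_720nm.tif")    # middle band   -> green channel
b = load_gray("band_red.tif")      # shortest band -> blue channel

rgb = np.dstack([r, g, b])
rgb = (rgb / rgb.max() * 255).astype(np.uint8)          # simple global normalization
Image.fromarray(rgb).save("tricolor_850_720_red.png")
```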
  3. Lately, I have been having much fun taking pictures of different objects in many bands and then combining the data I get in different ways. I have gotten many interesting results which I will share later, but for now I would like to share images of this beautifully blue mineral I got at a flea market on Saturday. The images are in full HD. As I usually do, I took several images of the rock in different bands: three IR bands with my GRB3 method, a normal RGB image (as seen above), and a UV image with a ZWB2 and QB39. 950nm, 850nm, 720nm, red, green, blue, UV. Out of curiosity, I also took a UV picture with the same exposure time, aperture and ISO with a 510nm longpass filter screwed on top of the two filters. I think the result was very impressive considering this was a halogen spotlight and not the sun or some other better light source. This image was pushed by 8.612 stops in Darktable. Now for the more artistic interpretations. G-B-UV; 950nm-850nm-720nm; Aerochrome simulation 850+720+R; full spectrum (850nm+950nm+720nm)-(R+G+B)-UV. Edit: here's a full spectrum stack made with a hybrid method I developed using advice from both @Stefano and @Andrea B. (found in this thread): I think it looks much better. (stack mode maximum)-(stack mode median)-(stack mode minimum); stack mode range, normalized. Brighter areas show where the mineral is the least consistent in its reflectance. It's very inconsistent overall.
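For anyone wanting to reproduce the stack modes used in the last two posts without Photoshop, here is a minimal numpy equivalent of the maximum, median, minimum and range stack modes. Treating per-pixel statistics as equivalent to Photoshop's stack modes is an assumption on my part, and the band file names are placeholders.

```python
# Sketch: per-pixel max / median / min / range across a set of registered band images.
import numpy as np
from PIL import Image

paths = ["950nm.tif", "850nm.tif", "720nm.tif", "red.tif",
         "green.tif", "blue.tif", "uv.tif"]              # placeholder file names
stack = np.stack([np.asarray(Image.open(p).convert("F")) for p in paths])

stack_max, stack_min = stack.max(axis=0), stack.min(axis=0)
stack_median = np.median(stack, axis=0)
stack_range  = stack_max - stack_min                     # "range" stack mode

# Normalized range image: brightest where reflectance varies most between bands
rng = (stack_range - stack_range.min()) / np.ptp(stack_range)
Image.fromarray((rng * 255).astype(np.uint8)).save("range_normalized.png")

# Hybrid false-color stack: (max)-(median)-(min) mapped to R-G-B
hybrid = np.dstack([stack_max, stack_median, stack_min])
Image.fromarray((hybrid / hybrid.max() * 255).astype(np.uint8)).save("max_median_min.png")
```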
  4. Had the chance to work 1:1 with a very patient model who let me take out my UV filter stack at a Halloween photo event. All images were taken on an A7R full spectrum with ZWB2 + IR CUT + Kolari Hotmirror Gen1 on the Sony 55mm 1.8. It was a bit overcast, so I was pushing my camera to its absolute limits! All images were lightly tweaked and denoised in DXO PhotoLab 5 Elite; I used in-camera WB. Happy to receive feedback, thoughts, and suggestions! ISO 25600, 1/60 sec, f/2.5 For comparison: in-camera JPEG vs. raw edit (same settings as previous) ISO 10000, 1/60 sec, f/1.8 ISO 12800, 1/40 sec, f/1.8 ISO 12800, 1/60 sec, f/1.8
  5. I would like some help with Photo Ninja please, but I am not tech savvy, so I need it simple. I have Photo Ninja and I like the 'Detail' slider; it is magical. What I would like to know about is the 'Sharpening' sliders. There are 'Sharpening Strength' and 'Sharpening Radius' sliders: what do they do and how should they be used, please?
  6. I am considering a debayered camera and wondered about the practical advantages and disadvantages people have found with the above programs. I can use either Mac or PC-based software.
  7. Feel free to use: https://github.com/lukaszgryglicki/align. You just need to install "golang" on mac/linux/windows and then install that package, like this: *go install github.com/lukaszgryglicki/align/cmd/align*. Then you can align and combine R, G, B images into a resulting image like this: *align R.JPG G.JPG B.JPG Aligned.JPG*. See the examples on the GitHub repo - it's a very simple but useful program (IMHO). It detects how many pixels of movement are needed to align all the images together. More details about the algorithm here: https://github.com/lukaszgryglicki/align#some-details
  8. My recent in-camera stacking experiment with the S1R didn't work out too well. So I thought I would put the raw sequence into an app to see if things improved. Long ago I had both Helicon and Zerene, but they have both expired. So I'm looking to either renew one of those or try something else. Let us know what you are using for stacking. Any comments, either pro or con, are welcome. This is a one-choice poll, so if you are using more than one stacking app, please select only the one you think is "best".
  9. I was studying stacking artifacts the other day, trying to understand halos appearing around petals of flowers (high contrast against a dark background). I'd been seeing a lot of this in my Olympus in-camera visible stacks, so was puzzled. A soft halo looks like this around the petals... I found a good description of the most common stacking gremlins here by Allan Walls Photography. He's a macro photographer, and uses big stacks processed in Zerene. It's a long video, but I found it well worth watching; here's a summary from my notes. 1. Wiggly Worm - wormy things in the background. Caused by dust on the sensor. 2. Background Banding - caused by changes in lighting between stack shots, common with flash. Minimized by using continuous lighting, or in Photoshop blur the entire background. 3. Ghosting - caused by motion in the background, i.e. clouds, water, grass. Run an extra pass in Zerene using higher settings and retouch. 4. Echo - caused by motion in the foreground. Remove the frames with echoes from the stack, or reshoot. 5. Halos - soft halos are caused by the stacking software becoming slightly confused trying to smooth areas with high contrast, worse with a light subject on a black background. Big blurry halos can be caused by variations in the light source; psychedelic halos bleeding into the background can be caused by lighting or by shots in the stack being in the wrong order. Tight, hard halos with chunks are sometimes found in DMap stacks - increase the estimation radius and rerun. Generally stacking with PMax will overcome soft and color-bleed halos. I found the Zerene help page very useful - for the differences in PMax vs. DMap stacking and how to retouch using one or the other or individual stack images. For Oly shooters, my conclusion after a little bit of study is that the halos are not a function of the camera or the lens; they are a function of the in-camera stacking software, and maybe the lighting. So I will probably forgo the Olympus in-camera stacking (turn focus stacking off), just do individual shots (turn focus bracketing on and start focus on the closest part of the subject), and do all my stacking in Zerene. I think Oly's in-camera stacking works OK most of the time, but simple bracketing and export to better stacking software seems to be the right plan for my flowers against a black background. The new OMDS OM-1 camera seems to treat focus stacking/bracketing the same as the EM-1 series does. I haven't done many in-camera stacks in the ultraviolet because of the longish shutter speeds, but have not seen soft halos in those images.
  10. UV color is a complex and controversial topic. For example, Andrea will always remind you that UV false color is not strictly related to wavelength, as it depends on many factors (lighting spectrum, lens transmission, filter transmission, sensor response, white balance...), although we always see the same colors in our UV photos: blues, lavenders/purples, yellows, and sometimes green. Red is not a color that we would expect. A different way of thinking about color outside the visible spectrum in general is to make a TriColour/trichrome/tri-band image, which often produces more natural-looking colors (for example, the sky is still blue) and also preserves a wavelength-color relationship. UVP member Bernard Foot experimented with the technique some years ago, and I have already tried it before. Other people (notably UVP member OlDoinyo) like to render white-balanced UV photos in BGR (swapping the red and blue channels), which also produces blue skies and a different color palette. Since I have a color camera, the images I take when making a TriColour image have colors, which I normally get rid of to make the channels. If I stack those images instead, I can simulate the raw image taken by a camera with an approximately flat response between about 310 nm and 400 nm, and with sunlight having a uniform spectrum too. This never happens in real life, even with a UV-dedicated lens. The interesting part is comparing the resulting colors with those of a normal UV photo. The equipment I used is the usual one: full-spectrum Canon EOS M, SvBony 0.5x focal reducer lens and the following filters: TriColour: Blue channel: double 310 nm Chinese bandpass filter + ZWB1 (2*2 mm) (the ZWB1's are not necessary, but I used them anyway); Green channel: BrightLine 340/26 filter + ZWB1 (2*2 mm); Red channel: BrightLine 387/11 filter + Chinese BG39 (2 mm); Standard UV: ZWB2 (2 mm) + Chinese BG39 (2 mm); Visible reference: Chinese BG39 (2 mm). The technique used to make the TriColour images is also the usual one, described here. The major difference is that I took multiple 310 nm exposures this time and stacked them taking the darkest pixels (5 in both cases). As for the raw color stacks, I set the brightness of each image to be about the same by eye and stacked them. Also, following Andy's advice last time, I raised the brightness of my images and the contrast in the TriColour stacks (also because the contrast in the original channels was removed in PhotoNinja during the processing). The visible and UV references are white-balanced in-camera; the raw stacks were white-balanced in PhotoNinja. I used both UV-lavender and UV-yellow subjects. For the lavender, I picked three items with varying degrees of lavender: a magnifying glass on the left (transparent at 387 nm, mostly transparent at 340 nm and opaque at 310 nm), almost colorless; a white LED lightbulb in the middle; and a plastic lens on the right (mostly transparent at 387 nm but opaque at 340 nm and below, which shows a strong blue-purple color). Visible reference: Standard UV: White-balanced raw UV stack: As you can see, the color palette didn't change much, but since here the shorter UV wavelengths contribute much more to the image, the magnifying glass is noticeably darker. In general, objects with a pale lavender color got a color boost. UV TriColour: Here the color palette is obviously richer, with the color giving a good indication of the transmitted/reflected wavelengths. 
Standard UV, BGR: Compared to the TriColour rendition above, only the plastic lens on the right looks similar, while the color deviates more for items with a flatter UV response. For comparison, here's the raw stack, in BGR version: ...and now for the yellows. Here I used a 3 mm thick ZWB1 filter on the left, and a 2 mm thick ZWB2 filter on the right. Visible reference: Standard UV: Here the colors look similar, with the ZWB1 filter being slightly greener, as expected. Also the paper tissue I used apparently contains UV-absorbing fibers. White-balanced raw UV stack: Here things get weird. The ZWB1 filter got orange, which is a bit different from its normal color. Also, and this was expected, the difference in color (hence transmission) between the filters is more evident now. UV TriColour: Standard UV, BGR: UV stack, BGR: Raw or .tif files are available.
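The darkest-pixel stacking mentioned for the 310 nm exposures amounts to a per-pixel minimum across the frames. A minimal numpy sketch of that step is below; the file names are placeholders, not the originals.

```python
# Sketch: "darkest pixel" stacking of several 310 nm exposures (per-pixel minimum),
# which suppresses random bright noise in very dim narrow-band frames.
import numpy as np
from PIL import Image

frames = [np.asarray(Image.open(f"310nm_{i}.tif").convert("F")) for i in range(5)]
darkest = np.min(np.stack(frames), axis=0)

out = (darkest / darkest.max() * 65535).astype(np.uint16)
Image.fromarray(out).save("310nm_darkest_stack.tif")
```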
  11. EDITOR'S NOTE: I need to preserve this, so I'm putting it here. Comments, corrections, suggestions are all welcomed. The middle strip shows Blue in varying degrees of brightness and saturation. Six different methods of conversion to Black and White are shown next to it. The leftmost strip shows a kind of relative luminance conversion, sRGB luma, which is calculated on converted RGB values, which are non-linear. This conversion does not do well at preserving brightness differences on the Blue strip. But in some cases, the results are better and do preserve the luminosity of colors. See below. The next strip shows middle grey (128,128,128) as a Color layer over the blue strip. This produces a luminosity result very similar to the sRGB luma method. The formula has different weighting factors. (Shown below.) The strip to the immediate left of the Blue strip is a greyscale conversion from Photoshop Elements. Adobe has never shared their "secret formula" for greyscale conversion. Note that greyscale conversion preserves most of the brightness and saturation differences of the Blue strip. Immediately to the right of the Blue is a simple desaturation of the Blue strip. To desaturate, the max and min RGB values are averaged. This desaturation also preserves the brightness and saturation differences of the Blue strip. But desaturation does not always work well. See below. The next strip shows another simple averaging method for desaturation. The result looks good here, but you can run into troubles with this method also. See below. Please note that there are a variety of desaturation formulas. And finally the rightmost strip shows a conversion based on the brightest RGB values. Obviously this method does not preserve saturation differences when brightness is the same. Here are the 6 primary and secondary RGB colors: Top Row: Red (255,0,0) Green (0,255,0) Blue (0,0,255) Bottom Row: Cyan (0,255,255) Magenta (255,0,255) Yellow (255,255,0) The 6 colors are shown at full saturation and full brightness. Conversion to B&W using the sRGB luma calculation. Here the luminosity of the fully saturated, fully bright colors is preserved. The yellow block is most luminous. The blue block is least luminous. The number on each block indicates the grey value. For example, the red patch (255,0,0) has been converted to (54,54,54). Conversion to B&W by layering middle grey (128,128,128) as a Color layer over the 6 blocks. Photoshop Elements was used for the layers. The approximate formula for luminosity in PSE is shown. Again the luminosity of the fully saturated, fully bright colors is preserved. But above, there were some problems with the blue strip on the bottom half with the fully saturated but decreasing brightness patches. Conversion by Greyscale: Here the Photoshop Elements greyscale conversion was used. The formula is unknown. It appears to be some kind of weighted calculation which preserves luminosity. Conversion to B&W by simple desaturation. Obviously, simple desaturation does not work so well when there is uniform brightness and saturation in the colors. Conversion to B&W by an averaged desaturation. Again, not so great given uniform brightness and saturation. Conversion to B&W by maximum brightness, max(R,G,B). A bit difficult to show here because each block produces 255 and (255,255,255) is pure white. Conclusion: When converting color to Black & White, there is no best method. But it looks to me like the greyscale or luma conversions are probably a good place to start. 
With any method, the outcome is dependent on the intensity (brightness and saturation) of the original colors together with whatever the photographer wants to bring out in the photograph by use of filters (either physical or digital) as well as other editing tricks and techniques. Of course, you can always let Photoshop convert to Black & White for you. QUESTION: What is a Monochrom* camera recording and what is it doing to the raw data? My assumption has been that the photon wells are capturing more photons from more reflective areas and fewer photons from less reflective areas of a subject. But does the camera massage that data in any way before producing the photo? Is the resulting photo similar to greyscale or desaturated or luma conversions as shown above? The answer is probably out there with a little bit of Internet sleuthing. Related Question: Two objects are sitting side by side. One reflects 100% blue and absorbs all other wavelengths. The other reflects 100% yellow and absorbs all other wavelengths. If photographed with a Monochrom camera, will the two objects look the same in the photo? We perceive the yellow object as having more luminosity than the blue object. But the Monochrom camera has no way to detect that. *I am referring here to a Leica Monochrom digital rangefinder. Other Conversion Methods from Color to Black & White: Black-to-white gradient maps are another useful method. Experiment with the middle slider for increasing/decreasing dark/light areas, or substitute dark grey for black for a different look. You can use the channel mixer to weight R, G and B values. Or use the channel mixer as a way to apply a filter. BOUQUET: Black & White Conversion via Desaturation. Very dull!! That is, it lacks contrast. Black & White Conversion via Greyscale. Black & White Conversion via Luma Weighting. Black & White Conversion via Gradient Mapping. I like this one best. I chose black and white for the gradient. The Gradient Mapping, Leveled. Maybe a bit too much. I applied the black dropper to the darkest area and the white dropper to the brightest area.
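For readers who want to try these conversions numerically, here is a small sketch of the formulas discussed above applied to a single non-linear sRGB pixel. The luma weights shown are the standard published Rec.709 coefficients, which reproduce, for example, red (255,0,0) converting to about 54; the exact weights Photoshop Elements uses remain unknown, as noted above.

```python
# Sketch: four of the B&W conversion formulas discussed above, for one sRGB pixel (0-255).
def luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b      # Rec.709 luma on non-linear values

def desaturate(r, g, b):
    return (max(r, g, b) + min(r, g, b)) / 2          # "simple desaturation" (max/min average)

def average(r, g, b):
    return (r + g + b) / 3                            # averaged desaturation

def brightest(r, g, b):
    return max(r, g, b)                               # maximum brightness

for name, fn in [("luma", luma), ("desaturate", desaturate),
                 ("average", average), ("max", brightest)]:
    # red, blue and yellow at full saturation and brightness
    print(name, round(fn(255, 0, 0)), round(fn(0, 0, 255)), round(fn(255, 255, 0)))
```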
  12. I've used Photoshop Elements for years whenever I need some simple layer work, text on photos, frames, drop-shadow cut-outs, dust bunny removal, color analysis and other stuff. I just added the latest Adobe Photoshop Elements 2022 to my new (sorta) Macbook. This latest 2022 version has some good improvements in selecting and refining edges for cut-outs. I occasionally make cut-outs of UV/IR floral signatures either to remove cluttered backgrounds or for design purposes, so I was very happy that cut-outs are now easier. Here is an electric-pink hollyhock which I cut out and drop-shadowed while practicing with my new PSE 2022. A bit of detail was subsequently added in Luminar. Adobe RGB is used. I hope that is nicely viewable in all browsers. You have to be careful when resizing a drop-shadowed cut-out file because sometimes a halo artifact appears between the cut-out and the drop shadow after resizing. I used Photo Mechanic's resizer and it did well. Anybody who has ever tried a cut-out with earlier incarnations of PSE probably had the same troubles I did in getting a smooth, yet realistic cleaned-up edge. In PSE 2022 it was SO easy. I selected the pink flowers using an auto-selector. (First time that ever worked really well!) Then made 3 little tweaks with the Refine Edge tool to smooth a bit and remove edge color artifacts. Here are two unresized details from the edges of the flower before the photo was resized. From the right side of the flower. The drop-shadow shows no gaps because the Refine Tool cleaned up the edge so nicely. From the left side of the flower. There are a couple of very very minor bumps here, so perhaps I could have gone one or two more pixels in smoothing. But after resizing it didn't matter. BTW, those Luminar detail sliders are very useful. You can brush in the desired detail enhancement exactly where you want it. Here is a before and after, unresized crop. Note in the green areas that detail sliders can bring out a noise effect. I'll brush that out in the final version and just keep the enhancement for the white filaments and anthers.
  13. I'm using a Sony A7R + Sony-Zeiss 55 1.8 + ZWB2 and BG39 stack and I'm not sure exactly if I'm doing things correctly or if the filters are working 100% as intended. I shot a couple of test shots outdoors. I didn't WB the camera and ended up using the same profile I use for 850nm to white balance in Lightroom. I tried several processes - I put some photos through DXO PureRaw (to clean up grain and correct distortion) and then edited in LR, while for another I tried editing in DXO PhotoLab 5 and in LR, so I will post below the process used for each. Filters: https://m.aliexpress.com/item/1005003432469423.html?spm=a2g0n.order_detail.0.0.23daf19cQqAl4I AND https://m.aliexpress.com/item/4001322146531.html?spm=a2g0n.order_detail.0.0.4f8ff19cS2QNjD Looking for any CC or suggestions! Also trying to verify the filters are working as they should!
  14. Added Later: My summary has become this: why is there color noise in the UV photo but not in the Visible or UV+Blue+Green photos? Added Later: I attempted an explanation here - LINK. This is an unresized crop from a D610 + UV-Nikkor + BaaderU + SB140 photo of Chamaebatiara millefolium photographed indoors against a black velvet background. The file has been converted and white balanced only. No sharpening or detail enhancement has been applied. The flower buds, stems and small leaves are very, very hairy. The areas with these small hairs show lots of color artifacts. I don't think these are due to iridescence. It seems rather that the complexity of the hairy areas causes some kind of moiré-like effect. I'm not sure what the correct terminology is for this effect. This will click-up to approximately 1600 x 1800 pixels in an enlarged browser. Then you can see the color noise. But next after this photo is a 3X enlargement which clearly shows the color noise. This screen shot was taken from a 3X enlargement in Photo Mechanic. There are cyan, pink, blue, brown and green areas. Here is that same area after Noise Ninja color reduction set to the default 50. Not quite every color is gone, but things look a bit less color-noisy. Finally here is Noise Ninja at the maximum 100 for color noise reduction. There are still some colorful bits, but the hairy areas are much more neutral. Here's the first thing: I think I like seeing the color noise due to the complexity of the hairy areas. So I'm undecided about whether to de-noise a photo like this or not. Here's the second thing: The UV light passes through the Bayer filter and is primarily recorded in the red channel. The white balance step typically produces some combination of false blue, false yellow and grey/black/white tones. So where does this color noise come from? It must be a result of demosaicing? This is an enlargement in Raw Digger showing the file before any white balance is applied. You can see some of the color noise in this raw composite.
  15. Following my post on using the 405nm laser to image falling snow, I decided that I needed a 405nm bandpass filter that was larger than 12mm (the size of the Omega filter that I used in that series). I bought one online earlier this week, and it arrived today, so I tested it out. The filter I bought was the MidOpt Bi405 25.4mm filter, which has this spectrum, using data given by the manufacturer and replotted by me for easier reading: The filter also has an IR leak which I plugged by stacking with BG38 2mm. Having acquired the filter, I took some photos with it out my window using the Sony A7S full spectrum conversion, and the EL-Nikkor 80mm/5.6 (metal) lens. I then processed the images in PhotoNinja as usual. My settings for PN were with the default values inside the checked boxes. At first everything seemed fine, albeit with a tiny bit of blurriness that I didn't usually get with the EL-Nikkor: But a closer examination showed something was very badly awry! Here is a crop of the above image enlarged 300% with nearest neighbor interpolation: It looked horrible. Mind you, it looked like this in the original TIFF and in the RAW converter, so that blockiness isn't JPEG artifacts. I did wonder at first if it might be caused by the fact that Sony uses compressed RAWs in their A7S (uncompressed is not an option) but further investigation convinced me otherwise. Because the next thing I tried was processing the image with Adobe Camera RAW, with very different results. ACR, with "Adobe Monochrome" and 16 bits selected, produced the following rendition, again with all default settings unmodified: Full size: Crop at 300% with nearest neighbor interpolation: MUCH better. So the conclusion I'm drawing is that when all the information is in one channel (blue here), PhotoNinja has serious issues processing the RAW, but Adobe does not. LATE BREAKING UPDATE: While there is about 2 stops more blue in the RAW than red and green and green2, RAW Digger (yes I own it finally) says there's plenty of the latter.
  16. Tried a UV landscape today with the Pentax K-1 using the pixel shift resolution setting. This moves the sensor one pixel, four times, to capture full frames of RGB. Used my standard in-camera UV custom WB. Instead of a normal UV picture it has a major yellow cast. Visible light photos using pixel shift have the same WB as standard. Any ideas why this occurs? Haven't had time to open the raw in SilkyPix to see if it fixes things. There doesn't appear to be any way in camera to set a custom pixel shift WB. Guess I'll have to live with it. Thanks, Doug A
  17. Pentax 645Z with Pentax 645 A 120 macro lens and Tiffen UV Haze 2A filter. First attempt using a DIY modified Pentax AF540FGZ full spectrum flash with 6mm of Tangsinuo ZWB1 (UG11) 340nm glass covering the flash tube. F11, ISO 400, 21 seconds to allow 3 flash pops. I tried this shot earlier without flash on a moonless night. Unfortunately, there are bright lights about 1/4 mile away and they diluted the purity of the yellow. I struggled with processing this image; it was difficult to decide how it should look. Any tips and comments are welcome. Thanks for looking, Doug A
  18. Things Not Needed in a Converter App 1) The Milky Way 2) Giraffes Alternate title: The Wonders of AI This from Luminar 4 - a perfectly replaced sky in an IR photo and a Giraffe. Sorry, I couldn't resist... I have been wearing myself out attempting to find a non-SilkyPix converter which will white balance my S1R files. While trying out Luminar 4, I wandered into the AI Tools area.
  19. DxO Photo Lab 5 will not correctly white balance all my raw UV files. It does OK with some Nikon D610 raw NEFs made using the BaaderU UV-pass filter, but won't correctly handle those D610 files made with a U330 + S8612 stack (UV + Blue + Green) or the StraightEdge UV-pass filter, red version. Photo Lab 5 cannot correctly white balance my Panasonic S1R UV files at all, not even those made with the BaaderU. For the record in this short report, there is only a very simple White Balance tool -- a dropper which can range in size from 1-50 pixels in diameter. The temperature slider ranges from 2000 to 50000. The Tint slider runs from -50 to +50 along the usual green/magenta axis. (The 50000 temperature setting is unusual; it is for aquatic images.) Usually a temp of 2000 with a negative tint is sufficient for white balancing UV images, but not in this app. There are otherwise some very good photo editing tools in DxO Photo Lab 5. For example, there is a very nice Local Adjustments palette which incorporates the old Nik Control Points along with some improvements and additions. And DxO Photo Lab is widely known for its capability with "ordinary" visual images for which the camera and lens information is known in order to apply corrections for various lens distortions and aberrations.
  20. Since many substances absorb plenty of UV, the world often appears quite dark in UV, unless the sky is part of the picture. The sky likes to appear very bright; it certainly scatters a lot of UV light, which is one of the reasons why daylight photos with a cloudless sky often appear softly lit. When I was still doing the WB with anodized aluminum plates, I particularly noticed the darkness in the UV, because with the aluminum I also had a kind of reference gray. PTFE is less suitable because our perception of brightness is not linear and we cannot differentiate light substances very well. The following pictures illustrate my problem: If I have PTFE in the picture, I can adjust a light value to it, e.g. 90% luminance. A point in the shadow defines black. Then the flowers look structureless and black: If I remove the PTFE, the whole picture looks much too dark: Now I lighten up: I can see structures in the flowers and in the wood. However, this brightness does not correspond to the first picture with the Teflon, so it is totally overexposed: If I increase the local contrast in the flowers, the result looks pretty nice. But now we are in the field of fine art ... Now to my question: How do you handle the brightness and the gradation or the gamma? Do you use standards (like PTFE for the WB) or do you just decide based on the visual result?
  21. To see a non-white balanced file, or "raw composite" as it is called in Raw Digger, make the following settings in RPP. The output is quite similar to that from Raw Digger but may have different camera color profiling applied. The differences for this particular example seem to be in the degree of saturation and the amount of brightness. White Balance, WB: Select UniWB from the drop-down WB menu at the top left. Curve: Select Gamma from the drop-down Curve menu under the RGB channel boxes, and enter 2.2 in the gamma box. All other settings should be 0 or OFF. Note that while you can get a UniWB version of a file from some raw converters (like RPP or Darktable), you would not be able to get the accompanying raw histograms supplied by Raw Digger, which are useful for various kinds of analyses. An RPP screen shot showing the settings for obtaining a raw composite. RPP: Raw Composite with Gamma 2.2 and RPP's camera profile. Raw Digger: Raw Composite with embedded camera profile from NEF and a Gamma 2.2 curve, and autoscaling. For reference here is the finished photograph (which was shown elsewhere earlier).
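A roughly equivalent raw composite can also be produced programmatically. The sketch below uses the rawpy (LibRaw) wrapper, which is my own suggestion and not part of the RPP or Raw Digger workflow described above: it applies unit white-balance multipliers (UniWB), skips the camera color matrix, uses a plain 2.2 gamma curve and disables auto-brightening. LibRaw still demosaics, so this is only an approximation of Raw Digger's composite.

```python
# Sketch: render a "raw composite"-style image with UniWB and a 2.2 gamma curve.
import rawpy
import imageio.v3 as iio

with rawpy.imread("example.NEF") as raw:                  # placeholder file name
    rgb = raw.postprocess(
        user_wb=[1.0, 1.0, 1.0, 1.0],                     # UniWB: leave channel balance alone
        output_color=rawpy.ColorSpace.raw,                # skip the camera color matrix
        gamma=(2.2, 0.0),                                  # plain power-law 2.2 curve
        no_auto_bright=True,                               # no automatic exposure scaling
        output_bps=16,
    )
iio.imwrite("raw_composite.tif", rgb)
```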
  22. As you all are surely aware, there is a way to create white light with just red, green and blue light. But there can be more combinations than that, hence CRI. Methinks, surely this works the same way for color channels too? I think with four you use cyan, magenta, yellow and green. What I would basically like to try is to simulate an image a hyperspectral camera could make by merging more than three channels. Say I take a picture of a subject with 8 bandpass filters, where the shortest wavelength bandpass is at 340nm and the longest is at 940nm; the rest lie somewhere in between. I now have 8 images, but as of now I only know how to merge three. The way I would like to do this is to start with a channel that is fully red, then move to a channel that is orange, then yellow, then green, etc., so that in the end when I stack the channels, the whole spectrum is represented and all colors we can see can be created by mixing the different levels in each channel. I would like to do this in Photoshop, but any other suggestions are appreciated. Thanks.
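One way to experiment with this outside Photoshop is sketched below: each band is tinted with a hue spaced from red (longest wavelength) to violet (shortest), the tinted layers are summed, and the result is normalized. This is only an illustration of the idea, not an established recipe, and the band file names are placeholders.

```python
# Sketch: fold 8 band images into one RGB image by tinting each band with a hue
# running from red (940 nm) to violet (340 nm) and summing the contributions.
import colorsys
import numpy as np
from PIL import Image

bands = ["940nm.tif", "850nm.tif", "720nm.tif", "650nm.tif",
         "550nm.tif", "450nm.tif", "400nm.tif", "340nm.tif"]   # placeholder file names

composite = None
for i, path in enumerate(bands):
    img = np.asarray(Image.open(path).convert("F"))
    hue = 0.78 * i / (len(bands) - 1)                # 0 = red ... ~0.78 = violet
    tint = np.array(colorsys.hsv_to_rgb(hue, 1.0, 1.0))
    layer = img[..., None] * tint                    # tint the band with its color
    composite = layer if composite is None else composite + layer

composite = (composite / composite.max() * 255).astype(np.uint8)
Image.fromarray(composite).save("multiband_composite.png")
```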
  23. Michael Erlewine, a Nikongear member (Birna's other forum), wrote some tutorials about focus stacking. These might be of interest to members here. 24 video tutorials: http://spiritgrooves.net/Photography.aspx (24 vids, my goodness!!!) PDFs: http://spiritgrooves.net/e-books.aspx#Photography All Erlewine's eBooks about photography, macros, stacks are listed there. Specific links to Stacking eBooks. Clicking these links will download a PDF. http://spiritgrooves.net/pdf/e-books/ArtofFocuStacking.pdf http://spiritgrooves.net/pdf/e-books/AOFS_Workbook.pdf http://spiritgrooves.net/pdf/e-books/Retouching Stacked Photos.pdf http://spiritgrooves.net/pdf/e-books/Focus Stacking Short.pdf Do look at one of Erlewine's eBooks because the images are stunning! The bolded link is a good example to look through just for the photos.
  24. I have a longstanding interest in local history, and lately this has crossed over a number of times with my interest in multispectral imaging because of the possibilities of revealing seemingly lost information. These possibilities have been well-investigated by the historical community, so I'm not doing anything new and exciting by their standards here, at least from a technical standpoint. One of the better-known examples is Christina Duffy's multispectral imaging of a burnt Magna Carta. This post will be on a commonly used method of combining multiple images from different (sometimes overlapping) spectral bands to extract the text of this advertisement. The method is called Independent Component Analysis (ICA) or Blind Signal Separation (BSS). The main text of the ad is reasonably clear, but there is smaller text that is nearly illegible and it would be nice to recover it. Independent Component Analysis originated in the audio community. The original problem it was meant to solve is known as the "cocktail party problem": you are at a party and two people are talking at once: how do you figure out what each person is saying? Each ear hears something slightly different (because it is facing a different direction, at a different distance from each speaker, etc.) so your brain can untangle the resulting mess somehow, but what if you want a computer to do it? In the computer version, you have two recordings from different mics (representing your ears) and the computer's task is to spit out the two original audio streams. The way it was solved was to imagine that each recording is a weighted average (linear combination in math-speak) of the original sources, but you don't know the weights. The problem then becomes recovering the weights. Different ICA methods take different approaches to finding the weights. The method I used is called fastICA. In the context of image analysis, we imagine that each channel of our multispectral image (not just R, G and B, but also additional channels for UV and IR) contains some information about the hidden letters, but different colored letters might reflect in different parts of the spectrum. The orange text, for example, is not visible in UV. This means that the ICA algorithm (which is just adding cleverly-chosen weighted sums and differences of the original channels) would be able to subtract off the brick background in principle, making the text easier to read. Could you do this by hand? Technically yes. It is just adding and subtracting channels, after all. But in the current example there are 12 channels coming from 4 images, and determining the correct weights (all 144 of them) by trial and error would be very laborious indeed. The images used here were taken with the following filters/stacks using the Novoflex Noflexar 3.5/35mm: UV: 2mm UG11 and 1.5mm S8612; Visible: Hoya UV/IR Cut; IR/vis: Tiffen #12; IR: Hoya R72. UV: Visible (for reference): IR/vis (Tiffen #12). This has had the Aerochrome treatment described in the other post on my helicopter flight: IR: The IR does the best by itself in revealing the smaller text, but it does not make it fully readable. Now we run the ICA. If you give the ICA 12 channels (4 photos x 3 channels/photo) then it will give back 12 "independent components" - images that are statistically independent and therefore should hopefully reveal unique information. Here is what you actually get back (with some contrast adjustment): The ICA process is not 100% unique. 
The ICA components that are revealed will come out randomly inverted (because statistically, changing the sign from + to - does not affect whether a channel is correlated to any of the others). So it is permissible to invert them back to normal. What we see above is that (as predicted) the ICA managed to wipe the text off the wall altogether in some cases, and in others bits of the text remain. The images fall into 4 groups: (1) images showing the main text ("Royal Crown Cola"/"Mansfield Market"), (2) images of plain brick wall, (3) images with bits of the smaller orange text that we are interested in, and (4) one entirely blank image (noise). That last one is because both the Tiffen and the Hoya R72 image contain duplicate infrared info, so it subtracts IR - IR and gets noise. I took an average of each of the first three groups and then put the results in the channels of an Lab file. This was the final result: The smaller text is now readable (barely)! The ads read, MANSFIELD MARKET FRESH KILLED POULTRY MEAT GROCERY FRUITS VEGETABLE (something, probably FRESH) EVERYDAY and Drink ROYAL CROWN COLA (unreadable) References I learned a lot from this review article, and if you are interested in writing your own ICA routine, I recommend it highly, especially for its comments on the pros and cons of different methods: Choi, S., Cichocki, A., Park, H.M. and Lee, S.Y., 2004. Blind Source Separation and Independent Component Analysis: A Review. The wikipedia article on fastICA has a nice overview of that particular method. https://en.wikipedia.org/wiki/FastICA A fairly advanced book on the topic (read the review above before diving into this). Cichocki, A. and Amari, S.I., 2002. Adaptive blind signal and image processing: learning algorithms and applications (Vol. 1). John Wiley & Sons. Christina Duffy's piece on the burned Magna Carta is fun reading. https://www.bl.uk/ma...rnt-magna-carta
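For anyone wanting to try this on their own registered image set, the sketch below runs the ICA step with scikit-learn's FastICA. The post used a fastICA routine, but this particular library, its parameters and the file names are my assumptions. Each photo contributes its three channels, giving a 12-column observation matrix with one row per pixel; as noted above, the components come back in arbitrary order and with arbitrary sign.

```python
# Sketch: Independent Component Analysis on 12 channels from 4 registered photos.
import numpy as np
from PIL import Image
from sklearn.decomposition import FastICA

paths = ["uv.tif", "visible.tif", "tiffen12.tif", "ir.tif"]   # placeholder file names
channels = []
for p in paths:
    img = np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
    channels.extend(img[..., c] for c in range(3))            # 3 channels per photo

h, w = channels[0].shape
X = np.stack([c.ravel() for c in channels], axis=1)           # pixels x 12 observations

ica = FastICA(n_components=12, max_iter=1000)
components = ica.fit_transform(X)                              # pixels x 12 components

for i in range(components.shape[1]):
    comp = components[:, i].reshape(h, w)
    comp = (comp - comp.min()) / np.ptp(comp) * 255            # stretch for viewing
    Image.fromarray(comp.astype(np.uint8)).save(f"ica_component_{i:02d}.png")
```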
  25. The newest version of Affinity Photo is out, and it is a free upgrade for those who own it. The most interesting feature for me is that it now supports FITS files natively, the raw format for astrophotography cameras. Before, this was hidden in the Astro profile and you needed a flat, a dark and a stack to work on them. But sadly it doesn't have a GCYM (or any mix of that) color profile, so I still get bands with Lodestar X2C files. I may just have to write a program to debayer that camera myself, especially since PHD2 seems to shift the pixels and even Starlight Live can't handle them correctly.