• Ultraviolet Photography

Flow visualization by Background-Oriented Schlieren imaging

19 replies to this topic

#1 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 12 April 2021 - 05:43

By rights, this topic doesn't really belong on UVP, which is why I'm sticking it in the chat room, but I have a feeling many people here will be interested in it, because it's about another kind of invisible imaging.

I've been working for several years with a Ph.D. student in Florida who is building a large-scale wind tunnel for imitating thunderstorm downbursts over models of medium-sized buildings. The problem he has is how to visualize the flow over these models in a wind tunnel the size of a small warehouse. Currently he is using a smoke-based system, but it occurred to me that another way would be to put pieces of black tape at various places on the model and heat them with spotlights or small lasers, then visualize the refractive index changes in the air as it flows. The imaging of refractive index gradients has a long history in fluid mechanics and is known as Schlieren imaging. It is usually done with large parabolic mirrors, but a new kind of computational Schlieren called Background-Oriented Schlieren (BOS) has recently come of age, and it is extremely simple to do, as I will demonstrate below.

The basic idea is that you take your warm object that is making the hot air currents and you place it in front of a large screen of randomly placed dots. Small changes in refractive index caused by warming the air make the dots "dance" on the background. Prior to placing the warm object in the scene, you take a "tare" image of just the screen with the dots, and then you can use special programs that use image correlations to determine how far each background dot has moved. The final output is then shown as a grayscale image where the amount of left-right movement of the dots is coded as a shade of gray (or sometimes a false color).
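The correlation step can be sketched in a few lines. This is a minimal illustration under my own assumptions, not what PIVLab does internally: it estimates, per interrogation window, the integer-pixel dot shift from the peak of an FFT cross-correlation (real PIV codes add sub-pixel peak fitting, overlapping windows, and outlier rejection). All function names here are invented for the sketch.

```python
import numpy as np

def window_shift(tare_win, flow_win):
    """Estimate the (dy, dx) displacement of one interrogation window
    from the peak of an FFT-based circular cross-correlation."""
    a = tare_win - tare_win.mean()   # remove the mean so brightness
    b = flow_win - flow_win.mean()   # offsets do not dominate the peak
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 correspond to negative (wrapped-around) shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

def bos_displacement(tare, flow, win=32):
    """Split both images into win x win interrogation windows and return
    the horizontal dot displacement per window (the quantity that gets
    mapped to gray levels in the final BOS image)."""
    ny, nx = tare.shape[0] // win, tare.shape[1] // win
    dx = np.zeros((ny, nx))
    for i in range(ny):
        for j in range(nx):
            sl = (slice(i * win, (i + 1) * win),
                  slice(j * win, (j + 1) * win))
            dx[i, j] = window_shift(tare[sl], flow[sl])[1]
    return dx
```

Feeding it the tare image and a frame with the candle lit would give one displacement number per window, which is then rendered as grayscale or false color.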

The setup is shown below, both as a schematic from a paper by Gary Settles, "Smartphone schlieren and shadowgraph imaging," and also in my kitchen.

Attached Image: Screen Shot 2021-04-12 at 1.26.03 AM.png

My setup:
Attached Image: BOS setup2 UVP.jpg

Note that the screen I used is actually much too small, which I knew ahead of time, but this was intended only as a proof of concept. I wanted to know how easy it was, and how practical it would be to build a setup in a large wind tunnel. I also wanted to demonstrate the concepts for my student and show him how to do the processing.

My tare image (actually an average of 30 aligned images to reduce noise) looked like this:
Attached Image: B1 UVP.jpg

A second image with the candle lit looked like this. No movement of the background is apparent, and none could be seen with the naked eye either. At this point I was very nervous that it wouldn't work!
Attached Image: B2 UVP.jpg

After processing the images, though, the airflow popped right out! Images were processed in MATLAB using a freeware program called PIVLab. Places with no dot screen, or with too little texture for PIVLab to detect, came out noisy. I have also Photoshopped the candle and bowl back into the photo, which is standard procedure in BOS imaging.
Attached Image: BOS UVP pic 1.jpg

Here are some more examples:
Attached Image: BOS UVP pic 2.jpg

Attached Image: BOS UVP pic 3.jpg

Attached Image: BOS UVP pic 4.jpg

I do believe this is the first time THE AIR ITSELF has been imaged on UVP!

I also took thermal photos, but what you see here is not the air but the soot from the candle flame (since gases don't emit blackbody radiation). And before Stefano points it out, yes, I COULD have visualized the CO2 absorption in MWIR with the other camera, but at the moment I don't have a way to support that 7kg camera in my kitchen.

Attached Image: IR_4714.jpg
Attached Image: IR_4715.jpg


Edited by Andy Perrin, 12 April 2021 - 06:37.


#2 colinbm

    Member

  • Members+G
  • 2,405 posts
  • Location: Australia

Posted 12 April 2021 - 07:00

Almost scary, ghost like...
You have the equipment to measure the temperature of the moving air, and the equipment to see the moving air. I wonder if you can see where different wavelengths finish?

#3 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 12 April 2021 - 07:22

Colin, if I could align everything right, maybe? The thermal camera does also have a crappy visible-light camera that lets you do image fusion, so maybe I could get something out of that with bright enough lighting.

#4 JMC

    Member

  • Members+G
  • 1,270 posts
  • Location: London, UK

Posted 12 April 2021 - 08:04

Amazing pictures Andy.

Out of interest, do you know if there is a 'non Matlab' way of doing this type of imaging relatively simply?
Jonathan M. Crowther

http://jmcscientificconsulting.com

#5 Stefano

    Member

  • Members(+)
  • 1,796 posts
  • Location: Italy

Posted 12 April 2021 - 08:14

Nice stuff Andy! I didn't actually think of CO2 absorption in MWIR, but I think a "dark gas" should also emit some blackbody radiation. You would literally see the "air" (CO2) glowing. The problem is that CO2 absorbs better at around 4.5 μm (it has weaker absorption bands at shorter wavelengths), and you would have quite a bit of background thermal radiation there (but since the gases emitted by a candle flame should be pretty hot, the emission may still stand out). Still, one day you could try this.

I also wonder what's special about these random dots. Wouldn't a grid work better? A gamma camera uses a coded aperture mask with a random pattern of square dots (at least, it appears random to me), and that somehow helps make an image (the Wikipedia article explains it, but I didn't read it and probably wouldn't understand much). There are also other types of coded masks with non-random patterns (such as these). There is something special about randomness.

#6 nfoto

    Former Fierce Bear of the North

  • Owner-Administrator
  • 3,179 posts
  • Location: Sørumsand, Norway

Posted 12 April 2021 - 08:44

We at UVP like to see such new techniques demonstrated!

#7 Bernard Foot

    Bernard Foot

  • Members+G
  • 718 posts
  • Location: UK

Posted 12 April 2021 - 10:21

That's fascinating, Andy.

How did you create the tare image?

I guess the air rising from the candle forms a fairly constant flow. Will the simulated thunderstorm be more turbulent - and is that an issue?

How do you think this will scale up? If you have a warehouse wall-sized tare image I guess the challenge is going to be to get enough resolution to capture all the moving dots.
Bernard Foot

#8 dabateman

    Da Bateman

  • Members+G
  • 2,767 posts
  • Location: Maryland

Posted 12 April 2021 - 11:04

Excellent Andy. Hot air is present on the forum all the time. I am glad you can now photograph it.
You said "but at the moment I don't have a way to support that 7kg camera in my kitchen."

Sounds like you're not using your students properly. Isn't that what they're for?

When you first mentioned this I was expecting you had bought a bottle of culture club mirror paint and painted a wall in your apartment.
https://www.cultureh...&pr_seq=uniform

This may be cheaper depending on the grain and size of the dots.

#9 OlDoinyo

    Member

  • Members(+)
  • 864 posts
  • Location: North Carolina

Posted 12 April 2021 - 13:50

Would the Schlieren effect be more pronounced in UV? I would think the refractive index change might be greater and some sensitivity might be gained.

#10 Stefano

    Member

  • Members(+)
  • 1,796 posts
  • Location: Italy

Posted 12 April 2021 - 13:59

OlDoinyo, on 12 April 2021 - 13:50, said:

Would the Schlieren effect be more pronounced in UV? I would think the refractive index change might be greater and some sensitivity might be gained.
There is a change, but I think it would be negligible.

https://www.google.c...u3n-Rmbz9H8Rfo4

#11 Stefano

    Member

  • Members(+)
  • 1,796 posts
  • Location: Italy

Posted 12 April 2021 - 14:49

Well, one advantage UV could have if you are imaging very hot objects (like the candle flame) is that you have much less blackbody radiation and so you should have a cleaner image.

#12 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 12 April 2021 - 16:06

JMC said:

Amazing pictures Andy.

Out of interest, do you know if there is a 'non Matlab' way of doing this type of imaging relatively simply?
Thanks, and yes, there are plenty of freeware PIV ("Particle Imaging Velocimetry") programs out there, and plenty of paid ones also. Any of them will do. I actually went with PIVLab in MATLAB because it has a nice GUI (no coding required).

Stefano said:

Nice stuff Andy! I didn't actually think of CO2 absorption in MWIR, but I think a "dark gas" should also emit some blackbody radiation. You would literally see the "air" (CO2) glowing. The problem is that CO2 absorbs better at around 4.5 μm (it has weaker absorption bands at shorter wavelengths), and you would have quite a bit of background thermal radiation there (but since the gases emitted by a candle flame should be pretty hot, the emission may still stand out). Still, one day you could try this.

I also wonder what's special about these random dots. Wouldn't a grid work better? A gamma camera uses a coded aperture mask with a random pattern of square dots (at least, it appears random to me), and that somehow helps make an image (the Wikipedia article explains it, but I didn't read it and probably wouldn't understand much). There are also other types of coded masks with non-random patterns (such as these). There is something special about randomness.
Stefano, thermal emission from gases is a line spectrum, not blackbody. Blackbody refers specifically to the Planck distribution. I do plan to look at CO2 one of these days.

Now regarding randomness. This is actually important to the method. Repeating patterns will NOT work at all, because the software cannot tell if the pattern has shifted by some integer number of cycles of whatever pattern you used. Like, you cannot tell these patterns apart because I moved the bottom one over by two spaces.
...1212121212121212...
  ...1212121212121212...
The phenomenon is called aliasing. When you learn about Fourier transforms you will hear more about it.
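The ambiguity can be shown with a one-line toy demo (the variable names are illustrative): shifting a periodic pattern by a whole cycle reproduces it exactly, while a random pattern shifted the same way no longer matches itself.

```python
import numpy as np

periodic = np.tile([1, 2], 128)   # the "...121212..." pattern from above
random_dots = np.random.default_rng(1).integers(0, 9, periodic.size)
shift = 2                         # move the pattern over by two spaces

# A periodic background shifted by one full cycle is exactly the same
# pattern, so the correlation peak is ambiguous (aliasing):
print(np.array_equal(np.roll(periodic, shift), periodic))        # True
# A random background shifted the same way no longer matches itself,
# so the true shift can be recovered:
print(np.array_equal(np.roll(random_dots, shift), random_dots))  # False
```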

Bernard said:

How did you create the tare image?

I guess the air rising from the candle forms a fairly constant flow. Will the simulated thunderstorm be more turbulent - and is that an issue?

How do you think this will scale up? If you have a warehouse wall-sized tare image I guess the challenge is going to be to get enough resolution to capture all the moving dots.
I made the tare image with another MATLAB script that I downloaded (freeware), but actually almost any random dot pattern works well. According to the literature, the dot size should be chosen to represent 3-4 pixels on the camera sensor, so you deliberately choose the dots to be resolvable!
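A pattern like the ones those scripts produce can be sketched with NumPy alone. `random_dot_background` and its parameters are my own invented names, and real generators control dot density and overlap more carefully; this is just to show the idea.

```python
import numpy as np

def random_dot_background(height, width, n_dots, dot_radius, seed=0):
    """Return a white image scattered with black circular dots at random
    positions. dot_radius (in printed pixels) is chosen so that, after
    framing, each dot spans roughly 3-4 pixels on the camera sensor."""
    rng = np.random.default_rng(seed)
    img = np.ones((height, width))
    yy, xx = np.mgrid[0:height, 0:width]
    for cy, cx in zip(rng.integers(0, height, n_dots),
                      rng.integers(0, width, n_dots)):
        # Paint one filled circle of zeros (black) onto the white field
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= dot_radius ** 2] = 0.0
    return img
```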

The thunderstorm will be more turbulent, but you can still see a lot of structure in turbulence. Many people have used this technique successfully with turbulent flows before. My biggest concern is that the camera will not be far enough back to get the subject and the BOS background in good focus. We can stop down and use more light, but that only works up to a point.
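The focus concern can be roughed out with the standard thin-lens depth-of-field formulas. A sketch under assumed numbers (the 0.03 mm circle of confusion is a common full-frame value, not a measured one):

```python
def dof_limits(f_mm, n_stop, s_mm, coc_mm=0.03):
    """Near and far limits of acceptable focus (thin-lens formulas) for
    focal length f, f-number N, focus distance s, and circle of
    confusion coc. All distances in millimetres."""
    h = f_mm ** 2 / (n_stop * coc_mm) + f_mm   # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = float("inf") if s_mm >= h else s_mm * (h - f_mm) / (h - s_mm)
    return near, far

# Focused on a model 3 m away with a 50 mm lens at f/11:
near, far = dof_limits(50, 11, 3000)   # ~2.16 m to ~4.91 m
```

With those assumed numbers, a dot screen 5 m behind the camera's focus plane would already be slightly soft, which is why stopping down (and adding light) matters.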

#13 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 12 April 2021 - 16:07

OlDoinyo said:

Would the Schlieren effect be more pronounced in UV? I would think the refractive index change might be greater and some sensitivity might be gained.


Yes, as you can see from Stefano's curve, the refractive index gets larger in UV. The refractive index changes being measured here are already infinitesimal, so even a small change would have a big effect. However, I'm not sure going toward UV would help, since the pixel shift predicted by the BOS formula is inversely proportional to the refractive index of the surrounding air. IR might help more.
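For a concrete number, the Edlén (1966) dispersion formula for standard dry air puts the UV gain at only a few percent. The constants below are quoted from the literature, so treat this as a sketch rather than a metrology-grade calculation:

```python
def air_refractivity(wavelength_um):
    """(n - 1) of standard dry air from the Edlen (1966) dispersion
    formula; s2 is the squared vacuum wavenumber in inverse microns."""
    s2 = (1.0 / wavelength_um) ** 2
    return (8342.54 + 2406147.0 / (130.0 - s2)
            + 15998.0 / (38.9 - s2)) * 1e-8

green = air_refractivity(0.55)         # ~2.78e-4 in the visible
ratio = air_refractivity(0.30) / green # ~1.05: about 5% higher at 300 nm
```

So the refractivity (and hence the BOS signal) only rises by roughly 5% going from green light to 300 nm.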

Edited by Andy Perrin, 12 April 2021 - 16:07.


#14 dabateman

    Da Bateman

  • Members+G
  • 2,767 posts
  • Location: Maryland

Posted 12 April 2021 - 17:57

Andy, is the random dot method better than the mirror method?
Now that you can paint a mirror 8 feet by 8 feet with the culture club stuff for $40, I would think that would be easier. You could buy a sheet of plywood from Home Depot, paint it, set it up, and be ready to go.
Alternatively, if the random dot method is better, then printing the pages out and gluing them to a board should also work.
I am just thinking ahead to when you scale this up to your warehouse application.

#15 Stefano

    Member

  • Members(+)
  • 1,796 posts
  • Location: Italy

Posted 12 April 2021 - 19:09

The mirror must be parabolic, not flat. I think a huge parabolic reflector could be built given the scale of the project (it seems like a big undertaking to me), but I don't think it would be easy to make it precise enough. I don't know how near-perfect the mirror has to be in order to have a working system.

The BOS method has advantages, but as Andy mentioned, focus is a problem. UV might help a bit with diffraction, but imagine the number of UV lamps you would need... it would actually be interesting to take UV photos with that amount of UV.

Using long focal lengths can "compress perspective," but you would need to go far away. Or you could make the dots larger, but that would probably decrease resolution. This is actually pretty challenging.

#16 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 12 April 2021 - 19:34

Dabateman, the mirror method requires a concave mirror that optically focuses the light with telescope-grade precision. It's not something you can do with paint.

#17 WiSi-Testpilot

    Member

  • Members(+)
  • 105 posts
  • Location: Germany

Posted 12 April 2021 - 19:47

It's a very interesting subject.
With GStreamer you can create (dynamic) snow patterns and other patterns, so you could use a screen or a projector for the background.

Code for a Windows PC:
gst-launch-1.0.exe videotestsrc pattern=snow ! video/x-raw, framerate=1/1, width=1920, height=1080 ! d3dvideosink

Attached Images

  • Attached Image: snow.jpg

Edited by WiSi-Testpilot, 12 April 2021 - 22:46.


#18 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 12 April 2021 - 19:51

You can't use a projector! The method requires the dots to be absolutely fixed in place, but projected light passes through the moving air on its way to the screen, so the dots themselves would move. My student and I were thinking we would basically print wallpaper-like sections of random dots. Another idea might be to use e-ink eventually, like those large billboards.

Edited by Andy Perrin, 12 April 2021 - 21:01.


#19 Andrea B.

    Desert Dancer

  • Owner-Administrator
  • 8,762 posts
  • Location: UVP Western Division, Santa Fe, New Mexico

Posted 13 April 2021 - 22:04

Andy, I read with fascination. And the images are just *totally* COOL!! Thank you for this topic.

Small question: do you need to know the camera sensor's pixel pitch or pixel size
to determine the best size for the dots on the background image?

Could you use the steam escaping from a teakettle as the turbulent flow in your kitchen model?


**********
I was planning to add some keywords to UVP such as MWIR, SWIR, LWIR.
Maybe I should add Background-Oriented Schlieren Imaging also? :grin:

**********
I certainly have no problem with this topic being in the Technical section if you would like to have it there.
Andrea G. Blum
Often found hanging out with flowers & bees.

#20 Andy Perrin

    Member

  • Members+G
  • 4,084 posts
  • Location: United States

Posted 14 April 2021 - 02:31

Andrea, to answer the last question first: if you can figure out where on Earth this goes, by all means add it to a technical section. I was just scratching my head!

Quote

Small question: do you need to know the camera sensor's pixel pitch or pixel size
to determine the best size for the dots on the background image?
Not really; the main thing is that when you are all the way zoomed in, you want to have a few pixels across each dot. This is easily determined by moving the camera to and fro, zooming, etc., while watching in Live View. The papers I read suggested having a few different backgrounds prepped in advance with different dot sizes. The method is honestly REALLY easy on the photography end of things. It's a bit more involved on the computer-processing side, but not so much that I didn't figure it out in about 45 minutes of fiddling with the PIVLab software. For free software, check out OpenPIV, which works using Python.
http://www.openpiv.net
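The "few pixels per dot" check can also be pre-computed from simple pinhole geometry before printing anything. The function and the numbers plugged in below are illustrative, not measurements from my setup:

```python
def pixels_per_dot(dot_mm, distance_mm, focal_mm, pitch_um):
    """Sensor pixels spanned by one printed dot, using the thin-lens
    magnification m = f / (d - f) for a dot at distance d."""
    m = focal_mm / (distance_mm - focal_mm)
    return dot_mm * m * 1000.0 / pitch_um

# A 1 mm printed dot, 3 m away, 50 mm lens, 4.3 um pixels:
print(pixels_per_dot(1.0, 3000.0, 50.0, 4.3))   # ~3.9 px, in the 3-4 px range
```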

Quote

Could you use the steam escaping from a teakettle as the turbulent flow in your kitchen model?
I certainly could! In fact one of the papers I read (by Gary Settles) had this one, made with an iPhone 5S:
Attached Image: Screen Shot 2021-04-13 at 10.30.53 PM.png

Edited by Andy Perrin, 14 April 2021 - 02:31.