Last year, I blogged about an algorithm to find overshooting tops in satellite images. On 1km visible imagery, overshooting tops identify initiating thunderstorms very well. After some changes, we now have a working algorithm.
What changes did we have to make beyond the simplistic variance filter I talked about in my earlier post? First, we had to correct for sun angle, because the satellite visible channel darkens considerably as the sun gets closer to the horizon. Although this correction is different for every part of the image, it can be computed from basic solar-radiation principles. Second, we kept only the brightest 3-4% of the pixels, i.e., a percentile threshold on the brightness histogram.
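The two steps above can be sketched roughly as follows. This is a minimal illustration, not the operational code: I'm assuming the standard cosine-of-solar-zenith-angle normalization for the sun-angle correction, and the function and parameter names (`correct_sun_angle`, `brightest_pixels_mask`, `keep_fraction`) are hypothetical.

```python
import numpy as np

def correct_sun_angle(brightness, solar_zenith_deg):
    """Normalize visible-channel brightness by the cosine of the solar
    zenith angle, so low-sun pixels are not artificially dark.
    The zenith angle varies across the image, so this is per-pixel."""
    mu = np.cos(np.radians(solar_zenith_deg))
    # Clamp near the terminator so we don't divide by ~0.
    mu = np.clip(mu, 0.1, None)
    return brightness / mu

def brightest_pixels_mask(corrected, keep_fraction=0.035):
    """Keep roughly the brightest 3-4% of pixels via a percentile
    threshold on the corrected-brightness histogram."""
    threshold = np.percentile(corrected, 100.0 * (1.0 - keep_fraction))
    return corrected >= threshold
```

The percentile threshold adapts to each scene's overall brightness, which is why it works across different times of day once the sun-angle correction has been applied.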
But the technique works, and now it works in real-time. This illustrates my work cycle in a microcosm -- secure funding, identify data source, analyze data, implement ideas, refine algorithm over different scenarios, run in real-time, collect statistics, publish paper ...
Note that I did not add operational implementation to the list. With funding cuts at operational agencies, they're not actually moving anything new into their systems -- instead, weather forecasters simply start using our "experimental" web services and our real-time data feeds become quasi-operational. It's a poor way to do business, but that's what we now have.
UPDATE: Here's the link to the real-time overshooting tops products.