After I finished evaluating the Storm Events Database from NOAA, I was convinced we needed to look for machine-recorded events. When you start poking around the NOAA site looking for radar data, you’ll find a lot of information about how they record this data in a binary block format. Within this data, you’ll find measurements of reflectivity, radial velocity, and spectrum width. So not only would I have to build code to decode the binary readings into textual data, but I would also have to build a model to translate those three measurements into a boolean for “was hail detected?” and, if so, “how large was that hail?”
While it’s doable, I didn’t want to become an expert on reading radar data.
Fortunately, NOAA already has the data transformed from binary to text. Even better, they have built models that translate those three sensor readings into fields that can answer my questions of “was hail detected?” and “how large was that hail?”. This data is available in compressed CSV format too. So I pulled that data into our data lake and got to work.
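If you want to follow along, pulling one of the files down and landing it in the lake is straightforward. Here’s a minimal sketch; the URL, local path, and account name are my own assumptions (use whatever path you find on the NOAA site and your own ADLS account), the destination mirrors the raw folder used later in this post, and the Import-AzureRmDataLakeStoreItem parameter names may vary slightly by AzureRM module version.

$year      = 1995
$fileName  = "hail-$year.csv.gz"
$sourceUrl = "https://www1.ncdc.noaa.gov/pub/data/swdi/database-csv/$fileName"   # assumed URL
$localPath = "C:\noaa\$fileName"

# Download the compressed yearly CSV from NOAA.
Invoke-WebRequest -Uri $sourceUrl -OutFile $localPath

# Push it into the raw zone of the data lake (AzureRM.DataLakeStore module).
Import-AzureRmDataLakeStoreItem -AccountName "myadlsaccount" `
    -Path $localPath `
    -Destination "/raw/ncdc.noaa.gov/pub/data/swdi/database-csv/$fileName"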
Exploring the Source
Like last time, we needed to identify what was in the files. The data is bundled into yearly files for all past years, and monthly files for the current year. Within each file we have a UTC date/time, longitude, latitude, radar site ID, cell ID (not 100% sure what this is), range in nautical miles, azimuth in degrees (the compass bearing from the radar station), the probability that the reading is severe hail, the probability that the reading is hail, and the maximum observed hail size in the reading.
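If you want to confirm that layout for yourself before writing any extraction code, a quick peek at the top of a downloaded file is enough. A minimal sketch, assuming you’ve already decompressed one of the yearly files locally:

# Show the first few rows. The U-SQL extraction later in this post skips the
# first three rows, so expect a few header lines before the data starts.
Get-Content -Path "C:\noaa\hail-1995.csv" -TotalCount 5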
These are all machine-created sensor readings, so the radar is watching even when humans are asleep. And the hail readings have probabilities associated with them, telling us how likely it is that each reading is accurate, based on the translation model.
The big surprise here came when I looked at the total size of the data set. The previous data set was 1.06GB; this one is currently 10.8GB.
Data Quality Issues
The only real data quality issue I found was that there were records with a SEVPROB (severe probability), PROB (probability), and MAXSIZE (maximum hail size observed) of -999. So let’s get rid of those.
ADLA Doesn’t Support Multiple Output Files Yet
You can create a U-SQL script that reads in multiple files and does some work on the data in those files, but you have to write the results to a single file. Writing to multiple output files is only available to folks in a private preview. For the rest of us, the option is to write 276 statements like the one below:
DECLARE @Source = @"/raw/ncdc.noaa.gov/pub/data/swdi/database-csv/hail-1995{*}.csv";
DECLARE @Destination = @"/sandbox/hail_research/database-csv-no999s/1995/01/hail-19950101.csv";

@haildata =
    EXTRACT [UTCTime] long,
            [Longitude] double,
            [Latitude] double,
            [RadarSiteID] string,
            [CellID] string,
            [RangeNauticalMiles] int,
            [Azimuth] int,
            [SevereProbability] int,
            [Probability] int,
            [MaxSizeinInches] double
    FROM @Source
    USING Extractors.Csv(quoting: false, skipFirstNRows: 3, silent: true);

@rs1 =
    SELECT [UTCTime].ToString().Substring(0, 8) AS EventDate,
           [Longitude],
           [Latitude],
           [RadarSiteID],
           [CellID],
           [RangeNauticalMiles],
           [Azimuth],
           [SevereProbability],
           [Probability],
           [MaxSizeinInches]
    FROM @haildata
    WHERE [SevereProbability] != -999
      AND [Probability] != -999
      AND [MaxSizeinInches] != -999
      AND [UTCTime] BETWEEN 19950101000000 AND 19950101999999;

OUTPUT @rs1
TO @Destination
USING Outputters.Csv(outputHeader: true, quoting: false, rowDelimiter: "\n");
Or, we can use PowerShell to accomplish the same thing. I took the script above and turned it into a parameterized string variable.
$usqlStatement = "blah" + $startDate + " more script here" + $endDate
With a parameterized version of the script, we can write a loop that goes through all the source files, removes rows with -999 values, and saves the output into separate daily files. That way, additional processing can be scaled out even further! While using PowerShell to create these scripts dynamically, I found out you can submit them to ADLA from PowerShell too!
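The post doesn’t show the script-generation code itself, so here is a minimal sketch of what building that collection might look like. The $Result, ScriptName, and ScriptBody names match the submission loop below; the trimmed-down template, the date range, and the formatting choices are illustrative assumptions, and the real ScriptBody would be the full U-SQL script shown above with the dates swapped in.

# Trimmed-down template; the real one is the full U-SQL script above.
# {0} = year, {1} = yyyy/MM output folder, {2} = yyyyMMdd day stamp.
$template = @"
DECLARE @Source = @"/raw/ncdc.noaa.gov/pub/data/swdi/database-csv/hail-{0}{{*}}.csv";
DECLARE @Destination = @"/sandbox/hail_research/database-csv-no999s/{1}/hail-{2}.csv";
// ... EXTRACT, SELECT (with the -999 filter and BETWEEN {2}000000 AND {2}999999), and OUTPUT go here ...
"@

$Result = @()
$date = Get-Date "1995-01-01"
$end  = Get-Date "1995-12-31"   # repeat (or wrap in another loop) for the other years

while ($date -le $end) {
    $dayStamp = $date.ToString("yyyyMMdd")
    $folder   = $date.ToString("yyyy") + "/" + $date.ToString("MM")
    $Result += [pscustomobject]@{
        ScriptName = "hail-$dayStamp"
        ScriptBody = $template -f $date.ToString("yyyy"), $folder, $dayStamp
    }
    $date = $date.AddDays(1)
}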
In the script below I have a collection of all the generated scripts stored in $Result. Each $record in those results has a script name and a script body. I spin through each record and submit the job, so long as there are fewer than 200 ADLA jobs already running.
foreach ($record in $Result) {
    # Throttle: wait while too many jobs are already running in the account.
    While ($(Get-AzureRmDataLakeAnalyticsJob -Account "" -State "Running").Count -gt 200) {
        "Waiting for the number of active jobs to fall below 200"
        Start-Sleep -Seconds 60
    }

    "Submitting $($record.ScriptName)"
    Submit-AzureRmDataLakeAnalyticsJob `
        -Account "" `
        -Name $record.ScriptName `
        -Script $record.ScriptBody `
        -AnalyticsUnits 1
}
Yeah, I found out there is a limit to the number of simultaneous jobs you can run in ADLA. It’s 200. If you’re going to run other queries while this PowerShell runs, you might want to set the max to a number lower than 200. Just in case.
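If you do want that headroom, the threshold is easy to pull out into a variable. A minimal tweak to the throttling loop above (150 is just an example number, not anything from the original post):

$maxConcurrentJobs = 150   # leave room under the hard 200-job limit for ad-hoc queries

While ($(Get-AzureRmDataLakeAnalyticsJob -Account "" -State "Running").Count -ge $maxConcurrentJobs) {
    "Waiting for the number of active jobs to fall below $maxConcurrentJobs"
    Start-Sleep -Seconds 60
}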
After this job ran, we reduced the total data set size from 10.8GB to 6.44GB. This led us to our next issue: converting latitude and longitude to State, County, Township and Range. We have to reverse geocode approximately 133M rows! Solving this problem has taken quite a while and several programming attempts. I’ll share them with you next time.
What do I want you to learn from this?
Exploring new data sets quickly is a skill you must develop to work effectively in data science. It took a couple of days to locate this data set among all the links available on the NOAA site. Once we had the files, figuring out what’s inside them didn’t take long. Learning to deal with roadblocks was a major feature of this part of the story: ADLA simply didn’t have the functionality we needed to solve the problem, so I created a solution. Having the ability to fall back on previous skill sets and apply them to current problems will serve you well. Don’t abandon your old skills as you pick up new ones.