NOAA Radar and Severe Weather Data Inventory

Posted on August 29, 2018 (updated November 13, 2022) by slowder

After I finished evaluating the Storm Events Database from NOAA, I was convinced we needed to look for machine-recorded events. When you start poking around the NOAA site looking for radar data, you’ll find a lot of information about how they record this data in binary block format. Within this data, you’ll find measurements of reflectivity, radial velocity, and spectrum width. So not only would I have to build code to decode the binary readings into textual data, but I would also have to build a model to translate those three measurements into a boolean for “was hail detected” and, if so, “how large was that hail?”

While it’s doable, I didn’t want to become an expert on reading radar data.

Fortunately, NOAA has already transformed the data from binary to text. Even better, they have built models that translate those three sensor readings into fields that can answer my questions: “was hail detected?” and “how large was that hail?” The data is also available in compressed CSV format. So I pulled it into our data lake and got to work.

Exploring the Source

Like last time, we needed to identify what was in the files. The data is bundled into yearly files for all past years and monthly files for the current year. Within each file we have:

  • a DateTime in UTC
  • longitude and latitude
  • the radar station name
  • a cell ID (not 100% sure what this is)
  • range in nautical miles
  • azimuth in degrees (the bearing of the reading from the radar station)
  • the probability that the sensor reading is severe hail
  • the probability that the reading is hail
  • the maximum observed size of hail in the reading

These are all machine-created sensor readings, so the radar is watching even when humans are asleep. And each hail reading has a probability associated with it, telling us how likely the reading is to be accurate based on the translation model.

The big surprise came when I looked at the total size of the data set. The previous data set was 1.06 GB; this one is currently 10.8 GB.

Data Quality Issues

The only real data quality issue I found was that some records have a SEVPROB (severe hail probability), PROB (hail probability), and MAXSIZE (maximum observed hail size) of -999, which appears to be a placeholder for missing readings. So let’s get rid of those.

ADLA doesn’t support multiple output files yet

You can create a U-SQL script that reads in multiple files and does some work on the data in those files, but you have to write the results to a single file. Writing to multiple output files from one script is only available to folks in a private preview. The rest of us could write 276 statements like the one below:

	DECLARE @Source string = @"/raw/ncdc.noaa.gov/pub/data/swdi/database-csv/hail-1995{*}.csv";
	DECLARE @Destination string = @"/sandbox/hail_research/database-csv-no999s/1995/01/hail-19950101.csv";

	@haildata =   
		EXTRACT 	
			  [UTCTime] long
			, [Longitude] double
			, [Latitude] double
			, [RadarSiteID] string
			, [CellID] string
			, [RangeNauticalMiles] int
			, [Azimuth] int
			, [SevereProbability] int
			, [Probability] int
			, [MaxSizeinInches] double
			FROM @Source
			USING Extractors.Csv(quoting: false, skipFirstNRows: 3, silent: true);

	@rs1 =
		SELECT
			[UTCTime].ToString().Substring(0, 8) AS EventDate,
			[Longitude],
			[Latitude],
			[RadarSiteID],
			[CellID],
			[RangeNauticalMiles],
			[Azimuth],
			[SevereProbability],
			[Probability],
			[MaxSizeinInches]
		FROM @haildata
		WHERE
			[SevereProbability] != -999
			AND [Probability] != -999
			AND [MaxSizeinInches] != -999
			AND [UTCTime] >= 19950101000000
			AND [UTCTime] <= 19950101999999;

	OUTPUT @rs1 
		TO @Destination
		USING Outputters.Csv(outputHeader: true, quoting: false, rowDelimiter: "\n");

Or, we can use PowerShell to accomplish the same thing. I took the script above and turned it into a parameterized template:

$usqlStatement = "blah" + $startDate + " more script here" + $endDate
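Here’s a minimal sketch of what that template could look like, assuming a hypothetical New-DailyHailScript helper; the real template holds the full U-SQL statement shown above, with only the date-driven values changing per script.

function New-DailyHailScript {
    param([datetime]$Day)

    # Date-driven pieces of the script (illustrative names).
    $dateKey = $Day.ToString("yyyyMMdd")
    $year    = $Day.ToString("yyyy")
    $month   = $Day.ToString("MM")

    # Expandable here-string: PowerShell fills in $year, $month, and $dateKey;
    # the U-SQL variables like @Source are left alone.
    return @"
DECLARE @Source string = @"/raw/ncdc.noaa.gov/pub/data/swdi/database-csv/hail-$year{*}.csv";
DECLARE @Destination string = @"/sandbox/hail_research/database-csv-no999s/$year/$month/hail-$dateKey.csv";
// ... the EXTRACT, the SELECT with the -999 filter and the UTCTime range
//     for $dateKey, and the OUTPUT go here, as shown above ...
"@
}

# Build one script per day of 1995, for example.
$Result = foreach ($offset in 0..364) {
    $date = (Get-Date "1995-01-01").AddDays($offset)
    [pscustomobject]@{
        ScriptName = "hail-" + $date.ToString("yyyyMMdd")
        ScriptBody = New-DailyHailScript -Day $date
    }
}

The exact bundling of the source files differs for the current year’s monthly files, so the real loop is a little more involved; this just shows the shape of it.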

With a parameterized version like that, the loop walks through all the source data, removes rows with -999 values, and saves the output into separate daily files. That way, additional processing can be scaled out further. While using PowerShell to create these scripts dynamically, I found out you can also submit them to ADLA.

In the script below, I have a collection of all the scripts stored in $Result. Each $record in those results has a script name and a script body. I spin through each record and submit the job, so long as there are fewer than 200 ADLA jobs already queued or running.

foreach ($record in $Result) {
    # Throttle: wait while the account already has 200 or more jobs queued or running.
    While ($(Get-AzureRmDataLakeAnalyticsJob -Account "" -State @("Queued", "Running")).Count -ge 200) {
        "Waiting for the number of active jobs to fall below 200"
        Start-Sleep -Seconds 60
    }

    # Submit the generated U-SQL script as a new ADLA job.
    "Submitting $($record.ScriptName)"
    Submit-AzureRmDataLakeAnalyticsJob `
        -Account "" `
        -Name $record.ScriptName `
        -Script $record.ScriptBody `
        -AnalyticsUnits 1
}

Yeah, I found out there is a limit to the number of simultaneous jobs you can run in ADLA: it’s 200. If you’re going to run other queries while this PowerShell runs, you might want to set the cap to a number lower than 200, just in case.
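If you want that headroom, the guard in the loop only needs the hard-coded 200 swapped for a setting of your own; $maxJobs below is just an illustrative name and value.

# Leave room for other ADLA work while the backfill runs.
$maxJobs = 150
While ($(Get-AzureRmDataLakeAnalyticsJob -Account "" -State @("Queued", "Running")).Count -ge $maxJobs) {
    "Waiting for the number of active jobs to fall below $maxJobs"
    Start-Sleep -Seconds 60
}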

After this job ran, the total data set size dropped from 10.8 GB to 6.44 GB. This led us to our next issue: converting latitude and longitude to State, County, Township, and Range. We have to reverse geocode approximately 133M rows! Solving this problem has taken quite a while and several programming attempts. I’ll share them with you next time.

What do I want you to learn from this?

Exploring new datasets quickly is a skill you must develop to work effectively in data science. It took a couple of days to locate this data set among all the links on the NOAA site. Once we had the files, figuring out what’s inside them didn’t take a lot of time. Learning to deal with roadblocks was a major feature of this part of the story: ADLA simply didn’t have the functionality we needed to solve the problem, so I created a solution. Having the ability to fall back on previous skill sets and apply them to current problems will serve you well. Don’t abandon your old skills as you pick up new ones.
