Data Science Project 1: Predicting Hail Damages

Posted on August 15, 2018 (updated November 13, 2022) by slowder

Early in my new role I was asked to find out how risky it is to offer hail insurance for a given property.  If you haven’t worked with insurance before, here are the basics.  As the policyholder, you’re betting that something bad is going to happen; the insurer is betting that it won’t.  Like any good casino, the insurer has plenty of statistics on how likely an event is based on past experience.  It looks at those statistics, plugs in attributes about you, and determines how risky the bet is.  If the bet looks risky to the insurer, they will charge you higher premiums…and collect more money from you.

If you file a claim against the insurance policy, the insurer pays you, and you win the bet.

If you never file a claim, the insurer wins.  Like nearly everything in the financial industry, it’s calculated risk: the goal is to set premiums so that, across all policies, the insurer makes a profit.

Auto Insurance

If you are looking for auto insurance today, the insurer is going to pull a credit report on you. Statistically speaking, if you’re financially responsible, you’re likely a responsible driver too.  The insurer will also look at property values for your primary residence, as well as for comparable homes in your area. They’ll look at crime statistics in general, and then specifically at auto theft and vandalism.  They’ll look at accident rates in your area overall, and then specifically for your vehicle’s make and model.  I’m sure there are many other attributes they consider before returning a quote for an insurance premium.

Crop Insurance

Now, let’s look at how hail insurance is quoted for farmers.  The US Department of Agriculture (and some of its child agencies) collects information about the insurance sold each year and the claims filed each year.  These numbers are broken down by state, county, township, and section (about a square mile). A rolling average is calculated to determine how likely a farm in a given geography is to file a claim.

Yup, that’s it.  Past claims are the only factor in determining the premium.  So what happens for a farmer with 100 acres? Can he reliably inspect all of those acres by sight to find hail damage after each event?  How often does damage go unnoticed until harvest at the end of the season?  Wouldn’t it be more accurate if we also considered weather data about the hail events themselves?

And so, our story begins

Since our current data set was based on claims, I wanted to see if there were cases where a hail event occurred but no claim was filed. I quickly found the National Oceanic and Atmospheric Administration’s (NOAA) Storm Events Database. Just finding this data set taught me something: an agency like NOAA has many child organizations, and while they are separate entities, a child agency’s data may or may not be published on the parent agency’s site.

This database was awesome: it went back to 1950! The data was contributed by the National Weather Service (NWS), with additional entries from first responders and air traffic controllers. All trusted sources, right? Well, even the best-trained person is still human, and humans make mistakes and misjudge what they see. All things considered, I went ahead and started analyzing the data. It was free, after all!

Azure Data Lake Storage and Analytics

The source data was CSV flat files, a little over a gigabyte in total.  I used my file interrogator solution to generate a stage table, and then an SSIS package to read the files into a database.

I also took this opportunity to practice with Azure Data Lake Storage and Analytics. I built an Azure Data Factory pipeline to download the compressed CSVs from NOAA, and put them into a data lake. With the files stored, I could then start writing some U-SQL queries to clean up the data and get it ready for analysis.
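To make that clean-up step concrete, here’s a minimal U-SQL sketch of the kind of query involved. This is not the project’s actual script: the lake paths, file name, and column list are assumptions, and the built-in Csv extractor expects the EXTRACT schema to match every column in the file in order, so this pretends the raw 50-column NOAA file has already been trimmed down to a handful of columns.

```
// A minimal sketch, not the project's actual script.
// Assumption: a trimmed copy of one year's storm events file sits at this
// path with exactly the columns listed below.
@events =
    EXTRACT EventId        string,
            State          string,
            CzName         string,   // county name
            EventType      string,
            BeginDateTime  string,
            DamageProperty string,   // damage amounts arrive as text
            DamageCrops    string,
            BeginLat       double?,
            BeginLon       double?
    FROM "/noaa/stage/StormEvents_2017_trimmed.csv"
    USING Extractors.Csv(skipFirstNRows: 1);

// Keep only hail events with usable coordinates.
@hail =
    SELECT EventId, State, CzName, BeginDateTime,
           DamageProperty, DamageCrops, BeginLat, BeginLon
    FROM @events
    WHERE EventType == "Hail"
          && BeginLat != null && BeginLon != null;

OUTPUT @hail
TO "/noaa/clean/hail_2017.csv"
USING Outputters.Csv(outputHeader: true);
```

The EXTRACT / SELECT / OUTPUT shape shown here is the core of most U-SQL clean-up scripts.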

By keeping a copy in SQL Server, I had a way to do my own sanity checks as I continued to learn Azure Data Lake Analytics (ADLA).

Exploring the Source

After you identify a data source, you’ve got to work your way through the data. What information is in each file? How are the columns arranged, and what data types do they hold? What information do you really have? Are there any data quality issues?

Trust me, you always have some quality issue with every source.

In exploring this first data set, I found 50 columns of data in each CSV file. Fortunately, all the files shared the same layout. Looking through those columns, I found 21 that could be useful for this project. There were date and time columns, and a column identifying the type of storm event each record related to. There were also two columns recording property damage and crop damage in dollars, which could help show how the size of the hail stones affects damages. Finally, there were columns for coordinates, as well as identifiers for the state and county where each event occurred.
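As a rough, hypothetical illustration of that first pass over the columns, a quick U-SQL profile like the one below counts how many records fall under each event type and how many are missing coordinates. As with the earlier sketch, the path and trimmed column list are assumptions rather than the actual project code.

```
// A profiling sketch over the same assumed trimmed file as above.
@events =
    EXTRACT EventId        string,
            State          string,
            CzName         string,
            EventType      string,
            BeginDateTime  string,
            DamageProperty string,
            DamageCrops    string,
            BeginLat       double?,
            BeginLon       double?
    FROM "/noaa/stage/StormEvents_2017_trimmed.csv"
    USING Extractors.Csv(skipFirstNRows: 1);

// Row counts per event type, plus how many rows lack coordinates.
@profile =
    SELECT EventType,
           COUNT(*) AS EventCount,
           SUM((BeginLat == null) ? 1 : 0) AS MissingCoordinates
    FROM @events
    GROUP BY EventType;

OUTPUT @profile
TO "/noaa/profile/event_type_counts.csv"
USING Outputters.Csv(outputHeader: true);
```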

Once I had an idea of what was in the data source, I wrote a couple of sample U-SQL queries to comb through the 68 yearly files and combine them into a single file I could explore with Power BI, Excel, or Notepad++. Using that single file, I started finding some issues with the data. In my next article we’ll look into those data quality issues and how you can deal with them in a data science project.
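For readers who haven’t seen U-SQL file sets, here’s a hedged sketch of how that combine step could look; the paths and trimmed column list are again assumptions, not the project’s real script. The {Year} token in the input path is a virtual column that U-SQL fills in from each matching file name, which is what lets one query sweep all the yearly files.

```
// A sketch of combining the yearly files with a file set.
// {Year} is a virtual column populated from each matching file name.
@allYears =
    EXTRACT EventId        string,
            State          string,
            CzName         string,
            EventType      string,
            BeginDateTime  string,
            DamageProperty string,
            DamageCrops    string,
            BeginLat       double?,
            BeginLon       double?,
            Year           string
    FROM "/noaa/stage/StormEvents_{Year}_trimmed.csv"
    USING Extractors.Csv(skipFirstNRows: 1);

@hailAllYears =
    SELECT Year, EventId, State, CzName, BeginDateTime,
           DamageProperty, DamageCrops, BeginLat, BeginLon
    FROM @allYears
    WHERE EventType == "Hail";

// One combined file to pull into Power BI, Excel, or Notepad++.
OUTPUT @hailAllYears
TO "/noaa/clean/hail_all_years.csv"
USING Outputters.Csv(outputHeader: true);
```

The job writes a single combined CSV back to the lake, which is the file you would then explore in Power BI or Excel.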

What do I want you to learn from this?

There are three facets to data science: math, programming, and business knowledge. At the beginning of a project, you have to get to know the business you’re trying to help. In this case, I had to take what I knew from my time in the finance industry and learn what’s unique to crop insurance. When you begin your first data science project, you might not know anything about the industry you’re serving. Ask lots of questions, and read any materials you can find on your target industry. That’s key to understanding new terms and identifying column names in data sources.

The next thing I’d like to share is this: embrace new technologies. While the concepts behind data science are old, the way you implement those concepts changes all the time. In this case, I began using Azure Data Lake Analytics and Data Lake Store. You have to be willing to jump into a new technology, use it, and find out where that tool fits within your current skill set. The ability to use these new tools can be the difference between landing a role and being left out.
