Data Analysis…can we automate this?

Posted on May 23, 2018 (updated November 14, 2022) by slowder

As some of you know, I’ve moved from consulting back into a full-time employee role at Crop Pro Insurance.  There’s a lot of opportunity in this role.  First, it gives me my first full-time data science credit.  I also get to build a team to support data science projects.  On top of that, I get to push my data automation code to the next level.  All of this has me asking one question: “Can we build a tool that would assist in data analysis tasks?”

What does a data analyst do?

I’m not saying data analysts aren’t doing anything. I’m asking whether we can list what they do, so we can identify the tasks that are ripe for automation.

They Collect Ontology

During the first couple of weeks at the new job, much of my time has been spent learning about the business: new acronyms, new vocabulary, and new concepts.  The fancy term for this analysis is Ontology Collection.  Data analysts gather this information on paper or in OneNote.  Eventually, they turn it into documentation: a human-readable form that lets developers and business owners talk about the business in a single, shared language.

I’ve found that the process of gathering this kind of information isn’t structured.  It’s generally a series of conversations between the analyst and the Subject Matter Experts (SMEs).  For the most part, the analyst takes notes while the SMEs talk through the details of their work, and it’s up to the analyst to figure out which pieces of information are important and which to discard.  Right now, I’d expect this to be a difficult task to automate.  The end goal would be a chatbot that records everything raw, parses the conversation for new words and phrases, and uses each new term as a prompt to ask the SME for a further definition.

But how could we model an ontology in a way the bot could work with?

How could we model this information so that other processes could consume it down the line?
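Here’s a rough sketch of one answer, just to make the question concrete. It’s my own guess at a starting point, not anything that exists today: a glossary entry captured as data, plus the naive version of the bot’s “what haven’t I seen before?” loop.  The Term fields, the stop-word list, and the prompt queue are all assumptions on my part.

```python
from dataclasses import dataclass, field
import re

@dataclass
class Term:
    """One ontology entry: a business word or phrase and what we know about it so far."""
    name: str
    definition: str = ""                      # filled in once an SME explains it
    synonyms: list = field(default_factory=list)
    source_systems: list = field(default_factory=list)
    status: str = "proposed"                  # proposed -> reviewed -> approved

# A naive stop-word list; a real bot would lean on proper NLP for phrase detection.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "for", "before", "after"}

def find_new_terms(notes: str, glossary: dict) -> list:
    """Return words from raw meeting notes that aren't in the glossary yet.
    Each hit becomes a prompt back to the SME for a definition."""
    words = set(re.findall(r"[a-z][a-z\-]{2,}", notes.lower())) - STOPWORDS
    known = {name.lower() for name in glossary}
    known |= {s.lower() for term in glossary.values() for s in term.synonyms}
    return sorted(words - known)

# The bot records a conversation, then queues every unknown term for SME follow-up.
glossary = {"policy": Term("policy", "A contract of insurance coverage.")}
notes = "The adjuster reviews the policy before approving an indemnity payment."
for word in find_new_terms(notes, glossary):
    glossary.setdefault(word, Term(name=word))
```

The appeal of something like this is that downstream processes (the data catalog, a rules engine) could consume the same glossary without re-interviewing the SMEs.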

They Populate a Data Catalog

Yes, I’m referring to Azure Data Catalog. And yes, I’m aware ADC has room for improvement, but it’s still the best option for facilitating this kind of work.  It’s the first product built on the realization that no one person or team will ever catalog all the data assets in an enterprise; it will take many SMEs to get there.  Data analysts are great at this kind of work because they take every hint of a data source from earlier meetings and explore it further.  Say someone mentions a system in passing.  A good data analyst records what they can about that new source at the time and comes back later to dig for more.

That follow-up seems ripe for automation.  Any time a new source of information is identified, we could queue it up for interrogation, by which I mean exploring the information schema of the source. That schema isn’t always exposed by the source system; sometimes it takes extra tooling to get the job done.  But automation can kick-start the process and then hand the work over to a human for further analysis.
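To make that concrete, here’s a minimal sketch of an automated interrogation pass, assuming the new source is a SQL Server database reachable over ODBC.  The connection string, server, and database names are placeholders I made up; a non-relational source would need its own interrogator.

```python
import pyodbc

def interrogate_source(conn_str: str) -> list:
    """Pull column-level metadata from a newly discovered relational source.
    Anything the standard INFORMATION_SCHEMA views expose gets captured here."""
    query = """
        SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE,
               CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;
    """
    conn = pyodbc.connect(conn_str)
    try:
        cursor = conn.cursor()
        cursor.execute(query)
        col_names = [d[0] for d in cursor.description]
        return [dict(zip(col_names, row)) for row in cursor.fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    # Placeholder connection details -- swap in the newly discovered source.
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=new-source-server;DATABASE=UnknownSource;Trusted_Connection=yes;"
    )
    for col in interrogate_source(conn_str):
        print(f"{col['TABLE_SCHEMA']}.{col['TABLE_NAME']}.{col['COLUMN_NAME']}: "
              f"{col['DATA_TYPE']} (nullable: {col['IS_NULLABLE']})")
```

The output of a pass like this could land in the catalog as a draft entry, flagged for a human to review and annotate.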

Collecting this information in both human-readable and machine-readable forms will be critical.  A solid metadata model can support both goals.

They Collect Business Rules

When analysts are in meetings, they’ll also learn about business rules, and these rules come in tremendous variety.  Some define which data is valid and which is bad; others define Service Level Agreements that control when developers can deploy solutions to different environments.  All of this information is useful, but are we collecting it so a machine can consume it later?  I usually find these rules buried in documentation and end up implementing them by hand in code or jobs.  I’ve never found them collected in a way I could reference.

The real challenge is defining a structure flexible enough to hold rules about subject areas we won’t know about until we learn them, while keeping those rules enforceable.  That sounds incredibly difficult, but are there techniques that could work?
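Here’s one rough idea, and I want to stress it’s only a sketch of my own, not a proven pattern: treat each rule as data, with a subject area, the human-readable description we’d put in documentation anyway, and a machine-evaluable predicate.  The Rule structure, field names, and sample rules below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A business rule captured as data instead of buried in documentation."""
    subject_area: str                   # e.g. "claims", "deployments" -- learned from SMEs
    description: str                    # the human-readable version for the docs
    rule_type: str                      # "data_quality", "sla", ...
    predicate: Callable[[dict], bool]   # the machine-enforceable check

# Two rules from very different subject areas living in the same structure.
RULES = [
    Rule(
        subject_area="claims",
        description="A claim amount must be a positive dollar value.",
        rule_type="data_quality",
        predicate=lambda row: row.get("claim_amount", 0) > 0,
    ),
    Rule(
        subject_area="deployments",
        description="Production deployments only happen on weekdays.",
        rule_type="sla",
        predicate=lambda ctx: ctx.get("day_of_week")
            in {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday"},
    ),
]

def violations(record: dict, subject_area: str) -> list:
    """Return the descriptions of every rule the record breaks."""
    return [r.description for r in RULES
            if r.subject_area == subject_area and not r.predicate(record)]

print(violations({"claim_amount": -50}, "claims"))
# -> ['A claim amount must be a positive dollar value.']
```

The flexibility comes from the subject_area tag: a brand-new subject area is just a new tag value, and enforcement is whatever the predicate can express.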

There’s far more to the job of a data analyst.  What tasks am I missing here? What are your thoughts on automation and extending the analysts’ abilities with AI?  Could we equip our best analysts to do even more?  Could we keep up with growing demand through a hybrid solution of humans and machines? Share your thoughts below and via Twitter.  I’m interested in what you think!
