U-SQL and ETL Processing

Posted on October 26, 2017 (updated November 14, 2022) by slowder

When you get started with Azure Data Lake Analytics, and U-SQL specifically, you may get a little confused. It looks like a mash-up of T-SQL and C#. Turns out, that's exactly what it is! You can find lots of information on MSDN, GitHub, or StackOverflow. Let's get started with some basics.

Variables, Data Types, and Case Sensitivity

One of the early demos I produced for my new Data Lake client was a script to pick up files landing in their Raw folder, and scrub out the sensitive information. At the top of this script, we define two variables, one for the input file and one for the output file.

DECLARE @InputFile string = @"/Raw/TestTenant/Demographic/Households/Households_201705_46201.csv";
DECLARE @OutputFile string = @"/Stage/TestTenant/Demographic/HouseHolds/Households_201705_46201.csv";

The DECLARE @InputFile looks like standard T-SQL, but the data type string isn't. Data types have to be listed in their C# forms. If you're a SQL guy like me, you're going to want a quick reference to translate back and forth. You may also not recognize the at symbol (@) in front of the values. That's a C# convention that tells the compiler to treat the string as a verbatim literal and not interpret any escape characters. That's important here, since I don't want my paths mangled by escape sequences. In the demo above I'm using Linux- or web-style paths, but you can reference paths Windows-style and include backslashes (\).
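Here's a minimal sketch of the difference; the Windows-style path below is made up purely to illustrate the verbatim-string prefix:

DECLARE @VerbatimPath string = @"\Raw\TestTenant\Demographic\sample.csv";    // @ keeps the backslashes literal
DECLARE @EscapedPath  string = "\\Raw\\TestTenant\\Demographic\\sample.csv"; // without @, each backslash has to be escaped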

There are three things to keep in mind as you start writing U-SQL. First, all your identifiers (variable, column, and later table names) are case sensitive. If you key in the right identifier but in the wrong case, you will get an error from the compiler. That's because this code is compiled down to C# before it's executed on multiple compute nodes.

Second, U-SQL’s reserved words need to be in all caps.  Luckily, that’s a convention I tried to stick to in T-SQL since it made my code a little easier to read.

Lastly, statements need to end in a semi-colon. That tells the compiler you're at the end of one statement and ready to move on to the next. In T-SQL, leaving the semi-colon off has been officially deprecated for years; you should have been terminating your statements with semi-colons already. So now's a good time to start that habit too, right?
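Here's a tiny sketch that pulls those three rules together (the variable itself is hypothetical):

// DECLARE is upper-case, the C# type name stays lower-case, and the statement ends in a semi-colon.
DECLARE @RowLimit int = 100;
// Referencing @rowlimit later would fail to compile: identifiers are case sensitive.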

Multi-Step Processing

U-SQL scripts are like T-SQL in that you can combine multiple statements into a single script to perform a complex operation. In my demo, after declaring the input and output files, I EXTRACT data from my source file.

@SourceData =
    EXTRACT
        [Column1] string,
        [Column2] string,
        [Column3] int?
    FROM @InputFile
    USING Extractors.Csv(silent:true, skipFirstNRows:1);

This is a little like declaring a Common Table Expression (CTE) in T-SQL: in U-SQL you store the rowset read from the input file in a variable. We could have several statements after this one and still reference @SourceData, so in that respect U-SQL is a little easier to work with. EXTRACT is one of a handful of expressions in U-SQL; in this case, we want to get data out of a file and use it in a later step. The really cool part is that EXTRACT can operate over one or many files. Even better, since U-SQL was built for parallel processing, the more files you have to extract and process at the same time, the more efficient the script becomes (if you crank up the parallelism).
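For example, here's a minimal sketch of a follow-on step that reuses @SourceData and writes the result to the output file declared earlier; the filter on [Column3] is made up purely for illustration:

@Filtered =
    SELECT [Column1],
           [Column2],
           [Column3]
    FROM @SourceData
    WHERE [Column3] > 0;

OUTPUT @Filtered
TO @OutputFile
USING Outputters.Csv(outputHeader:true);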

EXTRACT establishes schema on read, and the three columns I'm defining show how that happens. As ADLA reads my file, it's only looking for three columns: two strings followed by a nullable integer. The question mark after the data type is how you mark a type as nullable; most value types in C# require it before they can hold null values. Check out MSDN for more detail on when you'll need it... otherwise you can just add it when your script fails due to a null value.
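A quick sketch of the distinction, using hypothetical variables:

DECLARE @MaybeCount int? = null;   // value types need the ? suffix before they can hold a null
DECLARE @Label string = null;      // reference types like string can already hold null, no ? needed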

The FROM clause looks pretty similar to T-SQL. The only difference is that it refers to a file instead of a table.

The last clause is USING. This isn't like anything in T-SQL; it's closer to C#. Basically, it lets you reference a function or extension by name. In this case we use the built-in Extractors class to create our rowset from a file or collection of files. That class can extract data from general text files, comma-separated files (CSVs), or tab-separated files (TSVs). These extractors take several parameters. In my case, I wanted to keep reading and effectively ignore rows that don't match my expected format; that's what silent:true gives me. I also wanted to skip the header row, because in that row the [Column3] position holds the column name, a text value rather than an integer.

If I hadn’t passed that parameter, the Extractor would have failed!
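If your files use a different delimiter, the same pattern works with the more general Text extractor. Here's a sketch; the pipe delimiter is just an example:

@PipeDelimitedData =
    EXTRACT
        [Column1] string,
        [Column2] string,
        [Column3] int?
    FROM @InputFile
    USING Extractors.Text(delimiter:'|', silent:true, skipFirstNRows:1);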

Conclusion

At this point in our script, we have set variables and read a file into a rowset. In my next entry, I'll show you how to start building more advanced transformations, like hashing values. After that, I'll show how we could automatically generate this script from some simple metadata! In the meantime, if you have questions, please send them in! I'm here to help.
