SQL201–The OUTPUT Clause

Posted on August 11, 2011 by slowder

Starting with SQL Server 2005, T-SQL added a new clause to your INSERT, UPDATE, and DELETE statements (and, as of SQL Server 2008, MERGE): the OUTPUT clause. If you’re not using it yet, it might be because you’re not sure what you could use it for. Let’s go through a scenario where the OUTPUT clause can save you from a RBAR (Row By Agonizing Row) operation.

INSERT a record and retrieve the IDENTITY created

I’m going to insert a record into AdventureWorks.Sales.SalesOrderDetail, and return the SalesOrderDetailID created by the INSERT.

INSERT INTO Sales.SalesOrderDetail
(
      SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , rowguid, ModifiedDate
)
SELECT TOP 1
      43659 AS SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , NEWID() AS rowguid, GETDATE() AS ModifiedDate
FROM Sales.SalesOrderDetail
WHERE
      SalesOrderID = 43660;

SELECT SCOPE_IDENTITY() AS CreatedIdentity;

Simple enough, right?

What if I wanted to INSERT more than one record?

Well, if you only had access to SCOPE_IDENTITY(), you’d have to work row by row, inserting one record at a time and selecting each identity as it was created. If you’re only doing one or two records, that might be fine. But as SQL developers you have to think in sets of thousands.
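
To make that concrete, here’s a rough sketch of what the row-by-row alternative looks like. The cursor and the @CreatedIdentities table variable are mine, for illustration only:

-- Walk the source rows one at a time, inserting each row and capturing
-- SCOPE_IDENTITY() after every single-row INSERT.
DECLARE @CreatedIdentities TABLE (SalesOrderDetailID INT);

DECLARE @OrderQty SMALLINT, @ProductID INT, @SpecialOfferID INT,
        @UnitPrice MONEY, @UnitPriceDiscount MONEY;

DECLARE source_rows CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID = 43660;

OPEN source_rows;
FETCH NEXT FROM source_rows
    INTO @OrderQty, @ProductID, @SpecialOfferID, @UnitPrice, @UnitPriceDiscount;

WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO Sales.SalesOrderDetail
        (SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, rowguid, ModifiedDate)
    VALUES
        (43659, @OrderQty, @ProductID, @SpecialOfferID, @UnitPrice, @UnitPriceDiscount, NEWID(), GETDATE());

    -- One identity captured per single-row INSERT.
    INSERT INTO @CreatedIdentities (SalesOrderDetailID)
    VALUES (SCOPE_IDENTITY());

    FETCH NEXT FROM source_rows
        INTO @OrderQty, @ProductID, @SpecialOfferID, @UnitPrice, @UnitPriceDiscount;
END;

CLOSE source_rows;
DEALLOCATE source_rows;

SELECT SalesOrderDetailID FROM @CreatedIdentities;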

When an INSERT touches thousands of rows, the OUTPUT clause can hand you back every identity it created in a single pass. Let’s put OUTPUT to use. First, a quick check of how many rows we’ll be copying:

SELECT COUNT(*)
FROM Sales.SalesOrderDetail
WHERE
      SalesOrderID = 43660;
--2 rows

We’re going to insert two rows into Sales.SalesOrderDetail.

INSERT INTO sales.salesorderdetail
(
      SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , rowguid, ModifiedDate
)
OUTPUT
      inserted.SalesOrderDetailID
SELECT
      43659 AS SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , NEWID() AS rowguid, GETDATE() AS ModifiedDate
FROM Sales.SalesOrderDetail
WHERE
      SalesOrderID = 43660;

The syntax for OUTPUT is pretty simple: you just add the clause between the INSERT INTO column list and the SELECT or VALUES clause. Then when you run it, you get the identities inserted, right?

Msg 334, Level 16, State 1, Line 1
The target table 'sales.salesorderdetail' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.

Close… but no cigar. This is the one catch with the OUTPUT clause: it’s not an option if your target table has an enabled trigger. You could get around that with an OUTPUT INTO clause; we’ll cover that option later. For now, I’ll just remind you that you can DISABLE a TRIGGER.

DISABLE TRIGGER iduSalesOrderDetail ON sales.salesorderdetail;
GO
INSERT INTO sales.salesorderdetail
(
      SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , rowguid, ModifiedDate
)
OUTPUT
      inserted.SalesOrderDetailID
SELECT
      43659 AS SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , NEWID() AS rowguid, GETDATE() AS ModifiedDate
FROM Sales.SalesOrderDetail
WHERE
      SalesOrderID = 43660;
GO
ENABLE TRIGGER iduSalesOrderDetail ON sales.salesorderdetail;
GO

And now you get the two identities created by this INSERT statement.

That’s just the beginning.

Over the past few weeks, I’ve worked with developers to build queue tables and helped them process the records in those tables. One process adds records to the queue. Another picks up a couple thousand at a time, then does work based on that set. Before, we were getting lots of locking and blocking, and quite a few deadlocks.

But with some creative indexing and OUTPUT clauses, we were able to eliminate all the contention between the process that adds records to the queue and the process that removes them.
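
Here’s a rough sketch of the dequeue side of that pattern. The dbo.WorkQueue table and its columns are made up for illustration; the point is that claiming a batch and handing it to the caller happen in one atomic statement:

-- Hypothetical queue table: dbo.WorkQueue (QueueID INT, Payload NVARCHAR(MAX)).
DECLARE @Batch TABLE (
      QueueID INT,
      Payload NVARCHAR(MAX)
);

-- Claim up to 2,000 rows and capture them in the same statement.
-- READPAST lets other consumers skip rows this session has locked
-- instead of blocking behind them.
DELETE TOP (2000)
FROM dbo.WorkQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.QueueID, deleted.Payload
INTO @Batch (QueueID, Payload);

-- Do the work from the table variable; the queue table isn’t read again.
SELECT QueueID, Payload
FROM @Batch;

Because the rows leave the queue and land in @Batch in a single statement, the producer and the consumers only ever contend over the rows actually being claimed.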

I’ve also built archival processes using the OUTPUT clause: when the process deletes rows from a table, the OUTPUT INTO clause writes those deleted rows into an archive table.
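
A sketch of that kind of archive step might look like this (dbo.OrderLog and dbo.OrderLogArchive are hypothetical names):

-- Move rows older than 90 days out of the active table; OUTPUT INTO
-- writes each deleted row to the archive table in the same statement,
-- so nothing can slip through between the delete and the archive insert.
DELETE FROM dbo.OrderLog
OUTPUT deleted.OrderLogID, deleted.OrderDate, deleted.Detail
INTO dbo.OrderLogArchive (OrderLogID, OrderDate, Detail)
WHERE OrderDate < DATEADD(DAY, -90, GETDATE());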

The OUTPUT INTO clause?

Yup. It’s just like the OUTPUT clause, except that it puts the data into a table. That table can be a real table or a table variable. Let’s take the OUTPUT example from above and send its results into a table variable called @InsertedIdentities.

DISABLE TRIGGER iduSalesOrderDetail ON sales.salesorderdetail;
GO

DECLARE @InsertedIdentities TABLE (
      SalesOrderDetailID INT
);

INSERT INTO sales.salesorderdetail
(
      SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , rowguid, ModifiedDate
)
OUTPUT
      inserted.SalesOrderDetailID
INTO @InsertedIdentities
SELECT
      43659 AS SalesOrderID, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount
    , NEWID() AS rowguid, GETDATE() AS ModifiedDate
FROM Sales.SalesOrderDetail
WHERE
      SalesOrderID = 43660;

SELECT *
FROM @InsertedIdentities;
GO
ENABLE TRIGGER iduSalesOrderDetail ON sales.salesorderdetail;
GO

Note that the INTO part of the OUTPUT INTO clause comes right after the list of columns you’re outputting, so the whole OUTPUT … INTO block sits between the INSERT column list and the SELECT.

Conclusion

Get to know the OUTPUT clause. It’s incredibly useful, especially when scanning a table to figure out which rows you just updated would mean reading through millions of rows. With OUTPUT, you get the data straight from the subset of rows you just touched, which is a serious performance gain once you start talking about VLDBs (very large databases).
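
For example, here’s a minimal sketch of an UPDATE that reports back exactly which rows it changed, without a second pass over the table (OUTPUT INTO is used so the table’s trigger doesn’t get in the way):

DECLARE @UpdatedRows TABLE (
      SalesOrderDetailID INT,
      OldOrderQty        SMALLINT,
      NewOrderQty        SMALLINT
);

UPDATE Sales.SalesOrderDetail
SET OrderQty = OrderQty + 1
OUTPUT inserted.SalesOrderDetailID, deleted.OrderQty, inserted.OrderQty
INTO @UpdatedRows (SalesOrderDetailID, OldOrderQty, NewOrderQty)
WHERE SalesOrderID = 43660;

-- Only the rows this UPDATE touched, with their before and after values.
SELECT SalesOrderDetailID, OldOrderQty, NewOrderQty
FROM @UpdatedRows;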

As always, if you have any questions, send them in…I’m here to help!
