De-duping by temp table

Posted on January 9, 2012 (updated January 17, 2012) by slowder

Last time, we learned that manually removing duplicate rows from a table can be a long, tedious process, especially when you have as many duplicates as I created in our troubled table.  Today we’re going to look at using temp tables to remove the extra rows.  I gave you an overview of this already, so today we’ll explore several versions of this pattern and discuss the pros and cons of each step.


Before we get started, my demo is going to use a “real” table instead of a #temp table or a @table variable.  I’ve talked about the differences between them before, and I’ll revisit that topic later.  For the purposes of this article, our temp table is just a table we’re going to drop when we’re finished using it.

Creating your temporary table

SELECT DISTINCT
     FirstName
   , LastName
   , email
INTO TempTroubledTable
FROM TroubledTable

or

CREATE TABLE TempTroubledTable  (
      FirstName VARCHAR(50)
    , LastName VARCHAR(50)
    , email VARCHAR(255)
)

INSERT INTO TempTroubledTable
(FirstName, LastName, email)
SELECT DISTINCT
   FirstName, LastName, email
FROM TroubledTable

There are discussions all over the internet on why you should use CREATE TABLE and then INSERT INTO rather than SELECT…INTO; it comes down to the locks that are held while the data is copied over.  Generally you’re removing duplicates on an ad-hoc basis, so you don’t think much about code re-use while you’re doing the work.  But if you’re working on a high-transaction-volume system and you know resources on your server are scarce, I would recommend creating your table before inserting the data.

Either way, we now have a distinct list of the records from our source table.
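If you want a quick sanity check before you touch the source table, you can compare the row count in the new table to the distinct row count in the source. This is just an optional sketch using the same three columns we copied above; the two counts should match.

-- These two counts should be equal before we go any further.
SELECT COUNT(*) AS TempRows
FROM TempTroubledTable

SELECT COUNT(*) AS DistinctSourceRows
FROM (
    SELECT DISTINCT FirstName, LastName, email
    FROM TroubledTable
) d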

Removing the duplicates from the source table


Again, we have a choice: DELETE or TRUNCATE.  To make that call you need to think about how the source table is used.

  1. Do you have exclusive access to the source table? Will anyone try to read or write to the table while you delete the duplicate rows?
    • If you can get exclusive access, TRUNCATE will remove all the rows (and therefore the duplicates) faster than a DELETE, due to how the transaction log records the operation.
    • If you can’t get exclusive access, DELETE lets you remove the records while other users keep their access to the table. Beware, though: you could run into blocking from those other users hitting the table at the same time, and you may need to delete the duplicate rows in batches (see the sketch after this list).
  2. Is your source table very large? Do you have 100,000 rows? A million rows? More? Or is your transaction log size limited in some way?
    • If you have a very large table, the difference in how TRUNCATE is logged could save you from having your transaction log fill up your drive. That becomes a very real risk the more rows you have to remove.
    • If the source table isn’t very large, DELETE will perform well enough that the difference won’t matter.
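If you do go the DELETE route on a busy system, deleting in batches keeps each transaction small and gives other sessions room to work between passes. Here is a minimal sketch of that idea, matching on email the same way the full DELETE below does; the 10,000 batch size is just an assumed starting point you would tune for your own system.

-- Remove matching rows in batches to keep each transaction (and any blocking) small.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) tt
    FROM TroubledTable tt
    INNER JOIN TempTroubledTable ttt
        ON tt.email = ttt.email

    -- Stop once a pass deletes nothing.
    IF @@ROWCOUNT = 0
        BREAK
END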

So, depending on how you answered the questions above, you could use either of the following commands to remove the duplicates.

DELETE tt
FROM TroubledTable tt
INNER JOIN TempTroubledTable ttt
    ON tt.email = ttt.email

This DELETE could be quite costly: both the join and the transaction log are working against us.  (The join is really only needed if you have to keep some of the rows.)  If you can get exclusive access and you’re removing everything anyway, the TRUNCATE is the cheaper option:

TRUNCATE TABLE TroubledTable

Either way, once you verify you’ve removed all the duplicate rows, you’re ready to put the unique rows back into your source table.

SELECT email, COUNT(*)
FROM troubledtable
GROUP BY email
HAVING COUNT(*) > 1

Putting the unique rows back into your source table

Since we only selected the DISTINCT rows into our temp table, we can restore the rows with a simple:

INSERT INTO TroubledTable
SELECT * FROM TempTroubledTable

And now we can get rid of our temp table.

DROP TABLE TempTroubledTable

What if the INSERT back to the source fails?

Occasionally the transaction log is very limited in space, or the fully logged INSERT is just too slow for my liking.  There is another way to get the data from your temp table back to the source: you can rename the temp table.  Of course, that means you have to drop the current source table first, then do the rename.

DROP TABLE TroubledTable
GO
EXEC sp_rename @objname = 'TempTroubledTable', @newname = 'TroubledTable'

The great thing about this approach is that the rename is a data definition language (DDL) command, so it runs much faster than the data manipulation language (DML) INSERT…SELECT.  It’s something to keep in mind when you’re manipulating data like this.
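One caveat with the rename trick: the table you end up with only has the structure you gave the temp table, so any indexes, constraints, or permissions that existed on the original TroubledTable have to be recreated afterward. A minimal sketch, assuming (purely for illustration) the original table had a nonclustered index on email:

-- Recreate whatever indexes or constraints the original table had.
-- The index name and key column here are assumptions for illustration.
CREATE NONCLUSTERED INDEX IX_TroubledTable_email
    ON TroubledTable (email)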

So, verify you’ve cleared all the duplicates out of the source table:

SELECT email, COUNT(*)
FROM troubledtable
GROUP BY email
HAVING COUNT(*) > 1

And make sure you still have data in your source table:

SELECT TOP 100 *
FROM TroubledTable

And now you’re done!

Next time, we’ll cover how to do this by joining the source table to itself.  It’s yet another way you can remove duplicates from your tables.  Until then, if you have any questions, please let me know!
