De-duping by Self join

Posted on January 11, 2012 by slowder

Let’s continue the series on getting rid of duplicate data. Today we’re going to use self joins to get rid of them. A self join is a join from a table back to itself. That might seem odd, but by the end of this article you’ll see why it’s useful. I use self joins to de-dupe when the table has a primary key, yet that primary key didn’t prevent duplicates from being inserted. Let’s set up a demo table now.

CREATE TABLE TroubledTableWithPK (
      ttPrimaryKey INT IDENTITY(1,1) PRIMARY KEY
    , FirstName VARCHAR(50)
    , LastName VARCHAR(50)
    , email VARCHAR(255)
)

Now, let’s copy some data from the backup table we created before.

INSERT INTO TroubledTableWithPK
(FirstName, LastName, email)
SELECT FirstName, LastName, email
FROM OriginalTroubledTable

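If you don’t have OriginalTroubledTable from the earlier post in this series, you can seed a few duplicates directly instead. This is only a stand-in for the real data (the multi-row VALUES syntax assumes SQL Server 2008 or later); apart from the Elit Feugiat address used later in the article, the rows below are made up.

INSERT INTO TroubledTableWithPK
(FirstName, LastName, email)
VALUES
      ('Elit', 'Feugiat', 'elit.a.feugiat@Etiam.com')  -- duplicated on purpose
    , ('Elit', 'Feugiat', 'elit.a.feugiat@Etiam.com')
    , ('Elit', 'Feugiat', 'elit.a.feugiat@Etiam.com')
    , ('Anna', 'Smith', 'anna.smith@example.com')      -- duplicated on purpose
    , ('Anna', 'Smith', 'anna.smith@example.com')
    , ('Uma', 'Nique', 'uma.nique@example.com')        -- no duplicate
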
Take a look at the duplicates in that table.

SELECT email, COUNT(*)
FROM TroubledTableWithPK
GROUP BY email
HAVING COUNT(*) > 1

Yeah, we have duplicates! Take a closer look at one of these duplicates and you can clearly see that while the primary key uniquely identifies each row, it did nothing to prevent duplicate email addresses from being added.

SELECT *
FROM TroubledTableWithPK
WHERE email ='elit.a.feugiat@Etiam.com'

Let’s remove all the duplicate records that aren’t the “last” copy of the duplicate data, where the last copy is the record with the highest ttPrimaryKey value. First, find that value for our sample address.

SELECT email, MAX(ttPrimaryKey)
FROM TroubledTableWithPK
WHERE email ='elit.a.feugiat@Etiam.com'
GROUP BY email

If we join TroubledTableWithPK to a subquery of TroubledTableWithPK, we can show all the records we want to remove.

SELECT tt.*
FROM TroubledTableWithPK tt
INNER JOIN (
    SELECT email, MAX(ttPrimaryKey) AS maxTTPrimaryKey
    FROM TroubledTableWithPK
    GROUP BY email) maxvals
    ON tt.email = maxvals.email
    AND tt.ttPrimaryKey != maxvals.maxTTPrimaryKey
WHERE tt.email ='elit.a.feugiat@Etiam.com'

Compare these results with the SELECT * we ran earlier. Joining on email, and keeping only the rows where ttPrimaryKey != MAX(ttPrimaryKey), shows us the “early” duplicates in the table. Now all we have to do is change the SELECT to a DELETE statement.

DELETE FROM tt
FROM TroubledTableWithPK tt
INNER JOIN (
    SELECT email, MAX(ttPrimaryKey) AS maxTTPrimaryKey
    FROM TroubledTableWithPK
    GROUP BY email) maxvals
    ON tt.email = maxvals.email
    AND tt.ttPrimaryKey != maxvals.maxTTPrimaryKey
WHERE tt.email ='elit.a.feugiat@Etiam.com'

So now when we run the SELECT *, no more duplicates!

SELECT *
FROM TroubledTableWithPK
WHERE email ='elit.a.feugiat@Etiam.com'

So to delete all the duplicates, not just the ones for Elit Feugiat, all we have to do is remove the WHERE clause.

DELETE FROM tt
FROM TroubledTableWithPK tt
INNER JOIN (
    SELECT email, MAX(ttPrimaryKey) AS maxTTPrimaryKey
    FROM TroubledTableWithPK
    GROUP BY email) maxvals
    ON tt.email = maxvals.email
    AND tt.ttPrimaryKey != maxvals.maxTTPrimaryKey

And all our duplicates not-so-magically disappear!

SELECT email, COUNT(*)
FROM TroubledTableWithPK
GROUP BY email
HAVING COUNT(*) > 1

One caveat: check the execution plan before you run this method against a large table. The table scan, the implicit sort in the subquery, and the nested loops joining back to that subquery can all get expensive as the record count in TroubledTableWithPK grows, and those operations could block any other process trying to read records from the table during the delete.

I’d also like to point out that if you’re deleting a large number of records, you could find your transaction log filling up. Breaking the delete into smaller batches, as sketched below, can help with both the blocking and the log growth.

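Here’s a minimal batched-delete sketch of that idea, assuming SQL Server 2008 or later; the 5,000-row batch size is arbitrary and worth tuning for your system.

DECLARE @rowsDeleted INT = 1

WHILE @rowsDeleted > 0
BEGIN
    -- Delete up to 5,000 "early" duplicates at a time so each
    -- transaction stays small. Deleting non-MAX rows never changes
    -- an email's MAX(ttPrimaryKey), so recomputing it each pass is safe.
    DELETE TOP (5000) FROM tt
    FROM TroubledTableWithPK tt
    INNER JOIN (
        SELECT email, MAX(ttPrimaryKey) AS maxTTPrimaryKey
        FROM TroubledTableWithPK
        GROUP BY email) maxvals
        ON tt.email = maxvals.email
        AND tt.ttPrimaryKey != maxvals.maxTTPrimaryKey

    -- Stop once a pass deletes nothing.
    SET @rowsDeleted = @@ROWCOUNT
END
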
Even batched, deletes are going to cost you more than inserting the distinct records into a temp table and truncating the original table. They cost even more than selecting the distinct records into a new table, then renaming the new table to the old name.

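For completeness, here’s a rough sketch of that last approach. The TroubledTableDistinct name is made up for the example, and keep in mind that SELECT ... INTO won’t carry over the identity property (since we leave that column out), the primary key, or any indexes, so you’d recreate those after the swap. It also assumes duplicate rows match on every column; if only email repeats, keep the MAX(ttPrimaryKey) rows as we did above.

-- Copy one row per duplicate set into a new table.
SELECT DISTINCT FirstName, LastName, email
INTO TroubledTableDistinct
FROM TroubledTableWithPK

-- Swap the de-duped table in for the old one.
DROP TABLE TroubledTableWithPK
EXEC sp_rename 'TroubledTableDistinct', 'TroubledTableWithPK'
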
Now you have another tool available when you’re faced with duplicate data.  Just one more method to go in this series: CTEs.  Don’t worry, they’re not as hard as you might think.  Until then, if you have any questions, let me know.  I’m here to help!
