Studying for the 70-457: Part Deux

Posted on January 30, 2013 (updated May 23, 2017) by slowder

January 31 is coming up pretty fast, just three more days, so I want to knock out the rest of the study guide in the next couple of days so that I’m ready Thursday! Let’s jump right in!

Work with Data

  • Query data by using SELECT statements.
    • This objective may include but is not limited to: use the ranking function to select top(X) rows for multiple categories in a single query; write and perform queries efficiently using the new code items such as synonyms and joins (except, intersect); implement logic which uses dynamic SQL and system metadata; write efficient, technically complex SQL queries, including all types of joins versus the use of derived tables; determine what code may or may not execute based on the tables provided; given a table with constraints, determine which statement set would load a table; use and understand different data access technologies; CASE versus ISNULL versus COALESCE

You want to know what’s funny? I seriously thought the LEFT join was the default join. I had to actually look it up on MSDN to find out the INNER join is the default. I’ve become so used to always typing the join type, LEFT, INNER, RIGHT, CROSS, that when I just see JOIN, I think it’s a LEFT join.

Dynamic SQL and performance issues: keep in mind that if you’re going to use dynamic SQL, you suffer a performance hit. What I mean is you can’t reuse execution plans when you’re creating dynamic SQL. Each time you run the dynamic SQL, the engine is going to try to create an execution plan for that call. You can get around that by changing your server setting to “optimize for ad hoc workloads”. This will improve your performance if you do a lot of ad hoc requests. My personal preference would be to understand why you’re using so much dynamic SQL and try to address the needs that caused the dynamic queries to be used. But if you simply can’t address the root cause… the optimize option is there.

DMVs: starting with SQL 2005 you get a bunch of dynamic management views and functions you can use to figure out what’s going on inside the server. You can find out things like wait types and how long you’ve been waiting, and you can find your worst performing queries by run time, or the execution plans being used. Using DMVs is what gets me around having to break out Profiler every time someone needs me to figure out why the server is slow. (There’s a sketch of both of these points after the ALL/ANY example below.)

Correlated subqueries: I’ve been using IN and EXISTS with subqueries for years. It’s an easy way to figure out what’s missing from a load. But there’s more you can do: you can compare a single value to a set. Let’s say you wanted to see all the salespeople that sold less than the top 10 salespeople (I know this is a contrived example, but work with me here).

SELECT FirstName, LastName
FROM SalesPeople
WHERE SalesTotal < ALL (SELECT SalesTotal FROM v_TopTenSalesPeople)

The ALL keyword requires the row’s value to be less than all the values in the view v_TopTenSalesPeople. The ANY or SOME keyword would only require the sales total to be less than one of those values, so you’d get back the #2 through #10 salespeople in your results, but not the top salesperson. With ANY or SOME, you can construct statements that give you the same results as an EXISTS statement. CASE vs ISNULL vs COALESCE: check out this article over on Stack Exchange.
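Going back to the dynamic SQL and DMV points above, here’s a minimal sketch of both: turning on “optimize for ad hoc workloads”, then pulling the top waits and the worst performing cached queries by total run time out of the DMVs. The TOP (10) cutoffs and the millisecond conversion are just illustrative choices, not anything the exam requires.

-- Reduce plan-cache overhead from single-use ad hoc plans (advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Top waits since the last restart (or since the wait stats were cleared).
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Worst performing cached queries by total run time.
-- total_elapsed_time is reported in microseconds.
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;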

  • Implement sub-queries.
    • This objective may include but is not limited to: identify problematic elements in query plans; pivot and unpivot; apply operator; cte statement; with statement

PIVOT/UNPIVOT: for now, check out the MSDN article.
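Here’s a minimal sketch of PIVOT, using a hypothetical SalesByQuarter table (SalesPerson, SalesQuarter, Amount) that isn’t from the exam material, just something to show the shape of the syntax:

-- Turn one row per salesperson per quarter into one row per salesperson,
-- with a column per quarter.
SELECT SalesPerson, [Q1], [Q2], [Q3], [Q4]
FROM
(
    SELECT SalesPerson, SalesQuarter, Amount
    FROM SalesByQuarter
) AS src
PIVOT
(
    SUM(Amount)
    FOR SalesQuarter IN ([Q1], [Q2], [Q3], [Q4])
) AS pvt;

UNPIVOT is the reverse: it folds those four quarter columns back into (SalesQuarter, Amount) rows.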

  • Implement data types.
    • This objective may include but is not limited to: use appropriate data; understand the uses and limitations of each data type; impact of GUID (newid, newsequentialid) on database performance, when to use which data type for columns

I’ve seen it a lot: using a GUID column as the primary key (and, by default, a clustered index). NEWID() doesn’t provide a sequential value; it’s random. So you’re going to get a ton of page splits on your insert operations. That’s where people come up with the idea of using NEWSEQUENTIALID(). It is sequential, but it’s still not a great clustering key. When I need a globally unique identifier on a row, it’s fine to make that column the primary key; just find out what the natural sort order is for that data and cluster on that instead.

That way the normal operation of inserting new data will always append your new rows to the last page (or make a new page at the end), rather than suffering the performance hit of having to split a page, move some data around, and then insert your data. Trust me, when you start dealing with heavy insert loads, you’ll thank me.
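As a sketch of that idea, here’s a hypothetical Orders table (not from the post) that keeps the GUID as the primary key but makes it nonclustered, and puts the clustered index on a column that matches the natural insert order:

-- GUID stays unique and usable as the primary key,
-- but the clustered index is on an always-increasing column.
CREATE TABLE dbo.Orders
(
    OrderID uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
    OrderDate datetime2 NOT NULL
        CONSTRAINT DF_Orders_OrderDate DEFAULT SYSUTCDATETIME(),
    CustomerID int NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
);

CREATE CLUSTERED INDEX CIX_Orders_OrderDate ON dbo.Orders (OrderDate);

New rows land at the end of the clustered index because OrderDate only ever goes up, so you keep the global uniqueness of the GUID without paying for random-page inserts.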

 

I started this study guide Monday, and now it’s Wednesday… it looks like I’m not going to finish the study guide before the exam. The good thing is I have more exams scheduled later in the year, so I have time to create plenty more study guides. If you’re looking for something in particular, let me know. I’ll help you find it!

 
