January 31st is the day I’ve set to take the 70-457: Transition Your MCTS on SQL Server 2008 to MCSA: SQL Server 2012, Part 1. So let’s jump right into the material covered. I’m going to link to the articles I’ve already published; for the rest, I’ll dive into the topics I’m unsure of or feel I need to review before the exam. If you’ve got any questions on the material below, hit me up on Twitter or email, and we’ll go through them together. Working through the material and discussing it really helps cement it in my mind.
Create Database Objects
- Create and alter tables using T-SQL syntax (simple statements).
- This objective may include but is not limited to: create tables without using the built-in tools; ALTER; DROP; ALTER COLUMN; CREATE
- I’m assuming that when they say “without using the built-in tools” they mean you’re going to have to be able to do these things without using the GUI.
- If you haven’t started using the Visual Studio 2012 data tools, they help you move from GUI design to T-SQL design by splitting the screen and showing you how the GUI affects the code, and how the code affects the GUI designer.
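For reference, here’s a minimal sketch of the statements that objective names, against a hypothetical dbo.Customer table (the table and column names are just for illustration):

```sql
-- Create a table without the GUI, straight T-SQL.
CREATE TABLE dbo.Customer
(
    CustomerID  INT          NOT NULL IDENTITY(1,1) PRIMARY KEY,
    FirstName   NVARCHAR(50) NOT NULL,
    LastName    NVARCHAR(50) NOT NULL,
    CreatedDate DATETIME2(0) NOT NULL DEFAULT SYSDATETIME()
);
GO

-- Add a new column with ALTER TABLE ... ADD.
ALTER TABLE dbo.Customer ADD Email NVARCHAR(254) NULL;
GO

-- Change an existing column's definition with ALTER COLUMN.
ALTER TABLE dbo.Customer ALTER COLUMN FirstName NVARCHAR(100) NOT NULL;
GO

-- Remove the table entirely.
DROP TABLE dbo.Customer;
```

If you can write those four statements cold, without reaching for the table designer, you’ve covered the “simple statements” part of this objective.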
- Design views.
- This objective may include but is not limited to: ensure code non regression by keeping consistent signature for procedure, views, and function (interfaces); security implications
OK, so what does “ensure code non regression by keeping consistent signature for procedure” mean? Have you ever wanted to make a change to a view, procedure, etc., and the “developer” says NO? Think about why he said no: he doesn’t want to go into his code to make changes. This objective asks you to show ways of altering views so that his code wouldn’t have to change. Some of the tricks I’ve used in the past to accomplish this are:
- Create a view with the same column names and data types a programmer was expecting. Rename the table to something new, then name the view the original table name. Then when the code runs, it uses the view instead of the table. Caution: if the programmer was expecting to do updates on that base table, you’ll have to take extra steps to make sure ALL that functionality is maintained in the view.
- If you have a view that’s not fast, and you make tweaks to make it fast, you have to keep the output column names and data types the same, so that you don’t impact any programs hitting that view.
- If you change a procedure, make sure the required inputs are the same. You can add optional parameters, but all the existing inputs and outputs have to stay the same, otherwise you’ll impact the programs calling that procedure.
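The first trick above, swapping a table out for a same-named view, can be sketched like this (dbo.Orders and its columns are hypothetical; the point is that the view keeps the old name, column names, and data types):

```sql
-- The application queries dbo.Orders directly, and we want to restructure
-- without breaking that code. First, move the table out of the way.
EXEC sp_rename 'dbo.Orders', 'Orders_Base';
GO

-- The view takes over the original name and presents the exact same
-- column names and data types the calling code expects.
CREATE VIEW dbo.Orders
AS
SELECT OrderID, CustomerID, OrderDate, TotalDue
FROM dbo.Orders_Base;
GO
```

Existing SELECT statements against dbo.Orders keep working unchanged. As the caution above notes, if the code also INSERTs or UPDATEs through that name, you’ll need to verify the view stays updatable (or add INSTEAD OF triggers to handle it).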
OK, now think about the security implications of designing views. Consider this: you have a table with Social Security Numbers (I know, bad juju), and you want to give users access to the table without exposing the full SSNs. You could revoke access to the base table, create a view that exposes only the last four digits of the Social Security Number, and give them SELECT access to that view.
The idea they’re testing you on is: views aren’t just for giving you different insights into the actual columns of your tables — they can also give you multiple security configurations!
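A minimal sketch of that SSN scenario, assuming a hypothetical dbo.Employee table and a ReportingRole database role:

```sql
-- Keep the role away from the base table with the full SSN column.
DENY SELECT ON dbo.Employee TO ReportingRole;
GO

-- Expose only the columns (and the portion of SSN) the role should see.
CREATE VIEW dbo.Employee_Safe
AS
SELECT EmployeeID,
       FirstName,
       LastName,
       RIGHT(SSN, 4) AS SSNLastFour   -- only the last four digits
FROM dbo.Employee;
GO

-- Grant access to the view instead of the table.
GRANT SELECT ON dbo.Employee_Safe TO ReportingRole;
```

Members of ReportingRole can query the view because, with a common owner on the view and the table, ownership chaining means they never need rights on dbo.Employee itself.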
- Create and alter DML triggers.
- This objective may include but is not limited to: inserted and deleted tables; nested triggers; types of triggers; update functions; handle multiple rows in a session; performance implications of triggers
When they’re asking about triggers, you want to keep a couple things in mind:
- Performance — make sure you’re designing your triggers to be quick and light on resources. Make sure you understand the trigger isn’t fired row by row; it’s fired once per statement. So keep in mind all the rows affected by that statement are in the pseudo-tables inserted and deleted. They contain the rows inserted by the statement or deleted by the statement. So if you’re building an insert trigger, you’ll only have access to inserted. If you’re building a delete trigger, then you only have access to deleted. But if you’re building an update trigger, the old versions of the rows will be in the deleted table, and the new versions are in the inserted table. When used properly, these tables can limit the scope of your actions, and greatly improve the performance of your trigger.
- Order of operations — If you add a trigger to a table, both the table’s changes and the trigger’s changes have to be written to the transaction log before the transaction is complete. If the trigger fails, then the operation on the table fails. If you have multiple triggers being fired for a single operation, all the triggers will have to commit before the operation is successful.
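Here’s a sketch of a set-based update trigger that uses both pseudo-tables; the dbo.Product and dbo.ProductPriceAudit tables are hypothetical, but the inserted/deleted join pattern is the one to remember for multi-row statements:

```sql
-- Audit price changes. Joining inserted to deleted handles a single
-- UPDATE that touches many rows; no cursor or row-by-row logic needed.
CREATE TRIGGER trg_Product_PriceAudit
ON dbo.Product
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.ProductPriceAudit (ProductID, OldPrice, NewPrice, ChangedAt)
    SELECT d.ProductID,
           d.Price,            -- old value comes from deleted
           i.Price,            -- new value comes from inserted
           SYSDATETIME()
    FROM inserted AS i
    INNER JOIN deleted AS d
        ON d.ProductID = i.ProductID
    WHERE i.Price <> d.Price;  -- only log rows whose price actually changed
END;
```

Because the trigger fires once per statement, an UPDATE affecting 10,000 rows produces one trigger execution and one set-based insert into the audit table, which is exactly the performance point above.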
They may also ask you about nested triggers. There is a server-wide option you have to set to allow nested triggers. Read http://msdn.microsoft.com/en-us/library/ms190739(v=sql.110).aspx. This isn’t the answer they’re going to look for, but in my experience, avoid nested triggers unless you’re willing to commit a lot of time to testing every variation of events that will fire those triggers. Otherwise you can get a recursion overflow (triggers can nest to a maximum of 32 levels).
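That server-wide option is set through sp_configure:

```sql
-- 'nested triggers' is ON (1) by default; setting it to 0 stops
-- AFTER triggers from firing other AFTER triggers server-wide.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
```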
While nested triggers could be used to handle self-referencing table issues, I’ve had better luck reading the data in so that the root nodes are inserted first, then the next level down, looping further and further down until all the levels are inserted. That process is easier to log than nested triggers, and it’s also easier to kick error rows out to an error handler when you order the inserts.
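The level-by-level load I’m describing can be sketched like this, assuming a hypothetical self-referencing dbo.Category table and a staging.Category table with a Loaded flag (all names are illustrative):

```sql
-- Load a self-referencing hierarchy roots-first, one level per pass,
-- so a parent row always exists before its children are inserted.
WHILE EXISTS (SELECT 1 FROM staging.Category WHERE Loaded = 0)
BEGIN
    -- Insert rows whose parent is NULL (roots) or already loaded.
    INSERT INTO dbo.Category (CategoryID, ParentID, Name)
    SELECT s.CategoryID, s.ParentID, s.Name
    FROM staging.Category AS s
    WHERE s.Loaded = 0
      AND (s.ParentID IS NULL
           OR EXISTS (SELECT 1 FROM dbo.Category AS c
                      WHERE c.CategoryID = s.ParentID));

    -- Mark the rows that made it in on this pass.
    UPDATE s
    SET    s.Loaded = 1
    FROM   staging.Category AS s
    WHERE  s.Loaded = 0
      AND  EXISTS (SELECT 1 FROM dbo.Category AS c
                   WHERE c.CategoryID = s.CategoryID);

    -- If nothing loaded this pass, the leftovers are orphans:
    -- break out and hand them to the error handler instead of looping forever.
    IF @@ROWCOUNT = 0 BREAK;
END;
```

Anything still sitting in staging with Loaded = 0 after the loop is an error row pointing at a missing parent, which is exactly what makes this approach easier to log than untangling nested triggers.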
Ok, that’s the first section of the exam. Next up, working with the data, selects, sub queries, and data types. If you’re studying for the new certification exams, hit me up. I’d love to share notes with you and work through the exams together! Let me know if I can help.