Sunday, December 31, 2006

The Road to Hell By Chris Rea Or Highway to Hell By AC/DC

So here is my last goofy post of the year; starting tomorrow I will have posts that have at least something to do with SQL and databases. I just happened to hear both of these songs today, and since their titles are so similar I decided to do a little poll. So the question is: which of these two do you prefer?

The Road to Hell by Chris Rea from the album Road to Hell
Or
Highway to Hell by AC/DC from the album Highway to Hell

The title Highway to Hell has nothing to do with Satanism (Malcolm once said "me mum would kill me for that!"). The title came after a reporter asked AC/DC if they could describe what life was like constantly touring around the globe. Angus replied that it was "a highway to Hell", and the name stuck.

Believe it or not, I will pick The Road to Hell as my favorite song. If you had asked me yesterday I would have picked Highway to Hell; maybe I am not angry enough today and am in a pleasant mood. It is New Year's Eve after all.
The first four songs on Highway to Hell are the best; I actually prefer Walk All Over You over Highway to Hell. While Highway to Hell is a hard rock song, The Road to Hell is not; it will remind you of Dire Straits when you listen to it (I believe Mark Knopfler plays guitar on the song).



The Road to Hell Track listing
------------------------------------
The Road to Hell (Pt. 1)
The Road to Hell (Pt. 2)
You Must Be Evil
Texas
Looking for a Rainbow
Your Warm and Tender Love
Daytona
That's What They Always Say
I Just Wanna Be With You
Tell Me There's a Heaven


Highway to Hell Track listing
---------------------------------------
Highway to Hell
Girls Got Rhythm
Walk All Over You
Touch Too Much
Beating Around the Bush
Shot Down in Flames
Get It Hot
If You Want Blood (You've Got It)
Love Hungry Man
Night Prowler

This was of course the last album that AC/DC recorded with Bon Scott; he died six months later. The cause of death listed on his death certificate was "Acute alcoholic poisoning" and the verdict of the inquest "Death by misadventure". After Bon Scott's death the band hired Brian Johnson as their new lead singer. They recorded Back in Black in 1980, which would become the fourth best-selling album in the United States, tied with Billy Joel's Greatest Hits Vol. I & II at 21 million copies sold.



So which one do you prefer?

Friday, December 29, 2006

SQL Server 2005 SP2 Backup With Vardecimal Enabled Can't Be Restored On SQL Server 2005 SP1 or Earlier

SQL Server 2005 SP2 introduces the Vardecimal storage format, a new format for storing decimal/numeric data. This format is not understood by SQL Server 2005 RTM or SQL Server 2005 SP1. Just like you cannot attach a SQL Server 2005 database to SQL Server 2000, attaching/restoring a SQL Server 2005 SP2 database that has been enabled for the Vardecimal storage format to earlier builds of SQL Server 2005 will fail. SQL Server implements this by incrementing the database version number when the database is enabled for the Vardecimal storage format. When you disable the Vardecimal storage format on a database, its database version is decremented so that the database can again be attached to earlier builds of SQL Server 2005.
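For example, once SP2 is installed, enabling the format is a two-step affair: first at the database level, then per table. A sketch (the Sales database and Invoices table below are made up; I believe these are the stored procedure names in the SP2 CTP, so double-check Books Online):

```sql
-- Enable the database for the Vardecimal storage format
-- (this is what bumps the database version number)
EXEC sp_db_vardecimal_storage_format 'Sales', 'ON'

-- Then turn it on for individual tables
EXEC sp_tableoption 'dbo.Invoices', 'vardecimal storage format', 1

-- Turning it off again (every table first, then the database)
-- decrements the version so earlier builds can attach/restore it
EXEC sp_tableoption 'dbo.Invoices', 'vardecimal storage format', 0
EXEC sp_db_vardecimal_storage_format 'Sales', 'OFF'
```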

Read more about this and what the implications are here: http://blogs.msdn.com/sqlserverstorageengine/archive/2006/12/29/vardecimal-storage-format-and-its-implications-on-backup-recovery.aspx

Thursday, December 28, 2006

Guess Who Gave This Answer In A Newsgroup?

Try to guess who gave this answer. Here is a small hint, the person looks like Anton Szandor LaVey

Question: I guess the key question for me is, can this be done entirely in SQL?


Answer: The answer is always "Yes, we can do it in SQL!"

The right answer is "But, like a size 26 thong, just because you can does not mean you should!"


Google around for "Bin packing" and/or "Knapsack" problems on some math
websites. This is a known NP-complete problem. In English, it means
that the only way to solve it is to try all possible combinations, so
the execution time grows faster than any polynomial expression (i.e.,
think factorials or worse).


There are often several valid solutions, too. Being a set-oriented
language, SQL will attempt to find the set of ALL solutions. And run
forever.


This is a job for a procedure (yes, [name here] is saying nice things about
procedural code!) which will stop at the first usable answer, even if
it is not optimal. Now you have to pick your algorithm. This is
usually the Greedy algorithm ("grab the biggest bite you can and add it
to the answer; see if you met the goal; if not, repeat") modified to do
some backtracking.
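The greedy approach he describes could be sketched in procedural T-SQL like this (the #items table, the item sizes and the @capacity target are all made up for illustration; a real solution would add the backtracking he mentions):

```sql
-- Greedy bin-packing sketch: keep grabbing the biggest item that still fits
CREATE TABLE #items (id INT, size INT)
INSERT INTO #items VALUES (1, 50)
INSERT INTO #items VALUES (2, 30)
INSERT INTO #items VALUES (3, 20)
INSERT INTO #items VALUES (4, 15)

CREATE TABLE #answer (id INT, size INT)

DECLARE @capacity INT, @pick INT, @size INT
SET @capacity = 70

WHILE 1 = 1
BEGIN
    -- grab the biggest bite that still fits
    SELECT TOP 1 @pick = id, @size = size
    FROM #items
    WHERE size <= @capacity
    ORDER BY size DESC

    IF @@ROWCOUNT = 0 BREAK -- nothing fits anymore; stop at a usable answer

    INSERT INTO #answer VALUES (@pick, @size)
    DELETE FROM #items WHERE id = @pick
    SET @capacity = @capacity - @size
END

-- Not guaranteed optimal, but it stops instead of running forever
SELECT * FROM #answer
```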


So who gave this answer?

Wednesday, December 27, 2006

Sweet Child O' Mine By Guns N' Roses Or Child in Time By Deep Purple?

Yesterday we talked about Whole Lotta Love and Whole Lotta Rosie; today we are talking about Sweet Child O' Mine and Child in Time. So which is your favorite?

Sweet Child O' Mine by Guns N' Roses from the album Appetite for Destruction
Child in Time by Deep Purple from the album Deep Purple in Rock

This is a very tough pick for me because I love both songs, but I would have to go for Sweet Child O' Mine. This song is perfect in every way, from the vocals to the guitar solo to the introduction's famous D-flat based riff. The lyrics were written by Axl Rose as a love letter to his girlfriend at the time, Erin Everly. No matter how many times I listen to the song it never bores me, something I can't say about Paradise City; I just can't stand that song anymore.

Child in Time is a song that is over 10 minutes long; it starts slowly with only an organ, then builds up to a faster pace and then stops abruptly to start over again. Guitarist Ritchie Blackmore's solo is just amazing, and Ian Gillan's voice goes from quiet singing to loud screaming. If you have never heard this song I urge you to check it out.

Lyrically, the song is deep and dark; here are the lyrics:
Sweet child in time,
You'll see the line.
The line that's drawn between,
The good and the bad.
See the blind man,
He's shooting at the world.
The bullets flying,
Mmm... they're taking toll.
If you've been bad,
Lord I bet you have.
And you've not been hit,
You've not been hit by flying lead.
You'd better close your eyes,
You'd better bow your head.
Wait for the ricochet...


I have at least 3 versions of this song; the original, the live version from Made In Japan and the live version from Scandinavian Nights.

My favorite Guns N' Roses songs:
Live and Let Die
Don't Cry
Civil War
Knockin' on Heaven's Door
You Could Be Mine
It's So Easy
Nightrain
Out Ta Get Me
Mr. Brownstone
My Michelle
Think About You
Sweet Child O' Mine
You're Crazy


My favorite Deep Purple songs:
Child In Time
Smoke On The Water
Hush
Soldier Of Fortune (David Coverdale is the singer on this one)
Speed King (the longer version with crazy guitar intro)

So which are your favorites?

Tuesday, December 26, 2006

sys.dm_db_session_space_usage

Okay so today we will talk about the sys.dm_db_session_space_usage dynamic management view.
This is the third dynamic management view that I have covered; a list of all of them, with the dates I covered some of them, can be found below

http://sqlservercode.blogspot.com/2006/08/dynamic-management-views-blog-post.html



Before we start here are a couple of remarks

IAM pages are not included in any of the allocation or deallocation counts reported by this view.

Page counters are initialized to zero (0) at the start of a session. The counters track the total number of pages that have been allocated or deallocated for tasks that are already completed in the session. The counters are updated only when a task ends; they do not reflect running tasks.

A session can have multiple requests active at the same time. A request can start multiple threads (tasks) if it is a parallel query.

Here is some information about the columns in this view

session_id
smallint
Session ID.
-- session_id maps to session_id in sys.dm_exec_sessions.

database_id
smallint
Database ID.

user_objects_alloc_page_count
bigint
Number of pages reserved or allocated for user objects by this session.

user_objects_dealloc_page_count
bigint
Number of pages deallocated and no longer reserved for user objects by this session.

internal_objects_alloc_page_count
bigint
Number of pages reserved or allocated for internal objects by this session.

internal_objects_dealloc_page_count
bigint
Number of pages deallocated and no longer reserved for internal objects by this session.
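Since session_id maps to session_id in sys.dm_exec_sessions, you can join the two views to see who is actually behind the numbers; a quick sketch:

```sql
SELECT s.login_name, s.host_name, u.*
FROM sys.dm_db_session_space_usage u
JOIN sys.dm_exec_sessions s ON s.session_id = u.session_id
WHERE u.database_id = DB_ID('tempdb')
```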


First we will talk about user objects

User Objects
The following objects are included in the user object page counters:

  • User-defined tables and indexes
  • System tables and indexes
  • Global temporary tables and indexes
  • Local temporary tables and indexes
  • Table variables
  • Tables returned in the table-valued functions




SELECT *
FROM sys.dm_db_session_space_usage
WHERE session_id = @@spid


Values in the columns
--------------------------------------
user_objects_alloc_page_count = 0
user_objects_dealloc_page_count = 0
internal_objects_alloc_page_count = 0
internal_objects_dealloc_page_count = 0




CREATE TABLE #temp(id VARCHAR(MAX))
INSERT #TEMP VALUES(REPLICATE('a',25000))
INSERT #TEMP VALUES(REPLICATE('b',25000))
INSERT #TEMP VALUES(REPLICATE('c',25000))
INSERT #TEMP VALUES(REPLICATE('d',25000))
INSERT #TEMP VALUES(REPLICATE('e',25000))

SELECT *
FROM #temp

SELECT *
FROM sys.dm_db_session_space_usage
WHERE session_id = @@spid

Values in the columns
--------------------------------------
user_objects_alloc_page_count = 5
user_objects_dealloc_page_count = 0
internal_objects_alloc_page_count = 0
internal_objects_dealloc_page_count = 0

DROP TABLE #temp

SELECT *
FROM sys.dm_db_session_space_usage
WHERE session_id = @@spid


Values in the columns
--------------------------------------
user_objects_alloc_page_count = 5
user_objects_dealloc_page_count = 5
internal_objects_alloc_page_count = 0
internal_objects_dealloc_page_count = 0


CREATE TABLE #temp(id VARCHAR(MAX))
INSERT #TEMP VALUES(REPLICATE('a',25000))
INSERT #TEMP VALUES(REPLICATE('b',25000))
INSERT #TEMP VALUES(REPLICATE('c',25000))
INSERT #TEMP VALUES(REPLICATE('d',25000))
INSERT #TEMP VALUES(REPLICATE('e',25000))


SELECT *
FROM #temp

SELECT *
FROM sys.dm_db_session_space_usage
WHERE session_id = @@spid


Values in the columns
--------------------------------------
user_objects_alloc_page_count = 10
user_objects_dealloc_page_count = 5
internal_objects_alloc_page_count = 0
internal_objects_dealloc_page_count = 0


TRUNCATE TABLE #temp

SELECT *
FROM sys.dm_db_session_space_usage
WHERE session_id = @@spid


Values in the columns
--------------------------------------
user_objects_alloc_page_count = 10
user_objects_dealloc_page_count = 10
internal_objects_alloc_page_count = 0
internal_objects_dealloc_page_count = 0



DROP TABLE #temp


So those were the user objects; now we will talk about internal objects

Internal Objects
Internal objects are only in tempdb. The following objects are included in the internal object page counters:

  • Work tables for cursor or spool operations and temporary large object (LOB) storage
  • Work files for operations such as a hash join
  • Sort runs



--Create the tables
CREATE TABLE #temp1(id INT, SomeCol VARCHAR(MAX))
INSERT #TEMP1 VALUES(1,REPLICATE('a',25000))
INSERT #TEMP1 VALUES(2,REPLICATE('b',25000))
INSERT #TEMP1 VALUES(3,REPLICATE('c',25000))
INSERT #TEMP1 VALUES(4,REPLICATE('d',25000))
INSERT #TEMP1 VALUES(5,REPLICATE('e',25000))


CREATE TABLE #temp2(id INT, SomeCol VARCHAR(MAX))
INSERT #TEMP2 VALUES(1,REPLICATE('a',25000))
INSERT #TEMP2 VALUES(2,REPLICATE('b',25000))
INSERT #TEMP2 VALUES(3,REPLICATE('c',25000))
INSERT #TEMP2 VALUES(4,REPLICATE('d',25000))
INSERT #TEMP2 VALUES(5,REPLICATE('e',25000))


--Do a sort operation
SELECT *
FROM #TEMP2 t2
JOIN #TEMP1 t1 ON t1.id = t2.id
ORDER BY t1.id,t1.SomeCol,t2.id,t2.SomeCol


--Check the view
SELECT *
FROM sys.dm_db_session_space_usage
WHERE session_id = @@spid


Values in the columns
--------------------------------------
user_objects_alloc_page_count = 20
user_objects_dealloc_page_count = 10
internal_objects_alloc_page_count = 16
internal_objects_dealloc_page_count = 8


What is this DMV useful for? You can use it to help you diagnose and troubleshoot problems caused by insufficient disk space in the tempdb database.
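For example, this query (just a sketch) lists the sessions holding the most net tempdb space; pages are 8 KB, so the counts are multiplied by 8 to get kilobytes:

```sql
SELECT session_id,
    (user_objects_alloc_page_count - user_objects_dealloc_page_count) * 8
        AS user_objects_kb,
    (internal_objects_alloc_page_count - internal_objects_dealloc_page_count) * 8
        AS internal_objects_kb
FROM sys.dm_db_session_space_usage
WHERE database_id = DB_ID('tempdb')
ORDER BY (user_objects_alloc_page_count - user_objects_dealloc_page_count)
    + (internal_objects_alloc_page_count - internal_objects_dealloc_page_count) DESC
```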

The following error messages indicate insufficient disk space in the tempdb database. These errors can be found in the SQL Server error log, and may also be returned to any running application.

1101 or 1105
Any session must allocate space in tempdb.

3959
The version store is full. This error usually appears after a 1105 or 1101 error in the log.

3967
The version store is forced to shrink because tempdb is full.

3958 or 3966
A transaction cannot find the required version record in tempdb.


Some of these queries that you can use can be found here: Troubleshooting Insufficient Disk Space in tempdb

Whole Lotta Rosie By AC/DC or Whole Lotta Love By Led Zeppelin?

I got a new MP3 player from my sister-in-law for Christmas and have been listening to some good old R & R.

So the question is which of these similar named songs do you prefer?
Whole Lotta Rosie by AC/DC with Bon Scott from the album Let There Be Rock.
Whole Lotta Love by Led Zeppelin from the album Led Zeppelin II.

Between these two my pick is Whole Lotta Rosie by AC/DC and I think that this is still my favorite AC/DC song.

My favorite Led Zeppelin song is not Whole Lotta Love but one of these
Since I've Been Loving You from Led Zeppelin III
Black Dog from Led Zeppelin IV
Over the Hills and Far Away from Houses of the Holy
Travelling Riverside Blues from the Led Zeppelin Box Set

I am still undecided on my Led Zep favorite. Which is your favorite Led Zep song? And no, Stairway to Heaven is not allowed as your pick.

Thursday, December 21, 2006

Display 1 When Any Column Is 1 And 0 Otherwise by Using SIGN, SUM Or (Bitwise OR)

You have three columns; if any of the columns is 1 you want to display 1, and if they are all 0 you want to display 0. How can you do that?
There are basically three ways to do this:
1 SUM
2 SIGN
3 | (Bitwise OR)


Here are the examples

--create table and insert rows
CREATE TABLE #test (test_A INT,test_B INT,test_C INT)

INSERT INTO #test VALUES(0,0,0)
INSERT INTO #test VALUES(1,0,0)
INSERT INTO #test VALUES(0,1,0)
INSERT INTO #test VALUES(0,0,1)
INSERT INTO #test VALUES(0,1,1)
INSERT INTO #test VALUES(1,1,1)



First we will display 0 if all columns are 0; if any of the columns is 1 then we will display a one

--Using SIGN
SELECT SIGN(test_A+test_B+test_C) ,*
FROM #test

--Using (Bitwise OR)
SELECT (test_A | test_B | test_C),*
FROM #test

--Using Sum
SELECT CASE SUM(test_A + test_B + test_C)
WHEN 0 THEN 0 ELSE 1 END, *
FROM #test
GROUP BY test_A , test_B , test_C


Now let's say we want to display 1 if all the values are 1.
We can do that by using &, the bitwise AND operator
--Using (Bitwise AND)
SELECT (test_A & test_B & test_C),*
FROM #test

-- SUM can also be used
SELECT CASE SUM(test_A + test_B + test_C)
WHEN 3 THEN 1 ELSE 0 END, *
FROM #test
GROUP BY test_A , test_B , test_C


Now let's reverse the process and display 0 if any of the columns are 1 and 1 if all the columns are 0.
We have to use CASE to accomplish this

--Using CASE with SIGN
SELECT CASE SIGN(test_A+test_B+test_C)
WHEN 1 THEN 0 ELSE 1 END, *
FROM #test

--Using CASE with Bitwise OR
SELECT CASE (test_A | test_B | test_C)
WHEN 1 THEN 0 ELSE 1 END, *
FROM #test


--Using SUM with CASE
SELECT CASE SUM(test_A + test_B + test_C)
WHEN 0 THEN 1 ELSE 0 END, *
FROM #test
GROUP BY test_A , test_B , test_C


Here is some additional stuff, add this row and run the queries again
INSERT INTO #test VALUES(1,NULL,1)

What happens when you run this?

SELECT SIGN(test_A+test_B+test_C) ,*
FROM #test

A NULL is returned; if you have NULLs in your table then you might want to use COALESCE (wrap every column that can be NULL)

SELECT SIGN(test_A+COALESCE(test_B,0)+test_C) ,*
FROM #test

Wednesday, December 20, 2006

SQL Server 2005 Will Be Supported On Vista And Windows Server Longhorn

Don't get scared of those people at Badvista.org; this is just more FUD from Stallman & Co.

Microsoft announced on September 27, 2006 that SQL Server 2005 would be supported on Vista. However, you must have Service Pack 2 installed for it to run. From the site:
"In an effort to provide customers with more secure products, Microsoft Windows Server "Longhorn" and Microsoft Windows Vista will only support SQL Server 2005 Service Pack 2 (SP2) or later when it becomes available. Earlier versions of SQL Server, including SQL Server 2000 (all editions including Desktop Engine edition, a.k.a MSDE), SQL Server 7.0, and SQL Server 6.5, will not be supported on Windows Server "Longhorn" or Windows Vista. Customers running applications with these earlier versions of SQL Server should consider evaluating and upgrading to SQL Server 2005, which was designed to take advantage of the upcoming security and performance enhancements in the operating environment."


You can read the whole story here

You can test SQL Server 2005 Service Pack 2 - Community Technology Preview (CTP) December 2006 right now

SQL Server 2005 Service Pack 2 - Community Technology Preview (CTP) December 2006 Available For Download

That's right it's here. Download it now and play around with vardecimal and custom reports for Management Studio

http://www.microsoft.com/downloads/details.aspx?FamilyID=d2da6579-d49c-4b25-8f8a-79d14145500d&DisplayLang=en

Tuesday, December 19, 2006

How Well Do You Interview And Do You Use Wizard Driven Programming?

We were interviewing people for developer and architect positions. The developer position required about 5 years of C# and SQL Server. We got a couple of good candidates, but some of them were really terrible. They claim SQL Server experience since 1998 but don't know what a clustered index is, or they cannot name even one isolation level. There were people who had 10 years of experience in web development but could not explain what MVC (Model-View-Controller) is or name a single design pattern. People created web services but did not know what a WSDL is.

You see, this is the problem I have with WDP (Wizard Driven Programming); it's okay to use wizards, but you HAVE to know what is going on behind the scenes. It's okay to use Enterprise Manager, but you have to know how to do the same thing in Query Analyzer. If you never write your own scripts you will sooner or later come to a shop where they don't use these tools and changes have to be submitted via scripts. Not only do you have to script everything, you also have to provide back-out scripts in case something goes wrong.
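For example, a change script submitted to such a shop might look like this; the Customers table and FaxNumber column are of course made up:

```sql
-- Change script
ALTER TABLE Customers ADD FaxNumber VARCHAR(20) NULL
GO

-- Back-out script: run only if the change has to be reversed
ALTER TABLE Customers DROP COLUMN FaxNumber
GO
```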

So here are some of the questions I asked; with 5 years of experience I think you should not miss more than two (probably the trace flag 1204 and parameter sniffing questions)

Let me know how many you couldn't get without looking them up


What is normalization
What is the fastest way to empty a table
What is a deadlock
Can you give an example of creating a deadlock
How do you detect deadlocks
What is an audit trail
What is an identity column
How do you return an identity value from a table
How do you return an identity value from a table with a trigger
How many bytes can you fit in a row, do you know why
What is a clustered index
How many clustered indexes per table
How many nonclustered indexes per table
What is an execution plan
Between index scan, index seek and table scan; which one is fastest and which one is slowest
How do you return a value from a proc
How do you return a varchar value from a proc
If I have a column that will only have values between 1 and 250 what data type should I use
How do you enforce that only values between 1 and 10 are allowed in a column
How to check for a valid date
Which date format is the only safe one to use when passing dates as strings
How do you suppress rows affected messages when executing an insert statement
Can you name the 4 isolation levels in SQL Server 2000
How would you select all last names that start with S
How would you select all rows where the date is 20061127
What is horizontal partitioning
What does schemabinding do
How do you test for nulls
Name some differences between isnull and coalesce
What is a temp table
What is the difference between a local and global temporary table
If you create a local temp table and then call a proc is the temp table available inside the proc
What is referential integrity
What is the fastest way to populate a table (performance wise)
Using the method above what can you do to make it even faster
What data type should you use to store monetary values
What is a cascade delete
Name a couple of types of joins
What is a SQL injection
What is parameter sniffing
Name 2 differences between a primary key and UNIQUE Constraints
How do you ensure that SQL server will use an index
What does option fast (10) do
What is the difference between union and union all
What does trace flag 1204 do
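To give you an idea of the kind of answer I am looking for, take the question about selecting all rows where the date is 20061127; an open-ended range catches the time portion and keeps any index on the date column usable (the Orders table here is made up):

```sql
-- yyyymmdd is the safe, language- and dateformat-independent string format
SELECT *
FROM Orders
WHERE OrderDate >= '20061127'
  AND OrderDate <  '20061128'
```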

Sunday, December 17, 2006

BadVista: The Worst FUD You Will Ever Read.... St. Ignucious Has Gone Wild

Richard Stallman (RMS, St. Ignucious) is at it again. Let's see what the site says:



The Free Software Foundation (FSF) today launched BadVista.org, a campaign
with a twofold mission of exposing the harms inflicted on computer users by the
new Microsoft Windows Vista and promoting free software alternatives that
respect users' security and privacy rights.
“Vista is an upsell masquerading
as an upgrade. It is an overall regression when you look at the most important
aspect of owning and using a computer: your control over what it does. Obviously
MS Windows is already proprietary and very restrictive, and well worth
rejecting. But the new 'features' in Vista are a Trojan Horse to smuggle in even
more restrictions. We'll be focusing attention on detailing how they work, how
to resist them, and why people should care”, said FSF program administrator John
Sullivan.



More interesting is the piece of garbage that they link to: 25 Shortcomings Of Vista

Here are some of the 'gems'

1. SMB2: Vista introduces a new variant of the SMB protocol called SMB2, which may pose problems for those connecting to non-Microsoft networks, such as Samba on Linux.
--Speculation and pure FUD

7. Five Versions: The array of Vista editions could prove to be three too many, and upgrades between versions remain an unknown.
8. Activation: The need to activate the product via the Web could prove to be a time-waster during mass deployments.
--Could prove? Could prove? Who writes this stuff? For mass deployments there is something called Key Management Service

Key Management Service
Your organization can host the Key Management Service (KMS) internally to automatically activate computers running Windows Vista. To use the KMS, you must have a minimum of 25 computers running Windows Vista that are connected together. Computers that have been activated through KMS will be required to reactivate by connecting to your organization's network at least every six months.

Currently the KMS software runs on a local computer running Windows Vista or the Microsoft Windows Server Code Name "Longhorn" operating system. In the future, it will run on the Microsoft Windows Server 2003 operating system.


9. Storage Space: With Vista taking as much as 10 Gbytes of hard drive space, big and fast hard drives will be a must.
--Last time I checked you can't really buy anything less than 100GB

10. Backup: See No. 9. Backing up desktops will take a great deal of space.
--So backup only your personal data.

18. Buried Controls: Many options and controls are further buried, requiring a half-dozen mouse clicks or more to get to. Network settings and display settings are offenders here.
--Some people said the same thing in WinXP, until they realized there is also a classic view. Vista has this classic view also.

19. Installation: Can take hours on some systems. Upgrades are even slower.
--This is BS; it should not take more than 30 minutes unless you got a pirated copy on 1500 floppy discs

20. HHD: Hybrid Hard Drives. These are potentially a huge performance booster, but there is little information available and little support (even though there should be).
--Since when is having an emerging technology that will boost performance a shortcoming?

25. WordPad: Ability to open .doc files has been removed.
--Hahahahahaha, who wrote this piece of crap? You either use Word or OpenOffice

The writer of this article is Frank J. Ohlhorst; send him an email if you want to tell him what you think

Friday, December 15, 2006

Three Ways To Display Two Counts From a Table Side By Side

This question came up on tek-tips.

I want to count the CA and the UT states from the authors table and output this in different columns named CaCount and UtCount

CaCount UtCount
----------- -----------
15 2


How do you do such a thing? To display the following is very easy

state StateCount
----- -----------
CA 15
UT 2

You just do a count and group by

SELECT state,COUNT(*) AS StateCount
FROM authors
WHERE state in ('CA', 'UT')
GROUP BY state


How do we display them side by side? We can do this 3 different ways
1 stick the count in variables
2 run the count twice as a separate query
3 sum with case

USE PUBS
GO

--stick the count in variables
DECLARE @UtCount INT
DECLARE @CaCount INT

SELECT @UtCount = COUNT(*) FROM authors WHERE state ='UT'
SELECT @CaCount = COUNT(*) FROM authors WHERE state ='CA'

SELECT @CaCount AS CaCount,@UtCount AS UtCount

--run the count twice as a separate query
SELECT (SELECT COUNT(*) FROM authors
WHERE state ='CA') AS CaCount,
(SELECT COUNT(*) FROM authors WHERE state ='UT') AS UtCount


--sum with case
SELECT SUM(CASE state WHEN 'CA' THEN 1 ELSE 0 END) AS CaCount,
SUM(CASE state WHEN 'UT' THEN 1 ELSE 0 END) AS UtCount
FROM authors


If you look at the execution plans you will see that the best option is the third one (SUM with CASE). The third option queries the table only once instead of twice like the other two solutions, so it should be roughly half the cost
So keep that in mind when you have to do such a query next time

[edit]
Oracle Version
jbjornson left the Oracle version in a comment; I decided to add it to the post. I don't know about the syntax coloring on Oracle/TOAD so you are getting black and white only ;-(

--run the count twice as a separate query
SELECT CA.CaCount, UT.UtCount
FROM (SELECT COUNT(*) AS CaCount FROM authors WHERE state = 'CA') CA,
(SELECT COUNT(*) AS UtCount FROM authors WHERE state = 'UT') UT


--sum with case
SELECT SUM(decode(state, 'CA', 1, 0)) AS CaCount,
SUM(decode(state, 'UT', 1, 0)) AS UtCount
FROM authors
[/edit]

Visual Studio 2005 Service Pack 1 (SP1) Released

Visual Studio 2005 Service Pack 1 (SP1) has been released. From the site:
Service Pack 1 addresses issues that were found through a combination of customers and partner feedback, as well as internal testing. The issues addressed range in severity from places where the syntax coloring was incorrect to customer reported crashes in various scenarios. In some areas, more than 50% of the bugs addressed were reported by customers through the MSDN Product Feedback Center and Microsoft Connect. Overall, Service Pack 1 offers customers improvements in responsiveness, stability and performance for Visual Studio 2005.

Through further advancement and feedback, Service Pack 1 also provides over 70 improvements for common development scenarios including:


  • New processor support (e.g., Core Duo) for code generation and profiling

  • Performance and scale improvements in Team Foundation Server

  • Team Foundation Server integration with Excel 2007 and Project 2007

  • Tool support for occasionally connected devices and SQL Server Compact Edition

  • Additional support for project file based Web applications

  • Windows Embedded 6.0 platform and tools support




Get it here

Wednesday, December 13, 2006

Index Build Strategy In SQL Server

The Tips, Tricks, and Advice from the SQL Server Query Processing Team blog has 3 really good posts about Index Build strategy in SQL Server. You should definitely read these to really understand how to implement that. Here are the links:
Index Build strategy in SQL Server - Part 1: offline, serial, no partitioning

Index Build strategy in SQL Server - Part 2: Offline, Parallel, No Partitioning

Index Build strategy in SQL Server - Part 2: Offline, Parallel, No Partitioning (Non stats plan (no histogram))

Problems With ROW_NUMBER() In A WHERE Clause

This is a question that came up today. A person had a query like this


USE AdventureWorks
GO

SELECT
ROW_NUMBER() OVER (ORDER BY addressline1) AS rowNum,
addressline1,
city
FROM person.address
WHERE rowNum > 3;

That gave the following error
Server: Msg 207, Level 16, State 1, Line 6
Invalid column name 'rowNum'.

What you have to do is use a Common Table Expression or a subquery; the reason for the error is that the WHERE clause is logically processed before the SELECT list, so the rowNum alias does not exist yet at that point. Below are both methods.

--Subquery
SELECT * FROM ( SELECT
ROW_NUMBER() OVER (ORDER BY addressline1) AS rowNum,
addressline1,
city
FROM person.address) AS x
WHERE rowNum > 3;


--CTE
WITH x (rowNum,addressline1,
city) AS
(SELECT
ROW_NUMBER() OVER (ORDER BY addressline1) AS rowNum,
addressline1,
city
FROM person.address)

SELECT * FROM X
WHERE rowNum > 3;

Tuesday, December 12, 2006

SQL Server 2005 Practical Troubleshooting: The Database Engine Is Shipping!

Amazon is shipping SQL Server 2005 Practical Troubleshooting: The Database Engine. This is a must-have for all SQL developers who run into any kind of problem with SQL Server 2005.

For all of you who did not read my interview with the editor of this book, Ken Henderson: you can find it here. Ken describes the target audience for the book, the most important chapters for customers who have performance problems, and much more.

A sample chapter is available: Chapter 4 Procedure Cache Issues




About the book:
Paperback: 456 pages
Publisher: Addison-Wesley; 1st edition
Language: English
ISBN: 0321447743

Contents
Preface
Chapter 1 Waiting and Blocking Issues
Chapter 2 Data Corruption and Recovery Issues
Chapter 3 Memory Issues
Chapter 4 Procedure Cache Issues
Chapter 5 Query Processor Issues
Chapter 6 Server Crashes and Other Critical Failures
Chapter 7 Service Broker Issues
Chapter 8 SQLOS and Scheduling Issues
Chapter 9 Tempdb Issues
Chapter 10 Clustering Issues
Index



Amazon Links: CA FR DE UK JP US

Visual Studio Team Edition for Database Professionals links

By now you know that Visual Studio Team Edition for Database Professionals has been released. But do you know all the links to the newsgroup, documentation and download page? No? No problem here are the links:

Download Link
The product is available on the MSDN subscribers site, or you can download the trial version from the following URL: http://www.microsoft.com/downloads/details.aspx?FamilyId=7DE00386-893D-4142-A778-992B69D482AD&displaylang=en
For the trial version, you still need an installed copy of Visual Studio 2005 Team Suite, which can be either a trial or a licensed version.

Documentation
Team Edition for Database Professionals

Forum
Visual Studio Team System - Database Professionals

Visual Studio 2005 Team Edition for Database Professionals Site
Data Dude

Sunday, December 10, 2006

Wedding Anniversary

So here we are 9 years later. We are still married, have 3 kids and bought our second home this year. So let me give you a little background on how I met my wife, etc. I met my wife in a bar on the Upper East Side in New York City.
I don't remember the name of the bar but I do remember that we went to Elaine's for a drink and also to a place named the Cowboy Bar. Fast forward to the wedding date: we got married twice, first at City Hall on December 11th, and then we did a church wedding on August 15th (the pictures are from that day). We did the December wedding separately so that we could start my Green Card process; my wife is a third-generation Italian-American but I had Dutch and Croatian citizenship.

The wedding was 1 block from where we lived. My wife (to be) spent the night in a hotel with her sisters. The wedding started at 3; I was still rollerblading till 2:30 in Central Park. I took a shower, got dressed and walked to the church (which was 1 block from my apartment).

5% of the wedding guests did jail time.
The reception was held at the Kitano Hotel which is located at 38th and Park Avenue. I did not have a huge wedding only 65 or so family members and some of our closest friends were invited. Out of those people there were 3 who did some jail time. One of them is Brendan who was the head of the NYC Hells Angels chapter at that time.

Another was brought by a friend and the last one was a friend of mine who did jail time because he beat up a cop. The Hells Angel did not dress up of course, if you look at the picture you can see me next to my wife. The other picture is me and my wife on the roof. The reception had 2 huge balconies; one was facing the Empire State Building and the other one was facing the Chrysler Building. The good thing when having your wedding at a hotel is that you can just walk to your room.


Honeymoon in Hawaii
So after the reception we had two days to get ready for the honeymoon. We did our shopping and on Monday night we went to China Club with our friend from Europe. China Club is the place to go to on Mondays; you can always spot some model or one of the Yankees or Knicks there. The next day we went to Maui and Kauai. I have said many times to my wife that I want to go back to Hawaii once before I die…. Maui and Kauai were beautiful places to spend your honeymoon; if you are into playing golf, surfing, hiking, or snorkeling then there are lots of things to do there. Jurassic Park was shot on Kauai and once you arrive there you will recognize the green scenery right away.

After the honeymoon
Once you are married, don’t have kids right away; your life will change and you won’t be able to do anything for a while. So that is what we did; we waited 4 years to have kids. We visited Paris and Amsterdam and bought a house near Princeton before having our first son Christian. Then my wife became pregnant with twins and we had to sell our house and buy a bigger house. If owning your house is the American Dream then selling and buying your next house has to be the American Nightmare; what a stressful situation. After that ordeal I said that the next time I move it will be into one of those 55+ community houses (I still have about 20 years to go). So today we live in Princeton and we have Catherine and Nicholas to keep us awake at night.


Traditions
My wife and I usually buy our Christmas tree on our anniversary. In the city we would pick up a Charlie Brown on 2nd Avenue and 79th Street; out here we go to the farm and chop down our own. Somehow my wife thinks it is also a tradition to watch Serendipity; if I have to watch a movie with Kate Beckinsale I prefer Underworld, but hey, you have to be nice on your anniversary….


BTW I still fit in my wedding suit, the tailor was wrong ;-(

Friday, December 08, 2006

Plan For The Day…Gone Wrong In a Big Way

It is my 9th year wedding anniversary this weekend so I took the day off today and below is a list of things I planned to do today

[ ] Get a Christmas tree at the farm
[ ] Go for breakfast with the whole family
[ ] Drop off a package at the post office
[X] Interview a person for a position at work
[ ] Enhance a Stored Procedure and close the item in BugTrax
[ ] Watch Serendipity (my wife insists)
[ ] Get haircuts
[X] Cover the furniture in the garden

As you can see I managed to accomplish two things only so far. What happened? My son got sick and had a high fever and threw up this morning; that eliminated the first 3 items right away. I managed to do the phone interview because it was a reversed phone interview (the candidate was at the office and I interviewed him from home over the phone)
Forget about the haircuts (I rescheduled for tomorrow); where do I leave our 3 kids?
So that leaves me with watching Serendipity tonight ;-(

Anyway I will post something about my wedding this weekend. Believe it or not, 5% of the people that came to my wedding did jail time; one of them was the head of the NYC Hells Angels chapter (I will also post some pictures this weekend).


Oh you wanted some SQL material? No problem this will keep you busy for a while ;-)

Log Buffer #21: A Carnival of the Vanities for DBAs
Log Buffer #22: A Carnival of the Vanities for DBAs

Thursday, December 07, 2006

Get The Domain Name Of Your SQL Server Machine

How do you get the domain name that your SQL Server machine belongs to with T-SQL code?

You can use the undocumented procedure xp_regread; I have tested this on SQL Server 2005 and on SQL Server 2000.
Be advised that since this is an undocumented procedure it might not be available in the future, or it might be changed by a service pack or a hotfix. Don't use it in production code!

Here is how to do this

DECLARE @chvDomainName NVARCHAR(100)
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon',
N'CachePrimaryDomain',@chvDomainName OUTPUT

SELECT @chvDomainName AS DomainName
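On SQL Server 2005 there is another option, the DEFAULT_DOMAIN() function; it is just as undocumented as xp_regread, so the same warning applies, and you should verify it on your own build before relying on it:

```sql
-- DEFAULT_DOMAIN() is also undocumented; same caveats as xp_regread apply
SELECT DEFAULT_DOMAIN() AS DomainName
```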

Monday, December 04, 2006

Visual Studio 2005 Team Edition for Database Professionals Released to Manufacturing

On Friday 1st December 2006 the development team officially announced that Data Dude was done and was being released to manufacturing. It should be available for download to MSDN subscribers on December 7th 2006 and generally available for purchase on January 1st 2007.

This release marks the end of the first leg in a very exciting journey to revolutionize the ALM space and finally bring the database into the application lifecycle.


You can find more info about Data Dude here: Visual Studio Team Edition for Database Professionals

Sunday, December 03, 2006

Top 5 Posts For November 2006

Below are the top 5 posts according to Google Analytics for the month of November 2006 in order by pageviews descending


How Are Dates Stored In SQL server?
SQL Query Optimizations
Check If Temporary Table Exists
SQL Server 2005 Script Repository
Five Ways To Return Values From Stored Procedures

Top SQL Server Google Searches For November 2006

These are the top SQL searches on this site for the month of November. This time I left in some of the searches that have nothing to do with SQL Server or programming (like breeds of cats that produce less FEL D1).

Here is the list:
apprentice sample narrative report in bank
generate table script
breeds of cats that produce less FEL D1
CSV import MS SQL ASP
gravelines
case in sql server 2005
if condition in sql server 2005
search string in a string sql server
sql 2000 olap range
TOP DOWNLOAD SOFTWARE
169.254.x.x
DATE CONVERSIONS
sp_send_dbmail

As you can see if and case are frequent searches. Just yesterday I answered a case question; the poster had this code

set t7 =
    case
        when (t4 is null and t5 is null) then cast(0 as varchar)
        when t4 is null then cast(t5 as varchar)
        else cast(t4 as varchar)
    end


Which of course can be replaced with

set t7 = coalesce(t4, t5, 0)

This will return the first non-null value; since t7 is a varchar column there is no reason to convert to varchar either
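A quick way to convince yourself of the equivalence (the variables below stand in for the columns and are made up for the example):

```sql
-- COALESCE returns the first non-null argument, scanning left to right
DECLARE @t4 INT, @t5 INT    -- @t4 is left NULL on purpose
SET @t5 = 5

SELECT COALESCE(@t4, @t5, 0) AS t7   -- @t4 is null, so @t5 wins
SELECT COALESCE(@t4, NULL, 0) AS t7  -- everything before the constant is null, so 0 wins
```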

Wednesday, November 29, 2006

Interview With SQL Server MVP Louis Davidson: Author Of Pro SQL Server 2005 Database Design and Optimization

After interviewing Ken Henderson I decided to try my luck with Louis Davidson. By now you already know that he said yes ;-). Louis is the author of Pro SQL Server 2005 Database Design and Optimization and a SQL Server MVP. I have a copy of his book and I highly recommend it to anyone who wants to learn about designing and optimizing databases. The question-and-answer session with Louis that follows was conducted via email.

Louis, the first time I came across your name was in the Microsoft public SQL Server programming group. Can you explain to us why you participate in newsgroups and forums?

I have been involved with newsgroups since back in college when I was an avid reader of rec.music.beatles. And when I found the SQL Server groups (back then, before Microsoft’s groups it was comp.databases.ms-sqlserver), I started reading them, and finally giving some feedback. I then spent time in the newsgroups trying to answer people’s questions to help them, and help me extend my knowledge past school and technical books; because the real situations that people get into are more interesting than anything you can get in a textbook.

I quit hanging around for quite a while because the public newsgroups just got too doggone noisy. I mean come on, there is only so often that one can have certain “personal” products sold to them.

But about three years ago, I went to a session with Kalen Delaney (who was then, and is now, one of the people I look up to greatly) who was talking about things to do to promote your books. I started then trying to answer three questions a day (or more) on the Microsoft newsgroups (instead of the public ones) and now on the MSDN Forums (http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=85&SiteID=1).

I don’t know that it helped my book sales in any overwhelmingly large way, but it certainly has helped me in my career. I find that so many things I have learned while helping others end up being useful at my day job. Plus, my philosophy in regards to SQL Server is simple: the more people who like SQL Server the more likely that the product will be around when I retire in 30-40 years.


I have noticed that you have explained the 12 rules of Codd, do you think that it is important to know these rules as a database developer?

Absolutely, if you don’t know why you do something, then it is just too easy to think that it isn’t worth it. And you can’t really bend the rules until you know them. Codd’s rules are the foundation that the RDBMS was built upon, which then begat normalization. Once people get that SQL Server is essentially a tool largely created to manipulate and serve up normalized data (a concept that was built upon the original 12 rules) it actually becomes essential to normalize, not just something that some square geezer is telling you to do.


Why did you decide to write technical books?

Two reasons:
1. So I would have a reference to refer to when I need to recall something.
2. Because I hate having free time.

Seriously, I had really done quite badly in my senior level database design class in college, but good enough (I think I made a C.) So when I was thrust into the database programmer role, I wanted to learn how to do it right (something my father had not so subtly driven into my brain years before.) What made things even better was that I had several mentors who believed in doing things right during my first six or seven years of programming, so I learned essentially only good practices, but I still wanted more. I couldn’t find a book on database design that was not written as a textbook, so I suggested a book on database design to the editors at WROX, for whom I was working as a tech editor on a Visual Basic/SQL Server book. I expected to tech edit this design book, or possibly write a chapter or two, but before I knew it they had contracted me to write the entire book (Chuck Hawkins wrote a chapter for me, but I wrote the rest) and paid me what seemed like a ton of money (of course, if I average it out to an hourly wage, I averaged far less than minimum wage when all was said and done.)

The process of writing the book was an amazing journey, partly because I had never written a paper longer than about 15 pages (my degree is in CS, so all of our papers were really quite small in comparison) and partly because I had to learn a good deal of stuff along the way, since I was not really an “expert” at the time (and the more I know, the less I feel like an expert now!) The thing about writing is that you have to be more correct than in almost any other medium. When you speak on a subject, if you say something wrong, the people in the audience go, “whu?” and if you talk fast enough and right enough, they will likely forgive you (as long as you don’t do it again.) But if you mess up in writing in a book, it just hangs there over your head forever.

When it was over I had something I was proud of and I said what every technical book writer I have talked with says: “NEVER AGAIN! Too much work, not enough gain, I could make more money cleaning out stalls at the local pet store!!!” I have now said more or less the same thing three times now, and I hope to say it again around the time Katmai ships.


Which chapter was the hardest to write and can you explain why?

Well, the ones that were the hardest were actually the easiest, since I got Kevin Kline and Kurt Windisch to write them. These guys were excellent co-authors, and did great work. Kurt’s section on CLR programming is great, with some awesome example code that I often use when I need to build a CLR function. Kevin wrote a chapter about cross-platform development that rounded out the book.

Of the chapters I wrote, the most troublesome was probably the security chapter. Security is a very complex topic and there are many, many ways to enforce security. Even worse, there are too many ways to get around it once you have it set up. It is the one chapter that I am not completely happy with, and the one that I plan to do some writing on again in the next few months, most likely after the holidays, during which I hope to do very very little that isn’t concerned with eating, sleeping, or going to Disney World.


What is the audience for this book?

Well, with this being my third book I was kind of hoping that my audience had grown to Dogbert-like proportions, but I have to be reasonable. I would simply be satisfied if I could get every SQL Server programmer to read it (and buy it!) The overall goal was to make a practical guide going through the boring bits (theory, modeling), the stuff you hear about but rarely really understand (normalization) and to include the more practical topics like constraints, triggers, security, etc that even people who have years of experience would like.

I think that most people will get something from the book, from the newbie to the pro. Of course the payoff will be far less for someone who already knows the theory, but there is something in there for everyone that will at least be worth your 60 bucks (which is the full retail price. Don’t pay full retail price!)

I also try to keep it as light as possible, considering the very dry topic. There are several little humorous bits thrown in, and for fun I added a hidden message for a contest to run later (actually going on right now in the end of 2006.) All of the clues are leading to a message that I “encrypted” into a bit of code in the book (and yes, that is bonus clue for reading through this really long interview!) for a bit of fun.

Ultimately, I want the book to be something that I would purchase in each version at the very least for the practical aspects of the book. You might think that since I wrote the book I wouldn’t refer to it, but I often use the templates for triggers, procedures, error handling, etc from the book whenever I need to write one without using some tool.


Which SQL Server books are on your bookshelf?

Well, to tell the truth, I have quite a few on my desk and no time to read them. I have thumbed through Itzik’s book, “Inside Microsoft SQL Server 2005: T-SQL Programming”, and it is pretty darn good. Dejan Sarka, who was one of my technical editors, worked on that book too. Beauchemin and Sullivan’s “A Developer’s Guide to SQL Server 2005” seems pretty good, and I have used Tachev’s book, “Applied Microsoft Analysis Services 2005” several times to try to figure out how different Analysis Services was in the 2005 version.

And I would be remiss to not mention Kalen’s previous “Inside SQL Server 2005” book. What a great reference that book was. I can’t wait to get my hands on her new books (not that I would have the time to read them either!)


What are/could be some consequences if your data model is not in Third Normal Form or even First Normal Form?

The earth could stop spinning on its axis, at least that is what I try to tell people. The more reasonable answer is that you will have nagging, annoying issues with your data that will require more and more management just to keep straight.

Normalization pushes data consistency issues to the foreground and forces you to deal with them. For example, take first normal form. One of the rules of first normal form is that every row is unique. A constant question on newsgroups/forums is: “I have duplicate rows, how do I get rid of them?” Well, you go through a messy set of steps to remove them, and then PROTECT YOURSELF USING KEYS!
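That messy set of steps usually looks something like the sketch below (the table and column names are invented for the example; on a real table, check the clean copy before dropping anything):

```sql
-- 1) copy only the distinct rows into a holding table
SELECT DISTINCT CustomerName, City
INTO dbo.CustomerClean
FROM dbo.Customer

-- 2) swap the clean copy in
DROP TABLE dbo.Customer
EXEC sp_rename 'dbo.CustomerClean', 'Customer'

-- 3) then PROTECT YOURSELF USING KEYS so it cannot happen again
ALTER TABLE dbo.Customer
    ADD CONSTRAINT UQ_Customer UNIQUE (CustomerName, City)
```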
Another concern of first normal form is scalar attributes. I like to define this as keeping your data broken down to the lowest form you will manipulate using SQL. So if you have to use substring for any common operation, then you likely have an issue. For example, take an address: if you use addresses just as the user types them in, then just have a text column and put the address in it. But if you have a mailing system and you have to cleanse the data, then break the address down into all of its constituent parts instead of going through the parsing process every time you need an address. SQL is great for taking bits and pieces and constructing what you need, but lousy at breaking things down.

For third normal form, the problems are more ominous. If the data in the table isn’t really describing the thing that the table is modeling, you will end up with redundant data. Redundant data doesn’t sound so bad, until you need to update the data. Consider the case where you have a table modeling a person, and in this table you have the person’s manager, the manager’s name and address. From this sentence, it almost sounds plausible and correct. It certainly is something you might want to see on a screen. But is the manager just the manager for the one person? If not, you might have ten copies of the manager’s information. Now you have to make sure to deal with all ten copies, which is a real pain, to say the least.
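The manager example sketched as tables (all names here are invented for illustration): instead of repeating the manager's name and address on every person row, store them once and reference them, so an update touches exactly one row.

```sql
-- one row per manager; the address lives in exactly one place
CREATE TABLE dbo.Manager (
    ManagerID      INT PRIMARY KEY,
    ManagerName    VARCHAR(50),
    ManagerAddress VARCHAR(100))

-- each person points at a manager instead of copying the details
CREATE TABLE dbo.Person (
    PersonID   INT PRIMARY KEY,
    PersonName VARCHAR(50),
    ManagerID  INT REFERENCES dbo.Manager (ManagerID))
```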


How did the massive changes between SQL Server 2000 and SQL Server 2005 affect the research for your book?

The beauty for me was that the truly massive changes made little difference. Since the book I have written is about database design, things haven’t changed so incredibly much from that angle in any version of SQL Server. SQL evolves and gets better and there are optimization changes, but the basics remain the same. Fortunately as machines get more powerful we need to tweak performance less and less and we can just focus on data integrity and usability through normalization.

The biggest differences that I am interested in from an authoring standpoint have been in the ability to do more with T-SQL (I wrote the T-SQL for Developer’s chapter in the Pro SQL Server 2005 book for Apress), and now using the CLR functions. I had Kurt Windisch (a friend who was a board member for PASS at the time) write a very large CLR section, and I am so glad. Now I have a nice reference and excellent code samples for CLR objects. One of the great things about a book like this is that it is not an overview book where you simply list the new features and come up with some reason you might want to use them. If the feature isn’t tremendously useful, we can say that it isn’t useful. I think we got it pretty much right by saying that functions are really the most useful part of the new CLR objects, as well as to a lesser extent aggregations.

I admit that I don’t cover XML, something that I lament occasionally. I had waffled on including it for months because I just don’t feel that it is that important to OLTP systems, but on the other hand it could certainly be useful to implement a set of sparse properties (a topic I do touch on). I had arranged for another coauthor to write a chapter about XML, but they dropped out too late to find a replacement. Even today I am considering commissioning an “add-on” chapter about XML to send out to readers (if you want it, vote with your email to drsql@hotmail.com and your wallets by purchasing the book so I can afford to pay someone. )


Name three things that are new in SQL Server 2005 that you find are the most valuable?



  1. Dynamic Management Objects. Wow, I can’t even begin to start on just how great these are. The best one to me is outlined in this blog entry: http://drsql.spaces.live.com/blog/cns!80677FB08B3162E4!1443.entry . Automatic index tuning is awesome. I have used the basis of this system to optimize a third party system in excellent ways using the number two feature.
  2. Included columns on indexes. These have really made it easier for DBAs to optimize queries by building covering indexes for queries without bloating indexes to death. What’s more, because you aren’t limited to 900 bytes, you can cover some queries in larger, more unwieldy systems that you just couldn’t cover before. They aren’t the end-all as they still have to be maintained, but they are certainly excellent in many situations. We have used them extensively to optimize a third party system that was very heavy on reads to cut down tremendously on full table scans.
  3. The new BI Paradigm. The attribute oriented nature of Analysis Services is just plain cool. It gives users this extremely rich set of possibilities for viewing data using their cubes that is just so much better than they had in 2000. It was a tremendous change in gears though, and I am very glad that the fundamentals of dimensional modeling didn’t change, so the Kimball paradigm data warehouse we had was still quite viable using either platform, just easier to use.
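Item 2 above, sketched (the index, table, and column names are made up): the key column stays small, while the INCLUDE list rides along at the leaf level, so the query is covered without the included columns counting toward the 900-byte key limit.

```sql
-- covering index: CustomerID is the key, the rest is carried at the leaf level
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue)

-- this query can now be answered from the index alone, no table scan
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42
```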

Name three things that you wish would be in Katmai (the next version of SQL Server)?

I should have finished this interview a few weeks earlier, and I could have spoken a bit more freely about this subject, but at this point I am under NDA for what I have heard about Katmai. What I will say is that I hope to see as much work done in the relational arena (T-SQL and the engine) as they do for the other areas like XML, or tools. The heart and soul of SQL Server should always be right there, lurking in the name: SQL. All of the other new stuff they have added over the years has made things better for us, but I would have traded many, many non relational features to have had the ROW_NUMBER() and RANK() functions in an earlier version.

Of course, this is just my very selfish view on things as a Data Architect/Relational Programmer. In this particular case, even more than normal, I particularly speak only for me.


How critical is a proper data model for a good performing database?

Freaking essential! This is like asking how important is it to know the race route to win the race. The SQL Server engine is amazingly good at serving up data, but it is up to the implementer to know what data is needed and to put the data into a format that works the way that SQL Server works.

I would also be remiss if I didn’t take this one step further. How can you build anything without design? Not one programmer would call up a contractor and say “build me a house, three bedrooms, two baths” and then just let the contractor sketch out a rough drawing and start working, would they? And while I realize that it seems that software is a lot more pliable than building a house, once you have built the foundation classes, the database, and started a UI, rework can be just as costly as running a bulldozer through a regular house, and on a particularly large software project, far more expensive.


What are some of the biggest mistakes that people make when designing a database?

I did something like this for Greg Low’s really cool SQL Down Under podcast (where you get a good idea of just how terrified I am of speaking publicly) but let’s just boil it down to two here:


  1. Treating database design like application design - The reason why the concept of a distinct business layer is so intriguing to me is that we can transform the user’s views/needs from the GUI through this layer to the data layer. Each layer of the application has its own needs (UI, Object Layer Representation, Data Access Layer, Stored Procedure Layer, Database Structures), each very important in its own right. More importantly, each has very different design goals.
  2. Forgetting about the future – During the design phase of any project, changes are very, very cheap. So if you realize you want to have a table with a domain of values instead of a simple varchar column, it is as simple as adding a rectangle to your model and connecting rectangles. Same goes with the discovery that the cardinality between two of your tables was not what you originally thought (“oh, a person is allowed to have more than one phone number in this system.”) However, as time passes, and more code is written, and more data is added to the system, the costs go up almost exponentially. Of course when the project manager has saddled you up and is giving you the crop, demanding results (that is their job after all) it can be all too easy to simply ignore the future and start coding. Always think about Future-You and how annoyed he is going to be when he realizes that had you just spent another hour three months ago, two weeks of work (of course corresponding to Future-You’s vacation to Disney World) could have been avoided.


There are so many others, bad naming, poor constraint usage, under-using stored procedures, ignoring security, weak testing, and so on and so on. But I think I can safely say that if you start with these two concepts, a great number of these bad habits will correct themselves.


Can you list any third party tools that you find useful to have as a SQL Server developer/admin?

I have become quite enamored with the Red-Gate tools, especially the database compare tool. I use it quite often to compare databases to find changes that have been made, especially for indexes and stuff done for tuning a database. The Apex tools are pretty good too.

The one tool I have used a lot lately has been the Automatic Indexing stuff from the query optimization team: (http://blogs.msdn.com/queryoptteam/archive/2006/06/01/613516.aspx). I am working right now on optimizing a homegrown/third party “object oriented”-esque tool that has a great deal of object abstraction built in, and have found tremendous performance gains from some really esoteric indexes this thing added to some of the queries that I would have never thought of in a month of Sundays. I wrote about it in more detail here: (http://drsql.spaces.live.com/blog/cns!80677FB08B3162E4!1443.entry and yes, I know this is the second time I have mentioned it, it is that important!)

In general, I am not so much a tool guy as I am a script guy. I use the UI to script out objects quite often, but then I try to edit everything with T-SQL so I know how it is done. The people who get too comfortable in the wizards or GUI often forget or don’t learn all of the tricks that you can do with the raw SQL commands, because it would just be too hard to build the perfect UI. So they give you 50% in the wizards and 90% in the GUI, and the rest is “advanced” stuff. If you don’t know about the nuts and bolts, you might be tempted to say “SQL Server doesn’t…”.

Lastly I have always been a big fan of ERwin for modeling. It does a really good job of representing my structures, and I like their macro language for building very customized scripts to build tables/views/procedures/etc.


What is the biggest Database in GB/PB that you have ever worked with, how many rows was your biggest table ever?

It was about ten feet across, six feet high. I had just started my new job as a LAN administrator and occasionally I had to fill in for the night operator. We had these 14 inch across reels of something that looked like a reel to reel tape, but they….wait, do you mean gigabytes and petabytes? I was thinking Great Big, or Pretty Big.

Well definitely no petabytes, and only recently have we started thinking in terms of terabytes, some of which is due to the fact that we have started storing everything as Unicode, essentially doubling the size of the database. We probably have around a half of a terabyte in our OLTP system and Data warehouse combined. I am never too excited about the size of a database as I am of the number of rows in a key table. You could have 10 billion rows and a very small database because it was designed nicely, or to the contrary, a million rows and have a terabyte if you just stored all of your data in text blobs (which is a favorite kind of storage for too many people.)

The largest database I designed I never got to see come to fruition because our company went through a couple of downsizing “events” (I was part of the next-to-last one). I only know it was large because a prior coworker of mine called me up one day and told me we had gone past the 2 billion row mark and they were changing to bigint rather than basic int types for the surrogate keys. Tons of rows, but really they were quite small rows. We were keeping information on Internet equipment that the company had deployed. It was a pretty cool system that I wish I could see in action today (if it is still being used. The concept was neat, but as the implementers we were never 100% sure if the data would be useful later in the process.)


When did you start blogging and why?
    When is easy, almost two years ago when I started writing my last book. The answer to why is pretty much the same as the answer to the technical books question, but increased an order of magnitude (as a reminder, that answer was: Two reasons: so I would have a reference to look into when I need it; and because I hate having free time.)

    Blogging is possibly the biggest time sink on earth really. When I am working a “normal” 40 or so hour week in my day (read: paying) job, it is easy to spend an hour or so a day blogging, but when my job squeezes my time, it gets really hard.

    But the most exciting part of blogging is the incredible active reference it gives me, in a format that really works for me. I can also use more of a shotgun approach and just write about what I want to write about on any topic, SQL or otherwise, rather than just writing about database design exclusively.

    Over the years, I have tried creating libraries of code on my laptop, but that never has seemed to work for me. As I have aged (a little) I have started to realize that commenting, blogging, writing, etc is not for you, it is for Future Me and other people. I have so much code on my laptop from the past 10 years that I don’t know what the heck to do with it. I have VB6, SQL from past ages, failed article attempts, diagrams, etc all over my projects directory. So Past Me (that jerk!) is punishing me with his laziness, something I now vow not to do to Future Me. Confused? Good.

    I started to realize that my blog had gotten out of control with so much information that I could no longer search and find when I found OfficeLive.com, a Microsoft site that allows you to create a free website for your small business. So I created drsql.org as a reference for my books, presentations, and my blog. On there you will find a decent enough index to my blog entries, usually within a month of being up to date, as well as a list of about ten articles that I can get to from the front page just about instantly.


    About the book:
    By Louis Davidson, Kevin Kline, Kurt Windisch
    ISBN: 1-59059-529-7
    672 pp.
    Published: May 2006

    Table Of Contents
    CHAPTER 1 Introduction to Database Concepts
    CHAPTER 2 Data Modeling
    CHAPTER 3 Conceptual Data Modeling
    CHAPTER 4 The Normalization Process
    CHAPTER 5 Implementing the Base Table Structures
    CHAPTER 6 Protecting the Integrity of Your Data
    CHAPTER 7 Securing Access to the Data
    CHAPTER 8 Table Structures and Indexing
    CHAPTER 9 Coding for Concurrency
    CHAPTER 10 Code-Level Architectural Decisions
    CHAPTER 11 Database Interoperability
    APPENDIX A Codd’s 12 Rules for an RDBMS
    APPENDIX B Datatype Reference
    INDEX

    Sample Chapter:
    Ch. 02 - Data Modeling


    Amazon Links: CA FR DE UK JP US

    Tuesday, November 28, 2006

    Use SCHEMABINDING In Views To Protect Against Alteration Or Dropping Of Tables

    My post from last night triggered something in me to write a little bit about SCHEMABINDING.
    If you create a view and someone drops one of the tables that the view depends on, the view will fail. How can you protect yourself against that? You can use WITH SCHEMABINDING. Here is how you use it:

    Let's first create some tables and the view

    CREATE TABLE dbo.Test1 (ID INT, SomeName VARCHAR(49))
    CREATE TABLE dbo.Test2 (ID2 INT, SomeOtherName VARCHAR(49))

    INSERT dbo.Test1 VALUES( 1,'Name1')
    INSERT dbo.Test1 VALUES( 2,'Name2')
    INSERT dbo.Test2 VALUES( 1,'OtherName1')
    INSERT dbo.Test2 VALUES( 2,'OtherName2')

    CREATE VIEW dbo.TestTables
    AS
    SELECT ID, SomeName, ID2, SomeOtherName
    FROM dbo.Test1 t1
    JOIN dbo.Test2 t2 ON t1.ID = t2.ID2

    --SELECT from the view and everything is fine
    SELECT *
    FROM dbo.TestTables


    -- Drop table test2
    DROP TABLE dbo.Test2

    --Let's do the select again
    SELECT * FROM dbo.TestTables

    --And this is the error message that we get back
    Server: Msg 208, Level 16, State 1, Procedure TestTables, Line 3
    Invalid object name 'Test2'.
    Server: Msg 4413, Level 16, State 1, Line 1
    Could not use view or function 'TestTables' because of binding errors.

    --Let's create the Test2 table again
    CREATE TABLE dbo.Test2 (ID2 INT, SomeOtherName VARCHAR(49))

    INSERT dbo.Test2 VALUES( 1,'OtherName1')
    INSERT dbo.Test2 VALUES( 2,'OtherName2')

    --Now let's alter the view and we will use SCHEMABINDING
    ALTER VIEW dbo.TestTables WITH SCHEMABINDING
    AS
    SELECT ID, SomeName, ID2, SomeOtherName
    FROM dbo.Test1 t1
    JOIN dbo.Test2 t2 ON t1.ID = t2.ID2


    --Try dropping the Test2 table again
    DROP TABLE dbo.Test2

    --And here is the message
    Server: Msg 3729, Level 16, State 1, Line 1
    Cannot DROP TABLE 'dbo.Test2' because it is being referenced by object 'TestTables'.

    --Let's try altering the table by adding a column
    ALTER TABLE dbo.Test2 ADD Col3 INT

    -- That seems to work
    SELECT * FROM dbo.Test2
    SELECT * FROM dbo.TestTables


    --Let's try altering the table by dropping a column
    ALTER TABLE dbo.Test2 DROP COLUMN SomeOtherName

    --Can't do that
    Server: Msg 5074, Level 16, State 3, Line 1
    The object 'TestTables' is dependent on column 'SomeOtherName'.
    Server: Msg 4922, Level 16, State 1, Line 1
    ALTER TABLE DROP COLUMN SomeOtherName failed because one or more objects access this column.

    --Let's try altering the table by changing a column size
    ALTER TABLE dbo.Test2 ALTER COLUMN SomeOtherName VARCHAR(48)

    --Same here; this is not allowed
    Server: Msg 5074, Level 16, State 3, Line 1
    The object 'TestTables' is dependent on column 'SomeOtherName'.
    Server: Msg 4922, Level 16, State 1, Line 1
    ALTER TABLE ALTER COLUMN SomeOtherName failed because one or more objects access this column.

    --Let's drop the column that is not used by the view
    ALTER TABLE dbo.Test2 DROP COLUMN Col3

    --No problem here either, if a column is not used by the view it can be dropped


    --Clean up the mess
    DROP VIEW dbo.TestTables
    DROP TABLE dbo.Test2, dbo.Test1

    So there you have it; SCHEMABINDING is a nice way to protect your views from dropped tables, modified columns or dropped columns on tables that are used in views.

    --You can add columns without a problem to the table used by a view.
    --You can modify a column that is not used in a view.
    --You can also drop a column that is not used in the view
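
    By the way, if you want to know whether a view you already have is schema-bound, you don't have to script it out; a quick metadata query will tell you. This is just a sketch using OBJECTPROPERTY (which works on both SQL Server 2000 and 2005); the view name is the one from the example above:

    ```sql
    --Is a single view schema-bound? (1 = yes, 0 = no)
    SELECT OBJECTPROPERTY(OBJECT_ID('dbo.TestTables'), 'IsSchemaBound') AS IsSchemaBound

    --List every schema-bound view in the current database
    SELECT name
    FROM sysobjects
    WHERE type = 'V'
    AND OBJECTPROPERTY(id, 'IsSchemaBound') = 1
    ```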

    Monday, November 27, 2006

    Do You Use Partitioned Views Or Indexed Views In SQL Server 2000

    I am in the process of interviewing people for a position at work. The people I have interviewed so far have between 8 and 10 years of SQL experience. I noticed one thing: none of these people knew what partitioned views or horizontal partitioning are, or what is involved in creating an indexed view (the SET options, SCHEMABINDING, etc.). I must say that I was surprised that people who have worked with SQL Server since 1998 didn't know about isolation levels, audit trails, the difference between a unique constraint and a primary key; the list goes on.

    So here is my question to you: Do you use partitioned views or indexed views?

    I must admit that I have created indexed views only twice on a production box. Partitioned views I use a lot more, since one of our tables grew to 180 million rows and it gets queried on a combination of columns. Creating partitioned views is not a big deal, but it involves some maintenance: if you add years (or whatever your partitioning column is), you then have to create those tables and add them to the view. Luckily SQL Server 2005 has partition functions, and this makes it a lot easier.
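
    For anyone who has never built one, here is a minimal sketch of a local partitioned view; the table and column names are made up for illustration. The important parts are that the partitioning column carries a CHECK constraint on each member table and is part of the primary key; that is what lets the optimizer skip the member tables a query doesn't need:

    ```sql
    --Each member table holds one year's worth of data
    CREATE TABLE dbo.Orders2005
    (OrderID INT NOT NULL,
     OrderYear INT NOT NULL CHECK (OrderYear = 2005),
     PRIMARY KEY (OrderID, OrderYear))

    CREATE TABLE dbo.Orders2006
    (OrderID INT NOT NULL,
     OrderYear INT NOT NULL CHECK (OrderYear = 2006),
     PRIMARY KEY (OrderID, OrderYear))
    GO

    --The partitioned view is just a UNION ALL over the member tables
    CREATE VIEW dbo.Orders
    AS
    SELECT OrderID, OrderYear FROM dbo.Orders2005
    UNION ALL
    SELECT OrderID, OrderYear FROM dbo.Orders2006
    GO

    --Because of the CHECK constraints, this only touches dbo.Orders2006
    SELECT OrderID FROM dbo.Orders WHERE OrderYear = 2006
    ```

    Adding a new year means creating another member table with its CHECK constraint and altering the view; that is the maintenance I mentioned above.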

    So is this an unreasonable question? It's not like I asked for the difference between a conversion deadlock and a cycle deadlock, is it?

    Saturday, November 25, 2006

    SQL Server - Best Practices Site Available On TechNet

    Microsoft TechNet has a page named SQL Server - Best Practices; this site has white papers, scripts, a toolbox, and top 10 lists.
    From the site:
    Get the real-world guidelines, expert tips, and rock-solid guidance to take your SQL Server implementation to the next level. Drawing on the extensive experience and expertise from respected developers and engineers at Microsoft as they walk you through the specifics on solving particularly difficult issues.



    Here is a preview of some of the material that is available right now:

    Technical White Papers
    Deep level technical papers on specific SQL Server topics that were tested and validated by SQL Development

    • SQL Server 2005 Performance Tuning using Waits and Queues

    • TEMPDB Capacity Planning and Concurrency Considerations for Index Create and Rebuild

    • Loading Bulk Data into a Partitioned Table

    • DBCC SHOWCONTIG Improvements and Comparison between SQL Server 2000 and SQL Server 2005

    • Database Mirroring Best Practices and Performance Considerations


    Best Practices ToolBox
    • Scripts and tools for performance tuning and troubleshooting SQL Server 2005


    Top 10 Lists
    Summary lists (usually consisting of 10 items) of recommendations, best practices and common issues for specific customer scenarios by the SQL Server Customer Advisory Team.

    • Storage Top 10 Best Practices

    • OLTP Top 6 Performance Issues for OLTP Applications


    Best Practices in SQL Server Books Online
    • Best Practices for Replication Administration

    • Replication Security Best Practices

    A must for every SQL Server developer: SQL Server - Best Practices

    Friday, November 24, 2006

    Auto Update Statistics Asynchronously.

    In SQL Server 2000, if the engine determined that statistics were out of date, it would update the statistics and then run the query. If you had a very large table, this could take a long time. In SQL Server 2005, by turning Auto Update Statistics Asynchronously on, the query will run with the old statistics, which is not optimal but will probably still be faster than first updating the statistics on a large table and then running the query. The statistics are updated after the query, and subsequent queries can take advantage of the updated statistics.


    Usage:
    AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }

    ON
    Queries that initiate an automatic update of out-of-date statistics will not wait for the statistics to be updated before compiling. Subsequent queries will use the updated statistics when they are available.
    OFF
    Queries that initiate an automatic update of out-of-date statistics wait until the updated statistics can be used in the query optimization plan.
    Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is set to ON.

    So how do you enable this? Here is an example for the AdventureWorks database; remember that setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is also set to ON.

    ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON WITH NO_WAIT

    What does Books On Line have to say about this?

    Asynchronous Statistics Updates

    A query that initiates an update of out-of-date statistics must wait for those statistics to be updated before compiling and returning a result set. This can cause unpredictable query response times and may cause applications that have aggressive time-outs to fail.

    In SQL Server 2005, the database option AUTO_UPDATE_STATISTICS_ASYNC provides asynchronous statistics updating. When this option is set to ON, queries do not wait for the statistics to be updated before compiling. Instead, the out-of-date statistics are put on a queue for updating by a worker thread in a background process. The query and any other concurrent queries compile immediately by using the existing out-of-date statistics. Because there is no delay for updated statistics, query response times are predictable; however, the out-of-date statistics may cause the query optimizer to choose a less-efficient query plan. Queries that start after the updated statistics are ready will use those statistics. This may cause the recompilation of cached plans that depend on the older statistics version.


    The AUTO_UPDATE_STATISTICS_ASYNC option is set at the database level and determines the update method for all statistics in the database. It is applicable only to statistics updating and cannot be used to asynchronously create statistics. Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is also set to ON. By default, the AUTO_UPDATE_STATISTICS_ASYNC option is OFF.
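
    If you want to check whether a database already has these options turned on, the sys.databases catalog view in SQL Server 2005 exposes both settings. Here is a sketch (using AdventureWorks as in the example above):

    ```sql
    --Turn on both options; ASYNC does nothing without AUTO_UPDATE_STATISTICS
    ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON
    ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON WITH NO_WAIT

    --Verify the settings (1 = on, 0 = off)
    SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
    FROM sys.databases
    WHERE name = 'AdventureWorks'
    ```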