These are the top SQL searches on this site for the month of September. I have left out searches that have nothing to do with SQL Server or programming (for example, Atlantic City escorts).
calculating application availability
pl/sql code to calculate application availability
vb .net Datagrid column naming
does not have the identity property
application availability report and pl/sql
autoincrement
datetime string
sqldatareader stored proc clr
OUTER JOIN
OUTER JOIN SQL 2000 example
I always find it interesting to see what people are searching for, and it also gives me ideas for things to write about.
Monday, October 02, 2006
Sunday, October 01, 2006
iWoz: From Computer Geek to Cult Icon: How I Invented the Personal Computer, Co-Founded Apple, and Had Fun Doing It
How I wish I had more time and needed less sleep (less than the 4-5 hours I am getting now). I am very excited about this book and will for sure put it on my Christmas list.
Book Description
The mastermind behind Apple sheds his low profile and steps forward to tell his story for the first time.
Before cell phones that fit in the palm of your hand and slim laptops that fit snugly into briefcases, computers were like strange, alien vending machines. They had cryptic switches, punch cards and pages of encoded output. But in 1975, a young engineering wizard named Steve Wozniak had an idea: What if you combined computer circuitry with a regular typewriter keyboard and a video screen? The result was the first true personal computer, the Apple I, a widely affordable machine that anyone could understand and figure out how to use.
Wozniak's life—before and after Apple—is a "home-brew" mix of brilliant discovery and adventure, as an engineer, a concert promoter, a fifth-grade teacher, a philanthropist, and an irrepressible prankster. From the invention of the first personal computer to the rise of Apple as an industry giant, iWoz presents a no-holds-barred, rollicking, firsthand account of the humanist inventor who ignited the computer revolution. 16 pages of illustrations.
Amazon link is here for those interested
Return All 78498 Prime Numbers Between 1 and 1000000 Continues in the Land Down Under
So this prime number challenge won't die; the other day I wrote about it in THIS post. Rob Farley from Down Under left me a comment with two approaches he took, and I decided to link to them from a separate post. His first attempt is primes and his second attempt is More On Primes. His approach is interesting since he doesn't delete from the table but actually inserts into the table. Make sure you check it out.
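If you want to play along without digging through those posts, here is a deliberately naive, self-contained sketch of the classic delete-the-composites approach (note that this is the opposite of Rob's insert-based technique and nowhere near as fast as the linked solutions; the temp table name and the 1000 limit are just for illustration):
SET NOCOUNT ON
DECLARE @limit INT
SET @limit = 1000
CREATE TABLE #Numbers (n INT PRIMARY KEY)
--fill 2..@limit the slow way; the linked posts show how to do this part fast
DECLARE @i INT
SET @i = 2
WHILE @i <= @limit
BEGIN
INSERT #Numbers VALUES (@i)
SET @i = @i + 1
END
--delete every number that still has a divisor in the table
DELETE n
FROM #Numbers n
WHERE EXISTS (SELECT * FROM #Numbers d
 WHERE d.n <= SQRT(n.n) --divisors only need to go up to the square root
 AND n.n % d.n = 0)
SELECT COUNT(*) AS PrimeCount FROM #Numbers --168 primes below 1000
DROP TABLE #Numbers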
Friday, September 29, 2006
Trouble With ISDATE And Converting To SMALLDATETIME
If you want to use the ISDATE function to decide whether a value can be converted to SMALLDATETIME, you also have to take into consideration that SMALLDATETIME stores date and time data from January 1, 1900, through June 6, 2079, while DATETIME stores date and time data from January 1, 1753, through December 31, 9999.
So even though the ISDATE function returns 1 for the date 1890-01-01, it cannot be converted to SMALLDATETIME, and you will receive an error message when you run the following statement:
SELECT CONVERT(SMALLDATETIME,'18900101')
Server: Msg 296, Level 16, State 3, Line 1
The conversion of char data type to smalldatetime data type resulted in an out-of-range smalldatetime value.
Also be careful with rounding
Run these four statements
SELECT CONVERT(SMALLDATETIME,'2079-06-06 23:59:29')
SELECT CONVERT(SMALLDATETIME,'2079-06-06 23:59:29.998')
SELECT CONVERT(SMALLDATETIME,'2079-06-06 23:59:29.999')
SELECT CONVERT(SMALLDATETIME,'2079-06-06 23:59:30')
The first two are fine; the second two blow up because the seconds get rounded up to the next minute (and hour), which pushes the value into the next day, outside the SMALLDATETIME range.
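To see the rounding rule by itself, away from the edge of the range, here is a quick sketch with an arbitrary date: SMALLDATETIME rounds to the minute, fractional seconds of 29.998 or lower round down, and 29.999 or higher round up.
SELECT CONVERT(SMALLDATETIME,'2006-09-29 10:30:29.998') --2006-09-29 10:30:00
SELECT CONVERT(SMALLDATETIME,'2006-09-29 10:30:29.999') --2006-09-29 10:31:00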
I decided to roll my own fnIsSmallDateTime() function, because who wants to write the same CASE WHEN ISDATE ... BETWEEN this AND that code all over the place?
Here is the code for the user defined function
CREATE FUNCTION fnIsSmallDateTime (@d VARCHAR(50))
RETURNS BIT
AS
BEGIN
    DECLARE @bitReturnValue BIT

    SELECT @bitReturnValue = CASE
        WHEN ISDATE(@d) = 1 THEN CASE
            WHEN CONVERT(DATETIME, @d) >= '19000101'
                AND CONVERT(DATETIME, @d) <= '20790606 23:59:29.998' THEN 1
            ELSE 0
        END
        ELSE 0
    END

    RETURN @bitReturnValue
END
GO
Let's create a test table with values
CREATE TABLE TestSmallDate (SomeDate VARCHAR(40))
INSERT TestSmallDate VALUES ('19000101')
INSERT TestSmallDate VALUES ('18991231')
INSERT TestSmallDate VALUES ('19010101')
INSERT TestSmallDate VALUES('20790607')
INSERT TestSmallDate VALUES ('2079-06-06 23:59:29.677')
INSERT TestSmallDate VALUES ('2079-06-06 23:59:29.998')
INSERT TestSmallDate VALUES ('2079-06-06 23:59:29.999')
INSERT TestSmallDate VALUES ('2079-06-06 23:59:59.000')
INSERT TestSmallDate VALUES('2079-06-06 01:00:00')
INSERT TestSmallDate VALUES ('2079-06-06 00:00:00')
INSERT TestSmallDate VALUES ('2079-06-06 00:00:01')
INSERT TestSmallDate VALUES('WhoIsYourDaddy')
If you want NULL for values that cannot be converted to SMALLDATETIME, use this code:
SELECT dbo.fnIsSmallDateTime(SomeDate),
CASE dbo.fnIsSmallDateTime(SomeDate)
WHEN 1 THEN CONVERT(SMALLDATETIME,SomeDate) END AS ConvertedToSmallDate,
SomeDate
FROM TestSmallDate
If you want to convert the values that cannot be converted to SMALLDATETIME to '1900-01-01 00:00:00' instead, use the code below:
SELECT dbo.fnIsSmallDateTime(SomeDate),
CASE dbo.fnIsSmallDateTime(SomeDate)
WHEN 1 THEN CONVERT(SMALLDATETIME,SomeDate)
ELSE CONVERT(SMALLDATETIME,'19000101') END AS ConvertedToSmallDate,
SomeDate
FROM TestSmallDate
Return only data that can be converted to SMALLDATETIME
SELECT * FROM TestSmallDate
WHERE dbo.fnIsSmallDateTime(SomeDate) =1
Return only data that cannot be converted to SMALLDATETIME:
SELECT * FROM TestSmallDate
WHERE dbo.fnIsSmallDateTime(SomeDate) =0
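If you would rather not pay the cost of calling a scalar UDF once per row on a big table, the same range check can be written inline; this is just the body of the function spelled out, wrapped in a CASE so the conversion is never attempted on values that are not dates:
SELECT SomeDate
FROM TestSmallDate
WHERE CASE
 WHEN ISDATE(SomeDate) = 1 THEN CASE
  WHEN CONVERT(DATETIME, SomeDate) >= '19000101'
   AND CONVERT(DATETIME, SomeDate) <= '20790606 23:59:29.998' THEN 1
  ELSE 0
 END
 ELSE 0
END = 1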
SQL Server Application Platform Podcast About SQL Server Service Broker On Channel 9
Channel 9 has a two-part podcast with Roger Wolter about SQL Server Service Broker. WMA, MP3, and video formats are available for download.
From the site: "You are thinking of a messaging solution for your application. A solution that can exchange messages reliably, predictably and in-order. A solution that offers queue like functionality only better. What is it you ask? None other than SQL Server 2005 and this very interesting technology known as SQL Service Broker that is built right into it. On today’s program I’m joined by my colleague Roger Wolter who is going to give us all the juicy details"
Get the episodes here --> part1, part2
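If you want a feel for what that queue-like functionality looks like in T-SQL before you listen, here is a minimal sketch (all of the object names are made up, and it assumes a database that has Service Broker enabled):
CREATE MESSAGE TYPE HelloMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR);
CREATE QUEUE SenderQueue;
CREATE QUEUE ReceiverQueue;
CREATE SERVICE SenderService ON QUEUE SenderQueue (HelloContract);
CREATE SERVICE ReceiverService ON QUEUE ReceiverQueue (HelloContract);
GO
--send one message from the sender to the receiver
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
FROM SERVICE SenderService
TO SERVICE 'ReceiverService'
ON CONTRACT HelloContract
WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE HelloMessage ('<hello/>');
GO
--pick the message up on the other side
RECEIVE TOP (1) CONVERT(XML, message_body) AS body FROM ReceiverQueue;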
Wednesday, September 27, 2006
Cool And Sexy New SQL Server Blog
That's right! What is more cool or sexy than query optimizations? It doesn't matter how beautiful or complex your data model is; if you show your boss that a query used to take 17 seconds and now runs in 300 milliseconds, then you are the new SQL superhero.
If some of the following terms are foreign to you (CTRL + K, Index Scan, Index Seek, Table Scan, Sargable, Index Hint, Parameter Sniffing, Missing Statistics, L2 Cache, Compilation, Optimal Plans) then I have the blog for you right here
Tips, Tricks, and Advice from the SQL Server Query Processing Team
Even if you do know about those terms, this is still the blog for you, since there is a ton of stuff that you don't know yet. So make sure to check it out and add it to your feed.
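As a taste of the sargable item on that list, here is a tiny sketch (the Orders table and OrderDate column are made up): the first query leaves the column alone so an index on OrderDate can be used for a seek, while the second wraps the column in functions and forces a scan.
--sargable: can seek on an index on OrderDate
SELECT * FROM Orders WHERE OrderDate >= '20060101' AND OrderDate < '20060201'
--not sargable: the functions on the column force a scan
SELECT * FROM Orders WHERE YEAR(OrderDate) = 2006 AND MONTH(OrderDate) = 1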
Tuesday, September 26, 2006
Return A Rowcount By Using Count Or Sign
Sometimes you are asked by the front-end/middle-tier developers to return a rowcount along with the result set. However, the developers want you to return 1 if there are rows and 0 if there are none. How do you do such a thing?
Well, I am going to show you two ways: the first way is by using CASE and @@ROWCOUNT, and the second way is by using the SIGN function.
For CASE we will do this
RETURN CASE WHEN @@ROWCOUNT > 0 THEN 1 ELSE 0 END
So that's pretty simple: if @@ROWCOUNT is greater than 0, return 1; for everything else, return 0.
Using the SIGN function is even easier, all you have to do is this
RETURN SIGN(@@ROWCOUNT)
That's all there is to it; SIGN returns the positive (+1), zero (0), or negative (-1) sign of the given expression. In this case -1 is not possible, but the other two values are.
So let's see this in action
USE pubs
GO
--Case Proc
CREATE PROCEDURE TestReturnValues
@au_id VARCHAR(49) ='172-32-1176'
AS
SELECT *
FROM authors
WHERE au_id =@au_id
RETURN CASE WHEN @@ROWCOUNT > 0 THEN 1 ELSE 0 END
GO
--Sign Proc
CREATE PROCEDURE TestReturnValues2
@au_id VARCHAR(49) ='172-32-1176'
AS
SELECT *
FROM authors
WHERE au_id =@au_id
RETURN SIGN(@@ROWCOUNT)
GO
--Case Proc, 1 will be returned; default value is used
DECLARE @Rowcount int
EXEC @Rowcount = TestReturnValues
SELECT @Rowcount
GO
--Case Proc, 0 will be returned; dummy value is used
DECLARE @Rowcount int
EXEC @Rowcount = TestReturnValues 'ABC'
SELECT @Rowcount
GO
--Sign Proc, 1 will be returned; default value is used
DECLARE @Rowcount int
EXEC @Rowcount = TestReturnValues2
SELECT @Rowcount
GO
--Sign Proc, 0 will be returned; dummy value is used
DECLARE @Rowcount int
EXEC @Rowcount = TestReturnValues2 'ABC'
SELECT @Rowcount
GO
--Help the environment by recycling ;-)
DROP PROCEDURE TestReturnValues2,TestReturnValues
GO
Monday, September 25, 2006
Happy One Year Anniversary
So here we are, one year and 236 posts later. I cannot believe that it has been one year already. First of all, I will make two small changes. The first change is that I will feature a blog/site of the week; this will always happen on a Friday. I will link to the blog and to the 5 most interesting posts/articles. If possible I will say a little something about the person whose site it is, something like "author of this book, and an interview is available here".
The second change is that I will write some stuff that has nothing to do with SQL Server but might still be of interest to you. This I will publish on weekends, so that you can skip it easily if you check on weekdays only. What will I write? Maybe something that goes on in my life, or a book or movie review. However, I will not review The Matrix, Titanic, or some other well-known movie. No, I will pick something that is not as popular, for example Ghost In The Machine, The Seven Samurai, or The Animatrix. For books this could be Crypto, The Cobra Event, or The Coming Plague.
Or I could write that once you have kids and you do NOT have TiVo, Comcast On Demand really rocks. For example, Jericho is a show that I just started to watch; it reminded me a little bit of The Stand by Stephen King (his best book, together with Thinner, It, and 'Salem's Lot).
So what is so cool about On Demand? No commercials, that’s right; nada. Pause and Resume for up to 24 hours, this is a must have with newborns.
Comcast announced a deal with CBS to have the following shows available free the day after they air: CSI: Crime Scene Investigation, CSI: Miami, CSI: NY, Survivor, NCIS, Numb3rs, Jericho, and Big Brother.
That’s it for now
Return All 78498 Prime Numbers Between 1 and 1000000 In 3 seconds
That is right folks; SQL Server is capable of returning all 78498 prime numbers between 1 and 1000000 in 3 seconds. Who said that SQL Server isn't suitable for this task?
Let's start with a little bit of history; Ward Pond had a posting on his blog on how to create a table with 1000000 rows. Hugo Kornelis replied with a solution that ran in 1110 ms. For fun I left the following comment: “How about the next challenge is to return all 78498 prime numbers between 1 and 1000000?”
Ward took the challenge and posted a solution that would take hours to complete. Then Hugo Kornelis posted a solution that took 8 seconds. After that Ward tweaked Hugo’s solution and got it down to 3 seconds. That is just unbelievable. I wonder how long it would run if you were to code something like that in C, C++, C# or your favorite language?
Any takers?
Wednesday, September 20, 2006
Five Ways To Return Values From Stored Procedures
I have answered a bunch of questions over the last couple of days and some of them had to do with returning values from stored procedures
Everyone knows that you can return a value by using RETURN inside a stored procedure. What not everyone knows is that RETURN can only return an int data type.
So how do you return something that is not an int (bigint, smallint, etc.) data type?
Let's take a look
We will start with a regular return statement, everything works as expected
--#1 return
CREATE PROCEDURE TestReturn
AS
SET NOCOUNT ON
DECLARE @i int
SELECT @i = DATEPART(hh,GETDATE())
RETURN @i
SET NOCOUNT OFF
GO
DECLARE @SomeValue int
EXEC @SomeValue = TestReturn
SELECT @SomeValue
GO
Now let's try returning a varchar
ALTER PROCEDURE TestReturn
AS
SET NOCOUNT ON
DECLARE @i VARCHAR(50)
SELECT @i = DATENAME(mm,GETDATE())
RETURN @i
SET NOCOUNT OFF
GO
DECLARE @SomeValue VARCHAR(50)
EXEC @SomeValue = TestReturn
SELECT @SomeValue
GO
Oops, that doesn't work; the following message is returned (if you run it in September):
Server: Msg 245, Level 16, State 1, Procedure TestReturn, Line 7
Syntax error converting the varchar value 'September' to a column of data type int.
Let's try hard coding a character value
ALTER PROCEDURE TestReturn
AS
SET NOCOUNT ON
RETURN 'ab'
SET NOCOUNT OFF
GO
DECLARE @SomeValue VARCHAR(50)
EXEC @SomeValue = TestReturn
SELECT @SomeValue
GO
It is interesting that the procedure compiles without a problem, but when we try to run it the following message is displayed:
Server: Msg 245, Level 16, State 1, Procedure TestReturn, Line 7
Syntax error converting the varchar value 'ab' to a column of data type int.
So what can we do? Well, we can use an OUTPUT parameter. By the way, the following four ways to return a varchar value are listed in order from best to worst.
--#2 OUTPUT
ALTER PROCEDURE TestReturn @SomeParm VARCHAR(50) OUTPUT
AS
SET NOCOUNT ON
SELECT @SomeParm = 'ab'
SET NOCOUNT OFF
GO
DECLARE @SomeValue VARCHAR(50)
EXEC TestReturn @SomeParm = @SomeValue OUTPUT
SELECT @SomeValue
GO
Another way is to create a temp table and call the proc with insert..exec
--#3 Insert Into TEMP Table outside the proc
ALTER PROCEDURE TestReturn
AS
SET NOCOUNT ON
SELECT 'ab'
SET NOCOUNT OFF
GO
DECLARE @SomeValue VARCHAR(50)
CREATE TABLE #Test(SomeValue VARCHAR(50))
INSERT INTO #Test
EXEC TestReturn
SELECT @SomeValue = SomeValue
FROM #Test
SELECT @SomeValue
DROP TABLE #Test
GO
This one is almost the same as the previous example; the only difference is that the insert happens inside the proc.
And of course, if you call the proc without creating the temp table first, you will get a nice error message.
--#4 Insert Into TEMP Table inside the proc
ALTER PROCEDURE TestReturn
AS
SET NOCOUNT ON
INSERT INTO #Test
SELECT 'ab'
SET NOCOUNT OFF
GO
DECLARE @SomeValue VARCHAR(50)
CREATE TABLE #Test(SomeValue VARCHAR(50))
EXEC TestReturn
SELECT @SomeValue = SomeValue
FROM #Test
SELECT @SomeValue
DROP TABLE #Test
And last, you create a permanent table with an identity column; in the proc you insert into that table and return the identity value. You can then use that identity value to look up the varchar value.
--#5 Insert Into A Table And Return The Identity value
CREATE TABLE HoldingTable(ID INT IDENTITY,SomeValue VARCHAR(50))
GO
ALTER PROCEDURE TestReturn
AS
SET NOCOUNT ON
DECLARE @i INT
INSERT INTO HoldingTable
SELECT 'ab'
SELECT @i = SCOPE_IDENTITY()
RETURN @i
SET NOCOUNT OFF
GO
DECLARE @SomeValue VARCHAR(50), @i INT
EXEC @i = TestReturn
SELECT @SomeValue = SomeValue
FROM HoldingTable
WHERE ID = @i
SELECT @SomeValue
DROP PROCEDURE TestReturn
Tuesday, September 19, 2006
You Can Rollback Tables That You Have Truncated (Inside A Transaction)
There seems to be a misconception that when you issue a TRUNCATE command against a table you will not be able to roll back.
That simply is not true; TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.
What does this mean? It means that SQL Server uses the minimum amount of logging that it can to delete the data and still make it recoverable. In contrast, the DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row.
You can see why TRUNCATE is so much faster; it deals with pages, not with rows. And we all know that 1 extent is 8 pages, and a page is 8 KB and can hold 8060 bytes of row data. So if your rows are 20 bytes wide, DELETE has to log 403 individual row deletes per page, but TRUNCATE just logs the page deallocations.
So let's see how that works
--Create the table and insert 6 values
CREATE TABLE RollBacktest(id INT)
INSERT RollBacktest VALUES( 1 )
INSERT RollBacktest VALUES( 2 )
INSERT RollBacktest VALUES( 3 )
INSERT RollBacktest VALUES( 4 )
INSERT RollBacktest VALUES( 5 )
INSERT RollBacktest VALUES( 6 )
GO
--Should be 6 rows
SELECT 'Before The Transaction',* FROM RollBacktest
BEGIN TRAN RollBackTestTran
TRUNCATE TABLE RollBacktest
--Should be empty resultset
SELECT * FROM RollBacktest
--should be 0
SELECT COUNT(*) AS 'TruncatedCount' FROM RollBacktest
ROLLBACK TRAN RollBackTestTran
--Yes it is 6 again
SELECT 'ROLLED BACK',* FROM RollBacktest
DROP TABLE RollBacktest
Monday, September 18, 2006
DDL Trigger Events Documented In Books On Line
A while back I wrote about DDL trigger events in a post named DDL Trigger Events Revisited
And I claimed that this stuff wasn't documented
Well, I was wrong; this information is documented in the Books Online topic "Event Groups for Use with DDL Triggers".
The link to the online Books Online topic is below:
http://msdn2.microsoft.com/en-us/library/ms191441.aspx
Anyway, they have an image, so at least you can copy and paste the code I gave you ;-)
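If you have not played with the event groups yet, here is a minimal sketch of a database-scoped DDL trigger that uses one of them (the trigger name is made up; DDL_TABLE_EVENTS covers CREATE, ALTER, and DROP TABLE):
CREATE TRIGGER trg_LogTableChanges
ON DATABASE
FOR DDL_TABLE_EVENTS
AS
BEGIN
 DECLARE @cmd NVARCHAR(MAX)
 SET @cmd = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','NVARCHAR(MAX)')
 PRINT @cmd --or insert it into a logging table
END
GO
--DROP TRIGGER trg_LogTableChanges ON DATABASE --cleanup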
Friday, September 15, 2006
Do Not Concatenate VARCHAR and VARCHAR(MAX) Variables
Do not concatenate VARCHAR and VARCHAR(MAX) variables; what happens is that the whole string will be implicitly converted to varchar(8000) and truncated.
Run these examples to see what I mean
declare @v varchar(max)
select @v = (cast('a' as varchar)) + replicate('a', 9000)
select len(@v)
--8000
GO
declare @v varchar(max)
select @v = (cast('a' as varchar(1))) + replicate('a', 9000)
select len(@v)
--8000
GO
declare @v varchar(max)
select @v = (cast('a' as varchar)) +replicate (cast('a' as varchar(max)), 9000)
select len(@v)
--9001
GO
declare @v varchar(max)
select @v = (cast('a' as varchar(1))) + replicate(cast('a' as varchar(max)), 9000)
select len(@v)
--9001
GO
Or how about this? If you don't convert the input to varchar(max), REPLICATE caps its result at 8000 characters, so LEN returns 8000.
declare @v varchar(max)
select @v = replicate('a', 9000)
select len(@v)
declare @v varchar(max)
select @v = replicate(cast('a' as varchar(max)), 9000)
select len(@v)
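Note that there are really two separate 8000-character caps in these examples: REPLICATE truncates its result when its input is not a max type, and a concatenation of only non-max strings is capped as well. So casting just one side of the concatenation is not always enough; if I am reading the REPLICATE rules right, this sketch should return 8001, because the concatenation is varchar(max) but the REPLICATE itself is still capped at 8000.
declare @v varchar(max)
select @v = cast('a' as varchar(max)) + replicate('a', 9000)
select len(@v)
--8001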
Thursday, September 14, 2006
O'Reilly Code Search
Here is something handy:
Announcing O'Reilly Code Search, where you can enter search terms to find relevant sample code from nearly 700 O'Reilly books. The database currently contains over 123,000 individual examples, comprises 2.6 million lines of code, all edited and ready to use.
It's pretty neat; all the source code from all the O'Reilly books is searchable online.
So to search for the term SELECT in the category SQL you would enter "cat:sql select", and this would return these results: http://labs.oreilly.com/search.xqy?t=code&q=cat%3Asql+select
For C# you would do "cat:csharp select", and to limit it to SQL Server instead of just SQL it would be "cat:sql server select"
Let me know what you think
Wednesday, September 13, 2006
What Is Your Corporate Standard
If you are not a consultant and you work for a company then does your company have a corporate standard for development languages/products?
Our IT department is about 800 people, and to get good support you cannot have 3,000 different products in your shop. As of today, this is what is supported in our company:
Java Stack
Sun's Project Tango
Apache Web Server 2.x
Tomcat 5.x (web container), JBoss 4.x (EJB and Web Container), WebSphere Network Edition 6.1.x (web and EJB container)
Hibernate 2.x, Spring 1.2.x
Sun's J2SE 5 (aka J2SE 1.5.x)
MySQL 5.x, Oracle 10g, SQL Server 2005
.NET Stack
WCF
IIS 6
.NET 2.0
CLR Version 2
MySQL 5.x, Oracle 10g, SQL Server 2005
Of course we have other things that we use, such as ColdFusion and SQL Server 2000; that is fine, but no NEW development is supposed to be done with those tools/products.
So here is my question to you; what is your corporate standard?
The sum or average aggregate operation cannot take a bit data type as an argument
The sum or average aggregate operation cannot take a bit data type as an argument.
Oh yes, I fell for this one yesterday. It's not that I didn't know about it (somewhere in the back of my head); it's just that I forgot.
I was answering a question in the Microsoft forums and someone wanted to sum something; unfortunately the data type was bit, and as we all know, bit data types cannot be used with AVG or SUM.
You see, that's why it is important when asking a question to provide DDL and INSERT scripts. If I had had those, I would have gotten the error myself and would have modified the query by converting to int.
So instead of this (simplified)
SELECT SUM(col1)
FROM (SELECT CONVERT(BIT,1) AS col1 UNION ALL
SELECT CONVERT(BIT,0) )P
I would have done this
SELECT SUM(CONVERT(INT,col1))
FROM (SELECT CONVERT(BIT,1) AS col1 UNION ALL
SELECT CONVERT(BIT,0) )P
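Just as an aside (this was not part of the original question), a CASE inside the SUM, or a COUNT over NULLIF, gives the same result without the convert:
SELECT SUM(CASE WHEN col1 = 1 THEN 1 ELSE 0 END) AS SumViaCase,
 COUNT(NULLIF(col1,0)) AS CountOfOnes
FROM (SELECT CONVERT(BIT,1) AS col1 UNION ALL
SELECT CONVERT(BIT,0) )P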
And of course we should all read this-->
http://classicasp.aspfaq.com/general/how-do-i-make-sure-my-asp-question-gets-answered.html
Does this qualify as a rant? I hope not.
Sunday, September 10, 2006
sys.dm_db_index_usage_stats
This is the second article about the dynamic management views in SQL Server 2005; to see all of them, click here.
Today we are going to talk about the sys.dm_db_index_usage_stats dynamic management view.
This view is extremely helpful in a couple of ways; I will list some of them:
It can help you identify if an index is used or not
You can also find out the scan to seek ratio
Another helpful thing is the fact that the last seek and scan dates are in the view, this can help you determine if the index is still used
So let's get started shall we?
CREATE TABLE TestIndex(id INT identity,
SomeID INT not null,
SomeDate DATETIME not null)
GO
CREATE CLUSTERED INDEX IX_TestIndexID ON TestIndex(SomeID)
GO
CREATE NONCLUSTERED INDEX IX_TestIndexDate ON TestIndex(SomeDate)
GO
INSERT TestIndex VALUES(1,GETDATE())
GO
INSERT TestIndex VALUES(2,GETDATE()-1)
GO
--Run the sys.dm_db_index_usage_stats query
SELECT
TableName = OBJECT_NAME(s.[object_id]),
IndexName = i.name,
s.last_user_seek,
s.user_seeks,
CASE s.user_seeks WHEN 0 THEN 0
ELSE s.user_seeks*1.0 /(s.user_scans + s.user_seeks) * 100.0 END AS SeekPercentage,
s.last_user_scan,
s.user_scans,
CASE s.user_scans WHEN 0 THEN 0
ELSE s.user_scans*1.0 /(s.user_scans + s.user_seeks) * 100.0 END AS ScanPercentage,
s.last_user_lookup,
s.user_lookups,
s.last_user_update,
s.user_updates,
s.last_system_seek,
s.last_system_scan,
s.last_system_lookup,
s.last_system_update,*
FROM
sys.dm_db_index_usage_stats s
INNER JOIN
sys.indexes i
ON
s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
WHERE
s.database_id = DB_ID()
AND OBJECTPROPERTY(s.[object_id], 'IsMsShipped') = 0
AND OBJECT_NAME(s.[object_id]) = 'TestIndex';
After each of the select queries below, run the sys.dm_db_index_usage_stats query above.
--user_updates should be 2 but user_seeks,user_scans, user_lookups should be 0
SELECT *
FROM TestIndex
WHERE ID =1
--IX_TestIndexID user_scans = 1
SELECT *
FROM TestIndex
WHERE SomeID =1
--IX_TestIndexID user_seeks = 1
SELECT *
FROM TestIndex
WHERE SomeDate > GETDATE() -1
AND SomeID =1
--IX_TestIndexID user_seeks = 2
--let's force the optimizer to use the IX_TestIndexDate index
SELECT *
FROM TestIndex WITH (INDEX = IX_TestIndexDate)
WHERE SomeDAte > GETDATE() -1
--IX_TestIndexDate user_seeks = 1
For IX_TestIndexID this gives SeekPercentage = 66.66% and ScanPercentage = 33.33%.
As you can see I have added the following code
CASE s.user_seeks WHEN 0 THEN 0
ELSE s.user_seeks*1.0 /(s.user_scans + s.user_seeks) * 100.0 END AS SeekPercentage
CASE s.user_scans WHEN 0 THEN 0
ELSE s.user_scans*1.0 /(s.user_scans + s.user_seeks) * 100.0 END AS ScanPercentage
This is helpful for determining the seek/scan ratio; if you have mostly scans, then maybe you have to look at your queries and see if they can be optimized.
If you run the sys.dm_db_index_usage_stats query again you will see that the user_updates column is 2; that's because we inserted 2 rows (in 2 batches).
Let's do this
UPDATE TestIndex
SET SomeID = SomeID + 1
--(2 row(s) affected)
Now user_updates is 3 since we used 1 batch that modified 2 rows
Now restart your server and run the same query again. As you can see, the result set is empty; this is because the counters are initialized to empty whenever the SQL Server (MSSQLSERVER) service is started. In addition, whenever a database is detached or is shut down (for example, because AUTO_CLOSE is set to ON), all rows associated with the database are removed.
When an index is used, a row is added to sys.dm_db_index_usage_stats if a row does not already exist for the index. When the row is added, its counters are initially set to zero.
When you run this query
SELECT *
FROM TestIndex
You will see a row again after you run the sys.dm_db_index_usage_stats query
Also note that every individual seek, scan, lookup, or update on the specified index by one query execution is counted as a use of that index and increments the corresponding counter in this view. Information is reported both for operations caused by user-submitted queries, and for operations caused by internally generated queries, such as scans for gathering statistics.
The user_updates counter indicates the level of maintenance on the index caused by insert, update, or delete operations on the underlying table or view. You can use this view to determine which indexes are used only lightly by your applications. You can also use the view to determine which indexes are incurring maintenance overhead. You may want to consider dropping indexes that incur maintenance overhead but are not used for queries, or are only infrequently used for queries.
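As a sketch of that maintained-but-never-read check, this finds indexes in the current database that have been updated but never seeked, scanned, or looked up since the counters were last reset:
SELECT
TableName = OBJECT_NAME(s.[object_id]),
IndexName = i.name,
s.user_updates
FROM
sys.dm_db_index_usage_stats s
INNER JOIN
sys.indexes i
ON
s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
WHERE
s.database_id = DB_ID()
AND OBJECTPROPERTY(s.[object_id], 'IsMsShipped') = 0
AND s.user_seeks = 0
AND s.user_scans = 0
AND s.user_lookups = 0
AND s.user_updates > 0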
sys.dm_db_index_usage_stats columns:
database_id (smallint): ID of the database on which the table or view is defined.
object_id (int): ID of the table or view on which the index is defined.
index_id (int): ID of the index.
user_seeks (bigint): Number of seeks by user queries.
user_scans (bigint): Number of scans by user queries.
user_lookups (bigint): Number of lookups by user queries.
user_updates (bigint): Number of updates by user queries.
last_user_seek (datetime): Time of last user seek.
last_user_scan (datetime): Time of last user scan.
last_user_lookup (datetime): Time of last user lookup.
last_user_update (datetime): Time of last user update.
system_seeks (bigint): Number of seeks by system queries.
system_scans (bigint): Number of scans by system queries.
system_lookups (bigint): Number of lookups by system queries.
system_updates (bigint): Number of updates by system queries.
last_system_seek (datetime): Time of last system seek.
last_system_scan (datetime): Time of last system scan.
last_system_lookup (datetime): Time of last system lookup.
last_system_update (datetime): Time of last system update.
Saturday, September 09, 2006
Don't Use Union On Tables With Text Columns
When you have a SQL UNION between two or more tables and some of these tables have columns with a text data type, use UNION ALL instead of UNION.
If you use UNION you will be given the following message
Server: Msg 8163, Level 16, State 4, Line 10
The text, ntext, or image data type cannot be selected as DISTINCT.
What happens is that UNION uses DISTINCT behind the scenes, and you cannot use DISTINCT on text, ntext, or image data types.
Run this script to see what I mean
CREATE TABLE TestUnion1 (id INT,textCol TEXT)
CREATE TABLE TestUnion2 (id INT,textCol TEXT)
GO
INSERT TestUnion1 VALUES(1,'abc')
INSERT TestUnion2 VALUES(1,'abc')
INSERT TestUnion1 VALUES(1,'aaa')
INSERT TestUnion1 VALUES(1,'zzz')
INSERT TestUnion1 VALUES(3,'abc')
--problem
SELECT * FROM TestUnion1
UNION --ALL
SELECT * FROM TestUnion2
--no problem
SELECT * FROM TestUnion1
UNION ALL
SELECT * FROM TestUnion2
DROP TABLE TestUnion1,TestUnion2
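If you genuinely need the duplicate-removing behavior of UNION, one workaround on SQL Server 2005 (just a sketch; run it before the DROP TABLE above, or re-create the two tables first) is to convert the text columns to VARCHAR(MAX), which DISTINCT can handle:
SELECT id, CONVERT(VARCHAR(MAX), textCol) AS textCol FROM TestUnion1
UNION
SELECT id, CONVERT(VARCHAR(MAX), textCol) FROM TestUnion2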
Thursday, September 07, 2006
SQL Server 2005 Failover Clustering White Paper
Microsoft has published a comprehensive document about implementing failover clustering for SQL Server 2005 and Analysis Services
Overview
This white paper is intended for a technical audience and not technical decision makers. It complements the existing documentation around planning, implementing, and administering of a failover cluster that can be found in Microsoft SQL Server 2005 Books Online. To ease the upgrade process for existing users of failover clustering, this white paper also points out differences in the failover clustering implementation of SQL Server 2005 compared to SQL Server 2000.
Get it here
Kalen Delaney Has Finished Inside SQL Server 2005: The Storage Engine And Is Also Blogging On SQLblog.com
Some good news that I am very excited about: Kalen Delaney has finished Inside SQL Server 2005: The Storage Engine. I have already pre-ordered her book but will have to wait until November 8, 2006, when it will ship (hopefully). I have her 2000 edition, and it's my favorite book together with Ken Henderson's Guru series. Kalen has also started to blog on SQLblog.com.
So what am I currently reading, and what else am I going to buy?
Currently I am reading a very good SQL book by Louis Davidson named Pro SQL Server 2005 Database Design and Optimization. I hope to be done by the time Inside SQL Server 2005: The Storage Engine ships; I should be, if the kids let me. Pro SQL Server 2005 Database Design and Optimization is a very good book and starts with the Data Model and goes all the way to Database Interoperability. Some other things covered are Protecting the Integrity of Your Data, Table Structures and Indexing, and Coding for Concurrency.
This book also does a very good job of explaining Codd's 12 Rules for an RDBMS.
What am I going to buy next?
The next book on my list is Expert SQL Server 2005 Development by Adam Machanic. I like the chapters that Adam wrote in Pro SQL Server 2005, I like what he does in the newsgroups, and I like his blog. So that is enough for me to check out the book.
After that I will buy SQL Server 2005 Practical Troubleshooting: The Database Engine by Ken Henderson which will be published December 5, 2006 (Sinterklaas dag for all you Dutch people)
I have 3 of Ken's books and I will get this one and the follow up to The Guru's Guide to SQL Server Stored Procedures, XML, and HTML which will be published May 31, 2007
So I went a little overboard with the links, this post has more blue characters than black ones.
So what is on your list and what are you currently reading?
I am also interested in getting A Developer's Guide to SQL Server 2005 by Bob Beauchemin. We will see; if I finish these books and the others are not published yet then I will. I did not have this problem when I used to take the Amtrak/NJ Transit train from Princeton to New York City (lots of time to read). Right now I work and live in Princeton and my commute is about 8 minutes