Friday, May 22, 2020

PostgreSQL adds FETCH FIRST WITH TIES.. just like TOP n WITH TIES in SQL Server

PostgreSQL 13 Beta 1 was released yesterday; you can read the release notes here:

https://www.postgresql.org/about/news/2040/

One thing that caught my eye was this statement in the release notes

PostgreSQL 13 brings more convenience to writing queries with features like FETCH FIRST WITH TIES, which returns any additional rows that match the last row.

This is, I guess, exactly like TOP WITH TIES in SQL Server, which I believe has been around since at least version 7. How many times have I used it in code that was deployed in the last 20 years? I believe I have used WITH TIES only once. It does make for great interview questions and SQL puzzles  :-)


So let's take a quick look at how TOP WITH TIES works in SQL Server.  The first thing we will do is look at what Books On Line says about TOP


WITH TIES Returns two or more rows that tie for last place in the limited results set. You must use this argument with the ORDER BY clause. WITH TIES might cause more rows to be returned than the value specified in expression. For example, if expression is set to 5 but two additional rows match the values of the ORDER BY columns in row 5, the result set will contain seven rows. You can specify the TOP clause with the WITH TIES argument only in SELECT statements, and only if you've also specified the ORDER BY clause. The returned order of tying records is arbitrary. ORDER BY doesn't affect this rule.

Time to get started and write some code to see this in action

First create this table of students and insert some data

CREATE TABLE #TopExample(GradeAverage int, Student varchar(100))
INSERT #TopExample VALUES(99.00,'Plato'),
      (98,'Socrates'),
      (95,'Diogenes the Cynic'),
      (94,'Antisthenes'),
      (94,'Demetrius'),
      (50,'Denis')

As you can see, I am not a very good student  :-(

If you do a regular TOP 4 query like this

SELECT TOP 4 GradeAverage, Student 
FROM #TopExample  
ORDER BY GradeAverage DESC

You will get back these results

GradeAverage Student
99          Plato
98          Socrates
95          Diogenes the Cynic
94          Demetrius


As you can see, we are missing the other student with a grade of 94: Antisthenes.

This is easily fixed by adding WITH TIES to the query

SELECT TOP 4 WITH TIES GradeAverage, Student 
FROM #TopExample 
ORDER BY GradeAverage DESC

Now you will get back these results. As you can see, you now have 5 rows, and both rows with a grade average of 94 are included.

GradeAverage Student
99          Plato
98          Socrates
95          Diogenes the Cynic
94          Demetrius
94          Antisthenes
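
For comparison, this is what I believe the equivalent looks like in PostgreSQL 13 with the new FETCH FIRST ... WITH TIES syntax. I have not run this against a PostgreSQL 13 instance, and it assumes a regular table named TopExample loaded with the same data, so treat it as a sketch:

SELECT GradeAverage, Student
FROM TopExample
ORDER BY GradeAverage DESC
FETCH FIRST 4 ROWS WITH TIES;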


Another way to get the same result as WITH TIES, at least for this data, is by using DENSE_RANK. That query looks like this

;WITH c AS (SELECT DENSE_RANK() OVER (ORDER BY GradeAverage DESC) AS dens, 
 GradeAverage,Student 
 FROM #TopExample)

SELECT GradeAverage, Student 
FROM c WHERE dens <=4
ORDER BY GradeAverage DESC 

You will get back the same results again: 5 rows, with both rows that have a grade average of 94 included.

GradeAverage Student
99          Plato
98          Socrates
95          Diogenes the Cynic
94          Demetrius
94          Antisthenes

Using DENSE_RANK is a bit more code, but if portability is a concern, it might be a better choice.
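
One caveat: DENSE_RANK only matches TOP n WITH TIES here because there are no ties above the cutoff; if, say, two students had a 99, DENSE_RANK <= 4 would pull in extra rows. RANK() is the exact equivalent, since it counts every tied row. A sketch of the RANK() version (same idea, just a different function and alias):

;WITH c AS (SELECT RANK() OVER (ORDER BY GradeAverage DESC) AS rnk,
 GradeAverage, Student
 FROM #TopExample)

SELECT GradeAverage, Student
FROM c WHERE rnk <= 4
ORDER BY GradeAverage DESC

For this data it returns the same 5 rows.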

There you go, a post about a feature you will never use :-)

If you want to run all the queries in one shot, here is all the code



CREATE TABLE #TopExample(GradeAverage int, Student varchar(100))
INSERT #TopExample VALUES(99.00,'Plato'),
      (98.00,'Socrates'),
      (95.00,'Diogenes the Cynic'),
      (94.00,'Antisthenes'),
      (94.00,'Demetrius'),
      (50.00,'Denis')

SELECT TOP 4 GradeAverage, Student 
FROM #TopExample  
ORDER BY GradeAverage DESC

SELECT TOP 4 WITH TIES GradeAverage, Student 
FROM #TopExample 
ORDER BY GradeAverage DESC

;WITH c AS (SELECT DENSE_RANK() OVER (ORDER BY GradeAverage DESC) AS dens, 
 GradeAverage,Student 
 FROM #TopExample)

SELECT GradeAverage, Student 
FROM c WHERE dens <=4
ORDER BY GradeAverage DESC 


DROP TABLE #TopExample


And here is what it all looks like in SSMS, code and output




Wednesday, May 06, 2020

You know about waitfor delay but did you know there is a waitfor time?


I was looking at some code I wrote the other day and noticed the WAITFOR command. This got me thinking: how many times have I used WAITFOR in code? Probably about as often as I have used NTILE  :-)

I looked at the documentation for WAITFOR and noticed there is TIME in addition to DELAY. Oh, that is handy. I always rolled my own ghetto-style version by calculating how long it would be until a specific time and then using that in the WAITFOR DELAY command

Why would you use the WAITFOR command?

The WAITFOR command can be used to delay the execution of a command for a specific duration or until a specific time occurs. From Books On Line, the description is as follows:



Blocks the execution of a batch, stored procedure, or transaction until either a specified time or time interval elapses, or a specified statement modifies or returns at least one row.

WAITFOR   
{  
    DELAY 'time_to_pass'   
  | TIME 'time_to_execute'   
  | [ ( receive_statement ) | ( get_conversation_group_statement ) ]   
    [ , TIMEOUT timeout ]  
}  



Arguments
DELAY
Is the specified period of time that must pass, up to a maximum of 24 hours, before execution of a batch, stored procedure, or transaction proceeds.

'time_to_pass'
Is the period of time to wait. time_to_pass can be specified either in a datetime data format, or as a local variable. Dates can't be specified, so the date part of the datetime value isn't allowed. time_to_pass is formatted as hh:mm[[:ss].mss].

TIME
Is the specified time when the batch, stored procedure, or transaction runs.

'time_to_execute'
Is the time at which the WAITFOR statement finishes. time_to_execute can be specified in a datetime data format, or it can be specified as a local variable. Dates can't be specified, so the date part of the datetime value isn't allowed. time_to_execute is formatted as hh:mm[[:ss].mss] and can optionally include the date of 1900-01-01.


WAITFOR with a receive_statement or get_conversation_group_statement is applicable only to Service Broker messages. I will not cover those in this post
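
Stripped down to the basics, the two forms I will cover look like this (both just pause the current batch; the times are made up):

WAITFOR DELAY '00:00:05'   -- pause for 5 seconds
WAITFOR TIME  '23:30:00'   -- pause until 11:30 PM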



I must admit that I only use these commands a couple of times a year when running something ad hoc. In code, I will use WAITFOR DELAY when doing a back fill of data and the table is replicated. In that case I will batch the data, and after each batch completes I will pause for a second or so. The reason I do this is that I don't want to increase replication latency; after all, I am a nice guy


WAITFOR TIME 

Let's take a look at how you would use the WAITFOR command. I will start with WAITFOR TIME


The command is very easy. If you want the PRINT command to run at 09:57:16, you would do the following

WAITFOR TIME '09:57:16'
PRINT 'DONE  ' 

The seconds are optional; if you want it to run at 9 hours and 57 minutes, you can do the following

WAITFOR TIME '09:57'
PRINT 'DONE  ' 

One thing to know is that you can't grab the output from a time data type, fractional seconds and all, and use it in your WAITFOR TIME command. The following will blow up

SELECT CONVERT(time,  getdate()) --'09:57:16.9600000'

WAITFOR TIME '09:57:16.9600000'

Msg 148, Level 15, State 1, Line 32
Incorrect time syntax in time string '09:57:16.9600000' used with WAITFOR.

What you need to do is strip everything after the dot.

We need the command to be the following


WAITFOR TIME '09:57:16'

There are two ways to accomplish this. The first way is by using PARSENAME; I have blogged about that function several times, the first time here: Ten SQL Server Functions That You Hardly Use But Should

All you have to do is tell SQL Server which part you want: if you pass 1 to PARSENAME you get everything after the dot, and if you pass 2 you get everything before the dot.


SELECT  PARSENAME('09:57:16.9600000',2), 
PARSENAME('09:57:16.9600000',1)

This returns  the following

09:57:16     9600000


The easiest way would have been to just use time(0) instead

SELECT CONVERT(time, getdate()),    --'09:57:16.9600000'
       CONVERT(time(0), getdate())  --'09:57:16'

Below is a complete example that will wait 10 seconds before running the final PRINT statement if you run the whole code block in one shot.

Also notice that I use a variable with the WAITFOR TIME command. The caveat is that the variable can't be a time data type, which is why I use a varchar variable and store the value of the time data type in it. The reason I use the time data type in my procs is so that I don't have to do a lot of validation when someone calls the proc: if they pass in a string that can't be converted, the proc won't even run; it will fail right at the proc call itself


DECLARE @DelayTime time(0)= GETDATE()
PRINT @DelayTime

SELECT @DelayTime =DATEADD(second,10,@DelayTime)

PRINT @DelayTime

DECLARE @d varchar(100) = @DelayTime -- you have to use varchar in command
WAITFOR TIME @d 

-- Run your command here
PRINT 'DONE  ' + CONVERT(varchar(100),CONVERT(time(0),  getdate()))


What is printed is the following


10:49:48
10:49:58
DONE  10:49:58

Now, when would you really use WAITFOR TIME? You can accomplish the same thing with a scheduled job; the only time I use WAITFOR TIME is if I want a quick count or want to run something at a specific time, but I know I won't be at my desk and I can't create a job without a ticket

But you also have to be aware that if your connection to the SQL Server instance is lost, your command won't execute


WAITFOR DELAY

The WAITFOR DELAY command is similar to the WAITFOR TIME command; instead of waiting until a specific time, it pauses for a specific duration

Like I said before, I use WAITFOR DELAY together with a batch size in my back fill procs. Both can be passed in; if you do a load during a weekday, your delay would be longer than on a weekend.
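
Here is a rough sketch of what that back fill pattern looks like; the table and column names are made up, and the batch size and delay are the kind of values you would pass in as parameters:

DECLARE @BatchSize int = 5000        -- would normally be a proc parameter
DECLARE @RowsAffected int = 1

WHILE @RowsAffected > 0
BEGIN
 UPDATE TOP (@BatchSize) dbo.SomeReplicatedTable   -- made-up table
 SET SomeColumn = 'backfilled value'
 WHERE SomeColumn IS NULL

 SET @RowsAffected = @@ROWCOUNT

 WAITFOR DELAY '00:00:01'            -- give replication a chance to catch up
END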


Sometimes I need to see how many rows are getting inserted every minute, or something similar. I will then combine WAITFOR DELAY and the batch terminator with a count to execute the batch of statements more than once.

Here is such an example; it will run the INSERT statement 20 times and pause 1 minute between each execution


INSERT #temp(SomeCol, SomeTimeStamp)
SELECT COUNT(*), GETDATE() FROM sometable 
WAITFOR DELAY '00:01:00'
GO 20
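
And going back to the ghetto-style version I mentioned at the top, where I would calculate how long it is until a specific time and feed that to WAITFOR DELAY: that approach looks more or less like this (a sketch, with a made-up target time):

DECLARE @TargetTime time(0) = '21:30:00'   -- made-up target time later today

-- number of seconds between now and the target time
DECLARE @SecondsToWait int = DATEDIFF(second, CONVERT(time(0), GETDATE()), @TargetTime)

-- turn that into an hh:mm:ss string that WAITFOR DELAY understands
DECLARE @Delay varchar(8) = CONVERT(varchar(8), DATEADD(second, @SecondsToWait, CAST('00:00:00' AS time(0))))

WAITFOR DELAY @Delay
PRINT 'Reached the target time'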


That's all for this post.

Do you use the WAITFOR command, if so, what do you use it for?



Monday, January 20, 2020

TVPs vs Memory Optimized TVPs


The other day I was thinking about the blog post Faster temp table and table variable by using memory optimization that I read a while back. Since you can't believe anything on the internets (no disrespect to whoever wrote that post), I decided to put this to the test

In this post I will be creating 2 databases: one is a plain vanilla database, and the other also has a filegroup that contains memory optimized data

I will also be creating a table type in each database: a plain one in the regular database and a memory optimized one in the memory optimized database

So let's get started. First I will create the regular database and the regular table type

CREATE DATABASE TempTVP
GO

USE TempTVP
GO

CREATE TYPE dbo.DataProcessingType AS TABLE(
 SomeDate datetime NOT NULL,
 SomeSymbol varchar(40) NOT NULL,
 SomeValue numeric(24, 10) NOT NULL,
 SomeDescription varchar(100),
 index tvp_temp (SomeDate, SomeSymbol))
GO


Now I will create the memory optimized database and the memory optimized table type.
In order for the database to be able to use memory optimized code, you need to add a filegroup and tell SQL Server it contains memory optimized data; after that is created, you add a file to that filegroup.

The table type syntax is identical except for the line (WITH   (MEMORY_OPTIMIZED = ON);) at the end

Here is what the script looks like



CREATE DATABASE TempTVPHekaton
GO

USE TempTVPHekaton
GO


ALTER DATABASE [TempTVPHekaton] ADD FILEGROUP [Hekaton_Data] 
CONTAINS MEMORY_OPTIMIZED_DATA 
GO


ALTER DATABASE [TempTVPHekaton] ADD FILE (NAME='Hekaton_Data_file',
 FILENAME='C:\Data\Hekaton_Data_file.mdf') TO FILEGROUP Hekaton_Data;
GO

CREATE TYPE dbo.DataProcessingType AS TABLE(
 SomeDate datetime NOT NULL,
 SomeSymbol varchar(40) NOT NULL,
 SomeValue numeric(24, 10) NOT NULL,
 SomeDescription varchar(100),
 index tvp_temp (SomeDate, SomeSymbol))
  WITH   (MEMORY_OPTIMIZED = ON); 
GO

Now that we have our two databases, let's create a very simple stored proc in each database; all it does is store the row count from the table valued parameter passed in into a variable


CREATE PROCEDURE prTestTVP @tvp DataProcessingType readonly

AS

SET NOCOUNT ON

DECLARE @Count int

SELECT @Count = COUNT(*) FROM @tvp
GO

Now it is time to generate the test script

The test script will call the stored procedure 1000 times, passing in a table valued parameter.
On each iteration it populates the table type with 1000 rows of generated sample data.


That data is pushed into the table valued parameter, the proc is called, the table type is cleared out, and every 100 iterations the current iteration number is printed

SET NOCOUNT ON
DECLARE @LoopID int = 1

WHILE @LoopID <= 1000
BEGIN
 DECLARE @tvp DataProcessingType
 INSERT @tvp -- add some values
  SELECT DATEADD(d,number,'20200101') as SomeDate,
  'X' + STR(number) + STR(@LoopID) as SomeSymbol,
   number * @LoopID * 1.11 as SomeValue,
   LEFT(REPLICATE(number,100),100) as SomeDescription
 FROM master..spt_values
  WHERE type = 'p' -- only numbers
  and number < 1000
 ORDER BY NEWID() --pseudo-random sort


 EXEC prTestTVP @tvp -- CALL proc with 1000 rows
 
 DELETE @tvp -- delete the data since it will be populated again

  if @LoopID %100 = 0 -- print every 100 iterations
  PRINT STR(@LoopID)
 SET @LoopID += 1 -- add 1 to counter

END


What I did next was take the code and paste it into 2 different SSMS windows connected to the TempTVP database. I then executed the code in both windows and let it run. Once it finished, I noted down how long it took, changed the connections to the TempTVPHekaton database, which is memory optimized, and ran the code in both windows as well. I played around with loop counts of 100, 1000, and 2000, and with populating the table with between 1000 and 2048 rows
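
If you would rather capture the timings than watch the clock, a simple way (not part of my original test, just a sketch) is to wrap the test script in start and end timestamps:

DECLARE @StartTime datetime2 = SYSDATETIME()

-- ... run the WHILE loop from the test script here ...

PRINT 'Elapsed ms: ' + CONVERT(varchar(20), DATEDIFF(millisecond, @StartTime, SYSDATETIME()))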

Here are some of the results



DB Storage    Iterations * rows    Share of combined run time
Disk          1000 * 1000          85.37%
Memory        1000 * 1000          14.63%
Disk          1500 * 1000          76.36%
Memory        1500 * 1000          23.64%
Disk          5000 * 100           92.31%
Memory        5000 * 100            7.69%


So it looks like the memory optimized table type is at least 3 times faster in these runs; if the table is smaller and you have more iterations, it gets even faster


I did run into an issue while testing: if I made it execute 5000 times with a 2000 row table, I was greeted by the following error

Msg 701, Level 17, State 154, Procedure prTestTVP, Line 7 [Batch Start Line 0]
There is insufficient system memory in resource pool 'default' to run this query.

This code was running on a laptop where I had 40 tabs open in Chrome, so there was not a lot of free memory. I also didn't create a resource pool; everything was a default setup

If you look at the code you will see that I clear out the table after each iteration.

However, the table variable doesn't go out of scope until the loop is finished. In my real-life scenario I don't have this issue; my procs are called by many processes, but not in a loop

To read more about this error start here

Be aware of 701 error if you use memory optimized table variable in a loop

This is actually by-design behavior, documented in "Memory-Optimized Table Variables". Here is what it states: "Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope." With a loop like the one above, all deleted rows are kept and consume memory until the end of the loop.
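
Based on that scoping rule, one way to sidestep the 701 error in a loop like mine would be to move the fill-and-call into its own proc, so the memory optimized table variable goes out of scope, and its memory is freed, on every iteration. This is just a sketch; prFillAndTest is a made-up wrapper around the objects created earlier:

CREATE PROCEDURE prFillAndTest @LoopID int
AS
SET NOCOUNT ON

DECLARE @tvp DataProcessingType   -- scoped to this proc call only

INSERT @tvp
 SELECT DATEADD(d, number, '20200101'),
  'X' + STR(number) + STR(@LoopID),
  number * @LoopID * 1.11,
  LEFT(REPLICATE(number, 100), 100)
 FROM master..spt_values
 WHERE type = 'P' AND number < 1000

EXEC prTestTVP @tvp
GO

-- the driver loop then just calls the wrapper
DECLARE @LoopID int = 1
WHILE @LoopID <= 5000
BEGIN
 EXEC prFillAndTest @LoopID
 SET @LoopID += 1
END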


There you go. If you are using table types, switching them to in-memory table types might help your application perform better. But of course, as I said before, since you can't believe anything on the internets, test for yourself