Tuesday, February 12, 2019

Scripting out procs and user defined functions with PowerShell



I decided to take SonarQube for a spin to run some static code analysis. In order to do that I needed some code that I can display here without breaking any laws.

In the real world, you would just point SonarQube at your code repository, but in order to show you how to run it so that you can follow along, I decided to go this route.

I decided to use the AdventureWorks database to script out some procs and functions. If you want to follow what I am doing here, grab the backup from here: https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks

On your C drive create a folder named DB. In the DB folder create 2 folders, one named data and the other log
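
If you prefer to create those folders from PowerShell instead of Explorer, something like this should do it (a quick sketch, not part of the original steps):

# Create the data and log folders, creating the parent DB folder as needed
New-Item -ItemType Directory -Path 'C:\DB\data', 'C:\DB\log' -Force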

To restore the DB you can run this command (assuming your backup is in the C:\temp directory):

USE [master]
RESTORE DATABASE [AdventureWorks2017] 
FROM  DISK = N'C:\temp\AdventureWorks2017.bak' WITH  FILE = 1,  
MOVE N'AdventureWorks2017' 
TO N'C:\DB\DATA\AdventureWorks2017.mdf',  
MOVE N'AdventureWorks2017_log' 
TO N'C:\DB\LOG\AdventureWorks2017_log.ldf',  
NOUNLOAD,  STATS = 5
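
If the restore complains about the MOVE clauses, the logical file names in your backup may differ. You can list them with RESTORE FILELISTONLY against the same backup file; the LogicalName column is what goes in the MOVE clauses:

-- List the logical and physical file names contained in the backup
RESTORE FILELISTONLY
FROM DISK = N'C:\temp\AdventureWorks2017.bak'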

Now it's time to script out the procs and user defined functions

But first we need a place to store them

In my case I decided to create 2 folders in the c:\temp directory
StoredProcedures
UserDefinedFunctions

I will be using PowerShell for that. You will need to install the SQL Server PowerShell module, which you can download here:

https://docs.microsoft.com/en-us/sql/powershell/download-sql-server-ps-module?view=sql-server-2017

Follow the instructions there if you get any errors.
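
If you are on PowerShell 5 or later, installing the module from the PowerShell Gallery is usually a one-liner (a sketch; the page linked above covers the details and the common errors):

# Install the SqlServer module from the PowerShell Gallery for the current user
Install-Module -Name SqlServer -Scope CurrentUser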

Now it's time to run the script. You will need to run PowerShell as an administrator.

You will need to change Machinename\sql2019 to your machine and instance name.

The script is simple.. it's for one time use.. if you need to run it repeatedly, you probably want to parameterize it so you can rerun it for different databases and servers, no hardcoding :-) (there is a sketch of a parameterized version after the script below)


Import-Module SqlServer -Version 21.0.17279

# Point the SQLSERVER provider at the stored procedures in the database
cd SQLSERVER:\SQL\Machinename\sql2019\Databases\AdventureWorks2017\StoredProcedures

# Script out each proc into its own schema.name.SQL file
foreach ($proc in Get-ChildItem)
{
    $k = "C:\Temp\StoredProcedures\" + $($proc.Schema) + "." + $($proc.Name) + ".SQL"
    $proc.Script() > $k
}

cd ..\UserDefinedFunctions

# Same again for the user defined functions
foreach ($udf in Get-ChildItem)
{
    $k = "C:\Temp\UserDefinedFunctions\" + $($udf.Schema) + "." + $($udf.Name) + ".SQL"
    $udf.Script() > $k
}

That's all for the script.. if it ran successfully, you should see a bunch of procs and user defined functions in their directories.
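
As promised, here is a rough sketch of a parameterized version for repeated use; the function name and parameter names are mine, not from the original script:

function Export-DbObjects
{
    param
    (
        [string]$ServerInstance,   # e.g. 'Machinename\sql2019'
        [string]$Database,         # e.g. 'AdventureWorks2017'
        [string]$OutputRoot        # e.g. 'C:\Temp'
    )

    Import-Module SqlServer

    # Script out both object types without hardcoding server, database or paths
    foreach ($folder in 'StoredProcedures', 'UserDefinedFunctions')
    {
        $target = Join-Path $OutputRoot $folder
        if (-not (Test-Path $target)) { New-Item -ItemType Directory -Path $target | Out-Null }

        foreach ($obj in Get-ChildItem "SQLSERVER:\SQL\$ServerInstance\Databases\$Database\$folder")
        {
            $file = Join-Path $target "$($obj.Schema).$($obj.Name).SQL"
            $obj.Script() > $file
        }
    }
}

# Example call
Export-DbObjects -ServerInstance 'Machinename\sql2019' -Database 'AdventureWorks2017' -OutputRoot 'C:\Temp'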


Next up.. installing SonarQube, SonarQube Scanner and the sonar-tsql-plugin

Monday, February 04, 2019

After 20+ years in IT .. I finally discovered this...

Originally I was not going to write this post, but after I found out that several other people didn't know this, I figured what the heck, why not, maybe it will help someone else as well

Last week I was on a remote session with 2 clients, each of which runs the Advent program. The team I am part of provides a script to run the Advent (APX or Axys) executable. This will then generate the portfolios, composites, price, security master, splits and other files. We then zip it all up and sftp it over for ingestion so that we can run analytics and attribution

During these calls I interact with system administrators, because usually they need to grant permissions so that the script runs correctly

None of these admins knew that what I will show you existed. None of the co-workers I asked knew about it either (this could be because they are developers and not admins)



Back in the day (Win 98 or perhaps NT 4), there was a Windows PowerToy that you could install, and if you right-clicked on a folder you would get an option to open a command window in the path that you right-clicked on

That PowerToy doesn't exist anymore, and while you can get the same behavior by hacking the registry, it's something like a 16 step process

But there is a faster way.....

So what I usually did until about 2 months ago is that I would select the path in the Explorer address bar

And then I would open a command prompt, type CD and paste the path... not too complicated

But here is the faster way.... instead of copying the path, just type cmd in the address bar and hit enter


Boom shakalaka... a command prompt opens immediately and you are in the same path



This also works with PowerShell: Eric Darling left me a comment on Twitter informing me that typing powershell in the address bar opens a PowerShell prompt in the same path




So there you have it... hopefully this will save you a few minutes of valuable time each year

Also, whether you knew about this or not, leave a comment and let me know





Tuesday, January 08, 2019

Print.. the disruptor of batch deletes in SQL



Someone had an issue where a batched delete script was not deleting anything. I looked over some code in our repository and noticed two patterns in the way queries are written to handle batch deletes

One is a while loop that runs while @@rowcount is greater than 0

WHILE @@rowcount > 0
 BEGIN
  DELETE TOP (5000)
  FROM SomeTable
 END


The other way is to run a while loop which is always true and then check if @@rowcount is 0; if it is 0, break out of the loop

WHILE 1 = 1
BEGIN
    DELETE TOP(5000)
    FROM SomeTable

    IF @@ROWCOUNT = 0
        BREAK
END


I have always used WHILE @@rowcount > 0, but you have to be careful because @@rowcount could be 0 when your while loop starts

Let's take a look at an example. This is a simplified example without a where clause.. but let's say you have to delete several million rows from a table with many more millions of rows, and the table is replicated. In that case you want to batch the deletes so that your log file doesn't fill up, replication has a chance to catch up, and in general the deletes run faster

SELECT TOP 20000 row_number() OVER(ORDER BY t1.id) AS SomeId,
  getutcdate() AS SomeDate, newid() AS SomeValue
INTO SomeTable
FROM sys.sysobjects t1
CROSS JOIN sys.sysobjects t2

SELECT COUNT(*) from SomeTable 

SELECT * FROM SomeTable WHERE 1 = 0

WHILE @@rowcount > 0
 BEGIN
  DELETE TOP (5000)
  FROM SomeTable
 END

SELECT COUNT(*) from SomeTable 

DROP TABLE SomeTable -- Added here as cleanup in case people run the example

This is of course a silly example, because why would you run a query like that right before a delete? But it shows the problem: the WHERE 1 = 0 query returns no rows, so @@rowcount is 0 and the while loop is never entered.

But what if you had this instead: you put a nice print statement there so that from the output you can see when it started, together with the rowcounts?

SELECT TOP 20000 row_number() OVER(ORDER BY t1.id) AS SomeId,
  getutcdate() AS SomeDate, newid() AS SomeValue
INTO SomeTable
FROM sys.sysobjects t1
CROSS JOIN sys.sysobjects t2

SELECT COUNT(*) from SomeTable 

PRINT 'Starting my update now....'

WHILE @@rowcount > 0
 BEGIN
  DELETE TOP (5000)
  FROM SomeTable
 END

SELECT COUNT(*) from SomeTable 

DROP TABLE SomeTable -- Added here as cleanup in case people run the example



The count is 20000 before and after the loop; nothing got deleted. This is because a print statement will reset @@rowcount to 0

Take a look by running this simple set of queries

SELECT 1 UNION ALL SELECT 2
SELECT @@rowcount as 'Rowcount'
PRINT '1'
SELECT @@rowcount as 'RowcountAfterPrint'

Here is what the output looks like: Rowcount comes back as 2, but RowcountAfterPrint comes back as 0.

After the PRINT line @@rowcount is reset back to 0

So if you want to use a while loop while checking @@rowcount, do this instead: run the delete once outside the loop first



SELECT TOP 20000 row_number() OVER(ORDER BY t1.id) AS SomeId,
  getutcdate() AS SomeDate, newid() AS SomeValue
INTO SomeTable
FROM sys.sysobjects t1
CROSS JOIN sys.sysobjects t2

SELECT COUNT(*) from SomeTable 

PRINT 'Starting my update now....'

DELETE TOP (5000)
FROM SomeTable

WHILE @@rowcount > 0
 BEGIN
  DELETE TOP (5000)
  FROM SomeTable
 END

SELECT COUNT(*) from SomeTable 

DROP TABLE SomeTable -- Added here as cleanup in case people run the example

If you run the delete this way, the while loop is entered whenever there was something to delete; if the table was empty, the while loop is never entered

Also keep in mind that it is not just PRINT that will reset @@rowcount back to 0.

From Books On Line:

Statements such as USE, SET <option>, DEALLOCATE CURSOR, CLOSE CURSOR, BEGIN TRANSACTION, or COMMIT TRANSACTION reset the ROWCOUNT value to 0.
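
You can verify one of those with the same pattern as the PRINT demo above; here BEGIN TRANSACTION takes the place of PRINT (a quick sketch based on the Books On Line list):

SELECT 1 UNION ALL SELECT 2
SELECT @@rowcount as 'Rowcount'               -- 2
BEGIN TRANSACTION
SELECT @@rowcount as 'RowcountAfterBeginTran' -- 0, per the list above
COMMIT TRANSACTION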

That's all... hopefully this helps someone out in the future if they notice nothing gets deleted

Friday, January 04, 2019

Proactive notifications

It has been a while since I wrote some of my best practices posts. I decided to revisit these posts to see if anything has changed, and also to see if I could add some additional info.


 Today we are going to look at proactive notifications

In the SQL Server Maintenance post from yesterday I touched upon proactive notifications a little; today I want to dive a little deeper into this subject.
The last thing you want to hear as a DBA is any of the following things from the end users or developers
  • The transaction log is full
  • The database is very slow
  • The latest backup we have is 9 days old
  • The table that was created has 2 extra columns this morning
  • Everything is locked up, you can't get any results back from a query
  • Deadlocks are occurring
What you really want to have at your shop is a tool like Quest Foglight, SolarWinds, Red Gate SQL Monitor or similar. The benefit of these tools is that there is a central location where you can look at all the alerts at a glance. You get a lot of stuff out of the box, and all you have to do is tell it which server to start monitoring. I would suggest starting with the trial version to see if it is something that would be beneficial for your organization.
Of course you can roll your own solution as well, but this will involve work, and unless your time is worthless or you are bored out of your mind after work, I wouldn't do it.

Utilize the logs

You need to scan the errorlog periodically to see if there are errors. You can automate this, no need to start opening log files every 5 minutes: create a SQL Agent job that runs every 5 minutes and checks if there are any errors since it last ran. You can use the xp_readerrorlog proc to read the error log from within SQL Server with T-SQL.
Here is a small example of what you can do if you have this in a SQL Agent job that runs every 5 minutes or so. You can of course email yourself the results, or dump the results into a table that is perhaps shown on a dashboard in the office; there are many possibilities.


--This will hold the rows
CREATE TABLE #ErrorLog (LogDate datetime, ProcessInfo VarChar(10), 
ErrorMessage VarChar(Max))

-- Dump the errorlog into the table
INSERT INTO #ErrorLog
EXEC master.dbo.xp_readerrorlog

-- Delete everything older than 5 minutes
-- ideally you will store the max date when it ran last
DELETE #ErrorLog
WHERE LogDate <  DATEADD(mi,-5,GETDATE())

-- Some stuff you want to check for
-- Failed backups...you want to know this
SELECT * FROM #ErrorLog
WHERE ErrorMessage LIKE 'BACKUP failed%'

-- Why does it take so looong to grow a file, maybe rethink your settings
SELECT * FROM #ErrorLog
WHERE ErrorMessage LIKE 'Autogrow of file%'

-- What is going on, any backups or statistics updates running at this time?
SELECT * FROM #ErrorLog
WHERE ErrorMessage LIKE 'SQL Server has encountered %occurrence(s) of I/O requests taking longer than%'

-- My mirror might not be up to date
SELECT * FROM #ErrorLog
WHERE ErrorMessage LIKE 'The alert for ''unsent log'' has been raised%'


DROP TABLE #ErrorLog 

Those are just small samples; you might want to look for other kinds of messages in the errorlog

The transaction log is full

You want to make sure that you know you are running out of space before you actually run out of space. I covered this in the SQL Server Maintenance post; take a look at the sections Make sure that you have enough space left on the drives and Make sure that you have enough space left for the filegroups. In those two sections I described what to look for and also supplied code that you can plug into your own solution.
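
As a quick first check, before you wire up a full solution, DBCC SQLPERF(LOGSPACE) shows the log size and the percentage used for every database on the instance (a generic check, not code from the maintenance post mentioned above):

-- Log size and percentage of log space used, for every database on the instance
DBCC SQLPERF(LOGSPACE)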

The database is very slow

This complaint you hear every now and then; I have seen it from time to time. There are several things that could be happening, here is a list
  • Someone decided to take a backup of that 1 TB database in the middle of the day
  • The update statistics job is still running
  • Statistics are stale and haven't been updated in a long time
  • The virus scan is running amok and nobody told it to ignore the database files
  • Someone decided to query all the data all at once
If you have a tool like Quest Foglight, Confio Ignite, Red Gate SQL Monitor or similar, then you can see what query ran at what time, what it did and how long it ran.
You can of course also use sp_who2, the BlkBy column and DBCC INPUTBUFFER to see what is going on.
If you like to use Dynamic Management Views, then take a look at Glenn Berry's SQL Server 2005 Diagnostic Information Queries posts; there is a .sql file in each post with all kinds of queries to discover all kinds of stuff about your server.
It could also be that your hardware is having issues; make sure the IOs look good and check the eventlog for any clues.

The latest backup we have is 9 days old

The following query will give you, for each database, the last time it was backed up, or display NEVER if it was never backed up

SELECT s.Name AS DatabaseName,'Database backup was taken on  ' + 
CASE WHEN MAX(b.backup_finish_date) IS NULL THEN 'NEVER!!!' ELSE
CONVERT(VARCHAR(12), (MAX(b.backup_finish_date)), 101) END AS LastBackUpTime
FROM sys.sysdatabases s
LEFT OUTER JOIN msdb.dbo.backupset b ON b.database_name = s.name
GROUP BY s.Name

Here is what the output will look like

DatabaseName LastBackUpTime
------------ ----------------------------------------
model        Database backup was taken on  NEVER!!!
msdb         Database backup was taken on  12/10/2012
ReportServer Database backup was taken on  NEVER!!!


As you can see that is not that great, all the databases should be backed up on a regular basis. Scroll up to the Utilize the logs section to see how you can check the errorlog for failed backup messages.

Everything is locked up, you can't get any results back from a query

Usually this indicates that there is an open transaction somewhere that has not finished, or someone did the BEGIN TRAN part but never did a COMMIT or ROLLBACK.
Some people just restart the server to 'fix' the issue; of course if you do that you will never know what the root cause was, and you never know when it will happen again.
We can easily show what happens when you have an open transaction. Btw, don't do this on the production server.

In 1 query window run this, replace SomeTable with a real table name.

BEGIN TRAN

SELECT TOP 1 * FROM SomeTable WITH(UPDLOCK, HOLDLOCK)

You will get a message that the query completed successfully
In another window run this

SELECT TOP 1 * FROM SomeTable WITH(UPDLOCK, HOLDLOCK)

That query won't return anything unless the first one is committed or rolled back

Now run this query below, the first column should have the text AWAITING COMMAND

SELECT   sp.cmd
        ,sp.last_batch
        ,lok.resource_type
        ,lok.resource_subtype
        ,DB_NAME(lok.resource_database_id) AS database_name
        ,lok.resource_description
        ,lok.resource_associated_entity_id
        ,lok.resource_lock_partition
        ,lok.request_mode
        ,lok.request_type
        ,lok.request_status
        ,lok.request_owner_type
        ,lok.request_owner_id
        ,lok.lock_owner_address
        ,wat.waiting_task_address
        ,wat.session_id
        ,wat.exec_context_id
        ,wat.wait_duration_ms
        ,wat.wait_type
        ,wat.resource_address
        ,wat.blocking_task_address
        ,wat.blocking_session_id
        ,wat.blocking_exec_context_id
        ,wat.resource_description
FROM    sys.dm_tran_locks lok
JOIN    sys.dm_os_waiting_tasks wat
ON      lok.lock_owner_address = wat.resource_address
JOIN    sys.sysprocesses sp
ON      wat.blocking_session_id = sp.spid


As you can see, you have a blocking_session_id and a session_id; this tells you which session is being blocked. You can now verify that the session holding the open transaction is blocking the other one
Go back to that first command window and execute a rollback

ROLLBACK

The query that ran that second select should now be done as well, and if you run the query that checks for the waits, it should come back clean as well.
Of course you could have done the same exercise by running sp_who2, looking at the BlkBy column, and finding out what that session is doing by running DBCC INPUTBUFFER(session_id) with that session_id, as shown below.
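
For completeness, that alternative route looks something like this (the session id 57 is a made-up example; use whatever shows up in the BlkBy column on your system):

-- Look at the BlkBy column in the output to find the blocking session id
EXEC sp_who2

-- Then see what that session last ran, assuming BlkBy showed 57
DBCC INPUTBUFFER(57)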

Deadlocks are occurring

There is already a post written on LessThanDot explaining how you can get emailed when deadlocks occur. Ted Krueger wrote that post and it can be found here: Proactive Deadlock Notifications
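
If you do decide to roll your own, one simple building block worth knowing about is trace flag 1222, which writes deadlock details to the errorlog, the same errorlog the job in the Utilize the logs section above is already scanning. This is just a sketch; Ted's post covers a complete notification approach:

-- Write deadlock graphs to the errorlog for all sessions
DBCC TRACEON (1222, -1)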

Summary

I only touched the surface of what can be done in this post. I want you to find out if there is any monitoring being done in your shop, and who gets notified. I have worked in places where the end user was the proactive notification; as long as we fixed it before the business users started to complain, life was good. Manual notifications and homebrew solutions might work for a while, but when you add more and more servers and more people to the team, this becomes laborious and error prone.