Monday, January 16, 2017

Creating the Wide World Importers sample database v1.0




Before SQL Server 2005, we had the Northwind and pubs sample databases; these actually shipped with the product. If you did a SQL Server 2000 install, you would see these two databases in addition to the system databases.

With SQL Server 2005 this changed: no sample databases were included anymore. Instead, new databases were made available for download, and for a while now we have had AdventureWorks available. Today I wanted to see if there was an AdventureWorks 2016 database available. I couldn't find it. Instead I found the Wide World Importers sample database v1.0.

These sample databases now live on GitHub


Here is the link  https://github.com/Microsoft/sql-server-samples


As a normal person you will probably navigate to the databases folder https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases  and in there you will probably go to https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/wwi-database-scripts

Then you will see the following scripts

Now if your name is Denis or you like to waste time (or both), you think..cool I will download this repo and run these files.  That is not unreasonable to think......  but

To save you some time, here is the easiest way to put this database on your SQL Server instance

Instead of running those scripts, go to this page https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0, scroll down to Downloads, and download the 121 MB file named WideWorldImporters-Full.bak

While that is downloading, create a folder on your C drive named SQL. In that folder create two folders, one named Data and one named Log; it should look like the image below




Now assuming the backup file is on the C drive, here is what the restore command should look like


USE [master]
RESTORE DATABASE [WideWorldImporters] 
FROM  DISK = N'C:\WideWorldImporters-Full.bak' WITH  FILE = 1,  
MOVE N'WWI_Primary' TO N'C:\SQL\DATA\WideWorldImporters.mdf',  
MOVE N'WWI_UserData' TO N'C:\SQL\DATA\WideWorldImporters_UserData.ndf',  
MOVE N'WWI_Log' TO N'C:\SQL\Log\WideWorldImporters.ldf',  
MOVE N'WWI_InMemory_Data_1' TO N'C:\SQL\DATA\WideWorldImporters_InMemory_Data_1',  
NOUNLOAD,  REPLACE,  STATS = 5

GO
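
By the way, if the logical file names in the MOVE clauses above don't match your copy of the backup for some reason, you can list what is actually inside the .bak file first and adjust the MOVE clauses accordingly (a quick sketch, assuming the file is sitting in C:\):

-- list the logical and physical file names stored in the backup
RESTORE FILELISTONLY 
FROM DISK = N'C:\WideWorldImporters-Full.bak'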

On SQL Server vNext 1.1, the output of the RESTORE DATABASE command above looks like this

5 percent processed.
10 percent processed.
15 percent processed.
20 percent processed.
25 percent processed.
30 percent processed.
Processed 1464 pages for database 'WideWorldImporters', file 'WWI_Primary' on file 1.
Processed 53096 pages for database 'WideWorldImporters', file 'WWI_UserData' on file 1.
Processed 33 pages for database 'WideWorldImporters', file 'WWI_Log' on file 1.
Processed 3862 pages for database 'WideWorldImporters', file 'WWI_InMemory_Data_1' on file 1.
Converting database 'WideWorldImporters' from version 852 to the current version 860.
Database 'WideWorldImporters' running the upgrade step from version 852 to version 853.
Database 'WideWorldImporters' running the upgrade step from version 853 to version 854.
Database 'WideWorldImporters' running the upgrade step from version 854 to version 855.
Database 'WideWorldImporters' running the upgrade step from version 855 to version 856.
Database 'WideWorldImporters' running the upgrade step from version 856 to version 857.
Database 'WideWorldImporters' running the upgrade step from version 857 to version 858.
Database 'WideWorldImporters' running the upgrade step from version 858 to version 859.
Database 'WideWorldImporters' running the upgrade step from version 859 to version 860.
100 percent processed.
RESTORE DATABASE successfully processed 58455 pages in 6.105 seconds (74.803 MB/sec).

To make sure everything is cool after the restore is done, run the following

USE WideWorldImporters
GO

select [Website].[CalculateCustomerPrice](1,number,'20160101')
from master..spt_values
where type = 'p'


select [Website].[CalculateCustomerPrice](customerid,1,'20160101')
from Sales.customers


And now we can even check that the function is visible in the new sys.dm_exec_function_stats DMV

SELECT TOP 10 d.object_id, d.database_id, 
OBJECT_NAME(object_id, database_id) AS 'function name',   
    d.cached_time, d.last_execution_time, d.total_elapsed_time,  
    d.total_elapsed_time/d.execution_count AS [avg_elapsed_time],  
    d.last_elapsed_time, d.execution_count  
FROM sys.dm_exec_function_stats AS d  
ORDER BY [total_worker_time] DESC;  

And here is the output


As you can see, this function was executed a bunch of times

So there you have it, this is how you create the DB.... I also now understand why Brent Ozar uses the Stack Overflow DB for his posts; at least it is easy to find. If you prefer that over this example, head over here: https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/

Tuesday, January 10, 2017

T-SQL Tuesday #86: String or binary data would be truncated

This month's T-SQL Tuesday is hosted by Brent Ozar; he proposed the following:

Find the most interesting bug or enhancement request (and it can be your own), and write a blog post about it (including a link to the Connect item so that folks who agree with you can upvote the item)

This one was pretty easy for me: it is the following Connect item, Please fix the "String or binary data would be truncated" message to give the column name

This error drives me crazy as well; it should be fairly easy to tell me, if nothing else, what damn column barfed on the data inserted, but no.. all you get is something like

Msg 8152, Level 16, State 6, Procedure <ProcName>, Line 61 String or binary data would be truncated.

This is like not having the black box after a plane crash: you know the plane crashed, but you don't know exactly why.
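
If you have never run into it, here is about the smallest repro I can think of (my own throwaway table, not from the Connect item):

CREATE TABLE TestTrunc (SomeVal varchar(5))
GO

-- the value is longer than 5 characters, so this fails with error 8152
INSERT INTO TestTrunc (SomeVal)
VALUES ('This value is way too long')

-- Msg 8152: String or binary data would be truncated
-- and nothing in that message tells you it was the SomeVal column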

Dealing with this issue on a semi-regular basis, I have even written my own T-SQL helper to quickly see where the issue is

declare @ImportTable varchar(100)
declare @DestinationTable varchar(100)
select @ImportTable = 'temp'
select @DestinationTable = 'TestTrunc'
 
declare @ImportTableCompare varchar(100)
declare @DestinationTableCompare varchar(100)
select @ImportTableCompare = 'MaxLengths'
select @DestinationTableCompare = 'TempTrunc'
 
 
declare @sql varchar(8000)
select @sql  = ''
select @sql = 'select  0 as _col0 ,'
select @sql +=   'max(len( ' + column_name+ ')) AS ' + column_name + ',' 
from information_schema.columns
where table_name = @ImportTable
and data_type in('varchar','char','nvarchar','nchar')
 
select @sql = left(@sql,len(@sql) -1)
select @sql +=' into ' + @ImportTableCompare + ' from ' + @ImportTable
 
--select @sql -debugging so simple, a caveman can do it
 
exec (@sql)
 
 
 
select @sql  = ''
select @sql = 'select 0 as _col0, '
select @sql +=   '' + convert(varchar(20),character_maximum_length)
+ ' AS ' + column_name + ',' 
from information_schema.columns
where table_name = @DestinationTable
and data_type in('varchar','char','nvarchar','nchar')
 
select @sql = left(@sql,len(@sql) -1)
select @sql +=' into ' + @DestinationTableCompare
 
--select @sql -debugging so simple, a caveman can do it
 
exec (@sql)
 
 
select @sql  = ''
select @sql = 'select  '
select @sql +=   '' + 'case when  t.' + column_name + ' > tt.' + column_name
+ ' then ''truncation'' else ''no truncation'' end as '+ column_name
+ ',' 
from information_schema.columns
where table_name = @ImportTableCompare
and column_name <> '_col0'
select @sql = left(@sql,len(@sql) -1)
select @sql +='  from ' + @ImportTableCompare + ' t
join ' + @DestinationTableCompare + ' tt on t._col0 = tt._col0 '
 
--select @sql -debugging so simple, a caveman can do it
 
exec (@sql)
 
 
exec ('drop table ' + @ImportTableCompare+ ',' + @DestinationTableCompare )
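
To try the helper out with the TestTrunc table from the little repro above, you just need a staging table named temp with matching column names; the names line up with the variables at the top of the script, so adjust them for your own tables (again, just an illustrative sketch):

-- hypothetical staging table, same column names as the destination but wider
CREATE TABLE temp (SomeVal varchar(100))
GO

INSERT INTO temp (SomeVal)
VALUES ('This value is way too long')

-- running the helper script above now reports 'truncation' for the SomeVal column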


Something like this only helps you if you have the data readily available; what if it comes from an application? In that case you need Profiler or Extended Events to capture the statement.
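
Here is roughly what a minimal Extended Events session for that could look like (a sketch I put together; adjust the error number filter and the target to taste):

-- capture the statement text whenever error 8152 is raised
CREATE EVENT SESSION CaptureTruncationErrors ON SERVER
ADD EVENT sqlserver.error_reported(
    ACTION(sqlserver.sql_text, sqlserver.database_name)
    WHERE error_number = 8152)
ADD TARGET package0.ring_buffer
GO

ALTER EVENT SESSION CaptureTruncationErrors ON SERVER STATE = START
GO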

It is also crazy that this Connect item is almost 9 years old; it was opened in April 2008.

Someone from Microsoft did comment on this issue last August:

Posted by David [MSFT] on 8/5/2016 at 1:39 PM
Latest update - the developer working on it understands the challenges involved in creating a full fix. It may be tricky to plumb the information about columns needed to generate a full error message down to the actual conversion function in such a way that won't impact insert or update performance. We may implement something cheap in the short term such as logging the type and length of the data being truncated. It's still too early to know when such a fix would reach a publicly visible release.

This Connect item has 1328 upvotes as of today; it also has 5 downvotes (who are these people.. probably working on the SQL Server team :-) )

So there you have it, that is my contribution to T-SQL Tuesday #86. Keep an eye on Brent's blog here https://www.brentozar.com/blog/; a recap will be posted there in January 2017.

Saturday, January 07, 2017

BULK INSERT and csv format, new in vNext 1.1



In SQL Server vNext 1.1 we now have the ability to import a CSV file via the BULK INSERT command without having to specify the field or row terminators

You still need to specify the format; if you only do something like the following

BULK INSERT AlexaSongs  
   FROM 'c:\Songs played with Alexa.csv'  

You will be greeted with these errors


Msg 4832, Level 16, State 1, Line 10
Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 10
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 10
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

So let's take a look at how this all works

First create the following table


USE tempdb
GO

CREATE TABLE AlexaSongs(PlayDate varchar(100), 
   SongName varchar(200), 
   Artist varchar(200), 
   Album varchar(200))
GO

Now grab the CSV file from here: Songs played with Alexa.csv. Either download the whole project and grab the file, or open it in raw mode, copy and paste the contents into a file, and save it as Songs played with Alexa.csv

Now that you have the file and table ready, let's first take a look at how this was done before vNext 1.1

Here is what it looked like


BULK INSERT AlexaSongs  
   FROM 'c:\Songs played with Alexa.csv'  
   WITH  
     (  
        FIELDTERMINATOR =',',  
        ROWTERMINATOR = '\n'
      ); 

As you can see, we specified a comma as the field terminator and a newline as the row terminator

You could also get it to work by just specifying the field terminator in this case

BULK INSERT AlexaSongs  
   FROM 'c:\Songs played with Alexa.csv'  
   WITH  
     (  
        FIELDTERMINATOR =','
      ); 


So what does the new syntax look like?

Here is the code that accomplishes the same as above but uses the new WITH FORMAT = 'CSV' option

BULK INSERT AlexaSongs
FROM 'c:\Songs played with Alexa.csv' 
WITH (FORMAT = 'CSV'); 


I guess you could say it is a little cleaner, but all this really amounts to is syntactic sugar
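
One nice thing is that FORMAT = 'CSV' plays along with the other BULK INSERT options; for example, if your file had a header row you could skip it with FIRSTROW (a sketch, my file here does not actually have a header):

BULK INSERT AlexaSongs
FROM 'c:\Songs played with Alexa.csv' 
WITH (FORMAT = 'CSV',
      FIRSTROW = 2);  -- skip the header row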

For Azure, it looks like this; I grabbed this straight from the Books Online page here

First you need to create a data source


CREATE EXTERNAL DATA SOURCE MyAzureInvoices
    WITH  (
        TYPE = BLOB_STORAGE,
        LOCATION = 'https://newinvoices.blob.core.windows.net', 
        CREDENTIAL = UploadInvoices  
    );

And then you use that data source

BULK INSERT Colors2
FROM 'week3/inv-2017-01-19.csv'
WITH (DATA_SOURCE = 'MyAzureInvoices',
      FORMAT = 'CSV'); 

For more examples, including accessing data in a CSV file referencing a container in an Azure blob storage location, go here: https://msdn.microsoft.com/en-us/library/mt805207.aspx

That's all for today

Sunday, January 01, 2017

Running queries against the songs you played with Alexa



My son got an Amazon Echo for Christmas. We use the Echo mostly to play music. I have setup IFTTT (If This Then That) to save the name of any song we play in a Google Sheet.

Between December 26 and January 1st we played a little over 1000 songs. Most of the time I would just say something like "Alexa, play 80s music" or "Alexa, play 70s music"; this is why you might see songs from the same period played in a row

It is no coincidence that a lot of George Michael songs were played; he died on Christmas Day. The most played song was requested by my youngest son Nicholas; he loves Demons by Imagine Dragons

I decided to import the Alexa data into SQL Server and run some queries. If you want to follow along, you can get the file here from GitHub: Songs played by Alexa

I exported the Google Sheet to a tab-delimited file and saved it on my C drive. Then I created a table and did a BULK INSERT to populate it with the data from the file


USE tempdb
GO

CREATE TABLE AlexaSongs(PlayDate varchar(100), 
   SongName varchar(200), 
   Artist varchar(200), 
   Album varchar(200))
GO

BULK INSERT AlexaSongs  
   FROM 'c:\Songs played with Alexa.tsv'  
   WITH  
     (  
        FIELDTERMINATOR = '\t',  -- the file is tab delimited
        ROWTERMINATOR = '\n'
      );  

 The date in the file is not in a format that can be converted automatically; it looks like this: December 26, 2016 at 09:53AM

I decided to add a date column and then convert that value with T-SQL. I did this by using the REPLACE function and replacing ' at ' with  ' '

ALTER TABLE  AlexaSongs ADD DatePlayed datetime
GO


UPDATE AlexaSongs
SET DatePlayed =  CONVERT(datetime, replace(playdate,' at ',' '))
GO
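
If you want to be safe, you can check for rows that still won't convert before running the UPDATE; TRY_CONVERT returns NULL instead of blowing up (my own sanity check, not strictly needed):

-- any rows left where the cleaned-up value still can't be converted?
SELECT PlayDate
FROM AlexaSongs
WHERE TRY_CONVERT(datetime, REPLACE(PlayDate,' at ',' ')) IS NULL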


Now that this is all done, we can run some queries
Which artist did we play the most?

SELECT Artist, count(SongName) As SongCount 
FROM AlexaSongs
GROUP BY Artist
ORDER BY SongCount DESC

Artist SongCount
George Michael 33
Nirvana 32
Imagine Dragons 22
Josh Groban 19
Eagles 17
Stone Temple Pilots 17
Mariah Carey 16
Meghan Trainor 15
Simon & Garfunkel 13
Pearl Jam 12

As you can see that is George Michael

How about if we want to know how many unique songs we played by artist?

SELECT Artist, count(DISTINCT SongName) As DistinctSongCount 
FROM AlexaSongs
GROUP BY Artist
ORDER BY DistinctSongCount DESC

Artist DistinctSongCount
Nirvana 25
Stone Temple Pilots 16
George Michael 15
Eagles 12
Simon & Garfunkel 12
Josh Groban 12
Mariah Carey 11
Michael Bublé 9
Snoop Dogg 9
Harry Connick Jr. 9

In this case Nirvana wins

How about the 10 most played songs? To answer that question and grab ties, we can use WITH TIES

SELECT TOP 10 WITH TIES Artist, SongName, COUNT(*) As SongCount 
FROM AlexaSongs
GROUP BY Artist,SongName
ORDER BY SongCount DESC

Here are the results
Artist SongName SongCount
Imagine Dragons Radioactive 12
Jason Mraz I'm Yours 9
Pearl Jam Yellow Ledbetter 6
Josh Groban When You Say You Love Me 5
Oasis Wonderwall (Remastered) 4
House Of Pain Jump Around [Explicit] 4
Meghan Trainor Lips Are Movin 4
Imagine Dragons Round And Round 4
Nirvana Smells Like Teen Spirit 4
Sir Mix-A-Lot Baby Got Back [Explicit] 4
George Michael Careless Whisper 4
George Michael Faith (Remastered) 4
George Michael Father Figure 4
George Michael Freedom! '90 4


So what other interesting queries can you come up with? How about how many Christmas related songs were there? Would the query look something like this?

SELECT TOP 10 WITH TIES Artist, SongName, COUNT(*) As SongCount 
FROM AlexaSongs
WHERE SongName LIKE '%christmas%'
OR SongName LIKE '%xmas%'
OR SongName LIKE '%santa%'
GROUP BY Artist,SongName
ORDER BY SongCount DESC

Maybe you would want to know how many songs you played per day?

SELECT CONVERT(date, DatePlayed) as TheDate, count(*)
FROM AlexaSongs
GROUP BY CONVERT(date, DatePlayed)
ORDER BY TheDate


Or maybe you want to know how many songs with the same title were sung by more than 1 artist?
Is this what the query would look like?

SELECT SongName, count(DISTINCT Artist) As SongCount 
FROM AlexaSongs
GROUP BY SongName
HAVING COUNT(DISTINCT Artist) > 1
ORDER BY SongCount DESC

If you want the song as well as the artist, you can use a windowing function with DENSE_RANK


;WITH cte AS(
SELECT   Artist, SongName, 
  DENSE_RANK() OVER (PARTITION BY   SongName
    ORDER BY Artist ) AS ArtistRank
FROM AlexaSongs )  

SELECT DISTINCT Artist, SongName 
FROM cte 
WHERE SongName IN (SELECT SongName FROM cte WHERE ArtistRank > 1)

That is all for this post; I will keep collecting this data till next Christmas and hopefully I will be able to run some more interesting queries then

Thursday, December 29, 2016

The 15 most popular posts in 2016


Another year is almost over. As the last post of this year I decided to share with you the 15 most popular posts in 2016. I just used Google Analytics to grab this info and excluded the home page from the top 15. Four of these posts were written in 2006 and one was written in 2005

Here they are in order of popularity

1 Some cool SQL Server announcements SQL Graph, Adaptive Query Plan, CTP1 of SQL vNext, SQL Injection detection
This is my recap of the chalkboard session with the SQL Server team at the SQL Server PASS summit in Seattle.

2 Five Ways To Return Values From Stored Procedures
A very old post that shows you five ways to return values from a stored proc

3 SQL Server 2016 SP1 released, SQL Server vNext available for download
Posted this while the connect event was going on

4 Installing SQL Server on Linux, connecting and running queries from Linux
A quick walk through on how to install SQL Server on Ubuntu, I also included a video in the post

5 Are your foreign keys indexed? If not, you might have problems
A post showing you that indexing foreign keys might be beneficial if you do deletes

6 Convert Millisecond To "hh:mm:ss" Format
A very old post showing you how to convert from milliseconds to "hh:mm:ss" format

7 What's new in SQL Server 2016: CREATE OR ALTER
A post showing how the new CREATE OR ALTER syntax works

8 Why you need additional privileges to truncate tables compared to delete statements
The reason why you need these privileges is shown

9 What's new in SQL Server 2016: Truncate partitions
This post shows how truncate partitions works

10 What we learned from the reddit SQL Server AMA thread on /r/sqlserver
A recap of the AMA SQL Server thread on reddit

11 Playing around with the new stuff in SQL Server vNext and SQL Server 2016 SP1
Just a post showing some of the new stuff released in SP 1 of SQL Server 2016 and vNext

12 ISO-11179 Naming Conventions
A very old post linking to the ISO-11179 Naming Conventions document

13 Query Analyzer Trick
A very old post explaining how you can drag the columns into the query window from object explorer. I think by now everyone knows this one right?

14 What was the first computer you bought, what were some of your favorite games?
Mine was the Commodore 128, what about you?

15 Chaos Isolation Level In SQL Server
This one is linked from dba.stackexchange.com; it is kind of interesting because it was a silly post I made


That is all... see you in 2017...

Out of focus Christmas tree

Tuesday, December 27, 2016

Didn't go to PASS but want to watch the sessions?


You didn't go to the PASS Summit this year, but you would still like to watch the sessions? There is a way now: you can buy the USB stick with all the sessions, or you can download just the sessions you are interested in. The passboutique site has the details; they also have a sale going on at the moment.

So if you got a bunch of money over the holidays, this would be a great investment.....

Saturday, December 17, 2016

SQL Server vNext version Community Technology Preview 1.1 available


A nice holiday present for you all has just arrived: SQL Server next version Community Technology Preview 1.1


Here is what is new in terms of the SQL Engine

  • Language and performance enhancements to natively compiled T-SQL modules, including support for OPENJSON, FOR JSON, JSON built ins as well as memory-optimized tables support for computed columns.
  • Improved the performance of updates to non-clustered columnstore indexes in the case when the row is in the delta store.
  • Batch mode queries now support “memory grant feedback loops,” which learn from memory used during query execution and adjusts on subsequent query executions; this can allow more queries to run on systems that are otherwise blocking on memory.
  • New T-SQL language features:
    • Introducing three new string functions: TRIM, CONCAT_WS, and TRANSLATE
    • BULK IMPORT supports CSV format and Azure Blob Storage as file source
    • STRING_AGG supports WITHIN GROUP (ORDER BY)
TRIM
So finally we can write the following instead of doing LTRIM and RTRIM

SELECT TRIM( '     NoSpaces    ') AS Result;

That will return just the value NoSpaces

You can also specify what characters to trim
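
For example, something like this should strip the characters you list from both ends (my own quick example):

-- trim leading and trailing zeros instead of spaces
SELECT TRIM('0' FROM '000123000') AS Result;

That will return just 123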


TRANSLATE 
This acts like a bunch of nested REPLACE statements; instead of REPLACE(REPLACE(REPLACE(REPLACE(SomeVal,'[','('),']',')'),'{','('),'}',')') you can do the following, which is much cleaner


SELECT TRANSLATE('2*[3+4]/{7-2}', '[]{}', '()()');

Running that will return 2*(3+4)/(7-2)
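
CONCAT_WS
The third new string function from the list above is CONCAT_WS; it takes a separator as the first argument and skips NULL values, roughly like this (my own example, not from the CTP notes):

SELECT CONCAT_WS('-', '2017', '01', NULL, '16') AS Result;

Running that will return 2017-01-16; the NULL argument is simply skipped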

Everything that is new in version 1.1 can be found here https://msdn.microsoft.com/en-us/library/mt788653.aspx

What are you waiting for??..... pick where you want to install the next version.....


See...you won't be bored this weekend.....

Friday, December 16, 2016

Interesting SQL related links for the week of Dec 16, 2016


Here are some interesting articles I read and tweeted about this past week; I think you will like these as well. If you are bored this weekend, some of these might be good for you to read


SQL Server + Python — What’s new

Python is one of the most popular and fastest-growing languages used today. Pyodbc (Python-SQL Server Connector) is an open source Python module maintained by Michael Kleehammer that uses ODBC Drivers to connect to SQL Server. It allows you to connect from the platform of your choice to SQL Server on-premises and in the cloud. Pyodbc is also cross platform and can be installed using pip.

We recently announced SQL Server v.Next CTP1 on Linux and Windows, which brings the power of SQL Server to both Windows and — for the first time ever — Linux. You can now connect your applications to SQL Server running on Linux, Windows and macOS (with Docker).

For our Python developers, we have a few updates that will improve Python connectivity with SQL Server. Pyodbc is now:



SQL Server 2016 SP1: Know your limits

With the recent announcement of SQL Server 2016 SP1, we announced the consistent programmability experience for developers and ISVs, who can now maintain a single code base and build intelligent database applications which scale across all the editions of SQL Server. The processor, memory and database size limits do not change and remain as-is in all editions, as documented in the SQL Server editions page. We have made the following changes in our documentation to accurately reflect the memory limits on lower editions of SQL Server. This blog post is intended to clarify and provide more information on the memory limits starting with SQL Server 2016 SP1 on Standard, Web and Express Editions of SQL Server.



Parameterization for Always Encrypted – Using SSMS to Insert into, Update and Filter by Encrypted Columns

SQL Server Management Studio 17.0 (the next major update of SSMS, currently available as a Release Candidate) introduces two important capabilities for Always Encrypted:
  • Ability to insert into, update and filter by values stored in encrypted columns from a Query Editor window.
  • The new online encryption algorithm, exposed in the Set-SqlColumnEncryption PowerShell cmdlet, which makes tables available for both reads and writes during the initial encryption and column encryption key rotation.

This article addresses the first of the two enhancements.




While we often worry about sophisticated digital attacks, the most common attacks for accessing news organizations’ accounts depend on only a few simple weaknesses. These weaknesses are usually a combination of predictable passwords, phishing emails designed to steal login credentials, as well as malicious file attachments in email and elsewhere. While the attacks are simple, so are the defenses. This collection of resources and learning materials will walk you through practices recommended by security specialists for defending your newsroom against common attacks on your accounts.



SQL Server on Linux: How? Introduction

Making SQL Server run on Linux involves introducing what is known as a Platform Abstraction Layer (“PAL”) into SQL Server. This layer is used to align all operating system or platform specific code in one place and allow the rest of the codebase to stay operating system agnostic. Because of SQL Server’s long history on a single operating system, Windows, it never needed a PAL. In fact, the SQL Server database engine codebase has many references to libraries that are popular on Windows to provide various functionality. In bringing SQL Server to Linux, we set strict requirements for ourselves to bring the full functional, performance, and scale value of the SQL Server RDBMS to Linux. This includes the ability for an application that works great on SQL Server on Windows to work equally great against SQL Server on Linux. Given these requirements and the fact that the existing SQL Server OS dependencies would make it very hard to provide a highly capable version of SQL Server outside of Windows in reasonable time it was decided to marry parts of the Microsoft Research (MSR) project Drawbridge with SQL Server’s existing platform layer SQL Server Operating System (SOS) to create what we call the SQLPAL. The Drawbridge project provided an abstraction between the underlying operating system and the application for the purposes of secure containers and SOS provided robust memory management, thread scheduling, and IO services. Creating SQLPAL enabled the existing Windows dependencies to be used on Linux with the help of parts of the Drawbridge design focused on OS abstraction while leaving the key OS services to SOS. We are also changing the SQL Server database engine code to by-pass the Windows libraries and call directly into SQLPAL for resource intensive functionality.

Friday, December 09, 2016

Connecting to SQL Server when your default database is not available


One of our databases on the development server went into suspect mode today. This database was the default for a bunch of logins, and those people could not log in anymore. Someone needed to use a different database, but he couldn't log in because the database in suspect mode was the default database for the login he was using.

I told this person to click on the Options button in the connection dialog and specify another database. I guess there was a misunderstanding because this person couldn't get it to work. This means it is time for a blog post.

Let's take a look at how this all works

Here is a script that will create 2 databases

CREATE DATABASE Good
GO
 
CREATE DATABASE OopsBad
GO

Now create a new login named TestLogin with a password of Test. We are also adding the login we just created to the OopsBad database and we will make the login part of the db_owner role


USE [master]
GO
CREATE LOGIN [TestLogin] WITH PASSWORD=N'Test', DEFAULT_DATABASE=[OopsBad]
USE [OopsBad]
GO
CREATE USER [TestLogin] FOR LOGIN [TestLogin]
GO
USE [OopsBad]
GO
ALTER ROLE [db_owner] ADD MEMBER [TestLogin]
GO

Add the login we just created to the Good database as well and make the login part of the db_owner role



USE [Good]
GO
CREATE USER [TestLogin] FOR LOGIN [TestLogin]
GO
USE [Good]
GO
ALTER ROLE [db_owner] ADD MEMBER [TestLogin]
GO

Make sure that you can login with the TestLogin account

Now that you know that you can login with the TestLogin account, use another account and put the OopsBad database in offline mode

Take the database offline

ALTER DATABASE OopsBad SET OFFLINE



Now if you try to login with the TestLogin account, you will see the following error



Here is what you need to do: on the Connect to Server window, click on the Options button



On the next screen, click on the Connection Properties tab; do not click on Browse server... in the drop down. If you click that you might have to wait for a long time

Instead just type in the database name, in our case the database name is Good



Click on Connect and you should be in....  That's it, pretty simple, you just have to know where to find it.
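
If you would rather fix it with T-SQL once you are connected (or from a different admin login), you can also point the login at a database that is actually available, and bring the broken one back when it is healthy again; a quick sketch:

-- give the login a default database that is online
ALTER LOGIN [TestLogin] WITH DEFAULT_DATABASE = [Good]
GO

-- and once the original database is fixed, bring it back
ALTER DATABASE OopsBad SET ONLINE
GO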

Interesting SQL related links for the week of Dec 09, 2016


Here are some interesting articles I read and tweeted about this past week; I think you will like these as well. If you are bored this weekend, some of these might be good for you to read


Transaction Commit latency acceleration using Storage Class Memory in Windows Server 2016/SQL Server 2016 SP1

SQL Server 2016 SP1 adds a significant new performance feature, the ability to accelerate transaction commit times (latency) by up to 2-4X, when employing Storage Class Memory (NVDIMM-N nonvolatile storage). This scenario is also referred to as “persistent log buffer” as explained below.

This enhancement is especially valuable for workloads which require high frequency, low latency update transactions. These app patterns are common in the finance/trading industry as well as online betting and some process control applications.

As we all wind down for the 2016 holiday season, we want to give the SQL Server community a holiday gift to say ‘thank you’ for all your support during 2016, and what better gift than more free content?!

As many of you know, I publish a bi-weekly newsletter to more than 13,500 subscribers that contains an editorial on a SQL Server topic, a demo video, and a book review of my most recently completed book. We’re making all the 2015 demo videos available so everyone can watch them – 25 videos in all, mostly in WMV format. I did the same thing the last few years for the 2014 videos, 2013 videos, 2012 videos, and 2011 videos.


Developers Choice: Query progress – anytime, anywhere

One of the most important actions when a performance issue hits, is to get precise understanding on the workload that’s executing and how resource usage is being driven. The actual execution plan is an invaluable tool for this purpose.

Query completion is a prerequisite for the availability of an actual query plan, but with LQS (Live Query Statistics), you can already get information about in-flight query executions (see this blog post), which is especially useful for long running queries, and queries that run indefinitely and never finish.

To look at in-flight query executions, the query execution statistics profile infrastructure must be enabled on demand.



Get All of the Freebies for “End-to-End Troubleshooting Checklist for Microsoft SQL Server”

One of my most enduring and popular presentations is called “End-to-End Troubleshooting Checklist for Microsoft SQL Server”. In this presentation, I take you through my six-step checklist from detection of a performance issue on SQL Server through identification of the root cause to remediation and finally post-mortem steps to ensure the problem is prevented (or at least detected immediately) in the future.

Over the years, I’ve had many inquiries about the slides, T-SQL scripts, and additional troubleshooting information. I’m happy to report that I’ve finally collated all of the associated content from that presentation!



Selective XML Indexes: Not Bad At All

So what is a Selective XML index? It’s an index! For XML! Where you pick the parts of the XML to index. Other XML indexes are rather all or nothing, and can end up being pretty huge, depending on the size of your documents. I think they’re at least size of data, if I recall correctly. Let’s take a look at some examples.


sp_WhatsupQueryStore

The sp_WhatsupQueryStore Stored Procedure is a Microsoft SQL Server Stored Procedure that retrieves all kinds of information from the Query Store. By running the script on this website the sp_WhatsupQueryStore Stored Procedure gets installed in the "master" database of your SQL Server Instance.

After installation you can run the Stored Procedure as follows:

EXEC sp_WhatsupQueryStore @dbname, @timewindow, @topqueries, @return_store_config, @return_forced_plans, @return_top_duration, @return_top_cpu, @return_top_log_read, @return_top_log_write, @return_top_phys_read



Clustered Columnstore Index in Azure SQL Database

Columnstore index is the preferred technology to run analytics queries in Azure SQL Databases. We recently announced general availability of In-Memory technologies for all Premium databases. Similar to In-Memory OLTP, the columnstore index technology is available in premium databases.

The columnstore technology is available in two flavors; clustered columnstore index (CCI) for DataMart analytics workloads and nonclustered columnstore index (NCCI) to run analytics queries on operational (i.e. OLTP) workload. Please refer to NCCI vs CCI for the differences between these two flavors of columnstore indexes. The columnstore index can speed up the performance of analytics queries up to 100x while significantly reducing the storage footprint. The data compression achieved depends on the schema and the data, but we see around 10x data compression on average when compared to rowstore with no compression. This blog will focus on Analytic workloads using CCI but cover NCCI in a future blog.


When Did SQL Server Last Update That Statistic? How Much Has Been Modified Since? And What Columns are in the Stat?

Whether I’m working as a DBA, a consultant, a teacher, or just answering questions in my inbox, I always end up needing a script to inspect statistics one way or another.

Here are some freshly written scripts for a classic DBA question: what’s going on in my stats?



SQL Server + Java: What’s new

Java continues to be one of the most widely used programming languages for a variety of application scenarios and industries. The Microsoft JDBC Driver for SQL Server is used to connect Java applications to SQL Server, whether SQL Server is hosted in the cloud or on-premises, or provided as a platform-as-a-service.

With the release of SQL Server v.Next public preview on Linux and Windows, the ability to connect to SQL Server on Linux, Windows, Docker or macOS (via Docker) makes cross-platform support for all connectors, including the JDBC driver, even more important. To enable Java developers to use the newest SQL Server features, we have been updating the JDBC driver with client-side support for new features, including Always Encrypted and Azure Active Directory Authentication.



Microsoft R Server 9.0 now available

Microsoft R Server 9.0, Microsoft's R distribution with added big-data, in-database, and integration capabilities, was released today and is now available for download to MSDN subscribers. This latest release is built on Microsoft R Open 3.3.2, and adds new machine-learning capabilities, new ways to integrate R into applications, and additional big-data support for Spark 2.0.

This release includes a brand new R package for machine learning: MicrosoftML. This package provides state-of-the-art, fast and scalable machine learning algorithms for common data science tasks including featurization, classification and regression. Some of the functions provided include:
  • Fast linear and logistic model functions based on the Stochastic Dual Coordinate Ascent method;
  • Fast Forests, a random forest and quantile regression forest implementation based on FastRank, an efficient implementation of the MART gradient boosting algorithm;
  • A neural network algorithm with support for custom, multilayer network topologies and GPU acceleration;
  • One-class anomaly detection based on support vector machines.
You can learn more about MicrosoftML at this live webinar on Wednesday, December 14.



Released: Public Preview for Microsoft Azure SQL Database Management Pack (6.7.11.0)

We are working on significantly updating the Management Pack for Azure SQL Database.

This release will bring support for Elastic Pools and Azure AD Authentication among other new features. We are also working on handling monitoring of a larger number of databases. We expect to improve the scale by the time we RTM. Here are some numbers to give you an idea for this public preview:



Improve query performance on memory optimized tables with Temporal using new index creation enhancement in SP1

With the introduction of the Temporal feature in SQL 2016 and Azure SQL Database, there is an ability to time travel through the state of data as it was at any given point of time. Alongside In-Memory OLTP, Temporal on memory optimized tables allows you to harness the speed of In-Memory OLTP, and gives you the same ability to track history and audit every change made to a record. Temporal added to memory optimized tables also allows you to maintain a “smaller” memory optimized tables and thereby a smaller memory footprint by deleting data that isn’t “hot” anymore from the current memory optimized table, which in turn moves it to the history table without having an external archival process to do that.
When memory optimized and temporal tables are combined, an internal memory optimized table is created in addition to the history table, as depicted in the diagram below. Data is flushed asynchronously from the internal in-memory History table to the disk based history table. The flush interval  isn’t currently configurable. Data is flushed when the internal table reaches 8% of the memory consumed by the current table, OR you can flush it manually by executing the procedure sys.sp_xtp_flush_temporal_history. The internal memory optimized table is created with the same column definitions as the current in-memory table, but with a single index.
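
Side note from me: if you want to kick off that flush yourself, the procedure mentioned above is called per table; if I remember the parameters correctly it looks roughly like this:

-- manually flush the internal in-memory history to the disk-based history table
EXEC sys.sp_xtp_flush_temporal_history 
     @schema_name = N'dbo', 
     @object_name = N'MyInMemoryTemporalTable'  -- made-up table name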



Extended per-operator level performance stats for Query Processing
As requested by the community to complete the per-operator information, starting with SQL Server 2016 SP1 we are now exposing memory grant per grant iterator (such as Sorts and Hash Matches). These give you added insight into memory grants, and how overall memory usage is driven throughout execution.
In the October release of SSMS we exposed per-operator stats directly in the Properties window for each operator, but the memory grant properties are not there yet. These will be added to the properties window in an upcoming release of SSMS

Friday, December 02, 2016

Interesting SQL related links for the week of Dec 02, 2016


Here are some interesting articles I read and tweeted about this past week; I think you will like these as well. If you are bored this weekend, some of these might be good for you to read



Microsoft is putting reprogrammable chips into millions of servers

All the Internet giants, including Microsoft, now supplement their CPUs with graphics processing units, chips designed to render images for games and other highly visual applications. When these companies train their neural networks to, for example, recognize faces in photos—feeding in millions and millions of pictures—GPUs handle much of the calculation

Some giants like Microsoft are also using alternative silicon to execute their neural networks after training. And even though it’s crazily expensive to custom-build chips, Google has gone so far as to design its own processor for executing neural nets, the tensor processing unit.



Developers choice: Programmatically identify LPIM and IFI privileges in SQL Server

Lock Pages in Memory and Instant File Initialization privileges are a couple of configuration settings which every DBA, SQL Server consultant or enthusiast has in their checklist to ensure they see predictable performance for their SQL Server instance. While Lock Pages in Memory privilege information is logged in the SQL Error log, Instant File Initialization information was first introduced in the SQL Errorlog starting with SQL Server 2016 RTM and later added to SQL Server 2014 with SP2.

When you are managing, administering or monitoring large deployment of SQL Servers, it is still cumbersome to programmatically query SQL Error log to check if these permissions are enabled for the SQL Server service account. With SQL Server 2016 SP1, we have added new columns in the DMV which now makes it easy to develop scripts to programmatically query and report whether Lock Pages in Memory and instant file initialization privileges are enabled on a given instance of SQL Server.
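
Side note from me: if I remember right, the new columns are instant_file_initialization_enabled in sys.dm_server_services and sql_memory_model_desc in sys.dm_os_sys_info, so a quick check would look something like this:

-- Instant File Initialization: Y or N for the database engine service
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services

-- LOCK_PAGES here means Lock Pages in Memory is in effect
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info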



Introducing Batch Mode Adaptive Memory Grant Feedback

SQL Server uses memory to store in-transit rows for hash join and sort operations. When a query execution plan is compiled for a statement, SQL Server estimates both the minimum required memory needed for execution and the ideal memory grant size needed to have all rows in memory. This memory grant size is based on the estimated number of rows for the operator and the associated average row size. If the cardinality estimates are inaccurate, performance can suffer:

  • For cardinality under-estimates, the memory grant can end up being too small and the rows then spill to disk, causing significant performance degradation compared to a fully memory-resident equivalent.
  • For cardinality over-estimates, the memory grant can be too large and the memory goes to waste. Concurrency can be impacted because the query may wait in a queue until enough memory becomes available, even though the query only ends up using a small portion of the granted memory.


SQL Server on Linux: High availability and security

With SQL Server on Linux, Microsoft brings SQL Server’s core relational database engine to the growing enterprise Linux ecosystem. Both High Availability and Disaster Recovery (HADR) and security are aspects of SQL Server that are critically important for enterprises. This article highlights the HADR and security solutions for SQL Server on Linux that are available today, as well as the roadmap for what’s coming soon.



Using SQL Tools with SQL Server on Linux

Today, developers can use SQL Server in a variety of environments including on-premises, in datacenters, in virtual machines, in clouds such as Azure, AWS and Google, and also as a Platform as a Service (PaaS) offering with Azure SQL Database and Azure SQL Data Warehouse.

We recently announced SQL Server v.Next CTP1 on Linux and Windows, which brings the power of SQL Server to both Windows — and for the first time ever — Linux. Developers can now create applications with SQL Server on Linux, Windows, Docker, or macOS (via Docker) and then deploy to Linux, Windows, or Docker, on-premises or in the cloud.

As part of this announcement, we have released new SQL tools and also updated existing SQL tools. Developers can use these tools to connect to and work with SQL running anywhere, including SQL Server on Linux, Windows or Docker.



7 Things I Learned About Aurora at AWS re:Invent 2016

Richie and I attended the AWS re:Invent conference in Vegas last week. Here’s some of my favorite takeaways about Amazon Aurora, their homegrown relational database with MySQL compatibility.