Showing posts with label Best Practices.

Tuesday, January 01, 2019

SQL Server Maintenance

It has been a while since I wrote some of my best practices posts. I decided to revisit these posts to see if anything has changed and whether I could add some additional info.

In this post we are going to look at SQL Server maintenance.

Just like with a car or a house, you need to do maintenance on databases as well. SQL Server has gotten better over the years, and there are fewer knobs you need to turn out of the box, but maintenance is still required.


In this post I will be looking at some things that you need to be aware of. Some of them can be thought of as maintenance as well as regular checks. Think of a DBA as a car mechanic: instead of an oil change, a tune-up or checking the tire pressure, the DBA will check index fragmentation, run DBCC CHECKDB and make sure you have enough space for the database to grow for the next predetermined period.

The things I will cover in this post are: fragmentation of indexes, free drive space, space in filegroups, running DBCC CHECKDB and finally making sure that you have the latest source code of your objects in a source control system.

Check fragmentation of indexes

Your indexes will get fragmented over time if you do a lot of updates, inserts and deletes.

Now instead of rolling your own solution, you should take a look at some of the solutions that are out there and used by many people. Take a look at SQL Server Index and Statistics Maintenance by Ola Hallengren. You can also get the scripts from GitHub here: https://github.com/olahallengren/sql-server-maintenance-solution
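If you just want a quick look at fragmentation before setting up a full maintenance solution, a query against the sys.dm_db_index_physical_stats DMV like the one below should do; the 30 percent and 1000 page filters are commonly used starting points, not hard rules.

SELECT OBJECT_NAME(ips.object_id) AS TableName,
 i.name AS IndexName,
 ips.avg_fragmentation_in_percent,
 ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id
 AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC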


Check that your database is healthy by running DBCC CHECKDB

What does DBCC CHECKDB do? Here is the explanation from Books Online:
Checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

  • Runs DBCC CHECKALLOC on the database.
  • Runs DBCC CHECKTABLE on every table and view in the database.
  • Runs DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.
So how frequently should you be running DBCC CHECKDB? Ideally you should be running DBCC CHECKDB as frequently as possible. Do you want to find out that there is corruption when it is very difficult to fix because two weeks have passed, or do you want to find out the same day so that you can fix the table immediately?
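A basic call looks like this; the database name is a placeholder, and WITH NO_INFOMSGS suppresses the informational messages so that only real problems show up.

DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS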

Paul Randal, who worked on DBCC CHECKDB, has a whole bunch of blog posts about it; the posts can be found here: http://www.sqlskills.com/blogs/paul/category/checkdb-from-every-angle.aspx

Make sure that you have enough space left on the drives

Running out of space on a drive is not fun; suddenly you can't insert any more data into your tables because no new pages can be allocated. If you have tools in your shop like cacti then this is probably already monitored. If you don't have any tools then either get a tool or roll your own. Here is how you can get the free space of the drives with T-SQL


CREATE TABLE #FixedDrives(Drive CHAR(1),MBFree INT)

INSERT #FixedDrives
EXEC xp_fixeddrives

SELECT * FROM #FixedDrives


Here is the output for one of my servers

Drive MBFree
------------------
C     6916   -- System
D     28921  -- Apps
L     52403  -- Log
M     4962   -- System databases
T     86208  -- Temps
U     71075  -- User databases
V     212075 -- User databases
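Keep in mind that xp_fixeddrives is undocumented and only reports local fixed drives. On newer versions of SQL Server you can use the documented sys.dm_os_volume_stats function instead, which also handles mount points; here is a sketch of what that looks like.

SELECT DISTINCT vs.volume_mount_point,
 vs.total_bytes/1048576 AS TotalMB,
 vs.available_bytes/1048576 AS FreeMB
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs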


Here is a simple way of using T-SQL to create a SQL Agent job that runs every 10 minutes and will send an email if you go below the threshold that you specified. This code is very simple and is just to show you that you can do this in T-SQL. You can make it more dynamic/configurable by not hard-coding the drives or thresholds.

DECLARE @MBFreeD INT
DECLARE @MBFreeE INT
CREATE TABLE #FixedDrives(Drive CHAR(1),MBFree INT)

INSERT #FixedDrives
EXEC xp_fixeddrives

SELECT @MBFreeD =  MBFree
FROM #FixedDrives
WHERE DRIVE = 'D'

SELECT @MBFreeE =  MBFree
FROM #FixedDrives
WHERE DRIVE = 'E'


DROP TABLE #FixedDrives

IF @MBFreeD < 30000 OR @MBFreeE < 10000
BEGIN
  DECLARE @Recipients VARCHAR(8000)
  SELECT @Recipients = 'SomeGroup@SomeEmail.com'

  DECLARE @p_body AS NVARCHAR(MAX), @p_subject AS NVARCHAR(MAX), @p_profile_name AS NVARCHAR(MAX)

  SET @p_subject = @@SERVERNAME + N'  Drive Space is running low'
  SET @p_body = ' Drive Space is running low <br><br><br>' + CHAR(13) + CHAR(10) + 'Drive D has '
  + CONVERT(VARCHAR(20),@MBFreeD) + ' MB left <br>' + CHAR(13) + CHAR(10) + 'Drive E has '
  + CONVERT(VARCHAR(20),@MBFreeE) + ' MB left'

  EXEC msdb.dbo.sp_send_dbmail
    @recipients = @Recipients,
    @body = @p_body,
    @body_format = 'HTML',
    @subject = @p_subject
END

Make sure that you have enough space left for the filegroups

In the Sizing database files post I talked about the importance of sizing database files. Just like you can run out of hard drive space, you can also fill up a file used by SQL Server. Here is a query that will tell you how big each file is, how much space is used and how much free space is left. You can use a query like this to alert you before you run out of space.


SELECT
 a.FILEID,
 [FILE_SIZE_MB] = 
  CONVERT(DECIMAL(12,2),ROUND(a.size/128.000,2)),
 [SPACE_USED_MB] =
  CONVERT(DECIMAL(12,2),ROUND(FILEPROPERTY(a.name,'SpaceUsed')/128.000,2)),
 [FREE_SPACE_MB] =
  CONVERT(DECIMAL(12,2),ROUND((a.size-FILEPROPERTY(a.name,'SpaceUsed'))/128.000,2)) ,
 NAME = LEFT(a.NAME,35),
 FILENAME = LEFT(a.FILENAME,60)
FROM
 dbo.sysfiles a
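Note that sysfiles is an old compatibility view; on current versions you can get the same numbers from sys.database_files, along these lines.

SELECT file_id,
 name,
 physical_name,
 CONVERT(DECIMAL(12,2), size/128.0) AS FILE_SIZE_MB,
 CONVERT(DECIMAL(12,2), FILEPROPERTY(name,'SpaceUsed')/128.0) AS SPACE_USED_MB,
 CONVERT(DECIMAL(12,2), (size - FILEPROPERTY(name,'SpaceUsed'))/128.0) AS FREE_SPACE_MB
FROM sys.database_files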

Have the latest scripts of all your objects

You might say that you have all the code for your objects in the database. But what if you want to go back to the version of a proc from 3 days ago? Is it really easier to restore an 800 GB backup from 3 days ago just to get the stored proc code?

Of course not. Make sure that you have DDL scripts of every object in source control; your life will be much easier.
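If you need a quick way to pull the current definitions out of a database, a query against sys.sql_modules returns the source for every proc, view, function and trigger; proper source control tooling is still the better long-term answer.

SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
 OBJECT_NAME(object_id) AS ObjectName,
 definition
FROM sys.sql_modules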
I only touched on a couple of points here; some of the things mentioned here will also show up in the proactive notifications post in a couple of days. There is much more to maintenance than this, so keep informed and make sure you understand what needs to be done.


Some more repos for you to use


Take a look at the GitHub repositories mentioned in this post: Five great SQL Server GitHub repos that every SQL Server person should check out. There are some good ones like dbatools and tigertoolbox.

Monday, February 19, 2018

Reinventing the wheel

It has been a while since I wrote some of my best practices posts. I decided to revisit these posts to see if anything has changed and whether I could add some additional info.

In this post we are going to look at something called reinventing the wheel. Just in case you are not familiar with this metaphor, or maybe you are not a native English speaker, I will use Wikipedia's description of what reinventing the wheel means.
To reinvent the wheel is to duplicate a basic method that has already previously been created or optimized by others.
The inspiration for this idiomatic metaphor lies in the fact that the wheel is the archetype of human ingenuity, both by virtue of the added power and flexibility it affords its users, and also in the ancient origins which allow it to underlie much, if not all, of modern technology. As it has already been invented, and is not considered to have any operational flaws, an attempt to reinvent it would be pointless and add no value to the object, and would be a waste of time, diverting the investigator's resources from possibly more worthy goals which his or her skills could advance more substantially.



So now that you have read the paragraph above, how many times did you write some code only to find out that it already exists in the language as part of some library or function? How many times have you written code that you could have easily grabbed from GitHub, CodePlex and other repositories for your own use?


Why write your own solution when you can use something that is robust and tested?


To start let's take a look at the GitHub repositories mentioned in this post: Five great SQL Server GitHub repos that every SQL Server person should check out
You will find code that does index maintenance, helps you with performance issues, setup and more. Check out that post for more details.


Find out who the community leaders are for a particular skill set that you are interested in, and start following these people: follow them on twitter, subscribe to their blogs and podcasts. Go to their presentations, talk to them, find out what they use, and find out if they have made code available for the public to use. You will find that a good percentage of these people have made a whole bunch of libraries, stored procedures, functions, maintenance routines and much more available for you to use, and it is all free.
Don't be scared to ask for help on twitter; if you don't know any of the SQL Server tweeple, use the #sqlhelp hash tag and ask for help. Here is an example of what it looks like: #sqlhelp
Here is an image of the replies on twitter after I asked a question with the #sqlhelp tag


Besides twitter, you can also use slack. I like slack more because you are not limited to 280 characters. Here is the link to the relevant slack channel: https://sqlcommunity.slack.com/messages/C1MS1RA4B/

Here is a screen shot of what it looks like

That looks a little better than twitter don't you think?


Some commercial firms will also have community editions of code and tools for you to use. Take advantage of this; these are great, and if you like the tools then maybe you will find a need for the pro editions, which have more bells and whistles and are not limited.

Some examples of available solutions:

SQL Server activity
Want to know what is going on right now? Try Adam Machanic's procedure Who Is Active
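Once it is installed, the simplest call is just the following, which lists the currently running requests on the server

EXEC sp_WhoIsActive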

Execution Plans
Check out SentryOne's Plan Explorer. This plan explorer does much more than the one that comes with SQL Server Management Studio.

SQL Search and other tools
Red Gate has a bunch of free tools; you can get those here: https://www.red-gate.com/products/free-tools. I started to use Red Gate's tools back in 2003; SQL Compare is the one I used the most. SQL Search is free, and if you need to find anything in your DB it is invaluable.
Idera free tools
Idera has a bunch of free tools available for download, you can find those all here: https://www.idera.com/productssolutions/freetools

Get involved

If you have created some cool code and you know there is nothing similar out there, why not give back to the community? Put it out there and solicit feedback; in the end the code will be better because more eyes will have looked at it. Accept contributions as well. All of these things will make the community as a whole grow, and if the community grows then the platform will grow as well. When the platform grows, there will be more demand for someone with your skill set. You are responsible for making sure your community doesn't turn into a ghost town.

Wednesday, October 25, 2017

Foreign Keys don't always need a primary key

In the post Your lack of constraints is disturbing we touched a little upon foreign key constraints, but today we are going to take a closer look at foreign keys. The two things that we are going to cover are the fact that you don't need a primary key in order to define a foreign key relationship, and the fact that SQL Server by default will not index foreign keys.

You don't need a primary key in order to have a foreign key

Most people will define a foreign key relationship between the foreign key and a primary key. But you don't have to have a primary key in order to have a foreign key; if you have a unique index or a unique constraint then those can be used as well.

Let's take a look at what that looks like with some code examples


A foreign key with a unique constraint instead of a primary key


First create a table to which we will add a unique constraint after creation

CREATE TABLE TestUniqueConstraint(id int)
GO
Add a unique constraint to the table

ALTER TABLE TestUniqueConstraint ADD CONSTRAINT ix_unique UNIQUE (id)
GO
Insert a value of 1, this should succeed

INSERT  TestUniqueConstraint VALUES(1)
GO

Insert a value of 1 again, this should fail

INSERT  TestUniqueConstraint VALUES(1)
GO

Msg 2627, Level 14, State 1, Line 2
Violation of UNIQUE KEY constraint 'ix_unique'. Cannot insert duplicate key in object 'dbo.TestUniqueConstraint'. The duplicate key value is (1).
The statement has been terminated.



Now that we verified that we can't have duplicates, it is time to create the table that will have the foreign key



CREATE TABLE TestForeignConstraint(id int)
GO
Add the foreign key to the table

ALTER TABLE dbo.TestForeignConstraint ADD CONSTRAINT
 FK_TestForeignConstraint_TestUniqueConstraint FOREIGN KEY 
(id) REFERENCES dbo.TestUniqueConstraint(id) 



Insert a value that exists in the table that is referenced by the foreign key constraint


INSERT TestForeignConstraint  VALUES(1)
INSERT TestForeignConstraint  VALUES(1)

Insert a value that does not exist in the table that is referenced by the foreign key constraint


INSERT TestForeignConstraint  VALUES(2)

Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_TestForeignConstraint_TestUniqueConstraint". The conflict occurred in database "tempdb", table "dbo.TestUniqueConstraint", column 'id'.

The statement has been terminated.


As you can see, you can't insert the value 2 since it doesn't exist in the TestUniqueConstraint table



A foreign key with a unique index instead of a primary key

This section will be similar to the previous section, the difference is that we will use a unique index instead of a unique constraint
First create a table to which we will add a unique index after creation

CREATE TABLE TestUniqueIndex(id int)
GO

Add the unique index



CREATE UNIQUE NONCLUSTERED INDEX ix_unique ON TestUniqueIndex(id)
GO
Insert a value of 1, this should succeed


INSERT  TestUniqueIndex VALUES(1)
GO
Insert a value of 1 again, this should now fail


INSERT  TestUniqueIndex VALUES(1)
GO

Msg 2601, Level 14, State 1, Line 2
Cannot insert duplicate key row in object 'dbo.TestUniqueIndex' with unique index 'ix_unique'. The duplicate key value is (1).
The statement has been terminated.


Now that we verified that we can't have duplicates, it is time to create the table that will have the foreign key


CREATE TABLE TestForeignIndex(id int)
GO

Add the foreign key constraint

ALTER TABLE dbo.TestForeignIndex ADD CONSTRAINT
 FK_TestForeignIndex_TestUniqueIndex FOREIGN KEY 
 (id) REFERENCES dbo.TestUniqueIndex(id)  




Insert a value that exists in the table that is referenced by the foreign key constraint

INSERT TestForeignIndex  VALUES(1)
INSERT TestForeignIndex  VALUES(1)

Insert a value that does not exist in the table that is referenced by the foreign key constraint


INSERT TestForeignIndex  VALUES(2)


Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_TestForeignIndex_TestUniqueIndex". The conflict occurred in database "tempdb", table "dbo.TestUniqueIndex", column 'id'.
The statement has been terminated.


That failed because you can't insert the value 2 since it doesn't exist in the TestUniqueIndex table

As you have seen with the code example, you can have a foreign key constraint that will reference a unique index or a unique constraint. The foreign key does not always need to reference a primary key

Foreign keys are not indexed by default

When you create a primary key, SQL Server will by default make it a clustered index. When you create a foreign key, no index is created.

Scroll up to where we added the unique constraint to the TestUniqueConstraint table; you will see this code

ALTER TABLE TestUniqueConstraint ADD CONSTRAINT ix_unique UNIQUE (id)

All we did was add the constraint; SQL Server added the index behind the scenes for us in order to enforce uniqueness efficiently

Now run this query below


SELECT OBJECT_NAME(object_id) as TableName,
name as IndexName, 
type_desc as StorageType
FROM sys.indexes
WHERE OBJECT_NAME(object_id) IN('TestUniqueIndex','TestUniqueConstraint')
AND name IS NOT NULL

You will get these results

TableName            IndexName StorageType
-------------------- --------- ------------
TestUniqueConstraint ix_unique NONCLUSTERED
TestUniqueIndex      ix_unique NONCLUSTERED

As you can see, both tables have an index

Now let's look at what the case is for the foreign key tables. Run the query below


SELECT OBJECT_NAME(object_id) as TableName,
name as IndexName, 
type_desc as StorageType
FROM sys.indexes
WHERE OBJECT_NAME(object_id) IN('TestForeignIndex','TestForeignConstraint')

Here are the results for that query

TableName             IndexName StorageType
--------------------- --------- -----------
TestForeignConstraint NULL      HEAP
TestForeignIndex      NULL      HEAP




As you can see, no indexes have been added to the tables. Should you add indexes? In order to answer that, let's see what would happen if you did add them. Joins would perform faster since SQL Server can traverse the index instead of scanning the whole table to find the matching join conditions. Updates and deletes will be faster as well, since the index can be used to find the foreign key rows to update or delete (remember, this depends on whether you specified CASCADE or NO ACTION when you created the foreign key constraint).

I wrote about deletes being very slow because the columns were not indexed here: Are your foreign keys indexed? If not, you might have problems
So to answer the question: yes, I think you should index the foreign key columns.
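Using the tables from earlier in the post, indexing the foreign key column is a one-liner; the index name here is just an example, use whatever fits your naming convention.

CREATE NONCLUSTERED INDEX ix_TestForeignIndex_id ON dbo.TestForeignIndex(id)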



Thursday, October 19, 2017

Your lack of constraints is disturbing

It has been a while since I wrote some of my best practices posts. I decided to revisit these posts to see if anything has changed and whether I could add some additional info.

SQL Server supports the following types of constraints:

NOT NULL
CHECK
UNIQUE
PRIMARY KEY
FOREIGN KEY

Using constraints is preferred to using DML Triggers, rules, and defaults. The query optimizer will also use constraint definitions to build high-performance query execution plans.
When I interview people, I always ask how you can make sure only values between 0 and 9 are allowed in an integer column. I get a range of different answers to this question; here are some of them:
  • Convert to char(1) and make sure it is numeric
  • Write logic in the application that will check for this
  • Use a trigger
  • Create a primary key table with only the values from 0 through 9, then make this column a foreign key in the table you want to check
Only 25% of the people will tell you to use something built into SQL Server itself, and only 10% will actually know that this something is called a check constraint; the rest just know that there is something where you can specify which values are allowed.
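For the record, here is what the check constraint answer looks like; the table and column names are just placeholders for the example.

ALTER TABLE dbo.SomeEmployeeTable ADD CONSTRAINT ck_SingleDigit
CHECK (SomeIntColumn BETWEEN 0 AND 9)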


Why do we need constraints at all?

So why do we need constraints? To answer that question, first you have to answer another question: how important is it that the data in your database is correct? I would say that it is most important; after all, you can have all the data in the world, but if it is wrong it is useless, or it might even end up costing you money. To make sure that you don't get invalid data, you use constraints.

Constraints work at the database level; it doesn't matter if you do the data checking from the app or web front-end, someone could still modify the data from SSMS. If you are importing files, constraints will prevent invalid data from making it into the tables.

Constraints don't just have to check a range; constraints can handle complex validations. You can have regular expression style patterns in check constraints as well; check out SQL Server does support regular expressions in check constraints, you don't always need triggers for some examples


Constraints are faster than triggers

The reason that check constraints are preferable over triggers is that they are not as expensive as triggers. You also don't need both an update and an insert trigger; one constraint is enough to handle both updates and inserts.


Constraints are making it hard for us to keep our database scripts from blowing up

This is a common complaint: when you script out the databases and the primary and foreign key tables are not in the correct order, you will get errors. Luckily the tools these days are much better than they were 10 years ago; if you do it by hand, just make sure that it is all in the correct order. Another complaint is that constraints are wasting developers' time, since they can't just populate the tables at random but have to go in the correct order as well.


Some examples of constraints


First create this table

CREATE TABLE SomeTable(code char(3) NOT NULL)
GO


Now let's say we want to restrict the values that you can insert to only accept characters from a through z. Here is what the constraint looks like

ALTER TABLE SomeTable ADD CONSTRAINT ck_bla
CHECK (code LIKE '[a-Z][a-Z][a-Z]' )
GO


If you now run the following insert statement....

INSERT SomeTable VALUES('123')

You get this error message back

Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted with the CHECK constraint "ck_bla". The conflict occurred in database "tempdb", table "dbo.SomeTable", column 'code'.
The statement has been terminated.


What if you have a tinyint column but you want to make sure that values are less than 100? Easy as well; first create this table

CREATE TABLE SomeTable2(SomeCol tinyint NOT NULL)
GO

Now add this constraint

ALTER TABLE SomeTable2 ADD CONSTRAINT ck_SomeTable2
CHECK (SomeCol < 100 )
GO

Try to insert the value 100

INSERT SomeTable2 VALUES('100')

Msg 547, Level 16, State 0, Line 2
The INSERT statement conflicted with the CHECK constraint "ck_SomeTable2". The conflict occurred in database "tempdb", table "dbo.SomeTable2", column 'SomeCol'.
The statement has been terminated.


Okay, what happens if you try to insert -1?

INSERT SomeTable2 VALUES('-1')

Msg 244, Level 16, State 1, Line 1
The conversion of the varchar value '-1' overflowed an INT1 column. Use a larger integer column.
The statement has been terminated.


As you can see you also get an error; however, this is not from the constraint, the error is raised because the tinyint datatype can't hold values less than 0.
A check constraint can also be tied to a user-defined function, and you can also use regular expression style patterns. Ranges can be used as well, for example salary >= 15000 AND salary <= 100000.
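Here is a minimal sketch of the user-defined function approach; the function, table and column names are made up for the example.

CREATE FUNCTION dbo.fn_IsValidSalary(@Salary decimal(20,2))
RETURNS bit
AS
BEGIN
 -- valid when the salary falls inside the allowed range
 RETURN CASE WHEN @Salary >= 15000 AND @Salary <= 100000 THEN 1 ELSE 0 END
END
GO

ALTER TABLE dbo.Employee ADD CONSTRAINT ck_EmployeeSalary
CHECK (dbo.fn_IsValidSalary(Salary) = 1)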

For a post about foreign key constraints, go here: Foreign Keys don't always need a primary key



Tuesday, October 17, 2017

Standardized Naming And Other Conventions

It has been a while since I wrote some of my best practices posts. I decided to revisit these posts to see if anything has changed and whether I could add some additional info.

Today we are going to look at standardized naming conventions and other conventions that you should standardize as well. Every company needs to have standards that developers follow, in order to make maintenance easier down the road. There are several things that you can standardize on; here are just a few:

The naming of objects
The layout of code including comments
The way that error handling is done

The naming of objects

I am not a fan of underscores; we tend to name our objects CamelCased.

Stored procedures are usually prefixed with usp_ or pr, but never sp_; the sp_ prefix causes SQL Server to check the master database for the procedure first, and it also risks clashing with system procedures.


One tool that ships with SQL Server that you can use is Policy-Based Management; you can set it up so that it checks if procs start with sp_.


And here is what happens after the policy is evaluated





Since this is Adam Machanic's proc... we will let this fly :-)


Something like this can also be accomplished with DDL triggers. There are many ways to skin the cat; there is no excuse for having all kinds of crazily named objects.

I also wrote about naming conventions in the Using the ISO-11179 Naming Conventions post


Never use Hungarian notation on column names or variables. I have worked with tables that looked like this

CREATE TABLE tblEmployee(
strFirstName varchar(255),
strLastName varchar(255),
intAge int,
dtmBirthDate datetime
.......
.......
)
If you have IntelliSense in SSMS, having every table start with tbl makes it pretty useless. Also, sometimes the data type of a column will change, but of course nobody goes back to rename the column to reflect this, because it would break code all over the place.



Instead of having something like the following

-- the salary for the employee
declare @decValue decimal(20,2)

It would be better to have something like this

declare @EmployeeSalary decimal(20,2)

Now I don't have to scroll all the way to the top to figure out what is actually stored in this variable; EmployeeSalary pretty much describes what it is, and I can also safely assume that this will be some amount and not a date.

The layout of code including comments

I have worked with code that was all in lowercase and all in uppercase. I have no problem with either, but if you at least standardize on one or the other, it will be a little easier to jump from your code to someone else's code.


You can set up standard templates in SSMS for your organization. You can get to them from the menu bar, View --> Template Explorer, or hit CTRL + ALT + T.
Now expand the Stored Procedures folder


The basic stored procedure template looks like this
-- =============================================
-- Create basic stored procedure template
-- =============================================

-- Drop stored procedure if it already exists
IF EXISTS (
  SELECT * 
    FROM INFORMATION_SCHEMA.ROUTINES 
   WHERE SPECIFIC_SCHEMA = N'<Schema_Name, sysname, Schema_Name>'
     AND SPECIFIC_NAME = N'<Procedure_Name, sysname, Procedure_Name>' 
)
   DROP PROCEDURE <Schema_Name, sysname, Schema_Name>.<Procedure_Name, sysname, Procedure_Name>
GO

CREATE PROCEDURE <Schema_Name, sysname, Schema_Name>.<Procedure_Name, sysname, Procedure_Name>
 <@param1, sysname, @p1> <datatype_for_param1, , int> = <default_value_for_param1, , 0>, 
 <@param2, sysname, @p2> <datatype_for_param2, , int> = <default_value_for_param2, , 0>
AS
 SELECT @p1, @p2
GO

-- =============================================
-- Example to execute the stored procedure
-- =============================================
EXECUTE <Schema_Name, sysname, Schema_Name>.<Procedure_Name, sysname, 
Procedure_Name> <value_for_param1, , 1>, <value_for_param2, , 2>
GO


You can modify this template and give it to every developer, and now you all have the same template. What can be done with templates can also be done with snippets; if you go to Tools --> Code Snippets Manager, you can see all the snippets that are available. You can add your own snippets so that all developers will have the same snippets for common tasks.
Standardize on comments as well. Besides what ships with SSMS, there are also commercial tools that will do an even better job than SSMS.

The way that error handling is done

I like to have all the errors in one place; this way I know where to look if there are errors. Capture the proc or trigger that threw the error, and if it is a multi-step proc then also note the code section in the proc; this will greatly reduce the time it takes you to pinpoint where the problem is. Michelle Ufford has a nice example here: Error Handling in T-SQL that you can use and implement in your own shop.
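A minimal sketch of that centralized approach might look like this; the ErrorLog table and its columns are made up for the example, not taken from Michelle's post.

CREATE TABLE dbo.ErrorLog(
 ErrorID int IDENTITY(1,1) PRIMARY KEY,
 ErrorDate datetime NOT NULL DEFAULT GETDATE(),
 ObjectName sysname NULL,
 ErrorNumber int NULL,
 ErrorMessage nvarchar(4000) NULL)
GO

BEGIN TRY
 -- the real work of the proc goes here
 SELECT 1/0 -- force an error for the example
END TRY
BEGIN CATCH
 INSERT dbo.ErrorLog(ObjectName, ErrorNumber, ErrorMessage)
 VALUES (OBJECT_NAME(@@PROCID), ERROR_NUMBER(), ERROR_MESSAGE())
END CATCH
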
There are many more things that you need to standardize on. The thing that bothers me the most is when I see dates in all kinds of formats when passed in as strings; use YYYYMMDD, as this format is unambiguous. For example, '20190101' is always January 1, 2019, while '01/02/2019' can be January 2 or February 1 depending on language and DATEFORMAT settings.


Monday, October 16, 2017

Do not trust the SSMS designers, learn the T-SQL way



It has been a while since I wrote some of my best practices posts. I decided to revisit these posts to see if anything has changed and whether I could add some additional info.

Read the following two lines

Question: How do you add a primary key to a table?
Answer: I click on the yellow key icon in SSMS!

Have you ever given that answer, or has anyone ever answered that when you asked this question?

Technically, yes, that will create a primary key on the table but what will happen when you do that? Let's take a look at some examples.
First create this very simple table

CREATE TABLE TestInt(Col1 tinyint not null)

Now the developers changed their mind and want to insert values that go beyond what a tinyint can hold. If you try to insert 300, you will get an error

INSERT TestInt VALUES(300)

Msg 220, Level 16, State 2, Line 2
Arithmetic overflow error for data type tinyint, value = 300.

The statement has been terminated.


No problem, I will just change the data type by running this T-SQL statement

ALTER TABLE TestInt ALTER COLUMN Col1 int NOT NULL


But what if you use the SSMS designer by right clicking on the table, choosing design and then changing the data type from tinyint to int?

The answer is: it depends on whether the option Prevent saving changes that require table re-creation (under Tools --> Options --> Designers) is checked or not



If that option is checked, then you will get the following message when clicking on the script icon




If that option is not checked then here is what SSMS will do behind the scenes for you


/* To prevent any potential data loss 
issues, you should review this script in 
detail before running it outside the context
 of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_TestInt
 (
 Col1 int NULL
 )  ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_TestInt SET (LOCK_ESCALATION = TABLE)
GO
IF EXISTS(SELECT * FROM dbo.TestInt)
  EXEC('INSERT INTO dbo.Tmp_TestInt (Col1)
  SELECT CONVERT(int, Col1) FROM dbo.TestInt WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.TestInt
GO
EXECUTE sp_rename N'dbo.Tmp_TestInt', N'TestInt', 'OBJECT' 
GO
COMMIT
That is right, it will create a new table, dump all the rows into this table, drop the original table and then rename the new table to match the original table. This is overkill.

What about adding some defaults to the table? If you use the SSMS table designer, it will just create them and you have no way to specify a name for the default.
Here is how to create a default with T-SQL; now you can specify a name and make sure it matches your shop's naming convention

ALTER TABLE dbo.TestInt ADD CONSTRAINT
 DF_TestInt_Col1 DEFAULT 1 FOR Col1

About that yellow key icon: let's add a primary key to our table. I can do the following with T-SQL, and I can also make it nonclustered if I want to, as shown right after the clustered version below

ALTER TABLE dbo.TestInt ADD CONSTRAINT
 PK_TestInt PRIMARY KEY CLUSTERED 
 (Col1)  ON [PRIMARY]
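If you want the primary key nonclustered instead, the only change is the keyword:

ALTER TABLE dbo.TestInt ADD CONSTRAINT
 PK_TestInt PRIMARY KEY NONCLUSTERED (Col1)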

Click that yellow key icon and here is what happens behind the scenes; I have not found a way to make it nonclustered from the designer

/* To prevent any potential data loss issues, 
you should review this script in detail before running it 
outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
ALTER TABLE dbo.TestInt
 DROP CONSTRAINT DF_TestInt_Col1
GO
CREATE TABLE dbo.Tmp_TestInt
 (
 Col1 int NOT NULL
 )  ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_TestInt SET (LOCK_ESCALATION = TABLE)
GO
ALTER TABLE dbo.Tmp_TestInt ADD CONSTRAINT
 DF_TestInt_Col1 DEFAULT ((1)) FOR Col1
GO
IF EXISTS(SELECT * FROM dbo.TestInt)
  EXEC('INSERT INTO dbo.Tmp_TestInt (Col1)
  SELECT Col1 FROM dbo.TestInt WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.TestInt
GO
EXECUTE sp_rename N'dbo.Tmp_TestInt', N'TestInt', 'OBJECT' 
GO
ALTER TABLE dbo.TestInt ADD CONSTRAINT
 PK_TestInt PRIMARY KEY CLUSTERED 
 (
 Col1
 ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, 
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

GO
COMMIT

You might ask yourself why you should care; all the tables are small, so this is not a big issue. That might be true now, but what if you start a new job and have to supply alter, delete and create scripts? Then you are in trouble.

I used to do the same when I started: I used the designers for everything. I didn't even know Query Analyzer existed back then, and I created and modified stored procedures straight inside Enterprise Manager. Trying to modify a view that had a CASE statement in the Enterprise Manager designer... yeah, good luck with that one. You would get some error that it wasn't supported, and I believe it also injected TOP 100 PERCENT ORDER BY into the view as well.

I don't miss those days at all. Get to learn T-SQL and get to love it; you might suffer when you start, but you will become a better database developer.
Aaron Bertrand also has a post that you should read about the designers: Bad habits to kick : using the visual designers