
Tuesday, January 10, 2017

T-SQL Tuesday #86: String or binary data would be truncated

This month's T-SQL Tuesday is hosted by Brent Ozar, who proposed the following:

Find the most interesting bug or enhancement request (and it can be your own), and write a blog post about it (including a link to the Connect item so that folks who agree with you can upvote the item)

This one was pretty easy for me: it is the following Connect item, Please fix the "String or binary data would be truncated" message to give the column name

This error drives me crazy as well. It should be fairly easy to tell me, if nothing else, which damn column barfed on the data being inserted, but no... all you get is something like this:

Msg 8152, Level 16, State 6, Procedure <ProcName>, Line 61 String or binary data would be truncated.

This is like not having the black box after a plane crash: you know the plane crashed, but you don't know exactly why.
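For anyone who has not run into it, here is a minimal sketch that reproduces the error; the table and column names are just made up for illustration.

create table dbo.TruncDemo (SomeCol varchar(5))
 
insert into dbo.TruncDemo (SomeCol)
values ('this value is longer than five characters')
-- Msg 8152: String or binary data would be truncated.
 
drop table dbo.TruncDemo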

Since I deal with this issue on a semi-regular basis, I have even written my own T-SQL helper, shown below, to quickly see where the problem is.

-- the table that holds the imported data and the table the data will be loaded into
declare @ImportTable varchar(100)
declare @DestinationTable varchar(100)
select @ImportTable = 'temp'
select @DestinationTable = 'TestTrunc'
 
-- the two scratch tables that will hold the max data lengths and the defined column lengths
declare @ImportTableCompare varchar(100)
declare @DestinationTableCompare varchar(100)
select @ImportTableCompare = 'MaxLengths'
select @DestinationTableCompare = 'TempTrunc'
 
 
-- build a select that grabs the longest value in each (n)char/(n)varchar column
-- of the import table and dumps the result into the first scratch table
declare @sql varchar(8000)
select @sql  = ''
select @sql = 'select  0 as _col0 ,'
select @sql +=   'max(len( ' + column_name+ ')) AS ' + column_name + ','
from information_schema.columns
where table_name = @ImportTable
and data_type in('varchar','char','nvarchar','nchar')
 
select @sql = left(@sql,len(@sql) -1) -- strip the trailing comma
select @sql +=' into ' + @ImportTableCompare + ' from ' + @ImportTable
 
--select @sql -debugging so simple, a caveman can do it
 
exec (@sql)
 
 
 
-- build a select that grabs the defined length of each (n)char/(n)varchar column
-- of the destination table and dumps the result into the second scratch table
select @sql  = ''
select @sql = 'select 0 as _col0, '
select @sql +=   '' + convert(varchar(20),character_maximum_length)
+ ' AS ' + column_name + ','
from information_schema.columns
where table_name = @DestinationTable
and data_type in('varchar','char','nvarchar','nchar')
 
select @sql = left(@sql,len(@sql) -1) -- strip the trailing comma
select @sql +=' into ' + @DestinationTableCompare
 
--select @sql -debugging so simple, a caveman can do it
 
exec (@sql)
 
 
-- compare the two scratch tables column by column: if the longest value in the
-- import table is longer than the destination column allows, flag it as 'truncation'
select @sql  = ''
select @sql = 'select  '
select @sql +=   '' + 'case when  t.' + column_name + ' > tt.' + column_name
+ ' then ''truncation'' else ''no truncation'' end as '+ column_name
+ ','
from information_schema.columns
where table_name = @ImportTableCompare
and column_name <> '_col0'
select @sql = left(@sql,len(@sql) -1)
select @sql +='  from ' + @ImportTableCompare + ' t
join ' + @DestinationTableCompare + ' tt on t._col0 = tt._col0 '
 
--select @sql -debugging so simple, a caveman can do it
 
exec (@sql)
 
-- clean up the scratch tables
exec ('drop table ' + @ImportTableCompare+ ',' + @DestinationTableCompare )
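As a usage sketch, with the default names in the script above ('temp' as the import table and 'TestTrunc' as the destination), hypothetical tables like the ones below would make the final select return 'truncation' for SomeCol and 'no truncation' for OtherCol.

-- hypothetical staging and destination tables matching the defaults in the script
create table dbo.temp      (SomeCol varchar(50), OtherCol varchar(50))
create table dbo.TestTrunc (SomeCol varchar(5),  OtherCol varchar(10))
 
insert into dbo.temp (SomeCol, OtherCol)
values ('way more than five characters', 'fits fine')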


Something like this only helps if you have the data readily available. What if the insert comes from an application? In that case you need Profiler or Extended Events to capture the statement, as sketched below.
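Here is a rough Extended Events sketch that captures the statement text whenever error 8152 is raised; the session and file names are made up, so adjust them to your own conventions.

create event session CaptureTruncationErrors on server
add event sqlserver.error_reported
(
    action (sqlserver.sql_text, sqlserver.database_name, sqlserver.client_app_name)
    where ([error_number] = (8152))
)
add target package0.event_file (set filename = N'CaptureTruncationErrors')
go
 
alter event session CaptureTruncationErrors on server state = start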

It is also crazy that this Connect item is almost 9 years old; it was opened in April 2008.

We did get a comment from someone at Microsoft on this issue last August:

Posted by David [MSFT] on 8/5/2016 at 1:39 PM
Latest update - the developer working on it understands the challenges involved in creating a full fix. It may be tricky to plumb the information about columns needed to generate a full error message down to the actual conversion function in such a way that won't impact insert or update performance. We may implement something cheap in the short term such as logging the type and length of the data being truncated. It's still too early to know when such a fix would reach a publicly visible release.

As of today, this Connect item has 1328 upvotes; it also has 5 downvotes (who are these people... probably folks working on the SQL Server team :-) ).

So there you have it, that is my contribution to T-SQL Tuesday #86. Keep an eye on Brent's blog at https://www.brentozar.com/blog/; a recap will be posted there later in January 2017.

Saturday, October 10, 2015

Your first step in dealing with SQLState = S1000 I/O error while writing a BCP data-file

The other day one of our jobs started to fail every time it ran. The error was:

Server: FancyServerName 
Job: SomeJobName 
Step: Step 3) SomeStepName 
Message: Executed as user: SomeDomain\SomeServiceAccount. Starting copy... SQLState = S1000, NativeError = 0 Error = [Microsoft][ODBC Driver 11 for SQL Server]I/O error while writing BCP data-file 19 rows copied. Network packet size (bytes): 4096 Clock Time (ms.) Total : 1 Average : (19000.00 rows per sec.). Process Exit Code 1. The step failed. 
Time: 2015-10-06 08:16:20


This was very interesting since the file we were writing had only 19 rows and was very small, less than a kilobyte. So how do you investigate what is causing the error? It could be all kinds of things:


Permissions on the drive
Problems with the drive
Memory issues
Issue with a format file



If you do a search you will get all kinds of results back. However, the first thing you need to run is the following:


EXEC xp_fixeddrives



Here is what I saw after running xp_fixeddrives


drive      MB free
C             235644
D             1479234
E              10123
F              10123
G             10123
L              10123
S              117706
T              10123
Z              0




Take a look at the Z drive. Do you see that? The Z drive is full; there is no space left. It would be nice if the error message indicated that it couldn't write the file because there was no space left on the drive. In a production environment this should not happen, because hopefully you have something in place that sends out an alert when a drive is 90% full; a rough sketch of such a check is below.
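Since xp_fixeddrives only reports free megabytes, not a percentage, a rough sketch of such a check could use an absolute threshold instead; the 1024 MB below is just an example number.

-- capture the xp_fixeddrives output and flag drives that are low on space
create table #FixedDrives (drive char(1), MBfree int)
 
insert into #FixedDrives (drive, MBfree)
exec xp_fixeddrives
 
select drive, MBfree
from #FixedDrives
where MBfree < 1024
 
drop table #FixedDrives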

So, if you ever get this error, the first thing you have to do is check whether you have any space left on the drive.


I also want to point out that the xp_fixeddrives stored procedure is undocumented, so there is no guarantee it will be available in a future version of SQL Server.
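If you prefer something documented, sys.dm_os_volume_stats (available in newer versions of SQL Server) gives you similar information for every volume that holds a database file; here is a minimal sketch.

-- documented alternative: free space on each volume that hosts a database file
select distinct
       vs.volume_mount_point,
       vs.available_bytes / 1024 / 1024 as MB_free
from sys.master_files as mf
cross apply sys.dm_os_volume_stats(mf.database_id, mf.file_id) as vs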

Sunday, December 16, 2007

GNET: Even Google Screws Up By Showing A Bunch Of MySQL Messages

When I visited http://www.gnet.com/ today I was greeted by the following error messages. These messages should never be seen by users; what about a friendly message instead? Anyway, the mess is below.


Warning: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359

Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359