Thursday, November 17, 2016

What's new in SQL Server 2016: CREATE OR ALTER



SQL Server 2016 Service Pack 1 added CREATE OR REPLACE functionality; however, in SQL Server it is called CREATE OR ALTER. Finally it is here. This has been asked for since the 90s, when it was on the SQL Server wishlist (who remembers that, from before they had Connect? :-) )

CREATE OR ALTER can be used with the following four object types:

STORED PROCEDURES
FUNCTIONS
TRIGGERS
VIEWS

Let's take a look at how this all works.

If you want to run this code, you need to be on SQL Server 2016 Service Pack 1 or higher, or on vNext. The @@VERSION function on SP1 and vNext returns the following:


Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64)   Oct 28 2016 18:17:30
Microsoft SQL Server vNext (CTP1) - 14.0.1.246 (X64)   Nov  1 2016 23:24:39


Before they added CREATE OR ALTER, you had a couple of options to create rerunnable scripts. The first was to drop the proc if it existed and have the CREATE PROC in the same script; it looked like this:

--old way of dropping a proc then having a create script
IF OBJECT_ID('procTest') is not null
DROP PROCEDURE procTest
GO

CREATE PROCEDURE procTest
AS
BEGIN
 PRINT (1)
END;
GO


Another way would be to create a dummy proc if the proc didn't exist already and then just have the ALTER PROC part in your script


--another way is to have a dummy proc created,
--that way your alter part is always the same
IF OBJECT_ID('dbo.procTest') IS NULL
  EXEC ('CREATE PROCEDURE dbo.procTest AS RETURN 0;')
GO

ALTER PROCEDURE procTest
AS
BEGIN
 PRINT (1)
END;
GO

In SQL Server 2016 with Service Pack 1, this becomes so much easier. Here is what it looks like


-- the easier way in sql server 2016 SP1 and up
CREATE OR ALTER PROCEDURE procTest
AS
BEGIN
 PRINT (1)
END;
GO
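A nice bonus is that when the object already exists, CREATE OR ALTER behaves like ALTER, so permissions granted on the proc are kept. Here is a quick sketch to verify that; the user name TestUser is just an assumption for the example.

-- assumption: a database user named TestUser exists
GRANT EXECUTE ON dbo.procTest TO TestUser;
GO

-- run the CREATE OR ALTER block above again, then check that the grant survived
SELECT USER_NAME(grantee_principal_id) AS grantee, permission_name, state_desc
FROM sys.database_permissions
WHERE major_id = OBJECT_ID('dbo.procTest');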

Here is how that looks with a function; it is pretty much the same as with a procedure.


-- works the same with functions
CREATE OR ALTER FUNCTION fnTest()
RETURNS INT
AS
BEGIN
 RETURN(1)
END;
GO

The view looks like this


-- also works with views
CREATE OR ALTER VIEW vwTest
AS
 SELECT 1 AS col;
GO


With triggers it is pretty much the same. Here is the code you can run; I first create a table, since without one I would not be able to create a trigger.


-- first we need a table for the trigger
IF NOT EXISTS(SELECT * FROM sys.tables WHERE name = 'BooFar')
CREATE TABLE BooFar(id int)
GO

-- works with triggers
CREATE OR ALTER TRIGGER trTest
ON BooFar
AFTER INSERT, UPDATE
AS
 RAISERROR ('Hello from trigger', 1, 10);
GO

-- just a test to make sure the trigger works
INSERT BooFar VALUES(1)

-- you should see this in the message tab
/*
Hello from trigger
Msg 50000, Level 1, State 10

(1 row(s) affected)
*/

The CREATE OR ALTER syntax does NOT work with DDL triggers. If you execute this the first time, it will run fine.

CREATE OR ALTER TRIGGER safety   
ON DATABASE   
FOR DROP_TABLE, ALTER_TABLE   
AS   
   PRINT 'You must disable Trigger "safety" to drop or alter tables!'   
   ROLLBACK;  

Now run it again and you will get this error:

Msg 2714, Level 16, State 2, Procedure safety, Line 1 [Batch Start Line 77]
There is already an object named 'safety' in the database.

So be aware that CREATE OR ALTER does not work with DDL triggers

I submitted a connect item for this: https://connect.microsoft.com/SQLServer/feedback/details/3111866

I want them to either disable CREATE OR ALTER for DDL triggers, or fix it so it doesn't error on a second run.
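Until that happens, a possible workaround is to drop the DDL trigger yourself when it exists and use a plain CREATE; a minimal sketch:

-- workaround sketch: drop the DDL trigger if it already exists, then create it
IF EXISTS (SELECT * FROM sys.triggers
           WHERE name = 'safety' AND parent_class_desc = 'DATABASE')
    DROP TRIGGER safety ON DATABASE;
GO

CREATE TRIGGER safety
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
   PRINT 'You must disable Trigger "safety" to drop or alter tables!'
   ROLLBACK;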


To wrap this up, let's drop all the objects we just created; we will do that by using the DROP ... IF EXISTS syntax.


--  drop everything by using
--  DROP object IF EXISTS
-- (the DML trigger trTest goes away with the table)
DROP TABLE IF EXISTS BooFar
DROP PROCEDURE IF EXISTS procTest
DROP VIEW IF EXISTS vwTest
DROP FUNCTION IF EXISTS fnTest
DROP TRIGGER IF EXISTS safety ON DATABASE

You can find the github repo for the series here whatisnewinsqlserver2016, all the code for this post can be found here CREATE Or ALTER

Wednesday, November 16, 2016

Playing around with the new stuff in SQL Server vNext and SQL Server 2016 SP1


Earlier today Microsoft released the first CTP of the next version of SQL Server; I have a post about that here: SQL Server 2016 SP1 released, SQL Server vNext available for download

I downloaded and installed SQL Server on a VM (but only after the VM decided to install the Windows 10 Anniversary Update first). I played around with some of the new things; let's take a look.

Version
If you select @@VERSION, you get the following back

Microsoft SQL Server vNext (CTP1) - 14.0.1.246 (X64)   Nov  1 2016 23:24:39   Copyright (c) Microsoft Corporation  Enterprise Evaluation Edition (64-bit) on Windows 10 Pro 6.3 <X64> (Build 14393: ) (Hypervisor)

The compatibility level for this version of SQL Server is 140.
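If you want to move a database to the new level, it is just an ALTER DATABASE away; a quick sketch, where TestDB is an assumed database name:

-- assumption: a database named TestDB exists
ALTER DATABASE TestDB SET COMPATIBILITY_LEVEL = 140;

-- verify the change
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'TestDB';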

STRING_AGG

This is a brand new string function in SQL Server. If you have used MySQL, then STRING_AGG is similar to the GROUP_CONCAT function; however, you can't use DISTINCT like you can in MySQL.

The STRING_AGG function concatenates the values of string expressions and places separator values between them. The separator is not added at the end of the string. This function is available beginning with SQL Server vNext (CTP1).

Here is a sample query

SELECT STRING_AGG (name, ',') AS csv 
FROM master..spt_values
WHERE type = 'A'; 

Here is the output

rpc,pub,sub,dist,dpub,rpc out,data access,collation compatible,system,use remote collation,lazy schema validation,remote proc transaction promotion

Nothing special of course, but at least you don't have to write your own version and think about removing the last comma.

A better example would be getting all the columns for a table next to the table name. You would think the query would look like this:

SELECT t.name,STRING_AGG (c.name, ',') AS csv
FROM sys.tables t
JOIN sys.columns c on t.object_id = c.object_id
ORDER BY 1

However that gives you the following error

Msg 8120, Level 16, State 1, Line 41
Column 'sys.tables.name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

You need to add a GROUP BY to the query

SELECT t.name,STRING_AGG (c.name, ',') AS csv
FROM sys.tables t
JOIN sys.columns c on t.object_id = c.object_id
GROUP BY t.name
ORDER BY 1


Here is what you get back: each table name with a comma-separated list of its columns.


That looks much better, I can see myself using this a lot in the future
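The documentation also mentions an optional WITHIN GROUP clause that lets you control the order in which the values are concatenated; assuming your build supports it, the column list query would look like this:

-- assumption: WITHIN GROUP is supported in your build;
-- it sorts the column names before concatenating them
SELECT t.name, STRING_AGG (c.name, ',') WITHIN GROUP (ORDER BY c.column_id) AS csv
FROM sys.tables t
JOIN sys.columns c on t.object_id = c.object_id
GROUP BY t.name
ORDER BY 1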


In memory OLTP/Columnstore


According to the documentation, sp_spaceused is now supported for in-memory tables.
I tried it out: with SQL Server 2016 I get back 0 rows and 0 KB; with vNext I get back rows, but NULL for everything else.

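Here is a minimal sketch of the call, assuming a memory-optimized table named ShoppingCart exists:

-- ShoppingCart is assumed to be an existing memory-optimized table
EXEC sp_spaceused N'ShoppingCart';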

name          rows   reserved  data  index_size  unused
ShoppingCart  0      0 KB      0 KB  0 KB        0 KB    -- 2016
ShoppingCart  53248  NULL      NULL  NULL        NULL    -- vNext

Not sure what is going on there.

In vNext, you can now add a columnstore index to a table that has a varchar(max) data type.


CREATE TABLE t_bulkload (
accountkey int not null,
accountdescription varchar (max),
accounttype char(500),
AccountCodeAlternatekey int)
GO

CREATE CLUSTERED COLUMNSTORE INDEX t_bulkload_cci ON t_bulkload

Running that on SQL Server 2016 gives the following error

Msg 35343, Level 16, State 1, Line 39
The statement failed. Column 'accountdescription' has a data type that cannot participate in a columnstore index. Omit column 'accountdescription'.

Running that same code on SQL Server vNext gives the following warning:

Warning: Using Lob types (NVARCHAR(MAX), VARCHAR(MAX), and VARBINARY(MAX)) with Clustered Columnstore Index is in public preview. Do not use it on production data without backup during public preview period.
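Since this is a warning rather than an error, the index does get created on vNext; a quick sketch of one way to confirm it via the catalog views:

-- confirm the clustered columnstore index exists on the table
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('t_bulkload');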


Here are some more new things in regard to in memory tables

  • sp_spaceused is now supported for in-memory tables.
  • sp_rename is now supported for native modules.
  • CASE statements are now supported for native modules.
  • The limitation of 8 indexes on in-memory tables has been removed.
  • TOP (N) WITH TIES is now supported in native modules.
  • ALTER TABLE against in-memory tables is now substantially faster in some cases.
  • Transaction redo In-memory tables is now done in parallel. This substantially reduces the time to do failovers or in some cases restarts.
  • In-memory checkpoint files can now be stored on Azure Storage. This provides MDF files with capabilities equivalent to LDF files, which already have this capability.
As a final note on In-Memory OLTP, I ran everything in SQL Server 2016 and also in SQL Server vNext; running it on the vNext instance seemed faster to me.

sys.dm_os_host_info

This is a new OS-related system DMV; it returns one row that displays operating system version information.


SELECT * FROM sys.dm_os_host_info;  


Here is the output
host_platform  host_distribution  host_release  host_service_pack_level  host_sku  os_language_version
Windows        Windows 10 Pro     6.3                                    48        1033

Here is a sample result set on Linux:

host_platform  host_distribution  host_release  host_service_pack_level  host_sku  os_language_version
Linux          Ubuntu             16.04                                  NULL      1033





That is all for now..... you can find all the new stuff on MSDN https://msdn.microsoft.com/en-US/library/mt788653(SQL.130).aspx

SQL Server 2016 SP1 released, SQL Server vNext available for download


Today Microsoft announced the CTP of the next version of SQL Server; you can download it here: https://www.microsoft.com/evalcenter/evaluate-sql-server-vnext-ctp

Naturally, I started downloading this version right away.



What's New in SQL Server vNext https://msdn.microsoft.com/en-US/library/mt788653(SQL.130).aspx

Install SQL Server on Linux https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup

SQL Server on Linux Documentation https://docs.microsoft.com/en-us/sql/linux/


SQL Server Service Pack 1

Also announced was Service Pack 1 of SQL Server 2016; you can download that here: https://go.microsoft.com/fwlink/?linkid=835368

There are so many cool things in this service pack. For one, all the editions now support all the programmability features like indexed views, columnstore indexes, partitioning, and so on.


You are still bound by the memory and CPU limits of your edition, but at least your code doesn't have to change; this is good news for ISVs.

A couple of more tidbits....

  • CREATE OR ALTER (Yes, we heard you !!!) – New CREATE OR ALTER support makes it easier to modify and deploy objects like Stored Procedures, Triggers, User–Defined Functions, and Views. This was one of the highly requested features by developers and SQL Community.

  • New DMF sys.dm_exec_query_statistics_xml – Use this DMF to obtain actual query execution showplan XML (with actual number of rows) for a query which is still being executed in a given session (session id as input parameter). The showplan with a snapshot of current execution statistics is returned when profiling infrastructure (legacy or lightweight) is on.
  • New DMF for incremental statistics – New DMF sys.dm_db_incremental_stats_properties to expose information per–partition for incremental stats.
  • Better correlation between diagnostics XE and DMVs – Query_hash and query_plan_hash are used for identifying a query uniquely. DMV defines them as varbinary(8), while XEvent defines them as UINT64. Since SQL server does not have “unsigned bigint”, casting does not always work. This improvement introduces new XEvent action/filter columns equivalent to query_hash and query_plan_hash except they are defined as INT64 which can help correlating queries between XE and DMVs.
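Here is a quick sketch of how you might call the two new DMFs mentioned above; the session id, table name, and stats id are made-up placeholders:

-- assumption: session 53 is currently executing a query
SELECT * FROM sys.dm_exec_query_statistics_xml(53);

-- assumption: dbo.SomeTable has an incremental statistic with stats_id = 2
SELECT * FROM sys.dm_db_incremental_stats_properties(OBJECT_ID('dbo.SomeTable'), 2);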

There is much more, see all the news here: https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2016-service-pack-1-sp1-released/


I also played around with some of the new stuff after installing vNext, see here:
Playing around with the new stuff in SQL Server vNext and SQL Server 2016 SP1

Saturday, November 12, 2016

What's new in SQL Server 2016: Truncate partitions


In my "The best thing about the PASS summit for me is..." post, I said that I would do the following:

I will also do a series about new stuff in SQL Server 2016, but that will be in between all the other posts. Whenever someone asked during a session how many people were already on SQL Server 2016, very few hands went up; this is the reason I want to blog about this as well.
For this series of posts, I plan to have a repo on GitHub with a yet to be determined name, where I will keep all the SQL code for the series organized.
This is the first post in that series

You can find the github repo for the series here whatisnewinsqlserver2016, all the code for this post can be found here TruncatePartitions

New in SQL Server 2016 is the ability to truncate a partition, or more than one partition. This is very nice because we all know that a truncate is much faster than a delete. Let's see how this all works.

First create this simple table


CREATE TABLE SalesPartitioned(
 YearCol SMALLINT NOT NULL,
 OrderID INT NOT NULL, 
 SomeData UNIQUEIDENTIFIER DEFAULT newsequentialid())
GO


Now it is time to insert some data. We are going to create data for 6 years, and we will then use those years to partition the data on. This query will insert 6 times 2048 rows, 2048 rows per year.


INSERT SalesPartitioned (YearCol,OrderID)
SELECT 2013,number
FROM master..spt_values
WHERE type = 'P'
UNION ALL
SELECT 2014,number + 2048
FROM master..spt_values
WHERE type = 'P'
UNION ALL
SELECT 2015,number + 4096
FROM master..spt_values
WHERE type = 'P'
UNION ALL
SELECT 2016,number + 6144
FROM master..spt_values
WHERE type = 'P'
UNION ALL
SELECT 2017,number + 8192
FROM master..spt_values
WHERE type = 'P'
UNION ALL
SELECT 2018,number + 10240
FROM master..spt_values
WHERE type = 'P'


Now let's create the partition function

CREATE PARTITION FUNCTION pfFiscalYear(SMALLINT)
AS RANGE LEFT FOR VALUES(2013,2014,2015,2016,2017)
GO
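Since the function is defined as RANGE LEFT, each boundary value belongs to the partition to its left. A quick sanity check with $PARTITION shows where a given year lands:

-- 2013 is a boundary value, so with RANGE LEFT it lands in partition 1;
-- 2019 is above the last boundary (2017), so it lands in partition 6
SELECT $PARTITION.pfFiscalYear(2013) AS p2013,
       $PARTITION.pfFiscalYear(2019) AS p2019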


Create the partition scheme


CREATE PARTITION SCHEME psFiscalYear
AS PARTITION pfFiscalYear ALL TO ([PRIMARY])
GO


Add a primary key to the table, placing it on the partition scheme we created above:


ALTER TABLE dbo.SalesPartitioned ADD CONSTRAINT
    PK_Sales PRIMARY KEY CLUSTERED (YearCol,OrderID)
ON psFiscalYear(YearCol)
GO

Now let's see what we have as far as counts for each partition


SELECT partition_number,rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('SalesPartitioned')

Here are the results

partition_number rows
1 2048
2 2048
3 2048
4 2048
5 2048
6 2048

As you can see we have 6 partitions, each partition has 2048 rows


We can verify this by running a count ourselves


SELECT YearCol, $PARTITION.pfFiscalYear(YearCol) AS Partition,
COUNT(*) AS [COUNT] FROM SalesPartitioned
GROUP BY $PARTITION.pfFiscalYear(YearCol),YearCol
ORDER BY Partition;
GO


YearCol Partition COUNT
2013 1 2048
2014 2 2048
2015 3 2048
2016 4 2048
2017 5 2048
2018 6 2048


Now it is time to truncate a partition.
In order to truncate a partition, you use the following syntax




TRUNCATE TABLE ........
WITH ( PARTITIONS ( { <partition_number_expression> | <range> } [ , ...n ] ) )

So if we just want to truncate partition 2, we would execute the following


TRUNCATE TABLE SalesPartitioned
WITH (PARTITIONS (2));
GO


Checking those same count queries from before shows that partition 2 has no rows after we executed the truncate command


SELECT partition_number,rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('SalesPartitioned')


partition_number rows
1 2048
2 0
3 2048
4 2048
5 2048
6 2048



SELECT YearCol, $PARTITION.pfFiscalYear(YearCol) AS Partition,
COUNT(*) AS [COUNT] FROM SalesPartitioned
GROUP BY $PARTITION.pfFiscalYear(YearCol),YearCol
ORDER BY Partition;
GO


YearCol Partition COUNT
2013 1 2048
2015 3 2048
2016 4 2048
2017 5 2048
2018 6 2048



There are two ways you can truncate a bunch of partitions. For example, if you want to remove partitions 4, 5, and 6, you can use WITH (PARTITIONS (4, 5, 6)), or you can use a range, as shown in the code below.


TRUNCATE TABLE SalesPartitioned
WITH (PARTITIONS (4 TO 6));
GO

After we execute that and check the counts again, we see that only partitions 1 and 3 still have rows.


SELECT partition_number,rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('SalesPartitioned')

Here are the results from that query

partition_number rows
1 2048
2 0
3 2048
4 0
5 0
6 0



SELECT YearCol, $PARTITION.pfFiscalYear(YearCol) AS Partition,
COUNT(*) AS [COUNT] FROM SalesPartitioned
GROUP BY $PARTITION.pfFiscalYear(YearCol),YearCol
ORDER BY Partition;
GO

Here are the results

YearCol Partition COUNT
2013 1 2048
2015 3 2048



What happens if you specify an invalid partition in your truncate statement?


TRUNCATE TABLE SalesPartitioned 
WITH (PARTITIONS (12)); 

You get the following message

Msg 7722, Level 16, State 2, Line 1
Invalid partition number 12 specified for table 'SalesPartitioned', partition number can range from 1 to 6.

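For comparison, before SQL Server 2016 the usual workaround was to switch the partition out to an empty staging table and truncate that. Here is a sketch of that approach; SalesStaging is an assumed name, and it must match SalesPartitioned's structure, clustered primary key, and filegroup:

-- pre-2016 workaround sketch
CREATE TABLE SalesStaging(
 YearCol SMALLINT NOT NULL,
 OrderID INT NOT NULL,
 SomeData UNIQUEIDENTIFIER DEFAULT newsequentialid(),
 CONSTRAINT PK_SalesStaging PRIMARY KEY CLUSTERED (YearCol,OrderID))
GO

-- switch partition 3 out to the staging table, then truncate the staging table
ALTER TABLE SalesPartitioned SWITCH PARTITION 3 TO SalesStaging
TRUNCATE TABLE SalesStaging
GO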

There you have it; this is how truncating partitions works.


You can find the github repo for the series here whatisnewinsqlserver2016, all the code for this post can be found here TruncatePartitions


Friday, November 11, 2016

Interesting SQL related links for the week of Nov 11, 2016


Here are some interesting articles I read and tweeted about this past week; I think you will like them as well. If you are bored this weekend, some of these might be good reads.


.Net 4.6.2 Framework client driver for Always Encrypted resulting in intermittent failures to decrypt individual rows

The SQL Product team has identified an issue with .Net 4.6.2 framework client driver for Always Encrypted enabled database on SQL Server 2016 and Azure SQL Database. The issue can lead to intermittent failure while trying to decrypt the records from the Always Encrypted enabled database with following error message
Decryption failed. The last 10 bytes of the encrypted column encryption key are: ‘7E-0B-E6-D3-39-CE-35-86-2F-AA’. The first 10 bytes of ciphertext are: ’01-C3-D7-39-33-2F-E6-44-C3-B1′. Specified ciphertext has an invalid authentication tag.

In-Memory OLTP: Is your database just in memory or actually optimized for memory?

In my many conversations with customers during Microsoft events, people often confuse between the terms ‘In Memory’ and ‘Memory-Optimized’ and many think that they are one and the same. If you continue reading this blog, you will realize that they are somewhat related but can lead to very different performance/scalability.



Paper Review: Why Does the Cloud Stop Computing? Lessons from Hundreds of Service Outages

This paper conducts a cloud outage study of 32 popular Internet services, and analyzes outage duration, root causes, impacts, and fix procedures. The paper appeared in SOCC 2016, and the authors are Gunawi, Hao, Suminto Laksono, Satria, Adityatama, and Eliazar.

Availability is clearly very important for cloud services. Downtimes cause financial and reputation damages. As our reliance to cloud services increase, loss of availability creates even more significant problems. Yet, several outages occur in cloud services every year. The paper tries to answer why outages still take place even with pervasive redundancies.



Daniel H Pink: employees are faster and more creative when solving other people's problems

Recent research reveals that people are more capable of mental novelty when thinking on behalf of others than for themselves. This has far-reaching practical implications at every level of business.


I’ve gotten a few questions about shrinking SQL Server data files lately. What’s the best way to get shrink to run? And why might it fail in some cases?

Traditionally, every time you ask a DBA how to make shrinking suck less, they start ranting how shrinking is bad and you just shouldn’t do it. Sometimes it sounds kinda angry.

What people are trying to say is that shrinking data files is generally slow, frustrating, and painful for you.



How It Works (It Just Runs Faster): Non-Volatile Memory SQL Server Tail Of Log Caching on NVDIMM

SQL Server 2016 has been enhanced along with Windows 2016 to detect a direct access volume (/dax.) When you create or alter the SQL Server database you simply add the LOG ON clause, pointing to the DAX enabled volume. SQL Server will detect the request on the DAX based volume and create the non-volatile, tail of log cache of fixed size for the database. From all external aspects (DMVs for example) the tail of log cache looks like another file that is part of the database’s LOG file group.


NASA Is Harnessing Graph Databases To Organize Lessons Learned From Past Projects

NASA famously maintains a "lessons learned" database containing valuable information from its past programs and projects. But the vast system, which has been online since 1994, is not always easy to navigate. Now the agency is modernizing it with help from a tool more familiar to social media than space missions: graph databases.

The genesis of the change began about a year and a half ago when an engineer, attempting to search "lessons learned" for relevant documents, found the number of possible results overwhelming. "He was getting things that really were not relevant to what he was looking for," David Meza, NASA’s chief knowledge architect, recalls.

Looking to make the database more useful, and help users investigate relationships beyond what basic keyword searches could uncover, Meza experimented with storing the information in a graph database—that is, a database optimized to store information in terms of data records and the connections between them. In recent years, such network graphs have become a familiar feature of online social networks.


Microsoft Azure SQL Database provides unparalleled performance with In-Memory technologies

Azure SQL Database built-in In-Memory technologies are now generally available for the Premium database tier including Premium pools. In-memory technology helps optimize the performance of transactional (OLTP), analytics (OLAP), as well as mixed workloads (HTAP). These technologies allow you to achieve phenomenal performance with Azure SQL Database – 75,000 transactions per second for order processing (11X perf gain) and reduced query execution time from 15 seconds to 0.26 (57X perf). You can also use them to reduce cost – on a P2 database obtain 9X perf gain for transactions or 10X perf gain for analytics queries by implementing In-Memory technologies, without any additional cost!


Offshoring roulette: lessons from outsourcing to India, China and the Philippines

In what is now a former life, I spent a great deal of time sending work overseas. In fact, I spent quite a bit of time over there myself in some of the places I'm going to talk about. That former life was as a software architect for Pfizer and in the last half dozen years of my 14-year tenure, I had responsibility for software architecture in the Asia Pacific region. For those not from this corner of the world, Asia Pacific is a sizeable chunk of the globe



TDD Doesn't Work

TDD Doesn't work.

It doesn't? That's odd. I've always found it to work quite well.

Not according to a new study.

Another study?

Yeah, an in-depth study that repeated another study that was done a few years back. Both showed that TDD doesn't work. The new one uses a multi-site, blind analysis, approach. It looks conclusive.



Oops Recovery with Temporal Tables

Have you ever got that sinking feeling after hitting the Execute button in SSMS, thinking “I should not have done that”? DML statements with missing WHERE clause, DROP statements accidentally targeting slightly mistyped (but existing) tables or databases, RESTORE statements overwriting databases with new data that haven’t been backed up, are all examples of actions prompting an “Oops…” (or worse) shortly thereafter. “Oops recovery” is the term that became popular to describe the process of fixing the consequences.

For most of these scenarios, the usual, and often the only, recovery mechanism is to restore the database from backup to a point in time just before the “oops”, known as point-in-time recovery (PITR). Even though PITR remains the most general and the most effective recovery mechanism, it does have some drawbacks and limitations: the recovery process requires a full database restore, taking the time proportional to the size of the database; a sequence of restores may be needed if multiple “oops” transactions have occurred; in the general case, there will be difficulties reconciling recovered data with data modified after the “oops” point in time, etc. Nevertheless, PITR remains the most widely applicable recovery method for SQL Server databases, both on-premises and in the cloud.



Introduction to Machine Learning for Developers

Today’s developers often hear about leveraging machine learning algorithms in order to build more intelligent applications, but many don’t know where to start.


One of the most important aspects of developing smart applications is to understand the underlying machine learning models, even if you aren’t the person building them. Whether you are integrating a recommendation system into your app or building a chat bot, this guide will help you get started in understanding the basics of machine learning.

This introduction to machine learning and list of resources is adapted from my October 2016 talk at ACT-W, a women’s tech conference.

And that is all for this week, try to read some of these over the weekend....

Wednesday, November 09, 2016

What was the first computer you bought, what were some of your favorite games?

A non-SQL post today; I posted a variation of this before. What was the first computer you bought, and what were some of your favorite games?

The first computer I ever owned was a Commodore 128 (I actually received it as a gift for my 16th birthday).


This baby had 128 KB (not MB) of RAM, 4 sound channels, and 16 colors.
With the C128 you had a C64 built in, and you could also run CP/M (it came on a floppy).
I almost always booted up the C64 mode, which gave you 39 KB of free memory to use. The speed was 1 MHz; the C128 could run at 2 MHz, but then the screen would go dark until you switched back to 1 MHz. The C128 came with BASIC built in. I had a tape recorder so that I could store and retrieve programs and games. This was such a nuisance: if a friend gave you a game and the heads on his recorder were aligned differently, you could not load the game; you would have to use a screwdriver to fix the azimuth. It could take up to 30 minutes to load a game if you didn't have a turbo loader.

Later on, I bought one of these cartridges you can see in the picture below

Using this cartridge, you could 'freeze' the Commodore 64 and store what was in memory on tape. Then all you had to do was load that from tape again and you could continue where you left off. The cartridge also had an assembler built in; it was pretty cool technology for the late 80s.

I still remember the great games from that time, here are some of my favorites

1942

This was a very fun game; basically you had to shoot a bunch of enemy planes and stay clear of any bombs.


Yie Ar Kung-Fu

This game was very good for its time; you would fight characters all specializing in different martial arts.


Kung-Fu Master

I played this one in the arcades as well as on my Commodore; a fun game to pass the time.



Zaxxon

Probably one of the more difficult games of that time; it was 3D, so you had to think in three dimensions, and you would always crash into something.


Ghosts 'n Goblins

Fun game and also fun music

The best part about the games is that you could change values in memory after you loaded a game but before typing RUN. You would use POKE for that. Examples:

POKE 43719,234 POKE 43720,234 POKE 43721,234 Invincibility
POKE 44731,76 POKE 44732,253 POKE 44733,174 All doors unlocked
POKE 34202,200 SYS 2060 Unlimited lives

Programming on the Commodore was primarily done in BASIC or assembler (built in), but you could also buy a C compiler, Oxford Pascal, or many other languages.
Here is an example of BASIC:

10 PRINT "THIS IS THE MAIN PROGRAM",
20 GOSUB 1000
30 PRINT "AND AGAIN";
40 GOSUB 1000
50 PRINT "AND THAT IS ALL."
60 STOP
1000 REM SUBROUTINE STARTS HERE
1010 PRINT "THIS IS THE SUBROUTINE,"
1020 RETURN


Here is some assembler language; the comments explain each instruction:
LDA $5000   ; load the value at address $5000 into the accumulator
ASL         ; shift left, multiplying the value by 2
CLC         ; clear the carry flag before the add
ADC $5000   ; add the original value again, giving 3 times the value
STA $5000   ; store the result back at $5000
BRK         ; break/stop


So what was your first computer?

Monday, November 07, 2016

Sometimes, you can stare at something for 5 minutes before seeing it


The other day I was running some code and the variable that I created was not being populated. I looked at the code for a minute or so, ran it again (like that would change anything), and it was still null. Before I show you the code, I want you to look at this image: what do you see?

If you have never seen this image, you will probably read it as "Paris in the spring". But look closer: do you see the second "the" on the third line?

The way this works is that your brain eliminates the second "the", since it has already processed it.

Take a look at the text below; you probably have no problem "reading" it.


"I cdn'uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Scuh a cdonition is arppoiatrely cllaed Typoglycemia .

"Amzanig huh? Yaeh and you awlyas thguoht slpeling was ipmorantt."



So back to my code... here is a simple example:


DECLARE @name varchar(100)

SELECT @name = @name FROM sys.databases
WHERE database_id = 6


SELECT @name

Do you see the problem? The table has a column named name, the variable is @name; these look very similar, and the query assigns the variable to itself instead of assigning the column.
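The fix is to assign the column, not the variable, to @name:

DECLARE @name varchar(100)

SELECT @name = name FROM sys.databases
WHERE database_id = 6

SELECT @name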


Let's take a look at another example. One of my co-workers was inserting data into a temp table from another table.

INSERT #Temp
SELECT lastrun, cpu_busy, io_busy, idle, pack_received,  connections, pack_errors, 
total_read, total_write, total_errors
FROM master.dbo.spt_monitor

He then needed to add a column, so he modified the temp table to add it, and then added the column, named pack_sent, to his insert query:


INSERT #Temp
SELECT lastrun, cpu_busy, io_busy, idle, pack_received pack_sent, connections, 
pack_errors, total_read, total_write, total_errors
FROM master.dbo.spt_monitor

Running that gave him this error

Msg 213, Level 16, State 1, Line 1
Column name or number of supplied values does not match table definition.


Do you see what happened? He added the column but forgot a comma, so pack_sent became an alias for the pack_received column. Sometimes I really wish AS were required when aliasing a column; at least you could eliminate errors like these.
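With the missing comma added, pack_sent is selected as its own column and the insert works as intended:

INSERT #Temp
SELECT lastrun, cpu_busy, io_busy, idle, pack_received, pack_sent, connections, 
pack_errors, total_read, total_write, total_errors
FROM master.dbo.spt_monitor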


So, how many times have you done stuff like this? And do you have any other examples of similar mistakes?