Tuesday, 13 January 2015

Create SQL Server on Windows Azure Virtual Machine

In the last post we saw the two different offerings of SQL Server on Microsoft cloud services. Today we will walk through the step-by-step procedure for creating a SQL Server 2014 instance on a Windows Azure virtual machine. As mentioned in the last post, to create any Microsoft cloud-based service you need an Azure subscription (http://azure.microsoft.com/en-us/pricing/purchase-options/). There are various tiers of servers offered, based on performance and scalability. Assuming you have purchased the required subscription, you should have access to your own customised Azure portal page when you log in to https://manage.windowsazure.com, as below.


In the portal page, select Virtual Machines and click Create a Virtual Machine.



As you can see in the above screen, it prompts you for configuration details: you can either select a pre-defined configuration image from the gallery, or select your own image if your subscription allows it. Then select Create Virtual Machine.

I have chosen "From Gallery" for our example.


As you can see in the above image, you are presented with images of various Microsoft products. As I want a Windows server hosting SQL Server 2014, I chose SQL Server -> SQL Server 2014 RTM Enterprise.

Note:- You can still create a plain Windows machine and install SQL Server yourself, as on an on-premises machine, if you already have a licence for SQL Server.



Now it prompts you for a virtual machine name and a server configuration/tier, along with a local administrator user name and password (above image).


Next, if you are creating an Azure virtual machine for the first time, it prompts you to create a cloud service account and a storage account (above image).


Next, select whether you want to install an anti-virus product or other agents on your base Windows build (above image). Once you click OK, Microsoft starts provisioning your server (below image).

Once the server is provisioned and ready to use, you should see the status as Running, and at the bottom of the portal the Connect option should be enabled (image below).



Click Connect. This downloads an RDP file; open it to Remote Desktop to the new server (image below).





As you can see in the above figure, you can now access the server as a usual VM with SQL Server 2014 installed; it is no different from your on-premises server.

Now let's go back to the portal and see what options we have. First, the home page (image below):


As you can see, a virtual server has been created for your account. Let's see what happens when you click the virtual server.




When we select the virtual server from the home page, we are presented with a dashboard showing server details and performance. As the tabs at the top suggest, you can also monitor performance separately and configure endpoints for the server.


The last page is the Configure tab in the server settings. In this page you can change the server configuration to a different tier for scalability, and also select something called an Availability Set, a high-availability option for the server which is not in the scope of this post.

In the next post we will see how to create an Azure SQL database from the portal, with screenshots.

Monday, 12 January 2015

Understanding SQL Server on Windows Azure vs Azure SQL Database

People often confuse the two different types of SQL Server service available on Azure (Microsoft cloud services):

1. SQL Server on Windows Azure
2. Azure SQL Database

SQL Server on Windows Azure:-

SQL Server on Windows Azure is simply a regular instance of SQL Server installed on a virtual Windows machine hosted in the cloud (Windows Azure): a virtual server hosted in a Microsoft data centre instead of an on-premises data centre. You can use Windows and SQL Server just as on your on-premises systems, and Windows and SQL Server administrators still manage the environment. Microsoft categorises SQL Server in an Azure Virtual Machine (VM) as Infrastructure as a Service (IaaS).


Microsoft provides many pre-defined images with SQL Server configured, so when you provision your virtual machine in Azure you can request a SQL Server image and have SQL Server already installed when the system is delivered.
There are various purchase options (http://azure.microsoft.com/en-us/pricing/purchase-options/) for cloud-based SQL services. Without going into each option in detail, broadly you can either subscribe to a SQL Server image whose licence covers both Windows and SQL Server, or subscribe to a plain Windows Azure image and install SQL Server yourself using any existing licence you have.

Azure SQL Server Databases:-

Azure SQL Database is a relational database-as-a-service: Microsoft offers the database itself as a service. These databases are hosted on servers maintained by Microsoft, and the server layer is not visible to users or admins. In this configuration the database maintenance requirements are minimal, as you do not control any physical features of the database and do not have to maintain the server infrastructure. Microsoft categorises Azure SQL Database as Platform as a Service (PaaS).


Since Azure SQL Database works on a slightly different architecture from standard SQL Server databases, it also has some limitations. Server-level features are inaccessible, as only the database is offered as a service, and most system objects are not visible, which can make the database harder to manage.

Find more details on features not supported by Azure SQL databases.


Below are images of a SQL Server instance hosted on Windows Azure and an Azure SQL database, to show the look-and-feel difference when accessed through SSMS. The first image is the Azure portal with two sections highlighted: the Virtual Machines section is where we create SQL Server on Windows Azure, and the SQL Databases section is where we create Azure SQL databases. The second image shows how SQL Server on Windows Azure and Azure SQL databases look in SSMS.

In my next post we will discuss how to create SQL Server on Windows Azure and Azure SQL databases in detail, with some screenshots.




Monday, 5 January 2015

Transaction Log Backups on Availability Databases and Their Effect on Database Shrink

With the introduction of availability databases, I have frequently been asked how to schedule a T-Log backup on availability databases: do we need to schedule the backup on both servers, or only on the primary? To clear these doubts I ran some tests to show the effect of a T-Log backup on availability databases. The transaction log architecture for availability databases is clearly explained in the article below; all I am going to do is some testing to prove it.


I am going to use an availability database called AGTest for this test.

Create the test table on the primary database:

USE [AGTest]
GO
CREATE TABLE [dbo].[TLogTest](
       [Cnt] [int] NULL
) ON [PRIMARY]

GO

Get the T-Log usage before the data updates:

DBCC SQLPERF(LOGSPACE)

Primary Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          0.9921875       37.20472             0

Secondary HA Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          0.9921875       12.79528             0


Run a logged operation on the primary database to generate T-Log usage:

DECLARE @intLimit AS INT = 1000000
DECLARE @intCounter AS INT = 1

WHILE @intCounter <= @intLimit
BEGIN
INSERT INTO TLogTest VALUES (@intCounter)
SET @intCounter = @intCounter + 1
END

Get the T-Log usage after the logged operation:

Primary Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          51.17969        98.73397             0

Secondary HA Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          51.17969        98.21497             0

The above metrics look logical as per the HA architecture. Now I am going to take a T-Log backup of the AGTest database on the primary alone:

BACKUP LOG [AGTest] TO DISK = 't:\Test.bkp'

Now let's look at the T-Log usage:

Primary Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          51.17969        3.325828             0

Secondary HA Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          51.17969        2.814456             0


So a T-Log backup on the primary does truncate the log records on both the primary and the secondary HA database. Technically a backup on either database should be enough, but there is a catch. Now I try to shrink the T-Log on the primary; let's see what happens.
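The shrink command itself is not shown above, so here is a minimal sketch of the kind of command used, assuming the log file's logical name is AGTest_log (check sys.database_files for the actual name in your environment):

```sql
USE [AGTest]
GO
-- Find the logical name of the log file first
SELECT name, type_desc, size FROM sys.database_files;
GO
-- Attempt to shrink the log file back towards 1 MB
DBCC SHRINKFILE (N'AGTest_log', 1);
GO
```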


Now let's look at the log file size:

Primary Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          51.17969        6.031522             0

Secondary HA Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          51.17969        5.566898             0

Whatever I tried, I was not able to shrink the file. So I decided to try the only step I had missed: a log backup of the secondary HA database.

BACKUP LOG [AGTest] TO DISK = 'P:\Test.bkp'

Now I ran the shrink again on the primary database. Let's see the log file size now:

Primary Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          0.9921875       30.01969             0

Secondary HA Database:-

Database Name   Log Size (MB)   Log Space Used (%)   Status
AGTest          0.9921875       5.659449             0

:-) And there it goes: the log file was shrunk on both the primary and the secondary database. Hope this article gives you an idea of how to schedule log backups on HA databases.

Monday, 27 October 2014

SQL Server 2014 new In-memory Tables

I have summarised the basic concepts of In-Memory OLTP from various Microsoft articles, which should help you get started with the basics.

SQL Server 2014 new In-memory Tables:-

Microsoft has introduced a new concept called In-Memory OLTP with SQL Server 2014, for performance optimisation on OLTP database servers. Traditionally SQL Server stores tables on disk, and the SQL OS moves data pages from disk into memory (the SQL Server buffer pool) depending on the query executed; often the frequently accessed table pages in memory are paged back to disk to accommodate other requested data pages, resulting in frequent paging and high PAGEIOLATCH waits. With hardware costs reducing over the years, In-Memory OLTP gives us the option to store frequently used tables in memory instead of on disk. The In-Memory architecture is fully integrated with SQL Server, and memory-optimized tables are fully transactional and durable and are accessed using Transact-SQL in the same way as disk-based tables.


In-Memory Architecture


Legacy SQL Server had a similar function called DBCC PINTABLE, which pinned a table's data pages in memory so that they were not flushed to disk. However, this was an internal SQL Server function, meaning the pages were pinned in the SQL Server buffer cache, which often created a memory crunch for other data that needed processing; it caused more problems than it solved, and the function was deprecated. The In-Memory OLTP architecture, unlike DBCC PINTABLE, has its own memory space for storing memory-optimized tables and indexes and does not use the buffer cache, which remains fully available for disk-based table operations. SQL Server can work with a memory-optimized table and a disk-based table in the same transaction, meaning you can have both memory-optimized tables and disk-based tables in the same database and query across both types of tables.



Transactional Concurrency:-

One of the major advantages of memory-optimized tables is that locking is minimal irrespective of the transaction's isolation level. For example, under snapshot isolation a disk-based table does not block a SELECT during a data modification, thanks to row versioning in the tempdb database, but it still blocks any process that tries to update/delete the same data simultaneously. In the memory-optimized architecture there is no locking when two transactions attempt to update the same row; instead this is handled by conflict management. Memory-optimized tables use multi-versioning, meaning each row can have multiple versions, instead of the single row version used in snapshot isolation; also, unlike snapshot isolation, which stores row versions in tempdb, the row versions of memory-optimized tables are stored in memory. This architecture allows even higher isolation levels to have transactional concurrency. As part of conflict management, SQL Server detects conflicts between concurrent transactions; if a conflict is detected, the transaction is terminated and the client needs to retry.
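As a sketch of what the client-side retry might look like: error 41302 is the update-conflict error raised for memory-optimized tables, and the table name below is the one from the Implementation section (the retry count and error list are illustrative assumptions):

```sql
-- Retry loop for a write against a memory-optimized table.
-- If two transactions update the same row, one is terminated with a
-- conflict error and must be retried by the caller.
DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        UPDATE dbo.durabletable_memory WITH (SNAPSHOT)
        SET Col2 = 'updated'
        WHERE Col1 = 1;
        SET @retry = 0;  -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() IN (41302, 41305, 41325)
            SET @retry = @retry - 1;  -- conflict, try again
        ELSE
            THROW;  -- some other error, re-raise it
    END CATCH
END
```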


Durability:-

Memory-optimized tables can be configured as either fully durable (SCHEMA_AND_DATA) or non-durable (SCHEMA_ONLY). Tables configured as fully durable (SCHEMA_AND_DATA) can be recovered without any data loss on a SQL Server restart, whereas tables configured with SCHEMA_ONLY durability recover only the table metadata on restart. The latter can typically be staging tables used in an ETL process, which can be flushed on a restart like tempdb tables.
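A minimal sketch of a non-durable staging table (the table name is hypothetical, and the database is assumed to already have a memory-optimized filegroup, as created in the Implementation section):

```sql
-- Non-durable staging table: the schema survives a restart, the data does not
CREATE TABLE dbo.staging_memory
( Col1 int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
  Col2 char(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
```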

As with disk-based tables, all changes to durable memory-optimized tables are logged to the transaction log, and SQL Server writes the transactions to disk. This can be confusing: why is data written to disk when the table lives in memory? Memory-optimized tables are fully integrated with the same log stream used by disk-based tables, to allow point-in-time recovery and operations such as T-Log backup and restore without any additional steps. A memory-optimized table also maintains a copy of the table and its data on disk in a different data structure, stored as free-form data rows, with the help of two kinds of files called data and delta files. The data and delta files are located in one or more containers (using the same mechanism as FILESTREAM data), and these containers are mapped to a new type of filegroup called a memory-optimized filegroup. On a SQL Server restart, fully durable memory-optimized tables are loaded back into memory from the data and delta files.


Implementation:-

Create a database with a memory-optimized filegroup and container:

CREATE DATABASE MEMORYTABLETEST
ON PRIMARY
(NAME = 'MEMORYTABLETEST_PRIMARY', FILENAME = 'C:\Data\MEMORYTABLETEST_PRIMARY.mdf'),
FILEGROUP INMEMORYOLTP CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = 'MEMORYTABLETEST_INMEMORY', FILENAME = 'C:\InMemory\Container')
 
 
Create a durable memory-optimized table:
 
CREATE TABLE dbo.durabletable_memory
( Col1 int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
  Col2 char(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
 
   
Memory Optimization Indexes

Memory-optimized tables support indexes. Memory-optimized indexes exist only in memory, and index operations are not logged to the transaction log.

There are two types of memory-optimized indexes

1. Non-clustered hash index
2. Non-clustered index

The hash index architecture differs from a traditional index: it uses a data structure called buckets to store the index entries.
Each memory-optimized table must have at least one index. Note that each PRIMARY KEY constraint implicitly creates an index, so if a table has a primary key, it has an index. A primary key is a requirement for a durable memory-optimized table.
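Choosing a sensible BUCKET_COUNT matters for hash indexes. One way to sanity-check it after loading data is the hash-index stats DMV, as in this sketch (run in the database that holds the memory-optimized tables):

```sql
-- An average chain length close to 1 and relatively few empty buckets
-- indicate a reasonable BUCKET_COUNT for the hash index.
SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name                    AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON hs.object_id = i.object_id AND hs.index_id = i.index_id;
```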


Limitations

In-Memory OLTP improves performance best for OLTP workloads with short-running transactions, but to gain the improvement we need to select the right tables to optimise; otherwise it can cause serious performance issues. Being a new component, In-Memory OLTP has many functional limitations, which may be lifted in future versions. For example, cross-database transactions, replication, mirroring, and database snapshots are not supported, and there are also various limitations on the operations you can perform on these tables. But if the right table is configured to use this feature, Microsoft suggests we may see a 5-20x performance improvement, which is a huge leap.

Wednesday, 8 October 2014

How to create an SSRS DR / copy the Report Server database across different servers

Imagine you have installed and initialised SSRS in your primary and DR environments. To mirror the DR environment to the primary SSRS, follow the steps below.

1. Copy the ReportServer and ReportServerTempDB databases to the DR server.

Note:- If you just restore the databases and try to access SSRS, you get the error "The report server installation is not initialized. (rsReportServerNotActivated)". You need to follow the steps below to configure the DR SSRS with the existing databases.

2. Back up the encryption key of the primary SSRS.





3. Copy the encryption key file to the DR SSRS server, in my case C:\Test\ssrsencryptkey.snk.

4. Run the following query in the ReportServer database of the DR server:
DELETE FROM [Dbo].[Keys] WHERE client > -1

5. Search the Program Files folder for "RSKeymgmt.exe" on the DR server.

6. Execute "RSKeymgmt.exe" from a command prompt on the DR server in admin context, with the below syntax:
RSKeymgmt.exe -a -f "C:\Test\ssrsencryptkey.snk" -p "<password of the encryption key>"

Once you have completed these steps, when you access the Reports web page of the DR SSRS it should match the primary server. Remember, I have not configured any real-time DR for the ReportServer and ReportServerTempDB databases in the above example; you have to set up log shipping/mirroring for the SSRS databases for a complete DR solution. The above steps merely explain how to configure SSRS in case of a failover. The same steps can be used if you want to copy SSRS across different servers. Hope the article helps.


Wednesday, 6 August 2014

SQL Database Suspect Mode due to MSDTC Transaction

A database may go into suspect mode for various reasons, most of them related to disk or data corruption, but SQL Server may also put a database into suspect mode if it is not able to recover the database properly on restart.
We had one such scenario recently: on a planned-outage restart, one of the databases went into suspect mode and we saw the following errors in the SQL error log.


2014-08-01 10:37:31.760 spid5s       Attempting to recover in-doubt distributed transactions involving Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
2014-08-01 10:39:13.480 spid5s       SQL Server detected a DTC/KTM in-doubt transaction with UOW  {14EBFD4C-0955-4C25-91B1-C418EE182E64}.Please resolve it following the guideline for Troubleshooting DTC Transactions.
2014-08-01 10:39:13.480 spid5s       Error: 3437, Severity: 21, State: 3.
2014-08-01 10:39:13.480 spid5s       An error occurred while recovering database 'DBName'. Unable to connect to Microsoft Distributed Transaction Coordinator (MS DTC) to check the completion status of transaction (0:75299367). Fix MS DTC, and run recovery again.
2014-08-01 10:39:13.480 spid5s       Error: 3414, Severity: 21, State: 2.
2014-08-01 10:39:13.480 spid5s       An error occurred during recovery, preventing the database 'DBNAME' (6:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

We already knew that the application using the database uses MSDTC against various sources, and an unexpected restart can leave an in-doubt transaction in DTC. Basically, in-doubt transactions are transactions that the DTC service cannot decide to commit or abort by itself; since the UOW (unit of work) for the transaction exists in SQL Server, SQLOS does not recognise it as a transaction in in-doubt status and kill it, so the transaction remains active until it is resolved or aborted from DTC. This is similar to the "-2 SPID blocking" topic I wrote about earlier on this blog.

When SQL Server tries to recover a database holding the UOW of an in-doubt transaction on restart, it can neither roll back nor roll forward the transaction, so it eventually puts the database in suspect mode, failing to recover it completely. Go to Component Services->Computers->My Computer->Distributed Transaction Coordinator->Local DTC->Transaction List for standalone servers, or Component Services->Computers->My Computer->Distributed Transaction Coordinator->Clustered DTC->Transaction List for a clustered instance; right-click each transaction marked in-doubt and either commit or abort it, depending on your requirements. Then restart the SQL services, which should recover the database cleanly.
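Alternatively, an orphaned in-doubt transaction can be rolled back from the SQL Server side with KILL and the UOW GUID taken from the error log, as in this sketch (KILL with a UOW only rolls the transaction back; committing it still has to be done from the DTC console):

```sql
-- Roll back an orphaned in-doubt MSDTC transaction by its UOW GUID
-- (the GUID here is the one reported in the error-log messages above)
KILL '14EBFD4C-0955-4C25-91B1-C418EE182E64'
```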

But the clean way of restarting a SQL Server with DTC transactions is to shut down the application before the restart, which avoids this issue in the first place. Hope this article helps!


Friday, 27 June 2014

UPDATE and REPLACE TSQL

Below is an example of using the UPDATE statement with the REPLACE function:


UPDATE [AdventureWorks2008R2].[Person].[Person] 
SET FirstName = REPLACE(FirstName,'sam','jim') Where FirstName = 'sam'
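A quick way to preview the effect before running the update is to run REPLACE in a plain SELECT against the same table:

```sql
-- Preview what REPLACE would produce, without modifying any data
SELECT FirstName,
       REPLACE(FirstName, 'sam', 'jim') AS NewFirstName
FROM [AdventureWorks2008R2].[Person].[Person]
WHERE FirstName = 'sam';
```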