Tuesday 31 January 2017

SQL Server :: SQL Query takes too long to execute

The story starts with a call, as I was on call that weekend. On Sunday evening, the application support team called the DBA team and said that a job which had been working fine until the previous day had now been running for the last 3 hours. Normally it takes approximately 10 minutes to complete.

My first reaction was that there might be blocking, or some optimization job might be running (maintenance jobs usually run over the weekend). Let me have a look.

I checked and found that indexes had been rebuilt on that instance, resource utilization was normal, and there was no blocking at all.
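
As a side note, one quick way to confirm there is no blocking is to look for requests with a non-zero blocking_session_id; a minimal sketch:

-- List requests that are currently blocked and who is blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       DB_NAME(r.database_id) AS database_name,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0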

While I was still analyzing, the application team told me the job had completed, but we still needed to find the root cause to avoid the same issue on weekdays.

My second question was whether any changes had been implemented recently. The reply was: "Friday's implementation added two new columns to a table that is part of the query, and the job is an Informatica job." As it was an Informatica job, I asked for the actual SQL code to find out whether the problem was with the SQL Server database or with the Informatica server.

I ran the SQL code on the prod DB to capture the estimated and actual execution plans and to collect IO and CPU statistics. Since the estimated and actual execution plans were almost the same, I concluded that statistics were up to date.
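
For reference, this is roughly how the IO and CPU statistics were captured in SSMS (with "Include Actual Execution Plan" turned on); the SELECT below is just a stand-in for the actual job query:

-- Report logical reads, CPU time and elapsed time per statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- The actual job query goes here; sys.objects is only a placeholder
SELECT TOP (10) name, object_id, type_desc
FROM sys.objects;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;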

I then decided to compare with a non-prod DB, as the data volume is comparable between the Dev and Prod environments. The query ran fine on the dev DB (it took less than 10 minutes); however, the execution plans were different.

While comparing the two execution plans, I noticed a key lookup on prod against the same table to which the 2 columns had been added, and that gave me the clue.


I checked the query further and found the part where one column was used in the WHERE clause with no index on it, while the other columns (the ones involved in the key lookup and index scan) were being selected based on that WHERE-clause column.
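
In other words, the slow part of the query had roughly this shape (a simplified sketch; the literal value is made up and the column names are the ones used in the index below):

-- case_typ_dsc is filtered on but not indexed; cd and id are the two newly added columns
SELECT cd, id
FROM dbo.case_t
WHERE case_typ_dsc = N'SomeCaseType'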



I added one covering index with those 2 columns in the INCLUDE clause, and that worked like magic.

CREATE INDEX [IX_dsc] ON [dbo].[case_t] ([case_typ_dsc]) INCLUDE ([cd], [id]) 
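
After creating the index, it is worth re-capturing the actual execution plan to confirm the key lookup is gone. As an optional check (a sketch, assuming the index and table names above), you can also verify that the optimizer is actually using the new index:

-- Check seek/scan/lookup counts for the new covering index
SELECT i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.last_user_seek
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID(N'dbo.case_t')
  AND i.name = N'IX_dsc'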

While dealing with the above scenario, I came across an awesome article written by Denny Cherry (one of my favorites), and you should read it too.

https://redmondmag.com/articles/2013/12/11/slow-running-sql-queries.aspx

Hope it will help.

Thursday 12 January 2017

SQL Server :: When was a Database taken Offline

Who changed SQL Server database state to OFFLINE or When was my database last taken Offline?

Here is a T-SQL script which tells when, and by whom, the database was most recently taken offline or brought online.
The script uses the default trace; if the trace has been recycled since the database went offline or online, you have to change the trace file path and name in the script (as described below).

DECLARE @DBNAME   nvarchar(100)
      , @FileName nvarchar(max)
      , @spid     int
      , @LogDate  datetime
      , @Status   nvarchar(10)

SET @DBNAME = 'AdventureWorks2008R2' -- Change DB name
SET @Status = 'OFFLINE'              -- OFFLINE or ONLINE

-- Path of the default trace file
SELECT @FileName = [path] FROM sys.traces WHERE is_default = 1

DECLARE @ErrorLogTable table (Logdate datetime, ProcessInfo nvarchar(10), [Text] nvarchar(max))

-- Read the current SQL Server error log, filtered on the status and database name,
-- most recent entries first
INSERT INTO @ErrorLogTable
EXEC xp_readerrorlog 0, 1, @Status, @DBNAME, NULL, NULL, 'desc'

-- Take the most recent matching entry; the session id is embedded in ProcessInfo ('spidNN').
-- The round trip through nvarchar drops the seconds, so the trace lookup below matches on minute precision.
SELECT TOP 1 @spid    = CAST(SUBSTRING(ProcessInfo, 5, 5) AS int)
           , @LogDate = CAST(Logdate AS nvarchar)
FROM @ErrorLogTable
ORDER BY Logdate DESC

-- Find the matching event in the default trace to see who took the database offline/online
SELECT DatabaseID, DatabaseName, HostName, ApplicationName, LoginName, StartTime
FROM sys.fn_trace_gettable(@FileName, DEFAULT)
WHERE spid = @spid AND DatabaseName = @DBNAME AND CAST(StartTime AS nvarchar) = @LogDate


If you don't get any result from the above code, it means the default trace has been recycled since the database went offline.

In that case, run the following code to get the path of the trace files.

select path FROM sys.traces WHERE is_default=1

Change the file name in the query below to one of the older trace files in that folder (the path shown is just an example) and run it; the last row will have the timestamp.

SELECT DatabaseID, DatabaseName, HostName, ApplicationName, LoginName, StartTime
FROM sys.fn_trace_gettable( 'H:\sqlsysdb\MSSQL10\MSSQL\Log\log_228.trc', DEFAULT )
WHERE DatabaseName='TestDB'

The second option is to check the SQL Server error log, where you'll find an entry similar to the following for your database.
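
For example, you can search the error log from T-SQL as well; a minimal sketch using the example database name from above (the search string matches the usual "Setting database option OFFLINE" message):

-- Search the current error log for the OFFLINE option change of a given database
EXEC xp_readerrorlog 0, 1, N'Setting database option OFFLINE', N'TestDB'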


Hope it will be helpful.

Wednesday 11 January 2017

SQL Server :: Database Design Best Practices

Database Design Best Practices :-

Although there are several factors that need to be considered while designing databases, here are some tips for designing your relational data warehouse database:

Keep the data files and log files on separate drives with separate spindles. 
Make use of the fastest drives possible for data and log files.
Create as many data files as there are processors on the machine, and distribute the files equally across the different available drives.
As transaction log backups are normally not taken, set the recovery model of the database to SIMPLE. If transaction log backups are required, switch to the BULK_LOGGED recovery model before bulk data load operations and switch back to the FULL recovery model after the data load.
Design views to pull data from the base tables of the relational data warehouse, and specify query hints or filter conditions in them.
To avoid excessive locks or lock escalations, specify the TABLOCK query hint while querying, or set ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF when creating tables or indexes, or pull data from a read-only database (see the sketch after this list).
If fact data needs to be aggregated at the source before pulling it, performance can often be improved by creating indexed (materialized) views; instead of doing the aggregations every time, pull the data from the indexed view.
Make sure the resources SQL Server needs to serve the data pull requests are available; RESOURCE GOVERNOR can be used to control the amount of resources available to OLTP and OLAP operations.
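
For illustration, here is a minimal sketch of two of the tips above, the recovery model switch around a bulk load and an index created with row and page locks disabled; the database, table and index names are made up:

-- Switch to BULK_LOGGED before a bulk data load, then back to FULL afterwards
ALTER DATABASE SalesDW SET RECOVERY BULK_LOGGED;
-- ... bulk load runs here (e.g. INSERT ... WITH (TABLOCK) or BULK INSERT) ...
ALTER DATABASE SalesDW SET RECOVERY FULL;

-- Create an index that disallows row and page locks to limit lock escalation
CREATE INDEX IX_FactSales_DateKey
ON dbo.FactSales (DateKey)
WITH (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);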

Hope it will be helpful.

Thanks,