
Who is Active ? October 25, 2010

Posted by furrukhbaig in DMV's, Execution Plan, Optimize, Queries running on the server, sp_who, sp_who2, sp_whoisactive, system monitoring, Tips, XML Execution Plan.

Adam Machanic has released a new build of his system monitoring stored procedure, sp_WhoIsActive. It's really useful and I would like to congratulate him for his efforts.

Check out the link below for his original post and to download the code.

http://sqlblog.com/blogs/adam_machanic/archive/2010/10/21/who-is-active-v10-00-dmv-monitoring-made-easy.aspx
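
Once the procedure is installed, a minimal usage sketch looks like the following (the @get_plans parameter is taken from the sp_WhoIsActive documentation; see the post above for the full parameter list):

-- Default call: one row per active request.
EXEC sp_WhoIsActive;

-- Include the XML execution plan for each request.
EXEC sp_WhoIsActive @get_plans = 1;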

Enjoy !!

Find out what's running on SQL Server 2005 December 13, 2007

Posted by furrukhbaig in DMV's, Execution Plan, Optimize, Performance, Performance Tuning, Profiler, Queries running on the server, SQL Server 2005, SQL Server 2005 features, XML Execution Plan.

Everyone wants to know what's running on the box. Especially if your job is to stabilise a server, you are always concerned about what is killing the box.

A friend of mine has published a very useful post with scripts to find out what's running on SQL Server, along with the execution plans.

http://www.proteanit.com/b/2007/01/22/a-useful-script-to-analyse-current-activity-on-your-box/

The same information can also be captured by running Profiler and collecting the XML execution plan event, but that is not always possible due to security restrictions and the overhead of Profiler itself.
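
For reference, here is a minimal sketch along the same lines (not the script from the linked post), using the SQL Server 2005 DMVs sys.dm_exec_requests, sys.dm_exec_sql_text and sys.dm_exec_query_plan:

-- List currently executing requests with their SQL text and XML plan.
SELECT
    req.session_id,
    req.status,
    req.wait_type,
    txt.text       AS sql_text,
    pln.query_plan AS xml_plan
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) txt
OUTER APPLY sys.dm_exec_query_plan(req.plan_handle) pln
WHERE req.session_id <> @@SPID;  -- exclude this query itself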

Index Fragmentation and dm_db_index_physical_stats December 12, 2007

Posted by furrukhbaig in DMV's, dm_db_index_physical_stats, Fragmentation, Index Fragmentation, Index tuning, Index usefulness, Indexes, Optimize, Performance, Performance Tuning, SQL Server 2005, SQL Server 2005 features, Tips.

Fragmentation can cause serious performance issues, and it's vital to understand how it impacts performance before I tell you how to fix it.

Fragmentation can cause slow-running queries due to more disk IO than usual, which is itself caused by page splits. Fragmentation not only hurts query performance, it can also slow down write operations such as INSERT, DELETE and UPDATE.

SQL Server stores data on 8 KB data pages. This limit can be reached very quickly for wide tables with lots of varchar/nvarchar columns (nvarchar takes 2 bytes to store 1 character), which causes requests for more data pages. When SQL Server fills a whole page, it allocates another page to store more data. If the table has a clustered index then the data needs to be stored in sorted order, because the clustered index enforces the physical order of data according to the clustering key; that is why you can only have one clustered index per table.

Now assume you have a compound clustered index on the CountryID and CustomerNo columns of a transactional table for an online car rental portal that receives 10 transactions per minute. The clustered index will make sure that data is stored in sorted order on CountryID and CustomerNo. With millions of rows in the table, consider what happens when you receive a transaction row that needs to be inserted on page 1 (CountryID = 1, CustomerNo = 100). SQL Server has to make room on that first data page (depending on the FILLFACTOR; the default is 100%). If the required data page does not have enough room for the row, existing rows on the page need to be moved forward (a page split) to make room for the new row in sorted order. This can cause a lot of page splits and fragmentation, because the new pages allocated by those splits cannot be guaranteed to be in sequence with the existing pages. This can seriously hurt performance for data modification (due to page splits) and data reads (due to out-of-order pages).
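
To make the example concrete, here is a hypothetical table and compound clustered index of the kind described above (table, column and index names are illustrative only); a FILLFACTOR below 100 leaves free space on each page to absorb inserts and reduce page splits:

-- Hypothetical car rental transaction table.
CREATE TABLE dbo.CarRental
(
    CountryID  INT           NOT NULL,
    CustomerNo INT           NOT NULL,
    RentalDate DATETIME      NOT NULL,
    Notes      NVARCHAR(500) NULL  -- nvarchar uses 2 bytes per character
);

-- The clustered index keeps data pages in CountryID, CustomerNo order.
-- FILLFACTOR = 90 leaves roughly 10% free space per leaf page to reduce page splits on insert.
CREATE CLUSTERED INDEX CIX_CarRental
    ON dbo.CarRental (CountryID, CustomerNo)
    WITH (FILLFACTOR = 90);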

The dynamic management function sys.dm_db_index_physical_stats can be used to check fragmentation.

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(N'YourDB'), OBJECT_ID(N'dbo.CarRental'), NULL, NULL, 'DETAILED');

The above SQL returns fragmentation information for the given database name (parameter 1) and table name (parameter 2). The last parameter specifies the scan mode used for the analysis ('LIMITED', 'SAMPLED' or 'DETAILED').
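
As a rough follow-up sketch (the dbo.CarRental table name and the 5%/30% thresholds below are assumptions based on commonly cited guidance, not rules from this post), the avg_fragmentation_in_percent column can drive a decision to reorganize or rebuild an index:

-- Run in the context of the database that holds dbo.CarRental.
SELECT
    ind.name AS IndexName,
    ps.avg_fragmentation_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.CarRental'), NULL, NULL, 'LIMITED') ps
INNER JOIN sys.indexes ind
    ON ind.object_id = ps.object_id
    AND ind.index_id = ps.index_id;

-- Commonly cited rule of thumb (an assumption, tune for your workload):
--   5% to 30% fragmentation -> ALTER INDEX ... REORGANIZE
--   over 30% fragmentation  -> ALTER INDEX ... REBUILD
ALTER INDEX CIX_CarRental ON dbo.CarRental REORGANIZE;
-- ALTER INDEX CIX_CarRental ON dbo.CarRental REBUILD;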

It is good practice to create clustered indexes on an ever-increasing value (e.g. an IDENTITY column) to avoid page splits and out-of-order pages.

Parameter Sniffing and OPTIMIZE FOR August 22, 2007

Posted by furrukhbaig in Optimize, Parameter Sniffing, Performance, Query hint, SQL Server 2005, stored procedure, TSQL.

Parameter sniffing refers to a process whereby SQL Server’s execution environment “sniffs” the parameter values during the first invocation and passes them along to the query optimizer so that they can be used to generate an optimal query execution plan.

“First invocation” also refers to the first invocation after a plan was removed from the cache for lack of reuse or for any other reason. The optimizer “knows” what the values of the input parameters are, and it generates an adequate plan for those input parameters. SQL Server internally maintains the statistics and distribution of the values in the columns used for filtering.

While parameter sniffing is certainly a powerful feature, it can cause problems when a procedure’s plan happens to have been kicked out of the procedure cache (or was never in it) just prior to the procedure being called with atypical parameter values. This can result in a plan that is skewed toward atypical use, one that is not optimal when called with typical values. Since, once cached, a query plan can be reused for parameter values that vary widely, the ideal situation is to have a plan in the cache that covers the typical usage of the procedure as much as possible. If a plan makes it into the cache that is oriented toward atypical parameter values, it can have a devastating effect on performance when executed with typical values.

An example would probably help here. Suppose we have a stored procedure that returns sales data by country. In our case, three-fourths of our sales are in the UK. The procedure takes a single parameter, @Country, indicating the country for which to return sales info. It uses this parameter to filter a simple SELECT statement that returns the requested sales data.

CREATE PROCEDURE uspGetCountrySale
(@Country Varchar(50))
AS 
SELECT OrderID, CustomerID, EmployeeID, OrderDate
FROM dbo.SaleOrders
WHERE Country = @Country 
GO

The optimizer would most likely choose to do a clustered index scan when creating the execution plan for this query because (given that “UK” would normally be passed in for @Country) so much of the table would be traversed anyway that scanning it would require less I/O and be faster than repeated nonclustered index lookups. However, what happens if the plan happens to have been kicked out of the cache (let’s say due to an auto-statistics update) just prior to a user calling it with, say, “Spain”, where we have almost no sales? Assuming a suitable index exists, the optimizer may decide to use a nonclustered index seek in the new query plan. Subsequent executions of the procedure would reuse this plan, even if they passed in “UK” for @Country. This could result in performance that is much slower than the scan-based plan.
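
One way to observe this behaviour is sketched below (it assumes the uspGetCountrySale procedure above and a non-production instance, since clearing the plan cache affects the whole server):

DBCC FREEPROCCACHE;                         -- clear the plan cache (server-wide, avoid on production)
EXEC uspGetCountrySale @Country = 'Spain';  -- first call compiles a plan for the rare value (likely a seek)
EXEC uspGetCountrySale @Country = 'UK';     -- reuses that seek-based plan, even though a scan suits 'UK' better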

As a workaround prior to SQL Server 2005, local variables can be used instead of the stored procedure parameters. Note that SQL Server cannot sniff the value of a local variable. This leads SQL Server to use the statistics on the filter column and create a plan that is best for the average distribution of values in that column. This can still hurt performance when the procedure is called with an atypical value, but it will do best for typical values.

CREATE PROCEDURE uspGetCountrySale
(@Country Varchar(50))
AS
DECLARE @_Country Varchar(50)
SET @_Country = @Country -- copy into a local variable so the optimizer cannot sniff its value
SELECT OrderID, CustomerID, EmployeeID, OrderDate
FROM dbo.SaleOrders
WHERE Country = @_Country
GO
 
 

There’s a new query hint in SQL Server 2005 to tackle the problem: the OPTIMIZE FOR query hint. This hint lets you provide SQL Server with a literal that reflects the typical selectivity of the parameter, regardless of which value happens to be sniffed at compile time. If you know the value the parameter will typically hold, you can supply that literal in the hint. For example (note that with the hint in place the local-variable workaround above is no longer needed, so the hint can reference the parameter directly):

 
CREATE PROCEDURE uspGetCountrySale (@Country Varchar(50))
AS
SELECT OrderID, CustomerID, EmployeeID, OrderDate
FROM dbo.SaleOrders
WHERE Country = @Country
-- Optimize the plan as if 'UK' (the typical value) had been supplied,
-- no matter which value is actually passed in at compile time.
OPTION (OPTIMIZE FOR (@Country = 'UK'));
GO

 

 

SEEK is better than SCAN August 17, 2007

Posted by furrukhbaig in DMV's, dm_db_index_physical_stats, dm_db_index_usage_stats, Execution Plan, Index tuning, Index usefulness, Indexes, Optimize, Performance, Performance Tuning, Query hint, SEEK vs Scan, SQL Server 2005, SQL Server 2005 features, TSQL, XML Execution Plan.

I have been involved in performance tuning for several databases and usually come across situations where indexes have been created but, due to poorly written SQL code, those indexes are either not utilized or badly used by the optimizer. To find out how the optimizer is using your indexes, use the query below. It shows how frequently the optimizer performs SEEK, SCAN and LOOKUP operations on each index.

As a rule of thumb, SEEK is better than SCAN. But there are a few scenarios where a SCAN operator performs better than a SEEK, especially when a large number of rows is expected to be returned by the operation.

SELECT
    Table_Name = OBJECT_NAME(usg.object_id),
    IndexName  = ind.name,
    usg.user_seeks,
    usg.user_scans,
    usg.user_lookups,
    usg.last_user_seek,
    usg.last_user_scan,
    usg.last_user_lookup
FROM
    sys.dm_db_index_usage_stats usg
    INNER JOIN sys.indexes ind
        ON usg.object_id = ind.object_id
        AND usg.index_id = ind.index_id
WHERE
    -- Your table list goes here
    OBJECT_NAME(usg.object_id) IN ('Table_Name1', 'Table_Name2')
ORDER BY
    Table_Name

  

Is this Index useful ? August 17, 2007

Posted by furrukhbaig in DMV's, dm_db_index_usage_stats, Index tuning, Index usefulness, Indexes, Optimize, Performance, Performance Tuning, SQL Server 2005, SQL Server 2005 features, TSQL.

Creating indexes on tables has always been a tricky question. Creating an index does not mean that the optimizer will use that index to solve queries. SQL Server 2005 introduced a dynamic management view, sys.dm_db_index_usage_stats, to capture the use of indexes by SELECT, INSERT, UPDATE and DELETE statements. Thus any index or table which does not appear in this view is an overhead to the system and not useful.

The following query returns all the indexes and tables which have never been used by SQL Server since the service last started. The best way to check is to create the index, run the data workload or regression test that should exercise all indexes, and then run the query below. See the full article by the SQL Server Customer Advisory Team (CAT): How can SQL Server 2005 help me evaluate and manage indexes?

SELECT
    ObjectName = OBJECT_NAME(ind.object_id),
    IndexName  = ind.name
FROM
    sys.indexes ind
    INNER JOIN sys.objects obj
        ON obj.object_id = ind.object_id
WHERE
    OBJECTPROPERTY(obj.object_id, 'IsUserTable') = 1
    AND NOT EXISTS
    (
        SELECT 1
        FROM sys.dm_db_index_usage_stats usg
        WHERE usg.object_id = ind.object_id
          AND usg.index_id = ind.index_id
    )
ORDER BY
    ObjectName,
    IndexName