
About Me

Welcome to my blog!

It is my passion to explore the internals of the database engine and optimizer to make our code more useful and optimal.

I started writing this blog to keep all the useful scripts and techniques in one place where I can always refer back. The other reason is to share thoughts with others in the community.


1. Chirag Darji - September 27, 2007


Very good collection!! I have learned so many new things from this. Thanks.


2. Frank Bazan - October 3, 2007

You haven’t posted for a while Furrukh… I’m eagerly awaiting the next installment 😉


3. David Ly - October 26, 2007

Keep up the good work. I really enjoy your blog; hopefully you will have more articles for us all. I learn immensely from this blog.


4. Satya SK Jayanty - January 13, 2008

Great resource on SQL Server, the best in my reference library – keep it up.


5. kevin richardson - February 20, 2008

Very interesting queries. I am developing a thick-client app for Community Care organisations.

As a novice I am amazed at the fantastic info you can generate with your scripts, and surprised at how many indexes I have created that are not apparently being used. I pretty much indexed every column that is used in a join or on the left side of a criteria statement, e.g. WHERE ColumnID = ??

Speed is very critical for me as I am running my app over the internet, so I will try to use your scripts to work out what is and is not needed; maybe removing a few of those indexes will make it faster.

Do you do consultancy? How much would you charge to advise on the results?

kind regards
Brisbane Australia


6. kevin richardson - February 20, 2008

Perhaps you can answer a question for me. I currently have a stored procedure that has 12 SELECT statements; only one of the twelve statements is ever run, depending upon the combination of 4 input parameters.

I found this to be a lot faster than using a CASE statement in the SQL to evaluate which parameters to query on, and it allowed me to optimise the twelve SELECTs.

My question is: having all twelve in one stored proc, does this get cached properly, or would it be better to break them out into 12 individual stored procs and determine which stored proc to call on the client side?

Many thanks for any feedback.
kind regards


7. furrukhbaig - February 21, 2008

The execution plan is created for the whole stored procedure, since the execution path is not known at the time the plan is built. So all 12 statements will be included in the execution plan.

Breaking the code into 12 different stored procedures can reduce the time spent on compilation and plan creation, but the difference is often not noticeable. Even if the optimizer decides to regenerate the execution plan for one query within a stored procedure, it does not need to recompile the whole procedure; see statement-level recompilation for more detail.

In contrast, having all 12 statements in one stored procedure increases its maintenance cost: in order to change one query, the whole stored procedure needs to be tested with all the possible execution paths (depending on the parameters).

So it is your choice: if you think the stored procedure will not change very often, and testing a single stored procedure is easy, then I would keep all 12 statements in one stored procedure.
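The pattern being discussed can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual procedure; the table, column, and parameter names are all hypothetical.

```sql
-- Hypothetical sketch: one procedure branching to one of several
-- SELECT statements depending on which parameters are supplied.
-- All branches are compiled into the procedure's single cached plan.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId INT  = NULL,
    @OrderDate  DATE = NULL
AS
BEGIN
    SET NOCOUNT ON;

    IF @CustomerId IS NOT NULL AND @OrderDate IS NOT NULL
        SELECT OrderId, CustomerId, OrderDate
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId AND OrderDate = @OrderDate;
    ELSE IF @CustomerId IS NOT NULL
        SELECT OrderId, CustomerId, OrderDate
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;
    ELSE IF @OrderDate IS NOT NULL
        SELECT OrderId, CustomerId, OrderDate
        FROM dbo.Orders
        WHERE OrderDate = @OrderDate
        -- Statement-level recompile: only this SELECT gets a fresh plan,
        -- without recompiling the rest of the procedure.
        OPTION (RECOMPILE);
END
```

The `OPTION (RECOMPILE)` hint on one branch shows the statement-level recompilation mentioned above: a fresh plan for that SELECT does not force the other branches to recompile.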

Hope this will help.

Furrukh Baig


8. gobalakrishnan - October 23, 2008

I have a query which I couldn’t solve. My requirement is as follows:


But the above query is producing some duplicates…

Can you please solve the above query? I need it very urgently…

I hope you will help me out…


9. Vishal - October 30, 2008

Hi There,

Your blogs are fantastic and very helpful.

One thing I wanted to ask, which you may be able to help me with, is how to optimize deleting from a large table.

I have a table which consists of many records, and a stored procedure of mine deletes some records every time it is run. It does not delete everything, as I don’t want everything to be deleted.

The delete statement alone is taking about 3 minutes to process.

I have added an index on this table, but it still seems to take just as long. What can you recommend in this case?

Thanks for your help,


10. furrukhbaig - October 30, 2008

Hi Vishal,

The usual technique for deleting rows is to have a “deleted / active” column in the table and flag the rows you want to delete, rather than physically deleting them.

But if you want to physically delete rows from a table containing more than 10 million rows, then you should partition the table first, so that each partition is treated as a small table and the delete works against a small table rather than one very big one. Choosing the partition key is a critical decision, and the performance of your SQL depends on it. Please note that having many indexes on one big table can REDUCE the performance of a delete operation, because all the indexes need to be updated; on the other hand, an index helps locate the rows to delete, so the balance between no indexes and many indexes is critical.

I have personally worked with a table containing more than 400 million rows, partitioned on date ranges. UPDATE and DELETE operations work significantly faster on a partitioned table if the partition key is used in the WHERE clause, because a small dataset is processed rather than one very big one.
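The two approaches above can be illustrated with a hedged T-SQL sketch. The table, columns, and date boundaries are hypothetical, chosen only to show the shape of each technique.

```sql
-- Approach 1: soft delete -- flag rows instead of physically removing them.
ALTER TABLE dbo.Activity
    ADD IsDeleted BIT NOT NULL DEFAULT 0;

UPDATE dbo.Activity
SET IsDeleted = 1
WHERE ActivityDate < '20080101';  -- queries then filter on IsDeleted = 0

-- Approach 2: partition on a date range so each delete touches
-- only one small partition instead of the whole table.
CREATE PARTITION FUNCTION pfActivityDate (DATE)
AS RANGE RIGHT FOR VALUES ('20070101', '20080101');

CREATE PARTITION SCHEME psActivityDate
AS PARTITION pfActivityDate ALL TO ([PRIMARY]);

-- With dbo.Activity created on psActivityDate(ActivityDate), a DELETE
-- whose WHERE clause uses the partition key is confined to the
-- matching partition:
DELETE FROM dbo.Activity
WHERE ActivityDate >= '20070101'
  AND ActivityDate <  '20080101';
```

The key point is the WHERE clause on the partition key: without it, the delete scans across partitions and loses the benefit described above.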

I hope the above helps.

If you need more help, please provide some information such as the number of rows in the table, the row size, the clustered and non-clustered indexes on the table, and the WHERE clause used in the DELETE operation.

Furrukh Baig


11. DV - February 10, 2010

Excellent blog Furrukh Baig…. It helped me a lot.

Cheers – DV

