Bad Habits: Looking for Optimizations in All the Wrong Places

Over the years I've come across a lot of common scenarios where people are trying to optimize the wrong things. While many of these could probably make "bad habits" blog posts of their own, they all seemed to fit this same theme, so I thought I would pull together ten of the most common situations I see:


1. Always worrying about the highest wait

A common thing I see out in the wild is someone constantly monitoring their waits, and always assuming that the highest wait is a problem. An important thing to keep in mind is that something is always going to be the highest wait; keep things in perspective. Is something on your system accumulating a lot of waits? Is it a real wait you should be concerned about, or is it among the queue and timer waits that should always be ignored (see the NOT IN list here)? Is it actually associated with a real, observable performance problem on your system? If you are always responding to the highest wait on your system and trying to "fix" it, you just might be a solution searching for a problem. :-)
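To make that concrete, here is a minimal sketch of the kind of query people run for this - note that the exclusion list below is deliberately abbreviated and illustrative only; the real list of benign waits is much longer:

SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN   -- benign queue/timer waits (abbreviated list)
(
  N'BROKER_TASK_STOP', N'CHECKPOINT_QUEUE', N'CLR_AUTO_EVENT',
  N'DIRTY_PAGE_POLL', N'LAZYWRITER_SLEEP', N'REQUEST_FOR_DEADLOCK_SEARCH',
  N'SLEEP_TASK', N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
  N'XE_DISPATCHER_WAIT', N'XE_TIMER_EVENT'
)
ORDER BY wait_time_ms DESC;

Even then, a big number at the top of that list is just a starting point for investigation, not proof of a problem.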

Paul Randal (@PaulRandal) has some great posts on how to approach wait stats analysis, including one I already linked to above:

Please read through those and, when using this methodology to tune servers, make sure you are focusing on waits that matter and that they are actually contributing to a real problem on your server.

Back to the Table of Contents


2. Treating "red flags" as red flags, always

There are a lot of "red flags" you might come across when analyzing an execution plan - from obvious ones like scans, sorts, and lookups to more subtle ones like implicit conversions, mismatched statistics, and spills to tempdb. While in a lot of cases these data points are good indicators of potential issues, they aren't necessarily a problem - much like the highest wait isn't always a critical issue that needs to be addressed immediately.

Let's take a look at a scan, for example. Just about every time anyone sees an index scan or table scan in their execution plan, they panic, and try to figure out how to coerce it into a seek. The problem is that sometimes SQL Server chose a scan precisely because, for this query, it is the most efficient way to retrieve the data. (I talk about this specific case in a lot more detail here - see #2.)

There are a variety of reasons why a scan would be chosen over a seek. For example, the table might be so small that a scan is cheaper than navigating an index, the query might touch enough of the table that seeks plus lookups would cost more, or the predicate might not be sargable in the first place.

So if you find yourself looking at a query that scans 12 rows and returns in 7 milliseconds, and catch yourself thinking:

"Oh dear, this plan has a scan! We need to fix it!"

Who cares? The scan was against a table with 12 rows, and the entire query completed in 7 milliseconds. How much faster do you think you can make that, even if it did make sense to use a seek?
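If you want to see this for yourself, here is a hypothetical, self-contained example (the table and data are made up) where a scan is exactly the right plan:

-- Hypothetical demo: a 12-row table where a clustered index scan is the
-- cheapest possible plan.
CREATE TABLE dbo.TinyLookup
(
  TinyID   INT PRIMARY KEY,
  TinyName NVARCHAR(32) NOT NULL
);

INSERT dbo.TinyLookup (TinyID, TinyName)
SELECT TOP (12) ROW_NUMBER() OVER (ORDER BY [object_id]),
       N'value ' + CONVERT(NVARCHAR(11), [object_id])
FROM sys.objects;

-- There is no index on TinyName, and even if there were, the whole table
-- fits on a single page - a scan reads that one page and is done.
SELECT TinyID, TinyName
FROM dbo.TinyLookup
WHERE TinyName LIKE N'value%';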

Back to the Table of Contents


3. Changing a cursor to a while loop

A lot of people assume that rewriting a cursor as a WHILE loop will automatically make it faster. It won't. In fact, I challenged several colleagues to write a WHILE loop that outperformed my cursor solution (see the post here), and none of them could do it. These aren't junior folks, either; they were MVPs you've heard of, though I've intentionally kept them anonymous.

In reality, the problem is the underlying logistics of processing data row by row as opposed to set-based; it has nothing to do with whether your syntax uses an explicit cursor using DECLARE CURSOR or an implicit one using WHILE. The nice thing about using a cursor instead of a loop, though, is that you can have some control over the behavior and resources used by the cursor by overriding the default options.
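For example, here is a minimal sketch of a row-by-row loop where the cursor options are spelled out rather than left at their (heavier) defaults:

DECLARE @name SYSNAME;

DECLARE c CURSOR LOCAL FAST_FORWARD -- local scope, read-only, forward-only
    FOR SELECT name FROM sys.databases;

OPEN c;
FETCH NEXT FROM c INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
  PRINT @name; -- stand-in for whatever per-row work you need to do
  FETCH NEXT FROM c INTO @name;
END

CLOSE c;
DEALLOCATE c;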

Back to the Table of Contents


4. Changing the columns listed inside EXISTS

I see questions on Stack Overflow all the time where people complain about a slow query, and the code happens to have an EXISTS clause, like this:

WHERE EXISTS
(
  SELECT * FROM dbo.something_else...
);

Inevitably, someone will chime in and say:

"You can make it faster by changing SELECT * to SELECT 1."

While in general I agree that SELECT * shouldn't be there, it's not for performance reasons; I prefer 1 simply because it self-documents that the subquery returns no data (which may be more useful when the nesting of parentheses is much deeper than in this trivial example).

The truth of the matter is, SQL Server doesn't care all that much what you put there, because it knows no data is returned, and it is simply converting the entire clause to a Boolean expression. Want proof? Change it to:

WHERE EXISTS
(
  SELECT 1/0 FROM dbo.something_else...
);

If SQL Server cared what was in the column list, that would cause an error at runtime. (As a side note, though, the engine *does* parse and validate column names, so you can't put foo/bar there.)
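If you want a self-contained version to try, here is one - dbo.something_else is just a stand-in table created for the demonstration:

-- A stand-in table just for this demonstration.
CREATE TABLE dbo.something_else(id INT);
INSERT dbo.something_else(id) VALUES(1);

-- No divide-by-zero error is raised, because the expression in the EXISTS
-- column list is never actually evaluated.
SELECT result = 'at least one row exists'
WHERE EXISTS
(
  SELECT 1/0 FROM dbo.something_else
);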

Back to the Table of Contents


5. Adding an index to every view

I've wanted to write about this for ages. I often see this line of thinking:

"I have a view, and it's slow. I tried adding an index to it, and it's still slow!"

Adding an index to a view is not a magic turbo button - an indexed view actually serves a much narrower purpose than just generically speeding up all queries, and there is significant overhead to maintaining one, too.

To make a long story short, indexed views are typically used to make aggregates cheaper to calculate: if you frequently run something like SELECT CustomerID, SUM(OrderTotal) FROM dbo.Sales GROUP BY CustomerID, an indexed view lets SQL Server maintain that result as a small number of rows (one per unique CustomerID), rather than having to read the whole Sales table every time you run the query.
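Here is a rough sketch of what that could look like - the Sales table and its columns are assumptions made up for this example, and the SUM assumes OrderTotal is declared NOT NULL (indexed views disallow SUM over nullable expressions):

CREATE VIEW dbo.SalesByCustomer
WITH SCHEMABINDING
AS
  SELECT CustomerID,
         TotalSales = SUM(OrderTotal),
         OrderCount = COUNT_BIG(*)  -- required when an indexed view uses GROUP BY
  FROM dbo.Sales
  GROUP BY CustomerID;
GO

-- The unique clustered index is what actually materializes (and maintains) the view.
CREATE UNIQUE CLUSTERED INDEX IX_SalesByCustomer
  ON dbo.SalesByCustomer(CustomerID);

Queries that aggregate Sales by customer can then read those few rows instead (matched automatically on Enterprise Edition, or via WITH (NOEXPAND) on other editions).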

If you don't have an aggregate, and you are just materializing every single row in the Sales table (say, joined to the Customers table), then this isn't really going to be any faster than a join against the base tables, since you still have to read the same number of rows. It might be a *little* bit faster if the indexed view has fewer columns than the covering index on the base table, if the query against the base table requires lookups, or if the view has a WHERE clause that significantly reduces the number of rows that are persisted. But in those cases, focus on a narrower index and/or a filtered index on the base table, instead of trying to apply magic to a view.

Back to the Table of Contents


6. Maximizing Productivity Through Shorthand

This isn't about performance at all; it's about productivity. I get the sense that people think their lives will be easier, they'll write code faster, and they'll get to leave earlier today if they do things like this:

DECLARE @d DATETIME = SYSDATETIME();
SELECT DATEPART(d, @d);
SELECT @d+1;

Instead of this:

DECLARE @d DATETIME = SYSDATETIME();
SELECT DATEPART(DAY, @d);
SELECT DATEADD(DAY, 1, @d);

I'll admit that the code is more verbose, but what these productivity seekers fail to recognize is that the shorter syntax can cause problems later. For example, let's say you task a junior developer with updating these code samples to make two changes: (1) get the week number instead of the calendar day, and (2) get more granular by switching to DATETIME2.

The more verbose code sample, where the junior developer follows the convention by spelling out the date part and using DATEADD() explicitly, works fine:

DECLARE @d DATETIME2 = SYSDATETIME();
SELECT DATEPART(WEEK, @d);
SELECT DATEADD(DAY, 1, @d);

However, the shorthand can lead to a big mess, since the developer is quite likely to use the wrong abbreviation for week (w is actually short for weekday), and the arithmetic operation is no longer valid for the new date/time types introduced in SQL Server 2008:

DECLARE @d DATETIME2 = SYSDATETIME();
SELECT DATEPART(w, @d);
SELECT @d+1;

The first statement yields the wrong value (well, it might just happen to appear to be right, if the developer does this work during the 6th week of the year on a Friday, for example), while the second yields this error:

Msg 206, Level 16, State 2, Line 3
Operand type clash: datetime2 is incompatible with int

I've written about this one before, but please: you're still going to finish work at the same time today, so just be consistent and spell it out, even in cases where you know it won't break. This is especially important if you're blogging or furnishing answers to a wide audience, many of whom won't know about these dangers and may develop these habits and pass them along for ages before the shorthand actually bites them. Ask me how I know.

Back to the Table of Contents


7. Encapsulating Common Code into Scalar UDFs

In most programming languages, one of the first things you learn is that if you're going to do the same thing in more than one place, you modularize that and put it in its own function, class, method, what have you. At my previous job we did a lot of formatting things in T-SQL (a bad habit in its own right: that's the presentation tier's job!), and to reduce the complexity of queries that used the same formatting routines, we would hide those away in scalar functions.

One example was a function we called dbo.PrettyDate - it simply formatted date/time values into nice English words, like September 16, 2014. It was actually quite simple:

CREATE FUNCTION dbo.PrettyDate( @d DATETIME )
RETURNS VARCHAR(20)
AS
BEGIN
  RETURN
  (
    SELECT DATENAME(MONTH, @d) + ' ' 
      + CONVERT(VARCHAR(2), DATEPART(DAY, @d)) + ', ' 
      + CONVERT(CHAR(4), DATEPART(YEAR,@d))
  );
END
GO

Now, we could choose to change the following query:

SELECT [SalesOrderID], OrderDate, 
  PrettyDate = DATENAME(MONTH, OrderDate) + ' ' 
  + CONVERT(VARCHAR(2), DATEPART(DAY, OrderDate)) 
  + ', ' + CONVERT(CHAR(4), YEAR(OrderDate))
FROM [Sales].[SalesOrderHeader];

To this simplified version, using the UDF instead:

SELECT [SalesOrderID], OrderDate, 
  PrettyDate = dbo.PrettyDate(OrderDate)
FROM [Sales].[SalesOrderHeader];

If you compare the execution plans in Management Studio, you may think these are 100% equivalent:

Comparing plans in Management Studio

SSMS completely ignores the fact that a function is used, and by default does not tell you anything about which query was actually faster. If you ignore the query text and look at the plan diagram, the tooltips, and the properties, you would have no idea that these were actually two different queries, or that another object was involved.

However, if we look more closely at the plans in SQL Sentry Plan Explorer PRO (or SQL Sentry Performance Advisor), where we can see the entire T-SQL call stack (only a small portion is shown here), we can see that the query that uses the function is definitely more expensive:

The same plans in SQL Sentry Plan Explorer PRO

This shows that the function is called for every row (over 31,000 times!), and that there is a significant performance penalty involved. In fact, the day we turned on Performance Advisor in our environment was the day we became painfully aware that we needed to change all of these "handy" functions to more performance-friendly inline versions. (Not only did this improve our end user query response time, but it also improved efficiency in other areas, such as statement-level tracing.)
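For what it's worth, here is a sketch of what one of those inline rewrites could look like - an inline table-valued function with the same logic as dbo.PrettyDate, which the optimizer can expand directly into the calling query (the function name is just one I made up for this example):

CREATE FUNCTION dbo.PrettyDateInline( @d DATETIME )
RETURNS TABLE
AS
  RETURN
  (
    SELECT PrettyDate = DATENAME(MONTH, @d) + ' '
      + CONVERT(VARCHAR(2), DATEPART(DAY, @d)) + ', '
      + CONVERT(CHAR(4), DATEPART(YEAR, @d))
  );
GO

-- Called via CROSS APPLY instead of invoking a scalar function once per row:
SELECT h.[SalesOrderID], h.OrderDate, p.PrettyDate
FROM [Sales].[SalesOrderHeader] AS h
CROSS APPLY dbo.PrettyDateInline(h.OrderDate) AS p;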

Of course, the inline calculation doesn't have to be as complex as I've made it - these days (by which I mean SQL Server 2012 or better), you could just use FORMAT(OrderDate, 'MMMM dd, yyyy'). But I'd be careful with that one and compare its performance to the more verbose old-fashioned approaches, as the overhead of invoking CLR is certainly not free. And yes, this is after running the query multiple times, eliminating caching as a factor:

Demonstrating the CLR overhead inherent in FORMAT()

(Which highlights yet another bad habit: using new functions because they're new, or tidier, or fancier, without validating that they're actually better than the tried and true methods you already have. MERGE and OFFSET/FETCH are two more examples I've written about.)

Long story short: resist the temptation to modularize and ignore the allure of shiny new functionality until you've properly vetted it.

Back to the Table of Contents


8. Avoiding Foreign Keys Due to Their Overhead

So yes, foreign keys do have write-time overhead, since values inserted into child tables must be validated against the parent. However, even setting aside the argument that you can't trust the application to guarantee data integrity, trusted foreign keys can actually *help* the optimizer for read queries. And let's be honest: in most systems we do more reading than writing, or at least the read side of the workload is much more sensitive to delays than the write side. Grant Fritchey (@GFritchey) talked about this here:
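As a rough illustration of the read-side benefit (the table and column names here are made up, and assume a trusted foreign key on Orders.CustomerID with the column declared NOT NULL):

-- Hypothetical schema for illustration: dbo.Orders has a trusted foreign key
-- on CustomerID referencing dbo.Customers, and CustomerID is NOT NULL.
SELECT o.OrderID, o.OrderTotal
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
  ON o.CustomerID = c.CustomerID;
-- Since the query references no columns from dbo.Customers, and the trusted
-- constraint already guarantees every order has a matching customer, the
-- optimizer can eliminate the join and touch only dbo.Orders.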

Don't sacrifice design or data integrity to avoid a performance problem you don't even have yet. IMHO this mindset is typically a fantastic example of premature optimization.

Back to the Table of Contents


9. Shrinking the database to free up space

This is one of my favorites. People seem to care about free space on a drive as if their pro-rated bonus depended on it. Unless your disk is so tight that none of your databases has room to grow - in which case you should just go buy more disk - this isn't something you should be worrying about. The database is not going to be faster because it's smaller, and neither are your backups (a backup does not include empty pages). So freeing up space by shrinking the file doesn't really optimize anything - in fact, it is much more likely to cause new problems due to the increased fragmentation it introduces.

Besides, if the database is going to grow again, shrinking it temporarily is just completely wasted effort. Not just because you can't really use that space for anything in the meantime, but also because growing the file is expensive, and user transactions have to wait for it to finish. Depending on your auto growth settings, this can be quite disruptive, and will almost certainly happen during high volume periods. This is even worse for log files - at least with data files you can take advantage of instant file initialization.

(In all honesty, you are much better off pre-allocating the file size to something much bigger than you're going to need any time soon - this way you're prepared for any unexpected file growth.)

There are exceptions, of course: maybe the file grew uncontrollably due to abnormal activity, or because backups were not set up, and you are fairly confident it won't need to be that large again for a long time, if ever. If that's the case, then yes, feel free to return the database to its normal size, but no smaller than you expect it to grow in the foreseeable future. Before you even touch the files, though, make sure your recovery model is appropriate, that the right set of database and log backups is scheduled, and that your autogrowth settings are sensible (a good balance between minimizing the number of growth events and minimizing the time each one takes - you can partially predict this by reviewing past growth events in the default trace, as shown below).
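Here is a sketch of the kind of query you can use to review those growth events (event classes 92 and 93 are the data and log file auto-grow events in the default trace):

DECLARE @path NVARCHAR(260);

-- Find the folder of the default trace and read all of its rollover files.
SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
         CHARINDEX(CHAR(92), REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT t.DatabaseName, t.[FileName], t.StartTime,
       GrowthMB        = t.IntegerData * 8 / 1024.0,  -- IntegerData = 8 KB pages grown
       DurationSeconds = t.Duration / 1000000.0       -- Duration is reported in microseconds
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass IN (92, 93)  -- Data File Auto Grow, Log File Auto Grow
ORDER BY t.StartTime DESC;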

When you are ready to shrink your files, and you know the right target size you want to set, please use DBCC SHRINKFILE rather than DBCC SHRINKDATABASE (which is what Tasks > Shrink Database in Management Studio will do). The latter tries to free up space in every single file, and you are far better off setting a specific size on a per-file basis. Here is an example that sets the data file in your_database to 500 MB:

DBCC SHRINKFILE(N'your_database_data', 500);

If the data file is massive, you may want to perform this task in stages, since it may take a lot of work to shuffle those pages around.

If, on the other hand, I've convinced you that you're better off making the file much larger to accommodate future growth, instead of constantly fighting with your database to free up the space it needs to use, then you can grow the file using ALTER DATABASE. For example, to manually expand the file to 2 GB:

ALTER DATABASE your_database
MODIFY FILE (name = N'your_database_data', size = 2000MB);

Log files are even more prone to this problem, and many people think they can optimize their databases by making the log file smaller or trying to get rid of it altogether. Please heed the same advice here: in order to manage your log, you need to be in the right recovery model, have the right backup schedule in place, and resist the temptation to frequently try to make the log as small as possible. Some other posts to read:

Back to the Table of Contents


10. AUTO-SHRINK, maintenance plans, and AUTO-CLOSE

I've already explained why shrinking files manually is usually a bad idea; having the system do it for you automatically is even worse. Enabling auto-shrink - letting SQL Server arbitrarily decide when to reorganize your data and wastefully shrink the file(s), only for them to grow again - is just a bad idea. Using the maintenance plan option is only slightly better (since at least you know when it is likely to happen), but it's still pretty bad:

Maintenance Plans: The dreaded Shrink Database task

Many people have suggested that the auto-shrink option should actually be called auto-fragment. I tend to agree. Please don't turn either of these things on; if you have abnormal data or log file growth, deal with it manually, on a case-by-case basis. If you have to deal with this problem frequently, either get your recovery model and backup routines in check or, as I said before, buy more disk.

As for auto-close, this is an option commonly set on Web edition databases served by a 3rd party hosting company - they set all user databases to auto-close in an effort to conserve resources when your database isn't actively being used. The problem comes when someone does connect to your database: they have to wait seconds (or more, depending on a variety of factors) for the database to come completely online. You might not have much control over how your discount hosting company implements policies on its servers, but if you are hosting SQL Server yourself, I'd argue that you would very rarely want to enable this option.
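If you do control the server, it only takes a moment to find databases with either option enabled and turn them off - a minimal sketch, where your_database is a placeholder name:

-- Find databases with either option enabled.
SELECT name, is_auto_shrink_on, is_auto_close_on
FROM sys.databases
WHERE is_auto_shrink_on = 1 OR is_auto_close_on = 1;

-- Then turn them off, one database at a time.
ALTER DATABASE your_database SET AUTO_SHRINK OFF;
ALTER DATABASE your_database SET AUTO_CLOSE OFF;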