T-SQL Best Practices

In this article, we will review some best practices for writing T-SQL queries. Badly written queries can cause performance and I/O problems, so we should keep a few rules in mind whenever we write T-SQL.

Don’t use “SELECT *” in queries

When we write a query, we should select only the columns we actually need, because this reduces I/O and network round trips. If you need only one or two columns but select every column of the table, you waste resources and hurt SQL Server performance.
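As a minimal sketch (assuming the AdventureWorks sample database, since the article's later examples reference SalesOrderDetail):

```sql
-- Avoid: returns every column, increasing I/O and network traffic
SELECT *
FROM Sales.SalesOrderDetail;

-- Prefer: select only the columns the application actually needs
SELECT SalesOrderID, OrderQty, UnitPrice
FROM Sales.SalesOrderDetail;
```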

Try to avoid “TABLE OR INDEX SCAN” in queries

Performance is a critical issue for SQL Server, so we should pay attention to the performance characteristics of our queries. Well-written queries consume minimal resources. Indexes improve query performance, so we need to use them effectively.

Tip: When we see an index scan in the execution plan, it means SQL Server reads all of the index pages and then finds the relevant records. An index seek, by contrast, reads only the required pages and is much more selective than an index scan.

In this sample, we will look at two queries and compare their execution plans.

When we look at the first query's execution plan, we see a clustered index scan. But when we rewrite the same query with a seekable predicate, it uses a clustered index seek and performs better than the first query.
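The original queries are not reproduced here, but the scan-versus-seek difference can be illustrated with a hypothetical pair of queries against AdventureWorks (the table and predicate are illustrative assumptions, not the article's original example):

```sql
-- A leading wildcard prevents the optimizer from seeking on the index,
-- so the plan falls back to an index scan
SELECT ProductID, Name
FROM Production.Product
WHERE Name LIKE '%Chain%';

-- A seekable (sargable) predicate allows an index seek instead
SELECT ProductID, Name
FROM Production.Product
WHERE Name LIKE 'Chain%';
```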

Tip: When an index covers all of the columns referenced by a query, it is called a “covering index”.

The basic purpose of this type of index is to generate less I/O. To achieve this, queries have to use an index seek.

Now, we will look at the query below and activate “Include Actual Execution Plan” in SQL Server Management Studio.

When we execute the query, we see an index scan and an index recommendation in the execution plan.

In this step, we will apply index recommendation because we want to change the “index scan” operation to “index seek”.

When we look at the index “CREATE” statement, it covers all of the columns referenced by the query. When we execute the same query again, the execution plan changes.
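The recommended index itself is not shown here, but a covering index of this kind typically looks like the following (the index name and columns are illustrative assumptions): the key column supports the seek, and the INCLUDE columns let SQL Server answer the query from the index alone, without key lookups.

```sql
-- Hypothetical covering index for a query that filters on ProductID
-- and returns OrderQty and UnitPrice
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID
ON Sales.SalesOrderDetail (ProductID)
INCLUDE (OrderQty, UnitPrice);

-- This query can now be satisfied entirely by an index seek
SELECT ProductID, OrderQty, UnitPrice
FROM Sales.SalesOrderDetail
WHERE ProductID = 776;
```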

Tip: Don’t wrap columns in functions in the “WHERE” clause, because a function applied to a column makes the predicate non-sargable and reduces query performance.

Use TRY-CATCH block for error handling

Sometimes our queries return errors, and we need to handle them. In particular, when we get an error inside a transaction block, we have to roll back that transaction, because open transactions can cause serious problems. Therefore, we should use TRY-CATCH blocks. This sample shows the basic usage of a TRY-CATCH block.
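The basic pattern looks like this (the UPDATE statement is an illustrative placeholder):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- hypothetical work inside the transaction
    UPDATE Sales.SalesOrderDetail
    SET OrderQty = OrderQty + 1
    WHERE SalesOrderDetailID = 1;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- roll back any transaction left open by the failed statement
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    PRINT ERROR_MESSAGE();
END CATCH;
```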

But in some cases, a “TRY-CATCH” block cannot catch the error. For these cases, we need to use “SET XACT_ABORT ON”. Now let’s take a look at the script below. When we execute this query, we will get an error.

We will check the number of open transactions with @@TRANCOUNT.

To avoid this situation, we will add the “SET XACT_ABORT ON” statement at the beginning of the query and re-execute it. We will get the same error, but the open transaction will be rolled back automatically when the error occurs.
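A sketch of the behavior (the failing statement is an illustrative assumption; any run-time error that aborts the batch behaves the same way):

```sql
SET XACT_ABORT ON;

BEGIN TRANSACTION;
    -- this conversion error aborts the batch, and because XACT_ABORT is ON,
    -- the open transaction is rolled back automatically
    SELECT CONVERT(int, 'not a number');
COMMIT TRANSACTION;
GO

-- with XACT_ABORT ON, no transaction is left open after the error
SELECT @@TRANCOUNT AS OpenTransactions;
```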

Use SCHEMABINDING when creating views or functions

Sometimes, we want to protect the objects used in our views or functions from being changed. Assume we create a view, and then a developer or DBA drops a table that the view references. To prevent this, we can use SCHEMABINDING.

We will create a simple example in which SQL Server will not allow us to drop the table or alter its columns, because we will use SCHEMABINDING.
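A minimal sketch of such a view (the view name and columns are illustrative assumptions; note that SCHEMABINDING requires two-part object names):

```sql
CREATE VIEW Production.vw_ProductPrices
WITH SCHEMABINDING
AS
SELECT ProductID, Name, ListPrice
FROM Production.Product;  -- two-part name required with SCHEMABINDING
GO

-- While the view exists, DROP TABLE Production.Product, or an ALTER that
-- changes the referenced columns, fails with an error instead of silently
-- breaking the view.
```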

Avoid index hints

Some SQL queries include an index hint. An index hint overrides the query optimizer, because it forces the optimizer to use the specified index even when a cheaper access path exists.

This query performs 371,818 logical reads.

But when we remove the index hint, the query performs far fewer logical reads.
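The original query is not reproduced here, but the pattern looks like the following hypothetical pair (the hinted index name is an assumption; forcing a non-covering index typically adds a key lookup per row, which inflates logical reads):

```sql
-- Hinted: the optimizer is forced to use the named index even when
-- it is a poor choice for this predicate
SELECT SalesOrderID, OrderQty
FROM Sales.SalesOrderDetail WITH (INDEX (IX_SalesOrderDetail_ProductID))
WHERE ProductID > 0;

-- Unhinted: the optimizer is free to choose the cheaper access path
SELECT SalesOrderID, OrderQty
FROM Sales.SalesOrderDetail
WHERE ProductID > 0;
```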

Avoid subqueries

Badly written queries damage SQL Server performance. When you write a correlated subquery, that subquery can be executed once for every row processed by the outer query.

Now we will look at this example.

We can rewrite the above query like this.
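The original example is not reproduced here, but the typical rewrite (sketched against AdventureWorks as an assumed example) replaces a per-row correlated subquery with a join and grouping:

```sql
-- Correlated subquery: evaluated once per outer row
SELECT soh.SalesOrderID,
       (SELECT SUM(sod.OrderQty)
        FROM Sales.SalesOrderDetail sod
        WHERE sod.SalesOrderID = soh.SalesOrderID) AS TotalQty
FROM Sales.SalesOrderHeader soh;

-- Rewritten as a join with GROUP BY: one pass over each table
SELECT soh.SalesOrderID, SUM(sod.OrderQty) AS TotalQty
FROM Sales.SalesOrderHeader soh
INNER JOIN Sales.SalesOrderDetail sod
    ON sod.SalesOrderID = soh.SalesOrderID
GROUP BY soh.SalesOrderID;
```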

Use the EXISTS operator instead of IN

A common T-SQL best-practice note is to use the EXISTS operator instead of IN, because on huge tables the difference becomes significant.

We can rewrite this query like this:

This query will perform better than the previous one.
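The original queries are not reproduced here, but the pattern looks like this hypothetical pair (tables and columns are illustrative assumptions): EXISTS can stop probing the inner table as soon as one matching row is found.

```sql
-- IN with a subquery
SELECT p.ProductID, p.Name
FROM Production.Product p
WHERE p.ProductID IN (SELECT sod.ProductID
                      FROM Sales.SalesOrderDetail sod);

-- EXISTS: the inner query can short-circuit on the first match
SELECT p.ProductID, p.Name
FROM Production.Product p
WHERE EXISTS (SELECT 1
              FROM Sales.SalesOrderDetail sod
              WHERE sod.ProductID = p.ProductID);
```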

Use small batches in delete and update statements

A big update or delete statement can be dangerous: the transaction log can grow, the statement can consume a lot of resources, and it can hold exclusive locks that block concurrent users. For this reason, we should change our update or delete methodology. This update query will take a long time and create an exclusive lock on SalesOrderDetail.

We can change the previous query as follows.
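The original statements are not reproduced here, but the batching pattern can be sketched like this (the table, predicate, and batch size are illustrative assumptions): each batch is a short transaction, so locks are released between batches and the transaction log can reuse space.

```sql
-- Delete in batches of 10,000 rows instead of one huge statement
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM Sales.SalesOrderDetail
    WHERE ModifiedDate < '20080101';

    -- @@ROWCOUNT is 0 once no qualifying rows remain, ending the loop
    SET @rows = @@ROWCOUNT;
END;
```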




Esat Erkeç

Esat Erkec is a SQL Server professional who began his career as a software developer over 8 years ago. He is a SQL Server Microsoft Certified Solutions Expert. Most of his career has focused on SQL Server database administration and development. His current interests are in database administration and Business Intelligence.