
Common DBA Mistakes in MS SQL Server

In this article, we are going to review DBA mistakes whose consequences were quite tangible and which I had to deal with myself.

The purpose of the article is to prevent users from repeating these mistakes. Sometimes, a bad experience is even more valuable than a positive one.

  1. Percentage increment of database files
    Since growing a database file is quite a resource-intensive operation, it may seem like a good idea to set this growth as a percentage. I agree that many guidelines recommend setting a fixed increment in MB rather than a percentage, but they do not explain the reasons. Based on practice, it is not recommended to set the increment of a database file above 1 GB, because MS SQL Server will allocate 1 GB two times rather than 2 GB at once.
    Also, if you allocate less than 32 MB, sooner or later the database will simply slow down. So, it is better to set a fixed increment of 32 to 1024 MB on database files. But why exactly can't the increment be specified as a percentage? It turns out that as soon as the computed increment becomes less than 1 MB, the DBMS rounds this value down to 0 MB and stops growing the file. As a result, the system is down. To find out by how much a file should grow, simply perform a daily analysis to check how many MB each file gains and set an appropriate value in the range from 32 to 1024 MB. We can collect statistics on the growth of database files in the following way.
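
    A minimal sketch of such a collection, assuming a helper table dbo.FileSizeHistory and a daily SQL Agent job to run the insert (both are my assumptions, not part of the original setup): snapshot the current file sizes from sys.master_files and compare the day-to-day deltas.

    -- one-time setup: history table for daily file-size snapshots
    if OBJECT_ID(N'dbo.FileSizeHistory') is null
        create table dbo.FileSizeHistory
        (
            CollectedAt  datetime2    not null default (SYSDATETIME()),
            DatabaseName sysname      not null,
            LogicalName  sysname      not null,
            FileType     nvarchar(60) not null,
            SizeMB       bigint       not null
        );

    -- run daily (e.g. from a SQL Agent job): record the current size of every file
    insert into dbo.FileSizeHistory (DatabaseName, LogicalName, FileType, SizeMB)
    select DB_NAME(mf.database_id)
         , mf.name
         , mf.type_desc
         , CAST(mf.size as bigint) * 8 / 1024 -- size is stored in 8 KB pages
    from sys.master_files as mf
    where mf.database_id>4 -- system databases are excluded
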
  2. Too many foreign keys on a table
    Have you ever checked the execution plan when deleting even a single row from a table that is referenced by hundreds of other tables? You will be surprised how many nested loops it contains. All of them are foreign key checks. That is why, if the tables are large (over a million records), it is better to disable the foreign keys on the table whose data will be deleted, then delete all the necessary and related data, and then re-enable the foreign keys, as in the sketch below. A similar situation occurs with cascading updates and deletes. If there are many foreign key references (hundreds), deleting even one row can lead to long and very extensive blocking.
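
    A minimal sketch of that sequence (dbo.BigTable and the delete itself are hypothetical; the re-enable statements must be run per child table): first generate the statements that disable every foreign key referencing the table, run them, perform the deletes, and then re-enable the keys with validation.

    -- generate DISABLE statements for all foreign keys that reference dbo.BigTable
    select N'alter table ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
         + N' nocheck constraint ' + QUOTENAME(fk.name) + N';'
    from sys.foreign_keys as fk
    inner join sys.objects as o on o.[object_id]=fk.parent_object_id
    where fk.referenced_object_id=OBJECT_ID(N'dbo.BigTable');

    -- ... run the generated statements, then delete the parent and related child rows ...

    -- afterwards, re-enable each key and re-validate the existing rows:
    -- alter table <child table> with check check constraint <foreign key name>;
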
  3. Many needless indexes
    Guidelines very often say that when creating foreign keys, you should also build indexes on them, especially when those keys are used for joins. However, you need to verify that these indexes are actually used; otherwise, they will only slow down data modification operations. To find indexes that have not been used for over a year, use the following query:

    select DB_NAME(t.database_id)		as [DBName]
    	 , SCHEMA_NAME(obj.schema_id)	as [SchemaName]
    	 , OBJECT_NAME(t.object_id)		as [ObjectName]
    	 , obj.Type						as [ObjectType]
    	 , obj.Type_Desc				as [ObjectTypeDesc]
    	 , ind.name						as [IndexName]
    	 , ind.Type						as IndexType
    	 , ind.Type_Desc				as IndexTypeDesc
    	 , ind.Is_Unique				as IndexIsUnique
    	 , ind.is_primary_key			as IndexIsPK
    	 , ind.is_unique_constraint		as IndexIsUniqueConstraint
    	 , t.[Database_ID]
    	 , t.[Object_ID]
    	 , t.[Index_ID]
    	 , t.Last_User_Seek
    	 , t.Last_User_Scan
    	 , t.Last_User_Lookup
    	 , t.Last_System_Seek
    	 , t.Last_System_Scan
    	 , t.Last_System_Lookup
    from sys.dm_db_index_usage_stats as t
    inner join sys.objects as obj on t.[object_id]=obj.[object_id]
    inner join sys.indexes as ind on t.[object_id]=ind.[object_id] and t.index_id=ind.index_id
    where (last_user_seek	is null or last_user_seek		<dateadd(year,-1,getdate()))
    and (last_user_scan		is null or last_user_scan		<dateadd(year,-1,getdate()))
    and (last_user_lookup	is null or last_user_lookup		<dateadd(year,-1,getdate()))
    and (last_system_seek	is null or last_system_seek		<dateadd(year,-1,getdate()))
    and (last_system_scan	is null or last_system_scan		<dateadd(year,-1,getdate()))
    and (last_system_lookup is null or last_system_lookup	<dateadd(year,-1,getdate()))
    and t.database_id=DB_ID() and t.database_id>4 and t.[object_id]>0 -- run in each user database; system databases are excluded

     

  4. Inefficient use of resources
    It is often recommended to store the transaction log and the database data files on different storage devices. If you use RAID 10 with 4 or more SSD disks, there is no point in isolating the files from each other. For higher speed, if necessary, the tempdb database can be placed on a disk carved out of RAM (a RAM disk). Also, giving the DBMS too much RAM will only make it fill all that memory with irrelevant query plans, so it is worth capping the memory available to SQL Server, for example as in the sketch below.
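
    A minimal sketch of capping the memory with the standard 'max server memory (MB)' option; the 65536 MB value is only an illustrative assumption and should be derived from the actual workload and what the OS and other processes need.

    -- leave headroom for the OS and other processes; the value below is illustrative
    exec sys.sp_configure N'show advanced options', 1;
    reconfigure;
    exec sys.sp_configure N'max server memory (MB)', 65536;
    reconfigure;
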
  5. Bad backups
    It is necessary not only to check the created backups, but also to move them to a test stand and restore them there. All of this needs to be automated, with subsequent notification of administrators about both failed and successful restores; the verification part can look like the sketch below.
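
    A minimal sketch of the verification part, with a hypothetical database name and backup path; note that RESTORE VERIFYONLY only checks the backup media and does not replace an actual restore on a test stand.

    -- back up with checksums so page errors are caught at backup time
    backup database [MyDatabase]
    to disk = N'\\backup-share\MyDatabase.bak' -- hypothetical path
    with checksum, compression, init;

    -- quick integrity check of the backup file (still not a full test restore)
    restore verifyonly
    from disk = N'\\backup-share\MyDatabase.bak'
    with checksum;
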
  6. Bad fault tolerance
    Before building a cluster of two or more servers, you need to make sure that the data storage system is also fault tolerant. If the storage fails, the entire fault tolerance is reduced to zero.
  7. Complicated diagnostics without simple checks
    If the system is down, first check the MS SQL Server logs and only then dig deeper. Conduct the simple checks first and proceed to complex diagnostics afterwards; the error log can be read directly from T-SQL, as in the sketch below.
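
    A sketch of the first simple check; sp_readerrorlog is an undocumented but commonly available system procedure (parameters: log number, log type, optional search string), so treat this as an assumption rather than a guaranteed interface.

    -- read the current error log (0) of the SQL Server log type (1), filtering on 'error'
    exec master.dbo.sp_readerrorlog 0, 1, N'error';
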
  8. Lost tables
    Tables can grow with unnecessary data that needs to be archived into a separate database or deleted. In addition, some tables may no longer be used at all. Keep an eye on them, for example with a size report like the sketch below.
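
    A minimal sketch of such a size report, run in the database being reviewed (only standard catalog views are used): the largest tables by reserved space are the first candidates for archiving or cleanup.

    select SCHEMA_NAME(o.schema_id)                  as [SchemaName]
         , o.name                                    as [TableName]
         , SUM(ps.reserved_page_count) * 8 / 1024    as [ReservedMB]
         , SUM(case when ps.index_id in (0, 1) then ps.row_count else 0 end) as [RowCount]
    from sys.dm_db_partition_stats as ps
    inner join sys.objects as o on o.[object_id]=ps.[object_id]
    where o.is_ms_shipped=0 and o.type='U' -- user tables only
    group by o.schema_id, o.name
    order by [ReservedMB] desc;
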

These are the situations I have come across, and I would recommend not repeating the mistakes described above.

If you would like to share your experience or similar mistakes made while administering databases, feel free to let me know – I will be glad to discuss them.
