Database backups, integrity checks, and performance optimization are core routine tasks for DBAs. Client data is critical, so a DBA must manage database backups and verify their integrity; that way, if something goes wrong with a production database, it can be recovered with minimal downtime. Database integrity checks are just as important because, in the case of database corruption, the damage can be corrected with minimal downtime and data loss. Managing database performance also matters, and it is a combination of multiple tasks. Read More
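A backup-and-verify routine of this kind can be sketched in T-SQL. The database name and backup path below are placeholders for illustration, not taken from the article:

```sql
-- Back up with CHECKSUM so page checksums are validated as the backup is written
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_full.bak'
WITH CHECKSUM, COMPRESSION, INIT;

-- Verify the backup is restorable without actually restoring it
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB_full.bak'
WITH CHECKSUM;

-- Run a full integrity check against the live database
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;
```

Running `DBCC CHECKDB` on a schedule, and restoring backups to a test server periodically, is how the "integrity of the backups" claim actually gets proven.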
The database is a critical and vital part of any business or organization. Growing trends predict that 82% of enterprises expect the number of databases they run to increase over the next 12 months. A major challenge for every DBA is figuring out how to tackle massive data growth, and this is becoming a top priority. How can you increase database performance, lower costs, and eliminate downtime to give your users the best experience possible? Is data compression an option? Let's get started and see how some of the existing features can be useful in such situations.
In this article, we are going to learn how data compression can help optimize a data management solution. We'll cover the following topics:
- An overview of compression
- Benefits of compression
- An outline of data compression techniques
- Discussion of various types of data compression
- Facts about data compression
- Implementation considerations
- and more…
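As a taste of what the article covers, row and page compression in SQL Server are enabled with plain DDL. The schema and table names below are placeholders used for illustration:

```sql
-- Estimate the space savings before committing to compression
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'OrderDetails',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- Rebuild the table with page-level compression
ALTER TABLE dbo.OrderDetails
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Estimating first matters because compression trades CPU for IO; tables with little repetition in their pages may not be worth the rebuild.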
Before deploying your application to production, a performance load test is imperative for measuring future performance and ensuring that your application is production-ready.
Testing is essential for every application to make sure it works and performs according to the desired requirements. During the testing process, we can try to find any imperfections remaining in the application. There are numerous sorts of testing, such as functional testing, unit testing, acceptance testing, and integration testing. We compose functional and UI tests to see whether the application is working as per the requirements. Read More
There are a number of situations that would warrant moving database files or transaction log files from one volume to another on the same server. These may include:
- The need to format the volume, assuming it was not formatted properly when SQL Server was installed. Recall that when installing SQL Server, it is recommended to format the volumes with a 64K allocation unit size. If this is not done at the point of installation and needs to be done later, it will obviously require either taking a backup of the database first, or creating a new, properly formatted volume and moving the database to this new volume.
- The need to use a new volume when the limits of the underlying storage have been reached. A good example would be the 2TB limit of a VMware datastore, which applied as of vSphere 5.0; later versions of vSphere have much higher limits.
- The need to improve performance by managing IO. There are cases where a database is created with multiple data files all sitting on one disk until it becomes obvious, as the database grows, that you have created a “hot region” in the storage layer. One solution would be creating new data files and rebuilding clustered indexes; another would be moving the data files.
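The move itself is done with `ALTER DATABASE ... MODIFY FILE`. The database name, logical file name, and target path below are placeholders, and the database must be offline while the physical file is copied:

```sql
-- Point the system catalog at the new location (takes effect on next startup)
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, FILENAME = N'E:\Data\SalesDB.mdf');

-- Take the database offline, copy the .mdf to E:\Data at the OS level,
-- then bring the database back online
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- (copy the physical file here)
ALTER DATABASE SalesDB SET ONLINE;

-- Confirm the new path is in effect
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'SalesDB');
```

Note that `ROLLBACK IMMEDIATE` kills active transactions, so this is a maintenance-window operation.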
Performance monitoring and troubleshooting in SQL Server is a vast topic. Dynamic management views, also known as DMVs, were introduced in SQL Server 2005 and have become an essential tool for diagnosing SQL Server performance problems. We can use dynamic management views for Azure SQL Database as well; some of them differ from their SQL Server on-premises counterparts, but the logic of how they work is the same. Microsoft has very good documentation about dynamic management views; the only thing you need to be careful about is which versions and products a given dynamic management view is valid for. Read More
As you know, the main responsibility of the database administrator lies in monitoring SQL Server performance and intervening at the right time. You can find several SQL Server performance monitoring tools on the market, but sometimes we need additional information about SQL Server performance to diagnose and troubleshoot performance issues. So we must know enough about SQL Server dynamic management views to handle such issues.
A dynamic management view (DMV) is a mechanism that helps us discover SQL Server engine performance metrics. DMVs were first introduced in SQL Server 2005 and have been included in every version of SQL Server since. In this post, we will talk about a particular DMV that every database administrator should know well: sys.dm_os_wait_stats.
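As a starting point, a query along these lines (a common community pattern, not taken from the original post) lists the top waits while filtering out a few benign background wait types:

```sql
-- Top waits by total wait time, excluding some common idle/background waits
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0        AS wait_time_sec,
    signal_wait_time_ms / 1000.0 AS signal_wait_sec,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE',
                        N'BROKER_TO_FLUSH', N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```

The `signal_wait_time_ms` column is worth watching separately: a high signal-to-total ratio points at CPU pressure rather than the resource named by the wait type.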
Azure SQL DW (SQL Data Warehouse) is a massively parallel, petabyte-scale cloud solution for data warehousing based on SQL. It is highly elastic and fully managed, allowing you to set it up in minutes and scale capacity in seconds. You can scale compute and storage independently of each other. This lets you burst compute for complex analytical workloads, or scale down your warehouse for archival scenarios, and pay for what you actually use rather than locking yourself into predefined cluster configurations – obtaining better cost efficiency than traditional data warehouse solutions. Read More
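Scaling compute independently of storage is exposed as a single statement; the database name and service objective below are placeholder examples:

```sql
-- Scale the warehouse's compute up or down by changing its service objective
ALTER DATABASE MyDW
MODIFY (SERVICE_OBJECTIVE = 'DW400');
```

Pausing compute entirely (so you pay only for storage) is done through the Azure portal, PowerShell, or the REST API rather than T-SQL.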
- Windows Failover Clustering comprising two nodes.
- Two SQL Server Failover Cluster Instances. This configuration optimizes the hardware. IN01 is preferred on Node1 and IN02 is preferred on Node2.
- Port Numbers: IN01 listens on port 1435 and IN02 listens on port 1436.
- High Availability. Both nodes back up each other. Failover is automatic in case of failure.
- Quorum Mode is Node and Disk majority.
- Backup LAN in place and routine backups configured using Veritas.
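Given a setup like this, the cluster state can be confirmed from within SQL Server itself using standard server properties and DMVs; the query below is a generic sketch, not part of the original configuration:

```sql
-- Confirm the instance is clustered and which node currently owns it
SELECT SERVERPROPERTY('IsClustered')                  AS is_clustered,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS active_node;

-- List the Windows Failover Cluster nodes and their status
SELECT NodeName, status_description, is_current_owner
FROM sys.dm_os_cluster_nodes;
```

After a failover, re-running the first query shows the new owning node, which is a quick way to verify that automatic failover worked as described above.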
In this post, we will discuss the SQL Server lock mechanism and how to monitor SQL Server locking with the standard dynamic management views. Before we start to explain the SQL Server lock architecture, let's take a moment to describe what an ACID (Atomicity, Consistency, Isolation, and Durability) database is. ACID can be explained as a piece of database theory: for a database to be called a relational database, it has to meet the Atomicity, Consistency, Isolation, and Durability requirements. Now, we will explain these requirements briefly.
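The DMV at the center of that lock monitoring is sys.dm_tran_locks; a minimal query (a generic sketch, not taken from the post) looks like this:

```sql
-- Current lock requests in this database, one row per lock
SELECT tl.request_session_id,
       tl.resource_type,
       tl.request_mode,     -- e.g. S, X, IX, U
       tl.request_status    -- GRANT, WAIT, or CONVERT
FROM sys.dm_tran_locks AS tl
WHERE tl.resource_database_id = DB_ID()
ORDER BY tl.request_session_id;
```

Rows with `request_status = 'WAIT'` are the interesting ones: they identify sessions blocked behind a lock someone else holds.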
When we begin to think about migrating our on-premises databases to Azure SQL, we have to decide on a proper purchase model, a service tier, and a performance level. Before starting the Azure SQL migration process, we have to find logical and provable answers to the following questions:
- Which purchase model is suitable for my apps and business requirements?
- How much budget do I need?
- Which performance level meets my requirements?
- Can I achieve the acceptable performance of my apps?