In my previous article, we explored how to transform JSON into a relational data set and analyzed the main steps of working with JSON. Now, I am going to describe how you can modify JSON data using the functions built into SQL Server.
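As a quick illustration, the built-in JSON_MODIFY function (available since SQL Server 2016) updates or adds properties inside a JSON document. The variable and property names below are illustrative only:

```sql
-- A minimal sketch of modifying JSON stored in SQL Server with the
-- built-in JSON_MODIFY function; the JSON keys are assumed examples.
DECLARE @json NVARCHAR(MAX) = N'{"name": "Orange", "price": 1.20}';

-- Update an existing property
SET @json = JSON_MODIFY(@json, '$.price', 1.45);

-- Add a new property
SET @json = JSON_MODIFY(@json, '$.stock', 250);

SELECT @json; -- the document now contains the updated price and the new stock property
```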
This article is the second of three devoted to a particular combination of database security configurations.
In my previous article, I presented a scenario in which we were able to compromise data in a SQL Server database.
I would like to note that knowledge of this configuration combination is critical. In this article, I am going to provide further information and explain why this issue is so important.
JSON is one of the most widely used data interchange formats. It serves as a storage format in several NoSQL solutions, in particular in Microsoft Azure DocumentDB. In my opinion, JSON is now even more popular than XML. One of the reasons for its popularity is its simpler form and better readability compared with XML. Naturally, there was a long-standing need for an option to process data in this format within SQL Server, and SQL Server 2016 introduced this capability.
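A brief sketch of the JSON support introduced in SQL Server 2016; the JSON document and column names here are assumed for illustration:

```sql
-- ISJSON validates a document; OPENJSON ... WITH shreds it into rows.
DECLARE @json NVARCHAR(MAX) =
    N'[{"id": 1, "city": "Prague"}, {"id": 2, "city": "Oslo"}]';

SELECT ISJSON(@json) AS IsValidJson;   -- returns 1 for well-formed JSON

SELECT id, city
FROM OPENJSON(@json)
     WITH (id INT '$.id', city NVARCHAR(50) '$.city');
```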
In this article, we are going to talk about the nvarchar data type. We will explore how SQL Server stores this data type on disk and how it is processed in RAM. We will also examine how the size of nvarchar may affect performance.
Actual data size: nchar vs nvarchar
We use nvarchar when the sizes of the column's data entries are likely to vary considerably. The storage size, in bytes, is twice the actual length of the data entered, plus 2 bytes. This allows us to save disk storage compared with the nchar data type. Let us consider the following example: we create two tables, one containing an nvarchar column and the other an nchar column. The size of each column is 2000 characters (4000 bytes).
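The comparison described above can be sketched as follows; the table and column names are assumed:

```sql
-- Two tables with a 2000-character (4000-byte) column each.
CREATE TABLE dbo.FixedLength    (Val NCHAR(2000));
CREATE TABLE dbo.VariableLength (Val NVARCHAR(2000));

INSERT INTO dbo.FixedLength    (Val) VALUES (N'Hello');
INSERT INTO dbo.VariableLength (Val) VALUES (N'Hello');

-- DATALENGTH returns the number of bytes actually stored:
SELECT DATALENGTH(Val) FROM dbo.FixedLength;    -- 4000 (padded to full width)
SELECT DATALENGTH(Val) FROM dbo.VariableLength; -- 10 (5 characters x 2 bytes;
                                                -- the 2-byte length overhead is
                                                -- kept in the row, not reported here)
```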
A SQL query describes the expected result, not the way to obtain it. The set of specific steps the server must take to return the result is called the query execution plan, and it is built by the optimizer. The choice of plan affects execution speed, which makes it one of the most important elements of query performance analysis.
An execution plan comprises operators, together with their properties, interrelated in a tree structure. Each operator is responsible for a single logical or physical operation, and together they produce the result described in the query text. Inside the tree, operators are represented by class objects in SQL Server's memory. Server users (that is, you and me) see a description generated in XML format with a specific schema, which the SQL Server Management Studio (SSMS) environment displays graphically.
There are many different plan operators and even more properties, and new ones appear from time to time. This article does not attempt to describe the entire variety of operators. Instead, I would like to share the most interesting additions to this subject and to recall some old but useful elements.
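The XML that SSMS renders graphically can also be obtained directly; a small sketch (the sample query is arbitrary):

```sql
-- Ask the server for the estimated plan as XML instead of executing the query.
SET SHOWPLAN_XML ON;
GO
SELECT name FROM sys.objects WHERE type = 'U';
GO
SET SHOWPLAN_XML OFF;
GO

-- Alternatively, read plans already in the cache through DMVs:
SELECT TOP (5) qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp;
```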
This article is a short review of the main scheduled maintenance tasks for a database of a 24/7 information system that has no downtime, as well as approaches to performing them in MS SQL Server.
Any comments and updates to the article are much appreciated.
- If data is being changed in one transaction, reading that data (in another transaction or outside a transaction) does not wait for the first transaction to finish and returns rows from uncommitted transactions.
- If data is being read in one transaction, updates of that data in another transaction do not wait for the first transaction to finish.
- Shared locks are not used; this is identical to applying the NOLOCK hint to all SELECT statements under Read Committed.
- Exclusive locks are acquired during statement execution and released at the end of the transaction.
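The behavior listed above can be sketched with two sessions; the table dbo.Accounts is assumed for illustration:

```sql
-- Session 1: change data but do not commit yet.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
-- (transaction intentionally left open)

-- Session 2: the read does not wait for Session 1 and sees the
-- uncommitted ("dirty") value.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

-- Equivalent per-table form under the default Read Committed level:
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;
```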
It is important for a database administrator to know when a disk is running out of space. It is better to automate this check than to perform it manually on each server.
In this article, I am going to describe how to implement automatic daily collection of data about logical drives and database files.
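The core of such a collection job can be sketched with sys.dm_os_volume_stats; scheduling it and the destination history table are left as assumptions:

```sql
-- Free and total space on every logical drive that hosts a database file.
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes     / 1048576 AS TotalMB,
    vs.available_bytes / 1048576 AS FreeMB,
    GETDATE()                    AS CollectedAt
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
-- Insert this result into a history table and run the query daily,
-- e.g. as a SQL Server Agent job.
```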