Implementing Incremental Load using Change Data Capture in SQL Server


This article will be of interest to those who often have to deal with data integration.


Assume there is a database in which users constantly modify data (updating and removing rows). Perhaps this database is used by a large application that does not allow changes to the table structure. The task is to periodically load data from this database into a database on another server. The simplest approach is to clear the target database and reload everything from the source. You can use this method as long as the load time is acceptable and fits within the allotted window. But what if a full load takes several days? In addition, unstable communication channels can cause the load to stop and restart. If you face these obstacles, I suggest considering an incremental load algorithm: only the changes that have occurred since the latest load are transferred.


In SQL Server 2008, Microsoft introduced a change tracking mechanism called Change Data Capture (CDC). Broadly speaking, enabling CDC for a table creates a system change table in the same database, named after the original one: the schema is always ‘cdc’, and the table name is the original schema name, then ‘_’, then the original table name, then the suffix ‘_CT’. For example, for the table dbo.Example the change table is cdc.dbo_Example_CT. It stores every modification made to the tracked table.

To dig deeper into CDC, let’s walk through an example. But first, make sure that SQL Server Agent is running on the test instance: CDC relies on Agent jobs to capture and clean up changes.

Next, consider a script that creates a database and a test table, populates the table with data, and enables CDC for this table.
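A minimal sketch of such a script (the database name TestDB, the table dbo.Example, and its columns are hypothetical names chosen for this example):

```sql
-- Create a test source database and table (names are illustrative)
CREATE DATABASE TestDB;
GO
USE TestDB;
GO
CREATE TABLE dbo.Example (
    ID   INT          NOT NULL PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL
);
INSERT INTO dbo.Example (ID, NAME)
VALUES (1, 'Alpha'), (2, 'Beta'), (3, 'Gamma');

-- Enable CDC, first at the database level, then for the table.
-- This creates the change table cdc.dbo_Example_CT and the capture jobs.
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Example',
    @role_name     = NULL;  -- no gating role for reading changes
GO
```

Note that sys.sp_cdc_enable_table requires SQL Server Agent to be running, since it creates the capture and cleanup jobs.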

To keep the example simple, we will use a single SQL Server instance rather than distributing the source and target databases to different servers.

Now, let’s look at what appears in the dbo.Example and cdc.dbo_Example_CT tables after executing this script. Note that CDC is asynchronous: changes show up in the change tables only after a short delay.
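For instance, after making a few changes and giving the capture job a moment to run, both tables can be inspected (table names follow the example above):

```sql
USE TestDB;
GO
-- Make some changes to the tracked table
UPDATE dbo.Example SET NAME = 'Alpha-2' WHERE ID = 1;
DELETE FROM dbo.Example WHERE ID = 3;

-- CDC capture is asynchronous; give the Agent job time to catch up
WAITFOR DELAY '00:00:10';

SELECT * FROM dbo.Example;          -- current state of the source table
SELECT * FROM cdc.dbo_Example_CT;   -- captured change history
```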

Let’s consider in detail the structure of the table in which change tracking is stored. The __$start_lsn and __$seqval fields are the LSN (log sequence number) of the transaction and the sequence number of the change within that transaction, respectively. These fields have an important property: we can be sure that a record with a higher LSN was committed later. Thanks to this property, we can easily get the latest state of each record by numbering the rows with a window function and filtering our selection with the condition __$rn = 1.

The __$operation field contains the transaction code:

  • 1 – the record is deleted
  • 2 – the record is inserted
  • 3, 4 – the record is updated: code 3 marks the row as it was before the update, code 4 the row after.

In addition to the service fields prefixed with ‘__$’, the change table completely duplicates the columns of the original table. This information is enough for us to proceed to the incremental load.

Setting up a database for data loading

Create a table in our test target database, into which data will be loaded, as well as an auxiliary table to store the load log.
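A sketch of the target-side objects, assuming the names from the example above (the target database name TargetDB is hypothetical):

```sql
-- Target database with a copy of the table and a load-log table
CREATE DATABASE TargetDB;
GO
USE TargetDB;
GO
CREATE TABLE dbo.Example (
    ID   INT          NOT NULL PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL
);
CREATE TABLE dbo.LOG_CDC (
    TABLE_NAME VARCHAR(255) NOT NULL,               -- 'SERVER.DB.SCHEMA.TABLE'
    DT         DATETIME     NOT NULL DEFAULT GETDATE(),  -- when the load ran
    LSN        BINARY(10)   NOT NULL                -- last loaded __$start_lsn
);
```

BINARY(10) is used for the LSN column because that is the type of __$start_lsn in CDC change tables.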

I would like to draw your attention to the fields of the LOG_CDC table:

  • TABLE_NAME stores information about which table was loaded (in the future it will be possible to load several tables, from different databases or even different servers); the format is ‘SERVER_NAME.DB_NAME.SCHEMA_NAME.TABLE_NAME’.
  • DT is the date and time of the load. It is not required for the incremental load itself, but it is useful for auditing.
  • LSN – after a table is loaded, we need to remember where the next load should start. Accordingly, after each load, we write the latest (maximum) __$start_lsn into this column.

Algorithm for data loading

As described above, we can get the latest state of each record using a window function. If we know the LSN of the latest load (that is, at least one complete load has already taken place), then on the next load we can filter from the source only the changes whose LSN is higher than the stored one:
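A sketch of such a query, assuming the example tables above and a @LSN variable holding the stored value (running from the target database, the change table is referenced by its three-part name):

```sql
-- Latest state of each row changed since @LSN (names are illustrative)
DECLARE @LSN BINARY(10);
SELECT @LSN = MAX(LSN)
FROM dbo.LOG_CDC
WHERE TABLE_NAME = 'SERVER.TestDB.dbo.Example';

SELECT ID, NAME, __$operation
FROM (
    SELECT ID, NAME, __$operation,
           ROW_NUMBER() OVER (PARTITION BY ID
                              ORDER BY __$start_lsn DESC, __$seqval DESC) AS __$rn
    FROM TestDB.cdc.dbo_Example_CT
    WHERE __$operation <> 3           -- skip the 'before update' image
      AND __$start_lsn > @LSN
) AS c
WHERE __$rn = 1;
```

Ordering by __$start_lsn and __$seqval descending and keeping only __$rn = 1 yields exactly one row per key: its final state since the last load.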

If no load LSN is stored yet, we instead select all the records from the source for a complete load:

Thus, depending on the @LSN value, this query returns either all the latest changes (bypassing the intermediate ones), each marked as removed or not, or all the data from the original table with status 2 (new record) added; this field is used only to unify the two selections. With this query, we can easily implement both the full load and the incremental load using the MERGE command (available since SQL Server 2008).
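One way to combine the two selections into a single query, assuming the names used above:

```sql
-- Unified source query: incremental changes if @LSN is set, full load otherwise
SELECT ID, NAME, __$operation
FROM (
    SELECT ID, NAME, __$operation,
           ROW_NUMBER() OVER (PARTITION BY ID
                              ORDER BY __$start_lsn DESC, __$seqval DESC) AS __$rn
    FROM TestDB.cdc.dbo_Example_CT
    WHERE __$operation <> 3
      AND __$start_lsn > @LSN         -- yields no rows when @LSN is NULL
) AS c
WHERE __$rn = 1
UNION ALL
SELECT ID, NAME, 2 AS __$operation    -- whole table, marked as 'inserted'
FROM TestDB.dbo.Example
WHERE @LSN IS NULL;                   -- only on the very first (full) load
```

The two branches are mutually exclusive: when @LSN is NULL, the comparison __$start_lsn > @LSN evaluates to UNKNOWN and the first branch returns nothing; when @LSN is set, the second branch is filtered out.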

To avoid contention with concurrent processes and to load consistent data across several tables (in the future, we will load multiple tables, and there may be relationships between them), I suggest creating a database snapshot on the source database (a feature available since SQL Server 2005).
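A snapshot can be created like this (the logical file name and the .ss file path are hypothetical; the NAME value must match the logical name of the source database’s data file, and every data file of the source must be listed):

```sql
-- Point-in-time, read-only snapshot of the source database
CREATE DATABASE TestDB_Snapshot
ON (NAME = TestDB, FILENAME = 'C:\Data\TestDB_Snapshot.ss')
AS SNAPSHOT OF TestDB;
```

All reads during the load then go against TestDB_Snapshot, so they see a single consistent point in time regardless of ongoing changes in TestDB; the snapshot is dropped with DROP DATABASE once the load completes.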

The full text of the load is as follows:
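Since the original listing is not reproduced here, below is a sketch of how the pieces fit together, under the same hypothetical names (run in the target database; error handling, multiple tables, and dropping the snapshot are omitted):

```sql
-- Sketch of the load: read from the snapshot, MERGE into the target, log the LSN
DECLARE @LSN BINARY(10);
SELECT @LSN = MAX(LSN)
FROM dbo.LOG_CDC
WHERE TABLE_NAME = 'SERVER.TestDB.dbo.Example';

MERGE dbo.Example AS tgt
USING (
    SELECT ID, NAME, __$operation
    FROM (
        SELECT ID, NAME, __$operation,
               ROW_NUMBER() OVER (PARTITION BY ID
                                  ORDER BY __$start_lsn DESC, __$seqval DESC) AS __$rn
        FROM TestDB_Snapshot.cdc.dbo_Example_CT
        WHERE __$operation <> 3
          AND __$start_lsn > @LSN
    ) AS c
    WHERE __$rn = 1
    UNION ALL
    SELECT ID, NAME, 2 AS __$operation
    FROM TestDB_Snapshot.dbo.Example
    WHERE @LSN IS NULL
) AS src
ON tgt.ID = src.ID
WHEN MATCHED AND src.__$operation = 1 THEN      -- deleted at the source
    DELETE
WHEN MATCHED THEN                               -- updated (or re-inserted)
    UPDATE SET tgt.NAME = src.NAME
WHEN NOT MATCHED BY TARGET AND src.__$operation <> 1 THEN
    INSERT (ID, NAME) VALUES (src.ID, src.NAME);

-- Remember where the next load should start.
-- A real script must also handle the case where no changes were captured yet
-- (e.g., by reading sys.fn_cdc_get_max_lsn() while connected to the source).
INSERT INTO dbo.LOG_CDC (TABLE_NAME, DT, LSN)
SELECT 'SERVER.TestDB.dbo.Example', GETDATE(), MAX(__$start_lsn)
FROM TestDB_Snapshot.cdc.dbo_Example_CT;
```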


Alexey Kurenkov

Alexey is a certified Microsoft expert who has been working with SQL Server for over 15 years. He is engaged in developing and administering databases.