Cloud Migration Guide:
SQL Server to Azure

Accelerate the migration of your databases to Microsoft Azure and minimize risks with SentryOne


The Microsoft Data Platform technology landscape is rapidly advancing, with increased release cadence and new capabilities regularly added. Businesses need to evolve their data platform strategies—migrating to Microsoft Azure is a key pillar of this strategy to deliver greater agility and operational efficiencies.

However, transitioning from on-premises to a cloud or hybrid-cloud data platform can be difficult without the right solutions to help discover, document, and validate migration activities. Whether moving to Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), SentryOne provides a suite of solutions that can reduce the risks and time taken to perform cloud migrations to Microsoft Azure.

Given the increased cadence of SQL Server releases, many of the migration activities detailed in this guide also apply when performing SQL Server upgrades on-premises if you aren’t yet ready for full cloud adoption. This guide will provide a detailed look at the migration process and options and show you how SentryOne solutions can assist with the upgrade and migration of SQL Server platforms to the Microsoft Azure Cloud.


The Migration Process


Migrating SQL Server workloads to Azure doesn’t have to be risky, difficult, or expensive. With the correct approach, many of the common pitfalls associated with the migration process can be avoided, and those that can’t be mitigated can be identified to allow for thorough assessment.

Although there are many project management frameworks that would work well for a migration, it’s important to identify the framework that works best for your organization. The elements that need to be addressed by the framework are discovery, testing/validation, migration, and operational support. Spanning all of these stages is planning.


The four stages of the waterfall project management method


Scope Definition

It’s possible to audit an entire environment and look for all SQL Server systems when determining the scope of your migration. However, to successfully migrate workloads, it’s best to restrict the initial project scope to that of a specific business system or department.

The initial scope will feed into the basic plan and, subsequently, the discovery phase. As the migration progresses through the discovery phase, the scope can be refined based on the data collected. Likewise, as the testing phase occurs, the scope will likely be refined depending on whether it’s viable to migrate workloads as-is or if more work needs to be completed before a workload can be migrated to Azure.

Refining your cloud migration scope to reflect changing conditions must happen during the migration process. However, it’s important to avoid scope creep that adds extra work without a compelling reason that’s supported by data.

You also want to consider how your business might evolve during the migration process. If standards or processes change, then it’s important to understand if the migration project needs to reflect the changes, too.



When developing a plan for conducting a migration, you and your business leaders must understand that it’s an iterative process. As the migration proceeds, new facts and data will become available that can impact the state of the plan. Addressing these changes is crucial—trying to force the migration to adhere to the original plan without accounting for feedback will result in failure.

A combination of waterfall and Kanban project management methods works very well for managing migration activities, especially if your operations staff are managing the process. Should the migration be a larger, more formal project, then there could be a case for a combination of waterfall and scrum.

The waterfall method is used to control the phases rather than the underlying order in which the discrete activities take place. The overall process for a cloud migration would be logically broken down so that you complete the discovery phase before the analysis phase, for example. Within each of these phases, the use of the Kanban method allows your teams to tailor the approach to each system and complete the appropriate tasks.


Looking to catch up on the latest cloud trends? Check out our report.



Phase 1: Discovery

An in-depth understanding of your primary data systems will help you build a detailed migration plan. Key objectives for discovery include:

  1. Systems from which the migration target consumes data

  2. Systems to which the migration target provides data

  3. Current versions of software components

  4. Resource utilization (e.g., CPU, storage, memory)

  5. Performance baselines

  6. Sample workloads

  7. Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO)

  8. Application owners


You can collect some of this information using automated tools such as monitoring or documentation software. However, gathering other crucial information will require you to engage your business leaders, which has the added benefit of ensuring buy-in for the migration. Buy-in ensures that when you need to make requests of your business leaders, they're more inclined to collaborate with you.


Phase 2: Testing and Validation

Once you have a clear understanding of the data that’s within the scope of your migration, you need to know how it will behave on your new platform. This is an ideal time to look at the workload on the new system to identify regressions. Although performance degradation is important to identify, you should also document performance improvements, as improvements are also a change in workload behavior.

During this stage, you should begin evaluating migration options and testing the options that you think might meet your needs. Although experienced staff will have preferences for how to move databases and other objects, it’s important to keep an open mind about your migration options.

Finally, you’ll need to define and set up the tests that will validate the migration. It’s vital that decision points are defined, and that the information needed to make decisions about proceeding or rolling back is available well ahead of the migration.

When possible, automation should be at the forefront of your testing processes. By automating tests, you can ensure that the output between test runs can be compared and that any differences will be valid. If you perform testing manually, you risk someone not following the process correctly, which invalidates the results and adds work and risk to the migration process.


Phase 3: Migration

The actual migration, although critical to the process, should be one of the least stressful steps. With effective discovery and testing completed, there should be no surprises that stem from areas you control. By documenting a clearly defined timeline and decision gates, you can also provide your business leaders with confidence that the migration process will have minimal risk.


Phase 4: Post-Migration Operational Support

Often, once the migration has been completed, many teams just treat the new platform as business as usual (BAU). However, your team must keep a close eye on the system to ensure that any issues not detected during the pre-migration testing and discovery phases are identified and addressed immediately. During this phase, you will also need to update your documentation based on the data gathered during the migration.


Example Migration Scenario and Objective


Let’s look at an example scenario in which you have a range of SQL Server versions and technologies in play and explore your options for migrating to a fully supported data platform based in Microsoft Azure.


Source systems and SentryOne services


For the purposes of this guide, this example scenario assumes that you're leveraging SentryOne solutions to support your migration effort. The SentryOne solutions referenced throughout the process are SentryOne Document, for documentation and data lineage, and SQL Sentry, for performance monitoring and baselining.

Using these solutions, you can speed up the migration process by automating activities and providing more detailed data capture and a consolidated location for team members to store and obtain information about your database environment.



Once you have identified the systems involved in the migration process, you need to perform a comprehensive discovery on them to gather the data needed to make decisions about how best to perform the migration.


Scope Definition

In this example scenario, you’re working with a clearly defined scope and the data platform has OLTP, ETL, and reporting layers. However, you should perform effective due diligence to ensure you haven’t omitted any necessary systems or data and that there aren’t any departmental applications in use that you aren’t aware of. It’s not only a discovery exercise but also a validation of the initial scope.


Documenting the Current System

Typically, documenting a collection of SQL Server systems and producing a cohesive library of information can take some time. You can speed up this process by making use of SentryOne Document.

In SentryOne Document, you can create a solution containing the database, ETL, and reporting servers that you want to analyze. This information is then stored in a SQL Server database and can be accessed via the SentryOne Document interface to allow for effective collaboration among team members.


For more information about what data lineage is and why it’s important to identify dependencies, view our on-demand webinar “Data Lineage and Documentation: How Changes Affect Your Environment.”

After you have completed a scan of your systems, you can then review all the configuration data about the servers and the databases that they contain. Collecting this data allows you to build a data map of your environment. The data lineage functionality enables you to perform detailed impact analysis of any changes that you will make as part of the migration process. Visibility into the data that flows through each server, database, and SSIS package that you will be migrating allows you to understand the upstream and downstream impact of changes. You can also document and analyze the risks to the business for the migration activity.

SentryOne Document data lineage—from source table to SQL Server Reporting Services (SSRS) report items


In addition to collecting hard data related to servers, databases, packages, jobs, etc., you need to collect and catalog the metadata, which will be crucial to prioritizing activities, as well as ensuring that the data and components are handled appropriately. By leveraging the data dictionary capabilities in SentryOne Document, in combination with the data captured, you can easily tag sensitive data fields for PCI DSS, Personally Identifiable Information (PII), General Data Protection Regulation (GDPR), HIPAA, or any other compliance requirements.

Additionally, you can include extra metadata about application and database owners, points of contact, and what these entities are used for. The ability to add information about what an ETL job does, and more importantly why it does what it does, is key to understanding if the job is working as planned. A great use of the data dictionary is to add information about test criteria and what is considered to be a pass/fail. This information can then be used not only to build tests that validate the migration but also for post-migration BAU activities to support the continued development of your platform.
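As an illustration of the kind of automation that can help seed such a data dictionary, the sketch below tags column names that look sensitive with candidate compliance labels for later human review. The patterns, column names, and tag rules are hypothetical examples, not SentryOne Document functionality.

```python
import re

# Illustrative only: a naive pattern-based tagger for flagging columns that
# may need compliance metadata (PII/GDPR/PCI DSS/HIPAA) in a data dictionary.
TAG_RULES = [
    (re.compile(r"email|e_mail", re.I), ["PII", "GDPR"]),
    (re.compile(r"ssn|social_security", re.I), ["PII", "HIPAA"]),
    (re.compile(r"card_?number|pan\b", re.I), ["PCI DSS"]),
]

def tag_columns(columns):
    """Return {column_name: [compliance tags]} for columns matching a rule."""
    tagged = {}
    for col in columns:
        tags = []
        for pattern, rule_tags in TAG_RULES:
            if pattern.search(col):
                tags.extend(rule_tags)
        if tags:
            tagged[col] = sorted(set(tags))
    return tagged

# Flags CustomerEmail and CardNumber; OrderTotal is left untagged.
print(tag_columns(["CustomerEmail", "OrderTotal", "CardNumber"]))
```

A pass like this only produces candidates; the point of the data dictionary is that business users then confirm, correct, and extend the classifications.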

The work you do at this stage to capture the data and turn it into actionable information will live on long past the migration process. Your future self will thank you for putting in the effort here.

Add business-specific and compliance metadata to data entities in the SentryOne Document data catalog


Understanding the Business

Another area of the discovery phase is understanding the business aspects of the migration process. You must consider what data is contained within the systems and databases, how important the data that is being migrated is to the business operation, and who is impacted by the migration. As part of the discovery process, you need to collect information about SLAs for performance, availability, and Disaster Recovery (DR) scenarios, including Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO), as this information will drive the selection of the appropriate cloud services.


What data are we migrating?

Understanding the classification of the data being migrated is important. In the case of PII or healthcare data, there are compliance requirements that need to be met. It’s also advisable to classify and document data entities as early as possible to ensure appropriate data handling. This information will be used to understand which geo-political region the data needs to be constrained within, as well as security and compliance requirements.

Who will be impacted by the migration?

To ensure your migration is successful, you need to identify the application and data owners so that you can get buy-in and effective communication can take place. It’s also important to identify where users are based and their active hours so that you can assess the impact of the migration and plan for any outages. You should also understand who the key decision makers are when it comes to validating the new platform and making migration go/no-go decisions.



The information that you capture while speaking with your business leaders needs to be documented. Although this information can be added to your SQL Server systems via Extended Properties, annotations, or in a wiki, it’s more effective to add it to SentryOne Document. By making use of the data dictionary functionality, you can add a wealth of metadata to the data captured by the solution. Capturing this information in SentryOne Document has the added benefit of allowing business users to manage this metadata to ensure that it’s kept up-to-date.


System Performance Baselines

When performing an upgrade or migration, performance is a key metric that will be measured by the business. Often, there’s reluctance to adopt modern technology because of a perceived risk associated with being at the bleeding edge or even a fast follower. However, you can establish the actual performance impact, and gain greater visibility into performance and reliability, by establishing a baseline. In many cases, you can leverage modern technologies to improve performance or get the same level of performance with fewer resources. However, it’s crucial to have a solid monitoring solution implemented that will capture baseline data, which can be referenced during the testing and post-migration phases.


For more information about how SentryOne can be used to collect performance baselines, check out the on-demand webinar, “Setting SQL Server Performance Baselines and Alerts,” from Kevin Kline and Richard Douglas.

When baselining performance for a system prior to an upgrade or migration, it’s important to identify the metrics that are most important to your business. Taking a blanket approach can result in too much data collected and important data points being lost in the noise. This is where SentryOne SQL Sentry comes in, as it delivers built-in baselining capabilities and can collect all of the key metrics for migration analysis.

When performing a cloud migration, the following elements can affect performance:

  • CPU utilization—When analyzing CPU resources, it’s important to take parallelism into account. Often, you will hear about people trying to drive higher CPU usage by using fewer cores for a workload. Although this can be achieved, it’s important to assess the impact on large queries that make use of multiple threads.

  • Disk I/O—Not all cloud storage is created equal. Although Azure premium storage might be solid state, it’s typically network attached for the workload. Additionally, it’s likely to be broken into performance tiers like a modern SAN. Understanding your workload’s use of storage is vital when trying to achieve optimal performance.

  • Memory utilization—Memory in the cloud is tightly controlled. With IaaS solutions, you only get predefined configurations. With PaaS solutions, memory is abstracted away and not something that you can affect directly. Knowing your memory utilization and associated PLE/storage throughput will help inform sizing decisions.

  • Query performance history—In recent versions of SQL Server and in the PaaS solutions, Microsoft has updated parts of the optimizer. This can result in behavioral changes both good and bad. Understanding query-level activity helps identify regressions that need to be fixed, as well as performance improvements experienced from moving to a new system.
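To make the baseline comparison concrete, here is a minimal sketch (plain Python, not SQL Sentry output) that compares average counter values from a pre-migration baseline against post-migration samples and flags any counter that moves beyond a threshold in either direction. The counter names, sample values, and 10% threshold are illustrative assumptions.

```python
from statistics import mean

def compare_baselines(baseline, candidate, threshold=0.10):
    """Return {counter: pct_change} for counters changed beyond the threshold.

    baseline/candidate: dicts mapping counter name -> list of samples.
    Positive values are increases (possible regressions for CPU/IO counters);
    negative values are decreases, which should also be documented.
    """
    changes = {}
    for counter, samples in baseline.items():
        base_avg = mean(samples)
        cand_avg = mean(candidate[counter])
        pct = (cand_avg - base_avg) / base_avg
        if abs(pct) > threshold:
            changes[counter] = round(pct, 3)
    return changes

baseline = {"cpu_pct": [40, 45, 42], "disk_reads_sec": [300, 320, 310]}
candidate = {"cpu_pct": [55, 60, 58], "disk_reads_sec": [305, 315, 312]}
# CPU moved ~36% up and is flagged; disk reads moved ~0.2% and is not.
print(compare_baselines(baseline, candidate))
```

Averages alone hide spikes, so a real comparison would also look at percentiles and time-of-day windows, which is exactly what custom baselines in a monitoring tool give you.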


When capturing baselines, you must ensure you have all the major time periods for your business covered, including key active hours for the business and any maintenance windows. (You must take maintenance periods into account because they’re still required in the cloud. There’s a misconception that these PaaS cloud solutions are entirely managed, but they aren’t.)


Discovery Summary

Once you have collected all of the data, you have the majority of the information that you need to move onto the testing and validation phase of the migration process. It’s important to understand that this data needs to be kept up-to-date. A common reason that migration activities fail is because the project happens in isolation and the business continues moving. It’s important to regularly renew the data captured to look for new databases and data feeds. Additionally, if new users are added to a system, the performance profile of the database systems can be altered.
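The step of regularly renewing the captured data can be partly automated. As a minimal sketch, assuming you can list database names from both your documentation and a fresh scan (the names below are illustrative), a simple set difference surfaces drift in either direction:

```python
def inventory_drift(documented, scanned):
    """Return (new_databases, missing_databases) as sorted lists.

    documented: database names recorded during discovery.
    scanned: database names found by a fresh scan of the environment.
    """
    documented, scanned = set(documented), set(scanned)
    return sorted(scanned - documented), sorted(documented - scanned)

documented = ["Sales", "Billing", "Reporting"]
scanned = ["Sales", "Billing", "Reporting", "MarketingTemp"]

new, missing = inventory_drift(documented, scanned)
# MarketingTemp appeared since discovery; nothing has disappeared.
print(new, missing)
```

Anything in the "new" list needs to be triaged back into the migration scope, and anything "missing" suggests the documentation or the environment changed without the project knowing.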

All documentation created throughout the discovery phase should be considered living documents and maintained as such. Finding out about changes early is key to ensuring that they feed into the plan and result in a successful migration.


Analysis and Testing


Now that you have documentation and performance baseline data in hand, you can refine the plan and define the migration activities that you need to undertake. This phase of the migration is broken down into three work streams, which are fed back into one another as you select the services and mechanisms you will use to migrate your databases.


Cloud migration phases


Speaking to your business leaders will provide greater insight into their needs from the platform regarding service uptime, high availability (HA), DR, and allowable downtime for the migration. Then, you can evaluate which cloud services best align with those needs.


Availability and Uptime

Microsoft has uptime SLAs for Azure services, as shown in the table below. However, in the case of IaaS VMs, there are certain configuration requirements you must adhere to for the SLAs to apply. These requirements will have an impact on the way that you design your cloud-based infrastructure. If your business needs higher uptime for your systems, then more complex configurations will be required to deliver it.


Azure Cloud Service and Uptime SLA:

  • Two or more VMs deployed over two or more Azure Availability Zones within the same Azure region: 99.99%

  • Two or more VMs deployed in the same Availability Set: 99.95%

  • Single VM using premium storage for all Operating System and Data Disks: 99.9%

  • Azure SQL Database (Basic, Standard, and Premium tiers): 99.99%

  • Azure SQL Database Managed Instance (General Purpose or Business Critical): 99.99%


For example, you might need to have two IaaS VM configurations spread over multiple Azure regions, in addition to deploying across Availability Zones within each region. Although each configuration still has the 99.99% uptime SLA, the chances of both configurations going offline at the same time are incredibly low. As with anything, there is always the risk of an outage, but it’s important to understand the risk profile.


Identifying the Migration Target

As we discussed earlier, there are many different options when it comes to moving your databases to the cloud. Each option has nuances in how it should be used, and some have outright blockers to adoption. Working through these options manually can be difficult; however, Microsoft offers tools that can help you automate the selection process and size the target platform.


Data Migration Assistant

The Data Migration Assistant (DMA) is a free tool from Microsoft that can be used to perform analysis of source systems to identify potential blockers to migrating SQL Server systems. It can be used for migrating databases from one SQL Server system to another, to Azure SQL Database, or to Azure SQL Database Managed Instance.

The DMA performs analysis on both the instance and database level to identify issues that need to be resolved. Using the DMA on the systems that you have defined as within scope for the migration and performing an analysis yields the following results.


Identifying incompatibilities between SQL Server 2008 R2 and Azure SQL Database in the Microsoft Data Migration Assistant


Sizing the Cloud Systems

When sizing the cloud systems, there are several options, including free tools from Microsoft, available to derive a starting point.


Azure SQL Database DTU Calculator

The DTU calculator, a free tool built by engineers at Microsoft, takes a CSV of performance counters and then calculates how many DTUs your Azure SQL Database would need. There are links to a PowerShell script or a command-line utility that can capture and output the required information. However, you can only get a real picture of what is going on with a longer-range capture.

It’s possible to extract this data from the SentryOne database and pass it to the DTU calculator. Detailed information on how to do so, as well as a stored procedure to do so, can be found in the article “Sending SentryOne Data to the Azure SQL Database DTU Calculator.” When extracting data, it’s highly recommended that you review activity periods to identify peak and low usage periods. You can then feed these data sets to the calculator to understand upper and lower resource requirements for Azure SQL Database.
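As a rough illustration of the shape of that extract, the sketch below writes counter samples to CSV using the four counters the DTU calculator consumes. The exact header names should be verified against the calculator's own capture scripts before relying on them; the sample values here are invented.

```python
import csv
import io

# Counters the DTU calculator expects, one row per sample interval.
FIELDS = ["% Processor Time", "Disk Reads/sec", "Disk Writes/sec",
          "Log Bytes Flushed/sec"]

def to_dtu_csv(samples):
    """Render a list of dicts (keyed by the counter names above) as CSV text."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(samples)
    return out.getvalue()

# One illustrative peak-period sample; a real extract would cover days.
peak = [{"% Processor Time": 72.5, "Disk Reads/sec": 410,
         "Disk Writes/sec": 150, "Log Bytes Flushed/sec": 2500000}]
print(to_dtu_csv(peak))
```

Exporting separate files for peak and low-usage windows, as suggested above, then gives the calculator an upper and lower bound rather than one blended average.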

Note that by default, SentryOne collects all the counters needed by the DTU calculator, except Database – Log Bytes Flushed/sec. However, it’s relatively trivial to add this to SQL Sentry as a collected counter. Justin Randall (t|b) explains how to add counters to SQL Sentry in his blog post “I Didn’t know it could do that! : Enabling Additional Performance Counters.” By adding this counter, as well as any others that are important to your environment, ahead of time, it’s possible to leverage the DTU calculator to help right-size your target Azure SQL Database.

To gain a deeper understanding of the DTU performance metric, check out the blog post “What the heck is a DTU?” by Andy Mallon (t|b).


Data Migration Assistant

The DMA not only performs analysis on migration blockers but also includes a PowerShell script that can be used to help size target systems based on performance analysis data. The script takes several inputs to help provide an estimate on sizing.

DMA can capture the data it needs to perform this operation. However, this capture is typically a small set of data that isn’t always representative of the true workload. By using a monitoring solution to track performance over a longer period, it’s possible to leverage this tool to provide the analysis engine with more data. This, in turn, provides a higher degree of trust and accuracy in the sizing data that will be returned by DMA.


IaaS VMs

Sizing Azure IaaS VMs is more akin to the traditional way that you size on-premises systems. However, there are a couple of concepts that you need to understand about storage and how you approach cloud systems to ensure that you select the right size to host your workloads.


Azure Storage

There are multiple types of Azure storage available, from Standard to Premium disks, both managed and unmanaged. The type of storage that you select for your VMs will have a bearing not only on performance but also on whether you meet the requirements to be covered by the SLAs previously mentioned.

The size and class of VM selected will also dictate the number of data disks that can be used and the performance profile for them regarding IOPS and throughput. Microsoft’s recommendations for running SQL Server workloads on Azure VMs are available in the “Performance guidelines for SQL Server in Azure Virtual Machines” documentation article.


Adopting a Cloud Mindset

When you provision on-premises systems, you do so by estimating the lifespan and factoring in growth over time, which often means you will over-provision capability for the system just in case. If you take this approach with cloud systems, it will cost more money than you need to spend, ultimately reducing ROI.

When working with cloud-based systems, whether IaaS or PaaS, you want to run with an overall higher resource utilization. Whereas you would see many on-premises systems running with 30% to 40% utilization for CPU, in the cloud, you want to push CPU utilization as high as you can go while allowing for performance spikes. Doing so minimizes additional spend on resources that aren’t being used. It’s easy to plan for scaling up a VM from one class to another, and it’s just as easy with PaaS to move from one service tier to the next. Although these operations are typically offline events, with effective planning, you can minimize the impact to end users.
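One way to reason about this cloud-mindset sizing, as a hedged sketch: pick the smallest size that keeps a high percentile of observed CPU demand under a target utilization, rather than sizing for worst case plus years of growth. The vCPU sizes, 70% target, and demand samples below are illustrative assumptions, not Azure SKUs.

```python
def pick_vcpus(cpu_core_demand, available_sizes, target_util=0.70,
               percentile=0.95):
    """Smallest vCPU count keeping the chosen percentile of demand under target.

    cpu_core_demand: observed per-interval CPU demand, in cores.
    available_sizes: candidate vCPU counts (e.g., VM classes or vCore tiers).
    """
    ordered = sorted(cpu_core_demand)
    idx = min(int(len(ordered) * percentile), len(ordered) - 1)
    p = ordered[idx]  # demand at the chosen percentile
    for size in sorted(available_sizes):
        if p <= size * target_util:
            return size
    return max(available_sizes)  # demand exceeds every option; scale out instead

demand = [2.0, 2.5, 3.1, 2.8, 3.0, 5.5, 2.2, 2.9, 3.2, 2.7]
# The 95th-percentile demand (5.5 cores) fits 8 vCPUs at 70% utilization.
print(pick_vcpus(demand, [2, 4, 8, 16]))
```

Because scale-up is a planned, typically offline event in both IaaS and PaaS, sizing near the percentile rather than the absolute peak trades a rare, managed resize for a permanently lower bill.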


Analyzing Workloads

A crucial part of the testing and validation phase is ensuring that when databases are moved to cloud-based systems, they perform as expected. This analysis isn’t limited to hard performance counter metrics such as transactions per second, I/O throughput, and response times. It also includes confirming that queries return the correct results and that the correct data is being written to your databases.

Workload analysis should be completed in several iterations, starting with the most basic tests, to ensure that applications can connect and simple transactions can be processed. In many cases, you might need to perform an action on the dev/test version of the system on-premises, repeat that action on your new platform, and then compare the results.

Validating data can be a painstaking task—automation should be applied whenever possible.

Writing formal tests in code at this stage will greatly help when performing more in-depth workload testing. Also, automating the data validation process ensures that tests are repeatable, you can trust the output, and test runs can be compared to one another. When you manually run tests at scale, small discrepancies can creep in and invalidate the results.
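One simple, repeatable validation technique is to fingerprint normalized result sets from the source and target systems and compare the hashes. This is a generic sketch; a real test run would fetch the rows via a database driver rather than the inline example rows used here.

```python
import hashlib

def resultset_fingerprint(rows):
    """Order-insensitive fingerprint of a result set (a list of tuples)."""
    h = hashlib.sha256()
    # Sort the serialized rows so row order returned by the engine is ignored.
    for row in sorted(repr(r).encode() for r in rows):
        h.update(row)
    return h.hexdigest()

source = [(1, "Widget", 9.99), (2, "Gadget", 14.50)]
target = [(2, "Gadget", 14.50), (1, "Widget", 9.99)]

# Same rows in a different order still validate as equal.
print(resultset_fingerprint(source) == resultset_fingerprint(target))
```

Comparing hashes instead of full row dumps keeps the per-test artifact small, so the same check can run after every workload iteration without drowning the team in diffs; when a fingerprint mismatches, you fall back to a row-level diff for just that query.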


Workload Replay Options

When performing workload testing, it’s best to make use of a production workload, which allows you to validate the new systems against a known system and production baselines to ensure parity. Capturing and replaying workloads isn’t a simple task. However, there are several options available to help with this task. These include OStress in the RML Utilities suite from Microsoft or SQLWorkload in the WorkloadTools open-source project by Gianluca Sartori (t|b). These tools have similar functionality but complement one another.



WorkloadTools is useful for live streaming a workload from one server to another. If you leverage Extended Events (XE), workloads can be streamed from one server to another with low overhead. It’s also possible to perform workload streaming via SQL Trace on earlier versions of SQL Server where there isn’t parity between SQL Trace and XE for events.

You can also capture a SQL Server workload with SQL Trace and output to Trace files. These files can then be converted to a replay format that SQLWorkload can process. The advantage of this approach is that it lets you capture several different workloads and store them for repeated use throughout the analysis phase as well as a baseline.



OStress is a replay tool that has been around for a long time and works on both SQL Trace and XE files. The process for capturing and converting a workload with OStress is a bit more complex than SQLWorkload. However, OStress includes an additional capability for the replay mode: multi-user simulation for testing concurrency limits. You can achieve this by combining OStress with ORCA in the RML Utilities suite.

The workload is captured using native XE or SQL Trace to files; these files are processed into the Replay Markup Language (RML) using the ReadTrace tool. OStress then reads the RML files to replay the workload against a target database.

OStress can be used in conjunction with the ORCA tool in RML Utilities, making it possible to coordinate the replay of the workload over multiple clients. This approach allows you to simulate a multi-user environment to determine if there are potential concurrency issues when the workload is run on the new system.


Workload Replay Process

Microsoft’s Database Experimentation Assistant can help with workload replay. However, there’s licensing overhead for this tool due to its reliance on SQL Server Distributed Replay.

The most crucial element of the workload replay task is ensuring a consistent database that can be reset on the target systems. You should take a full backup of the source database(s) for the workload that you will capture. Then, create a marked transaction right before starting the workload capture, which allows you to take a transaction log backup and restore the database to the marked transaction, after which you can replay the workload.

This assumes, however, that you’re working with a full or bulk-logged recovery model. If you’re using simple recovery, it can be more difficult to coordinate the backup with workload capture. In this instance, a potential solution would be to trace the backup as well as the workload, and then manually modify the workload records to remove the ones prior to the completion of the backup. However, this isn’t an exact science and can be more difficult to complete.
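The manual trimming step described above can at least be scripted. A minimal sketch, assuming you have the captured workload events and the time the backup finished (all timestamps and statements here are illustrative):

```python
from datetime import datetime

def trim_workload(events, backup_finished_at):
    """Keep only workload events captured after the backup completed.

    events: list of (timestamp, statement) tuples from the trace.
    """
    return [(ts, stmt) for ts, stmt in events if ts > backup_finished_at]

backup_done = datetime(2020, 5, 1, 1, 5, 0)
events = [
    (datetime(2020, 5, 1, 1, 4, 30), "UPDATE Orders SET ..."),   # pre-backup
    (datetime(2020, 5, 1, 1, 6, 10), "INSERT INTO Orders ..."),  # post-backup
]
# Only the post-backup INSERT survives the trim.
print(trim_workload(events, backup_done))
```

As the surrounding text notes, this is not an exact science: statements that were in flight while the backup completed may be partially reflected in the restored database, so the trimmed replay still needs validation against expected results.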

When replaying the workload, the database should be restored, the workload replayed, and the system monitored. This monitoring session can then be compared to the baseline established during workload capture or the discovery process. The database can be restored, and the workload replayed, as needed, based on different configurations for the target system.


Workload replay process


For more information about how SQL Server backups work so that you can decide which events might need to be discarded from the trace based on the transactions in the backup, see Paul Randal’s blog posts “More on how much transaction log a full backup includes” and “Debunking a couple of myths around full database backups.”


Monitoring for Workload Analysis

When performing workload analysis, you’re likely to move through several iterations as you tune the configuration and make changes to code to work around any issues or incompatibilities.

Monitoring the source system via SQL Sentry allows you to create custom baselines for the periods of interest. The detailed performance metrics that are captured also enable you to track resource utilization over time for many counters. SQL Sentry’s Top SQL functionality also provides deep insight into how queries are executing. You can identify problematic large queries and spot the rapidly executing large volume queries that can cause problems with overall resource usage.


SentryOne Top SQL showing procedure execution history with multiple execution plans in history


With this source data, you can monitor the new target systems with SentryOne and compare against your baselines. This deep insight into how the workload executes and which execution plans are in use allows you to identify areas of improvement delivered by the newer version of SQL Server and, more importantly, performance regressions that need to be resolved so that end users aren't affected and system performance isn't compromised.

When migrating from older versions of SQL Server to PaaS, or to newer versions in IaaS, it's important to understand the changes that have taken place in the engine over time. SQL Server 2014 introduced a new cardinality estimator. SQL Server 2016 altered several default behaviors and enabled the query optimizer hotfixes by default. SQL Server 2017 introduced adaptive query processing features to optimize queries more efficiently. SQL Server 2019 will add capabilities that can dramatically alter the way that scalar UDFs work.

All of these changes will be available in PaaS offerings ahead of on-premises SQL Server versions. You must understand the potential effect of these changes on how your queries and workloads function as part of any migration effort.
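For example, two common ways to hold pre-2014 cardinality estimation behavior steady while you validate a workload on a newer engine are shown below; the instance and database names are illustrative:

```powershell
# Sketch: hold query behavior steady while testing on a newer engine (names illustrative)

# Option A: pin the whole database to the SQL Server 2012 compatibility level
Invoke-DbaQuery -SqlInstance "SQL2017VM" -Database "Accounts" -Query `
    "ALTER DATABASE [Accounts] SET COMPATIBILITY_LEVEL = 110;"

# Option B: stay on the current compatibility level but revert only the cardinality
# estimator (available in SQL Server 2016 and later via database scoped configuration)
Invoke-DbaQuery -SqlInstance "SQL2017VM" -Database "Accounts" -Query `
    "ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;"
```

Either option lets you separate engine-version regressions from estimator-related regressions when comparing against your baselines.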


Validating the Data

In addition to validating the performance and behavior of workloads on the SQL Server you will be running, you must ensure that data is written to the database correctly and query output is as expected. This can be a complex and repetitive task that is best suited to automation rather than human operators.

By clearly defining the tests for a predefined input, you can validate the data written to the database on every workload run and spot any issues. This is important for two reasons:

  1. The rate at which bugs are fixed in SQL Server has accelerated dramatically. This means that any behavioral changes introduced with these fixes could manifest in the results you see in your queries. Automated testing for data helps identify these changes before they become a problem.

  2. Microsoft has altered the default behavior of the SQL Server engine. For example, in SQL Server 2016, there were changes to the level of accuracy for values returned when there were implicit conversions between numeric and datetime types. Depending on the tolerances in your system, these variances could be something that you need to detect and then update code to maintain behavior.

Microsoft is very diligent when it comes to documenting changes and making that information available. It’s possible to see the breaking changes to SQL Server 2016 database engine features in the Microsoft documentation.

The most fundamental capability for workload validation is the ability to compare two data sets drawn from different databases.

In this scenario, having a static data set that is a snapshot of the database(s) as of the end of the workload capture allows you to then create tests that validate the data contained in that snapshot versus the state of your new database platform after the workload has been replayed. This test will help you understand whether there’s a discrepancy in the data written in the new system.
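A lightweight way to implement such a comparison is to checksum each table on both systems and flag any differences. The sketch below assumes the dbatools module; the instance, database, and table names are illustrative:

```powershell
# Sketch: compare per-table checksums between the source snapshot and the replay target
# (instance/database/table names are illustrative)
$Tables = @("dbo.Orders", "dbo.Customers")   # tables to validate

foreach ($t in $Tables) {
    # BINARY_CHECKSUM(*) is order-sensitive per row; CHECKSUM_AGG makes the
    # aggregate independent of row order, so the two sides can be compared directly
    $q = "SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS cs FROM $t;"

    $src = (Invoke-DbaQuery -SqlInstance "SQL01"   -Database "Accounts" -Query $q).cs
    $tgt = (Invoke-DbaQuery -SqlInstance "SQLMI01" -Database "Accounts" -Query $q).cs

    if ($src -ne $tgt) { Write-Warning "$t differs: source=$src target=$tgt" }
}
```

Checksums won't tell you which rows differ, but they cheaply identify which tables need a deeper row-by-row comparison.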


Platform Capability Validation

In addition to understanding the performance characteristics of the workload and validating the data, you must ensure that the operational manageability of the cloud platform meets the business needs that you documented during the discovery phase.

Migrating from an on-premises system, where you have a high degree of control over how backups and maintenance operations are undertaken, to a cloud platform, where many of these actions are abstracted or handled by the platform and are therefore outside of your control, requires that you adapt your operational practices.


High Availability Configurations

Azure PaaS solutions Azure SQL Database and Azure SQL Database Managed Instance have HA features built in. Therefore, you must cede control to the platform and trust that failovers will be as advertised because there’s no way to trigger and validate a failover event to understand the effect on your applications. However, IaaS solutions rely on you to set up the appropriate configuration.

The natural choice for HA in Azure VMs is Always On Availability Groups, especially given that they now support DTC transactions, functionality historically available only in Failover Cluster Instances (FCIs).

The implementation of Availability Groups in Azure is largely the same as an on-premises implementation, but there are two crucial differences. The management of resources, such as IP addresses and network firewalls, is controlled outside of the OS. Additionally, traffic redirection for the listener requires the use of an Azure Load Balancer.

These differences mean that operations staff needs to have a deep understanding of the Azure cloud platform as well as of SQL Server; otherwise, the operational capabilities of the solution can be compromised.


Disaster Recovery Scenarios

When it comes to DR scenarios, it doesn’t matter whether it’s a PaaS or IaaS solution that you’re deploying—there’s still an onus on operational staff to understand how to perform a recovery operation. There are two key elements to DR planning when it comes to cloud systems. These elements, described in greater detail below, can be used individually or in combination.


Backup Retention and Recovery

When using Azure SQL Database and Managed Instance, the backup process for your databases is handled by the Azure platform. By default, both services take transaction log backups every 5 to 10 minutes. Depending on the service tier selected, backups are retained for one to five weeks. This schedule allows for point-in-time recovery for databases deployed to these services within a specific region. If you require longer-term backup availability, you will need to configure Long-Term Retention, which allows backups to be held for up to ten years.

Restoring these managed backups isn't the same as performing a native restore. Therefore, it's strongly advised that you test and re-test the restore functionality during this phase of the testing process. Where possible, multiple full DR tests should be undertaken, not only to familiarize the operations staff with the new way of working but also to highlight any issues with the platform implementation that need to be resolved ahead of the final migration.

At the time this guide was written, the long-term retention capability hadn't been released for Managed Instance. If long-term backup retention is required, you will need to take native COPY_ONLY backups to Azure Blob storage.

The following diagram shows the Azure SQL Database restore process for a single database deployed to a server in an Azure region.


Process for restoring an Azure SQL Database


Restore operations for Azure SQL Database can be easily scripted with PowerShell, allowing for restore operations to be automated as needed and for effective testing.


## Declare Parameters
$SourceDatabaseName = "Accounts"
$ServerName = "s1dbwp01"
$SourceResourceGroupName = "SentryOne-WP-RG"
$RestoreTargetDatabaseName = "Accounts_Restored"
$PointInTimeToRestore = (Get-Date -Year 2019 -Month 05 -Day 28 -Hour 18 -Minute 17 -Second 00)

## Get Azure SQL Server
$SqlServerParams = @{
    Name = $ServerName
    ResourceGroupName = $SourceResourceGroupName
}
$AzureSqlServer = Get-AzSqlServer @SqlServerParams

## Get Azure SQL DB we want to restore a version of
$AzureSqlDbParams = @{
    DatabaseName = $SourceDatabaseName
    ServerName = $AzureSqlServer.ServerName
    ResourceGroupName = $AzureSqlServer.ResourceGroupName
}
$SourceDatabase = Get-AzSqlDatabase @AzureSqlDbParams

## Restore to new database on same server
$RestoreParams = @{
    FromPointInTimeBackup = $true
    PointInTime = $PointInTimeToRestore
    ServerName = $AzureSqlServer.ServerName
    ResourceGroupName = $AzureSqlServer.ResourceGroupName
    TargetDatabaseName = $RestoreTargetDatabaseName
    ResourceId = $SourceDatabase.ResourceId
    Edition = $SourceDatabase.Edition
    ServiceObjectiveName = $SourceDatabase.CurrentServiceObjectiveName
}
Restore-AzSqlDatabase @RestoreParams

## Rename old database out of the way
$OldDbRenameParams = @{
    DatabaseName = $SourceDatabase.DatabaseName
    ServerName = $SourceDatabase.ServerName
    ResourceGroupName = $SourceDatabase.ResourceGroupName
    NewName = ($SourceDatabase.DatabaseName + "_old")
}
Set-AzSqlDatabase @OldDbRenameParams

## Rename new database to have old database name
$NewDbRenameParams = @{
    DatabaseName = $RestoreTargetDatabaseName
    ServerName = $SourceDatabase.ServerName
    ResourceGroupName = $SourceDatabase.ResourceGroupName
    NewName = $SourceDatabase.DatabaseName
}
Set-AzSqlDatabase @NewDbRenameParams


When it comes to DR planning for IaaS VMs, the process is remarkably similar to on-premises deployments. Your operations team would schedule native backups to storage and configure the appropriate clean-up routines. It's advisable to make use of the native backup-to-URL functionality within SQL Server in this scenario, not only for ease of management but also because the storage can be configured to replicate to another Azure region for increased resiliency.


Geo-Replication and Multi-Region Deployments

One facet of cloud usage that many organizations neglect to account for is cloud region failures. All major cloud vendors have had outages that have impacted entire regions. Many of the Azure services have geo-redundancy capabilities that you can enable. Azure SQL Database and Managed Instance can leverage auto-failover groups for multi-region failover capability.

An auto-failover group is similar to the Availability Group listener for on-premises SQL Server, providing an abstraction layer over multiple replicas. Planning for a multi-region solution can help improve the resilience of the platforms you build in the cloud. To learn more about auto-failover groups, see the Microsoft article “Use auto-failover groups to enable transparent and coordinated failover of multiple databases.”
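As a hedged sketch with the Az PowerShell module, creating a failover group across two regions and adding a database to it might look like the following; all resource, server, and group names are illustrative:

```powershell
# Sketch: create an auto-failover group and add a database (all names illustrative)
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "SentryOne-WP-RG" `
    -ServerName "s1dbwp01" -PartnerServerName "s1dbwp02" `
    -FailoverGroupName "accounts-fg" `
    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1

# Add the database to the group; applications then connect via the group endpoints
# (read/write: accounts-fg.database.windows.net,
#  read-only:  accounts-fg.secondary.database.windows.net)
$db = Get-AzSqlDatabase -ResourceGroupName "SentryOne-WP-RG" `
    -ServerName "s1dbwp01" -DatabaseName "Accounts"

Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "SentryOne-WP-RG" `
    -ServerName "s1dbwp01" -FailoverGroupName "accounts-fg" -Database $db
```

Because applications connect to the group listener endpoints rather than a specific server, a failover doesn't require connection-string changes.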

When building IaaS solutions with Azure VMs, it’s highly advisable to use existing patterns and practices for multi-site deployments. Again, you can use Availability Groups and Distributed Availability Groups within SQL Server. More information on these configurations for High Availability and Disaster recovery on Azure VMs can be found in Microsoft’s “High availability and disaster recovery for SQL Server in Azure Virtual Machines” documentation article.


Migration Dry Run

Toward the end of the analysis phase, you should be ready to test the methods that you will use to migrate your workload from on-premises to the cloud.

As with anything, there are multiple ways to achieve the objective. However, rather than immediately falling back to tried-and-tested methods that you’re familiar with, you need to evaluate all the options and make a final decision based on gathered data and facts to ensure the migration methodology is appropriate and meets the needs of the business.

When migrating SQL Server workloads from on-premises to Azure, there are several options, as illustrated in the diagram below.


Migration paths to Azure for SQL Server


Analysis and Validation Summary

The analysis and validation phase of the migration effort is vital. If you don’t complete this activity, there’s a higher degree of risk that the migration effort won’t succeed. The outputs that you should have in place are as follows:

  1. Comprehensive documentation

    1. Covering the source system

    2. Analysis from the tests that justifies the migration method selected

    3. Any changes that have been or need to be made for the migration to succeed

    4. A detailed migration plan with rollback steps in the event of a failure

  2. Test automation for validating the success of the migration checkpoints to provide details to the decision makers for continue/rollback quality gates

  3. Platform monitoring and analysis in place for source and destination systems


Migration to Azure


Now it’s time to review the actual migration process. We will not only cover the technology aspect but also how to implement the changes and decision-making process. Let’s look at the migration options for moving workloads from on-premises systems to Azure.


Managed Instance

Managed Instance is a version of Azure SQL Database that is targeted at businesses that want the convenience of PaaS solutions but still need many features that are found in on-premises SQL Server. This includes cross-database queries, SQL Server Agent, linked servers, and other instance-scoped objects. It’s an alternative to running an IaaS VM to move third-party vendor applications that aren’t compatible with Azure SQL Database.

There are several options when it comes to migrating databases to Managed Instance; let’s look at three of the most common methods.


Backup and Restore

Managed Instance supports restoring native SQL Server backups to the service, whereas Azure SQL Database singleton databases don’t. Organizations have used this tried-and-tested methodology for migrating databases between systems. It offers a high degree of versatility and allows for handling large databases and pre-staging databases via a combination of full, differential, and transaction log backups.

The overall process isn't too dissimilar from performing database migrations between on-premises systems. However, rather than local storage or a network share, Azure Blob storage is the medium hosting the backups. Although SQL Server 2012 SP1 CU2 introduced the ability to natively back up to Azure Blob storage via URL, all isn't lost if you're on an earlier version of SQL Server. You can use PowerShell with the dbatools and Az modules to easily take a backup and copy it to Azure Blob storage.


# Define script parameters (resource group and storage account names are examples)
$LocalBackupLocation = "\\MyShareLocation\SqlBackups"
$Database = "Adventureworks_EMEA"
$SqlServer = "SQL01"
$ContainerName = 'dbbackups'
$ResourceGroupName = 'SentryOne-WP-RG'
$StorageAccountName = 'mystorageacct'

# Define storage account details
$StorageAccountParams = @{
    ResourceGroupName = $ResourceGroupName
    Name = $StorageAccountName
}
$StorageAccount = Get-AzStorageAccount @StorageAccountParams
$Context = $StorageAccount.Context

# Backup database to local storage
$BackupParams = @{
    SqlInstance = $SqlServer
    BackupDirectory = $LocalBackupLocation
    Database = $Database
    Type = "Full"
    CompressBackup = $true
}
Backup-DbaDatabase @BackupParams

# Copy backup files to Azure Blob Storage
$Backups = Get-ChildItem -Path $LocalBackupLocation
foreach ($f in $Backups) {
    $BackupsBlobContentParams = @{
        File = $f.FullName
        Container = $ContainerName
        Blob = $f.Name
        Context = $Context
    }
    Set-AzStorageBlobContent @BackupsBlobContentParams
}

PowerShell script to back up databases and move them to Azure Blob storage (requires the dbatools and Az PowerShell modules)


Once the backup is in blob storage, it's then a simple task of restoring those backup files to Managed Instance. It's important to note, however, that there are some differences in the way that the restore operation works for Managed Instance. Key among these is that you can't specify WITH options such as MOVE: Managed Instance handles the allocation of data and log files to its storage layer, so you don't need to supply new paths.

Additionally, the restore operation will complete even if the session running the restore command is ended. It has been made an asynchronous operation and follows the recommended practice of making a system resilient to transient network failure. This is only possible because the backup files are also stored in Azure and so are effectively co-located with the Managed Instance service.
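A minimal sketch of the restore itself is shown below; it assumes connectivity to the instance and that a credential holding a SAS token for the container has already been created on the Managed Instance. The host, storage account, and file names are illustrative:

```powershell
# Sketch: restore a native backup from blob storage to Managed Instance
# (host and storage names illustrative; a SAS credential named after the
# container URL is assumed to exist on the instance)
Invoke-DbaQuery -SqlInstance "s1-mi01.example.database.windows.net" -Query @"
RESTORE DATABASE [Accounts]
FROM URL = N'https://mystorageacct.blob.core.windows.net/dbbackups/Accounts.bak';
-- Note: no WITH MOVE or file options; Managed Instance places the files itself
"@
```

Because the restore runs asynchronously on the service, it continues even if this session disconnects.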

The process for moving a database to Managed Instance is illustrated below.


Managed Instance migration—backup and restore


Consideration—Backup to URL Performance

When taking a backup to blob storage, the limiting factor will typically be bandwidth on your internet connection. However, if that isn’t the case, then striping a backup over multiple files will help drive higher throughput on the network due to multiple writer threads. Managed Instance can also make use of multiple threads when performing the restore operation, which results in higher throughput for the restore operation.
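With dbatools, striping is a single additional parameter; the sketch below uses illustrative instance, storage, and credential names and assumes a SQL Server credential for the storage account already exists:

```powershell
# Sketch: stripe the backup across multiple blobs to increase throughput
# (instance, storage account, and credential names are illustrative)
Backup-DbaDatabase -SqlInstance "SQL01" -Database "Accounts" -Type Full -CompressBackup `
    -AzureBaseUrl "https://mystorageacct.blob.core.windows.net/dbbackups" `
    -AzureCredential "mystorageacct" `
    -FileCount 4   # four backup files means four writer threads, and four blobs
```

The same striped set is then restored as a unit; Managed Instance reads the stripes in parallel during the restore.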


Database Migration Service

The Azure Database Migration Service (DMS) is an Azure service that will automate a large portion of the process when it comes to moving your databases from on-premises to Managed Instance. There are two different migration types available with DMS—online and offline. The main difference between the two is the amount of downtime during each migration.

There are a few infrastructure prerequisites for a successful DMS migration, including VNet connectivity between the source and destination systems. DMS also needs a file share location that both the SQL Server and DMS can access to manage the transfer of data from on-premises to Azure.


Azure Database Migration Service—Offline migration to Managed Instance


Offline DMS Migration

The offline migration scenario is the only option available when you use the General Purpose tier of DMS. Here, the term offline refers to the fact that the source system will be unavailable from the time the migration starts until it completes.


Online DMS Migration

When using the business-critical tier of DMS, you have the option to perform an offline migration or an online migration.

Online, in this case, is a bit of a misnomer given that there’s some downtime associated with this migration option. However, the time window for the downtime is related to the time taken to reconfigure the application to connect to the new database location. “Online” refers to the fact that the source system will remain active and able to process transactions while the data is being synchronized between the source and the target.

The online migration option is very useful for larger systems or as part of the test and validation phase because of how it interacts with the source system. For those who aren’t comfortable supporting Transactional Replication as a technology, DMS is a viable alternative for migrating the database. This option also has the added benefit of requiring negligible post-migration clean-up activity, unlike when using Transactional Replication.


Transactional Replication

One of the main selling points of Managed Instance is that it supports a very high degree of parity with on-premises SQL Server. One of these areas of parity is that it supports all roles in the Transactional Replication topology.

The use of Transactional Replication for database migrations does rely on the database supporting this configuration. Restrictions about the use of Transactional Replication can be found in the Microsoft documentation.

If Transactional Replication is a viable option for your system, then it’s strongly recommended that you use a push subscription to the Managed Instance. Post-migration operations needed to decommission the source and replication topology will be minimal. The diagram below shows an example of the architecture for using Transactional Replication to migrate a database to Managed Instance. More information about how to configure replication with Managed Instance can be found in the Microsoft documentation article “Configure replication in an Azure SQL Database managed instance database.”

Transactional Replication to Managed Instance—Push subscription


Azure SQL Database

Azure SQL Database doesn’t support restoring native backup files to the service. In this situation, there are several options that you can utilize instead. When we covered the DMA earlier in this guide, it was used to perform an analysis to identify migration blockers. It can also migrate SQL Server databases from on-premises SQL Server to Azure SQL Database. There’s also the DMS, Transactional Replication push subscriptions, and creating a BACPAC export/import from SQL Server to Azure SQL Database.

Let’s look at how each of these works below.



BACPAC Export/Import

At its core, a BACPAC is a file that contains the schema and data for a database exported from SQL Server. The use case for migrating a database to Azure SQL Database via BACPAC is quite specific; you should consider the following before utilizing this option:

  1. Size of the source database (smaller is better)

  2. Connectivity to Azure (vNet access isn’t required for deployment of BACPAC)

  3. Allowable downtime (you will need to stop the application processing during the export phase for consistency)


Although it’s possible to deploy a BACPAC via SSMS or SQLPackage.exe, the blob storage process is less susceptible to network connectivity issues.

The export, upload, and create process can be scripted with PowerShell. Automating the process not only reduces the chance of migration issues but also allows the process to be tested many times ahead of the actual migration to validate the approach and confirm timing for the overall activity, which you can feed back into your planning documents.

The key stages for this process are as follows:

  1. Export database as a BACPAC using SqlPackage.exe

  2. Copy the BACPAC file to Azure Blob storage

  3. Create an Azure SQL Database using the New-AzSqlDatabaseImport cmdlet in the AZ PowerShell module


Overall, this process is reliable and easy to achieve with minimal overhead. However, it does mean that the tolerance for downtime during the migration process needs to be considered. The generation of the BACPAC file can take some time in the event of a very complex database schema or large volume of data.
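The three stages above can be sketched as follows; the SqlPackage.exe path, server, storage, and credential names are all illustrative:

```powershell
# Sketch of the three BACPAC stages (paths, names, and credentials are illustrative)

# 1. Export the database as a BACPAC with SqlPackage.exe
& "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" /Action:Export `
    /SourceServerName:"SQL01" /SourceDatabaseName:"Accounts" `
    /TargetFile:"C:\Temp\Accounts.bacpac"

# 2. Copy the BACPAC file to Azure Blob storage
$ctx = (Get-AzStorageAccount -ResourceGroupName "SentryOne-WP-RG" -Name "mystorageacct").Context
Set-AzStorageBlobContent -File "C:\Temp\Accounts.bacpac" -Container "bacpacs" `
    -Blob "Accounts.bacpac" -Context $ctx

# 3. Import it as a new Azure SQL Database
$key = (Get-AzStorageAccountKey -ResourceGroupName "SentryOne-WP-RG" -Name "mystorageacct")[0].Value
New-AzSqlDatabaseImport -ResourceGroupName "SentryOne-WP-RG" -ServerName "s1dbwp01" `
    -DatabaseName "Accounts" -Edition "Standard" -ServiceObjectiveName "S3" `
    -DatabaseMaxSizeBytes 250GB `
    -StorageKeyType "StorageAccessKey" -StorageKey $key `
    -StorageUri "https://mystorageacct.blob.core.windows.net/bacpacs/Accounts.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (Read-Host -AsSecureString -Prompt "SQL admin password")
```

Because every stage is scripted, the whole run can be repeated as a timed rehearsal ahead of the real migration.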



Data Migration Assistant

The DMA has two project types: an analysis project, which helps you identify blockers to migration to selected targets, and a migration project, which copies the database schema and data from the source system and moves them to Azure SQL Database.


Migrate database to Azure SQL Database with Database Migration Assistant


DMA migrations can be performed either via the client application or scripted using the command line. However, if you want to automate the migration, better options are available in the form of DMS. The syntax for the DMA command line can be onerous and verbose, so it's easy to end up with unmanageable scripts when automating database migration with DMA.

If you’re migrating one or two small databases to Azure SQL Database, the DMA can be an extremely useful tool.



Database Migration Service

The DMS also supports migrating a database from on-premises SQL Server to Azure SQL Database. As with the DMS scenario we discussed for Managed Instance, the service needs VNet access to the source database server (either a VPN or an Azure ExpressRoute connection). You can then perform the migration using the offline or online model.

There isn’t a requirement for the Azure SQL Database to have an VNet endpoint, as DMS will make use of the public endpoint to connect to the Azure SQL Database. In this scenario, it’s important to ensure that the Azure SQL Server is configured to allow other Azure services to access it. More information about how to configure the Azure SQL Database server firewall rules can be found in the online documentation from Microsoft.


Azure IaaS VM

The Azure PaaS offerings provide a wealth of capabilities, but there are still reasons to migrate on-premises SQL Server to an IaaS solution. These can be due to constraints such as third-party vendor applications that only support specific versions of SQL Server.

With SQL Server 2008 and 2008 R2 reaching end of life in July 2019, migrating to an IaaS solution can be a shortcut to replacing aging servers without a large outlay. In extreme situations in which you can't migrate off either of these versions, Microsoft will provide an additional two years of security updates if SQL Server 2008 is running on an Azure VM.

There are several routes to moving on-premises databases to IaaS VMs. You can use both DMA and DMS to perform a migration to Azure IaaS VMs. If you’re using newer versions of SQL Server (SQL Server 2012 or newer), you can utilize Availability Groups to replicate the data. This migration model also helps with the application configuration, as the listener can be used to ease the application migration from a connectivity perspective.

The other option is to use Azure Site Recovery (ASR) to migrate VMs directly to Azure. Let’s look at the backup and restore and ASR methods.


Backup and Restore

You have likely been using backup and restore as a method for moving databases between SQL Servers for a very long time. It's one of the most versatile options and can help move even the largest databases by making use of full, differential, and transaction log backups.

There are two main routes for using backup and restore to move SQL Server databases between servers. The first is to perform it manually via scripted methods such as T-SQL or PowerShell with the dbatools module. The second is to make use of log shipping within SQL Server. This allows for the priming of the target environment and can drastically reduce downtime for migration activities.

The option you choose will depend largely on the connectivity between the on-premises systems and Azure. If the two environments are connected via either a site-to-site VPN or ExpressRoute, then both options are open, irrespective of whether you have extended your Active Directory (AD) domain. If the two environments are distinct, you'll be restricted to the first option and will have to perform the migration activities manually via Azure Blob storage. It's possible to automate these activities, but it will involve additional services and configuration to make everything work smoothly.


Backup and restore SQL database to IaaS VM


Native Backup and Restore

When using native backup and restore methods for migrating a SQL Server database to an IaaS VM, the best route is via Azure Blob storage. This allows for both connected and disconnected environment configurations. It's highly recommended that you make use of the dbatools PowerShell module, which gives you a wider range of options when scripting and subsequently automating the process.

Using Azure Blob storage as part of this process means there’s no need to allocate temporary storage to your SQL Servers or stand up dedicated infrastructure to handle the backups. Azure Blob storage can be created and destroyed with negligible impact to the target environment.

For connected hybrid environments in which the network and AD have been extended to Azure, having connectivity to both the source and destination makes this process a lot easier. Azure Blob storage takes the place of the network share you would otherwise use, and from that point, the process is the same as migrating to Managed Instance via blob storage.

This is where PowerShell comes to the fore. If the version of SQL Server at the source supports backup to URL, simply use Backup-DbaDatabase from the source to blob storage and Restore-DbaDatabase from blob storage on the target. If the source version doesn't support native backup to URL, back up to a local directory, move the file to blob storage using the Az PowerShell module, and then complete the restore. This approach is typically easier than using a combination of T-SQL and PowerShell, even with the ability to execute T-SQL via PowerShell.
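For the backup-to-URL case, the pair of dbatools commands might look like the following sketch; instance, storage account, and credential names are illustrative, and a SQL Server credential for the storage account is assumed to exist on both instances:

```powershell
# Sketch: backup to URL from the source, restore from URL on the Azure VM
# (instance, storage account, and credential names are illustrative)
$Url = "https://mystorageacct.blob.core.windows.net/dbbackups"

# Back up directly from the on-premises source to blob storage
Backup-DbaDatabase -SqlInstance "SQL01" -Database "Accounts" -Type Full `
    -AzureBaseUrl $Url -AzureCredential "mystorageacct"

# Restore on the Azure IaaS VM straight from the same blob
Restore-DbaDatabase -SqlInstance "AzureVM-SQL01" -DatabaseName "Accounts" `
    -Path "$Url/Accounts.bak" -AzureCredential "mystorageacct"
```

No intermediate copy or temporary storage on either server is needed; the blob container is the only staging area.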

In disconnected environments, where the Azure portion is distinct from the on-premises environment, the method is the same. However, the implementation differs, as you will need to connect to the target systems via a public route such as RDP to perform the restore operations. It isn’t advisable to expose the SQL Server systems to the Internet via public IP addresses on Azure VMs.


Transaction Log Shipping

If you have a connected hybrid environment, then transaction log shipping is a viable option for migrating databases to Azure VMs. Log shipping is a tried-and-tested technology for creating copies of databases; it can leverage many of the new features, such as backup encryption introduced with SQL Server 2014. However, it doesn’t support the native use of Azure Blob storage for holding the backup files.

There are three options to configure log shipping regarding storage for backups:

  • Provision the required storage on the SQL Servers that will hold the transient backup files.

  • Stand up a dedicated file server within the environment that can be accessed by both source and target servers.

  • Make use of an Azure file share and connect it to the source and target SQL Servers (if the operating system supports the use of this capability).


Whether migrating to a single Azure VM or to a multi-replica Always On Availability Groups configuration, log shipping can add a lot of value to the process. Once the storage for the backup files is provisioned, it's simply a case of using the native log shipping feature in SQL Server to configure the primary and secondary servers.
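dbatools can configure the whole arrangement in one command; the sketch below uses illustrative instance names and share path, and parameter names may vary slightly between dbatools versions:

```powershell
# Sketch: configure log shipping to prime an Azure VM (names illustrative;
# assumes a connected hybrid environment and the dbatools module)
Invoke-DbaDbLogShipping -SourceSqlInstance "SQL01" `
    -DestinationSqlInstance "AzureVM-SQL01" `
    -Database "Accounts" `
    -SharedPath "\\fileserver\LogShipping" `
    -GenerateFullBackup -CompressBackup -Force
```

This takes the initial full backup, restores it on the secondary in a restoring state, and creates the backup, copy, and restore jobs on both sides.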

SQL Server log shipping can ship to multiple secondary systems, which makes priming an Availability Groups configuration much simpler. All replicas will be at the same state, so when the primary replica is brought online, the secondary replicas can simply be added to the Availability Group.

Now you need to determine the timing for the shutdown of the application, wait for the final logs to be shipped, and then configure the target system. Once that’s done, the application(s) can be reconfigured to work with the new servers and begin processing again.


Azure Site Recovery

Azure Site Recovery (ASR) is a DR solution that allows physical servers or Hyper-V and VMware VMs to be replicated from on-premises to Azure and converted to Azure IaaS VMs. This capability can also be applied to migrating VMs to Azure without needing to build and configure new servers to host databases. This mechanism isn't one that would typically be recommended; however, when you have particularly complex SQL Server configurations deployed with additional software components, ASR can be the safest way to move to Azure. It's especially relevant should you need to remain on your current SQL Server version and configuration. In the case of SQL Server 2008, ASR can provide a "get out of jail free" card: you can migrate the VM to Azure and receive the additional two years of security updates while you plan an upgrade to a newer version.

Shared storage isn't present in Azure VMs, so FCIs aren't typically used. It's possible to use an FCI in the cloud; however, doing so requires additional storage replication technology such as Storage Spaces Direct or another third-party solution.

There are a few constraints around what can be migrated from on-premises to Azure. Learn more about the supported configurations here. One of the restrictions that is likely to impact SQL Server systems is that clustered disks aren’t supported for migrations. As such, on-premises FCI configurations can’t be migrated to Azure VMs and an alternative mechanism will need to be identified.


Migration Summary

No matter which platform you plan to migrate to, the fundamental process is the same: you want to move databases from one place to another as effectively as possible with minimal manual input. DMS is often the first port of call for those looking to migrate to the Azure platform, whatever the target. The ability to manage projects in the service and leverage the online migration option makes it one of the most effective options to combine with automation routines.

In the event that you’re not able to leverage DMS, there are still many other ways to migrate SQL Server systems to Azure.

Although this section of the guide has focused on the migration technology options, it’s still important to note that data testing and validation should be undertaken during the final migration. The tests and checks that you defined to ensure that the migration is successful need to be completed as part of the actual migration task, too. This will ensure that risks to the migration process are minimized.



Post-Migration Operations


If you encounter issues in the post-migration phase, it’s recommended that you fix forward rather than roll back. Due to the complex nature of database systems, rolling back can result in data loss, and the longer the gap between migration and issue detection, the greater the risk; in extreme cases it can mean the difference between remaining a viable business and not.

After the actual database migration has been completed, the work isn’t done. Now is the time to watch very carefully for any issues related to the migration. The sooner you can identify issues, the lower the overall impact is likely to be. The key to spotting issues early is to have effective monitoring in place that retains pre-migration performance metrics, so you can compare how workloads perform now against how they performed before the move. The last thing you want is to have to deal with the dreaded “The application is slow” tickets.

Additionally, it’s just as important to confirm that maintenance routines and Business Continuity testing are in place and functioning, so that in the event of an outage, you can restore service on the new platform.

Post-migration tasks that must be completed are as follows:

  1. Ensure that SentryOne monitoring has been updated to monitor the new targets and that the old targets are no longer monitored.

    1. It’s prudent to initially stop watching the old targets rather than deleting them, so that historical monitoring data is retained for the old systems. Once you’re comfortable with the new systems, delete the old targets and their associated data.

  2. Update migration documentation in SentryOne Document.

    1. Once the migration is complete, it’s important that the documentation is updated to reflect the new environment. What was captured as a migration facilitator is now an important repository of support information that can help keep the platform running smoothly. It’s now a living document that needs to be maintained. The documentation update can be scheduled via the automatic snapshot capabilities in SentryOne Document.

  3. Transition tests from migration scope to BAU data monitoring and validation.

    1. As with the documentation you collected, the tests now have value to ensure the platform is functioning as required. The tests you build to verify the migration can be incorporated into application monitoring that helps the application owners have increased confidence in their reports and applications.


By ensuring that you carry forward the migration documentation and supporting systems, you can realize benefits in the daily operation of the hybrid or cloud platform that you have migrated to.



Earlier, we discussed how having monitoring in place early in the migration planning process can solve many problems as you test and validate workloads on the new systems. You also need to ensure that once the workload is active on the new platform, its performance is in line with your pre-migration baselines and performance thresholds.

It’s also vital that you’re monitoring for discrepancies in the data contained in the migrated systems. This is especially true when you had to make modifications to the databases or applications to support the new version of SQL Server that you’re now running.


Performance Monitoring

The first thing that end users will complain about is performance. Without metrics to back up your position that the system is functioning well within expected parameters, you face an uphill battle.

For more information about how to use SentryOne to proactively identify performance issues before they reach production, check out this webinar with a SentryOne partner.

Therefore, it’s important to have a platform that spans pre- and post-migration configurations. SentryOne SQL Sentry provides powerful, scalable performance monitoring for SQL Server on Windows and Linux, Azure SQL Database, and Managed Instance, with all the data stored in one central database. As such, it’s possible to have a single pane of glass spanning pre- and post-migration. SentryOne Advisory Conditions are ideal here: they compare historical performance data and baselines against the current workload profile and alert when a deviation is detected.


Data Integrity Monitoring

The area that can cause the most problems in a migration scenario is when data changes go wrong and incorrect data is stored. This is one of the more insidious issues that can arise, as it isn’t typically noticed for some time, at which point there’s almost no chance of rolling back. The sooner that you can spot this issue and remediate it, the better the situation will be. Incorrect data can be stored for any number of reasons, ranging from incorrect collation to the language used by the application login.
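One common cause, collation mismatches, is quick to check from the system catalog. A simple query such as the following lists any databases whose collation differs from the server default:

```sql
-- Compare the server default collation with each database's collation;
-- mismatches introduced during a migration are a frequent source of
-- subtle sorting, comparison, and data-storage issues.
SELECT SERVERPROPERTY('Collation') AS server_collation,
       name                        AS database_name,
       collation_name              AS database_collation
FROM sys.databases
WHERE collation_name <> CONVERT(sysname, SERVERPROPERTY('Collation'));
```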

The quickest and easiest way to prevent data quality issues is to take the test suites that you created as part of the migration validation and testing phase and transition them from integration tests to production data monitoring tests.

Another advantage of transitioning your migration testing into production data monitoring is that you can incorporate the test results into your reporting and ETL routines. One of the simplest things you can do is add functionality to your reports that gives a confidence factor. If all tests complete successfully, then confidence is marked as high. If there are failures, the issues can be conveyed to the application owners. Confidence can be indicated by a traffic light system or percentages depending on the metric you want to use.
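As a sketch of the traffic-light approach, the query below derives a confidence level from the pass rate of the latest test run. The table and column names are hypothetical; adapt them to wherever your test framework stores its results:

```sql
-- Hypothetical results table: dbo.DataValidationResults (run_id, passed).
-- All tests passing = Green; 90%+ passing = Amber; otherwise Red.
SELECT CASE
           WHEN SUM(CASE WHEN passed = 1 THEN 1 ELSE 0 END) = COUNT(*)
               THEN 'Green'
           WHEN SUM(CASE WHEN passed = 1 THEN 1 ELSE 0 END) * 100.0
                / COUNT(*) >= 90
               THEN 'Amber'
           ELSE 'Red'
       END AS report_confidence
FROM dbo.DataValidationResults
WHERE run_id = (SELECT MAX(run_id) FROM dbo.DataValidationResults);
```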


Maintenance Activities

The range of maintenance activities that you will need to configure will vary, depending on the platform you have migrated to. For the PaaS solutions on Azure SQL Database and Managed Instance, database backups and integrity checking are handled by the platform.

The architecture of both solutions is designed around ensuring storage consistency; therefore, there’s no need to run DBCC CHECKDB on Azure SQL Database or Managed Instance. You will, however, still need to run this operation if you’re using an Azure VM.
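On an Azure VM, the check is run per database in the usual way; the database name here is illustrative:

```sql
-- Integrity check for a user database on SQL Server in an Azure VM
-- (not needed on Azure SQL Database or Managed Instance).
-- NO_INFOMSGS suppresses informational messages so only errors surface;
-- ALL_ERRORMSGS reports every error found per object.
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```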

Likewise, backup operations are handled natively by Azure SQL Database and Managed Instance. But how do you know that they’re being completed? This is where monitoring provides a level of visibility. SentryOne monitors the backup operations not only on SQL Server in VMs but also on Azure SQL Database and Managed Instance. This monitoring provides a level of confidence that the backups are happening. It also allows you to identify if the backup operations impact the performance of your database workloads.
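For SQL Server running in a VM, backup history remains visible in msdb, so a quick sanity check alongside your monitoring might look like this (on Azure SQL Database, automatic backups are surfaced through the service instead):

```sql
-- Most recent full backup per database; a NULL last_full_backup
-- flags a database that has never been backed up on this instance.
SELECT d.name                        AS database_name,
       MAX(b.backup_finish_date)     AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'               -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_full_backup;
```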


Index and Stats Maintenance

One area that consistently gets overlooked on PaaS solutions is index and statistics maintenance. There’s a misconception that Azure SQL Database and Managed Instance just handle everything. However, they’re still based on the same engine as on-premises SQL Server, and they need reliable statistics to be able to make decisions about how to execute queries.
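A quick way to see whether statistics are going stale, on Azure SQL Database, Managed Instance, or a VM, is to query the statistics properties DMF:

```sql
-- List statistics with pending modifications, oldest updates first.
-- A large modification_counter paired with an old last_updated date
-- signals statistics the optimizer may no longer be able to trust.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS statistic_name,
       sp.last_updated,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.last_updated;
```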

For SQL Server systems that are running in IaaS VMs, SentryOne can provide deep insight into index utilization as well as managing any fragmentation found by the analysis.

For Azure SQL Database, you can use Elastic Database Jobs to schedule index and statistics maintenance over as many of your Azure SQL Databases as you want. It’s a highly flexible solution that can be configured to enumerate elastic pools, Azure SQL servers, or shard maps to detect new databases. This functionality, in combination with deploying scripts such as Ola Hallengren’s maintenance solution, allows you to manage maintenance operations effectively.
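A minimal Elastic Database Jobs setup runs in the job database and might look like the following; the server, credential, and job names are illustrative, and the step assumes Ola Hallengren’s IndexOptimize procedure is already deployed to each target database:

```sql
-- Target group that enumerates every database on a logical server.
EXEC jobs.sp_add_target_group
     @target_group_name = N'AllUserDatabases';
EXEC jobs.sp_add_target_group_member
     @target_group_name       = N'AllUserDatabases',
     @target_type             = N'SqlServer',
     @server_name             = N'myserver.database.windows.net',
     @refresh_credential_name = N'refreshcred';

-- Job and step that run index and statistics maintenance on each target.
EXEC jobs.sp_add_job
     @job_name    = N'NightlyIndexMaintenance',
     @description = N'Index and statistics maintenance';
EXEC jobs.sp_add_jobstep
     @job_name          = N'NightlyIndexMaintenance',
     @command           = N'EXECUTE dbo.IndexOptimize
                               @Databases = ''USER_DATABASES'',
                               @UpdateStatistics = ''ALL'';',
     @credential_name   = N'jobcred',
     @target_group_name = N'AllUserDatabases';
```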

Managed Instance includes SQL Agent, so you can transfer your existing agent jobs and scripts from your on-premises SQL Server systems and continue your maintenance operations unchanged. This continuity is important for ensuring that your workload performance is as you would expect.
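Because the Agent job surface is the familiar msdb one, existing jobs can be scripted out on-premises and recreated on the Managed Instance. A minimal sketch, assuming Ola Hallengren’s DatabaseIntegrityCheck procedure is deployed and using an illustrative job name:

```sql
-- Recreate a nightly integrity-check job via the standard msdb procedures.
EXEC msdb.dbo.sp_add_job
     @job_name = N'NightlyIntegrityCheck';
EXEC msdb.dbo.sp_add_jobstep
     @job_name  = N'NightlyIntegrityCheck',
     @step_name = N'CheckDb',
     @subsystem = N'TSQL',
     @command   = N'EXECUTE dbo.DatabaseIntegrityCheck
                        @Databases = ''USER_DATABASES'';';
-- Associate the job with the local server so the Agent will run it.
EXEC msdb.dbo.sp_add_jobserver
     @job_name = N'NightlyIntegrityCheck';
```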


Business Continuity Testing

The final stage of the migration process is to ensure that all staff are familiar with what it takes to bring the system back online in the event of an outage from the cloud vendor. Outages can and will happen, so your database team needs to be prepared for them.

Effective documentation is essential to Business Continuity. Now that you have built a documentation repository for the systems that you have migrated, you can use it to enhance your Business Continuity response.

SentryOne Document offers scheduled environment scanning to detect changes and allows you to compare scans to help identify drift in your environment. The added benefit of SentryOne Document is that it puts the ownership of maintaining the documentation in the hands of application owners and SMEs via a lightweight web interface.

The information that SentryOne Document provides will allow the operations staff to recreate an environment rapidly in the event of a disaster. And, it allows the operations staff to identify affected components should an outage occur.




Whether you migrate to IaaS VMs, Azure SQL Database, or Azure SQL Database Managed Instance, you now have established patterns and practices that you can implement to minimize risk and increase the chance of success with your cloud migration.

Through a combination of tried and tested methodologies and new Azure services supported by SentryOne solutions, it’s possible to accelerate and de-risk cloud migration activities to build reliable and efficient data platform solutions in the cloud.




Additional Resources

Eating the Elephant: Database Cloud Migration

SentryOne Chief Strategy Officer Douglas McDowell and SentryOne Solutions Engineer Devon Leann Wilson address the options, concerns, and benefits surrounding the many paths to migrating your on-premises and virtualized SQL Server workloads into the cloud.

View On-Demand Webinar

Azure SQL Database Managed Instance: A New Path to the Cloud

Learn how to determine if Azure SQL Database Managed Instance is right for your workload, as well as what the options are for moving SQL Server workloads from on-premises to Managed Instance.


View On-Demand Webinar

Evaluate Cloud Database Performance Using TPC Benchmark Scores

Kevin Kline, SentryOne Principal Program Manager, steps through the TPC benchmarks and gives actionable tips on how to use TPC benchmarks to evaluate cloud providers.

View On-Demand Webinar
