Saving client $800,000 during migration to Amazon RDS: Part 2
In our previous post, we shared an exciting success story about migrating a client’s SQL Server environment from Amazon EC2 to Amazon RDS. Migrating over 600 databases presented a significant challenge, and while AWS Database Migration Service (DMS) was initially considered, technical roadblocks led us to take a different approach. By leveraging a backup/restore method, we not only streamlined the migration to Amazon RDS but also uncovered substantial cost-saving opportunities. By choosing RDS Custom over RDS Standard, we helped the client reduce their annual infrastructure costs by over $600,000.
Beyond cost savings, the migration to Amazon RDS facilitated a strategic upgrade to SQL Server 2022, improving performance and future-proofing the client’s environment. In this post, we’ll walk through the details of how we executed the migration while minimizing downtime and disruption for hundreds of databases.
The Migration to Amazon RDS
Understanding the Client’s Environment
Our client operates on a multi-database model, where each customer has a dedicated database, all managed through a central pointer database. This pointer database directs the application to the correct database location across six EC2 instances. When migrating the databases, we needed to update this pointer database to reflect the new locations.
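To make the pointer mechanism concrete, here is a minimal T-SQL sketch. The table and column names (CustomerRouting, ServerName, and so on) are hypothetical stand-ins, not the client’s actual schema:

```sql
-- Hypothetical pointer-database table: one row per customer database,
-- recording which instance currently hosts it.
CREATE TABLE dbo.CustomerRouting (
    CustomerId    INT           NOT NULL PRIMARY KEY,
    DatabaseName  SYSNAME       NOT NULL,
    ServerName    NVARCHAR(256) NOT NULL  -- EC2 hostname before cutover, RDS endpoint after
);

-- At cutover, repoint the migrated databases to the new RDS Custom endpoint.
UPDATE dbo.CustomerRouting
SET    ServerName = N'myapp-rds-custom.example.us-east-1.rds.amazonaws.com'
WHERE  ServerName = N'ec2-sql-01.internal';  -- the old EC2 host being retired
```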
Additionally, the client had demo and training databases in their production environment. Some customer databases were more sensitive to downtime than others, making it crucial to design a migration strategy that minimized service interruptions. To achieve this, we categorized the databases into seven migration groups, with each group ranging from 4 to 90 databases.
Solution: Executing a Seamless Migration
As discussed in our last post, we used a backup/restore process to facilitate the migration. Since the client was on SQL Server 2017, direct backup to S3 wasn’t an option (this feature became available with SQL Server 2022). Instead, we implemented the following approach:
- Backup to Local Drive – We took full backups on the day of cutover, ahead of the actual migration window.
- Transfer to S3 – PowerShell scripts were used to copy these backups to AWS S3.
- Restore on RDS Custom – The backups were restored from S3 to the new RDS environment.
- Minimizing Downtime – Databases were restored WITH NORECOVERY, allowing us to take differential backups at the exact cutover time. Once the differential backups were restored, the databases were brought online on RDS. This sequence is sketched below.
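Here is a minimal T-SQL sketch of that sequence for a single database. The database name and paths (CustomerDB042, E:\Backups, D:\Backups) are illustrative, not the client’s actual values:

```sql
-- 1. Full backup on the EC2 source, taken hours before cutover.
BACKUP DATABASE CustomerDB042
TO DISK = N'E:\Backups\CustomerDB042_full.bak'
WITH COMPRESSION, CHECKSUM;

-- (The .bak file is then copied to S3 and pulled down to the RDS Custom host.)

-- 2. Restore on RDS Custom WITH NORECOVERY so a differential can still be applied.
--    (MOVE clauses for the data/log files are discussed later in this post.)
RESTORE DATABASE CustomerDB042
FROM DISK = N'D:\Backups\CustomerDB042_full.bak'
WITH NORECOVERY;

-- 3. At the cutover time, take a differential on the source
--    (captures only the changes since the full backup, so it is small and fast).
BACKUP DATABASE CustomerDB042
TO DISK = N'E:\Backups\CustomerDB042_diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- 4. Apply the differential and bring the database online on RDS.
RESTORE DATABASE CustomerDB042
FROM DISK = N'D:\Backups\CustomerDB042_diff.bak'
WITH RECOVERY;
```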
Our approach emphasized automation and efficiency. We created dynamic scripts to handle the migration of over 600 databases quickly and smoothly, reducing manual intervention and the risk of errors.
Solution Components: Key Resources and Access
To execute this migration successfully, we required the following:
- RDS Custom Instances – The new environment for hosting the databases.
- S3 Bucket & Access Key – For securely storing and transferring backup files.
- EC2 to S3 Connectivity – Ensuring seamless data transfer.
- SQL Server Credentials – Created on the RDS instances to facilitate direct interaction with S3.
Once access was verified, we configured the RDS instances and S3 bucket to enable SQL Server backups directly to S3, allowing for smoother database refreshes in lower environments post-migration.
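SQL Server 2022 supports backing up directly to S3-compatible object storage via a credential whose name matches the S3 URL. A minimal sketch of that setup follows; the bucket name is a placeholder, and the access key values must be supplied:

```sql
-- Credential name must match the s3:// URL (or a prefix of it).
CREATE CREDENTIAL [s3://mybucket.s3.us-east-1.amazonaws.com/sqlbackups]
WITH IDENTITY = 'S3 Access Key',
     SECRET   = '<AccessKeyID>:<SecretAccessKey>';

-- With the credential in place, backups can target S3 directly --
-- handy for the post-migration refreshes of lower environments.
BACKUP DATABASE CustomerDB042
TO URL = 's3://mybucket.s3.us-east-1.amazonaws.com/sqlbackups/CustomerDB042.bak'
WITH COMPRESSION, CHECKSUM;
```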
Solution: Scripts and Migration Master Table
While the high-level migration process appeared straightforward, complexities arose due to RDS storage constraints. Unlike EC2, where database files can be placed across different drives, RDS stores all database files on the D:\ drive. This required incorporating MOVE statements in our restore scripts.
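For example, a single generated restore that relocates the files to the D:\ drive looks like this. The logical file names and the D:\rdsdbdata\DATA path are illustrative; the exact directory layout on your RDS Custom instance may differ:

```sql
RESTORE DATABASE CustomerDB042
FROM DISK = N'D:\Backups\CustomerDB042_full.bak'
WITH MOVE N'CustomerDB042_Data' TO N'D:\rdsdbdata\DATA\CustomerDB042.mdf',
     MOVE N'CustomerDB042_Log'  TO N'D:\rdsdbdata\DATA\CustomerDB042.ldf',
     NORECOVERY;
```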
To streamline the process, we developed a Migration Master Table (sketched after this list) that stored:
- Database name
- Logical and physical file names (MDF and LDF)
- Migration group assignments
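A minimal version of that table might look like the following; the column names and types are our reconstruction for illustration, not the client’s actual DDL:

```sql
CREATE TABLE dbo.MigrationMaster (
    DatabaseName     SYSNAME       NOT NULL PRIMARY KEY,
    DataLogicalName  SYSNAME       NOT NULL,  -- logical name of the MDF
    DataPhysicalName NVARCHAR(260) NOT NULL,  -- target data-file path on the D:\ drive
    LogLogicalName   SYSNAME       NOT NULL,  -- logical name of the LDF
    LogPhysicalName  NVARCHAR(260) NOT NULL,  -- target log-file path on the D:\ drive
    MigrationGroup   TINYINT       NOT NULL   -- 1 through 7
);
```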
Automating the Migration
With the master table in place, we built scripts (see the sketch after this list) that dynamically generated:
- Backup Scripts – For both full and differential backups.
- Restore Scripts – Handling full and differential restores, ensuring databases were properly restored and brought online.
- Pre- and Post-Migration Tasks – Automating steps like setting databases to single-user mode, altering compatibility levels to SQL Server 2022, and fixing orphaned users.
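As an illustration of the dynamic approach, the sketch below generates the restore commands for one migration group straight from the master table. It assumes the hypothetical dbo.MigrationMaster schema shown earlier; real migration scripts would add error handling and logging:

```sql
DECLARE @group TINYINT = 3;  -- migration group being cut over

-- Generate one RESTORE ... WITH MOVE statement per database in the group.
SELECT N'RESTORE DATABASE ' + QUOTENAME(DatabaseName) +
       N' FROM DISK = N''D:\Backups\' + DatabaseName + N'_full.bak''' +
       N' WITH MOVE N''' + DataLogicalName + N''' TO N''' + DataPhysicalName + N''',' +
       N' MOVE N''' + LogLogicalName + N''' TO N''' + LogPhysicalName + N''',' +
       N' NORECOVERY;'
FROM dbo.MigrationMaster
WHERE MigrationGroup = @group;

-- Post-migration tasks are generated the same way, e.g. the compatibility-level bump.
SELECT N'ALTER DATABASE ' + QUOTENAME(DatabaseName) +
       N' SET COMPATIBILITY_LEVEL = 160;'  -- SQL Server 2022
FROM dbo.MigrationMaster
WHERE MigrationGroup = @group;
```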
By leveraging these automated scripts, we significantly reduced migration time. The entire process was executed efficiently, allowing us to complete the migration two hours ahead of the allotted four-hour downtime window.
The Takeaway
This migration project was a major success, demonstrating how strategic planning, automation, and the right AWS services can drive significant improvements in cost, performance, and scalability. Here are the key lessons learned:
- Automation is Key – Dynamic scripting helped streamline the migration of 600+ databases, reducing manual effort and risk.
- Cost Optimization Matters – Choosing RDS Custom over RDS Standard saved the client over $600,000 annually.
- Downtime Minimization is Critical – By leveraging full and differential backups with scripted execution, we reduced service disruptions and completed the migration ahead of schedule.
- Future-Proofing with SQL Server 2022 – The upgrade provided enhanced performance, security, and long-term scalability.
Migrating 600+ SQL Server databases from EC2 to RDS Custom was no small feat, but with the right strategy and automation in place, we achieved a seamless transition while delivering substantial cost savings and performance improvements. This project highlights the importance of planning, scripting, and leveraging AWS services effectively to overcome complex migration challenges.
Stay tuned for our next post, where we’ll dive deeper into post-migration tasks.
