The Ultimate Guide to Cloud Migration – Part 2

By Simon Mohr, CEO at SEM Solutions

This is the second part in a three-part series, taken from ‘The Ultimate Guide to Cloud Migration’ by Simon Mohr and Databarracks. Did you miss Part One? You can also download the full whitepaper here.

A to B in Six Easy Steps

1. Planning

Planning for a migration is not significantly different from planning for any other major IT project, particularly if it’s already built into the total cost of ownership. Companies will start with an initial need and then outline a roadmap towards the solution, generally composed of:

  • Starting conditions
  • Desired end results
  • Processes of change
  • Testing (of both functionality and performance)
  • Reporting to measure success

However, given the intensive (and intrusive) nature of migrations, migration partners also require a degree of transparency and disclosure not usually needed in other projects – of both the reasons for the migration and of the initial shape of the IT environment itself.

For instance, certain initial conditions, such as ageing infrastructure, a lack of disk space or recurring downtime, must be factored into the migration strategy. These types of issues can often be planned around, but failing to highlight them at an early stage could unexpectedly impact image quality or the ability to complete synchronisations successfully.

2. Getting Access

Actually getting access to the company’s systems can be quite complicated, depending on the day-to-day hygiene of the client’s IT environment. It relies on strong password management policies and a solid understanding of access criteria for routers and servers. Essentially, they need a bunch of keys to hand over and they need to know what each key does. This is usually a good indicator of how well the business knows its own systems, and can foreshadow potential problem areas later in the migration.

Once those keys have been handed over, the migration specialist will then need to perform a gap analysis to assess how closely the customer’s idea of their IT estate aligns with reality. For instance, it’s fairly common for a company to employ a number of different developers over the lifetime of a single server. During that period, each of the developers could have been building scripts and uploading backups independently of one another, without ever informing the wider business.

So, out of a hypothetical terabyte of storage, three quarters of it could be taken up with archived zip files simply because there was nowhere better to back up that data at the time.

This is a fairly common situation, and it’s not at all unusual for companies to have a very poor understanding of their own environment – particularly given the shrinking numbers of technical and system administration jobs in modern IT departments.
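The disk audit that begins a gap analysis like this can be sketched in a few lines. The following is a minimal illustration rather than a production tool – the file extensions, root path and size threshold are all assumptions to be adapted to each estate:

```python
import os

def audit_archives(root, extensions=(".zip", ".tar.gz", ".bak"), min_mb=100):
    """Walk a directory tree and report large archive/backup files,
    largest first, so forgotten developer backups surface quickly."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(extensions):
                path = os.path.join(dirpath, name)
                size_mb = os.path.getsize(path) / (1024 * 1024)
                if size_mb >= min_mb:
                    hits.append((path, round(size_mb, 1)))
    return sorted(hits, key=lambda h: h[1], reverse=True)
```

Running something like this against each server early on gives both parties a shared, factual picture of what actually needs to move – and what can simply be deleted.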

3. Proof of Concept

Essentially, the proof of concept stage is a dry run of the migration model itself, executed on a sample set of data, without hitting the ‘Go Live’ button at the end. This can be configured around a single transaction or event just to demonstrate the functionality of the migration path, or be more complex depending on the number and variety of systems being moved.

If the move isn’t complicated, this can be a very fast piece of due diligence. Alternatively, the proof of concept stage can be a good point at which to align expectations with what is possible given the timeframe, budget and resources allowed. It can become a stage at which the migration partner says ‘We’re not going to commit to complete this – what we’re committing to is establishing whether or not this is possible.’

4. Carrying out the migration – Imaging or Domain-by-Domain?

Historically, migrating a large amount of data was a case of putting a disk drive in a jiffy bag and calling the courier. Obviously, the kind of downtime this creates isn’t compatible with the high-availability culture of today’s business environment.

Consequently, there are two fundamental ways to migrate non-disruptively, which can be loosely summarised as either moving all the data at once in as short a time as possible, or moving discrete elements of the environment in the background.


On the surface, the first method sounds like the obvious choice; a simple copy and paste process completed outside of business hours. Uninformed companies tend to presume they can shut their systems down in the legacy environment on a Friday, copy a clone image to the new environment over the weekend and press ‘Go’ at 6am on Monday so everyone can pick up their emails.

In reality, there are many factors that could affect the quality of the transfer, the consequences of which might not emerge until the destination. This is also the only method with scheduled downtime outside of the Go Live phase – you can’t take an image of a live environment, and the prospect of shutting everything down can be alarming. For many organisations, hitting the ‘off’ switch isn’t something they’ve done before.


Domain migrations are more reliable than imaging, but they take significantly longer. By moving across one system component at a time – such as an Exchange server – the chances of a service outage are significantly reduced. However, this also means the customer has to maintain two environments simultaneously, which can be costly, particularly if the domain is part of a large infrastructure.

The transfer rate is also slower, as the data transfer tends to happen over the internet, with most migration providers allowing around 1 Gb/hour. Managing this is a skill in itself: if the migration can be completed with no impact on existing services, it can be carried out at any time in the background without disrupting the IT environment.
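The arithmetic at that rate explains why domain migrations run in the background for so long. A back-of-the-envelope estimate, using the ~1 Gb/hour figure above (which will vary by provider and link):

```python
def estimate_transfer(total_gb, rate_gb_per_hour=1.0, window_hours_per_day=24):
    """Rough calendar-time estimate for a background data transfer.
    The default rate reflects the ~1 Gb/hour figure quoted above."""
    hours = total_gb / rate_gb_per_hour
    return hours, hours / window_hours_per_day

# A 720 GB estate transferring continuously at ~1 Gb/hour:
hours, days = estimate_transfer(720)  # roughly a month of transfer time
```

Restricting the transfer to an overnight window stretches the calendar time further still, which is part of why the two environments have to be maintained in parallel for weeks or months.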

A good analogy is to think of your IT environment as a house. Migrating data using imaging is the equivalent of picking up the house and moving it to a new location in its entirety. The chances of that house reaching its new destination without a single loose brick or piece of furniture out of place are slim; it will probably need some tweaking and spring cleaning once it arrives. It’s certainly quicker, but during the process you don’t have anywhere to live.

Domain migration is the equivalent of moving the house room by room; much slower, but far more attentive to the respective parts.

It’s very easy to trip up along the way during a migration – to do things without considering the unintended consequences of a particular action. This is resolved by marrying suitable IT skills with a more business-focused outlook – understanding the business impact of the technical decisions being made. A migration is a very specific, contained process with long-term implications, so having someone with a foot in each camp (business and technical) is preferable.

5. Testing

It’s surprising how frequently companies fail to anticipate the possibility that the migration might not have worked perfectly first time round. For many, the testing phase is regarded as just a formality – similar to a car mechanic driving around the block to demonstrate that everything works perfectly.

It’s not rigorous enough.

Testing should be a point at which the migration is given a chance to fail; better in the testing phase than weeks after completion, when the migration specialist will need to be called back in, usually at a cost to the business.

In fact, testing can be significantly more work than the actual move itself, particularly if you’re migrating complex websites with many points of entry, types of interaction and different user groups – those interdependent processes all need to be tested thoroughly. Ideally, organisations will have a shopping list of required tests to carry out at the other end, though again, the loss of internal IT skills can make this unlikely.
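At its simplest, that ‘shopping list’ can start with a reachability sweep across the migrated services. The sketch below assumes HTTP endpoints and uses hypothetical URLs – a real test plan would go much further, covering logins, transactions and each user group’s journeys:

```python
from urllib.request import urlopen

def smoke_test(endpoints, timeout=10):
    """Check that each migrated service answers with HTTP 200.
    `endpoints` maps a service name to its post-migration URL."""
    results = {}
    for name, url in endpoints.items():
        try:
            with urlopen(url, timeout=timeout) as resp:
                results[name] = resp.status == 200
        except OSError:  # covers URLError, refused connections, timeouts
            results[name] = False
    return results

# Hypothetical post-migration checklist:
# smoke_test({"web": "https://www.example.com/", "crm": "https://crm.example.com/login"})
```

A sweep like this is deliberately cheap to run repeatedly – the point, as above, is to give the migration a chance to fail while the specialist is still on hand.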

Ultimately, it depends entirely on size, preparedness and complexity. Testing might require as little as half a day or, at the other end of the scale, up to three months for an organisation with thousands of clients that can’t risk downtime.

This is also perhaps the only stage at which there’s a risk of the customer being too cautious, so it’s important to make a clear distinction between ‘testing’ here and ‘staging’ in more traditional projects.

Testing in a safe, staged development or test environment is an integral part of most IT projects, so companies are often surprised to find this is not the case with migrations. The testing phase, though not taking place in a discrete environment, fulfils this need completely as long as adequate resources are devoted to it. Carrying out additional staging as a precautionary measure is a duplication of effort.

6. Go Live

The go-live phase is a very important part of the process and the only point (other than in an imaging migration) where scheduled downtime occurs. It’s at this point that all the file structures and databases are verified and checked for changes.

Many organisations are tempted to go live very late at night on a weekend because of a presumed sense of security – if something goes wrong, there’s plenty of time to fix it before work starts again on Monday.

In reality though, this is often a mistake, for two reasons. Firstly, if the stages prior to the go live point have been completed exhaustively, no more errors should occur. Secondly, in the unlikely event something does go wrong, it’s incredibly difficult to get in contact with the relevant web developers and engineers in the middle of the night at the weekend. General best practice dictates that the optimum time is around 7am on a Monday morning, leaving a good amount of time to rectify any issues without causing much disruption.

Next time we’ll be covering how some of the world’s biggest migrations are done, along with some of the most common migration misconceptions. You can download the full whitepaper here.
