Multicloud strategy | The five steps to transition

As early adopters undertake multicloud strategies, many other companies are beginning to plot their own courses from legacy to modern architectures. While some will adopt a big-bang approach, attempting the migration in one fell swoop, successful enterprises will embark on a more measured endeavour, using the natural ebbs and flows of enterprise IT to make the transition gracefully.

While every path will be a bit different, here are five steps that enterprises can follow to start building a multicloud strategy.


Make the underlying network multicloud-ready

Multicloud is mostly about operational changes. Making distributed pools of resources behave as one cohesive infrastructure requires over-the-top orchestration and automation, end-to-end across the entire enterprise network from the data centre and public cloud to the cloud on-ramps in the campus and branch.

None of that is possible unless the underlying network can plug into an end-to-end orchestration platform. So enterprises looking to build multicloud networks should begin by ensuring their underlays are multicloud-ready.

There are two things that must be true of multicloud underlays. First, they must offer a rich set of open APIs that make the devices programmable, including standards like NETCONF. Without ubiquitous API support across all the devices in a multicloud architecture, the orchestration layer lacks the reach required to deliver. Second, given the critical role of automation in multicloud, devices need to support real-time streaming telemetry using standard mechanisms like gRPC.
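The readiness check above can be sketched as a simple inventory audit. This is a minimal, hypothetical example: the device names and capability labels are illustrative and not tied to any vendor's actual API.

```python
# Minimal sketch: screen a device inventory for multicloud-underlay readiness.
# Device records and capability names are illustrative, not a real vendor schema.

REQUIRED_CAPABILITIES = {"netconf", "grpc-telemetry"}

inventory = [
    {"name": "dc-spine-1",   "capabilities": {"netconf", "grpc-telemetry", "snmp"}},
    {"name": "branch-rtr-7", "capabilities": {"snmp", "cli-only"}},
    {"name": "campus-sw-3",  "capabilities": {"netconf", "grpc-telemetry"}},
]

def underlay_ready(device):
    """A device is orchestration-ready only if it exposes every required open interface."""
    return REQUIRED_CAPABILITIES <= device["capabilities"]

ready = [d["name"] for d in inventory if underlay_ready(d)]
gaps = {d["name"]: sorted(REQUIRED_CAPABILITIES - d["capabilities"])
        for d in inventory if not underlay_ready(d)}

print(ready)  # devices the orchestration layer can reach today
print(gaps)   # what each remaining device is missing, to target at refresh time
```

An audit like this also feeds the refresh planning discussed below: the `gaps` output is effectively a shopping list for the next hardware cycle.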

Of course, no enterprise can rip and replace its entire end-to-end network infrastructure. The key is to use every refresh and expansion opportunity to make the underlay more multicloud-ready.


Embrace open fabrics across the data centre and campus

Network fabrics allow operators to manage groups of devices as a single unit. The automation built into these fabrics is a solid foundation for the more automated infrastructure required to operate as a multicloud.

But the objective in adopting fabrics is not merely to abstract the network. Rather, enterprises should look to unify their infrastructure using open protocols that support heterogeneous IP fabrics. With EVPN, network teams can build IP fabrics in a multi-vendor world, providing a common underlay technology on top of which overlays can be managed.

By reducing protocol diversity in the underlying infrastructure, enterprises simplify the underlay in preparation for a unified underlay-overlay management solution.
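Measuring that protocol diversity is a reasonable first audit. The sketch below is illustrative: the fabric names and protocol labels are hypothetical, and it simply flags fabrics whose underlay deviates from the most common choice.

```python
# Minimal sketch: measure underlay protocol diversity across fabrics and flag
# outliers. Fabric names and protocol inventories are illustrative.
from collections import Counter

fabrics = {
    "dc-east":  {"underlay": "ospf", "overlay": "evpn-vxlan"},
    "dc-west":  {"underlay": "ebgp", "overlay": "evpn-vxlan"},
    "campus-1": {"underlay": "ebgp", "overlay": "evpn-vxlan"},
}

# Pick the dominant underlay protocol as the convergence target.
counts = Counter(f["underlay"] for f in fabrics.values())
target = counts.most_common(1)[0][0]

# Fabrics still running something else are candidates for the next refresh.
outliers = [name for name, f in fabrics.items() if f["underlay"] != target]

print(target)    # the protocol most of the estate already runs
print(outliers)  # fabrics to migrate toward the common underlay
```

The smaller the `outliers` list becomes, the closer the estate is to a single underlay-overlay model a unified management platform can reason about.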


Introduce controller-based management

At some point in a multicloud strategy, orchestration needs to move to a central platform. While it might be technically possible to leap straight from largely CLI-based operations to an SDN-driven orchestration model, the reality is that such a shift involves more than just technology.

The networking industry has long managed networks through precise control over thousands of configuration knobs. Centralized management represents an entirely new operational model. Using intent-based management will force teams to abstract what they do, and that will require changes to both process and people.
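The essence of that abstraction can be shown in a few lines: one declarative intent expands into the per-device knobs a team once set by hand. This is a hypothetical sketch; the intent schema and the CLI-style lines it renders are illustrative, not a real controller API.

```python
# Minimal sketch: one declarative intent rendered into per-device configuration.
# The intent schema and config syntax are illustrative, not a real controller API.

intent = {
    "segment": "guest-wifi",
    "vlan": 300,
    "devices": ["campus-sw-1", "campus-sw-2"],
}

def render(intent):
    """Expand a single intent into the device-level settings it implies."""
    configs = {}
    for device in intent["devices"]:
        configs[device] = [
            f"vlan {intent['vlan']} name {intent['segment']}",
            f"interface access-ports switchport vlan {intent['vlan']}",
        ]
    return configs

cfg = render(intent)
for device, lines in cfg.items():
    print(device, lines)
```

The operational change is that teams now own the `intent` and the `render` logic rather than the individual lines on each box, which is precisely the shift in process and people the text describes.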

Introducing controller-based management in either the data centre or the campus is a good way to get teams more familiar with a different mode of operating. Of course, whatever platform is selected should also be capable of operating in a multicloud environment to avoid an operational rip and replace when the enterprise is ready to manage the whole network from end to end.


Familiarize the team with public cloud workloads

Multicloud will certainly involve public cloud workloads, so moving a few select applications to the cloud is an important step in the journey. 

For most enterprises, the first move to public cloud is largely an exercise in lift and shift: the easiest path is to shift workloads that previously ran in the private cloud over to a public cloud instance. This will not drive huge benefits in terms of either agility or cost savings; running infrastructure the same way on a different set of resources is not transformative. However, it will let teams familiarize themselves with the different set of tools required to operate within the public cloud.

Solving for connectivity between the private data centre and virtual private cloud (VPC) instances, for example, will allow teams to extend their orchestration platform into the cloud arena. Working with templating tools like CloudFormation or Terraform will also give teams exposure to tooling closer to the DevOps space.
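To make the templating idea concrete, here is a minimal sketch that assembles a CloudFormation-style template for a VPC programmatically. The logical resource name and CIDR block are illustrative choices, not values from any real deployment.

```python
import json

# Minimal sketch: programmatically assembling a CloudFormation-style template
# for a VPC, the kind of declarative artifact teams encounter when extending
# orchestration into the public cloud. Names and CIDR are illustrative.

def vpc_template(cidr):
    """Return a minimal template declaring a single VPC resource."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {
                    "CidrBlock": cidr,
                    "EnableDnsSupport": True,
                },
            }
        },
    }

template = vpc_template("10.20.0.0/16")
print(json.dumps(template, indent=2))
```

The point is less the specific syntax than the habit it builds: infrastructure expressed as versionable, reviewable artifacts rather than hand-entered configuration.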

Enterprises should also try multiple public clouds, which will force the thoughtful architecture of an operational model that supports policy and control across diverse environments. Ideally, the orchestration platform of record will extend easily to multiple public cloud instances, allowing enterprises to begin operating as a multicloud.


Instrument everything for multicloud

Multicloud operations rely on automation in a way that traditional infrastructure historically hasn’t. The most basic way to think about automation is this: see something, do something. If an enterprise does not have end-to-end visibility, a meaningful part of the multicloud promise simply cannot be achieved. 

And while automation is a long journey that will itself require careful thought, instrumenting everything so that data can be shared across an end-to-end infrastructure is a great first step. Enterprises should decide on the tools required to provide end-to-end visibility, along with the collection mechanisms and the event-driven infrastructure required to take action once a condition is detected.
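The "see something, do something" loop above can be sketched as a tiny event-driven pipeline. Everything here is hypothetical: the telemetry samples, the metric names, the threshold, and the remediation hook are illustrative stand-ins for a real collection and response system.

```python
# Minimal sketch of "see something, do something": telemetry samples feed a
# condition check, and a matching condition triggers an action. Metric names,
# the threshold, and the remediation hook are all illustrative.

samples = [
    {"device": "dc-leaf-4",   "metric": "if-errors", "value": 2},
    {"device": "dc-leaf-4",   "metric": "if-errors", "value": 250},
    {"device": "campus-sw-1", "metric": "cpu-pct",   "value": 41},
]

ERROR_THRESHOLD = 100
actions = []

def remediate(device):
    # Placeholder action: a real pipeline might drain the link or open a ticket.
    actions.append(f"drain-and-alert:{device}")

for sample in samples:  # "see something"
    if sample["metric"] == "if-errors" and sample["value"] > ERROR_THRESHOLD:
        remediate(sample["device"])  # "do something"

print(actions)
```

Without end-to-end visibility, the `samples` stream simply never carries the condition, and the action side of the loop has nothing to act on; that is why instrumentation comes first.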


Putting it all together

Enterprises need to understand that multicloud is going to be a sophisticated journey. It is essential that the path is broken down into individual steps small enough to take without overwhelming the team. While the technology is new and can seem daunting, the real changes will be to the people operating the infrastructure. The silos that have defined teams for decades will need to come down, and that requires a different way of not just operating, but also organizing. 

The path is difficult, but if companies dutifully take advantage of every opportunity to make themselves more multicloud-ready, they will find that multicloud can be a natural outcome of graceful technology adoption.

Mike Bushong is Vice President of cloud and enterprise marketing at Juniper Networks. Mike spent 12 years at Juniper in a previous tour of duty, running product management, strategy, and marketing for Junos Software. In that role, he was responsible for driving Juniper's automation ambitions and incubating efforts across emerging technology spaces (notably SDN, NFV, virtualization, portable network OS, and DevOps). After the first Juniper stint, Mike joined datacenter switching startup Plexxi as the head of marketing. In that role, he was named a top social media personality for SDN. Most recently, Mike was responsible for Brocade's data center business as vice president of data center routing and switching, and then Brocade's software business as vice president of product management, software networking.
