Consider this scenario: an organization with its own data center wants to add an analytics component to an application, one that requires data parallelism at scale. The data center may or may not be able to scale to meet that demand. With a hybrid cloud, the enterprise can offload the analytics compute to the cloud environment, which provides the virtualized hardware, including CPU, memory, general-purpose computation on graphics processing units (GPGPU) and more.

Such scenarios are driving the adoption of container-based technologies for multi-clouds, slowly but surely. It is happening despite some confusion and misunderstanding about the technology. An IBM survey last year found that 41 percent of enterprises, close to half, still do not have a multi-cloud strategy, and only 38 percent have the tools needed to operate a multi-cloud environment.

Without question, Virtual Machines (VMs) will continue to exist for many years. However, for hybrid or multi-cloud development and deployment scenarios, CIOs are increasingly turning to containers, which give enterprises a way to scale deployments as needed. As such, it’s important for CIOs to understand both the near-term benefits of containers and the challenges.

Container conundrums

One of the biggest conundrums has been demonstrating the value of containers when making the transition. In many cases, CIOs will choose containers for new initiatives; however, for existing systems that reside on VMs or bare metal, migrating to a container-based system is more costly. Some CIOs have even had to forgo containers until those costs can be mitigated.

Containers do provide a number of cost benefits over VMs. For instance, the compute for an analytics component does not have to run all of the time. In a VM setting, launching the component on demand means provisioning a VM, which takes several minutes. With Docker, launch time drops to milliseconds, so the component can start almost instantly. With Kubernetes, scaling primitives (called ReplicaSets) are built in, and they behave the same across different environments.
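To make the ReplicaSet idea concrete, here is a minimal sketch of a manifest, assuming a hypothetical analytics component named `analytics-worker` and a placeholder image registry:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: analytics-worker          # hypothetical component name
spec:
  replicas: 3                     # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: analytics-worker
  template:
    metadata:
      labels:
        app: analytics-worker
    spec:
      containers:
      - name: worker
        image: registry.example.com/analytics-worker:1.0  # placeholder image
```

Scaling up or down is a one-line change to `replicas` (or `kubectl scale replicaset analytics-worker --replicas=10`), and the same manifest can be applied unchanged in a local data center or on any public cloud running Kubernetes.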

Additionally, Docker and Kubernetes are environment agnostic, meaning applications are deployed the same way in local data centers and across public clouds. Provisioning the environments is also largely uniform, with only minor differences, making it easy to distribute workloads across clouds – but that can come at a cost.

Typically, provisioning container-based services happens in a matter of milliseconds, which helps control the consumption of resources like hardware, network and storage. Nevertheless, certain specialized workloads, such as complex graph databases, still need to run on VMs and may not adapt well to containers. This has been another challenge IT teams have had to overcome.

Security scares

What’s more, there are critical pressure points that CIOs and IT teams have had to overcome when managing Docker across the enterprise. One of the key challenges of containers is security. VMs are relatively secure and have a smaller attack surface because they are strongly isolated from one another. Containers share the host’s kernel, so they are more vulnerable to attack and carry the possibility of spreading a security issue to other containers on the same host.

Because container technology is relatively new, some developers have a limited understanding of container security. The good news, however, is that there is a committed effort across the industry to tighten container security using multiple approaches, including encryption and process whitelisting. Additionally, best-practice guidelines for container security are now available to developers. Still, security has been a bone of contention for many CIOs and enterprises.

Containers are DevOps’s best friend

Containers provide a new way to approach application development for hybrid cloud platforms from a DevOps perspective, too. Enterprises that have not turned to DevOps in the cloud are now a small minority: according to a recent study, just 12 percent have not adopted cloud-based DevOps.

A separate study from CA Technologies found that DevOps in the cloud could boost software delivery performance by around 80 percent. Continuous Integration (CI) and Continuous Delivery/Deployment (CD) – two of the most important characteristics of DevOps – benefit from containers. Docker integrates with many CI and automation tools, including Jenkins and Puppet, helping developers continuously collaborate on code and ensure that a build is always stable and successful.
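As a sketch of what that integration looks like, a declarative Jenkins pipeline can build, test and publish a Docker image on every commit. The image name, registry and test script below are hypothetical placeholders:

```groovy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // Build an immutable image tagged with the commit hash
        sh 'docker build -t analytics-worker:${GIT_COMMIT} .'
      }
    }
    stage('Test') {
      steps {
        // Run the test suite inside the freshly built image
        sh 'docker run --rm analytics-worker:${GIT_COMMIT} ./run-tests.sh'
      }
    }
    stage('Push') {
      steps {
        // Publish only builds that passed the tests
        sh 'docker push registry.example.com/analytics-worker:${GIT_COMMIT}'
      }
    }
  }
}
```

Because each stage runs against the same image, the artifact that passes the tests is byte-for-byte the artifact that gets deployed, which is one of the main reasons CI/CD pipelines pair so well with containers.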

Docker and Kubernetes are changing microservices architectures, wherein a single modular function is packaged as a service and consumed by a client. The advantage of this microservices approach is the creation of a distinct entity for each function, so any changes can be implemented without adversely impacting others. With containers, this approach is reinforced, as each microservice definition becomes a single container image and is deployed and scaled on a Kubernetes cluster. With containers, it also becomes easier to have multiple versions of the same service running at the same time.
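The multiple-versions pattern can be sketched with two Deployments behind one Service. The service name, images and replica counts here are hypothetical, assuming version labels distinguish the pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-v1               # hypothetical microservice, current version
spec:
  replicas: 3
  selector:
    matchLabels: { app: payments, version: v1 }
  template:
    metadata:
      labels: { app: payments, version: v1 }
    spec:
      containers:
      - name: payments
        image: registry.example.com/payments:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-v2               # new version, running side by side
spec:
  replicas: 1                     # small replica count acts as a canary
  selector:
    matchLabels: { app: payments, version: v2 }
  template:
    metadata:
      labels: { app: payments, version: v2 }
    spec:
      containers:
      - name: payments
        image: registry.example.com/payments:2.0
---
apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  selector:
    app: payments                 # matches both versions, so traffic is split by pod count
  ports:
  - port: 80
    targetPort: 8080
```

Shifting traffic between versions then becomes a matter of adjusting replica counts, with no change to the clients consuming the service.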

Despite some misunderstandings and challenges, the key takeaway for CIOs is that containers and Kubernetes are transforming hybrid clouds and the way enterprises can envision the technology solutions they are launching and hosting. According to 61 percent of the multi-cloud experts IBM surveyed, at least 80 percent of new apps will be developed using containers by 2021. Containers are here to stay.