Edge computing | How IoT is changing the cloud

The cloud… how did we get here, and how did it all start? Surprisingly, the story begins as early as the 1950s, when mainframes – physically huge and extremely expensive – became available to large corporations and universities. They were accessed by multiple users via terminals that had no compute capability of their own and acted solely as stations providing access to the mainframe. Some argue this was Cloud 1.0.

Fast-forward several decades and, with high-speed internet now commonplace, users can access almost limitless compute resources from any internet-connected device – subject, of course, to being able to pay for it.

As we enter the age of the Internet of Things (IoT) and Machine-to-Machine (M2M) communication, and as the number of data sources continues to grow, the cloud model – with its ultimate physical limitations in terms of network latency – will no longer be the panacea it is touted as today. Enter edge computing.

What is edge computing?

A relatively new concept, edge computing – also referred to as fog computing or, in some contexts, cloudlets – effectively pushes computational functions to the edge of the network. In other words, rather than pumping all the data back up to the cloud for analysis and action, that processing takes place much closer to the data’s source.

Edge devices can be anything with sufficient compute capacity and capability; for instance, switches, routers or even the IoT sensors collecting the data themselves.

By processing the data as close as possible to the IoT devices that generate and respond to it, the physical limitations of the network become less relevant. The result is fewer bottlenecks and the elimination of redundant cloud compute and network-related costs.

When and where is edge computing useful?

Some of the key benefits of edge computing come from its ability to reduce latency. In a network, latency is the time taken for data to reach its destination, usually measured as a round-trip time. For instance, the round-trip latency between California and the Netherlands is approximately 150 milliseconds.
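To put that figure into context, here is a rough back-of-the-envelope calculation of why it cannot shrink much further, no matter how fast the hardware gets. The distance and fibre speed below are illustrative assumptions, not measured values.

```python
# Rough physical floor on California-to-Netherlands latency.
# Illustrative assumptions: ~8,800 km great-circle distance and light
# travelling through optical fibre at roughly 200,000 km/s (about 2/3 of c).
distance_km = 8_800
fibre_speed_km_s = 200_000

one_way_ms = distance_km / fibre_speed_km_s * 1_000   # ~44 ms
round_trip_ms = 2 * one_way_ms                        # ~88 ms

print(f"Theoretical one-way delay: {one_way_ms:.0f} ms")
print(f"Theoretical round trip:    {round_trip_ms:.0f} ms")
# Real routes are longer than great circles, and switching and queuing add
# more delay, which is how the observed figure reaches roughly 150 ms.
```

In other words, a large share of that 150 milliseconds is physics, not engineering, and no amount of cloud optimisation can remove it.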

While this seems an insignificant amount of time – and for traditional applications such latency is practically irrelevant – in a world increasingly reliant on IoT-connected devices that can quickly change. A prime example is self-driving cars. Decisions such as collision avoidance need to be made by the vehicle in as close to real time as possible, and in this scenario even the smallest amount of latency can pose serious safety risks.
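To make the safety point concrete, here is a quick, purely illustrative calculation – the speed and latency figures are assumptions for the sake of the example – of how far a vehicle travels while waiting on a single cloud round trip:

```python
# Distance covered by a vehicle during one 150 ms cloud round trip.
# Illustrative figures only: 100 km/h cruising speed, 150 ms latency.
speed_kmh = 100
latency_s = 0.150

speed_m_per_s = speed_kmh * 1_000 / 3_600   # ~27.8 m/s
distance_m = speed_m_per_s * latency_s

print(f"Distance covered before a reply arrives: {distance_m:.1f} m")  # ~4.2 m
```

Several metres of travel before a response even arrives is exactly the kind of gap that processing on the vehicle itself – at the edge – is meant to close.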

Another aspect of edge computing relates to the capacity and availability of networks and bandwidth. Despite the increases in bandwidth we have become accustomed to, the network can still easily become a bottleneck in the cloud stack, as the Internet is ultimately a shared public communication medium.

Edge computing reduces the load on networks by reducing the volume of data pushed back to the core network. Not all data generated by IoT devices needs to reach the centre; processing it at the edge of the network allows enterprises to send back to the central cloud service only what is truly relevant.
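As a loose sketch of what that looks like in practice – the threshold, sample values and summary format here are invented for the example – an edge gateway might reduce a stream of raw sensor readings to a compact summary, forwarding only the out-of-range values upstream:

```python
# Hypothetical edge-side filtering: aggregate raw readings locally and send
# only a small summary (plus anything out of range) to the central cloud.
from statistics import mean

TEMPERATURE_LIMIT_C = 85.0  # assumed alert threshold for this example

def process_batch(readings: list[float]) -> dict:
    """Reduce a batch of raw sensor readings to a compact summary."""
    anomalies = [r for r in readings if r > TEMPERATURE_LIMIT_C]
    return {
        "count": len(readings),
        "mean_c": round(mean(readings), 2),
        "max_c": max(readings),
        "anomalies": anomalies,   # only out-of-range values travel upstream
    }

raw_readings = [71.2, 70.8, 72.1, 93.4, 71.5]   # e.g. one minute of samples
print(process_batch(raw_readings))
# A handful of bytes crosses the network instead of the full raw stream.
```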

Similarly, the network connectivity we are accustomed to is not ubiquitous. Devices such as wind turbines, typically located in remote areas, could be given a limited ability to self-diagnose and self-heal with the help of edge computing.
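A hypothetical sketch of that idea – the threshold, readings and “restart” action below are placeholders rather than any real turbine control interface – might run a simple rule locally and involve the cloud only when the local remedy fails:

```python
# Hypothetical self-diagnose/self-heal rule running on the edge device itself.
VIBRATION_LIMIT_MM_S = 7.0   # assumed threshold for this example

def attempt_local_restart() -> bool:
    """Placeholder for a device-specific recovery routine."""
    return True

def diagnose_and_heal(vibration_mm_s: float) -> str:
    if vibration_mm_s <= VIBRATION_LIMIT_MM_S:
        return "ok"                      # nothing to report, nothing sent upstream
    # Try a local remedy first; only escalate if it fails, which matters
    # when the link back to the cloud is slow, metered or intermittent.
    if attempt_local_restart():
        return "recovered locally"
    return "escalate to operator"

print(diagnose_and_heal(8.3))   # -> "recovered locally"
```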

 

Security and compliance are a must

With the General Data Protection Regulation (GDPR) on everyone’s mind, and data sovereignty becoming a key concern for organisations, the public cloud often presents a series of challenges. GDPR compliance can be complex in the cloud, particularly for global businesses, and with the advent of the ‘smart’ era comes ever more data from devices such as watches, homes and phones.

By hosting all of this in the cloud, there could be future issues around data sovereignty, ownership, consent and the right to be forgotten. Offerings in the enterprise and corporate space are already racing to catch up with legislation, with vendors such as Microsoft now offering Multi-Geo options for where you store your data.

In terms of security, edge computing lets organisations limit their exposure simply by storing and processing data locally and only transferring to the cloud what is really required. Not only does this aid compliance, it also reduces cybersecurity risk: less data in transit means a smaller exposure surface, giving opportunists fewer attack vectors to exploit.

The era of big data is here, and it’s everywhere

With the ability to store and process vast quantities of data now widely available, we are entering an era of largely unrestricted, data-driven innovation.

In the short term, the business case for edge computing will continue to be driven by cost savings in network capacity and compute; in the longer term this will be supplemented by its ability to deliver faster, more accurate automation and time-sensitive decision-making at the source. All of this will fit into a wider cloud-based ecosystem shaped by the demand for customer services rather than by the limitations of cloud infrastructure itself.

Ultimately, the symbiotic relationship between the data we create and the ability to process and store it will define future technology systems. While edge computing is in a state of constant evolution right now, its benefits are undeniable – and increasingly, in demand.
