While cloud computing brings numerous benefits, such as economies of scale, consumption-based pricing and the ability to get applications to market quickly, there are indications that, on its own, it may not be able to cater for the evolving needs of new-age technologies. This is where edge computing steps in.

If cloud computing is all about centralising computing power in the core, edge computing, as the name suggests, is about taking that power back out to the periphery. In simple terms, the edge is about enabling processing and analytics to take place closer to where the endpoint devices, users or data sources are located.

Why would we want processing activity to happen at the edge?

One of the main drivers is the rise of the IoT (Internet of Things) and other applications that require real-time decision making or artificial intelligence based on the fast processing of large, multiple data sets.

Take the example of a self-driving car that might rely on 100+ data sources tracking areas such as speed, road conditions, the trajectory of nearby vehicles, etc. If a child steps into the road, the car needs to make an immediate, real-time decision to stop. If it waited for the data to be sent into the cloud, processed in the core, and for instructions to come back, it’s going to be too late.
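To make the stakes concrete, here is a back-of-the-envelope sketch in Python. All of the figures below are illustrative assumptions, not measured latencies:

```python
# Back-of-the-envelope sketch: how far a car travels while waiting for a
# decision. Every figure here is an assumption chosen for illustration.
speed_kmh = 50
speed_ms = speed_kmh * 1000 / 3600             # ~13.9 metres per second

cloud_round_trip_s = 0.100                     # assumed ~100 ms to the cloud and back
edge_latency_s = 0.005                         # assumed ~5 ms for on-board processing

distance_cloud = speed_ms * cloud_round_trip_s  # metres travelled before the car can react
distance_edge = speed_ms * edge_latency_s       # metres travelled with local processing
print(round(distance_cloud, 2), round(distance_edge, 2))
```

Even with these rough numbers, the cloud round trip costs the car well over a metre of travel before it can react, while local processing keeps that to a few centimetres.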


It’s the same for an AI-based drone or robot that has to perform tasks equivalent to those of a human being. It needs the data it collects to be acted upon swiftly, rather than waiting for analysis to arrive from a distant data centre.

Edge computing allows the processing to happen locally within or near the device – right where the action is. In doing this, it eliminates the latency involved in waiting for data to go to the core and back.

The demand for applications that rely on real-time or near real-time processing and analytics at the edge is on the increase. Retailers, for example, want to use big data analytics to help identify shoppers who walk into their stores so they can deliver personalised real-time offers and product ads.

As well as driving out latency, the move to the edge delivers numerous other benefits. For example, if more processing is done at the periphery, less data overall needs to be transmitted across the network into the cloud, which can help reduce cloud computing costs and improve performance. And with processing power located at numerous points throughout the edge rather than centralised at the core, there is no single point of failure.
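One common way this data reduction works in practice is for the edge node to aggregate raw readings locally and forward only a compact summary to the cloud. The sketch below is hypothetical, with invented sensor values, but illustrates the principle:

```python
# Hypothetical sketch: an edge node condenses a batch of raw sensor readings
# into a small summary payload, so only a few fields cross the network.
from statistics import mean

def summarise_readings(readings):
    """Reduce a batch of raw readings to a four-field summary."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# One second of raw data at 1 kHz: 1,000 simulated temperature values.
raw = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = summarise_readings(raw)  # 4 fields transmitted instead of 1,000 values
```

Here a thousand raw values collapse into four numbers, which is the kind of reduction that lowers both bandwidth use and cloud ingestion costs.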

The edge architecture may see greater deployment of micro data centres – small, modularised systems that host fewer than ten servers – to provide localised processing. Or the smart devices themselves will become more compute- and storage-intensive. Or the architecture may include edge gateways that are located near the devices and sensors and act as processing engines and a conduit to the core/cloud-based setups.

So does all this mean the end of the cloud?

No. It’s likely that we’ll see edge computing co-existing alongside the centralised cloud model. Real-time, “instantaneous” insights will be processed near the endpoint or data source, while the cloud acts as the big brother that processes and stores the large data sets that can wait. The cloud may also act as the central engine to control and push policies towards the edge devices and applications.

Both cloud and edge computing are required to work in tandem to ensure both hot and cold insights are delivered on time, cost-effectively and efficiently.

For instance, a connected, smart locomotive engine needs instantaneous insights to be processed in order to support its smooth operation while it is travelling. It will generate massive amounts of data that must be combined with external data (environment, temperature, track health, etc.) and processed immediately, often within milliseconds. This function has to be performed using the edge architecture (either the engine itself acts as the edge, or it stays in constant communication with edge gateways).

On the other hand, to maintain and manage the overall health of the engine, the insights may not be required in real time. Here the data sets could be transferred back to the central core where they can be processed to gain insights. There would be a segregation of functions: some real-time insights will be required immediately for smooth operations while others can take longer and be processed in the core.
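The segregation of functions described above can be sketched as a simple routing rule on the edge node. The event types and structure below are invented for illustration; the point is that “hot” events are acted on locally while “cold” telemetry is queued for a later batch upload to the cloud:

```python
# Hypothetical sketch of the hot/cold split: latency-critical ("hot") events
# are handled immediately on the edge node, while routine health telemetry
# ("cold") is deferred to a batch upload to the central core.

HOT_TYPES = {"obstacle", "brake_fault"}   # assumed latency-critical event types

def route(event, act_now, cold_queue):
    """Act on hot events at the edge; queue cold events for the cloud."""
    if event["type"] in HOT_TYPES:
        act_now(event)            # processed locally, no network round trip
    else:
        cold_queue.append(event)  # shipped to the core in a later batch

handled_locally = []
cloud_batch = []
route({"type": "obstacle", "distance_m": 4}, handled_locally.append, cloud_batch)
route({"type": "engine_temp", "celsius": 92}, handled_locally.append, cloud_batch)
```

In this toy run, the obstacle event is handled at the edge immediately, while the engine-temperature reading waits in the batch destined for the core.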

Similar use cases can be found with autonomous cars, drones, other connected devices and IoT applications. As more data is collected and needs to be processed in real time or near real time at the edge, the need for edge computing to complement the cloud will only grow.

Kalyan Kumar B. (KK) is the Global CTO – IT Services, HCL Technologies. He is the leader of the cloud native services business unit across all service lines within HCL; the leader of the ‘Global Product and Technology Organization’ and the DRYiCE™ Business Unit, a unified autonomics and orchestration platform business, which is the core foundation of the 21st Century Enterprise.
