#TheCloudShow – S2E4 – Cloud and the Edge

Edge computing can be defined as a “mesh network of micro data centres that process or store critical data locally and push all received data to a central data centre or cloud storage repository, in a footprint of less than 100 square feet.”

Now there are some crystal ball gazers who are saying that Cloud is dead and Edge computing is the future.

But where did Edge come from? A product called NATS started this trend, allowing enterprise messaging systems and platform technologies to be decomposed into microservices that use the processing capacity of devices at the end user rather than relying on remote compute.

There is clear value in utilising the processing power of devices that are close to the end user. The key factor here lies in some basic physics – so if you did not pay attention in school when the speed of light was covered, you might have to concentrate hard now.

Historically, the technology industry has gone back and forth between centralised computing and computing all happening at the end user. This is now being disrupted by things like Machine Learning and Artificial Intelligence.

As humans, we understand linear processing – our basic instincts demand responses like fight or flight. Essentially we are 50,000-year-old wetware – as a result, we struggle to understand exponential growth.

Artificial Intelligence and Machine Learning are seeing exponential growth – and what is happening now is that basic compute is limited by the speed of light, namely how long data takes to travel up and down the internet.

This is where Edge computing comes in: it takes the work out of the far-away data centres that cloud traditionally runs on and brings it to devices that are closer to where the action is.

The argument is that software and compute will move closer to the consumer in order to meet their insatiable demand for instant access. The fastest response will come from telcos providing 5G and from the devices themselves processing the demand. Fundamentally, this is about what is known as latency, namely the time it takes for data to go up and down the pipe.
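
To put some rough numbers on that latency argument, here is a minimal back-of-the-envelope sketch of the speed-of-light limit described above. The distances, the fibre slow-down factor and the round_trip_ms helper are illustrative assumptions for this sketch, not measurements from any real network.

```python
# Back-of-the-envelope round-trip latency from pure speed-of-light physics.
# All figures below are illustrative assumptions, not measurements.

SPEED_OF_LIGHT_KM_S = 300_000   # speed of light in a vacuum, roughly, in km/s
FIBRE_FACTOR = 0.67             # light in optical fibre travels at about 2/3 of that

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for one hop over fibre."""
    one_way_seconds = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return one_way_seconds * 2 * 1000

# A far-away cloud region ~5,000 km from the user vs. an edge node ~50 km away.
print(f"Remote cloud: {round_trip_ms(5000):.1f} ms")  # ~49.8 ms before any processing
print(f"Edge node:    {round_trip_ms(50):.2f} ms")    # ~0.50 ms
```

Even before queuing, routing and processing time are added, the physics alone puts a floor under how fast a distant data centre can respond – which is the gap Edge computing claims to close.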

Or is it?
