How cloud is evolving so systems can act autonomously

Just as cloud adoption is beginning to accelerate and organisations are focusing on how to manage multiple cloud providers rather than on whether to adopt cloud in the first place, a few ‘experts’ have begun to predict cloud’s demise. This may be a way of grabbing the headlines, but in my view, they are asking the wrong question. Here’s why, and how we can realistically expect cloud to evolve in the next few years.

The current growth of cloud is simply the next logical step in the regular waves of centralisation and decentralisation that characterise the IT sector. One moment we all think that the best place for intelligence in the network is at the edge, and then technology changes and the most logical place for that intelligence becomes the centre instead. For example, mainframes never died when client-server came along; instead, the terminal became a PC, and the mainframe became the database server.

The aim of cloud was to consolidate data centres and enable organisations to benefit from economies of scale for hosting their business systems. For many, this has been a great success, whether they are small businesses that no longer need to invest in their own infrastructure or large organisations that can free up real estate and benefit from cloud’s flexibility and scalability for many of their routine applications – although moving legacy applications to cloud remains more challenging. Cloud also offers significant advantages for archiving, back-up and disaster recovery.

However, we are also seeing rapid growth in intelligent devices, such as robots in manufacturing, medical diagnostic systems and autonomous vehicles, up to and including self-driving cars – what you might term ‘intelligent client mark 2’. For these devices, which are in effect data centres in their own right, there is a need to process information in real time, and so the latency of a round trip to the cloud is becoming a major issue.

Take an autonomous vehicle, which needs information on the changing obstacles around it, or a robot scanning fruit on a conveyor belt in a factory and picking off substandard items. These need to make instant decisions, not wait for information to transit six router hops and three service providers to reach the cloud data centre and then do the same on the way back. If you think about it for a moment, it’s basic maths!
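To put rough numbers on that, here is a short back-of-envelope sketch in Python. The hop count, per-hop delay and processing times are illustrative assumptions rather than measurements, but they show why even a modest network path pushes a cloud round trip past the budget of a control loop that has to react within a few tens of milliseconds.

```python
# Back-of-envelope latency comparison: cloud round trip vs. on-device decision.
# All figures below are illustrative assumptions, not measurements.

ROUTER_HOPS = 6              # hops each way, as in the example above
PER_HOP_DELAY_MS = 5         # assumed queueing/forwarding delay per hop
CLOUD_PROCESSING_MS = 20     # assumed time for the cloud service to respond
EDGE_PROCESSING_MS = 15      # assumed time for on-device inference
CONTROL_LOOP_BUDGET_MS = 50  # assumed deadline for the robot or vehicle to react


def cloud_round_trip_ms() -> float:
    """Latency of sending sensor data to the cloud and waiting for a decision."""
    network_ms = 2 * ROUTER_HOPS * PER_HOP_DELAY_MS  # out and back
    return network_ms + CLOUD_PROCESSING_MS


def edge_decision_ms() -> float:
    """Latency of making the decision on the device itself."""
    return EDGE_PROCESSING_MS


if __name__ == "__main__":
    for name, latency in [("cloud round trip", cloud_round_trip_ms()),
                          ("edge decision", edge_decision_ms())]:
        verdict = "within" if latency <= CONTROL_LOOP_BUDGET_MS else "misses"
        print(f"{name}: {latency:.0f} ms ({verdict} the {CONTROL_LOOP_BUDGET_MS} ms budget)")
```

With these assumed figures the cloud path takes roughly 80 ms while the edge path takes 15 ms, which is the whole argument for keeping the decision-making on the device.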

Having intelligence at the edge is vital for applications that need to act in real time, such as robotics, and this need will only grow as ‘smart’ devices, embedded systems and the use of artificial intelligence to train and manage them develop. Organisations will need to use cloud for what it is good at, namely scale, training and developing algorithms, and large-scale data stores, and then bring the intelligence needed to make decisions down to the edge device so that it can act autonomously. An example would be a facial recognition system: you would use cloud to store petabytes of data and train the system on many thousands of photos, then load the resulting algorithm into the camera control system so that the initial facial recognition happens at the edge. The system can then refer back to the data stored in the cloud if further confirmation is required.
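A minimal sketch of that pattern, in Python, is below. The functions recognise_on_device() and confirm_in_cloud() are hypothetical stand-ins for a model exported from cloud training and a provider’s large-scale recognition service respectively; neither is a real API, and the confidence threshold is an arbitrary illustrative value. The point is the control flow: decide at the edge first, and only go back to the cloud when the device is unsure.

```python
# Sketch of the "train in the cloud, decide at the edge" pattern described above.
# recognise_on_device() and confirm_in_cloud() are hypothetical stand-ins,
# not real APIs; they illustrate the control flow, not an implementation.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off for trusting the edge decision alone


@dataclass
class Match:
    person_id: str
    confidence: float


def recognise_on_device(frame: bytes) -> Match:
    """Run the model trained in the cloud, now deployed in the camera control system."""
    # Placeholder: a real system would run the exported model here.
    return Match(person_id="unknown", confidence=0.75)


def confirm_in_cloud(frame: bytes, candidate: Match) -> Match:
    """Fall back to the full photo archive held in the cloud for confirmation."""
    # Placeholder: a real system would call the provider's recognition service.
    return Match(person_id=candidate.person_id, confidence=0.99)


def identify(frame: bytes) -> Match:
    """Decide at the edge first; only refer back to the cloud when unsure."""
    match = recognise_on_device(frame)
    if match.confidence >= CONFIDENCE_THRESHOLD:
        return match                       # real-time path, no network round trip
    return confirm_in_cloud(frame, match)  # slower path, used only when needed
```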

Different sectors will, of course, require different approaches, and each will need to consider where best to place the intelligence in their network to meet their specific needs. IT can provide the tools and capability but, as always, it is up to each business to decide how best to use them for the greatest benefit to their clients and themselves.

In the insurance industry, for example, where actuaries have traditionally analysed massive amounts of data to enable underwriters to make policy decisions, the economies of scale provided by cloud processing offer significant advantages. In this scenario, there is no major benefit in moving intelligence to the edge as such decisions are not sensitive to latency.


I hope that by now I’ve convinced you that any tales of the imminent demise of cloud have, to paraphrase Mark Twain, been greatly exaggerated. Each organisation will need to consider its own use case and choose the most appropriate solution, depending on how much real-time processing is required. All will benefit from the scale and flexibility of centralised cloud processing and storage, from construction companies putting together consortia to deliver specific projects such as Crossrail and HS2, which need capacity for a finite period, to public sector organisations that can hand routine applications to a cloud provider in order to focus on their core activities.

Even those organisations working at the cutting edge of robotics and AI will benefit from cloud’s scale and capacity. However, their smart devices will need to rely on inbuilt intelligence, supported by cloud services, if they are to succeed.
