How AI applications are being used to transform business

AI is a new paradigm in computing. You’ll find authors defining it differently across tech publications, but the general consensus appears to be this: AI, of the type that’s prevalent today, works out solutions to problems without being explicitly programmed to do so.

Historically, our software’s sole purpose has been automating logic. We’ve given it problems we already knew how to solve and had it perform calculations within predefined rules to reach an objective quickly. Programming has been based on proof and certainty.

AI, as a branch of computer science, is more empirical in nature. It doesn’t require prior knowledge of the truth and applies probability and statistics to deal with uncertainty. In a nutshell, it strives to draw inferences, as accurate as possible, from incomplete information.
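To make that contrast concrete, here is a toy sketch in Python of the difference between hand-coding a rule and estimating a probability from limited observations. The spam-filtering scenario and all of the data are invented purely for illustration, not taken from any real system.

```python
# A toy contrast between rule-based logic and statistical inference.
# The "observations" below are entirely made up for illustration.

# Rule-based: the programmer encodes the answer explicitly.
def is_spam_rule(subject: str) -> bool:
    return "win a prize" in subject.lower()

# Statistical: estimate a probability from limited observed data.
# Each tuple is (message contained the word "prize", message was spam).
observations = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]
spam_flags = [was_spam for has_prize, was_spam in observations if has_prize]
p_spam_given_prize = sum(spam_flags) / len(spam_flags)
print(f"P(spam | contains 'prize') = {p_spam_given_prize:.2f}")  # about 0.67
```

The first approach only ever knows what it was told; the second produces an estimate that improves as more data is observed, which is the essence of the empirical approach described above.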

Why Are We Talking About AI Now?

Three factors are considered to have spurred the current AI wave: recent algorithmic advances in Machine Learning (primarily deep learning), the emergence of datasets large enough for ML models to detect patterns in, and the availability of computing hardware powerful enough to process Big Data. If we were to name one company that helped accelerate this progress, we’d say it was NVIDIA.

The firm’s GPU cards, originally designed to render realistic graphics for PC gaming, turned out to possess properties nearly ideal for Deep Learning. Each chip contained about 4,000 cores which, while not particularly powerful on their own, allowed computations to be parallelised and thus provided a sound platform on which neural networks could run efficiently. Many influential research papers published in recent years also contributed, such as “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, “Deep Residual Learning for Image Recognition”, and “Large-Scale Video Classification with Convolutional Neural Networks”.

Collectively, these advances have gradually raised the prediction accuracy of ML models from around 60% ten years ago to about 90% now, enabling firms across industries to invest with confidence in AI offerings that drive tangible benefits.

Where is AI being used now?

Nearly 95% of the overall economic value generated by AI is produced via supervised machine learning. That is why the two terms – Artificial Intelligence and Machine Learning – have almost become synonymous in many a tech outlet.

Supervised learning is concerned with learning mappings from inputs to outputs. It enables us to teach computers to perform specific tasks by training them on labeled datasets and optimising specific metrics. ML is being used widely in:

Digital Advertising. The abundance of easily available social data helps companies gain deeper insights into their target audience and segment prospects with more precision. Marketers feed information about a person (input) into an AI model and have it calculate, for example, the probability that the user will click on a certain ad (see the sketch after this list). According to Salesforce, around 60% of marketing leaders are now convinced that AI technology will help them better personalise content and increase the efficiency of programmatic ad campaigns.

Web Fraud Prevention. Cisco, an international tech giant, has recently launched an Encrypted Traffic Analytics platform, which leverages machine learning to detect malicious behaviour and threats in encrypted traffic. The system spots suspicious activity by monitoring the metadata of the packet flow, without decrypting the data itself.

Video surveillance. Relying on the same input-to-output mapping principle, ML algorithms can be taught to recognise people’s faces with great accuracy. Panasonic’s FacePRO technology (one among many similar tools) can identify a person even if their face is partially obscured (by sunglasses or a mask) or they’ve aged 10 years since their previous photo.

Biometrics and facial recognition, empowered by ML models, are expected to become mainstream soon. Oracle’s Hotel Report 2025, for instance, suggests that 72% of hotel operators are planning to adopt such systems in the next few years.

Voice and speech recognition. With Google claiming that 20% of its queries are voice searches (and Comscore predicting that 50% of all online searches will be conducted via voice by 2020), it makes sense that the market for speech recognition technology, which turns audio clips into text transcriptions, is projected to reach $6B by 2020.

Self-driving cars. At their core, self-driving vehicles utilise ML technologies that take in images of the surroundings (readings from radars, etc.) and output the positions of other cars and objects on the road. Uber already has a self-driving car test programme running, while Ford, Hyundai, Renault-Nissan, and other automotive titans have all promised to catch up by 2021.
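As a concrete illustration of this input-to-output mapping, here is a minimal sketch of the ad-click example mentioned above. It assumes scikit-learn and NumPy are installed; the features, numbers, and labels are synthetic and invented for illustration, not drawn from any real campaign.

```python
# A minimal supervised-learning sketch: map user features (input)
# to the probability of clicking an ad (output). Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a user: [age, pages_viewed_last_week, saw_similar_ad_before]
X = np.array([
    [25, 12, 1],
    [41,  3, 0],
    [33,  8, 1],
    [52,  1, 0],
    [19, 20, 1],
    [46,  2, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # labels: 1 = clicked, 0 = did not click

model = LogisticRegression()
model.fit(X, y)  # learn the input -> output mapping from labeled examples

new_user = np.array([[30, 10, 1]])
print(model.predict_proba(new_user)[0, 1])  # estimated click probability
```

The same pattern – labeled examples in, a trained predictor out – underlies the fraud, surveillance, speech, and driving applications listed above; only the inputs, outputs, and model architectures change.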

Does Artificial Intelligence Mirror Human Intelligence?

Contrary to the bogus claims some sensation-loving bloggers love to spew, we are nowhere near replicating, in any meaningful way, the interlinked and sophisticated functions of the human brain. Nor are we trying to. The name Artificial Intelligence, which invites comparison between machines and people, is widely considered unfortunate.

As people, we have:

Motor intelligence. Though we take this one for granted (and don’t treat it with the same reverence as, say, excellence at maths), we do pay millions to athletes who have exceptional coordination and physical skills.

Visual intelligence. We’re not only able to grasp visual input from our surroundings but also interpret images in a fraction of a second. Our perception system is fine-tuned to capture light (through our eyes), send it to the brain (via receptors), and process it rapidly in the visual cortex.

Speech intelligence and natural language understanding. Our innate audio recognition system is also extremely complex in its anatomy. We’re able to pick up air vibrations (sound), push them through multiple processing mechanisms to the auditory cortex, and have them transformed into meaningful information.

These are all parts of our interconnected intelligence. One can recall an image from the immense database that is their memory and then immediately find words and gestures to describe it. AI systems, by contrast, are designed to tackle one specific objective at a time: to move, to summarise or translate text, to recognise faces in photos, and so on. No one intends to transform them into omniscient, human-like beings.

The fields AI has made the most progress in so far include:

Computer Vision. We’ve made strides in mimicking human perception of light and colour, and our cameras can capture images in digital form. However, we have yet to train neural networks that can comprehend pictures (which, to a machine, are just arrays of integers representing intensities on a colour spectrum) with the speed and precision of a human.
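To show what “arrays of integers” means in practice, here is a small sketch that assumes Pillow and NumPy are installed; “photo.jpg” is a hypothetical local file used only for illustration.

```python
# To a machine, an image is just an array of integers: one intensity
# value per pixel and colour channel.
import numpy as np
from PIL import Image

img = Image.open("photo.jpg")   # e.g. an ordinary RGB photograph
pixels = np.asarray(img)        # shape: (height, width, 3)

print(pixels.shape)             # e.g. (1080, 1920, 3)
print(pixels.dtype)             # uint8: integers from 0 to 255
print(pixels[0, 0])             # top-left pixel, e.g. [142  98  61]
```

Everything a vision model “sees” is derived from grids of numbers like these; the challenge is turning them into the kind of instantaneous understanding humans take for granted.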

Speech recognition. Our AI systems can already perceive natural language and translate texts, especially on technical matters, with remarkable accuracy. That said, plenty of work is still ahead. To interpret language fully, in a human sense, a computer would have to learn to comprehend social context, read humour and sentiment, and get a grasp on human feelings.

So, What Does AI’s Future Look Like?

AI’s adoption, experts predict, will come in three stages. At first, industries already sitting on big data – finance, healthcare, ecommerce, education, etc. – will profoundly transform their business processes and incorporate AI to enhance, restructure, and automate operations.

Next, we’ll see an abundance of new types of data floating around on the web: information from wearables, interactive personal assistants (such as Amazon’s Echo), smart vehicles, various IoT devices, and ubiquitous computing.

Finally, as AI platforms become commonplace, we’ll be able to teach machines to move autonomously, which means walking robots and self-driving cars will become accessible. According to Dr. Kai-Fu Lee, a renowned AI guru, we can expect this last stage to come into full effect in about 15 years.

AI has moved out of the lab, and companies across different verticals are increasingly adjusting their business models to accommodate its full potential. The goals of AI investors are no longer centred on increasing efficiency and automating simple processes (for which, say, chatbots were developed). They’re focused on cracking higher-grade tasks such as pricing optimisation and preventive maintenance, which, in turn, can increase uptime.
