Siri gets sassy – the next step for AI

Virtual assistants have come a long way in the past few years. Whilst the likes of Amazon’s Alexa and Apple’s Siri aren’t quite in the same league as Iron Man’s Jarvis (sometimes feeling like they’re deliberately trying to test the definition of intelligence), they do provide a view of things to come.

Whether we will reach the point of technological singularity (the point where man and machine converge) in our lifetime is widely debated, prompting both excitement and concern, but a lot will change before we get there, and you might find Siri getting a little sassier in the meantime.

Speaking at TED2005, futurist and author Ray Kurzweil argued that the rate at which we adopt new ideas doubles roughly every decade. He built on this in his book The Singularity Is Near, predicting that man and machine would eventually converge around 2045.

Many notable personalities, including Elon Musk and Stephen Hawking, have raised concerns about humanity’s future and what might happen when we reach this point. Yet the plausibility of AI rapidly evolving to surpass the intelligence of the human brain is also widely disputed.

Author Martin Ford postulated a “technology paradox”: before the singularity could be reached, the technology needed to get there would first automate most routine jobs in the economy, causing mass unemployment and plummeting consumer demand, and thereby destroying the incentive to invest in the very technologies required to take AI to that point.


Either way, as former President Obama rightly pointed out in a 2016 interview with Wired magazine, there needs to be a greater focus on the economic implications of the journey towards that point.

Everyone has a different definition of AI. The simplest way of thinking about it is this: if you’re using a machine to make a decision on your behalf, that is artificial intelligence. It can be extremely simplistic, or very complex. Regardless of whether the singularity takes place in our lifetime, rather than putting us all out of a job, embracing AI in our daily lives should help us become more productive as a species, automating the mundane tasks and allowing us to focus on the value of being human.
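To make that “extremely simplistic” end of the spectrum concrete, here is a deliberately trivial, hypothetical sketch: a few lines of rule-based logic that decide something on your behalf. It is not how Siri or Alexa actually work, just an illustration that even this counts under the broad definition above.

```python
# Hypothetical example: a "machine" deciding, on your behalf,
# whether to switch the heating on. By the broad definition above,
# even this qualifies as artificial intelligence -- it just sits at
# the simplest end of the spectrum.

def should_heat(room_temp_c: float, someone_home: bool) -> bool:
    """Rule-based decision: heat only when it's cold and someone is in."""
    return someone_home and room_temp_c < 18.0

if __name__ == "__main__":
    print(should_heat(room_temp_c=16.5, someone_home=True))   # True
    print(should_heat(room_temp_c=21.0, someone_home=True))   # False
```

The complex end of the spectrum simply replaces those hand-written rules with decisions learned from data, but the principle is the same: the machine decides so you don’t have to.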

Virtual assistants are the first real evidence of humans interacting with machines in the same way they interact with other humans. As individuals, we turn to the Alexas and Cortanas of this world to augment our day-to-day lives and free up time for other things (despite them being prone to errors and causing frustration when they give an incorrect response). Currently, computers arguably have no real ‘intelligence’, yet we design them to behave as if they have a certain level of psychology to make them easier to interact with in our daily lives.

Improvements in this area are coming fast. In February 2018, news leaked that Amazon is reportedly working on building AI chips for its Echo devices, which would allow Alexa to respond to questions faster and at a more natural speech rate.

The next battleground for virtual assistants will be inference. As our relationship with technology continues to change, the inability to understand tone of voice remains a real shortfall.

Very little of a message’s emotional meaning is conveyed through the words alone, which is why meaning is so often lost in text messages and tweets, and just as often lands us in trouble.

If virtual assistants are going to live up to their promise, they need to be able to read between the lines. Aside from the more obvious benefits of improved communication, the possibilities enabled by better machine emotional intelligence are endless. They might be as simple as picking up on the fact that you’re in a bad mood, adjusting the lights and putting on some calming music whilst ordering your favourite takeaway, or as significant as flagging potential health issues early by detecting a change in your voice or behaviour.
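To picture what that might look like, here is a minimal, hypothetical sketch of the “bad mood” scenario. The mood scores, thresholds and smart-home actions are all invented for illustration; no assistant exposes an API like this today, which is exactly the point.

```python
# Hypothetical sketch: mapping an inferred mood onto a few comforting
# actions. The ToneReading fields, thresholds and action names are
# made up for illustration -- this is not a real assistant API.

from dataclasses import dataclass


@dataclass
class ToneReading:
    """Imagined output of a tone-analysis step on the user's voice."""
    stress: float   # 0.0 (calm) to 1.0 (very stressed)
    energy: float   # 0.0 (flat)  to 1.0 (animated)


def respond_to_mood(tone: ToneReading) -> list[str]:
    """Choose actions based on the inferred mood."""
    actions = []
    if tone.stress > 0.7:
        actions += ["dim the lights", "play a calming playlist",
                    "order the usual takeaway"]
    if tone.stress > 0.7 and tone.energy < 0.2:
        # A sustained flat, stressed pattern might be worth flagging.
        actions.append("suggest checking in with a doctor")
    return actions


if __name__ == "__main__":
    print(respond_to_mood(ToneReading(stress=0.85, energy=0.15)))
```

The hard part, of course, isn’t the decision logic at the end; it’s producing a reliable tone reading from messy, ambiguous human speech in the first place.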

Many remain sceptical about whether our changing relationship with machines will benefit us socially or mentally. Certainly, the way we communicate with other humans through technology falls short of the rightly favoured face-to-face conversation. Yet as virtual assistants become more aware of nuances in the user’s tone, we may even start to learn something from them in return.

As they stand, virtual assistants don’t care whether you’re affectionate with them, speak bluntly, or even shout at them. As this begins to change, and they start to push us towards treating them the way we would want to be treated ourselves, we may, as a society, start to remember just how much of our communication lies outside of words when we email or text each other.

AI, by whatever definition you give it, is advancing fast, and there are lots of great things we’re going to see along the way. The singularity may be coming, but Siri’s going to get sassy first.
