A Quantum Leap into the Future of Computing

The world of fundamental research is dominated by a small number of incredibly talented people. They may have a great support system around them and talented colleagues with whom to exchange ideas and theories, but they are the stars of the research world. When you bring a few of these stars together, you can get incredible breakthroughs.

I am a big fan of one particular researcher, Richard Feynman, who was not only part of the Los Alamos project in WW2 but also won a Nobel Prize and was one of those who initiated the field of quantum computing. I’d highly recommend his semi-autobiographical books “Surely You’re Joking, Mr. Feynman!” and “What Do You Care What Other People Think?”, which will show you what a great sense of humour, as well as a great mind, the man had.

IBM is granted more patents every year than any other company, and has been each year since 1993. On top of this, five of IBM’s alumni have won the Nobel Prize: Leo Esaki, of the Thomas J. Watson Research Centre in Yorktown Heights, N.Y., in 1973, for work in semiconductors; Gerd Binnig and Heinrich Rohrer, of the Zurich Research Centre, in 1986, for the scanning tunneling microscope; and Georg Bednorz and Alex Mueller, also of Zurich, in 1987, for research in superconductivity.

I remember once remarking, on a visit to IBM’s Zurich Research Centre, on the artwork in reception, only to be told that it had been painted by one of the researchers, whose hobby was art. He had also mastered 16 languages and eight instruments. These guys are supremely talented polymaths and justifiably the stars of their world.

IBM has announced a breakthrough in quantum computing, the very field initiated by Richard Feynman. Bearing in mind that all technical advances are in reality thousands of small breakthroughs that accumulate to create a revolution, the breakthrough announced by IBM marks a significant step on the path to the creation of real quantum computers.

So what is quantum computing?

Digital computers use transistors to encode data into binary digits with two possible states, 1 and 0. This is like the bead on an abacus that you can move from left to right. Advances in microchip design have achieved two things.

  1. They have enabled us to cram in ever more transistors (lots more beads on the abacus) using ever smaller fabrication processes (smaller beads, so that you can fit more in).
  2. They have allowed us to pack more and more processing power onto the silicon chips that we use today, but we are reaching a set of physical limits that will bring such advances to an end.
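As a point of contrast for what follows, here is a minimal sketch of what a classical register can do: it can distinguish 2^n values, but at any moment it holds exactly one of them.

```python
# A classical n-bit register distinguishes 2**n values but holds exactly
# one of them at any moment -- like beads on an abacus in fixed positions.
n = 2
values = [format(i, f"0{n}b") for i in range(2 ** n)]
print(values)  # ['00', '01', '10', '11'] -- the register is in ONE of these
```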

Quantum computers are fundamentally different from what we know today: they have quantum bits, or “qubits,” that can exist as a 1, a 0, or as both at the same time. So while a regular computer made of two bits can encode information in only one of four possible combinations (00, 01, 10, 11), a pair of qubits can hold all four of those combinations at once, and a quantum computer can therefore handle exponentially more information than a regular one.
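The "all four at once" idea can be made concrete with a small simulation. A 2-qubit state is a vector of four amplitudes, one per combination, and applying a Hadamard gate to each qubit (a standard way to create superposition) puts equal weight on all four:

```python
import numpy as np

# |0> and the Hadamard gate, which puts a single qubit into an equal
# superposition of 0 and 1.
zero = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Apply H to each of two qubits; the joint 2-qubit state is the tensor
# product, a vector of four amplitudes -- one per combination 00..11.
state = np.kron(H @ zero, H @ zero)
print(state)  # [0.5 0.5 0.5 0.5] -- weight on 00, 01, 10 and 11 at once

# Squared amplitudes give the probability of reading each combination.
print(state ** 2)  # [0.25 0.25 0.25 0.25]
```

Note the contrast with a classical 2-bit register, which holds exactly one of the four combinations at a time.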

While a regular computer might evaluate a series of options in turn before comparing the results to find a winner, a quantum computer would consider all the options at the same time, because qubits can process lots of information all at once, getting to the answer much faster.
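A toy sketch of that "all options at the same time" idea: a reversible quantum operation applied to a superposition acts on every input in one step, where a classical loop would visit them one at a time. (Extracting a specific answer afterwards still needs a clever algorithm such as Grover's search; the input value 5 marked below is an arbitrary illustration.)

```python
import numpy as np

n = 3            # 3 qubits -> 2**3 = 8 possible inputs
dim = 2 ** n

# Equal superposition over all 8 inputs.
state = np.full(dim, 1 / np.sqrt(dim))

# A toy "phase oracle" that marks input 5 by flipping its sign -- a
# stand-in for evaluating some function on every input simultaneously.
oracle = np.eye(dim)
oracle[5, 5] = -1

# One matrix application touches all 8 amplitudes at once.
state = oracle @ state
print(state)  # the amplitude for input 5 is now negative, the rest unchanged
```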

There are a few challenges, however, such as how to overcome the inherent instability of qubits, which have a tendency to forget the information they have been given, and the risk of altering that information simply by trying to read it. Then there is the challenge of programming the qubits.
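The read-disturbance problem can be sketched in the same statevector picture: measuring a qubit samples an outcome from its amplitudes and then collapses the state onto that outcome, destroying the superposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in equal superposition of 0 and 1.
state = np.array([1.0, 1.0]) / np.sqrt(2)

# "Reading" it samples 0 or 1 with the squared-amplitude probabilities...
probs = np.abs(state) ** 2
outcome = rng.choice([0, 1], p=probs)

# ...and collapses the state onto that outcome: the superposition is gone,
# which is why you cannot inspect a qubit without altering its information.
state = np.zeros(2)
state[outcome] = 1.0
print(outcome, state)
```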

Thankfully, there have been several recent advances in quantum computing. In March 2015, researchers at Google and the University of California, Santa Barbara (UCSB) demonstrated an error-correcting quantum bit (qubit) circuit that allows quantum computers to remain stable enough to reproduce the same results.

Then there is the Canadian company D-Wave, which has built a type of quantum computer known as a quantum annealer. Google and Lockheed Martin are evaluating the D-Wave machine to see how it compares to traditional machines.

At the end of April 2015 a team at Cambridge Quantum Computing Limited (CQCL) announced the development of a quantum computer operating system, using a high-speed supercomputer to accurately simulate a quantum processor.

However, this was almost immediately topped by IBM, who announced a way to detect and measure the two types of quantum error, called bit-flip and phase-flip, at the same time. They produced a prototype 4-qubit circuit. Previous quantum computers had been built in a linear fashion, with qubits all in a row, allowing for the measurement of only one type of quantum error at a time; IBM’s new design instead arranges qubits in a square array, allowing the two methods of error correction to occur simultaneously.
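On a single qubit, the two error types have a simple form: a bit-flip swaps the 0 and 1 amplitudes (the only kind of error a classical bit can suffer), while a phase-flip negates the sign of the 1 component and has no classical analogue. A minimal sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit-flip: swaps the 0 and 1 amplitudes
Z = np.array([[1, 0], [0, -1]])  # phase-flip: negates the "1" amplitude

state = np.array([0.6, 0.8])     # an arbitrary normalised qubit state

print(X @ state)  # [0.8 0.6]   -- amplitudes swapped
print(Z @ state)  # [ 0.6 -0.8] -- sign of the "1" component flipped
```

Because both kinds of error corrupt a quantum computation, an error-correcting scheme has to be able to catch both, which is what IBM's square arrangement enables.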

Given their exponential advantages over classical computers, quantum systems promise to revolutionise our world. It isn’t easy to simulate even a modest-sized quantum system, because to write down the quantum state of a system of just 300 qubits, you would need 2^300 numbers. That is more than the estimated number of atoms in the observable universe, so no amount of Moore’s Law scaling will ever make it possible for a classical computer to process that many numbers.
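The arithmetic behind that claim is easy to check: 2^300 amplitudes against the commonly quoted rough estimate of 10^80 atoms in the observable universe.

```python
# One complex amplitude per basis state of a 300-qubit system:
amplitudes = 2 ** 300           # about 2 x 10**90

# A commonly quoted rough estimate of atoms in the observable universe:
atoms = 10 ** 80

print(amplitudes > atoms)       # True: more numbers than atoms to store them in
print(amplitudes // atoms)      # roughly 10**10 amplitudes per atom
```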
