Quantum Computing

What is quantum computing and why is it so damn difficult?

Before I get started, I think this comic from the webcomic Saturday Morning Breakfast Cereal is one of the best explanations for how quantum computing actually works:

[Comic: SMBC on quantum computing]

Another way to think about it is through the classic double-slit experiment. An electron fired at a barrier with two tiny slits will, in effect, go through both slits and interfere with itself on the other side.

[Diagram: the double-slit experiment]

The pattern on the detector is an interference pattern: the light bands are constructive interference and the dark bands destructive interference. This ability to interfere constructively and destructively is crucial to how a quantum computer works, as pointed out in the SMBC comic above. However, the effect is observed only as long as the electron maintains quantum coherence.
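If you want to see where those bands come from, here’s a minimal sketch of the textbook two-slit intensity formula. The wavelength and slit dimensions are made-up illustrative numbers, not values from any actual experiment:

```python
import numpy as np

# Toy two-slit intensity pattern. Bright bands are constructive interference
# (the two paths arrive in phase), dark bands are destructive interference
# (the paths arrive exactly out of phase). All numbers are illustrative.
wavelength = 50e-12      # assumed electron de Broglie wavelength, metres
slit_separation = 1e-6   # assumed centre-to-centre distance between slits, metres
slit_width = 0.2e-6      # assumed width of each slit, metres

theta = np.linspace(-2e-4, 2e-4, 2001)                 # angle from the centre line, radians
phase = np.pi * slit_separation * np.sin(theta) / wavelength
envelope = np.sinc(slit_width * np.sin(theta) / wavelength)  # np.sinc(x) = sin(pi x)/(pi x)
intensity = np.cos(phase) ** 2 * envelope ** 2         # what you would plot against theta

print("peak relative intensity:", round(float(intensity.max()), 3))

# Bright fringes sit roughly where slit_separation * sin(theta) = m * wavelength.
for m in range(4):
    print(f"bright fringe m={m}: theta ≈ {m * wavelength / slit_separation:.1e} rad")
```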

Unfortunately, quantum coherence is lost as the electron interacts with other things in its environment – other particles or stray energy. This is known as (drumroll) quantum decoherence. It’s the reason quantum computers have to keep their qubits in a vacuum at extremely cold temperatures: the vacuum cuts down interactions with other particles, and the cold cuts down interactions with stray energy.
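A common way to picture decoherence is as the off-diagonal terms of a qubit’s density matrix fading away – those terms are exactly what carries the ability to interfere. Here’s a minimal sketch of that picture, with a made-up coherence time just for illustration:

```python
import numpy as np

# Toy dephasing model: a qubit starts in the superposition (|0> + |1>)/sqrt(2).
# Interaction with the environment shrinks the off-diagonal entries of the
# density matrix by exp(-t / T2). T2 is an assumed coherence time, not a
# number from any real machine.
T2 = 100e-6                      # assumed coherence time, seconds
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]])    # density matrix of the fresh superposition

def dephase(rho, t, T2):
    """Keep the populations, damp the coherences (off-diagonal entries)."""
    decay = np.exp(-t / T2)
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

for t in (0.0, 50e-6, 100e-6, 500e-6):
    rho = dephase(rho0, t, T2)
    # |rho01| is the interference "visibility"; the populations stay at 0.5.
    print(f"t = {t*1e6:6.1f} us   coherence |rho01| = {abs(rho[0, 1]):.3f}")
```

Once that off-diagonal term hits zero, the qubit behaves like an ordinary classical coin flip – no more interference, no more quantum speedup.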

But no vacuum is perfect and reaching absolute zero is impossible, so quantum computers always lose coherence over time. This means every operation a quantum computer performs is under a time constraint: it has to finish before decoherence destroys the information. As this Scientific American article by Scott Pakin and Patrick Coles of Los Alamos National Laboratory points out, scientists are attempting various workarounds to the problem:

One approach is to guess what an error-free computation would look like based on the results of computations with various noise levels. A completely different approach, hybrid quantum-classical algorithms, runs only the most performance-critical sections of a program on a quantum computer, with the bulk of the program running on a more robust classical computer. These strategies and others are proving to be useful for dealing with the noisy environment of today’s quantum computers.
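To make the first strategy in that quote a little more concrete, here’s a toy sketch of the idea (usually called zero-noise extrapolation): run the same computation at several deliberately boosted noise levels, then extrapolate the result back to zero noise. The “measurements” below are synthetic numbers I made up, not output from any real device:

```python
import numpy as np

# Toy zero-noise extrapolation: measure the same quantity at several
# amplified noise levels, fit a curve, and read off the value at zero noise.
rng = np.random.default_rng(0)

true_value = 1.0                                 # assumed ideal, noise-free result
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])    # how much the noise was amplified

# Pretend each run decays linearly with the noise scale, plus a bit of shot noise.
measured = true_value - 0.12 * noise_scales + rng.normal(0, 0.005, noise_scales.size)

# Fit a straight line through the noisy points and evaluate it at scale = 0.
slope, intercept = np.polyfit(noise_scales, measured, 1)
print(f"measured at scale 1.0 : {measured[0]:.3f}")
print(f"extrapolated to zero  : {intercept:.3f}   (ideal would be {true_value:.3f})")
```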

The authors of the article go on to point out a breakthrough their team made on the problem:

We use machine learning to translate, or compile, a quantum circuit into an optimally short equivalent that is specific to a particular quantum computer. Until recently, we have employed machine-learning methods on classical computers to search for shortened versions of quantum programs. Now, in a recent breakthrough, we have devised an approach that uses currently available quantum computers to compile their own quantum algorithms. That will avoid the massive computational overhead required to simulate quantum dynamics on classical computers.

Because this approach yields shorter algorithms than the state of the art, they consequently reduce the effects of noise. This machine-learning approach can also compensate for errors in a manner specific to the algorithm and hardware platform. It might find, for instance, that one qubit is less noisy than another, so the algorithm preferentially uses better qubits. In that situation, the machine learning creates a general algorithm to compute the assigned task on that computer using the fewest computational resources and the fewest logic gates. Thus optimized, the algorithm can run longer.
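As a toy stand-in for that compilation idea – emphatically not the authors’ machine-learning method – here’s a sketch that brute-force searches a tiny single-qubit gate set for a shorter sequence equivalent to a longer one. It only illustrates why shorter equivalents exist at all:

```python
import itertools
import numpy as np

# Toy "circuit compilation": find a shorter gate sequence whose overall
# unitary matches a longer target circuit, up to a global phase.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.array([[1, 0], [0, 1j]])                # phase gate
X = np.array([[0, 1], [1, 0]])                 # NOT gate
GATES = {"H": H, "S": S, "X": X}

def circuit_unitary(names):
    """Multiply the gates together in the order they are applied."""
    u = np.eye(2, dtype=complex)
    for name in names:
        u = GATES[name] @ u
    return u

def equivalent(u, v, tol=1e-9):
    """True if u and v differ only by a global phase (2x2 unitaries)."""
    return abs(abs(np.trace(u.conj().T @ v)) - 2) < tol

target = ["H", "S", "S", "H"]          # a 4-gate circuit (it happens to equal X)
target_u = circuit_unitary(target)

# Search all sequences shorter than the target, shortest first.
for length in range(1, len(target)):
    for names in itertools.product(GATES, repeat=length):
        if equivalent(circuit_unitary(list(names)), target_u):
            print(f"{target} can be replaced by {list(names)}")
            raise SystemExit

print("no shorter equivalent found in this gate set")
```

Brute force obviously doesn’t scale past a handful of gates, which is exactly why the team reaches for machine learning instead.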

Essentially, what they’re doing is making algorithms as short as possible – using as few gates as they can – so the quantum computer’s operations finish before decoherence destroys the information. This is a great breakthrough, but it’s still only a workaround. If quantum computing is to become a viable, widely used technology, further breakthroughs will be needed, and that may require a better understanding of quantum decoherence itself – so don’t let anyone tell you that quantum mechanics is just useless navel gazing by a bunch of nerds in their ivory tower.