D-Wave, why all the controversy?

14 June, 2016


When D-Wave released their first quantum annealing quantum computer in 2011, much of the world did not take note. This raises the question: why not? Are QCs not supposed to be significantly more powerful than normal computers? If 160-bit ECC is supposed to be breakable by a 1000-qubit quantum computer, why are we not seeing this done by D-Wave's 1152-qubit quantum computer?

This article explains the architecture of D-Wave quantum computers, showing that the architecture is meant specifically for optimisation problems. The limitations along with the advantages of this commercial quantum computer are discussed, providing insight into the new world of quantum computation.


When D-Wave released their first quantum annealing (QA) quantum computer in 2011, much of the world did not take note. A contributing factor could be that much of the quantum computation research community was caught off-guard. In the ensuing debate, researchers pointed out that the D-Wave machine might not be a quantum computer (QC) after all, but might instead operate in a complex but classical way. Many researchers were initially highly sceptical of D-Wave's claims, but over time most have come to accept that the D-Wave machine is, in fact, a quantum computer of sorts. The remaining contention lies in how useful a quantum computer the D-Wave machine is, and some researchers still question whether D-Wave's approach will yield a significant computational speed-up.

A lack of computational advantage seems to contradict what others are saying about QCs. Are QCs not supposed to be significantly more powerful than normal computers? The confusion is compounded by the fact that D-Wave's new processor claims to operate with 1152 qubits, which sounds like far more than the five qubits that IBM's Quantum Experience offers. It can be challenging to understand why some researchers are opposed to D-Wave's QC while simultaneously being very optimistic about other QC designs. In this post, the reasons for the controversies are discussed with the aim of understanding how D-Wave compares with other QCs. The distinction arises from the architecture that D-Wave uses and how its design differs from that of other QCs.


Quantum annealing explained

To understand the architecture that D-Wave relies on, it is essential to first understand quantum annealing (QA). Quantum annealing originated from a succession of developments in optimisation theory. The starting point is a principle derived from the cooling of metals. During cooling, the atoms of the metal steadily lose kinetic energy, and as they do so, they are likely to settle into an atomic lattice structure that minimises the bonding energy of the atoms. This process can be likened to an optimisation process, and the theory of simulated annealing (SA) was devised to capture the cooling process mathematically. SA is simply an optimisation algorithm that minimises a mathematical function. The algorithm takes a starting point on the function, calculates another point, then decides whether or not to accept this new point based on a mathematical relation. If the probability of moving to a higher point on the function is slowly decreased, SA eventually finds the lowest point of the function (the optimal point) with high probability.
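The core loop of SA is compact enough to sketch. Below is a minimal, hypothetical Python illustration; the test function, step size, and cooling schedule are arbitrary choices for demonstration, not anything D-Wave-specific:

```python
import math
import random

def simulated_annealing(f, x0, steps=10000, t_start=10.0, t_end=1e-3):
    """Minimise f starting from x0, using a geometric cooling schedule."""
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    cooling = (t_end / t_start) ** (1.0 / steps)
    t = t_start
    for _ in range(steps):
        x_new = x + random.uniform(-1.0, 1.0)   # propose a nearby point
        fx_new = f(x_new)
        # Accept downhill moves always; accept uphill moves with a
        # probability that shrinks as the temperature is lowered.
        if fx_new < fx or random.random() < math.exp((fx - fx_new) / t):
            x, fx = x_new, fx_new
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling
    return best_x, best_fx

random.seed(1)
# A bumpy 1-D test function whose global minimum of 0 sits at x = 0.
f = lambda x: x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0
x, fx = simulated_annealing(f, x0=8.0)
```

The acceptance rule in the loop is the Metropolis criterion; cooling slowly is exactly the "slowly decreased probability of moving to a higher point" described above.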

SA operates on classical principles, but there exists a quantum principle that can be used to improve the optimisation further. Quantum tunnelling is a process whereby a particle in a quantum state can move through a barrier. Tunnelling can occur because, when in a quantum state, a particle acts a lot like a wave. Think about it: is it possible to confine a wave in a shallow enclosure? The wave is likely to spill over the short barrier, causing it to spread beyond the barrier. Similarly, quantum tunnelling occurs when a particle in a quantum state is confined. Because the particle acts like a wave in that state, it can spill over the barrier that tries to confine it (this is fully explained by the solution of the Schrödinger equation in a finite potential well).
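The 'spilling over' can be quantified. For a particle of mass \(m\) and energy \(E\) meeting a rectangular barrier of height \(V_0 > E\) and width \(L\), the standard textbook approximation for the tunnelling probability (a general quantum-mechanics result, not anything specific to D-Wave's hardware) is

```latex
T \approx e^{-2\kappa L}, \qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar} .
```

The probability falls off exponentially with the barrier's width but only with the square root of its height, which is why tunnelling pays off most for tall, thin barriers.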

Quantum annealing adds the concept of quantum tunnelling to simulated annealing. In particular, it is the mathematical concept of quantum tunnelling that is added to SA to ensure that the optimisation proceeds even when the solution is hard to find. A practical analogue is a hiker (the reader) trying to get to a lake in a mountain range. If a vertical cliff face stood in front of you, or if the route crossed very rough terrain such as large boulders, you would find it very difficult to reach the lake. In that case you are facing difficulties similar to those SA faces in reaching the lowest, optimal solution to the problem. Having the ability to 'teleport' through the cliff would make your journey much easier. This is what quantum tunnelling allows: the solution can skip ahead to a lower point in the solution landscape even in the presence of an intervening barrier.

Here a distinction should be made to avoid confusion. Simulated QA is a mathematical method that extends the theory of SA and can be directly combined with SA to find an optimal solution on a classical computer. Stated differently, simulated QA adds the mathematics of quantum tunnelling to SA so as to improve the solution that SA returns when run on a classical computer. A full hardware implementation of QA operates rather differently: it acts like a shower of rain over the mountain range. Thousands of droplets propagate over the rocks, flowing towards the lowest areas where streams form. If the water meets a cliff face, it seeps through the ground and may emerge into a river that flows into the lake. That is the potential of a QA computer: it can solve optimisation problems by trying many possibilities simultaneously and, on top of that, can escape getting stuck before it finds the final solution.

The advantage that the hardware QA implementation shows over SA exists in two cases. The first is when the problem has a sharp 'spike' in the solution landscape, and the second is when the path to the final solution is not smooth. Only some problems are of this nature, but concrete examples exist. At present, it is thought that optimisation of a classical processor design meets both these criteria.

The type of QA suggested by Kadowaki and Nishimori is not an abstract mathematical model to be implemented with classical algorithms (Kadowaki & Nishimori, 1998). It limits itself to hardware-implementable QA and suggests an implementation method. In this scheme, the minimum point of the function is found by applying a strong field to qubits that each have either an up or a down spin. Applying a strong field allows the qubits to settle into a state that represents all states of the problem. The field strength is then slowly decreased, allowing the system to move towards an optimal point, because reducing the field strength reduces the number of states that the system is likely to occupy. If the system gets 'stuck' in a state that is not the optimal point, the qubits are likely to tunnel out of that state, because the state's barrier cannot always contain the particle waves that make up the qubits. In addition, the presence of a more optimal state elsewhere increases the chances of the particles 'spilling over'. The final state is reached when the applied field strength is zero; reading out the system state then returns the result of the computation. It is this suggestion by Kadowaki and Nishimori that describes the implementation of the D-Wave quantum computer.
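In the transverse Ising model used by Kadowaki and Nishimori, this procedure corresponds, schematically, to a time-dependent Hamiltonian in which the couplings \(J_{ij}\) encode the optimisation problem and the transverse field \(\Gamma(t)\) is the strong applied field that is slowly driven to zero:

```latex
H(t) = -\sum_{\langle i,j \rangle} J_{ij}\,\sigma_i^z \sigma_j^z
       \;-\; \Gamma(t) \sum_i \sigma_i^x ,
\qquad \Gamma(t) \to 0 .
```

At large \(\Gamma\) the ground state is a superposition of all spin configurations; as \(\Gamma\) shrinks, the system is steered towards the spin configuration that minimises the classical cost encoded in the \(J_{ij}\).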


D-Wave's hardware advantage

So we can conclude that the D-Wave computer is built to solve optimisation problems. But the question remains: is a hardware implementation of QA truly better at solving problems than a software implementation of the underlying mathematics on a classical computer? Here researchers disagree. Initial results by Kadowaki and Nishimori show that QA solves optimisation problems faster than SA. For both QA and SA, the probability of finding the correct solution depends on how quickly the algorithm is made to converge: speeding up the time to a solution reduces the probability that the final answer is correct. When both are made to converge at the same rate, QA is found to reach the correct solution with higher probability than SA, so QA is shown to be better than SA. The problem with this result is that the speed improvement arguably does not justify millions of dollars of investment in quantum hardware.

Later papers on QA do find an area of application where SA fails while QA succeeds. The problem is that SA is not the only optimisation algorithm that exists. As mentioned above, quantum annealing can be simulated. A quantum Monte-Carlo (QMC) algorithm simulates quantum systems and can be used to reproduce the hardware QA operation on a classical computer. Such an implementation has been found to achieve similar scaling with problem difficulty to that of hardware QA (Denchev et al., 2016). This means that if a problem is solved with both the hardware QA and a classical QMC implementation, the speed difference remains a constant factor as the problem size grows.

Scaling with problem size is the main advantage that QCs are supposed to provide. Other QC architectures are found to offer an exponential speed-up with an increase in QC size for certain problems. Ideally, a QC can solve a problem that is twice as difficult by adding just one qubit to its design. With that kind of scaling, if a 1152-qubit QC could solve an optimisation problem of a given difficulty, then a 1153-qubit machine could solve a problem of twice that difficulty in the same amount of time. This type of scaling does not hold for D-Wave QCs. Instead, you would have to build a D-Wave machine with 1152 × 2 = 2304 qubits to solve a problem of twice the difficulty in the same amount of time. Problem difficulty is here specified relative to the time that a classical computer would take to solve the same problem; the D-Wave's speed scaling is therefore comparable to the speed increase a classical computer would achieve if it were also doubled in size.
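The difference in scaling can be made concrete with a toy calculation (the helper names are hypothetical and the numbers purely illustrative):

```python
import math

def qa_qubits_needed(base_qubits, difficulty_factor):
    # Scaling described for the D-Wave machine: qubit count grows in
    # proportion to problem difficulty.
    return base_qubits * difficulty_factor

def ideal_qc_qubits_needed(base_qubits, difficulty_factor):
    # Ideal scaling for certain problems on other QC architectures:
    # one extra qubit per doubling of problem difficulty.
    return base_qubits + math.ceil(math.log2(difficulty_factor))

print(qa_qubits_needed(1152, 2))        # 2304
print(ideal_qc_qubits_needed(1152, 2))  # 1153
```

For a problem 1024 times as difficult, the gap widens dramatically: the linear scaling demands over a million qubits, the ideal scaling only ten more.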

Fortunately, D-Wave does have a large absolute speed advantage, on the order of 10^8, over the QMC implementation. This means that while hardware QA does not scale any better with problem size than a classical QMC implementation, it still solves these problems much faster: around 10^8 times faster. Thus, even though we need a D-Wave QC with twice the number of qubits to solve a problem that is twice as difficult, the D-Wave machine will still reach the solution much faster than a classical computer for any problem size. This advantage gives a reason to continue development of the D-Wave QC. Even so, other QC architectures are more promising for the problems they solve, since those problems only require adding one qubit for every doubling of problem difficulty; that advantage does not necessarily extend to optimisation problems, however. It remains to be seen how effective other QCs are at solving optimisation problems. The D-Wave may still be the best architecture for certain optimisation problems, because quantum algorithms for optimisation may not be faster on other QCs.

It should be noted that if the problem to be solved does not meet the difficulty criteria of containing a 'spike' or difficult terrain on the way to the solution, then QA is not much faster than SA, and the D-Wave architecture may not have a large advantage over a classical SA implementation. Therefore, we can conclude that while the D-Wave quantum computer is, in fact, a quantum computer, it can only solve a subset of problems faster than classical computers can. It remains to be seen how many problems can be cast into a form that is rapidly solvable by a D-Wave QC.


D-Wave's architecture limitations

We have seen that particles act much like waves in quantum mechanics. Entanglement can be described as many quantum particles being made to mix so that their waves meld together. The mixed particles then exhibit a single, more dynamic wave with many different possible configurations. The computational power of a QC stems from the fact that the number of possible configurations of this dynamic wave grows massively as more particle waves are added to it. The QC can quickly reduce a complex calculation to a solution by using the dynamic wave to explore its many configurations and find the best one. At the end of a QC calculation, it is this dynamic multi-element wave that returns the result of the computation. It can be said that entanglement is what gives a QC its ability to function.

The extent of entanglement in the D-Wave QA computer should also be considered. The architecture consists of clusters of eight qubits, entangled with four connections per qubit (16 connections per eight-qubit cluster), and these eight-qubit clusters are in turn entangled with adjacent clusters through four connections. This sparse connectivity limits the programmability of the D-Wave computer considerably. Together with temperature and other constraints, it makes the D-Wave operate with less than ideal computational ability. Future D-Wave designs could be more powerful than commonly thought if smart engineering reduces these limiting factors.
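A sketch of that connectivity, assuming the published description of D-Wave's unit cell as a complete bipartite graph between two groups of four qubits:

```python
from itertools import product

def unit_cell_couplers():
    """Couplers inside one eight-qubit cluster: each of the four 'left'
    qubits connects to each of the four 'right' qubits, giving four
    connections per qubit and 16 couplers per cluster."""
    left, right = range(4), range(4, 8)
    return [(a, b) for a, b in product(left, right)]

couplers = unit_cell_couplers()
print(len(couplers))  # 16
```

Note what is missing: qubits within the same group of four never couple directly, which is one reason arbitrary problem graphs must be reshaped (at a cost in qubits) before they fit the hardware.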

The D-Wave architecture solves optimisation problems, but Shor's algorithm is not an optimisation algorithm. Therefore, it is not possible to factor large numbers or solve discrete logarithms using Shor's algorithm on a D-Wave quantum computer. Interestingly, a method has been developed that finds the factors of certain numbers via the solution of an optimisation problem (Dattani & Bryans, 2014). If factorisation of large numbers proves possible on the D-Wave architecture, and D-Wave systems prove to be significantly faster than classical computers, the security of RSA keys could be at risk.
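At its simplest, such a mapping is a cost function whose minimum encodes the factors. The brute-force search below is a deliberately naive stand-in for the annealer (it is not the Dattani & Bryans reduction, which is far more compact):

```python
def factor_cost(n, p, q):
    # Zero exactly when p * q == n, positive otherwise.
    return (n - p * q) ** 2

def factor_by_search(n):
    """Exhaustively minimise the cost over candidate odd factor pairs.
    A QA machine would instead encode this cost over qubit spins and
    let the hardware anneal to the minimum."""
    return min(
        ((p, q) for p in range(3, n, 2) for q in range(p, n, 2)),
        key=lambda pq: factor_cost(n, *pq),
    )

print(factor_by_search(143))  # (11, 13)
```

The annealer's job is to find the zero-cost configuration without enumerating every pair, which is where the claimed hardware speed advantage would matter.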


To summarise, D-Wave is, in fact, a quantum computer, but it operates on a very different set of principles from those of other QCs. Due to the characteristics of the mathematical problem that D-Wave attempts to solve, it can only solve a subset of problems more efficiently than classical computers. Researchers have not shown that the underlying principle of the D-Wave machine allows accelerated calculation through the operation of quantum mechanics in hardware: implementing the mathematics of quantum mechanics on a classical computer yields simulated solutions that scale equally well.

Fortunately, due to progress in the design of the D-Wave computer, initial results show that its quantum hardware is much faster than simulated quantum algorithms at solving some problems. It follows that classical processors using simulations of quantum mechanics to solve problems may fall far behind in speed comparisons with D-Wave processors that use a physical implementation of quantum mechanics. If this is the case, the use of D-Wave QCs may, after all, prove to be economically justifiable over the use of classical processors to solve some problems. Furthermore, other QC architectures may be unable to compete with D-Wave QCs when it comes to solving optimisation problems. So even though D-Wave QCs may not be solving mathematical problems that display the full power of QCs, it could still be the best solution available for solving optimisation problems.

While Shor's algorithm cannot be run on D-Wave quantum computers, recasting factorisation as an optimisation problem may yet succeed. Even so, many problems cannot at present be programmed to run on D-Wave QCs, because the hardware cannot support the complexity needed to describe them. Advancements in entanglement and enhanced control of qubits in D-Wave QCs could unlock much faster and more versatile processing than is currently thought possible.

The field of quantum computation is progressing rapidly, and only time will tell whether the D-Wave approach will truly be successful. Regardless of the results achieved by D-Wave and hardware QA, fully programmable QCs operate very differently from D-Wave, and initial results for universal QCs are very promising. The method of operation of these universal QCs will be explored in an upcoming article.



QA - quantum annealing

QC - quantum computer

SA - simulated annealing

QMC - quantum Monte-Carlo



Dattani, N. S., & Bryans, N. (2014). Quantum factorization of 56153 with only 4 qubits. arXiv preprint, quant-ph, 1–6.

Denchev, V. S., Boixo, S., Isakov, S. V., Ding, N., Smelyanskiy, V., Martinis, J., & Neven, H. (2016). What is the Computational Value of Finite Range Tunneling? arXiv preprint, 1–17.

Kadowaki, T., & Nishimori, H. (1998). Quantum annealing in the transverse Ising model. Physical Review E, 58(5), 5355.