Quantum Computing (Download Seminar Report)
Post: #1

Imagine a computer whose memory is exponentially larger than its apparent physical size; a computer that can manipulate an exponential set of inputs simultaneously; a computer that computes in the twilight zone of Hilbert space. You would be thinking of a quantum computer. Relatively few and simple concepts from quantum mechanics are needed to make quantum computers a possibility. The subtlety has been in learning to manipulate these concepts. Is such a computer an inevitability, or will it prove too difficult to build?

The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the EPR experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory, and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, data compression, quantum error correction, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource. Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, high Q cavities, and NMR. The review concludes with an outline of the main features of quantum information physics, and avenues for future research.
Post: #2

Quantum Computing





What is quantum computing? It's something that could have been thought up a long time ago - an idea whose time has come. For any physical theory one can ask: what sort of machines will do useful computation? or, what sort of processes will count as useful computational acts? Alan Turing thought about this in 1936 with regard (implicitly) to classical mechanics, and gave the world the paradigm classical computer: the Turing machine.

But even in 1936 classical mechanics was known to be false. Work is now under way -mostly theoretical, but tentatively, hesitantly groping towards the practical - in seeing what quantum mechanics means for computers and computing.

The basic idea behind quantum computing is the merging of computer science, information theory and quantum mechanics, moving beyond classical physics in order to process information more efficiently. Information is the most important element of any system.
Information can be represented in many ways. It can be analyzed provided we know how it was encoded. For example, the two statements "the quantum computer is very interesting'' and "l'ordinateur quantique est tres interessant'' have something in common, although they share no words. The thing they have in common is their information content. Essentially the same information could be expressed in many other ways, for example by substituting numbers for letters in a scheme such as a -> 97, b -> 98, c -> 99 and so on, in which case the English version of the above statement becomes 116 104 101 32 113 117 97 110 116 117 109... . It is very significant that information can be expressed in different ways without losing its essential nature, since this leads to the possibility of the automatic manipulation of information: a machine need only be able to manipulate quite simple things like integers in order to do surprisingly powerful information processing, from document preparation to differential calculus, even to translating between human languages. We are familiar with this now, because of the ubiquitous computer, but even fifty years ago such a widespread significance of automated information processing was not foreseen.
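The number scheme described above (a -> 97, b -> 98, ...) is simply the standard ASCII/Unicode code-point mapping, so the translation is easy to sketch in Python (the helper names `encode` and `decode` are ours, not from the text):

```python
# Encode text as numbers using the mapping a -> 97, b -> 98, c -> 99, ...
# This is just the ASCII/Unicode code point of each character.
def encode(text):
    return [ord(ch) for ch in text]

# Decode the numbers back into text, showing no information was lost.
def decode(codes):
    return "".join(chr(c) for c in codes)

codes = encode("the quantum computer is very interesting")
print(codes[:11])     # 116 104 101 32 113 117 97 110 116 117 109, as in the text
print(decode(codes))  # round-trips to the original sentence
```

The round trip illustrates the point of the paragraph: the information content survives re-expression, which is what makes automatic manipulation possible.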

However, there is one thing that all ways of expressing information must have in common: they all use real physical things to do the job. Spoken words are conveyed by air pressure fluctuations, written ones by arrangements of ink molecules on paper, even thoughts depend on neurons (Landauer 1991). The rallying cry of the information physicist is "no information without physical representation!'' Conversely, the fact that information is insensitive to exactly how it is expressed, and can be freely translated from one form to another, makes it an obvious candidate for a fundamentally important role in physics, like energy and momentum and other such abstractions. However, until the second half of this century, the precise mathematical treatment of information, especially information processing, was undiscovered, so the significance of information in physics was only hinted at in concepts such as entropy in thermodynamics. It now appears that information may have a much deeper significance. Historically, much of fundamental physics has been concerned with discovering the fundamental particles of nature and the equations which describe their motions and interactions. It now appears that a different programme may be equally important: to discover the ways that nature allows, and prevents, information to be expressed and manipulated, rather than particles to move. For example, the best way to state exactly what can and cannot travel faster than light is to identify information as the speed-limited entity. In quantum mechanics, it is highly significant that the state vector must not contain, whether explicitly or implicitly, more information than can meaningfully be associated with a given system. Among other things this produces the wave function symmetry requirements which lead to Bose Einstein and Fermi Dirac statistics, the periodic structure of atoms, and so on.

The history of computer technology has involved a sequence of changes from one type of physical realization to another --- from gears to relays to valves to transistors to integrated circuits and so on. Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new, quantum technology must replace or supplement what we have now. The point is, however, that quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock-speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!

A bit is a fundamental unit of information, classically represented as a 0 or 1 in your digital computer. Each classical bit is physically realized through a macroscopic physical system, such as the magnetization on a hard disk or the charge on a capacitor. A document, for example, comprised of n-characters stored on the hard drive of a typical computer is accordingly described by a string of 8n zeros and ones. Herein lies a key difference between your classical computer and a quantum computer. Where a classical computer obeys the well understood laws of classical physics, a quantum computer is a device that harnesses physical phenomenon unique to quantum mechanics (especially quantum interference) to realize a fundamentally new mode of information processing.
In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. This qubit property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1, as in a classical bit, but also in states corresponding to a blend or superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state. This may seem counterintuitive because everyday phenomena are governed by classical physics, not quantum mechanics, which takes over at the atomic level.

To explain what makes quantum computers so different from their classical counterparts we begin by having a closer look at a basic chunk of information, namely one bit. From a physical point of view a bit is a physical system which can be prepared in one of two different states representing two logical values --- no or yes, false or true, or simply 0 or 1. For example, in digital computers, the voltage between the plates in a capacitor represents a bit of information: a charged capacitor denotes bit value 1 and an uncharged capacitor bit value 0. One bit of information can also be encoded using two different polarizations of light or two different electronic states of an atom. However, if we choose an atom as a physical bit then quantum mechanics tells us that, apart from the two distinct electronic states, the atom can also be prepared in a coherent superposition of the two states. This means that the atom is both in state 0 and state 1.
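Numerically, a qubit state is just a pair of complex amplitudes whose squared magnitudes sum to 1. A minimal sketch with NumPy (variable names are ours):

```python
import numpy as np

# Basis states |0> and |1> as two-component complex vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal coherent superposition: the atom is "both in state 0 and state 1".
psi = (ket0 + ket1) / np.sqrt(2)

# Squared magnitudes of the amplitudes give the measurement probabilities.
probs = np.abs(psi) ** 2
print(probs)                         # [0.5 0.5]
print(np.isclose(probs.sum(), 1.0))  # total probability is 1
```

Measuring such a qubit yields 0 or 1 with probability 0.5 each; the coherence matters only when superpositions are allowed to interfere, as the experiments below show.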

Experimental validation

In an experiment like that in figure a, where a photon is fired at a half-silvered mirror, it can be shown that the photon does not actually split by verifying that if one detector registers a signal, then no other detector does. With this piece of information, one might think that any given photon travels vertically or horizontally, randomly choosing between the two paths. However, quantum mechanics predicts that the photon actually travels both paths simultaneously, collapsing down to one path only upon measurement. This effect, known as single-particle interference, can be better illustrated in a slightly more elaborate experiment, outlined in figure b.

Figure b depicts an interesting experiment that demonstrates the phenomenon of single-particle interference. In this case, experiment shows that the photon always reaches detector A, never detector B! If a single photon travels vertically and strikes the mirror, then, by comparison to the experiment in figure a, there should be an equal probability that the photon will strike either detector A or detector B. The same goes for a photon traveling down the horizontal path. However, the actual result is drastically different. The only conceivable conclusion is therefore that the photon somehow traveled both paths simultaneously, creating interference at the point of intersection that destroyed the possibility of the signal reaching B. This is known as quantum interference and results from the superposition of the possible photon states, or potential paths. So although only a single photon is emitted, it appears as though an identical photon exists and travels the 'path not taken,' only detectable by the interference it causes with the original photon when their paths come together again. If, for example, either of the paths is blocked with an absorbing screen, then detector B begins registering hits again just as in the first experiment! This unique characteristic, among others, makes the current research in quantum computing not merely a continuation of today's idea of a computer, but rather an entirely new branch of thought. And it is because quantum computers harness these special characteristics that they have the potential to be incredibly powerful computational devices.
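The figure-b setup can be sketched with 2x2 matrices: each half-silvered mirror acts as a unitary beam splitter on the two path amplitudes, and applying it twice makes the amplitudes interfere. The beam-splitter convention below (reflection picks up a factor of i) is one common choice, not taken from the text:

```python
import numpy as np

# 50/50 beam splitter: transmission amplitude 1, reflection amplitude i (common convention).
B = np.array([[1, 1j],
              [1j, 1]], dtype=complex) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)   # photon enters along one path

after_first = B @ photon     # superposition of both paths
after_second = B @ after_first  # paths recombine and interfere

probs = np.abs(after_second) ** 2
print(probs)   # all amplitude ends in ONE output port: "detector A" always fires

# Blocking one path with an absorbing screen removes that component...
blocked = after_first * np.array([1, 0], dtype=complex)
probs_blocked = np.abs(B @ blocked) ** 2
print(probs_blocked)   # ...and detector B starts registering hits again
```

With both paths open the interference sends every photon to one detector; with a path blocked, each detector fires 25% of the time (the photon is absorbed in the remaining cases), matching the behaviour described above.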

The most exciting really new feature of quantum computing is quantum parallelism. A quantum system is in general not in one "classical state", but in a "quantum state" consisting (crudely speaking) of a superposition of many classical or classical-like states. This superposition is not just a figure of speech, covering up our ignorance of which classical-like state it's "really" in. If that was all the superposition meant, you could drop all but one of the classical-like states (maybe only later, after you deduced retrospectively which one was "the right one") and still get the time evolution right. But actually you need the whole superposition to get the time evolution right. The system really is in some sense in all the classical-like states at once! If the superposition can be protected from unwanted entanglement with its environment (known as decoherence), a quantum computer can output results depending on details of all its classical-like states. This is quantum parallelism - parallelism on a serial machine. And if that wasn't enough, machines that would already, in architectural terms, qualify as parallel can benefit from quantum parallelism too - at which point the mind begins to seriously boggle!

The Potential and Power of Quantum Computing

Let us start by describing the problem at hand: factoring a number N into its prime factors (e.g., the number 51688 may be decomposed as 51688 = 2^3 x 7 x 13 x 71). A convenient way to quantify how quickly a particular algorithm may solve a problem is to ask how the number of steps to complete the algorithm scales with the size of the "input" the algorithm is fed. For the factoring problem, this input is just the number N we wish to factor; hence the length of the input is log N. (The base of the logarithm is determined by our numbering system. Thus a base of 2 gives the length in binary; a base of 10 in decimal.) 'Reasonable' algorithms are ones which scale as some small-degree polynomial in the input size (with a degree of perhaps 2 or 3).

On conventional computers the best known factoring algorithm runs in O(exp[(64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3)]) steps. This algorithm therefore scales exponentially with the input size log N. For instance, in 1994 a 129-digit number (known as RSA129) was successfully factored using this algorithm on approximately 1600 workstations scattered around the world; the entire factorization took eight months. Using this to estimate the prefactor of the above exponential scaling, we find that it would take roughly 800,000 years to factor a 250-digit number with the same computer power; similarly, a 1000-digit number would require about 10^25 years (significantly longer than the age of the universe). The difficulty of factoring large numbers is crucial for public-key cryptosystems, such as ones used by banks. There, such codes rely on the difficulty of factoring numbers with around 250 digits.
Recently, an algorithm was developed for factoring numbers on a quantum computer which runs in O((log N)^(2+ε)) steps, where ε is small. This is roughly quadratic in the input size, so factoring a 1000-digit number with such an algorithm would require only a few million steps. The implication is that public-key cryptosystems based on factoring may be breakable.

To give you an idea of how this exponential improvement might be possible, we review an elementary quantum mechanical experiment that demonstrates where such power may lie hidden. The two-slit experiment is prototypic for observing quantum mechanical behavior: A source emits photons, electrons or other particles that arrive at a pair of slits.
These particles undergo unitary evolution and finally measurement. We see an interference pattern, with both slits open, which wholly vanishes if either slit is covered. In some sense, the particles pass through both slits in parallel. If such unitary evolution were to represent a calculation (or an operation within a calculation) then the quantum system would be performing computations in parallel. Quantum parallelism comes for free. The output of this system would be given by the constructive interference among the parallel computations.

In a traditional computer, information is encoded in a series of bits, and these bits are
manipulated via Boolean logic gates arranged in succession to produce an end result.
Similarly, a quantum computer manipulates qubits by executing a series of quantum
gates, each a unitary transformation acting on a single qubit or pair of qubits. In applying
these gates in succession, a quantum computer can perform a complicated unitary
transformation to a set of qubits in some initial state. The qubits can then be measured,
with this measurement serving as the final computational result. This similarity in
calculation between a classical and quantum computer means that, in theory, a classical computer can accurately simulate a quantum computer. In other words, a classical computer would be able to do anything a quantum computer can. So why bother with quantum computers? Although a classical computer can theoretically simulate a quantum computer, the simulation is incredibly inefficient, so much so that a classical computer is effectively incapable of performing many tasks that a quantum computer could perform with ease. The simulation of a quantum computer on a classical one is a computationally hard problem because the correlations among quantum bits are qualitatively different from correlations among classical bits, as first explained by John Bell. Take, for example, a system of only a few hundred qubits: it exists in a Hilbert space of dimension ~10^90, so a simulation would require a classical computer to work with exponentially large matrices (to perform calculations on each individual state, which is also represented as a matrix), meaning it would take exponentially longer than even a primitive quantum computer.
Richard Feynman was among the first to recognize the potential in quantum superposition for solving such problems much faster. For example, a system of 500 qubits, which is impossible to simulate classically, represents a quantum superposition of as many as 2^500 states. Each state would be classically equivalent to a single list of 500 1's and 0's. Any quantum operation on that system --a particular pulse of radio waves, for instance, whose action might be to execute a controlled-NOT operation on the 100th and 101st qubits-- would simultaneously operate on all 2^500 states. Hence with one fell swoop, one tick of the computer clock, a quantum operation could compute not just on one machine state, as serial computers do, but on 2^500 machine states at once! Eventually, however, observing the system would cause it to collapse into a single quantum state corresponding to a single answer, a single list of 500 1's and 0's, as dictated by the measurement axiom of quantum mechanics. The reason this is an exciting result is because this answer, derived from the massive quantum parallelism achieved through superposition, is the equivalent of performing the same operation on a classical supercomputer with ~10^150 separate processors (which is of course impossible)!!
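The state-vector blow-up is easy to see in a small simulation: one Hadamard gate per qubit puts an n-qubit register into an equal superposition of all 2^n basis states, yet the classical description already needs a vector of length 2^n. We use n = 10 here; n = 500 would be far beyond any classical memory:

```python
import numpy as np

n = 10
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

# Tensor (Kronecker) product of one Hadamard per qubit.
Hn = np.array([[1.0]])
for _ in range(n):
    Hn = np.kron(Hn, H)

state = np.zeros(2 ** n)
state[0] = 1.0       # register starts in |00...0>
state = Hn @ state   # one layer of gates touches all 2^n amplitudes at once

print(len(state))                               # 1024 amplitudes for only 10 qubits
print(np.allclose(state, 1 / np.sqrt(2 ** n)))  # equal superposition of all states
```

Each added qubit doubles the length of the classical state vector, which is exactly why simulating even a few hundred qubits classically is hopeless.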
Early investigators in this field were naturally excited by the potential of such immense computing power, and soon after realizing its potential, the hunt was on to find something interesting for a quantum computer to do. Peter Shor, a research scientist at AT&T's Bell Laboratories in New Jersey, provided such an application by devising the first quantum computer algorithm. Shor's algorithm harnesses the power of quantum superposition to rapidly factor very large numbers (on the order of 10^200 and greater) in a matter of seconds. The premier application of a quantum computer capable of implementing this algorithm lies in the field of encryption, where one of the most common encryption schemes, known as RSA, relies heavily on the difficulty of factoring very large composite numbers into their primes. A computer which can do this easily is naturally of great interest to numerous government agencies that use RSA -- previously considered to be "uncrackable" -- and to anyone interested in electronic and financial privacy.
Encryption, however, is only one application of a quantum computer. In addition, Shor has put together a toolbox of mathematical operations that can only be performed on a quantum computer, many of which he used in his factorization algorithm. Furthermore, Feynman asserted that a quantum computer could function as a kind of simulator for quantum physics, potentially opening the doors to many discoveries in the field. Currently the power and capability of a quantum computer is primarily theoretical speculation; the advent of the first fully functional quantum computer will undoubtedly bring many new and exciting applications.

Now we push the idea of superposition of numbers a bit further. Consider a register composed of three physical bits. Any classical register of that type can store at a given moment of time only one out of eight different numbers, i.e. the register can be in only one out of eight possible configurations such as 000, 001, 010, ... 111. A quantum register composed of three qubits can store at a given moment of time all eight numbers in a quantum superposition (Fig. 2). It is quite remarkable that all eight numbers are physically present in the register, but it should be no more surprising than a qubit being both in state 0 and 1 at the same time. If we keep adding qubits to the register we increase its storage capacity exponentially: three qubits can store 8 different numbers at once, four qubits can store 16 different numbers at once, and so on; in general L qubits can store 2^L numbers at once.

Once the register is prepared in a superposition of different numbers, we can perform operations on all of them. For example, if the qubits are atoms then suitably tuned laser pulses affect the atomic electronic states and evolve initial superpositions of encoded numbers into different superpositions. During such an evolution each number in the superposition is affected, and as a result we generate a massive parallel computation, albeit in one piece of quantum hardware. This means that a quantum computer can, in only one computational step, perform the same mathematical operation on 2^L different input numbers encoded in coherent superpositions of L qubits. In order to accomplish the same task any classical computer has to repeat the same computation 2^L times, or one has to use 2^L different processors working in parallel. In other words, a quantum computer offers an enormous gain in the use of computational resources such as time and memory.
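A toy illustration of this parallelism for L = 3: one matrix application implements x -> x + 1 (mod 8) on all eight numbers held in the register simultaneously. A permutation matrix is used because quantum operations must be reversible (unitary); the setup is our own sketch, not a specific laser-pulse scheme:

```python
import numpy as np

L = 3
dim = 2 ** L   # a 3-qubit register spans 8 basis states |0>, |1>, ..., |7>

# Unitary implementing f(x) = x + 1 mod 8 as a permutation of basis states.
U = np.zeros((dim, dim))
for x in range(dim):
    U[(x + 1) % dim, x] = 1.0

# A register holding all eight numbers at once, with distinct amplitudes so
# we can watch them move.
state = np.arange(1, dim + 1, dtype=float)
state /= np.linalg.norm(state)

new_state = U @ state   # ONE application of U updates the amplitude of every x

# The amplitude that sat at |x> now sits at |x+1 mod 8>.
print(np.allclose(new_state, np.roll(state, 1)))  # True
```

A single matrix-vector product has incremented all eight encoded numbers; a classical register would need eight separate increments (or eight parallel processors) to do the same.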

But this, after all, sounds like yet another piece of purely technological progress. It looks like classical computers can do the same computations as quantum computers but simply need more time or more memory. The catch is that classical computers need exponentially more time or memory to match the power of quantum computers, and this really is asking too much, because an exponential increase is really fast and we run out of available time or memory very quickly. Let us have a closer look at this issue.

In order to solve a particular problem computers follow a precise set of instructions that can be mechanically applied to yield the solution to any given instance of the problem. A specification of this set of instructions is called an algorithm. Examples of algorithms are the procedures taught in elementary schools for adding and multiplying whole numbers; when these procedures are mechanically applied, they always yield the correct result for any pair of whole numbers. Some algorithms are fast (e.g. multiplication), others are very slow (e.g. factorisation, playing chess). Consider, for example, the following factorisation problem

? x ? = 29083

How long would it take you, using paper and pencil, to find the two whole numbers which should be written into the two boxes (the solution is unique)? Probably about one hour. Solving the reverse problem

127 x 229 = ?

again using the paper and pencil technique, takes less than a minute. All because we know fast algorithms for multiplication but we do not know equally fast ones for factorisation. What really counts for a ''fast'' or a ''usable'' algorithm, according to the standard definition, is not the actual time taken to multiply a particular pair of numbers but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. The same standard text-book method of multiplication requires little extra work when we switch from two three-digit numbers to two thirty-digit numbers. By contrast, factoring a thirty-digit number using the simplest trial division method is about 10^13 times more time or memory consuming than factoring a three-digit number. The use of computational resources is enormous when we keep increasing the number of digits. The largest number that has been factorised as a mathematical challenge, i.e. a number whose factors were secretly chosen by mathematicians in order to present a challenge to other mathematicians, had 129 digits. No one can even conceive of how one might factorise, say, thousand-digit numbers; the computation would take much more than the estimated age of the universe.
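The asymmetry is easy to reproduce: multiplying 127 by 229 takes a handful of digit operations, while recovering the factors of 29083 by trial division means testing candidate divisors one after another. A minimal sketch:

```python
def trial_division(n):
    """Factor n by testing candidate divisors 2, 3, 4, ... up to sqrt(n)."""
    d, steps = 2, 0
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, n // d, steps
        d += 1
    return n, 1, steps   # n itself is prime

p, q, steps = trial_division(29083)
print(p, q)    # 127 229 -- the unique pair of prime factors
print(steps)   # divisors tried before the first factor is found
```

The number of candidate divisors grows like sqrt(n), i.e. exponentially in the number of digits of n, which is why this naive approach (and every known classical improvement on it) becomes hopeless for large numbers.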

Skipping details of the computational complexity we only mention that computer scientists have a rigorous way of defining what makes an algorithm fast (and usable) or slow (and unusable). For an algorithm to be fast, the time it takes to execute the algorithm must increase no faster than a polynomial function of the size of the input. Informally, think about the input size as the total number of bits needed to specify the input to the problem, for example, the number of bits needed to encode the number we want to factorise. If the best algorithm we know for a particular problem has an execution time (viewed as a function of the size of the input) bounded by a polynomial then we say that the problem belongs to class P. Problems outside class P are known as hard problems. Thus we say, for example, that multiplication is in P whereas factorisation is not in P, and that is why it is a hard problem. Hard does not mean "impossible to solve'' or "non-computable'' --- factorisation is perfectly computable using a classical computer; however, the physical resources needed to factor a large number are such that for all practical purposes, it can be regarded as intractable.

It is worth pointing out that computer scientists have carefully constructed the definitions of efficient and inefficient algorithms trying to avoid any reference to physical hardware. According to the above definition factorisation is a hard problem for any classical computer regardless of its make and clock-speed. Have a look at Fig. 3 and compare a modern computer with its ancestor of the nineteenth century, the Babbage difference engine. The technological gap is obvious and yet the Babbage engine can perform the same computations as the modern digital computer. Moreover, factoring is equally difficult both for the Babbage engine and a top-of-the-line Connection Machine; the execution time grows exponentially with the size of the number in both cases. Thus purely technological progress can only increase the computational speed by a fixed multiplicative factor, which does not help to change the exponential dependence between the size of the input and the execution time. Such a change requires inventing new, better algorithms. Although quantum computation requires new quantum technology, its real power lies in new quantum algorithms which exploit quantum superpositions that can contain an exponential number of different terms. Quantum computers can be programmed in a qualitatively new way. For example, a quantum program can incorporate instructions such as "... and now take a superposition of all numbers from the previous operations...''; this instruction is meaningless for any classical data processing device but makes lots of sense to a quantum computer. As a result we can construct new algorithms for solving problems, some of which can turn difficult mathematical problems, such as factorisation, into easy ones!

Brief history

The idea of a computational device based on quantum mechanics was first explored in the 1970's and early 1980's by physicists and computer scientists such as Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and the late Richard P. Feynman of the California Institute of Technology (Caltech). The idea emerged when scientists were pondering the fundamental limits of computation. They understood that if technology continued to abide by Moore's Law, then the continually shrinking size of circuitry packed onto silicon chips would eventually reach a point where individual elements would be no larger than a few atoms. Here a problem arose because at the atomic scale the physical laws that govern the behavior and properties of the circuit are inherently quantum mechanical in nature, not classical. This then raised the question of whether a new kind of computer could be devised based on the principles of quantum physics.
Feynman was among the first to attempt to provide an answer to this question by producing an abstract model in 1982 that showed how a quantum system could be used to do computations. He also explained how such a machine would be able to act as a simulator for quantum physics. In other words, a physicist would have the ability to carry out experiments in quantum physics inside a quantum mechanical computer.
Later, in 1985, Deutsch realized that Feynman's assertion could eventually lead to a
general purpose quantum computer and published a crucial theoretical paper showing that
any physical process, in principle, could be modeled perfectly by a quantum computer.
Thus, a quantum computer would have capabilities far beyond those of any traditional
classical computer. After Deutsch published this paper, the search began to find
interesting applications for such a machine.
Unfortunately, all that could be found were a few rather contrived mathematical problems, until Shor circulated in 1994 a preprint of a paper in which he set out a method for using quantum computers to crack an important problem in number theory, namely factorization. He showed how an ensemble of mathematical operations, designed specifically for a quantum computer, could be organized to enable such a machine to factor huge numbers extremely rapidly, much faster than is possible on conventional computers. With this breakthrough, quantum computing was transformed from a mere academic curiosity into a subject of national and worldwide interest.

Obstacles and Research

The field of quantum information processing has made numerous promising advancements since its conception, including the building of two- and three-qubit quantum computers capable of some simple arithmetic and data sorting. However, a few potentially large obstacles still remain that prevent us from "just building one," or more precisely, building a quantum computer that can rival today's modern digital computer. Among these difficulties, error correction, decoherence, and hardware architecture are probably the most formidable. Error correction is rather self-explanatory, but what errors need correction? The answer is primarily those errors that arise as a direct result of decoherence: the tendency of a quantum computer to decay from a given quantum state into an incoherent state as it interacts, or entangles, with the state of the environment. These interactions between the environment and the qubits are unavoidable, and induce the breakdown of information stored in the quantum computer, and thus errors in computation. Before any quantum computer will be capable of solving hard problems, research must devise a way to keep decoherence and other potential sources of error at an acceptably low level. Thanks to the theory (and now reality) of quantum error correction, first proposed in 1995 and continually developed since, small scale quantum
computers have been built and the prospects of large quantum computers are looking up.
Probably the most important idea in this field is the application of error correction in
phase coherence as a means to extract information and reduce error in a quantum system
without actually measuring that system. In 1998, researchers at Los Alamos National
Laboratory and MIT led by Raymond Laflamme managed to spread a single bit of
quantum information (qubit) across three nuclear spins in each molecule of a liquid
solution of alanine or trichloroethylene molecules. They accomplished this using the
techniques of nuclear magnetic resonance (NMR). This experiment is significant because
spreading out the information actually made it harder to corrupt. Quantum mechanics
tells us that directly measuring the state of a qubit invariably destroys the superposition of
states in which it exists, forcing it to become either a 0 or 1. The technique of spreading
out the information allows researchers to utilize the property of entanglement to study the
interactions between states as an indirect method for analyzing the quantum information.
Rather than a direct measurement, the group compared the spins to see if any new
differences arose between them without learning the information itself. This technique
gave them the ability to detect and fix errors in a qubit's phase coherence, and thus
maintain a higher level of coherence in the quantum system. This milestone has provided
an argument against skeptics, and hope for believers. Currently, research in quantum error
correction continues with groups at Caltech (Preskill, Kimble), Microsoft, Los Alamos,
and elsewhere.
At this point, only a few of the benefits of quantum computation and quantum computers are readily obvious, but before more possibilities are uncovered theory must be put to the test. In order to do this, devices capable of quantum computation must be constructed. Quantum computing hardware is, however, still in its infancy. As a result of several significant experiments, nuclear magnetic resonance (NMR) has become the most popular component in quantum hardware architecture. Only within the past year, a group from Los Alamos National Laboratory and MIT constructed the first experimental demonstrations of a quantum computer using nuclear magnetic resonance (NMR) technology. Currently, research is underway to discover methods for battling the destructive effects of decoherence, to develop optimal hardware architecture for designing and building a quantum computer, and to further uncover quantum algorithms to utilize the immense computing power available in these devices. Naturally this pursuit is intimately related to quantum error correction codes and quantum algorithms, so a number of groups are doing simultaneous research in a number of these fields. To date, designs have involved ion traps, cavity quantum electrodynamics (QED), and NMR. Though these devices have had mild success in performing interesting experiments, the technologies each have serious limitations. Ion trap computers are limited in speed by the vibration frequency of the modes in the trap. NMR devices have an exponential attenuation of signal to noise as the number of qubits in a system increases. Cavity QED is slightly more promising; however, it still has only been demonstrated with a few qubits. Seth Lloyd of MIT is currently a prominent researcher in quantum hardware. The future of quantum computer hardware architecture is likely to be very different from what we know today; however, the current research has helped to provide insight as to what obstacles the future will hold for these devices.
Future Outlook

At present, quantum computers and quantum information technology remain in their pioneering stage. At this very moment obstacles are being surmounted that will provide the knowledge needed to thrust quantum computers up to their rightful position as the fastest computational machines in existence. Error correction has made promising progress to date, nearing a point now where we may have the tools required to build a computer robust enough to adequately withstand the effects of decoherence. Quantum hardware, on the other hand, remains an emerging field, but the work done thus far suggests that it will only be a matter of time before we have devices large enough to test Shor's and other quantum algorithms. Then quantum computers will emerge as the superior computational devices at the very least, and perhaps one day make today's modern computer obsolete. Quantum computation has its origins in highly specialized fields of theoretical physics, but its future undoubtedly lies in the profound effect it will have on the lives of all mankind.


A quantum computer is a device that can arbitrarily manipulate the quantum state of a part of itself. The field of quantum computation is still largely a body of theoretical promises: impressively fast algorithms which could be executed on quantum computers. However, since the first significant algorithm was proposed in 1994, experimental progress has been rapid, with several schemes yielding two- and three-quantum-bit manipulations.
Quantum computers were first discussed by Benioff in the context of simulating classical Turing machines (very elementary conventional computers) with quantum unitary evolution. Feynman considered the converse question of how well classical computers can simulate quantum systems. He concluded that classical computers invariably suffer from an exponential slow-down in trying to simulate quantum systems, but that quantum systems could, in principle, simulate each other without this slowdown. It was Deutsch [8], however, who first suggested that quantum superposition might allow quantum evolution to perform many classical computations in parallel.
A schematic model of a quantum computer is described, as well as some of the subtleties in its programming. Shor's algorithm [1, 10] for efficiently factoring numbers on a quantum computer is presented in two parts: the quantum procedure within the algorithm and the classical algorithm that calls the quantum procedure. The mathematical structure within the factoring problem is discussed, making it clear what contribution the quantum computer makes to the calculation. The complexity of Shor's algorithm is compared to that of factoring on conventional machines, and its relevance to public key cryptography is noted. In addition, we discuss the experimental status of the field and also quantum error correction, which may in the long run help solve some of the most pressing difficulties. We conclude with an outlook on the feasibility and prospects for quantum computation in the coming years.
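As a hedged illustration of this two-part structure, the sketch below shows only the classical half of Shor's algorithm: once the period r of a^x mod N is known, factors of N follow from greatest common divisors. The `period` function here is a brute-force stand-in for the quantum order-finding procedure, which is the part a quantum computer speeds up:

```python
# Sketch: the classical wrapper of Shor's algorithm. The quantum computer's
# only job is to find the period r of a^x mod N quickly; everything else is
# ordinary number theory. Here period() is a slow classical stand-in.
from math import gcd

def period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N). Quantum order-finding does this fast."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(a, N):
    r = period(a, N)
    if r % 2 == 1:
        return None                    # unlucky choice of a: retry with another
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(7, 15))           # 7 has period 4 mod 15 -> (3, 5)
```

For N = 15 and a = 7 the period is 4, so y = 7^2 mod 15 = 4 and gcd(3, 15), gcd(5, 15) recover the factors 3 and 5.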


A quantum computer is a machine that performs calculations based on the laws of quantum mechanics, which is the behavior of particles at the sub-atomic level.
The technology of quantum computers is also very different. For its operation, a quantum computer uses quantum bits (qubits). The laws of quantum mechanics are completely different from the laws of classical physics: a qubit can exist not only in the states corresponding to the logical values 0 or 1, as in the case of a classical bit, but also in a superposition state.
A qubit is a bit of information that can be both zero and one simultaneously (a superposition state). Thus, a computer working on qubits rather than standard bits can make calculations using both values simultaneously. A qubyte is made up of eight qubits and can hold all values from zero to 255 simultaneously. Multi-qubyte systems have a power beyond anything possible with classical computers. (Quantum Computers & Moore's Law, p.1).
Forty qubits could have the same power as modern supercomputers. According to Chuang, a supercomputer needs about a month to find a phone number in a database consisting of the world's phone books, whereas a quantum computer is able to solve this task in 27 minutes.
The major difference between quantum and classical computers is memory:
The memory of a classical computer is a string of 0s and 1s, and it can perform calculations on only one set of numbers at a time. The memory of a quantum computer is a quantum state that can be a superposition of different numbers. A quantum computer can perform an arbitrary reversible classical computation on all the numbers simultaneously. Performing a computation on many different numbers at the same time, and then interfering all the results to get a single answer, makes a quantum computer much more powerful than a classical one.
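A minimal sketch of this memory difference: classically describing an n-qubit register already takes 2^n amplitudes. The code below uses the qubyte (n = 8) from the text and builds a uniform superposition over all 256 values:

```python
# A qubit register simulated classically: an n-qubit state is a list of
# 2^n amplitudes. For one "qubyte" (n = 8) there are 256 basis states
# |0>, |1>, ..., |255>, and a uniform superposition holds all of them
# at once with equal weight.
import math

n = 8
dim = 2 ** n                      # 256 classical values
amp = 1 / math.sqrt(dim)          # equal amplitude for every value
state = [amp] * dim

probs = [a * a for a in state]    # measurement probabilities |amplitude|^2
print(dim)                        # 256 amplitudes to track classically
print(sum(probs))                 # 1.0: the probabilities sum to one
```

The exponential cost of this classical simulation (doubling with every added qubit) is exactly the resource a quantum memory provides natively.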

3.Quantum Mechanics

Quantum mechanics is generally about the novel behavior of very small things. At this scale matter becomes quantized, meaning that it can be subdivided no more. Quantum mechanics has never been wrong: it explains why the stars shine, how matter is structured, the periodic table, and countless other phenomena. One day scientists hope to use quantum mechanics to explain everything, but at present the theory remains incomplete, as it has not been successfully combined with classical theories of gravity. Some strange effects happen at the quantum scale. The following effects are important for quantum computing:

1. Superposition and interference
2. Uncertainty
3. Entanglement.

This chapter is broken into two parts. In the first part we'll look briefly at the history of quantum mechanics. Then, in the second part, we will examine some important concepts (like the ones above) of quantum mechanics and how they relate to quantum computing.
The main references used for this chapter are Introducing Quantum Theory by J.P. McEvoy and Oscar Zarate, and Quantum Physics: Illusion or Reality? by Alastair Rae. Both are very accessible introductory books.

3.1 History
3.1.1 Classical Physics

Classical physics roughly means pre-20th-century physics, or pre-quantum physics. Two of the most important classical theories are electromagnetism, due to James Clerk Maxwell, 1831 - 1879, and the mechanics of Isaac Newton, 1642 - 1727, whose name dominates the period due to the large body of work he produced that is still relevant today. Prior to this, Nicolas Copernicus, 1473 - 1543 and Galileo Galilei, 1564 - 1642 (figure 4.2) created the modern scientific method (we might also include Leonardo da Vinci, 1452 - 1519) by testing theories with observation and experimentation. Classical physics has a number of fundamental assumptions; they are:
1 The universe is a giant machine.
2 Cause and effect, i.e. all non-uniform motion and action is caused by something (uniform motion doesn't need a cause; this is Galileo's principle of inertia).
3 Determinism: if a state of motion is known now then, because the universe is predictable, we can say exactly what it has been and what it will be at any time.
4 Light is a wave that is completely described by Maxwell's wave equations. These are four equations that describe all electric and magnetic phenomena.
5 Particles and waves exist, but they are distinct.
6 We can measure any system to an arbitrary accuracy and correct for any errors caused by the measuring tool.

3.1.2 Important Concepts

In the lead up to quantum mechanics there are some important concepts from Classical physics that we should look at. These are the concepts of atoms, thermodynamics, and statistical analysis.
Atoms were defined as indivisible parts of matter, first postulated by Democritus, 460 - 370 BC. The idea was dismissed as a waste of time by Aristotle, 384 - 322 BC, but two thousand years later the idea started gaining acceptance. The first major breakthrough came in 1806, when John Dalton predicted properties of elements and compounds using the atomic concept.
Thermodynamics is the theory of heat energy. Heat is understood to be disordered energy; e.g. the heat energy in a gas is the kinetic energies of all the molecules. The temperature is a measure of how fast the molecules are traveling (if a gas or liquid; if solid, how fast they are vibrating about their fixed positions in the solid).
Thermodynamics is made up of two laws:

i. The first law of thermodynamics

In a closed system, whenever a certain amount of energy disappears in one place, an equivalent amount must appear elsewhere in the system in some form. This law of conservation of energy was originally stated by Hermann von Helmholtz, 1821 - 1894.

ii. The second law of thermodynamics

Figure 4.1: Maxwell distribution
The total entropy of a system increases when heat flows from a hot body to a cold one. Heat always flows from hot to cold [Rae, A. 1996, p.18].
Rudolf Clausius, 1822 - 1888 called the previous law the first of the two laws. He introduced a new concept, entropy, defined in terms of heat transfer. This implies that an isolated system's entropy always increases until the system reaches thermal equilibrium (i.e. all parts of the system are at the same temperature).

3.1.3 Statistical Mechanics

In 1859, J.C. Maxwell, using the atomic model, came up with a way of statistically averaging the velocities of randomly chosen molecules of a gas in a closed system like a box (because it was impossible to track each one). The distribution is shown in figure 4.1; remember that hotter molecules tend to go faster. The graph's n axis denotes the number of molecules (this particular graph shows CO2) and the v axis denotes velocity. The letters a, b, and c represent molecules at 100 K, 400 K, and 1600 K respectively.
In the 1870s Ludwig Boltzmann, 1844 - 1906 generalized the theory to any collection of entities that interact randomly, are independent, and are free to move. He rewrote the second law of thermodynamics to say: as the energy in a system degrades, the system's atoms become more disordered and there is an increase in entropy. To measure this disorder we consider the number of configurations or states that the collection of atoms can be in.
If this number is W then the entropy S is defined as:
S = k logW
where k is Boltzmann's constant, k = 1.38 × 10^-23 J/K.
So the behavior of large things could now be predicted from the average statistical behavior of their smaller parts, which is important for quantum mechanics. There also remains the probability that a fluctuation can occur: a statistically improbable event that may seem nonsensical, but the theory must nonetheless allow for it. For example, if we have a box containing a gas, a fluctuation could be all particles of the gas randomly clumping together in one corner of the box.
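The formula S = k log W is easy to play with numerically. In the toy sketch below the microstate counts W are made-up illustrative values, not measurements; only the formula and the constant come from the text:

```python
# Sketch: Boltzmann's S = k log W for a toy system. W counts microstates;
# we compare a gas spread through the whole box with one clumped in a
# corner, showing why such a fluctuation means a (temporary) entropy drop.
import math

k = 1.38e-23              # Boltzmann's constant, J/K

def entropy(W):
    """Boltzmann entropy for W equally likely configurations."""
    return k * math.log(W)

W_spread = 10**6          # illustrative count: gas free to fill the box
W_corner = 10**2          # far fewer ways to be clumped in one corner
print(entropy(W_spread) > entropy(W_corner))   # True: spreading raises S
```

A single configuration (W = 1) gives S = 0, the minimum: disorder, and hence entropy, grows with the number of available configurations.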
3.1.4 Important Experiments
There are two major periods in the development of quantum theory, the first culminating in 1913 with the Niels Bohr, 1885 - 1962 model of the atom and ending in about 1924. This is called the old quantum theory. The new quantum theory began in 1925. The old quantum theory was developed in part to explain the results of three experiments which could not be explained by classical physics. They are:
1 Black body radiation and the ultraviolet catastrophe.
2 The Photoelectric effect.
3 Bright Line Spectra.
These experiments and their subsequent explanations are described in the next three sections.

3.1.5 Black Body Radiation

A black body absorbs all electromagnetic radiation (light) that falls on it and would appear black to an observer because it reflects no light. To determine the temperature of a black body we have to observe the radiation emitted from it.
Associates of Max Planck, 1858 - 1947, measured the distribution of radiation and energy over frequency in a cavity: a kind of oven with a little hole through which a small amount of heat (light, radiation) escapes for observation. Because the radiation is confined in the cavity, it settles down to an equilibrium distribution, much like the velocities of the molecules in a gas, and indeed they found the frequency distributions to be similar to Maxwell's velocity distributions. The color of the light emitted depends on the temperature; e.g. the element of your electric stove goes from red hot to white hot as the temperature increases. It didn't take long for physicists to apply a Maxwell-style statistical analysis to the electromagnetic wave energy present in the cavity. The difference is that classical physics saw waves as continuous, which means that more and more waves could be packed into a box as the wavelengths get smaller, i.e. as the frequency gets higher. This means that as the temperature was raised, the radiation should keep getting stronger and stronger indefinitely. This was called the ultraviolet catastrophe. If nature did indeed behave in this way, you would get singed sitting in front of a fire by all the ultraviolet light coming out of it. Fortunately this doesn't occur, so the catastrophe is not in nature but in classical physics, which predicted something that doesn't happen. The results of several experiments had given the correct frequency distributions, and it was Max Planck who found a formula that matched the results. He couldn't find a classical solution, so grudgingly he used Boltzmann's version of the second law of thermodynamics. Planck imagined that the waves emitted from the black body were produced by a finite number of tiny oscillators (a kind of precursor to modern atoms). Eventually he had to divide the energy into finite chunks of a certain size to fit his own radiation formula. This finally gives us the first important formula of quantum mechanics:
Figure 4.7: Albert Einstein and Johann Jacob Balmer
E = hf    (4.2)
where E is energy, f is frequency, and h is Planck's constant:
h = 6.626 × 10^-34 J·s    (4.3)
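A quick numeric check of Planck's relation E = hf; the frequency below (roughly that of green light) is an assumed illustrative value, not taken from the text:

```python
# Sketch: Planck's E = hf for a single photon. The frequency chosen
# (about that of green light) is an illustrative assumption.
h = 6.626e-34              # Planck's constant, J*s

def photon_energy(f_hz):
    """Energy in joules of one photon of frequency f_hz."""
    return h * f_hz

E = photon_energy(5.6e14)  # one green-light photon
print(E)                   # ~3.7e-19 J: a tiny "grain" of energy
```

The minuteness of each energy grain is why the quantization went unnoticed in everyday, large-scale physics.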

3.1.6 The Photoelectric Effect

If light is shone onto certain kinds of material (e.g. some metals or semiconductors), then electrons are released. When this effect was examined, it was found that the results of the experiments did not agree with classical electromagnetic theory, which predicted that the energy of the released electron should depend on the intensity of the incident light wave. However, it was found that the energy released depended not on intensity (an electron would come out no matter how low the intensity was) but on the frequency of the light:
E = hf
Albert Einstein, 1879 - 1955 (figure 4.7) showed that if we look at the light as a collection of particles carrying energy proportional to the frequency (as given by Planck's law E = hf), and if those particles can transfer energy to electrons in a target metal, then the experimental results can be explained. Put simply, a light particle hits the metal's surface and its energy is transferred to an electron, becoming kinetic energy; the electron is then ejected from the metal. With different kinds of metals it can be easier or harder for electrons to escape.
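Einstein's picture can be sketched numerically as KE = hf - W, where W is the metal's work function (the "easier or harder to escape" of the text). The work-function value below is an assumed figure of about 2.28 eV, roughly that of sodium, not taken from the text:

```python
# Sketch: photoelectric effect. An electron is ejected only if one photon
# carries more energy than the metal's work function W; below the threshold
# frequency no electron comes out, however intense the light.
h = 6.626e-34                  # Planck's constant, J*s
eV = 1.602e-19                 # joules per electron-volt
W = 2.28 * eV                  # assumed work function (~sodium)

def ejected_ke(f_hz):
    """Kinetic energy of the ejected electron, or 0 below threshold."""
    ke = h * f_hz - W
    return max(ke, 0.0)

print(ejected_ke(4.0e14) == 0.0)   # red light: frequency too low
print(ejected_ke(8.0e14) > 0.0)    # violet light: electron ejected
```

Note that the outcome depends only on frequency, never on intensity, which is exactly what classical wave theory failed to predict.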

3.1.7 Bright Line Spectra

When a solid is heated and emits white light, and that light is concentrated and broken up into its separate colors by a prism, we get a rainbow-like spectrum, called a continuous spectrum.
If we do the same thing with the light emitted by a hot gas, the spectrum consists of a number of bright lines in the colors of the rainbow, with dark regions in between. This is called an emission spectrum.
If a cold gas is placed between a hot solid emitting white light and the prism, we get the inverse of the above: dark lines on a continuous background. This is called an absorption spectrum.
The hot gas emits light at certain frequencies, and the absorption spectrum shows that the cold gas absorbs light at exactly the same frequencies.
These lines are different for each element, and they allow us to determine the composition of a gas even at astronomical distances, by observing its spectrum. In 1885 Johann Jacob Balmer, 1825 - 1898 derived a formula for the spectral lines of hydrogen:
f = R(1/nf^2 - 1/ni^2)
where R is the Rydberg constant, 3.29163 × 10^15 cycles/second, and nf and ni are whole numbers. The trouble was that no one knew how to explain the formula. The explanation came in 1913 with Niels Bohr's atomic model.
3.1.8 Proto Quantum Mechanics
During the last part of the 19th century it was discovered that a number of rays were actually particles. One of these particles was the electron, discovered by Joseph J. Thomson, 1856 - 1940. In a study of cathode ray tubes, Thomson showed that electrically charged particles (electrons) are emitted when a wire is heated. Thomson went on to help develop the first model of the atom, which had his (negatively charged) electrons contained within a positively charged sphere (figure 4.8).
Figure 4.8: Thomson's atomic model.
Figure 4.9: Rutherford's atomic model.
This first atomic model was called the plum pudding model. Then, in 1907, Ernest Rutherford, 1871 - 1937 (figure 4.9), developed a new model, motivated by firing alpha particles (helium ions) at gold foil and observing that, very occasionally, one would bounce back. This model had a tiny but massive nucleus surrounded by electrons (figure 4.9).
The new model was like a mini solar system with electrons orbiting the nucleus, but the atomic model was thought to still follow the rules of classical physics. However, according to classical electromagnetic theory an orbiting electron, subject to centripetal acceleration (the electron is attracted by the positively charged nucleus) would radiate energy and so rapidly spiral in towards the nucleus.
But this did not happen: atoms were stable, and all the atoms of an element emitted the same line spectrum. To explain this, Bohr assumed that the atom could exist in only certain stationary states - stationary because, even if the electron was orbiting in such a state (and, later, this was questioned by Heisenberg), it would not radiate, despite what electromagnetic theory said. However, if the electron jumped from a stationary state to one of lower energy, then the transition was accompanied by the emission of a photon; vice versa, there was absorption of light in going from a lower to a higher energy. In this scheme there was a lowest stationary state, called the ground state, below which the electron could not jump; so this restored stability to the atom.
The frequency of the light emitted in such a jump was given by Einstein's formula:
f = E/h    (4.5)
where E is the difference in the energies of the stationary states involved. These energies of the stationary states could be calculated from classical physics if one additional assumption was introduced: that the orbital angular momentum is an integer multiple of Planck's constant. Then the calculated frequencies were found to agree with those observed.
Figure 4.10: Bohr's first atomic model.
So Bohr developed a model based on stable orbital shells, with only a certain number of shells for each atom. Bohr quantized electron orbits in units of Planck's constant. He gave us the first of several quantum numbers which are useful in quantum computing: the shell number, n (see figure 4.10). Of particular interest are the ground state at n = 1 and the excited states at n > 1 of an atom. Bohr developed a formula for the radius of the electron orbits in a hydrogen atom:
r = n^2 h^2 ε0 / (π m q^2)
where r is the radius of the orbit, h is Planck's constant, ε0 is the permittivity of free space, and m and q are the mass and charge of the electron. In real terms the value of r is about 5.3 × 10^-11 meters (0.053 nanometers) for n = 1.
Bohr went on with this model to derive Balmer's formula for hydrogen from two postulates:

1. Quantum angular momentum:
L = nh/2π
2. A jump between orbitals emits or absorbs radiation according to:
hf = Ei - Ef    (4.8)
where Ei is the initial energy of the electron and Ef is the final energy of the electron. Although very close, this didn't quite match up to the spectral line data. Arnold Sommerfeld, 1868 - 1951 then proposed a new model with elliptical orbits, and a new quantum number, k, was added to deal with the shape of the orbit. Bohr then introduced the quantum number m to explain the Zeeman effect, which produced extra spectral lines when a magnetic field was applied to the atom (m reflects the direction the field is pointing).
It was soon discovered that m could not account for all the spectral lines produced by magnetic fields. Wolfgang Pauli, 1900 - 1958 hypothesized another quantum number to account for this. It was suggested that the electron was literally spinning; Pauli doubted this picture, and it turns out he was right, but the name stuck, so we still use spin up and spin down to describe this property of an electron. Pauli then described why electrons fill the higher energy levels rather than all occupying the ground state, which we now call the Pauli exclusion principle. Niels Bohr went on to explain the periodic table in terms of orbital shells, with the outermost shell being the valence shell that allows binding and the formation of molecules.

3.1.9 The New Theory of Quantum Mechanics

In 1909, a few years after explaining the photoelectric effect, Einstein used his photon hypothesis to obtain a simple derivation of Planck's black body distribution. Planck himself had not gone as far as Einstein: he had indeed assumed that the transfer of energy between matter (the oscillators in the walls of the cavity) and radiation was quantized (i.e. the energy transferred to or from an oscillator came in grains of h times the frequency of the oscillator). But Planck had assumed that the energy in the electromagnetic field, in the cavity, was continuously distributed, as in classical theory. By contrast, it was Einstein's hypothesis that the energy in the field itself was quantized: that for certain purposes, the field behaved like an ideal gas, not of molecules, but of photons, each with energy h times frequency, with the number of photons being proportional to the intensity. The clue to this was Einstein's observation that the high-frequency part of Planck's distribution for black body radiation (described by Wien's law) could be derived by assuming a gas of photons and applying statistical mechanics to it. This was in contrast to the low-frequency part (described by the Rayleigh-Jeans law), which could be successfully obtained using classical electromagnetic theory, i.e. assuming waves. So you had both particles and waves playing a part. Furthermore, Einstein looked at fluctuations of the energy about its average value, and observed that the formula obtained had two parts: one which you would get if light were made up of waves, and the other if it were made up of particles. Hence we have wave-particle duality.
In 1924, Louis de Broglie, 1892 - 1987 extended the wave-particle duality of light to all matter. He stated: the motion of a particle of any sort is associated with the propagation of a wave. This is the idea of a pilot wave which guides a free particle's motion. De Broglie then suggested that the idea of electron waves be extended to bound particles in atoms, meaning electrons move around the nucleus guided by pilot waves. So, again a duality: de Broglie waves and Bohr's particles. De Broglie was able to show that Bohr's orbital radii could be obtained by fitting a whole number of waves around the nucleus. This gave an explanation of Bohr's angular momentum quantum condition (see above).
The new quantum theory was developed between June 1925 and June 1926. Werner Heisenberg, 1901 - 1976, using a totally different and simpler atomic model (one that did not use orbits), worked out a code to connect quantum numbers and spectra. He also discovered that quantum mechanics does not follow the commutative law of multiplication, i.e. pq ≠ qp. When Max Born, 1882 - 1970 saw this, he suggested that Heisenberg use matrices. This became matrix mechanics, and eventually all the spectral lines and quantum numbers were deduced for hydrogen. The first complete version of quantum mechanics was born. It's interesting to note that it was not observation or visualization that was used to deduce the theory, but pure mathematics.
Later we will see matrices cropping up in quantum computing.
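That non-commutativity is easy to demonstrate directly. The sketch below multiplies two 2×2 matrices both ways round; the choice of the Pauli X and Z matrices (which reappear as quantum gates) is an illustrative assumption:

```python
# Sketch: matrix multiplication is not commutative (pq != qp), the property
# Heisenberg stumbled on. Demonstrated with the 2x2 Pauli X and Z matrices.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]     # Pauli X (also the quantum NOT gate)
Z = [[1, 0], [0, -1]]    # Pauli Z (a phase flip)

print(matmul(X, Z))      # [[0, -1], [1, 0]]
print(matmul(Z, X))      # [[0, 1], [-1, 0]]  -- different: XZ != ZX
```

The order of operations matters, which in physical terms is the statement that measuring p then q is not the same as measuring q then p.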
At around the same time, Erwin Schrödinger, 1887 - 1961 built on de Broglie's work on matter waves. He developed a wave equation (for which Ψ is the solution) for the motion of bound electrons, as in the hydrogen atom. It turns out that the results derived from this equation agree with the Bohr model. He then showed that Heisenberg's matrix mechanics and his wave mechanics were equivalent. Max Born proposed that Ψ, the solution to Schrödinger's equation, can be interpreted as a probability amplitude, not a real, physical value. The probability amplitude is a function of the electron's position (x, y, z) and, when squared, gives the probability of finding the electron in a unit volume at the point (x, y, z). This gives us a new, probabilistic atomic model, in which there is a high probability that the electron will be found in a particular orbital shell. A representation of the ground state of hydrogen is shown in figure 4.16; at the places where the density of points is high, there is a high probability of finding the particle. The linear nature of the wave equation means that if Ψ1 and Ψ2 are two solutions, then so is Ψ1 + Ψ2, a superposition state (we'll look at superposition soon). This probabilistic interpretation of quantum mechanics implies the system is in both states until measured.
Now back to Niels Bohr. In 1927 Niels Bohr described the concept of complementarity: whether a system behaves like a particle or a wave depends on what type of measurement you use to look at it. He then put together various aspects of the work by Heisenberg, Schrödinger, and Born, and concluded that the properties of a system (such as position and momentum) are undefined, having only potential values with certain probabilities of being measured. This became known as the Copenhagen interpretation of quantum mechanics. Einstein did not like the Copenhagen interpretation and, for a good deal of time, kept trying to refute it by thought experiment, but Bohr always had an answer. In 1935, however, Einstein raised an issue that was later to have profound implications for quantum computation and lead to the phenomenon we now call entanglement, a concept we'll look at in a few pages.
Figure 4.16: Bohr's atomic model

3.2 Important Principles for Quantum Computing

The main parts of quantum mechanics that are important for quantum computing are:
1. Linear algebra.
2. Superposition.
3. Dirac notation.
4. Representing information.
5. Uncertainty.
6. Entanglement.
7. The 4 postulates of quantum mechanics.

3.2.1 Linear Algebra

Quantum mechanics leans heavily on linear algebra. Some of the concepts of quantum mechanics come from the mathematical formalism rather than from thought experiments, which is what can give rise to counterintuitive conclusions.


3.2.2 Superposition

Superposition means a system can be in two or more of its states simultaneously. For example, a single particle can be traveling along two different paths at once. This implies that the particle has wave-like properties, which means that the waves from the different paths can interfere with each other. Interference can cause the particle to act in ways that are impossible to explain without these wave-like properties.
The ability for the particle to be in a superposition is where we get the parallel nature of quantum computing: If each of the states corresponds to a different value then, if we have a superposition of such states and act on the system, we effectively act on all the states simultaneously.
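This parallel action on all states at once can be illustrated with a toy state-vector simulation. This is a plain-Python sketch, not real hardware, and the increment operation is an invented example of a reversible operation:

```python
import math

# A 2-qubit register is described by 4 amplitudes, one per basis state.
# Start in an equal superposition of |00>, |01>, |10>, |11>.
state = [1 / math.sqrt(4)] * 4

def increment(state):
    # A reversible operation: permute basis states |x> -> |x+1 mod 4>.
    n = len(state)
    return [state[(i - 1) % n] for i in range(n)]

# One application of the operation moves ALL four superposed values at once.
new_state = increment(state)
print(new_state)   # [0.5, 0.5, 0.5, 0.5]
```

The point of the sketch: the operation is applied once, yet every basis state in the superposition is transformed, which is where the "acting on all states simultaneously" intuition comes from.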

3.2.3 Dirac Notation

As described in the previous chapter, Dirac notation is used for quantum computing. We can represent the states of a quantum system as kets. For example, an electron's spin can be represented as |0> = spin up and |1> = spin down. The electron can be thought of as a little magnet, the effect of a charged particle spinning on its axis. When we pass a horizontally traveling electron through an inhomogeneous magnetic field in, say, the vertical direction, the electron either goes up or down. If we then repeat this with the up electron it goes up; with the down electron it goes down. We say the up electron after the first measurement is in the state |0> and the down electron is in the state |1>. But, if we take the up electron and pass it through a horizontal field, it comes out on one side 50% of the time and on the other side 50% of the time. If we represent these two sideways states as |+> and |-> we can say that the up spin electron was in a superposition of the two states |+> and |->:

|0> = (1/√2)|+> + (1/√2)|->

such that, when we make a measurement with the field horizontal, we project the electron into one or the other of the two states, with equal probabilities of ½ given by the square of the amplitudes.
3.2.4 Representing Information

Quantum mechanical information can be physically realized in many ways. To have something analogous to a classical bit we need a quantum mechanical system with only two states when measured. We have just seen two examples: electron spin and photon direction. Two more methods for representing binary information in a way that is capable of exhibiting quantum effects (e.g. entanglement and superposition) are the polarization of photons and nuclear spins.

3.2.5 Uncertainty

The quantum world is irreducibly small, so it's impossible to measure a quantum system without having an effect on that system, as our measurement device is also quantum mechanical. As a result there is no way of accurately predicting all of the properties of a particle. There is a trade-off: the properties occur in complementary pairs (like position and momentum, or vertical spin and horizontal spin), and if we know one property with a high degree of certainty then we must know almost nothing about the other property. That unknown property's behavior is essentially random. An example of this is a particle's position and velocity: if we know exactly where it is then we know nothing about how fast it is going. It has been postulated (and is currently accepted) that particles in fact DO NOT have defined values for unknown properties until they are measured. This is like saying that something does not exist until it is looked at.

3.2.6 Entanglement

In 1935 Einstein (along with colleagues Podolsky and Rosen) demonstrated a paradox (named EPR after them) in an attempt to refute the undefined nature of quantum systems. The results of their thought experiment seemed to show that quantum systems were defined, having local state BEFORE measurement. Although the original hypothesis was later proven wrong (i.e. it was proven that quantum systems do not have local state before measurement), the effect they demonstrated was still important, and later became known as entanglement.
Entanglement is the ability of pairs of particles to interact over any distance instantaneously. Particles don't exactly communicate, but there is a statistical correlation between the results of measurements on each particle that is hard to understand using classical physics. To become entangled, two particles are allowed to interact; they then separate and, on measuring, say, the velocity of one of them (regardless of the distance between them), we can be sure of the value of the velocity of the other one (before it is measured). The reason we say that they communicate instantaneously is because they store no local state [Rae, A. 1996] and only have well defined state once they are measured. Because of this limitation, entangled particles can't be used to transmit classical messages faster than the speed of light, as we only know the states upon measurement. Entanglement has applications in a wide variety of quantum algorithms and machinery, some of which we'll look at later.
As stated before, it has been proven that entangled particles have no local state.

3.2.7 The Four Postulates of Quantum Mechanics

The theory of quantum mechanics has four main postulates. These are introduced here as simple sentences. Later, in section 5.4, they will be explained in more detail in terms of quantum computing.
1. In a closed quantum system we need a way of describing the state of all the particles within it. The first postulate gives us a way to do this by using a single state vector to represent the entire system. Say the state is a vector in C^n; this would be C^2 for a spin system.
2. The evolution of a closed system is a unitary transform. Say that, while the system is evolving under its own steam (no measurement), the state |ψ'> at some stage is related to the state |ψ> at some previous stage (or time) by a unitary transform |ψ'> = U|ψ>. This means that we can totally describe the behavior of a system by using unitary matrices.
3. The third postulate relates to making measurements on a closed quantum system, and the effect those measurements have on that system.
4. Postulate four relates to combining or separating different closed quantum systems using tensor products.
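Postulates 1, 2, and 4 can be sketched in a few lines of plain Python. This is a toy illustration under the assumptions that one qubit lives in C^2, that the example unitary is the standard Hadamard matrix, and that combining systems means a Kronecker product:

```python
import math

# Postulate 1: a qubit's state is a unit vector in C^2.
zero = [1 + 0j, 0 + 0j]   # |0>
one  = [0 + 0j, 1 + 0j]   # |1>

def norm(state):
    return math.sqrt(sum(abs(a) ** 2 for a in state))

# Postulate 2: closed-system evolution is |psi'> = U|psi> for unitary U.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(U, state):
    # Plain matrix-vector multiplication.
    return [sum(U[i][j] * state[j] for j in range(len(state)))
            for i in range(len(U))]

psi = apply(H, zero)
assert abs(norm(psi) - 1) < 1e-9   # unitaries preserve the norm

# Postulate 4: separate systems combine via the tensor (Kronecker) product.
def tensor(a, b):
    return [x * y for x in a for y in b]

two_qubits = tensor(zero, one)   # |01> in C^4: [0, 1, 0, 0]
print(two_qubits)
```

Note how the tensor product of two 2-dimensional states is 4-dimensional; this is the same 2^n growth that makes quantum registers hard to simulate classically.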

4. Quantum Computing
4.1 Elements of Quantum Computing
4.1.1 Introduction

Generally we'll think of a quantum computer as a classical computer with a quantum circuit attached to it, with some kind of interface between conventional and quantum logic. Since there are only a few things a quantum computer does better than a classical computer, it makes sense to do the bulk of the processing on the classical machine.

4.1.2 History

In 1982 Richard Feynman theorized that classical computation could be dramatically improved by quantum effects. Building on this, David Deutsch developed the basis for quantum computing between 1984 and 1985. The next major breakthrough came in 1994 when Peter Shor described a method to factor large numbers in quantum poly-time (which breaks RSA encryption). This became known as Shor's algorithm. At around the same time the quantum complexity classes were developed and the quantum Turing machine was described.
Then in 1996 Lov Grover developed a fast database search algorithm (known as Grover's algorithm). The first prototypes of quantum computers were also built in 1996. In 1997 quantum error correction techniques were developed at Bell Labs and IBM. Physical implementations of quantum computers improved with a three-qubit machine in 1999 and a seven-qubit machine in 2000.

4.1.3 Bits and Qubits

This section is about the nuts and bolts of quantum computing. It describes qubits, gates, and circuits.
Quantum computers perform operations on qubits, which are analogous to conventional bits (see below) but have an additional property in that they can be in a superposition. A quantum register with 3 qubits can store 8 numbers in superposition simultaneously, and a 250-qubit register holds a number of superposed values (2^250, a 76-digit number) approaching the estimated number of atoms in the universe!
The amount of information stored during the computational phase is essentially infinite - it's just that we can't get at it. The inaccessibility of the information is related to quantum measurement: when we attempt to read out a superposition state holding many values, the state collapses and we get only one value (the rest are lost). This is tantalizing but, in some cases, can be made to work to our computational advantage.
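The register-size arithmetic behind this claim is easy to check: an n-qubit register needs 2^n complex amplitudes to describe.

```python
# Amplitudes needed to describe an n-qubit register grow as 2**n.
def amplitudes(n):
    return 2 ** n

print(amplitudes(3))              # 8: a 3-qubit register superposes 8 numbers
print(len(str(amplitudes(250))))  # 76: 2**250 is a 76-digit number
```

This exponential growth is exactly why classical simulation of even modest quantum registers becomes infeasible.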

4.1.4 Representation of Data - Qubits

A bit of data can be represented by a single atom that is in one of two states, denoted by |0> and |1>. A single bit of this form is known as a qubit.
A physical implementation of a qubit could use the two energy levels of an atom, with an excited state representing |1> and a ground state representing |0>.

4.1.5 Representation of Data - Superposition

5. Single Qubits:

Classical computers use two discrete states (e.g. the charge states of a capacitor) to represent a unit of information; this unit is called a binary digit (or bit for short). A bit has one of the following two values:
0 and 1.
There is no intermediate state between them, i.e. the value of the bit cannot be in a superposition.
Quantum bits, or qubits, can on the other hand be in a state "between" 0 and 1, but only during the computational phase of a quantum operation. When measured, a qubit can become either |0> or |1>, i.e. we read out 0 or 1. This is the same as saying a spin-½ particle can be in a superposition state but, when measured, shows only one value.
The ket notation |0>, |1> is part of the Dirac notation (see chapters 3 and 4). In terms of the above it essentially means the same thing as 0 and 1, just like a classical bit. Generally, a qubit's state during the computational phase is represented by a linear combination of states, otherwise called a superposition state:

|ψ> = α|0> + β|1>
Here α and β are the probability amplitudes. They can be used to calculate the probabilities of the system jumping into |0> or |1> following a measurement or readout operation. There may be, say, a 25% chance a 0 is measured and a 75% chance a 1 is measured. The percentages must add to 100%. In terms of their representation, qubits must satisfy:

|α|² + |β|² = 1 (5.1)

This is the same as saying the probabilities add to 100%.
Once the qubit is measured it will remain in that state if the same measurement is repeated, provided the system remains closed between measurements. The probability that the qubit's state, when in a superposition, will collapse to |0> is |α|² and to |1> is |β|². |0> and |1> are actually vectors; they are called the computational basis states, and they form an orthonormal basis for the vector space C².
The other type of phase is called global phase. Two states can differ by a global phase factor and still be considered the same, as the global phase factor is not observable. One reason for this is that the probabilities for the outcomes are unaffected if α and β are each multiplied by the same complex number of magnitude 1. Likewise, the relative phase (which figures in interference effects) is unaffected if α and β are multiplied by a common phase factor. What this means is that if we have a state on n qubits we can put a complex factor in front of the entire state to make it more readable.
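The amplitude rules above (equation 5.1, the collapse probabilities, and repeatability of measurement) can be checked with a short plain-Python sketch. The 25%/75% amplitudes are the example values from the text:

```python
import random

# alpha and beta must satisfy |alpha|**2 + |beta|**2 = 1 (equation 5.1).
alpha, beta = 0.5, 0.75 ** 0.5   # 25% chance of 0, 75% chance of 1
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-9

def measure(alpha, beta):
    # Collapse: return 0 with probability |alpha|**2, else 1.
    return 0 if random.random() < abs(alpha) ** 2 else 1

outcome = measure(alpha, beta)

# After collapse the state is |0> (alpha=1) or |1> (beta=1); repeating the
# same measurement on the collapsed state always gives the same answer.
collapsed = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
assert measure(*collapsed) == outcome
```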

5.1 Operations on qubits - reversible logic:

Due to the nature of quantum physics, the destruction of information in a gate causes heat to be dissipated, which can destroy the superposition of qubits.

6. Quantum gates

Quantum gates are similar to classical gates, but do not have a degenerate output, i.e. their original input state can be derived from their output state uniquely. They must be reversible.
This means that a deterministic computation can be performed on a quantum computer only if it is reversible. Luckily, it has been shown that any deterministic computation can be made reversible (Charles Bennett, 1973).

6.1 Quantum gates - Hadamard:

The simplest gate involves one qubit and is called a Hadamard gate (sometimes loosely called a square-root of NOT gate). It is used to put qubits into superposition.
Note: two Hadamard gates used in succession cancel each other out (H² = I), returning the qubit to its original state; the Hadamard gate is its own inverse.
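Both properties are easy to check numerically in plain Python, using the standard 2×2 Hadamard matrix:

```python
import math

# The standard Hadamard matrix.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(U, state):
    return [sum(U[i][j] * state[j] for j in range(2)) for i in range(2)]

zero = [1.0, 0.0]                  # |0>
plus = apply(H, zero)              # equal superposition of |0> and |1>
back = apply(H, plus)              # applying H again undoes it: H(H|0>) = |0>

print([round(x, 4) for x in plus])   # [0.7071, 0.7071]
print([round(x, 9) for x in back])   # [1.0, 0.0]
```

The squared amplitudes of `plus` are ½ and ½, matching the 50/50 measurement statistics described earlier.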

6.1.1 Quantum Gates - Controlled NOT:

A gate which operates on two qubits is called a Controlled-NOT (CN) gate: if the bit on the control line is 1, invert the bit on the target line.

Note: The CN gate has a similar behavior to the XOR gate with some extra information to make it reversible.
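That XOR-like, reversible behavior can be tabulated on the classical basis states (a plain-Python sketch; real qubits would of course also carry superpositions):

```python
# CNOT on basis states: (control, target) -> (control, control XOR target).
def cnot(control, target):
    return control, control ^ target

table = {(c, t): cnot(c, t) for c in (0, 1) for t in (0, 1)}
print(table)
# {(0, 0): (0, 0), (0, 1): (0, 1), (1, 0): (1, 1), (1, 1): (1, 0)}

# Reversible: applying CNOT twice restores the input, and the control line
# is the "extra information" that makes the XOR invertible.
for c in (0, 1):
    for t in (0, 1):
        assert cnot(*cnot(c, t)) == (c, t)
```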

Example Operation - Multiplication By 2

We can build a reversible logic circuit to calculate multiplication by 2 using CN gates arranged in the following manner:

6.1.2 Quantum Gates - Controlled Controlled NOT (CCN):

A gate which operates on three qubits is called a Controlled Controlled NOT (CCN) Gate. If the bit on both of the control lines is 1, then the target bit is inverted.

6.2 A Universal Quantum Computer:

The CCN gate has been shown to be a universal reversible logic gate as it can be used as a NAND gate.

When our target input is 1, our target output is the result of a NAND of B and C.
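This NAND behavior can be verified exhaustively on the classical basis states (plain-Python sketch of the CCN, or Toffoli, gate):

```python
# CCN (Toffoli) on basis states: target flips only when both controls are 1.
def ccn(a, b, target):
    return a, b, target ^ (a & b)

# With the target wired to 1, the target output is NAND(a, b):
# 1 XOR (a AND b) = NOT(a AND b).
for a in (0, 1):
    for b in (0, 1):
        _, _, out = ccn(a, b, 1)
        assert out == (0 if (a and b) else 1)   # NAND truth table
print("CCN with target=1 computes NAND")
```

Since NAND is universal for classical logic, this check is the heart of the claim that the CCN gate is a universal reversible logic gate.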

7. Shor's Algorithm

Shor's algorithm is used to factor numbers into their components (which can potentially be prime). Since factoring is believed to be exponentially hard classically, it forms the basis of most modern cryptosystems. Hence, being able to factor in polynomial time on a quantum computer has attracted significant interest.
In this section we are going to begin by constructing a set of tools needed to actually implement Shor's algorithm.
The algorithm depends on:
Modular Arithmetic
Quantum Parallelism
Quantum Fourier Transform

7.1 Shor's Algorithm - Periodicity:

An important result from Number Theory:
F(a) = x^a mod N is a periodic function of a.
Choose N = 15 and x = 7 and we get the following:
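The periodicity for N = 15, x = 7 can be checked directly in plain Python:

```python
# F(a) = x**a mod N is periodic in a. For N = 15, x = 7:
N, x = 15, 7
values = [pow(x, a, N) for a in range(8)]   # modular exponentiation
print(values)   # [1, 7, 4, 13, 1, 7, 4, 13] -> the sequence repeats

# The period is the index of the first recurrence of F(0) = 1
# (valid here because the sequence is purely periodic).
period = values.index(values[0], 1)
print(period)   # 4
```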
7.2 Shor's Algorithm - In-Depth Analysis:
To factor an odd integer N (let's choose 15):
1. Choose an integer q such that N² ≤ q < 2N²; let's pick 256.
2. Choose a random integer x such that GCD(x, N) = 1; let's pick 7.
3. Create two quantum registers (these registers must also be entangled so that the collapse of the input register corresponds to the collapse of the output register):
• Input register: must contain enough qubits to represent numbers as large as q-1, i.e. up to 255, so we need 8 qubits.
• Output register: must contain enough qubits to represent numbers as large as N-1, i.e. up to 14, so we need 4 qubits.
7.3 Preparing Data:
4. Load the input register with an equally weighted superposition of all integers from 0 to q-1, i.e. 0 to 255.
5. Load the output register with all zeros.

7.4 Modular Arithmetic:

6. Apply the transformation x^a mod N to each number a in the input register, storing the result of each computation in the output register.
Note that we are using decimal numbers here only for simplicity.
Input Register   7^a mod 15   Output Register
|0>              7^0 mod 15   1
|1>              7^1 mod 15   7
|2>              7^2 mod 15   4
|3>              7^3 mod 15   13
|4>              7^4 mod 15   1
|5>              7^5 mod 15   7
|6>              7^6 mod 15   4
|7>              7^7 mod 15   13

7.5 Superposition Collapse:

7. Now take a measurement on the output register. This will collapse the superposition to represent just one of the results of the transformation; let's call this value c.

7.6 Entanglement:

8. Since the two registers are entangled, measuring the output register will have the effect of partially collapsing the input register into an equal superposition of each state between 0 and q-1 that yielded c (the value of the collapsed output register.)

7.7 QFT:

We now apply the Quantum Fourier Transform to the partially collapsed input register. The Fourier transform takes a state |a> and transforms it into a superposition over all basis states |c>, each weighted by a phase factor:

|a> -> (1/√q) Σ e^(2πiac/q) |c>, summing over c = 0 to q-1

Note: A is the set of all values a for which 7^a mod 15 yielded 1. In our case A = {0, 4, 8, ..., 252}.
So the final state of the input register after the QFT is a superposition whose probability amplitudes peak at integer multiples of q/4, in our case 256/4, or 64:
|0>, |64>, |128>, |192>, ...
So we no longer have an equal superposition of states; the probability amplitudes of the above states are now higher than those of the other states in our register. We measure the register, and it will collapse with high probability to one of these multiples of 64; let's call this value p.
With our knowledge of q and p, there are methods of calculating the period (one method is the continued fraction expansion of the ratio between p and q).
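The continued fraction step can be sketched with Python's `fractions` module. Here p/q happens to be an exact multiple of 1/r, so reducing the fraction (with denominators capped below N) recovers the period directly; on a real run p may only be close to a multiple, which is why the continued fraction expansion is used:

```python
from fractions import Fraction

# Recover the period r from the measured value p and register size q:
# p/q is (approximately) a multiple of 1/r, so its continued fraction
# expansion, keeping denominators below N, yields r.
p, q, N = 64, 256, 15
r = Fraction(p, q).limit_denominator(N).denominator
print(r)   # 4, the period of 7**a mod 15
```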
7.8 The Factors:
10. Now that we have the period, the factors of N can be determined by taking the greatest common divisor of N with x^(P/2) + 1 and with x^(P/2) - 1. The idea here is that this computation will be done on a classical computer.
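For our running example (N = 15, x = 7, period P = 4) this classical post-processing step is one line of arithmetic:

```python
from math import gcd

# With period P = 4, x = 7, N = 15, the factors come from
# gcd(N, x**(P/2) + 1) and gcd(N, x**(P/2) - 1).
N, x, P = 15, 7, 4
half = x ** (P // 2)      # 7**2 = 49
f1 = gcd(N, half + 1)     # gcd(15, 50) = 5
f2 = gcd(N, half - 1)     # gcd(15, 48) = 3
print(f1, f2)             # 5 3
assert f1 * f2 == N       # 5 * 3 = 15
```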

7.9.1 Shor's Algorithm - Problems:

1. The QFT comes up short and reveals the wrong period. The probability of this actually depends on your choice of q: the larger the q, the higher the probability of finding the correct period.
2. The period of the series ends up being odd, in which case x^(P/2) is not an integer and a new x must be chosen.


It is important to note that practical quantum computing is still far in the future, and programming style for a quantum computer will be quite different.
Development of a quantum computer needs a lot of money, and even the best scientists can't yet answer many of the open questions in quantum physics. The quantum computer is based on theoretical physics, though some proof-of-concept experiments have already been made. Building a practical quantum computer may be just a matter of time.
Quantum computers could solve problems that can't be tackled with today's computers. This would be one of the biggest steps in science and would undoubtedly revolutionize the practical computing world.


While designing and developing this paper, a lot of study material was referred to; most of the information was gathered from the internet.
