51 |
Stepping stones towards linear optical quantum computing (Till Weinhold, Unknown Date)
The experiments described in this thesis investigate the path towards establishing the requirements of quantum computing in a linear optical system. Our qubits are polarisation-encoded photons, for which the basic operations of quantum computing, single-qubit rotations, are a well-understood problem. The difficulty lies in making photons interact; to achieve this we use measurement-induced non-linearities. The first experiment in this thesis describes the thorough characterisation of a controlled-sign gate based on such non-linearities. The photons are provided as pairs generated through parametric down-conversion, and as such share correlations that are unlikely to carry over into future large-scale implementations. En route to such larger circuits, the action of the controlled-sign gate is characterised when the input qubits are generated independently of each other, revealing a large drop in process fidelity. To explore the cause of this degradation of the gate performance, a thorough and highly accurate model of the gate is derived, including a realistic description of faulty circuitry, photon loss and multi-photon emission by the source. By simulating the effects of the various noise sources individually, the heretofore largely ignored multi-photon emission is identified as the prime cause of the degraded gate performance, causing a drop in fidelity nearly three times as large as that of any other error source. I further draw the first comparison between the performance of an experimental gate and the error probabilities per gate derived as thresholds for fault-tolerant quantum computing. In the absence of a single rigorous threshold value, I compare the gate performance to the model that has yielded the highest threshold to date, as an upper bound, and to the threshold of the Gremlin model, which allows for the most general errors. Unsurprisingly, this comparison reveals that the implemented gate is clearly insufficient; however, remedying the multi-photon emission error alone would move this architecture to within striking distance of the boundary for fault-tolerant quantum computing. The methodology used can be applied to any gate in any architecture and, combined with a suitable model of the noise sources, can become an important guide for the developments required to achieve fault-tolerant quantum computing. The final experiment on the path towards linear optical quantum computing is the demonstration of a pair of basic versions of Shor's algorithm which display the entanglement essential to the algorithm. The results again highlight the need for extensive measurements to reveal the fundamental quality of the implemented algorithm, which is not accessible with limited indicative measurements. In the second part of the thesis, I describe two experiments on other forms of entanglement, extending the action of a Fock-state filter, a device capable of attenuating single-photon states more strongly than multi-photon states, to produce entangled states. Furthermore, this device can be used in conjunction with standard wave-plates to extend the range of operations possible on the bi-photonic qutrit space, showing that this setup suffices to produce any desired qutrit state, thereby giving access to new measurement capabilities and, in the process, creating and verifying the first entanglement between a qubit and a qutrit.
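The two quantities at the heart of this abstract, the controlled-sign (controlled-Z) gate and its process fidelity, can be made concrete with a short numerical sketch. This is not the noise model derived in the thesis; it assumes an ideal CZ unitary and a toy channel that simply mixes the ideal gate with completely depolarising noise, and it computes the process fidelity from normalised Choi matrices.

```python
import numpy as np

# Ideal controlled-sign (CZ) gate on two polarisation qubits (basis |HH>,|HV>,|VH>,|VV>)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
d = 4  # dimension of the two-qubit space

def choi(channel):
    """Normalised Choi matrix of a channel given as a function on matrices."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d), dtype=complex)
            E_ij[i, j] = 1.0
            J += np.kron(E_ij, channel(E_ij))
    return J / d

ideal = lambda rho: CZ @ rho @ CZ.conj().T

def noisy(rho, p=0.2):
    """Toy noise: ideal CZ mixed with the completely depolarising channel (assumed, not from the thesis)."""
    return (1 - p) * ideal(rho) + p * np.trace(rho) * np.eye(d) / d

J_ideal = choi(ideal)
J_noisy = choi(lambda r: noisy(r, p=0.2))

# Process fidelity between a unitary and a general channel: F_p = Tr(J_ideal J_noisy)
F_p = np.real(np.trace(J_ideal @ J_noisy))
print(f"process fidelity of toy noisy CZ: {F_p:.3f}")  # (1-p) + p/d**2 = 0.8125 for p = 0.2
```

In this toy model the fidelity is (1 - p) + p/16, so even a modest noise admixture pulls the gate well away from fault-tolerance thresholds of order a percent, which is the kind of comparison the abstract describes.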
|
54 |
Topological Quantum Computing with Fibonacci Anyons (Enblad, Lovisa, January 2024)
This thesis introduces the emerging field of quantum computing, emphasizing its capability to surpass traditional computing by solving complex problems that are beyond the reach of classical computers. Unlike classical systems that operate with bits and logic gates, quantum computing utilizes qubits and quantum gates, exploiting the vast computational space offered by quantum mechanics. A focal point of this study is topological quantum computing, a novel approach designed to overcome the inherent vulnerability of quantum systems to errors, such as decoherence and operational inaccuracies. At the heart of this method lies the use of non-Abelian anyons, with a particular focus on Fibonacci anyons, whose unique topological characteristics and braiding operations present a viable path to fault-tolerant quantum computation. This thesis aims to elucidate how the braiding of Fibonacci anyons can be employed to construct the necessary quantum gates for topological quantum computing. By offering a foundational exploration of quantum computing principles, especially topological quantum computing, and detailing the process for creating quantum gates through braiding of Fibonacci anyons, the work sets the stage for further research and development in this transformative computing paradigm.
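As a minimal numerical companion to this summary, the sketch below builds the standard two-dimensional braid-group representation for three Fibonacci anyons (the F and R matrices quoted in most reviews; sign and phase conventions vary between references) and checks that the braid generators are unitary and satisfy the braid relation, which is what makes braiding a source of quantum gates.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio

# F-matrix for the tau-tau-tau fusion space (basis: fusion channel 1 or tau); F is its own inverse
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])

# R-matrix (exchange phases of two Fibonacci anyons); one common convention
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

# Braid generators acting on the 2-dimensional space of three anyons with total charge tau
sigma1 = R
sigma2 = F @ R @ F  # F equals its own inverse

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

print("sigma1 unitary:", is_unitary(sigma1))
print("sigma2 unitary:", is_unitary(sigma2))
# Braid (Yang-Baxter) relation: sigma1 sigma2 sigma1 == sigma2 sigma1 sigma2
print("braid relation holds:",
      np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2))
```

Compiling a target gate then amounts to searching for a braid word in sigma1 and sigma2 whose product approximates the desired unitary, which is the kind of construction the thesis outlines.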
|
55 |
Molecular engineering with endohedral fullerenes: towards solid-state molecular qubits (Plant, Simon Richard, January 2010)
Information processors that harness quantum mechanics may be able to outperform their classical counterparts at certain tasks. Quantum information processing (QIP) can utilize the quantum mechanical phenomenon of entanglement to implement quantum algorithms. Endohedral fullerenes, where atoms, ions or clusters are trapped in a carbon cage, are a class of nanomaterials that show great promise as the basis for a solid-state QIP architecture. Some endohedral fullerenes are spin-active and offer the potential to encode information in their spin states. This thesis addresses the challenges of how to engineer the components of a scalable QIP architecture based on endohedral fullerenes. It focuses on the synthesis and characterization of molecules which may, in the future, permit the demonstration of entanglement; the optical read-out of quantum states; and the creation of quasi-one-dimensional molecular arrays. Due to its long spin decoherence time, N@C₆₀ is selected as the basic molecular unit for 'coupled' fullerene pairs, molecular systems in which it may be possible to demonstrate entanglement. To this end, isolated fullerene pairs, in the form of spin-bearing fullerene dimers, are created. This begins with the processing of N@C₆₀ at the macroscale and leads towards the synthesis of ¹⁵N@C₆₀-¹⁵N@C₆₀ dimers at the microscale. High-throughput processing is introduced as the most efficient technique to obtain high-purity N@C₆₀ on a reasonable timescale. A scheme to produce symmetric and asymmetric fullerene dimers is also demonstrated. EPR spectroscopy of the dimers in the solid state confirms derivatization, whilst permitting the modelling of spin-spin interactions for 'coupled' fullerene pairs. This suggests that the optimum inter-spin separation at which to observe spin-spin coupling in powders is circa 3 nm. Motivated by the properties of the trivalent erbium ion for the optical detection of quantum states, optically active erbium-doped fullerenes are also investigated. These erbium metallofullerenes are synthesized and isolated as individual isomers. They are characterized by low-temperature photoluminescence spectroscopy, emitting in the infrared at a wavelength of 1.5 μm. The luminescence is markedly different where a C₂ cluster is trapped alongside the erbium ions in the fullerene cage. Er₂C₂@C₈₂ (isomer I) exhibits emission linewidths that are comparable to those observed for Er³⁺ in crystals. Finally, the discovery of a novel praseodymium-doped fullerene is reported. The balance of evidence favours the structure being assigned as Pr₂@C₇₂. This novel endohedral fullerene forms quasi-one-dimensional arrays in carbon nanotubes, a useful proof-of-principle of how a scaled fullerene-based architecture may be achieved.
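The quoted optimum inter-spin separation of about 3 nm can be put in context with a back-of-the-envelope estimate of the magnetic dipole-dipole coupling between two electron spins. The sketch below is an order-of-magnitude calculation under simple assumptions (point dipoles, g close to 2, no exchange contribution), not a reproduction of the modelling in the thesis.

```python
import numpy as np

# Physical constants (SI)
mu0 = 4e-7 * np.pi   # vacuum permeability, T m / A
muB = 9.274e-24      # Bohr magneton, J / T
h = 6.626e-34        # Planck constant, J s
g = 2.0              # electron g-factor (free-electron value assumed)

def dipolar_coupling_hz(r_m):
    """Point-dipole coupling strength (mu0/4pi) * (g*muB)^2 / r^3, expressed in Hz."""
    return (mu0 / (4 * np.pi)) * (g * muB) ** 2 / (r_m ** 3 * h)

for r_nm in (1.0, 3.0, 5.0):
    print(f"r = {r_nm:3.1f} nm -> ~{dipolar_coupling_hz(r_nm * 1e-9) / 1e6:6.2f} MHz")
# At about 3 nm this point-dipole estimate is of order 2 MHz.
```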
|
56 |
Shaping single photons (Nisbet-Jones, Peter, January 2012)
The possibility of creating a scalable quantum network by interconverting photonic and atomic qubits shows great promise. The fundamental requirement for such a network is deterministic control over the emission and absorption of photons from single atoms. This thesis reports on the experimental construction of a photon source that can emit single photons with arbitrary spatio-temporal shape, phase, and frequency. The photon source itself is a strongly-coupled atom-cavity system based on a single ⁸⁷Rb atom within a macroscopic high-finesse Fabry-Perot cavity. It operates intermittently for periods of up to 100 µs, with single-photon repetition rates of 1.0 MHz and an efficiency of almost 80%. Atoms are loaded into the cavity using an atomic fountain, with the upper turning point near the centre of the cavity mode. This ensures long interaction times without any disturbances introduced by trapping potentials. The photons' indistinguishability was tested, with a two-photon Hong-Ou-Mandel visibility of 87%. The ability to both generate and control the photons' properties, for example producing photons with symmetric or multi-peaked spatio-temporal shapes, allows for the production of photons in an n-time-bin superposition state where each time bin has an arbitrarily defined amplitude and phase. These photons can be used as photonic qubits, qutrits and ququads, and their properties have been tested using a small linear-optics network.
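For pure single-photon wave-packets, the two-photon Hong-Ou-Mandel visibility quoted above is simply the squared overlap of the two photons' mode functions. As an illustrative sketch only (assuming Gaussian temporal envelopes, transform-limited photons and a hypothetical temporal width, not the measured pulse shapes from this experiment), the snippet below shows how a relative delay between otherwise identical photons reduces the coincidence-dip visibility.

```python
import numpy as np

def hom_visibility(delay, sigma):
    """Squared overlap |<psi1|psi2>|^2 of two Gaussian wave-packets of width sigma
    whose centres are separated by `delay` (pure, transform-limited photons assumed)."""
    return np.exp(-delay**2 / (4 * sigma**2))

sigma = 100e-9  # photon temporal width in seconds (hypothetical value)
for delay in (0.0, 50e-9, 100e-9, 200e-9):
    V = hom_visibility(delay, sigma)
    p_coinc = (1 - V) / 2  # coincidence probability behind a balanced beam splitter
    print(f"delay {delay*1e9:6.1f} ns  visibility {V:5.3f}  coincidence prob {p_coinc:5.3f}")
```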
|
57 |
Fabrication and Device Applications of Self Assembled Nanostructures (Kanchibotla, Bhargava Ram V., 28 May 2009)
The spin dynamics of electrons in inorganic and organic semiconducting nanostructures has become an area of interest in recent years. The controlled manipulation of an electron's spin, and in particular its phase, is the primary requirement for applications in quantum information processing. The phase decoheres in a time known as the transverse relaxation time or T2 time. We have carried out a measurement of the ensemble-averaged transverse spin relaxation time (T2*) in bulk and in few-molecule samples of the organic semiconductor tris-(8-hydroxyquinolinolato) aluminum, or Alq3. The Alq3 system exhibits two characteristic T2* times: the longer is temperature independent, while the shorter is temperature dependent, indicating that the latter is most likely limited by spin-phonon interaction. Based on the measured data, we infer that the single-particle T2 time in Alq3 is probably long enough to meet Knill's criterion for fault-tolerant quantum computing even at room temperature. Alq3 is also an optically active organic, and we propose a simple optical scheme for spin qubit readout. Moreover, we found that the temperature-dependent T2* time is considerably shorter in bulk Alq3 powder than in few molecules confined in 1–2-nm-sized cavities. Because carriers in organic molecules are localized over individual molecules or atoms but the phonons are delocalized, we believe that this feature is caused by a phonon bottleneck effect. Organic fluorophore molecules, electrosprayed within nanometer-sized pores of an anodic alumina film, exhibit unusually large molecule-specific red- or blue-shifts in the fluorescence peak. This molecular specificity allows us to resolve different constituents in a mixture optically, providing a unique new technology for bio- and chemical sensing. We have also observed that the fluorescence efficiency progressively increases with decreasing pore diameter. This trend cannot be explained by the usual photo-carrier confinement model, since the photo carriers are localized over individual molecules (or atoms) which are much smaller than the pore diameter. A more likely explanation is metal-enhanced fluorescence caused by the plasmon resonance of the nanotextured aluminum lying at the bottom of the pores.
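Knill's fault-tolerance argument is usually phrased as an error probability per gate, so the claim that T2 is "long enough" can be made concrete with a toy estimate. The sketch below assumes purely dephasing-limited errors, a hypothetical gate time and a hypothetical T2; none of these are measured values from this work.

```python
import math

# Toy estimate of error probability per gate from a dephasing time T2.
# All numbers below are hypothetical placeholders, not measurements from this thesis.
t_gate = 1e-9      # assumed gate operation time: 1 ns
T2 = 1e-6          # assumed single-particle transverse relaxation time: 1 microsecond
p_error = 1 - math.exp(-t_gate / T2)   # approximately t_gate / T2 when t_gate << T2
threshold = 1e-2   # order of magnitude often quoted for Knill-style fault tolerance
print(f"error per gate ~{p_error:.1e}  (threshold ~{threshold:.0e}): "
      f"{'below' if p_error < threshold else 'above'} threshold")
```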
|
58 |
Quantum Circuits for Symmetric Cryptanalysis (Unknown Date)
Quantum computers and quantum computing are a reality of the near future. Companies such as Google and IBM have already declared that they have built quantum computers and intend to increase their size and capacity moving forward. Quantum computers have the potential to be far more powerful than today's classical computers. With this power, modeling the behavior of atoms or chemical reactions under unusual conditions and improving weather and traffic forecasts become possible. At the same time, the ability of quantum computers to exponentially speed up some computations makes the security of today's data a major concern and interest. In the area of cryptography, some encryption schemes (such as RSA) are already deemed broken by the onset of quantum computing. Some encryption algorithms have already been created to be quantum-secure, and still more are being created each day. While the algorithms in use today are considered quantum-safe, not much is known about what a quantum attack on these algorithms would look like. Specifically, this paper discusses how many quantum bits and quantum gates, and what gate depth, would be needed for such an attack. The research below was completed to shed light on these areas and to offer some concrete numbers for such an attack. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
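For symmetric ciphers, the canonical quantum attack considered in this kind of resource estimation is Grover key search. The sketch below gives the textbook iteration count of roughly (pi/4)*sqrt(2^k) for a k-bit key and a rough total-gate figure under an assumed per-iteration cost, purely as an illustration of the style of counting (qubits, gates, depth) the dissertation performs, not as its actual results.

```python
import math

def grover_estimate(key_bits, gates_per_iteration):
    """Textbook Grover estimate: number of oracle iterations and a rough total gate count.
    `gates_per_iteration` is an assumed cost of one cipher-oracle plus diffusion round."""
    iterations = math.floor((math.pi / 4) * math.sqrt(2 ** key_bits))
    total_gates = iterations * gates_per_iteration
    return iterations, total_gates

# Hypothetical per-iteration cost; real circuit costs depend on the cipher implementation.
for k in (128, 192, 256):
    it, g = grover_estimate(k, gates_per_iteration=2 ** 20)
    print(f"{k}-bit key: ~2^{math.log2(it):.1f} Grover iterations, ~2^{math.log2(g):.1f} gates")
```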
|
59 |
Efficient Implementations of Post-quantum Isogeny-based Cryptography (Unknown Date)
Quantum computers are envisioned to be able to solve mathematical problems which are currently unsolvable for conventional computers, because of their exceptional computational power derived from quantum mechanics. Therefore, if quantum computers are ever built at large scale, they will certainly be able to solve many classically hard problems, including the hard problems upon which current public-key cryptography is constructed. To counteract this problem, the design of post-quantum cryptography (PQC) protocols is necessary to preserve security in the presence of quantum adversaries. Regardless of whether we can estimate the exact time of the advent of the quantum computing era, security protocols are required to be resistant against the potentially malicious power of quantum computing.
In this thesis, the main focus is on the performance improvement of one of the potential PQC candidates, isogeny-based cryptography. Several optimized implementations of cryptographic applications based on this primitive are presented. From a general viewpoint, the proposed methods, implementation techniques and libraries have a practical impact on the performance evaluation of post-quantum cryptography schemes in a wide range of applications. In particular, the provided benchmarks and optimizations on ARM-powered processors provide a reference for the comparison and evaluation of isogeny-based cryptography against other post-quantum candidates during the first round of NIST's PQC standardization process. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
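The primitive underlying this line of work is the computation of isogenies between elliptic curves. As a toy illustration only (a single degree-2 isogeny via Velu's formulas over a deliberately tiny prime field, nothing like the large SIDH/SIKE-style parameters or the optimized ARM implementations the thesis is about), the sketch below maps a point through a 2-isogeny and verifies that it lands on the codomain curve.

```python
# Toy degree-2 isogeny via Velu's formulas over a tiny prime field (illustrative only).
p = 11
a, b = -1 % p, 0             # domain curve E: y^2 = x^3 + a*x + b over F_11
x0 = 0                       # x-coordinate of a 2-torsion point (x0, 0) on E

t = (3 * x0 * x0 + a) % p    # Velu quantities for a 2-torsion kernel point
w = (x0 * t) % p
A = (a - 5 * t) % p          # codomain curve E': y^2 = x^3 + A*x + B
B = (b - 7 * w) % p

def on_curve(x, y, A_, B_):
    return (y * y - (x ** 3 + A_ * x + B_)) % p == 0

def isogeny(x, y):
    """phi(x, y) = (x + t/(x - x0), y * (1 - t/(x - x0)^2)), defined for x != x0."""
    inv = pow(x - x0, -1, p)                  # modular inverse (Python 3.8+)
    X = (x + t * inv) % p
    Y = (y * (1 - t * inv * inv)) % p
    return X, Y

P = (4, 4)                                    # a point on E: 4^2 = 5 = 4^3 - 4 mod 11
assert on_curve(*P, a, b)
Q = isogeny(*P)
print("image point:", Q, "on codomain:", on_curve(*Q, A, B))
```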
|
60 |
An electrically driven resonant tunnelling semiconductor quantum dot single photon source (Conterio, Michael John, January 2015)
No description available.
|