  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Field theory of interacting polaritons under drive and dissipation

Johansen, Christian Høj 25 January 2023 (has links)
This thesis explores systems that exhibit strong coupling between an optical cavity field and a many-particle system. To treat the drive and dissipative nature of the cavity on the same footing as the dynamics of the many-particle system, we use a non-equilibrium field-theoretic approach. The first system considered is an ultracold bosonic gas trapped inside a cavity. The dispersive coupling between the cavity field and the atoms' motion leads to the formation of a polariton. We show how a modulation of the pump laser on the energy scale of the transverse cavity mode splitting can be used to create effective interactions between different cavity modes. This effective interaction results in the polariton acquiring a multimode nature, exemplified by avoided crossings in the cavity spectrum. As the laser power is increased, the polariton softens and at a critical power becomes unstable. This instability signals the transition into a superradiant state. If the multimode polariton contains a cavity mode with an effective negative detuning, the transition does not happen through a mode softening but at a finite frequency. To investigate this, we construct classical non-linear equations from the action and derive from them the critical couplings and frequencies. It is shown how the superradiant transition happening at a finite frequency is a consequence of a competition between the negatively and the positively detuned cavity modes making up the polariton. The finite-frequency transition is found to be equivalent to a Hopf bifurcation and leads to the emergence of limit cycles. Our analysis shows that the system can exhibit both bistabilities and evolution constricted to a two-torus. We end the investigation by showing how interactions among the atoms, combined with the emerging limit cycle, open new phonon scattering channels. The second system considered in the thesis is inspired by recent experiments on gated transition-metal dichalcogenide (TMD) monolayers inside cavities. An exciton within the TMD can couple strongly to the cavity and, due to the electronic gating, also interact strongly with the conduction electrons. To treat the strong interactions of the excitons with both cavity and electrons, we solve the coupled equations for the correlation functions non-perturbatively within a ladder approximation. The strong interactions give rise to new quasiparticles known as polaron-polaritons. By driving the system through the cavity, we show how the competition between electron-induced momentum relaxation and cavity loss leads to the accumulation of polaritons at a small but finite momentum, accompanied by a significant decrease of the polariton linewidth. Due to the hybrid nature of the polaron-polariton, we show that this behavior can be qualitatively modified by changing the cavity detuning.
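As a minimal illustration of the distinction drawn in this abstract between a mode-softening transition and a finite-frequency one, the generic linear-stability criteria can be sketched as follows; the Jacobian J, pump strength g, and fluctuation vector δX are illustrative symbols, not notation taken from the thesis.

```latex
% Linearize the classical equations of motion around a steady state X_0,
% with pump strength g and fluctuations \delta X:
\partial_t\, \delta X = J(g)\, \delta X , \qquad \{\lambda_i(g)\} = \mathrm{eig}\, J(g).
% Mode softening: a real eigenvalue reaches zero at the critical pump g_c,
\lambda(g_c) = 0 \;\Rightarrow\; \text{zero-frequency superradiant transition.}
% Hopf bifurcation: a complex-conjugate pair crosses the imaginary axis at finite frequency,
\lambda_\pm(g_c) = \pm i\,\omega_c,\quad \omega_c \neq 0,\quad
\left.\tfrac{d}{dg}\,\mathrm{Re}\,\lambda_\pm\right|_{g_c} > 0
\;\Rightarrow\; \text{finite-frequency transition and limit cycles.}
```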
262

A Modular Platform for Adaptive Heterogeneous Many-Core Architectures

Atef, Ahmed Kamaleldin 18 December 2023 (has links)
Multi-/many-core heterogeneous architectures are shaping current and upcoming generations of compute-centric platforms, which are widely used in everything from mobile and wearable devices to high-performance cloud computing servers. Heterogeneous many-core architectures seek to achieve an order-of-magnitude higher energy efficiency as well as computing performance scaling by replacing homogeneous and power-hungry general-purpose processors with multiple heterogeneous compute units supporting multiple core types and domain-specific accelerators. The drift from homogeneous architectures to complex heterogeneous systems has been heavily adopted by chip designers and the silicon industry for more than a decade. Recent silicon chips are based on a heterogeneous SoC which combines a scalable number of heterogeneous processing units of different types (e.g. CPU, GPU, custom accelerator). This shift in computing paradigm is associated with several system-level design challenges related to the integration and communication between a highly scalable number of heterogeneous compute units as well as SoC peripherals and storage units. Moreover, the increasing design complexities make the production of heterogeneous SoC chips a monopoly of big market players due to the increasing development and design costs. Accordingly, recent initiatives towards agile hardware development, open-source tools, and open microarchitectures aim to democratize silicon chip production for academic and commercial usage. Agile hardware development aims to reduce development costs by providing an ecosystem of open-source hardware microarchitectures and hardware design processes. Therefore, heterogeneous many-core development and customization will be relatively less complex and less time-consuming than with conventional design process methods. In order to provide a modular and agile many-core development approach, this dissertation proposes a development platform for heterogeneous and self-adaptive many-core architectures consisting of a scalable number of heterogeneous tiles that maintain design regularity features while supporting heterogeneity. The proposed platform hides the integration complexities by supporting modular tile architectures for general-purpose processing cores supporting multiple instruction set architectures (multi-ISAs) and custom hardware accelerators. By leveraging field-programmable gate arrays (FPGAs), the self-adaptive feature of the many-core platform is achieved using dynamic and partial reconfiguration (DPR) techniques. This dissertation realizes the proposed modular and adaptive heterogeneous many-core platform through three main contributions. The first contribution proposes and realizes a many-core architecture for heterogeneous ISAs. It provides a modular and reusable tile-based architecture for several heterogeneous ISAs based on the open-source RISC-V ISA. The modular tile-based architecture features a configurable number of processing cores with different RISC-V ISAs and different memory hierarchies. To increase the level of heterogeneity and support the integration of custom hardware accelerators, a novel hybrid memory/accelerator tile architecture is developed and realized as the second contribution. The hybrid tile is a modular and reusable tile that can be configured at run time to operate as a scratchpad shared memory between compute tiles or as an accelerator tile hosting a local hardware accelerator logic.
The hybrid tile is designed and implemented to be seamlessly integrated into the proposed tile-based platform. The third contribution deals with the self-adaptation features by providing a reconfiguration management approach to internally control the DPR process through the (RISC-V-based) processing cores. The internal reconfiguration process relies on a novel DPR controller targeting the FPGA design flow for RISC-V-based SoCs to change the types and functionalities of compute tiles at run time.
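As a purely illustrative sketch (not code from the dissertation), the run-time role switch of such a hybrid memory/accelerator tile can be modelled in a few lines; all class, method, and bitstream names below are hypothetical.

```python
# Toy model of a hybrid tile that a reconfiguration manager (e.g. running on a
# RISC-V control core) could switch between a scratchpad-memory role and an
# accelerator role at run time, mimicking dynamic partial reconfiguration.
class HybridTile:
    def __init__(self, size_kib: int):
        self.size_kib = size_kib
        self.mode = "scratchpad"          # default role after initial configuration
        self.loaded_bitstream = None

    def reconfigure_as_accelerator(self, bitstream_id: str) -> None:
        """Stand-in for loading a partial bitstream that turns the tile into an accelerator."""
        self.mode = "accelerator"
        self.loaded_bitstream = bitstream_id

    def reconfigure_as_scratchpad(self) -> None:
        """Revert the tile to a shared scratchpad memory between compute tiles."""
        self.mode = "scratchpad"
        self.loaded_bitstream = None


tile = HybridTile(size_kib=256)
tile.reconfigure_as_accelerator("fft_accelerator_v1")   # hypothetical bitstream name
print(tile.mode)                                         # -> "accelerator"
```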
263

Entanglement certification in quantum many-body systems

Costa De Almeida, Ricardo 07 November 2022 (has links)
Entanglement is a fundamental property of quantum systems and its characterization is a central problem for physics. Moreover, there is an increasing demand for scalable protocols that can certify the presence of entanglement. This is primarily due to the role of entanglement as a crucial resource for quantum technologies. However, systematic entanglement certification is highly challenging, and this is particularly the case for quantum many-body systems. In this dissertation, we tackle this challenge and introduce some techniques that allow the certification of multipartite entanglement in many-body systems. This is demonstrated with an application to a model of interacting fermions that shows the presence of resilient multipartite entanglement at finite temperatures. Moreover, we also discuss some subtleties concerning the definition of entanglement in systems of indistinguishable particles and provide a formal characterization of multipartite mode entanglement. This requires us to work with an abstract formalism that can be used to define entanglement in quantum many-body systems without reference to a specific structure of the states. To further showcase this technique, and also motivated by current quantum simulation efforts, we use it to extend the framework of entanglement witnesses to lattice gauge theories.
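As a minimal, self-contained illustration of the witness idea mentioned above (not taken from the dissertation), the standard two-qubit example is sketched below: for the Bell state |Φ+⟩ the operator W = I/2 − |Φ+⟩⟨Φ+| has a non-negative expectation value on every separable state, so a negative expectation value certifies entanglement.

```python
# Toy example of entanglement certification with a witness operator.
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)
proj = np.outer(phi_plus, phi_plus)                   # projector |Phi+><Phi+|
W = np.eye(4) / 2 - proj                              # witness: Tr(W rho_sep) >= 0 for separable rho

rho_entangled = proj                                  # the Bell state itself
rho_separable = np.diag([0.5, 0.0, 0.0, 0.5])         # classical mixture of |00> and |11>

print(np.trace(W @ rho_entangled))    # -0.5 < 0 : entanglement certified
print(np.trace(W @ rho_separable))    #  0.0 >= 0: inconclusive, as expected
```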
264

Quantum algorithms for many-body structure and dynamics

Turro, Francesco 10 June 2022 (has links)
Nuclei are objects made of nucleons: protons and neutrons. Several dynamical processes that occur in nuclei are of great interest for the scientific community and for possible applications. For example, nuclear fusion can help us produce a large amount of energy with a limited use of resources and environmental impact. Few-nucleon scattering is an essential ingredient to understand and describe the physics of the core of a star. The classical computational algorithms that aim to simulate microscopic quantum systems suffer from an exponential growth of the computational time as the number of particles increases. Even using today's most powerful HPC devices, the simulation of many processes, such as nuclear scattering and fusion, is out of reach due to the excessive amount of computational time needed. In the 1980s, Feynman suggested that quantum computers might be more efficient than classical devices in simulating many-particle quantum systems. Following Feynman's idea of quantum computing, a complete change in computation devices and in simulation protocols has been explored in recent years, moving towards quantum computation. Recently, the prospect of a realistic implementation of efficient quantum calculations was demonstrated both experimentally and theoretically. Nevertheless, we are not in an era of fully functional quantum devices yet, but rather in the so-called "Noisy Intermediate-Scale Quantum" (NISQ) era. As of today, quantum simulations still suffer from the limitations of imperfect gate implementations and the quantum noise of the machine, which impair the performance of the device. In this NISQ era, studies of complex nuclear systems are out of reach. The evolution and improvement of quantum devices will hopefully help us solve hard quantum problems in the coming years. At present, quantum machines can be used to produce demonstrations or, at best, preliminary studies of the dynamics of few-nucleon systems (or other equivalently simple quantum systems). These systems are to be considered mostly toy models for developing prospective quantum algorithms. However, in the future, these algorithms may become efficient enough to allow simulating complex quantum systems on a quantum device, proving more efficient than classical devices, and eventually helping us study hard quantum systems. This is the main goal of this work: developing quantum algorithms potentially useful in studying the quantum many-body problem, and attempting to implement them on different existing quantum devices. In particular, the simulations made use of the IBM QPUs, of the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (LBNL), and of the quantum testbed recently set up at Lawrence Livermore National Laboratory (LLNL) (or a device-level simulator of this machine). Our research aims to develop quantum algorithms for general quantum processors. Therefore, the same quantum algorithms are implemented on different quantum processors to test their efficiency. Moreover, the use of some quantum processors was also conditioned by their availability during the time span of my PhD. The most common way to implement a quantum algorithm is to combine a discrete set of so-called elementary gates. A quantum operation is then realized in terms of a sequence of such gates. This approach suffers from the large number of gates (the depth of a quantum circuit) generally needed to describe the dynamics of a complex system.
An excessively large circuit depth is problematic, since the presence of quantum noise would effectively erase all the information during the simulation. It is still possible to use error-correction techniques, but they require a huge amount of extra quantum registers (ancilla qubits). An alternative technique that can be used to address these problems is the so-called "optimal control technique". Specifically, rather than employing a set of pre-packaged quantum gates, it is possible to optimize the external physical drive (for example, a suitably modulated electromagnetic pulse) that encodes a multi-level complex quantum gate. In this thesis, we start from the work of Holland et al., "Optimal control for the quantum simulation of nuclear dynamics", Physical Review A 101.6 (2020): 062307, where a quantum simulation of real-time neutron-neutron dynamics is proposed, in which the propagation of the system is enacted by a single dense multi-level gate derived from the nuclear spin interaction at leading order (LO) of chiral effective field theory (EFT) through an optimal control technique. Hence, we generalize the two-neutron spin simulations, re-including the spatial degrees of freedom with a hybrid algorithm: the spin dynamics are implemented within the quantum processor, while the spatial dynamics are computed by applying classical algorithms. We call this method classical-quantum coprocessing. Quantum simulations using both the optimal control method and the discrete gate set approach will be presented. When applying the coprocessing scheme through optimal control, a possible bottleneck arises from the classical computational time required to compute the microwave pulses. A solution to this problem will be presented. Furthermore, an investigation of an improved way to efficiently compile quantum circuits based on the Similarity Renormalization Group will be discussed. This method simplifies the compilation in terms of digital gates. The most important result contained in this thesis is the development of an algorithm for performing an imaginary time propagation on a quantum chip. It belongs to the class of methods for evaluating the ground state of a quantum system, based on performing a Wick rotation of the real-time evolution operator. The resulting propagator is not unitary, implementing in some way a dissipation mechanism that naturally leads the system towards its lowest energy state. Evolution in imaginary time is a well-known technique for finding the ground state of quantum many-body systems. It is at the heart of several numerical methods, including Quantum Monte Carlo techniques, that have been used with great success in quantum chemistry, condensed matter and nuclear physics. The classical implementations of imaginary time propagation suffer (with few exceptions) from an exponential increase in the computational cost with the dimension of the system. This fact calls for a generalization of the algorithm to quantum computers. The proposed algorithm is implemented by expanding the Hilbert space of the system under investigation by means of ancillary qubits. The projection is obtained by applying a series of unitary transformations having the effect of dissipating the components of the initial state along excited states of the Hamiltonian into the ancillary space. A measurement of the ancillary qubit(s) will then remove such components, effectively implementing a "cooling" of the system.
The theory and testing of this method, along with some proposals for improvements, will be thoroughly discussed in the dedicated chapter.
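To make the core idea concrete, here is a small classical sketch of imaginary-time projection onto the ground state; it is not the quantum-circuit algorithm developed in the thesis, and the toy Hamiltonian and step size below are arbitrary.

```python
# Classical illustration of imaginary-time propagation: repeatedly apply
# exp(-dtau * H) and renormalize; excited-state components decay, so the
# state converges to the ground state of H.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                          # toy Hermitian "Hamiltonian"

dtau, steps = 0.1, 500
propagator = expm(-dtau * H)               # non-unitary imaginary-time step

psi = rng.normal(size=6)
psi /= np.linalg.norm(psi)                 # random, normalized initial state

for _ in range(steps):
    psi = propagator @ psi
    psi /= np.linalg.norm(psi)             # renormalization plays the role of the "cooling"

print(psi @ H @ psi)                       # variational energy of the propagated state
print(np.linalg.eigvalsh(H)[0])            # exact lowest eigenvalue, for comparison
```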
265

Implementation and Evaluation of a TDMA Based Protocol for Wireless Sensor Networks

Fiske, Robert M. January 2010 (has links)
No description available.
266

Enabling Efficient Use of MPI and PGAS Programming Models on Heterogeneous Clusters with High Performance Interconnects

Potluri, Sreeram 18 September 2014 (has links)
No description available.
267

Aspects of the Many-Body Problem in Nuclear Physics

Dyhdalo, Alexander 18 September 2018 (has links)
No description available.
268

Advances in the Application of the Similarity Renormalization Group to Strongly Interacting Systems

Wendt, Kyle Andrew 17 December 2013 (has links)
No description available.
269

STOCHASTIC MODELS ASSOCIATED WITH THE TWO-PARAMETER POISSON-DIRICHLET DISTRIBUTION

Xu, Fang 04 1900 (has links)
In this thesis, we explore several stochastic models associated with the two-parameter Poisson-Dirichlet distribution and population genetics. The impacts of mutation, selection and time on the population evolutionary process will be studied by focusing on two aspects of the model: equilibrium and non-equilibrium. In the first chapter, we introduce relevant background on stochastic genetic models, and summarize our main results and their motivations. In the second chapter, the two-parameter GEM distribution is constructed from a linear birth process with immigration. The derivation relies on the limiting behavior of the age-ordered family frequencies. In the third chapter, to show the robustness of the sampling formula, we derive the Laplace transform of the two-parameter Poisson-Dirichlet distribution from Pitman's sampling formula. The correlation measure of the two-parameter point process is obtained in our proof. We also reverse this derivation by getting the sampling formula from the Laplace transform. Then, we establish a central limit theorem for the infinitely-many-neutral-alleles model at a fixed time as the mutation rate goes to infinity. Lastly, we obtain the Laplace transform for the selection model from its sampling formula. In the fourth chapter, we establish a central limit theorem for the homozygosity functions under overdominant selection with mutation approaching infinity. The selection intensity is given by a multiple of a certain power of the mutation rate. This result shows an asymptotic normality for the properly scaled homozygosities, resembling the neutral model without selection. This implies that the influence of selection can hardly be observed with large mutation. In the fifth chapter, the stochastic dynamics of the two-parameter extension of the infinitely-many-neutral-alleles model is characterized by the derivation of its transition function, which is absolutely continuous with respect to the stationary distribution, the two-parameter Poisson-Dirichlet distribution. The transition density is obtained by an expansion in eigenfunctions. Combining this result with the correlation measure in Chapter 3, we obtain the probability generating function of a random sample from the two-parameter model at a fixed time. Finally, we obtain two results based on the quasi-invariance of the Gamma process with respect to the multiplication transformation group. One is the quasi-invariance property of the two-parameter Poisson-Dirichlet distribution with respect to a Markovian transformation group. The other is the equivalence between the quasi-invariance of the stationary distributions of a class of branching processes and their reversibility. / Doctor of Philosophy (PhD)
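For reference, the two-parameter GEM(α, θ) distribution mentioned above admits the standard stick-breaking representation W_i ~ Beta(1 − α, θ + iα), P_i = W_i ∏_{j<i}(1 − W_j); the short sampler below is a generic illustration of that construction, not the birth-process derivation developed in the thesis.

```python
# Illustrative sampler for the first n size-biased frequencies of a
# GEM(alpha, theta) draw via stick-breaking (0 <= alpha < 1, theta > -alpha).
import numpy as np

def sample_gem(alpha: float, theta: float, n: int, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    # W_i ~ Beta(1 - alpha, theta + i*alpha), i = 1, ..., n
    w = np.array([rng.beta(1 - alpha, theta + (i + 1) * alpha) for i in range(n)])
    # P_i = W_i * prod_{j < i} (1 - W_j)
    remaining = np.concatenate(([1.0], np.cumprod(1 - w)[:-1]))
    return w * remaining

freqs = sample_gem(alpha=0.3, theta=1.0, n=50, rng=np.random.default_rng(42))
print(freqs[:5], freqs.sum())   # decaying frequencies, summing to just below 1
```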
270

Neural-network compression methods for computational quantum many-body physics

Medvidovic, Matija January 2024 (has links)
Quantum many-body phenomena have been a focal point of the physics community for the last several decades. From material science and chemistry to model systems and quantum computing, diverse problems share mathematical descriptions and challenges. A key roadblock in many subfields is the exponential increase in problem size with the number of quantum constituents. Therefore, the development of efficient compression and approximation methods is the only way to move forward. Parameterized models coming from the field of machine learning have successfully been applied to very large classical problems where data is abundant, leveraging recent advances in high-performance computing. In this thesis, state-of-the-art methods relying on such models are applied to the quantum many-body problem in two distinct ways: from first principles and data-driven, as described in chapter 1. In chapters 2 and 3, the framework of quantum Monte Carlo is used to efficiently manipulate variational approximations of many-body states, obtaining non-equilibrium states occurring in quantum circuits and real-time dynamics of large systems. In chapters 4 and 5, simulated synthetic data is used to train surrogate models that enhance original methods, allowing for computations that would otherwise be out of reach for conventional solvers. In all cases, a computational advantage is established when using machine learning methods to compress different versions of the quantum many-body problem. Each chapter concludes by proposing extensions and novel applications of the new compressed representations of the problem.
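As a generic illustration of the kind of compressed representation discussed here (not the specific architectures used in the thesis), the sketch below defines an RBM-style neural-network ansatz for a small spin chain and samples it with a Metropolis walk, so that observables can be estimated without storing the exponentially large state vector; all sizes and hyperparameters are arbitrary.

```python
# Toy neural-network quantum state: RBM-style log-amplitudes plus
# Metropolis sampling of |psi(s)|^2 by single spin flips.
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_hidden = 8, 16
a = rng.normal(scale=0.01, size=n_spins)              # visible biases
b = rng.normal(scale=0.01, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.01, size=(n_hidden, n_spins))  # couplings

def log_psi(s: np.ndarray) -> float:
    """Unnormalized log-amplitude of a configuration s in {-1, +1}^n."""
    return a @ s + np.sum(np.log(2 * np.cosh(b + W @ s)))

s = rng.choice([-1, 1], size=n_spins)
samples = []
for _ in range(5000):
    i = rng.integers(n_spins)
    s_flip = s.copy()
    s_flip[i] *= -1
    # accept with probability |psi(s_flip)|^2 / |psi(s)|^2 (real parameters here)
    if rng.random() < np.exp(2 * (log_psi(s_flip) - log_psi(s))):
        s = s_flip
    samples.append(s.copy())

print(np.mean([x.mean() for x in samples[1000:]]))    # estimated magnetization per spin
```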
