41

Positronium in the AEgIS experiment: study on its emission from nanochanneled samples and design of a new apparatus for Rydberg excitations

Di Noto, Lea January 2014 (has links)
This experimental thesis was carried out in the framework of AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy), an experiment installed at CERN whose primary goal is the measurement of the Earth's gravitational acceleration on antihydrogen. The anti-atoms will be produced by the charge-exchange reaction, in which a cloud of positronium (Ps) in Rydberg states interacts with cooled, trapped antiprotons. Since the charge-exchange cross section depends on the Ps velocity and quantum number, the velocity distribution of the Ps emitted by a positron-positronium converter, as well as its excitation to Rydberg states, has to be studied and optimized. In this thesis, Ps cooling and emission into vacuum from nanochanneled silicon targets were studied by performing time-of-flight measurements with a dedicated apparatus conceived to receive the slow positron beam produced at the Trento laboratory or at the NEPOMUC facility in Munich. Measurements were performed by varying the positron implantation energy, the sample temperature and the nanochannel dimensions, with the aim of finding the parameters that maximize the fraction of Ps with velocity lower than 5×10^4 m/s. Preliminary data were analyzed to extract the Ps velocity distribution and its average temperature. Moreover, an original method for evaluating the permanence time of Ps inside the nanochannels before emission into vacuum is described. A first rough evaluation based on the performed measurements is reported; this result will be useful for investigating the Ps cooling process and for synchronizing the laser pulse for Ps excitation in the AEgIS experiment. In order to perform measurements of Ps excitation to Rydberg states, a new apparatus for bunching the positron pulses coming from the AEgIS positron line was designed and built. The COMSOL and SIMION software packages were used to design a magnetic transport line and an electron-optical line, which extracts the positrons from the magnetic field and focuses them onto the nanochanneled Si sample. In addition, a buncher device, which compresses the positron bunches in space and time, was built, and a fast circuit for supplying the 25 buncher electrodes with a parabolic-shaped potential was designed and tested. According to the simulations, the device will deliver positrons at the target position with an energy ranging from 6 to 9 keV, in bunches of 5 ns duration and with a spot of 2.5 mm in diameter. Using this apparatus, first measurements for the optimization of Ps excitation to Rydberg states, and studies of the Ps levels with and without magnetic field, will be performed. At a later stage, investigations of Ps spectroscopy or of Ps laser cooling with the same apparatus could become achievable for the first time.
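The bunching principle can be illustrated with a short sketch. The block below is a minimal illustration only, not the actual AEgIS electronics: the electrode count comes from the abstract, while the buncher length and peak voltage are assumed placeholder values. It computes the voltages approximating a parabolic potential across the 25 electrodes, so that positrons entering farther from the target receive a larger energy kick and the bunch is compressed in time at the focus.

import numpy as np

# Illustrative sketch only (assumed geometry, not the real AEgIS circuit):
# voltages approximating a parabolic potential V(z) = V_MAX * (z/L)^2 on a
# 25-electrode buncher, so that positrons starting farther from the target
# gain more energy and the bunch is compressed in time at the focus.
N_ELECTRODES = 25      # from the abstract
L = 0.25               # assumed buncher length [m]
V_MAX = 300.0          # assumed peak potential [V]

z = np.linspace(0.0, L, N_ELECTRODES)   # electrode positions along the axis
voltages = V_MAX * (z / L) ** 2         # parabolic voltage profile

for i, (zi, vi) in enumerate(zip(z, voltages), start=1):
    print(f"electrode {i:2d}:  z = {zi:.3f} m   V = {vi:6.1f} V")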
42

Ground state and dynamical properties of many-body systems by non conventional Quantum Monte Carlo algorithms

Roggero, Alessandro January 2014 (has links)
In this work we develop Quantum Monte Carlo techniques suitable for exploring both ground state and dynamical properties of interacting many-body systems. We then apply these techniques to the study of excitations in superfluid He4 and to explore the structure of nuclear systems using chiral effective field theory interactions.
43

Topological Dynamics in Low-Energy QCD

Millo, Raffaele January 2011 (has links)
In this work we discuss the role of topological degrees of freedom in very low-energy hadronic processes (vacuum polarization and vacuum birefringence). We also present an approach that makes it possible to investigate the microscopic dynamics of non-perturbative processes: this is achieved by constructing an effective statistical theory for topological vacuum gauge configurations by means of Lattice QCD simulations.
44

Studies of the Higgs sector in H->ZZ->2l2q and bbH->4b semileptonic channels at CMS.

Kanishchev, Konstantin January 2014 (has links)
The thesis is devoted to my Ph.D. research activities within the CMS collaboration during the last three years. My primary field of interest was the investigation of the Higgs sector of the Standard Model, also in connection with searches for New Physics beyond the Standard Model.
45

Pionless Effective Field Theory: Building the Bridge Between Lattice Quantum Chromodynamics and Nuclear Physics

Contessi, Lorenzo January 2017 (has links)
We analyze ground-state properties of few-nucleon systems and ¹⁶O using Pionless Effective Field Theory at leading order (LO). This is the first time the theory is extended to many-body nuclear systems. The free constants of the interaction are fitted using both experimental data and Lattice QCD (LQCD) results. The nuclear many-body Schrödinger equation is solved by means of the Auxiliary Field Diffusion Monte Carlo method. A linear optimization procedure has been used to recover the correct structure of the ground-state wavefunction. Pionless Effective Field Theory has proved to be an appropriate theory for describing light nuclei both in nature and in the case where heavier quarks are used in order to make LQCD calculations feasible. Our results are in good agreement with experiments and with LQCD predictions. In our LO calculation, ¹⁶O appears to be unstable against breakup into four ⁴He for the quark masses considered.
46

Quantum algorithms for many-body structure and dynamics

Turro, Francesco 10 June 2022 (has links)
Nuclei are objects made of nucleons: protons and neutrons. Several dynamical processes that occur in nuclei are of great interest for the scientific community and for possible applications. For example, nuclear fusion can help us produce a large amount of energy with a limited use of resources and limited environmental impact. Few-nucleon scattering is an essential ingredient for understanding and describing the physics of the core of a star. The classical computational algorithms that aim to simulate microscopic quantum systems suffer from an exponential growth of the computational time as the number of particles is increased. Even using today's most powerful HPC devices, the simulation of many processes, such as nuclear scattering and fusion, is out of reach due to the excessive amount of computational time needed. In the 1980s, Feynman suggested that quantum computers might be more efficient than classical devices in simulating many-particle quantum systems. Following Feynman's idea of quantum computing, a complete change in the computational devices and in the simulation protocols has been explored in recent years, moving towards quantum computation. Recently, the prospect of a realistic implementation of efficient quantum calculations was demonstrated both experimentally and theoretically. Nevertheless, we are not yet in an era of fully functional quantum devices, but rather in the so-called "Noisy Intermediate-Scale Quantum" (NISQ) era. As of today, quantum simulations still suffer from the limitations of imperfect gate implementations and from the quantum noise of the machine, which impair the performance of the device. In this NISQ era, studies of complex nuclear systems are out of reach. The evolution and improvement of quantum devices will hopefully help us solve hard quantum problems in the coming years. At present, quantum machines can be used to produce demonstrations or, at best, preliminary studies of the dynamics of few-nucleon systems (or other equivalently simple quantum systems). These systems are to be considered mostly toy models for developing prospective quantum algorithms. However, in the future, these algorithms may become efficient enough to allow the simulation of complex quantum systems on a quantum device, proving more efficient than classical devices and eventually helping us study hard quantum systems. This is the main goal of this work: developing quantum algorithms that are potentially useful for studying the quantum many-body problem, and attempting to implement them on different existing quantum devices. In particular, the simulations made use of the IBM QPUs, of the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (LBNL), and of the quantum testbed recently installed at Lawrence Livermore National Laboratory (LLNL) (or a device-level simulator of this machine). Our research aim is to develop quantum algorithms for general quantum processors; therefore, the same quantum algorithms are implemented on different quantum processors to test their efficiency. Moreover, some uses of quantum processors were also conditioned by their availability during the time span of my PhD. The most common way to implement a quantum algorithm is to combine a discrete set of so-called elementary gates, so that a quantum operation is realized as a sequence of such gates. This approach suffers from the large number of gates (the depth of the quantum circuit) generally needed to describe the dynamics of a complex system.
An excessively large circuit depth is problematic, since the presence of quantum noise would effectively erase all the information during the simulation. It is still possible to use error-correction techniques, but they require a huge amount of extra quantum register (ancilla qubits). An alternative technique that can be used to address these problems is the so-called optimal control technique. Specifically, rather than employing a set of pre-packaged quantum gates, it is possible to optimize the external physical drive (for example, a suitably modulated electromagnetic pulse) that encodes a multi-level complex quantum gate. In this thesis, we start from the work of Holland et al., "Optimal control for the quantum simulation of nuclear dynamics", Physical Review A 101.6 (2020): 062307, where a quantum simulation of real-time neutron-neutron dynamics is proposed in which the propagation of the system is enacted by a single dense multi-level gate, derived through optimal control from the nuclear spin interaction at leading order (LO) of chiral effective field theory (EFT). We generalize the two-neutron spin simulations by re-including the spatial degrees of freedom with a hybrid algorithm: the spin dynamics are implemented on the quantum processor, while the spatial dynamics are computed with classical algorithms. We call this method classical-quantum coprocessing. Quantum simulations using both optimal control methods and the discrete gate set approach will be presented. When applying the coprocessing scheme through optimal control, a possible bottleneck arises from the classical computational time required to compute the microwave pulses; a solution to this problem will be presented. Furthermore, an improved way to efficiently compile quantum circuits, based on the Similarity Renormalization Group, will be discussed. This method simplifies the compilation in terms of digital gates. The most important result contained in this thesis is the development of an algorithm for performing an imaginary-time propagation on a quantum chip. It belongs to the class of methods for evaluating the ground state of a quantum system based on a Wick rotation of the real-time evolution operator. The resulting propagator is not unitary, implementing a dissipation mechanism that naturally drives the system towards its lowest-energy state. Evolution in imaginary time is a well-known technique for finding the ground state of quantum many-body systems. It is at the heart of several numerical methods, including Quantum Monte Carlo techniques, that have been used with great success in quantum chemistry, condensed matter and nuclear physics. The classical implementations of imaginary-time propagation suffer (with few exceptions) from an exponential increase in the computational cost with the dimension of the system. This fact calls for a generalization of the algorithm to quantum computers. The proposed algorithm is implemented by expanding the Hilbert space of the system under investigation by means of ancillary qubits. The projection is obtained by applying a series of unitary transformations whose effect is to dissipate the components of the initial state along excited states of the Hamiltonian into the ancillary space. A measurement of the ancillary qubit(s) then removes such components, effectively implementing a "cooling" of the system.
The theory and testing of this method, along with some proposals for improvements, will be thoroughly discussed in the dedicated chapter.
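As a purely classical illustration of the imaginary-time projection idea (a toy sketch with an arbitrary two-level Hamiltonian, not the ancilla-based quantum-circuit algorithm of the thesis), repeatedly applying exp(-Δτ·H) to a trial state and renormalizing damps the excited-state components and drives the state towards the ground state:

import numpy as np

# Toy classical demonstration of imaginary-time propagation: the non-unitary
# propagator exp(-dtau*H) suppresses excited-state components, so repeated
# application plus renormalization "cools" the state to the ground state.
# The Hamiltonian, trial state and step size are arbitrary example values.
H = np.array([[1.0, 0.5],
              [0.5, -0.3]])

dtau = 0.2
evals, evecs = np.linalg.eigh(H)
exp_step = evecs @ np.diag(np.exp(-dtau * evals)) @ evecs.T   # exp(-dtau*H)

psi = np.array([1.0, 0.0])            # trial state with excited-state content
for _ in range(50):
    psi = exp_step @ psi              # non-unitary imaginary-time step
    psi /= np.linalg.norm(psi)        # renormalize after each step

print("projected energy:", psi @ H @ psi)
print("exact ground-state energy:", evals[0])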
47

From Hypernuclei to Hypermatter: a Quantum Monte Carlo Study of Strangeness in Nuclear Structure and Nuclear Astrophysics

Lonardoni, Diego January 2013 (has links)
The work presents recent developments in Quantum Monte Carlo calculations for nuclear systems including strange degrees of freedom. The Auxiliary Field Diffusion Monte Carlo algorithm has been extended to the strange sector by the inclusion of the lightest of the hyperons, the Λ particle. This makes it possible to perform detailed calculations for Λ hypernuclei, providing a microscopic framework for the study of the hyperon-nucleon interaction in connection with the available experimental information. The extension of the method to strange neutron matter lays the basis for the first Diffusion Monte Carlo analysis of the hypernuclear medium, with the derivation of neutron-star observables of great astrophysical interest.
48

Progress of Monte Carlo methods in nuclear physics using EFT-based NN interaction and in hypernuclear systems.

Armani, Paolo January 2011 (has links)
In this thesis I report the work of my PhD; it treats two different topics, related by a third one, namely the computational method I use to address them. I worked on EFT theories for nuclear systems and on hypernuclei, and I attempted to compute the ground-state properties of both systems using Monte Carlo methods. In the first part of the thesis I briefly describe the Monte Carlo methods that I used: the VMC (Variational Monte Carlo), DMC (Diffusion Monte Carlo), AFDMC (Auxiliary Field Diffusion Monte Carlo) and AFQMC (Auxiliary Field Quantum Monte Carlo) algorithms. I also report some new improvements to these methods that I tried or suggested: the fixed-hypernode extension (§ 2.6.2) for the DMC algorithm, and the inclusion of the L2 term (§ 3.10) and of the exchange term (§ 3.11) in the AFDMC propagator. These last two are based on the same idea used by K. Schmidt to include the spin-orbit term in the AFDMC propagator (§ 3.9). We mainly use the AFDMC algorithm, but at the end of the first part I also describe the AFQMC method. This is quite similar in principle to AFDMC, but it has never been used for nuclear systems. Moreover, there are some details that let us hope to overcome with AFQMC some limitations that we find in the AFDMC algorithm. However, we do not report any results for the AFQMC algorithm, because we started implementing it only in the last months and our code still requires extensive testing and debugging. In the second part I report our attempt to describe the nucleon-nucleon interaction using EFT theory within the AFDMC method. I explain all our tests to solve for the ground state of a nucleus within this method; I also show the problems that we found and our attempts to overcome them before abandoning this project. In the third part I report our work on hypernuclei; we tried to fit part of the ΛN interaction and to compute the Λ-hyperon separation energy of hypernuclei. Although we found some good and encouraging results, we noticed that the effect of the fixed-phase approximation used in the AFDMC algorithm was not as small as assumed. Because of that, in order to obtain interesting results, we need to improve this approximation or use a better method; hence we looked at the AFQMC algorithm, aiming to quickly reach good results.
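To give a flavour of the Monte Carlo machinery listed above, the following minimal Variational Monte Carlo sketch samples a one-dimensional harmonic oscillator (ħ = m = ω = 1) with a Gaussian trial wavefunction. It only illustrates the Metropolis sampling idea; it is in no way representative of the nuclear AFDMC/AFQMC codes discussed in the thesis, and all numerical values are arbitrary.

import numpy as np

# Minimal Variational Monte Carlo (VMC) sketch for a 1D harmonic oscillator.
# Trial wavefunction: psi(x) = exp(-a * x^2); the exact ground state has a = 0.5.
rng = np.random.default_rng(0)

def local_energy(x, a):
    # E_L = -psi''/(2 psi) + x^2/2 for psi = exp(-a x^2)
    return a + x**2 * (0.5 - 2.0 * a**2)

a = 0.4          # variational parameter (deliberately off the exact value)
x, step = 0.0, 1.0
energies = []
for i in range(20000):
    x_new = x + step * rng.uniform(-1, 1)
    # Metropolis acceptance with probability |psi(x_new)/psi(x)|^2
    if rng.uniform() < np.exp(-2.0 * a * (x_new**2 - x**2)):
        x = x_new
    if i > 2000:                      # discard equilibration steps
        energies.append(local_energy(x, a))

print("VMC energy estimate:", np.mean(energies), "(exact ground state: 0.5)")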
49

Simulation and Characterization of Single Photon Detectors for Fluorescence Lifetime Spectroscopy and Gamma-ray Applications

Benetti, Michele January 2012 (has links)
Gamma-ray and Fluorescence Lifetime Spectroscopies are driving the development of non-imaging silicon photon sensors and, in this context, Silicon Photo-Multipliers (SiPMs) play the starring role. They are 2D arrays of optical diodes called Single Photon Avalanche Diodes (SPADs), and are normally fabricated with a dedicated silicon process. SPADs amplify the charge produced by a single absorbed photon in a way that recalls the avalanche amplification exploited in Photo-Multiplier Tubes (PMTs). Recently, 2D arrays of SPADs have also been realized in standard CMOS technology, paving the way to completely custom sensors that can host ancillary electronics and digital logic on-chip. The design of scientific apparatus has been influenced for years by bulky PMT-based detectors. Much of the interest in both SiPMs and CMOS SPADs lies in the possibility of placing these small sensors so as to realize new detector geometries. This thesis examines the deployment of a SiPM-based detector in an apparatus built for the study of the Time-Of-Flight (TOF) of Positronium (Ps), and the deployment of a 2D array of CMOS SPADs in a lab-on-chip apparatus for Fluorescence Lifetime Spectroscopy. The two design procedures are carried out using Monte Carlo simulations. Characterizations of the two sensors have been performed, allowing for a performance evaluation and a validation of the two design procedures.
50

Development of enhanced double-sided 3D radiation sensors for pixel detector upgrades at HL-LHC

Povoli, Marco January 2013 (has links)
The upgrades of the High Energy Physics (HEP) experiments at the Large Hadron Collider (LHC) will call for new radiation-hard technologies to be applied in the next generations of tracking devices, which will be required to withstand extremely high radiation doses. In this sense, one of the most promising approaches to silicon detectors is the so-called 3D technology. This technology realizes columnar electrodes penetrating vertically into the silicon bulk, thus decoupling the active volume from the inter-electrode distance. 3D detectors were first proposed by S. Parker and collaborators in the mid ’90s as a new sensor geometry intended to mitigate the effects of radiation damage in silicon. 3D sensors are currently attracting growing interest in the field of High Energy Physics, despite their more complex and expensive fabrication, because of their much lower operating voltages and enhanced radiation hardness. 3D technology has also been investigated in other laboratories, with the intent of reducing the fabrication complexity and aiming at medium-volume sensor production in view of the first upgrades of the LHC experiments. This work describes all the efforts in the design, fabrication and characterization of the 3D detectors produced at FBK for the ATLAS Insertable B-Layer, in the framework of the ATLAS 3D sensor collaboration. In addition, the design and preliminary characterization of a new batch of 3D sensors are also described, together with new applications of 3D technology.
