101

Search for Stop Using the ATLAS Detector and Performance Analysis of the Tile Calorimeter with Muons from W Decays

Andrean, Stefio Yosse January 2021 (has links)
This thesis presents a search for the supersymmetric partner of the top quark (the stop) in the final state with one lepton. The search focuses especially on the region of the parameter space where the 2-body decay dominates. The analysis is performed using the full LHC Run 2 dataset at √s = 13 TeV recorded by the ATLAS detector. No significant excess above the backgrounds is observed, and 95% confidence level exclusion limits are calculated in the stop-neutralino mass plane. Stops are excluded up to 1200 GeV for neutralino masses below 400 GeV. The Tile Calorimeter is the part of the ATLAS calorimeter system whose main task is to measure the energy of hadrons. A performance study is conducted on the Tile Calorimeter using muons from W boson decays originating from proton-proton collisions. Each calorimeter cell response is measured in data and compared with detector simulation. The azimuthal uniformity of the cell response is also investigated using a likelihood method. Overall, good agreement between data and detector simulation, as well as azimuthal uniformity, is observed, indicating well-calibrated cells and uniform responses across the calorimeter modules.
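
As a rough illustration of the kind of cell-level comparison described above, the sketch below computes a per-module data/MC response ratio and a simple likelihood-ratio test of azimuthal uniformity. The module count, the Gaussian likelihood and all numerical values are illustrative assumptions, not the thesis' actual procedure or data.

```python
import numpy as np
from scipy import stats

# Illustrative per-module mean cell responses (e.g. from W->munu muons)
# in data and simulation; all values here are made up.
rng = np.random.default_rng(0)
n_modules = 64                                      # assumed number of phi modules
mc_response = np.full(n_modules, 1.00)
data_response = rng.normal(1.00, 0.01, n_modules)   # toy "measured" responses
sigma = np.full(n_modules, 0.01)                    # per-module uncertainties

# Data / MC response ratio per module
ratio = data_response / mc_response
print(f"mean data/MC ratio: {ratio.mean():.3f} +- {ratio.std(ddof=1):.3f}")

# Simple likelihood-ratio test of azimuthal uniformity:
# H0: one common response R for all modules; H1: independent response per module.
def nll_common(R):
    return 0.5 * np.sum(((ratio - R) / sigma) ** 2)

R_hat = np.average(ratio, weights=1.0 / sigma**2)
lam = 2.0 * nll_common(R_hat)     # -2 ln LR, chi2 with n_modules-1 dof under H0
p_value = stats.chi2.sf(lam, df=n_modules - 1)
print(f"uniformity p-value: {p_value:.3f}")
```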
102

CP Violation Studies in Cascade Decay Sequence

Sultanov, Roman January 2022 (has links)
This thesis studies CP violation in the decay of the Ξ− hyperon, also known as the cascade baryon, which decays in the sequence Ξ− → Λπ− → pπ−π−. A difference between the angular distribution of this decay sequence and that of the charge-conjugate decay sequence ͞Ξ+ → ͞Λπ+ → ͞pπ+π+, after taking into account the inversion of the momenta due to the parity operation, is indicative of CP violation. The decay sequence is described by three asymmetry parameters, αΞ, αΛ and φΞ, while the charge-conjugate decay sequence is described by ͞αΞ, ͞αΛ and ͞φΞ. A measure of CP violation is given by the CP-violating observables AΞ, AΛ and ΦΞ. The aim of this thesis is to study how the normalised statistical uncertainties in the asymmetry parameters and in the CP-violating observables depend on the magnitude of the polarisation vector (polarisation) of the cascade and the anticascade. This was done by simulating 1.0×10⁷ Ξ− → Λπ− → pπ−π− and ͞Ξ+ → ͞Λπ+ → ͞pπ+π+ decays for different values of the cascade polarisation using Monte Carlo methods, and then using maximum likelihood estimation and error propagation to estimate the uncertainties in the parameters and in the observables. The normalised statistical uncertainties in the asymmetry parameters and the CP-violating observables decreased whenever the polarisation was increased, although with diminishing returns. In the region of 10%−50% polarisation, the decrease in the uncertainties was substantial: an increase from 10% to 50% polarisation lowered all of the uncertainties by 76%−80%. In the region of 50%−100% polarisation, the decrease was smaller: an increase from 50% to 100% polarisation lowered the uncertainties in αΞ, ͞αΞ and AΞ by roughly 33%, in αΛ, ͞αΛ and AΛ by roughly 40%, and in φΞ, ͞φΞ and ΦΞ by roughly 53%. It was also shown that, for 60% polarised cascades and with the method of this thesis, a sample of 1.1×10¹¹−1.3×10¹¹ Ξ− → Λπ− → pπ−π− and ͞Ξ+ → ͞Λπ+ → ͞pπ+π+ decays would be required to reach a precision in the observables of the order predicted by the Standard Model. To match the uncertainty of the most recent and best measurement of the observables, however, 60% polarised cascades would require a sample of only 5.7×10⁴−8.6×10⁴ decays for each decay sequence.
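
The dependence of the statistical uncertainty on the polarisation can be illustrated with a much simplified toy study: the sketch below uses only a one-dimensional 1 + αP·cosθ distribution for a single weak decay, rather than the full Ξ → Λπ → pππ angular distribution, and estimates σ(α) from a maximum likelihood fit at a few polarisation values. The asymmetry value, sample size and sampling scheme are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy illustration: statistical uncertainty of a decay-asymmetry parameter
# versus polarisation P, for f(c) = (1 + alpha*P*c)/2 with c = cos(theta).
rng = np.random.default_rng(1)
ALPHA_TRUE = -0.4     # assumed asymmetry parameter, for illustration only

def sample(n, P):
    """Accept-reject sampling of cos(theta) from (1 + alpha*P*c)/2."""
    c = rng.uniform(-1, 1, 3 * n)
    keep = rng.uniform(0, 1 + abs(ALPHA_TRUE * P), c.size) < 1 + ALPHA_TRUE * P * c
    return c[keep][:n]

def alpha_uncertainty(n, P):
    c = sample(n, P)
    nll = lambda a: -np.sum(np.log(0.5 * (1 + a * P * c)))
    res = minimize_scalar(nll, bounds=(-0.99 / P, 0.99 / P), method="bounded")
    # 1-sigma uncertainty from the curvature (Fisher information) at the minimum
    fisher = np.sum((P * c / (1 + res.x * P * c)) ** 2)
    return res.x, 1.0 / np.sqrt(fisher)

for P in (0.1, 0.5, 1.0):
    a_hat, sigma_a = alpha_uncertainty(100_000, P)
    print(f"P = {P:.1f}: alpha_hat = {a_hat:+.3f}, sigma(alpha) = {sigma_a:.4f}")
```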
103

Random Matrix Models

Tsolakidis, Evangelos January 2022 (has links)
In this thesis, we provide a self-contained introduction to the theory of random matrices and matrix models. The analysis proceeds chronologically, beginning with the study of nuclear energy levels and ensemble averaging, which yields the famous Wigner surmise. The standard Gaussian theory of the orthogonal, unitary and symplectic ensembles is then derived from symmetry arguments of the corresponding physical system, followed by the explicit calculation of Wigner's semicircle distribution. Interactions are then introduced to the free theory, leading to the topological expansion, a diagrammatic way of evaluating certain expectation values. It is also shown that there exists a duality between the resulting graphs and the quantization of a two-dimensional surface through a mapping, and a method for solving a specific family of potentials is presented. Finally, numerical confirmation of some observables is carried out using the Hamburger moment problem, and critical points are derived for some theories.
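
Wigner's semicircle distribution mentioned above can be checked numerically in a few lines; the sketch below diagonalises a single GOE-like matrix and compares the eigenvalue histogram with the semicircle density. The matrix size and scaling convention are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Numerical check of Wigner's semicircle law for a GOE-type random matrix:
# eigenvalues of the symmetrised Gaussian matrix below should follow
# rho(x) = sqrt(4 - x^2) / (2*pi) on [-2, 2] as N -> infinity.
rng = np.random.default_rng(2)
N = 2000
A = rng.normal(0.0, 1.0, (N, N))
H = (A + A.T) / np.sqrt(2 * N)     # symmetric, scaled so the support is [-2, 2]
eig = np.linalg.eigvalsh(H)

# Compare a coarse eigenvalue histogram with the semicircle density
hist, edges = np.histogram(eig, bins=20, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
for c, h, s in zip(centers, hist, semicircle):
    print(f"x = {c:+.2f}: empirical {h:.3f}  semicircle {s:.3f}")
```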
104

Pulse Shape Analysis of Si Detector Signals from Fission Fragments using the LOHENGRIN Spectrometer

Papaioannou, Dimitrios January 2023 (has links)
Nuclear physics experiments typically involve the collection and analysis of detector signals produced by the interaction of subatomic particles with matter in order to deduce various quantities. When heavy ions are involved, Si detector signals are distorted by the formation of a plasma-like cloud from the interaction between the heavy ions and the detector material. The signal amplitude is reduced and delayed, two effects known as the Pulse Height Defect (PHD) and the Plasma Delay Time (PDT). A recent experiment was performed at the Institut Laue-Langevin (ILL) experimental nuclear reactor facility in Grenoble, using the LOHENGRIN mass spectrometer, to study these effects. The purpose of this project is to use a subset of the data to perform pulse shape analysis and develop a parametrization of the pulse waveform, in order to better understand the PDT and PHD and how the pulses are affected. Initially, the PDT and PHD are estimated for masses 90, 100, 130 and 143 u using already established methods. The pulse waveforms are then investigated and a suitable parametrization of the pulse waveform is developed. The region around the pulse onset, which is important for extracting the timing characteristics of the pulse, is found to be described rather well by the Landau function. The Landau function parameters are further investigated and correlations with pulse shape characteristics are discussed. Finally, this novel parametrization is used as an alternative approach to estimate the PDT for the same masses as before. Comparisons between the two methods indicate that the PDT is actually a combined effect of the physical plasma delay and the walk effects introduced by the underlying triggering routine used during offline analysis.
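
To make the Landau-function parametrization of the pulse onset concrete, the sketch below fits a Landau-like (Moyal-approximation) shape to a toy digitised pulse with curve_fit. The functional form, parameter names and waveform values are illustrative assumptions; the exact parametrization used in the thesis may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy fit of a pulse leading edge with a Landau-like shape.  The Moyal form
# below is a common closed-form approximation of the Landau distribution.
def landau_like(t, amplitude, t0, width):
    z = (t - t0) / width
    return amplitude * np.exp(-0.5 * (z + np.exp(-z)))

rng = np.random.default_rng(3)
t = np.arange(0, 200, 1.0)                        # sample index (e.g. ns)
true = landau_like(t, amplitude=800.0, t0=60.0, width=12.0)
waveform = true + rng.normal(0.0, 5.0, t.size)    # toy digitised pulse with noise

popt, pcov = curve_fit(landau_like, t, waveform, p0=(700.0, 50.0, 10.0))
amp, t0, width = popt
print(f"fitted amplitude = {amp:.1f}, onset parameter t0 = {t0:.2f}, width = {width:.2f}")
# A timing estimate for PDT studies could then be taken from t0, or from a
# constant-fraction point of the fitted curve rather than the raw samples.
```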
105

Applications of Pulse Shape Analysis Techniques for Segmented Planar Germanium Detectors

Khaplanov, Anton January 2007 (has links)
The application of pulse shape analysis (PSA) and γ-ray tracking techniques has attracted a great deal of interest in recent years, in fields ranging from nuclear structure studies to medical imaging. These new data analysis methods add position sensitivity as well as directional information for the detected γ-rays to the excellent energy resolution of germanium detectors. This thesis focuses on the application of PSA to planar segmented germanium detectors, divided into three separate studies. The pulse shape analysis technique known as the matrix method was chosen due to its ability to treat events with an arbitrary number and combination of interactions within a single detector. It has been applied in two experiments with the 25-fold segmented planar pixel detector -- imaging and polarization measurements -- as well as in a simulation of upcoming detectors for DESPEC at NuSTAR/FAIR. In the first experiment, a point source of ¹³⁷Cs was imaged. Events where the 662 keV γ-rays scattered once and were then absorbed in a different segment were treated by the PSA algorithm in order to find the locations of these interactions. The Compton scattering formula was then used to determine the direction to the source. The experiment provided a robust test of the performance of the PSA algorithm on multiple-interaction events, in particular those with interactions in adjacent segments, and allowed the realistically attainable position resolution to be estimated. In the second experiment, the response of the detector to polarized photons of 288 keV was studied. The polarization of photons can be measured through the angular distribution of Compton-scattered photons; hence the ability to resolve the interaction locations once again proved useful. The third study focuses on the performance of the proposed planar germanium detectors for the DESPEC array. As these detectors had not yet been manufactured at the time of writing, a set of data simulated in GEANT4 was used. The detector response was calculated for two of the possible segmentation patterns -- one with a single pixelated contact and one where both contacts are segmented into mutually orthogonal strips. In both cases, PSA was applied in order to reconstruct the interaction locations from this response. It was found that the double-sided strip detector can achieve an overall better position resolution with a given number of readout channels. However, this comes at the expense of a small number of complex events where the reconstruction fails. These results have also been compared to the performance of the 25-fold pixelated detector.
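
The Compton-kinematics step used to point back to the source, once the PSA has provided the interaction positions and energies, can be sketched as follows. Only the 662 keV total energy and the standard Compton formula come from the text; the deposited-energy value in the example is an arbitrary illustration.

```python
import numpy as np

M_E_C2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e_total, e_dep1):
    """Opening half-angle (deg) of the Compton cone pointing back to the source.

    e_total : full photon energy (keV), e.g. 662 for 137Cs
    e_dep1  : energy deposited in the first interaction (keV)
    The source lies on a cone of this half-angle around the axis defined by
    the two reconstructed interaction positions.
    """
    e_scattered = e_total - e_dep1
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
    if abs(cos_theta) > 1.0:
        raise ValueError("kinematically forbidden energy split")
    return np.degrees(np.arccos(cos_theta))

# Example: a 662 keV photon depositing 300 keV in the first interaction
print(f"cone half-angle: {compton_cone_angle(662.0, 300.0):.1f} deg")
```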
106

Aspects of the ATLAS ITk Inner Tracker development for the high luminosity upgrade of the Large Hadron Collider

Steentoft, Jonas January 2022 (has links)
The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) necessitates that the ATLAS experiment replace its current Inner Detector (ID) system. The new Inner Tracker (ITk) will be an all-silicon detector, utilising both pixel and strip sensors, with the aim of performing as well as, or better than, the current system, but in a much more challenging environment. The ITk Strip detector will consist of 17888 modules, ∼700 of which will be produced in the Scandinavian ITk Cluster, a collaboration between the universities of Copenhagen, Lund, Oslo and Uppsala and the industrial partner NOTE. This work encompasses the journey from individual components, through industrial-scale module assembly, to performance evaluation studies at the DESY II testbeam facility. Optimisation studies were performed on the correlated multi-variable calibration needed for a glue robot to precisely and reliably dispense the two-component epoxy used to bond the front-end electronics to the silicon sensor. Procedures and tools were developed to integrate this process into an industrial workflow and to accommodate future fundamental changes, such as a switch of epoxy. To demonstrate sufficient tracking performance of ITk strip modules, even at end-of-life, testbeam campaigns with pre-irradiated modules are conducted. These campaigns serve as vital feasibility studies for the ITk as a whole. Reconstruction of end-cap type modules has historically been tricky due to their complex geometry. This work presents the full integration of semi-automated end-cap module reconstruction in the Corryvreckan testbeam analysis framework. This represents a major improvement in turnaround time from raw data to final result, bringing the previously impossible concept of live reconstruction during testbeam campaigns within reach.
107

Generation and detection of entangled single-photon pairs

Habtezion, Gabriella Tesfamichael January 2024 (has links)
Quantum information technology is an emerging field with important applications such as quantum cryptography and teleportation, quantum imaging and lithography. These applications make use of single photons and pairs of entangled photons. In this work, we experimentally generate and attempt to detect entangled photon pairs. The entangled photon pairs are produced in a nonlinear beta barium borate crystal through the process of spontaneous parametric down-conversion (SPDC). The alignment necessary to detect the entangled photon pairs is performed using a HeNe laser. The experimental results reveal key signatures of the down-converted photons: (i) energy conservation, as the wavelength of the generated photons (810 nm) is twice that of the photons used to optically pump the SPDC process (405 nm), shown using a 10-nm band-pass filter centred around 810 nm; (ii) the angles between the two photons of a pair correspond to the analytically calculated momentum-conservation configuration; (iii) the photons arrive at the detectors within the detectors' timing jitter; and (iv) the down-converted photons (810 nm) are polarised orthogonally to the pump photons (405 nm). These findings confirm the expected signatures of SPDC.
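
The energy- and momentum-conservation signatures listed above can be illustrated with a short numerical sketch for the degenerate 405 nm → 810 nm + 810 nm case; the refractive indices used for the phase-matching angle below are rough placeholders, not the actual BBO values of the setup.

```python
import numpy as np

# Sketch of the SPDC conservation checks for the degenerate case
# 405 nm -> 810 nm + 810 nm.  Refractive indices are illustrative only.
H_C = 1239.84  # eV*nm, photon energy E[eV] = H_C / lambda[nm]

lam_pump, lam_down = 405.0, 810.0
print(f"energy balance: {H_C/lam_pump:.3f} eV (pump) "
      f"vs {2*H_C/lam_down:.3f} eV (two down-converted photons)")

# Longitudinal phase matching inside the crystal,
#   k_pump = 2 * k_810 * cos(theta_internal),
# fixes the internal opening half-angle; Snell's law then gives the external
# angle at which each photon of the pair leaves the crystal.
n_pump, n_down = 1.6575, 1.6600      # illustrative indices at 405 nm / 810 nm
k_pump = n_pump / lam_pump           # common 2*pi factor cancels
k_down = n_down / lam_down
theta_int = np.arccos(k_pump / (2.0 * k_down))
theta_ext = np.arcsin(n_down * np.sin(theta_int))
print(f"internal half-angle: {np.degrees(theta_int):.2f} deg, "
      f"external half-angle: {np.degrees(theta_ext):.2f} deg")
```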
108

Signatures of Dark Matter at the LHC : A phenomenological study combining collider and cosmological bounds to constrain a vector dark matter particle model

Olsson, Anton January 2022 (has links)
Everything that humans have ever touched, created or built consists of a type of matter that makes up only 15 percent of the total matter in the universe. The remaining 85 percent is attributed to dark matter, an as yet undiscovered and non-luminous type of matter. In this thesis, a potential dark matter particle candidate is studied by investigating an extension of the SU(2) symmetry into a dark gauge sector, where the new sector is connected to the Standard Model through a vector-like fermion portal. In order to understand how such an extension is made, the Lagrangian density of the Standard Model and its different gauge sectors were derived. The cross sections of the process of dark matter pair production with tau leptons in the final state, in proton-proton collisions at the LHC, were simulated with the software MadGraph. The cross sections were used to draw significance contours for the exclusion and discovery regions over parts of the parameter space of the new model, for current and projected luminosities of the LHC. The projected-luminosity scans also consider how lowering the uncertainty in the number of background events, through hypothetical detector improvements, would affect the exclusion and discovery contours. The significance contours were combined with relic density constraints derived from comparisons between measurements by the Planck telescope and calculations from the software MicrOMEGAs. The resulting graphs show that there are non-forbidden regions of the parameter space that are significant for exclusion and discovery at the luminosity of current searches. Increasing the luminosity while keeping the uncertainty in the number of background events the same yielded only minor enlargements of the exclusion and discovery contours. Combining the projected luminosities with improvements to the background uncertainty instead produced exclusion and discovery regions significantly larger than those for the current luminosity.
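
The interplay between luminosity and background uncertainty described above can be illustrated with a simplified significance estimate, Z = s/√(b + Δb²). All cross sections, efficiencies and background yields in the sketch are invented placeholders rather than MadGraph or MicrOMEGAs outputs, and the thresholds 1.64 and 5 stand in for the exclusion and discovery criteria.

```python
import numpy as np

# Rough sensitivity sketch: turn a signal cross section into an expected
# event yield and a simple significance, then check the usual thresholds.
def significance(sigma_fb, lumi_fb, acc_eff, n_bkg, rel_bkg_unc):
    s = sigma_fb * lumi_fb * acc_eff          # expected signal events
    delta_b = rel_bkg_unc * n_bkg             # absolute background uncertainty
    return s / np.sqrt(n_bkg + delta_b**2)

scenarios = {
    "Run 2 (139 /fb, 20% bkg unc.)":   dict(lumi_fb=139.0,  rel_bkg_unc=0.20),
    "HL-LHC (3000 /fb, 20% bkg unc.)": dict(lumi_fb=3000.0, rel_bkg_unc=0.20),
    "HL-LHC (3000 /fb, 5% bkg unc.)":  dict(lumi_fb=3000.0, rel_bkg_unc=0.05),
}
for label, kw in scenarios.items():
    z = significance(sigma_fb=0.5, acc_eff=0.10,
                     n_bkg=50.0 * kw["lumi_fb"] / 139.0, **kw)
    tag = "discovery" if z >= 5 else "exclusion" if z >= 1.64 else "below thresholds"
    print(f"{label}: Z = {z:.2f} ({tag})")
```

With these placeholder numbers, scaling up the luminosity alone barely moves Z, while also shrinking the background uncertainty does, mirroring the qualitative conclusion of the abstract.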
109

Benchmark of simulation of an ion guide for neutron-induced fission products

Gao, Zhihao January 2022 (has links)
Independent yield distributions of high-energy neutron-induced fission are important for achieving a good understanding of fission. Even though the mass and charge yield distributions of thermal neutron-induced fission are well known, there are few experimental data for high-energy neutron-induced fission. In addition to basic research on the fission process, independent yield distributions of high-energy neutron-induced fission play a key role in the development of Generation IV fast nuclear reactors. To facilitate measurements of independent fission yields of high-energy neutron-induced fission, a dedicated ion guide and a proton-neutron (pn) converter were developed and put to use in experiments at the isotope separator facility IGISOL in Jyväskylä. In parallel, a simulation model of the system was developed in order to optimize the collection efficiency of fission products in the ion guide. The model uses the Monte Carlo code MCNPX to simulate the neutron production, the fission model code GEF to simulate the fission process, and GEANT4 for ion transport. In order to benchmark the simulation model, metal foils were inserted in the ion guide to collect fission products. At the same time, nickel, cobalt and indium foils were placed between the pn-converter and the ion guide to record the neutron flux from the pn-converter. After the beam was turned off, and after several days of cooling, γ-ray spectroscopy measurements of the foils were conducted using a well-shielded HPGe detector. Based on the identified γ-ray transitions in the spectroscopy data, the production of the corresponding fission products and neutron activation products was calculated and then used to benchmark the transport and collection of fission products, as well as the neutron production, in the simulations. The conclusion from the benchmark is that the transport of fission products in the helium gas, as simulated by GEANT4, agrees very well with the measurement, while the transport of fission products in the uranium targets agrees with the measurement within 10%. The neutron flux in the high-energy part of the neutron spectrum is overestimated by about 40%. Thanks to the benchmark, it has been shown that the predictive power of the model is satisfactory and sufficient for the purpose of modeling the ion guide. Furthermore, parameters of the setup, such as the neutron production, the distance between the neutron source and the ion guide, and the volume of the ion guide, play an important role in its optimization. However, the lower than expected fission rate suggests that optimizing these parameters may not be enough to achieve a sufficiently high intensity of fission products, especially for nuclei far from the stability line. To achieve a sufficiently high intensity, electric field guidance of the fission products, similar to the RF structure of the CARIBU gas catcher presented in G. Savard et al., Nucl. Instr. Meth. B 376: 246, 2016, is being considered.
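
The step from an identified γ-ray line to the number of produced nuclei can be sketched with the standard activation corrections (detection efficiency, branching ratio, decay during irradiation, cooling and measurement). All numbers in the example below are illustrative placeholders, not values from the IGISOL measurement.

```python
import numpy as np

# Sketch: number of nuclei produced during irradiation, inferred from a
# gamma-line peak area measured after a cooling period.
def produced_nuclei(peak_counts, efficiency, branching,
                    half_life_s, t_irr_s, t_cool_s, t_meas_s):
    lam = np.log(2) / half_life_s
    # decays during the measurement window, per nucleus present at its start
    decay_fraction = 1.0 - np.exp(-lam * t_meas_s)
    n_at_meas_start = peak_counts / (efficiency * branching * decay_fraction)
    # back-correct for decay during the cooling period
    n_at_end_of_irr = n_at_meas_start * np.exp(lam * t_cool_s)
    # correct for decay during a constant-rate irradiation (saturation factor)
    saturation = (1.0 - np.exp(-lam * t_irr_s)) / (lam * t_irr_s)
    return n_at_end_of_irr / saturation

n_prod = produced_nuclei(peak_counts=2.5e4, efficiency=0.012, branching=0.43,
                         half_life_s=3.0 * 24 * 3600, t_irr_s=12 * 3600,
                         t_cool_s=4 * 24 * 3600, t_meas_s=2 * 3600)
print(f"estimated nuclei produced during irradiation: {n_prod:.3e}")
```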
110

Determination of the homogeneity of the detection efficiency of silicon detectors using light ions

Hamarstedt, Ellen January 2022 (has links)
In this project, the homogeneity of the detection efficiency of two silicon detectors was examined using a radioactive alpha source, ²⁴¹Am, by exposing a small part of the detector surface at a time. By observing the variations of the deposited alpha energies at different positions on the detector, one can map differences in the homogeneity of the surface. Many variations of different magnitudes were found; some can reasonably be attributed to variations in the dead layer or to glue residue along the edges, while others seemed best explained by dust or dirt on the surface. The possibility of using heavy fission fragments from the decay of ²⁵²Cf to compare the effects was explored but shown not to be feasible within the scope of this project. Finally, proposals for further work and improvements are discussed.
