  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Investigation of the CP properties of VBF Higgs production in hadronic final states of H → ττ decays with the ATLAS detector

Ördek, Serhat 28 January 2021 (has links)
No description available.
42

Narrating rape at the Truth and Reconciliation Commission in South Africa

Rattazzi, Erin Alexis January 2005 (has links)
Includes bibliographical references. / The seven women who shared their stories of rape at the human rights violation hearings of the Truth and Reconciliation Commission ('TRC') in South Africa offer a nascent public record of women's experiences of rape under apartheid. This project is motivated by a desire to examine how these testimonies of rape were affected by explicit and implicit underlying narrative frameworks associated with the language of the TRC, and that of rape. In particular, this project analyses the extent to which the juxtaposition of these two frameworks at the TRC may have either enabled or constrained the seven women's narratives.
43

The Effects of Past Betrayals On Trust Behavior

Lam, Trenton D 01 January 2023 (has links) (PDF)
Experiencing interpersonal betrayals or trust violations often has negative consequences for victims when they form new relationships. Past studies have found that trauma from previous betrayals can impair victims' future trust and, in turn, their trust behavior. However, little research has empirically characterized this connection, and existing studies have provided conflicting results. The goal of this study was to explore the relationship between past trust violations, measured through the Brief Betrayal Trauma Survey (BBTS), and present self-reported trust and trust behavior. Differences in trust behavior between those with and without a history of betrayal were measured through an experimental economic trust game. Results showed that those with a history of betrayal trauma had marginally lower self-rated trust in strangers. While a history of betrayal trauma did not yield main effects on either first or average investments in the trust game, those with a history of betrayal had similar first and average investments in partners regardless of the trustworthiness of partners' visual cues. Victims of betrayal seem to lack discriminatory trust behavior or possibly disregard visual cues entirely. These findings add to the current understanding of how victims of interpersonal betrayal interpret and respond to visual cues, both initially and across multiple interactions, and are especially relevant for those, such as care providers, who aim to form close relationships with these individuals.
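The "investments" mentioned above refer to transfers in an economic trust game. As a point of reference only, the sketch below shows the payoff structure of the canonical Berg-style investment game; the endowment and multiplier are typical textbook values and are not taken from this study.

```python
# A minimal sketch of one round of the canonical investment ("trust") game.
# Endowment and multiplier are placeholder values, not the study's parameters.
def trust_game_round(endowment, multiplier, invested, returned):
    """Compute final payoffs for investor and trustee in one round.
    `invested` is what the investor sends; `returned` is what the trustee
    sends back out of the multiplied transfer."""
    assert 0 <= invested <= endowment
    pot = invested * multiplier          # amount received by the trustee
    assert 0 <= returned <= pot
    investor_payoff = endowment - invested + returned
    trustee_payoff = pot - returned
    return investor_payoff, trustee_payoff

# Example: invest 6 of a 10-unit endowment; the trustee returns half of the tripled pot.
print(trust_game_round(endowment=10, multiplier=3, invested=6, returned=9))  # (13, 9)
```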
44

Measurement of time-dependent CP asymmetries in the decays B → D*π using a partial reconstruction technique

Bahinipati, Seema January 2007 (has links)
No description available.
45

Search for the Lepton Flavor Violating Decay Z → eμ

Fernando, Waruna Sri 14 December 2010 (has links)
No description available.
46

Normalization of the Mu2e Charged Lepton Flavor Violation Experiment

Chen, Jijun 18 April 2024 (has links)
The Mu2e experiment is searching for Beyond-Standard-Model charged lepton flavor violation (CLFV) in the muon capture reaction μ⁻ + Al → e⁻ + Al. For comparison of the accessible energy scale, the Large Hadron Collider (LHC) can observe new physics at the few-TeV mass scale; by searching for μ-to-e conversion at a branching ratio sensitivity of 10⁻¹⁷, Mu2e will probe new physics at mass scales up to 10³ to 10⁴ TeV, far beyond the reach of any planned accelerator and surpassing the current world's best limit by a factor of 10⁴. In addition, there is no competing Standard Model process: the Standard Model rate for this conversion corresponds to a branching ratio below 10⁻⁵⁴. To report a reliable result, the number of stopped muons will be normalized to 10% precision using two γ-ray transitions and one atomic x-ray transition. The first, directly proportional to the CLFV signal, is the 1808.7 keV γ-ray emitted promptly in the muon capture process. The second is the 346.8 keV x-ray line from the 2p→1s atomic transition of muonic aluminum. The third is the 844 keV γ-ray from the subsequent β-decay. These signals must be measured in the presence of an energy-flux background of 3.2 × 10⁸ TeV/sec, consisting of muons, electrons, neutrons, x-rays, and γ-rays. Two complementary photon-counting detectors are used in the luminosity measurement. One, a LaBr₃ detector, is capable of high-rate operation up to and above 800 kcps with an energy resolution of 0.7%, producing statistically precise measurements. The other, an HPGe detector, offers an energy resolution of 0.1% with a limited rate capability of about 70 kcps, yielding measurements with low systematic error. Once the signals are found within the background, corrections must be understood and applied, including geometric factors, detector efficiency, branching ratios of the observed physics processes, signal loss during propagation to the detector, interfering lines, event loss due to pile-up, event loss due to algorithm miscalculation, time evolution of the signal, and others. The normalization measurement will be reported in real time every 5 to 10 minutes, and a comprehensive offline analysis will be undertaken using merged data sets.
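As a rough illustration of the normalization arithmetic described above, the sketch below divides an observed calibration-line yield by a per-stop emission probability and the detection corrections, and combines fractional uncertainties in quadrature. Every number and name in it is a placeholder for illustration, not a Mu2e value.

```python
import math

def stopped_muons_from_line(n_counts, intensity_per_stop, geom_acceptance,
                            det_efficiency, live_fraction, pileup_survival):
    """Infer the number of stopped muons from the observed counts in one
    calibration line (e.g. the 346.8 keV muonic-Al 2p->1s x-ray), dividing out
    the per-stop emission probability and the detection corrections."""
    denom = (intensity_per_stop * geom_acceptance * det_efficiency *
             live_fraction * pileup_survival)
    return n_counts / denom

def quad_sum(*fractional_errors):
    """Combine independent fractional uncertainties in quadrature."""
    return math.sqrt(sum(f * f for f in fractional_errors))

# All numbers below are hypothetical placeholders.
n_stops = stopped_muons_from_line(
    n_counts=1.2e6,          # fitted peak area in the x-ray line
    intensity_per_stop=0.80, # x-rays emitted per stopped muon
    geom_acceptance=1.5e-4,  # solid-angle fraction seen by the detector
    det_efficiency=0.05,     # full-energy peak efficiency at this energy
    live_fraction=0.9,       # DAQ live time
    pileup_survival=0.95,    # fraction of events not lost to pile-up
)
frac_err = quad_sum(0.01, 0.03, 0.05, 0.04, 0.02, 0.02)
print(f"stopped muons ~ {n_stops:.3e} +- {100*frac_err:.1f}%")
```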
47

Gamma Veto Detectors in the KOPIO Experiment

Graham, Nicholas L. 24 August 2006 (has links)
KOPIO is an experiment designed to search for the CP-symmetry-violating reaction K_L⁰ → π⁰νν̅. Depending on its accuracy, a measurement of this reaction's branching ratio could be the most precise determination of the CP-violation parameters of the Standard Model to date. The K_L⁰ → π⁰νν̅ reaction is exceedingly rare, with an expected branching ratio of (2.6 ± 1.2) × 10⁻¹¹. The rareness of this reaction means two things: 1) prodigious numbers of kaons are needed, and 2) a multitude of "improper" decays must be screened out by means of a veto detector system, part of which is being designed here at Virginia Tech. This detector must be able to detect the passage of the daughters of the undesired decay reactions (charged particles and gammas). It must be operational inside a magnetic field and must have signal timing fast enough to accommodate the rate at which these decays occur. A detector consisting of alternating layers of scintillator and lead, with wavelength-shifting fibers embedded in the scintillator, provides the characteristics sought. This paper presents the methodology used in the design and construction of this detector, as well as the results of signal property tests using both cosmic rays and gammas as event triggers. Also included is a discussion of transporting the detector signal outside of the magnetic field so it can be read by photomultiplier tubes located outside of the sweeping magnet. / Master of Science
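To make the "prodigious numbers of kaons" concrete, the snippet below does the back-of-the-envelope counting implied by the quoted branching ratio; the signal acceptance and target event yield are assumptions chosen purely for illustration.

```python
# Rough counting exercise: how many K_L decays are needed to expect a handful
# of K_L -> pi0 nu nubar signal events? Branching ratio from the abstract above;
# acceptance and target yield are hypothetical placeholders.
branching_ratio = 2.6e-11          # expected BR of K_L -> pi0 nu nubar
signal_acceptance = 0.05           # assumed overall acceptance after all vetoes
target_signal_events = 50          # assumed goal for the observed signal yield

kaon_decays_needed = target_signal_events / (branching_ratio * signal_acceptance)
print(f"K_L decays required: {kaon_decays_needed:.2e}")   # ~3.8e13 with these inputs
```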
48

Study of rephasing invariants via the Mellin transform / Étude des invariants de rephasage en passant par la transformée de Mellin

Pelletier-Dumont, Jasmine 02 February 2024 (has links)
From the first mention of neutrino mass by B. Pontecorvo in 1957 to recent neutrino experiments, the demonstration of their oscillatory behavior points to the need for physics beyond the Standard Model. Fortunately, neutrino oscillation does not only bring problems; this phenomenon, which can be explained by giving these particles a mass, also suggests a route toward explaining CP violation. With massive neutrinos, CP violation can be explained in the same way as for quarks, namely through phases in the mixing matrices. However, studying these phases does not by itself reveal CP violation, since they depend on the parametrization chosen for the mixing matrix, and quantities suited to that role must also be invariant under a change of basis. To meet these requirements, Jarlskog developed a more suitable formalism in 1985, based on quantities called rephasing invariants, which are the main object of this research project. The objectives are to compute and characterize the distributions of these invariants within the framework of the anarchy principle. This theoretical framework allows the entries of the PMNS matrix to be studied without imposing any symmetry from the outset, so that they appear random at low energy. It is possible to conclude that the Haar measure, which arises naturally from the anarchy principle, is likely to reproduce the PMNS matrix at low energies. A formalism is then developed to study the distributions of the rephasing invariants under the Haar measure. It is shown that, for a fixed number of neutrino generations, all rephasing invariants of the same type share the same distribution under the Haar measure. The distributions of the quadratic and quartic rephasing invariants under this measure are then computed using a new approach based on the Mellin transform, yielding fully analytical results whose physical implications as a function of the number of neutrino generations are finally discussed.
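The thesis obtains these distributions analytically via the Mellin transform; as a purely numerical illustration of the setup it describes, the sketch below samples mixing matrices from the Haar measure (the anarchic prior) and looks at the resulting distribution of a quartic, Jarlskog-type rephasing invariant. The function and variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def haar_unitary(n, rng):
    """Draw an n x n unitary matrix from the Haar measure: QR-decompose a
    complex Gaussian matrix, then fix the phases of R's diagonal."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def quartic_invariant(u, a=0, b=1, i=0, j=1):
    """Jarlskog-type quartic rephasing invariant Im(U_ai U_bj U*_aj U*_bi)."""
    return np.imag(u[a, i] * u[b, j] * np.conj(u[a, j]) * np.conj(u[b, i]))

rng = np.random.default_rng(0)
n_gen = 3                                # number of neutrino generations
samples = np.array([quartic_invariant(haar_unitary(n_gen, rng))
                    for _ in range(20000)])
print("mean:", samples.mean(), "std:", samples.std())   # mean ~ 0 by symmetry
```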
49

Search for the lepton flavor violating decays Bs → τμ and Bd → τμ with the LHCb experiment / Recherche des désintégrations violant la saveur leptonique Bs → τμ et Bd → τμ avec l'expérience LHCb

Arnau Romeu, Joan 10 September 2018 (has links)
The decay B(s) → τμ is suppressed in the SM, in which lepton flavour is conserved. Its observation would therefore be unambiguous evidence of physics beyond the SM. Recent results [1,2] have revived interest in searches for such processes [3]. This thesis presents the search for the B(s) → τμ decays within the LHCb experiment, one of the four large experiments operated at the CERN Large Hadron Collider (LHC). The τ lepton decays before reaching the LHCb detector and is reconstructed using the τ → πππν channel. The neutrino from the τ decay escapes detection, so a specific reconstruction technique is used to infer the energy of the neutrino and thus the invariant mass of the decaying B meson; in this way, the complete kinematics of the process can be solved up to a twofold ambiguity. In order to disentangle signal from background, an offline selection consisting of several steps is applied. Data-driven and multivariate analysis techniques, such as Boosted Decision Trees (BDTs), are used during the selection process. The analysis strategy is completed by a simultaneous fit to the B-meson invariant-mass distribution over the bins of a final BDT. According to the SM expectations, no signal events should be observed; in that case, the CLs method will be used to extract upper limits on the branching fractions.
[1] Test of lepton universality using B⁺ → K⁺ℓ⁺ℓ⁻ decays, Phys. Rev. Lett. 113, 151601 (2014)
[2] Measurement of the ratio of branching fractions BR(B → D*τν)/BR(B → D*μν), Phys. Rev. Lett. 115, 111803 (2015)
[3] Lepton Flavour Violation in B Decays?, Phys. Rev. Lett. 114, 091801 (2015)
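As a toy illustration of the CLs prescription mentioned above, the sketch below computes a 95% CL upper limit for a single-bin Poisson counting experiment. The real analysis fits the B-meson mass spectrum simultaneously across BDT bins; the background estimate and observed count here are placeholders, not LHCb results.

```python
from scipy.stats import poisson

def cls_value(s, b, n_obs):
    """CLs for a single-bin counting experiment: the p-value of the
    signal+background hypothesis divided by that of background-only."""
    cl_sb = poisson.cdf(n_obs, s + b)   # P(N <= n_obs | s + b)
    cl_b = poisson.cdf(n_obs, b)        # P(N <= n_obs | b)
    return cl_sb / cl_b

def upper_limit(b, n_obs, cl=0.95, step=0.01):
    """Scan the signal yield upward until CLs drops below 1 - cl."""
    s = 0.0
    while cls_value(s, b, n_obs) > 1.0 - cl:
        s += step
    return s

# Placeholder numbers: 4 expected background events, 3 observed.
print("95% CL upper limit on the signal yield:", upper_limit(b=4.0, n_obs=3))
```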
50

Effective fault localization techniques for concurrent software

Park, Sang Min 12 January 2015 (has links)
Multicore and Internet cloud systems have been widely adopted in recent years and have resulted in the increased development of concurrent programs. However, concurrency bugs are still difficult to test and debug for at least two reasons: concurrent programs have a large interleaving space, and concurrency bugs involve complex interactions among multiple threads. Existing testing solutions for concurrency bugs have focused on exposing concurrency bugs in the large interleaving space, but they often do not provide debugging information for developers to understand the bugs. To address the problem, this thesis proposes techniques that help developers in debugging concurrency bugs, particularly for locating the root causes and for understanding them, and presents a set of empirical user studies that evaluates the techniques. First, this thesis introduces a dynamic fault-localization technique, called Falcon, that locates single-variable concurrency bugs as memory-access patterns. Falcon uses dynamic pattern detection and statistical fault localization to report a ranked list of memory-access patterns for root causes of concurrency bugs. The overall Falcon approach is effective: in an empirical evaluation, we show that Falcon almost always ranks the program fragments corresponding to the root cause of the concurrency bug as "most suspicious". In principle, such a ranking can save a developer's time by allowing him or her to quickly home in on the problematic code, rather than having to sort through many reports. Others have shown that single- and multi-variable bugs cover a high fraction of all concurrency bugs that have been documented in a variety of major open-source packages; thus, being able to detect both is important. Because Falcon is limited to detecting single-variable bugs, we extend the Falcon technique to handle both single-variable and multi-variable bugs, using a unified technique, called Unicorn. Unicorn uses online memory monitoring and offline memory-pattern combination to handle multi-variable concurrency bugs. The overall Unicorn approach is effective in ranking memory-access patterns for single- and multi-variable concurrency bugs. To further assist developers in understanding concurrency bugs, this thesis presents a fault-explanation technique, called Griffin, that provides more context about the root cause than Unicorn. Griffin reconstructs the root cause of concurrency bugs by grouping suspicious memory accesses, finding suspicious method locations, and presenting calling stacks along with the buggy interleavings. By providing this additional context, the overall Griffin approach can give the developer more information at a higher level, allowing him or her to more readily diagnose complex bugs that may cross file or module boundaries. Finally, this thesis presents a set of empirical user studies that investigates the effectiveness of the presented techniques. In particular, the studies compare the effectiveness of a state-of-the-art debugging technique with that of our debugging techniques, Unicorn and Griffin. Among our findings, the user study shows that while the techniques are indistinguishable when the fault is relatively simple, Griffin is most effective for more complex faults. This observation further suggests that there may be a need for a spectrum of tools or interfaces that depend on the complexity of the underlying fault or even the background of the user.
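The statistical ranking step that Falcon and Unicorn rely on can be pictured with a small sketch: count how often each memory-access pattern appears in failing versus passing executions and score it with a suspiciousness metric. The Ochiai formula and the toy pattern names below are stand-ins for illustration; the exact scoring used in the thesis may differ.

```python
import math
from collections import defaultdict

def rank_patterns(runs):
    """Rank memory-access patterns by a statistical suspiciousness score.
    `runs` is a list of (observed_patterns, passed) pairs, one per execution.
    Uses the Ochiai formula as a stand-in for the metric in the thesis."""
    failed_with = defaultdict(int)   # failing runs in which the pattern appeared
    passed_with = defaultdict(int)   # passing runs in which the pattern appeared
    total_failed = sum(1 for _, passed in runs if not passed)

    for patterns, passed in runs:
        for p in patterns:
            if passed:
                passed_with[p] += 1
            else:
                failed_with[p] += 1

    scores = {}
    for p in set(failed_with) | set(passed_with):
        ef, ep = failed_with[p], passed_with[p]
        denom = math.sqrt(total_failed * (ef + ep))
        scores[p] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy data: pattern names are made up; the W-W conflict appears only in failing runs.
runs = [
    ({"R-W-R on counter", "W-W conflict on x"}, False),
    ({"W-W conflict on x"}, False),
    ({"R-W-R on counter"}, True),
    ({"R-W-R on counter"}, True),
]
for pattern, score in rank_patterns(runs):
    print(f"{score:.2f}  {pattern}")
```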
