111

New physics at the LHC : direct and indirect probes

Lewis, Dave January 2017 (has links)
This thesis presents the results of two searches for new physics performed with the ATLAS experiment. The first, a search for the rare B-meson decay Bs → μμ and a measurement of its branching ratio, uses 25 fb⁻¹ of √s = 7 and 8 TeV data recorded during 2011 and 2012. After observing a small number of these decays, a branching ratio of B(Bs → μμ) = (0.9 +1.1 −0.8) × 10⁻⁹ is measured, assuming non-negative event yields. This is compatible with the Standard Model at the 2σ level. The second, a search for direct pair production of the supersymmetric top quark partner (stop), is performed using 36.07 fb⁻¹ of √s = 13 TeV data recorded during 2015 and 2016. Final states with high jet multiplicity, no leptons and large missing transverse momentum are selected to target these decays, with several signal regions designed to cover a wide range of particle masses. No excess is observed, with all signal regions compatible with the Standard Model within 2σ. Limits are set on the stop mass, excluding up to m(t̃₁) = 940 GeV for values of m(χ̃₁⁰) below 160 GeV, assuming a 100% branching fraction for t̃₁ → t χ̃₁⁰ decays. In addition, two reinterpretations of these data are presented, for a gluino-mediated stop production scenario and a direct dark matter production scenario. No excess is observed for either model, and limits are set on the masses of the relevant particles. Finally, a viability study into using machine learning techniques to improve on existing SUSY search methods is presented, with promising initial results.
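Measurements of this kind are commonly normalised to a well-measured reference decay channel rather than to an absolute luminosity; the sketch below illustrates that arithmetic only, and every numerical input is a placeholder rather than a value taken from the thesis.

```python
# Illustrative reference-channel normalisation for B(Bs -> mumu).
# All numbers below are placeholders, not the thesis's inputs or results.

def branching_ratio(n_sig, n_ref, eff_sig, eff_ref, br_ref, fs_over_fu):
    """B(Bs->mumu) = (N_sig/N_ref) * (eps_ref/eps_sig) * B_ref / (f_s/f_u)."""
    return (n_sig / n_ref) * (eff_ref / eff_sig) * br_ref / fs_over_fu

br = branching_ratio(
    n_sig=16,        # fitted signal yield (placeholder)
    n_ref=15000,     # reference-channel yield, e.g. B+ -> J/psi K+ (placeholder)
    eff_sig=0.05,    # signal acceptance x efficiency (placeholder)
    eff_ref=0.04,    # reference acceptance x efficiency (placeholder)
    br_ref=6.1e-5,   # approx. B(B+ -> J/psi K+) x B(J/psi -> mumu)
    fs_over_fu=0.24, # approx. b-quark fragmentation-fraction ratio f_s/f_u
)
print(f"order-of-magnitude illustration only: B(Bs -> mumu) ~ {br:.1e}")
```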
112

A study of tau identification with the CMS detector at the LHC

Ilten, Philip James January 2008 (has links)
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2008. / Includes bibliographical references (p. 49-50). / In this thesis I explore the identification of τ leptons from simulated reconstructed data that will be collected by the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) at CERN. The two components of particle identification, the efficiency of τ identification from generator-level information and the fake rate of the current default algorithm, have been determined and analyzed for a photon-plus-jets background sample and a QCD background sample. I propose a new τ lepton identification algorithm that employs a signal cone parametrized with respect to the τ transverse energy, and an isolation cone parametrized with respect to the charged particle density surrounding the τ jet. Using the default algorithm, an efficiency of 27.7% is achieved along with a photon-plus-jets fake rate of 1.96%. Using the proposed algorithm and matching the efficiency of the default algorithm, an efficiency of 26.9% and a fake rate of 0.44% is achieved. Approximately matching fake rates, an efficiency of 37.4% is achieved with a fake rate of 2.36%. / by Philip James Ilten. / S.B.
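The cone-based selection described above can be illustrated with a minimal sketch; the specific parametrisations used here (a signal cone shrinking as 5 GeV/E_T, a fixed isolation cone of 0.5, and plain track counting) are assumptions for illustration, not the tuning used in the thesis.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Wrap the azimuthal difference into (-pi, pi] before combining with eta
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def is_tau_candidate(jet, tracks, et_min=20.0, iso_cone=0.5, max_iso_tracks=0):
    """jet: dict with 'eta', 'phi', 'et'; tracks: list of dicts with 'eta', 'phi', 'pt'."""
    if jet["et"] < et_min:
        return False
    # Shrinking signal cone, assumed form: 5 GeV / E_T, clipped to [0.05, 0.15]
    sig_cone = min(0.15, max(0.05, 5.0 / jet["et"]))
    sig, iso = 0, 0
    for trk in tracks:
        dr = delta_r(jet["eta"], jet["phi"], trk["eta"], trk["phi"])
        if dr < sig_cone:
            sig += 1
        elif dr < iso_cone:
            iso += 1
    # Accept 1- or 3-prong candidates with an empty isolation annulus
    return sig in (1, 3) and iso <= max_iso_tracks

jet = {"eta": 0.3, "phi": 1.2, "et": 45.0}
tracks = [{"eta": 0.31, "phi": 1.19, "pt": 20.0}, {"eta": 0.9, "phi": 2.5, "pt": 3.0}]
print(is_tau_candidate(jet, tracks))  # True: one track in the signal cone, clean annulus
```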
113

Cosmological dynamics and structure formation

Gosenca, Mateja January 2018 (has links)
Observational surveys which probe our universe deeper and deeper into the nonlinear regime of structure formation are becoming increasingly accurate. This makes numerical simulations an essential tool for theory to be able to predict phenomena at comparable scales. In the first part of this thesis we study the behaviour of cosmological models involving a scalar field. We are particularly interested in the existence of fixed points of the dynamical system and the behaviour of the system in their vicinity. Upon addition of spatial curvature to the single-scalar-field model with an exponential potential, canonical kinetic term, and a matter fluid, we demonstrate the existence of two extra fixed points that are not present in the case without curvature. We also analyse the evolution of the equation-of-state parameter. In the second part, we numerically simulate collisionless particles in the weak-field approximation to General Relativity, allowing large gradients of the fields and relativistic velocities. To reduce the complexity of the problem and enable high-resolution simulations, we consider the spherically symmetric case. Comparing numerical solutions to the exact Schwarzschild and Lemaître-Tolman-Bondi solutions, we show that the scheme we use is more accurate than a Newtonian scheme, correctly reproducing the leading-order post-Newtonian behaviour. Furthermore, by introducing angular momentum, configurations corresponding to bound objects are found. In the final part, we simulate the conditions under which one would expect to form ultracompact minihalos, dark matter halos with a steep power-law profile. We show that an isolated object exhibits the profile predicted analytically. Embedding this halo in a perturbed environment, we show that its profile becomes progressively more similar to the Navarro-Frenk-White profile with increasing amplitude of perturbations. Next, we boost the power spectrum at a very early redshift during radiation domination on a chosen scale and simulate the clustering of dark matter particles at this scale until low redshift. In this scenario halos form earlier, have higher central densities, and are more compact.
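For context, fixed-point analyses of this kind usually recast the Friedmann and Klein-Gordon equations as an autonomous system in expansion-normalised variables; a standard flat-space formulation for an exponential potential with a pressureless matter fluid is sketched below (the curved case studied in the thesis adds a further curvature variable).

```latex
% Standard expansion-normalised variables, flat case with pressureless matter.
\[
  x \equiv \frac{\kappa\,\dot\phi}{\sqrt{6}\,H}, \qquad
  y \equiv \frac{\kappa\sqrt{V(\phi)}}{\sqrt{3}\,H}, \qquad
  V(\phi) = V_0\, e^{-\lambda\kappa\phi}, \qquad N = \ln a ,
\]
\[
  \frac{dx}{dN} = -3x + \frac{\sqrt{6}}{2}\,\lambda y^{2}
                  + \frac{3}{2}\,x\left(1 + x^{2} - y^{2}\right), \qquad
  \frac{dy}{dN} = -\frac{\sqrt{6}}{2}\,\lambda x y
                  + \frac{3}{2}\,y\left(1 + x^{2} - y^{2}\right).
\]
% Fixed points solve dx/dN = dy/dN = 0; the scalar-field equation of state is
\[
  w_\phi = \frac{x^{2} - y^{2}}{x^{2} + y^{2}} .
\]
```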
114

Trace-based post-silicon validation for VLSI circuits. / CUHK electronic theses & dissertations collection

January 2012 (has links)
The ever-increasing design complexity of modern circuits challenges our ability to verify their correctness. Therefore, various errors are more likely to escape the pre-silicon verification process and to manifest themselves after design tape-out. To address this problem, effective post-silicon validation is essential for eliminating design bugs before integrated circuit (IC) products are shipped to customers. In the debug process, it has become increasingly popular to insert design-for-debug (DfD) structures into the original design to facilitate real-time debug without interfering with the circuit's normal operation. For this so-called trace-based post-silicon validation technique, the key question is how to design such DfD circuits to achieve sufficient observability and controllability during the debug process with limited hardware overhead. In today's VLSI design flow, however, this is unfortunately conducted manually based on designers' own experience, which cannot guarantee debug quality. To tackle this problem, we propose a set of automatic tracing solutions as well as innovative DfD designs in this thesis. First, we develop a novel trace signal selection technique to maximize the visibility on debugging functional design errors. To strengthen the capability for tackling these errors, we then introduce a multiplexed signal tracing strategy with a trace signal grouping algorithm for maximizing the probability of catching the propagated evidence of functional design errors. Then, to effectively localize speedpath-related electrical errors, we propose an innovative trace signal selection solution as well as a trace qualification technique. On the other hand, we introduce several low-cost interconnection fabrics to effectively transfer trace data in post-silicon validation. We first propose to reuse the existing test channel for real-time trace data transfer, so that the routing cost of debug hardware is dramatically reduced; the method is further improved to avoid data corruption in multi-core debug. We then develop a novel interconnection fabric design and optimization technique, combining a multiplexer network and a non-blocking network, to achieve high debug flexibility with minimized hardware cost. Moreover, we introduce a hybrid trace interconnection fabric that is able to tolerate unknown values in "golden vectors" at the cost of little extra DfD overhead. With this fabric, we develop a systematic signal tracing procedure to automatically localize erroneous signals with just a few debug runs. Our empirical evaluation shows that the solutions presented in this thesis can greatly improve the validation quality of VLSI circuits, and ultimately enable the design and fabrication of reliable electronic devices. / Liu, Xiao. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 143-152). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Contents: Abstract; Acknowledgement; Preface; 1 Introduction; 2 State of the Art on Post-Silicon Validation; 3 Signal Selection for Visibility Enhancement; 4 Multiplexed Tracing for Design Error; 5 Tracing for Electrical Error; 6 Reusing Test Access Mechanisms; 7 Interconnection Fabric for Flexible Tracing; 8 Interconnection Fabric for Systematic Tracing; 9 Conclusion; Bibliography.
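Trace-signal selection of the kind developed in Chapter 3 is often framed as picking, under a trace-buffer budget, the signals that maximise a visibility or restorability score; the greedy sketch below illustrates that framing with a placeholder scoring function, not the thesis's gate-level restorability metric.

```python
# Generic greedy trace-signal selection: repeatedly add the signal whose
# inclusion gives the largest gain in an application-specific visibility
# score. The scoring function here is a toy placeholder.

from typing import Callable, Iterable, Set

def greedy_select(signals: Iterable[str],
                  visibility: Callable[[Set[str]], float],
                  budget: int) -> Set[str]:
    chosen: Set[str] = set()
    remaining = set(signals)
    for _ in range(budget):
        base = visibility(chosen)
        best, best_gain = None, 0.0
        for s in remaining:
            gain = visibility(chosen | {s}) - base
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:      # no candidate improves the score
            break
        chosen.add(best)
        remaining.remove(best)
    return chosen

# Toy visibility: each flip-flop contributes an independent weight
weights = {"ff1": 5.0, "ff2": 3.0, "ff3": 4.0, "ff4": 1.0}
toy_visibility = lambda sel: sum(weights[s] for s in sel)
print(greedy_select(weights, toy_visibility, budget=2))  # e.g. {'ff1', 'ff3'}
```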
115

Challenges and prospects of probing galaxy clustering with three-point statistics

Eggemeier, Alexander January 2018 (has links)
In this work we explore three-point statistics applied to the large-scale structure of our Universe. Three-point statistics, such as the bispectrum, encode information not accessible via the standard analysis method, the power spectrum, and thus offer the potential to greatly improve current constraints on cosmological parameters. They also present us with additional challenges, and we focus on two of these, arising from the measurement and the modelling points of view. The first challenge we address is the covariance matrix of the bispectrum, as a precise estimate of it is required when performing likelihood analyses. Covariance matrices are usually estimated from a set of independent simulations, whose minimum number scales with the dimension of the covariance matrix. Because there are many more possibilities of finding triplets of galaxies than pairs, compared to the power spectrum this approach becomes rather prohibitive. With this motivation in mind, we explore a novel alternative to the bispectrum: the line correlation function (LCF). It specifically targets information in the phases of density modes that are invisible to the power spectrum, making it a potentially more efficient probe than the bispectrum, which measures a combination of amplitudes and phases. We derive the covariance properties and the impact of shot noise for the LCF and compare these theoretical predictions with measurements from N-body simulations. Based on a Fisher analysis we assess the LCF's sensitivity to cosmological parameters, finding that it is particularly suited for constraining galaxy bias parameters and the amplitude of fluctuations. As a next step we contrast the Fisher information of the LCF with the full bispectrum and two other recently proposed alternatives. We show that the LCF is unlikely to achieve a lossless compression of the bispectrum information, whereas a modal decomposition of the bispectrum can reduce the size of the covariance matrix by at least an order of magnitude. The second challenge we consider in this work concerns the relation between the dark matter field and luminous tracers, such as galaxies. Accurate knowledge of this galaxy bias relation is required in order to reliably interpret the data gathered by galaxy surveys. On the largest scales the dark matter and galaxy densities are linearly related, but a variety of additional terms need to be taken into account when studying clustering on smaller scales. These have been fully included in recent power spectrum analyses, whereas the bispectrum model relied on simple prescriptions that were likely extended beyond their realm of validity. In addition, treating the power spectrum and bispectrum on different footings means that the two models become inconsistent on small scales. We introduce a new formalism that allows us to elegantly compute the missing bispectrum contributions from galaxy bias without running into the renormalization problem. Furthermore, we fit our new model to simulated data by implementing these contributions into a likelihood code. We show that they are crucial in order to obtain results consistent with those from the power spectrum, and that the bispectrum retains its capability of significantly reducing uncertainties in measured parameters when combined with the power spectrum.
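For orientation, the power spectrum and bispectrum referred to throughout are the two- and three-point correlators of Fourier density modes; the definitions below follow one common convention (prefactors vary between authors), and the phase field used by the line correlation function is noted for contrast.

```latex
% Common conventions assumed here; normalisations differ between authors.
\[
  \langle \delta(\mathbf{k}_1)\,\delta(\mathbf{k}_2) \rangle
    = (2\pi)^3\, \delta_D(\mathbf{k}_1 + \mathbf{k}_2)\, P(k_1),
\]
\[
  \langle \delta(\mathbf{k}_1)\,\delta(\mathbf{k}_2)\,\delta(\mathbf{k}_3) \rangle
    = (2\pi)^3\, \delta_D(\mathbf{k}_1 + \mathbf{k}_2 + \mathbf{k}_3)\,
      B(k_1, k_2, k_3),
\]
% The Dirac delta restricts the bispectrum to closed triangles of wavevectors.
% The line correlation function is instead built from the phase factors
\[
  \epsilon(\mathbf{k}) = \frac{\delta(\mathbf{k})}{|\delta(\mathbf{k})|},
\]
% which discard the amplitude information already carried by P(k).
```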
116

Higher-order methods for large-scale optimization

Fountoulakis, Kimon January 2015 (has links)
There has been increased interest in optimization for the analysis of large-scale data sets which require gigabytes or terabytes of data to be stored. A variety of applications originate from the fields of signal processing, machine learning and statistics. Seven representative applications are described below. - Magnetic Resonance Imaging (MRI): a medical imaging tool used to scan the anatomy and the physiology of a body. - Image inpainting: a technique for reconstructing degraded parts of an image. - Image deblurring: an image processing tool for removing the blurriness of a photo caused by natural phenomena, such as motion. - Radar pulse reconstruction. - Genome-Wide Association study (GWA): DNA comparison between two groups of people (with/without a disease) in order to investigate factors that a disease depends on. - Recommendation systems: classification of data (e.g., music or video) based on user preferences. - Data fitting: sampled data are used to simulate the behaviour of observed quantities, for example estimation of global temperature based on historical data. Large-scale problems impose restrictions on the methods that have so far been employed. The new methods have to be memory efficient and, ideally, should offer noticeable progress towards a solution within seconds. First-order methods meet some of these requirements: they avoid matrix factorizations, they have low memory requirements, and they sometimes offer fast progress in the initial stages of optimization. Unfortunately, as demonstrated by numerical experiments in this thesis, first-order methods miss essential information about the conditioning of the problems, which might result in slow practical convergence. The main advantage of first-order methods, relying only on simple gradient or coordinate updates, thus becomes their essential weakness. We do not think this inherent weakness of first-order methods can be remedied. For this reason, the present thesis aims at the development and implementation of inexpensive higher-order methods for large-scale problems.
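The conditioning argument can be made concrete with a toy quadratic (not taken from the thesis): gradient descent stalls when the Hessian's condition number is large, while a step that uses curvature information does not.

```python
import numpy as np

# Minimise f(x) = 0.5 x^T A x - b^T x with an ill-conditioned Hessian A.
# Gradient descent converges at a rate governed by the condition number,
# while a single Newton step solves a quadratic exactly.

A = np.diag([1.0, 1e4])            # condition number 1e4
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)     # exact minimiser

def grad(x):
    return A @ x - b

# Gradient descent with the optimal fixed step 2/(L + mu)
L, mu = 1e4, 1.0
x = np.zeros(2)
for _ in range(1000):
    x -= 2.0 / (L + mu) * grad(x)
print("gradient descent error after 1000 steps:", np.linalg.norm(x - x_star))

# One Newton step: x <- x - A^{-1} grad(x)
x = np.zeros(2)
x -= np.linalg.solve(A, grad(x))
print("Newton error after one step:            ", np.linalg.norm(x - x_star))
```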
117

Optimising alignment of a multi-element telescope

Kamga, Morgan M. 23 April 2013 (has links)
A thesis submitted to the Faculty of Science in fulfillment of the requirements of the degree of Doctor of Philosophy, School of Computational and Applied Mathematics, University of the Witwatersrand, September 20, 2012 / In this thesis, we analyse the reasons for poor image quality on the Southern African Large Telescope (SALT) and we analyse control methods for the segmented primary mirror. Errors in the control algorithm of SALT (circa 2007) are discovered. More powerful numerical procedures are developed; in particular, we show that the singular value decomposition method is preferred over the normal-equations method used on SALT. In addition, this method does not require physical constraints on some mirror parameters. Sufficiently accurate numerical procedures impose constraints on the precision of segment actuator displacements and edge sensors. We analyse the data filtering method used on SALT and find that it is inadequate for control. We give a filtering method that achieves improved control. Finally, we give a new method (gradient flow) that gives acceptable control from an arbitrary, imprecise initial alignment.
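The preference for the singular value decomposition over the normal equations can be illustrated on a synthetic least-squares problem (the matrices below are random placeholders, not SALT's actual actuator/sensor matrices): forming AᵀA squares the condition number, whereas a truncated SVD can discard the troublesome directions.

```python
import numpy as np

# Solve an ill-conditioned least-squares problem A x ~ y two ways:
# via the normal equations and via a truncated SVD (pseudoinverse).

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(200, 6)))        # 200 "sensor readings"
V, _ = np.linalg.qr(rng.normal(size=(6, 6)))          # 6 "mirror parameters"
s = np.array([1.0, 0.5, 0.1, 1e-4, 1e-6, 1e-8])       # nearly rank-deficient spectrum
A = U @ np.diag(s) @ V.T
x_true = np.ones(6)
y = A @ x_true + 1e-6 * rng.normal(size=200)          # noisy measurements

# Normal equations: cond(A^T A) = cond(A)^2, so noise is strongly amplified
x_ne = np.linalg.solve(A.T @ A, A.T @ y)

# Truncated SVD / pseudoinverse: singular values below the tolerance are dropped
x_svd = np.linalg.pinv(A, rcond=1e-5) @ y

print("normal equations error:", np.linalg.norm(x_ne - x_true))
print("truncated SVD error:   ", np.linalg.norm(x_svd - x_true))
```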
118

A study of large-scale focusing Schlieren systems

Goulding, John Stuart 19 May 2008 (has links)
The interrelationship between the variables involved in focusing schlieren systems is fairly well understood; however, how changing the variables affects the resultant images is not. In addition, modified grids and arrangements, such as two-dimensional, colour and retroreflective systems, have never been directly compared to a standard system. The existing theory is developed from first principles to its current state. An apparatus was specifically designed to test grid and arrangement issues while keeping the system geometry, optical components and the test object identical. Source grid line spacing and the ratio of clear line width to dark line width were varied to investigate the limits of diffraction and banding and to find an optimum grid for this apparatus. Two-dimensional, colour, retroreflective and a novel projected arrangement were then compared to this optimum case. In conclusion, the diffraction limit is accurately modelled by the mathematical equations. The banding limit is slightly less well modelled, as additional factors seem to affect the final image. Inherent problems with the two-dimensional and colour systems indicate that, while they can be useful, they are not worth developing further, although chromatism in the system meant that colour systems were not fully investigated. The retroreflective and projected systems have the most potential for large-scale use and should be developed further.
119

Radiation tolerant low power 12 bit ADC in 130 nm CMOS technology

Sousa, Filipe José Pereira Alves de January 2009 (has links)
Internship carried out at CERN, supervised by Dr Paulo Rodrigues Simões Moreira / Integrated master's thesis. Electrical and Computer Engineering (Major in Telecommunications). Faculdade de Engenharia, Universidade do Porto. 2009
120

Reducing the Complexity of Large Ecosystem Models.

Lawrie, Jock Sebastian, jock.lawrie@forethought.com.au January 2006 (has links)
During the 1990s a large-scale study of Port Phillip Bay, Australia, was undertaken by the CSIRO (the Commonwealth Scientific and Industrial Research Organisation, Australia's national research body). A major outcome of the study was a complex ecosystem model intended to provide scientific input into management decisions concerning the nutrient load to the bay. However, its development was costly and time-consuming. Given this effort, it is natural to seek smaller models (reduced models) that reproduce those performance measures of the large model (the full model) that are of interest to decision makers. This thesis is concerned with identifying such reduced models and, more generally, with developing methods for identifying them. Several methods are developed for this purpose, each simplifying the full model in a different way. In particular, methods are proposed for aggregating state variables, setting state variables to constants, simplifying links in the ecological network, and eliminating rates from the full model. Moreover, the methods can be implemented automatically, so that they are transferable to other ecological modelling situations and the reduced models are obtained objectively. In the case of the Port Phillip Bay model, significant reduction in model complexity is possible even when estimates of all the performance measures are of interest; this model is therefore unnecessarily complex. Furthermore, the most significant reductions in complexity occur when the methods are combined. With this in mind, a procedure for combining the methods is proposed that can be implemented for any ecological model with a large number of components. Aside from generating reduced models, the process of applying the methods reveals insights into the mechanisms built into the system. Such insights highlight the extent to which the model simplification process can be applied. Given the effectiveness of the model simplification process developed here, it is concluded that this process should be more routinely applied to large ecosystem models. In some cases, the full sequence of methods might prove too computationally expensive to justify its purpose. However, it is shown that even the application of a subset of the methods can yield both simpler models and insight into the structure and behaviour of the system being modelled.
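One of the reduction methods mentioned above, setting a state variable to a constant, can be sketched on a toy nutrient-phytoplankton-zooplankton (NPZ) model; the model, its parameters, and the performance measure below are invented for illustration and are unrelated to the CSIRO Port Phillip Bay model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy reduction test: freeze one state variable (zooplankton) at its
# full-model mean and check whether a chosen performance measure
# (time-averaged phytoplankton biomass) is still reproduced.

def npz_full(t, y, uptake=0.8, graze=0.4, mort=0.1):
    n, p, z = y
    growth = uptake * n * p / (1.0 + n)
    grazing = graze * p * z
    return [-growth + mort * z, growth - grazing - mort * p, grazing - mort * z]

def npz_reduced(t, y, z_const, uptake=0.8, graze=0.4, mort=0.1):
    # Reduced model: zooplankton held constant
    n, p = y
    growth = uptake * n * p / (1.0 + n)
    grazing = graze * p * z_const
    return [-growth + mort * z_const, growth - grazing - mort * p]

t_span, t_eval = (0.0, 200.0), np.linspace(0.0, 200.0, 1000)
full = solve_ivp(npz_full, t_span, [5.0, 1.0, 0.5], t_eval=t_eval)
z_mean = full.y[2].mean()
red = solve_ivp(npz_reduced, t_span, [5.0, 1.0], t_eval=t_eval, args=(z_mean,))

pm_full, pm_red = full.y[1].mean(), red.y[1].mean()
print(f"full: {pm_full:.3f}  reduced: {pm_red:.3f}  "
      f"relative error: {abs(pm_red - pm_full) / pm_full:.1%}")
```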
