About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
601

A Statistical Approach for Assessing Seismic Transitions Associated with Fluid Injections

Wang, Pengyun 01 December 2016 (has links)
The wide application of fluid injection has raised concern about the potential critical risk associated with induced seismicity. To help address this concern, this dissertation proposes a statistical approach for assessing seismic transitions associated with fluid injections by systematically analyzing instrumental measurements of seismic events. The assessment problem is challenging because the effects of wastewater injection on regional seismicity are uncertain and seismic and injection data are of limited availability. To overcome this challenge, three statistical methods are developed, each focused on a different aspect of the problem. The first method enables early detection of induced seismicity, potentially allowing site managers and regulators to act promptly and communities to prepare for increased seismic risk; the second method addresses the further need to quantitatively assess the transition of induced seismicity, which can reveal the underlying process and provide data to support probabilistic seismic hazard analysis; and the third method goes further, characterizing the spatial distribution of induced seismicity and accounting for its spatial evolution. All of the proposed methods are built on Bayesian principles, which provide a flexible inference framework for incorporating domain expertise and data uncertainty.
The effectiveness of the proposed methods is demonstrated on an earthquake dataset for the state of Oklahoma, with promising results: the detection method is able to issue warnings of induced seismicity well before the occurrence of severe consequences; the transition model provides a significantly better fit to the dataset than the classical model and sheds light on the underlying transition of induced seismicity in Oklahoma; and the spatio-temporal model provides the most comprehensive characterization of the dataset in terms of its spatial and temporal properties and is shown to have much better short-term forecasting performance than naïve methods. The proposed methods can be used in combination as a decision-support tool to identify, in a quantitative manner, areas with increasing levels of seismic risk, supporting a comprehensive assessment of which risk-mitigation strategy should be recommended.
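As an illustration of the kind of Bayesian early-detection analysis the abstract describes, the sketch below computes a posterior over the location of a change point in a series of yearly earthquake counts, assuming a Poisson rate before and after the change with conjugate Gamma priors integrated out analytically. The data, priors, and single-change-point assumption are hypothetical simplifications for illustration, not the dissertation's actual models.

```python
import math

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over the change-point index tau for yearly event counts,
    assuming Poisson(lam1) before tau and Poisson(lam2) from tau onward,
    with Gamma(a, b) priors on both rates marginalized analytically."""

    def seg_logml(segment):
        # log marginal likelihood of one Gamma-Poisson segment
        s, m = sum(segment), len(segment)
        return (a * math.log(b) - math.lgamma(a)
                + math.lgamma(a + s) - (a + s) * math.log(b + m)
                - sum(math.lgamma(c + 1) for c in segment))

    n = len(counts)
    log_post = {}
    for tau in range(1, n):  # change occurs between year tau-1 and year tau
        log_post[tau] = seg_logml(counts[:tau]) + seg_logml(counts[tau:])
    mx = max(log_post.values())
    post = {t: math.exp(v - mx) for t, v in log_post.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}
```

On a hypothetical count series that jumps from a few events per year to tens of events, the posterior concentrates sharply on the year of the jump, which is the basic signal an early-detection method of this kind exploits.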
602

Galaxy cluster mass estimation from stacked spectroscopic analysis

Farahi, Arya, Evrard, August E., Rozo, Eduardo, Rykoff, Eli S., Wechsler, Risa H. 21 August 2016 (has links)
We use simulated galaxy surveys to study: (i) how galaxy membership in redMaPPer clusters maps to the underlying halo population, and (ii) the accuracy of a mean dynamical cluster mass, M_sigma(lambda), derived from stacked pairwise spectroscopy of clusters with richness lambda. Using ~130,000 galaxy pairs patterned after the Sloan Digital Sky Survey (SDSS) redMaPPer cluster sample study of Rozo et al., we show that the pairwise velocity probability density function of central-satellite pairs with m_i < 19 in the simulation matches the form seen in Rozo et al. Through joint membership matching, we deconstruct the main Gaussian velocity component into its halo contributions, finding that the top-ranked halo contributes ~60 per cent of the stacked signal. The halo mass scale inferred by applying the virial scaling of Evrard et al. to the velocity normalization matches, to within a few per cent, the log-mean halo mass derived through galaxy membership matching. We apply this approach, along with miscentring and galaxy velocity bias corrections, to estimate the log-mean matched halo mass at z = 0.2 of SDSS redMaPPer clusters. Employing the velocity bias constraints of Guo et al., we find <ln(M_200c) | lambda> = ln(M_30) + alpha_m ln(lambda/30), with M_30 = (1.56 ± 0.35) × 10^14 M_⊙ and alpha_m = 1.31 ± 0.06 (stat) ± 0.13 (sys). Systematic uncertainty in the velocity bias of satellite galaxies overwhelmingly dominates the error budget.
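The reported mass-richness scaling relation can be evaluated directly. The sketch below encodes the best-fit parameters quoted above; it is a minimal point-estimate illustration and ignores the quoted statistical and systematic uncertainties.

```python
import math

# Best-fit parameters quoted in the abstract (SDSS redMaPPer, z = 0.2):
M30 = 1.56e14        # solar masses (quoted uncertainty +/- 0.35e14 omitted)
ALPHA_M = 1.31       # quoted +/- 0.06 (stat) +/- 0.13 (sys) omitted

def mean_log_mass(richness):
    """<ln M_200c | lambda> from the power-law scaling relation."""
    return math.log(M30) + ALPHA_M * math.log(richness / 30.0)

def mass_estimate(richness):
    """Point estimate of M_200c in solar masses for a given richness."""
    return math.exp(mean_log_mass(richness))
```

For example, doubling the richness from 30 to 60 raises the estimated mass by a factor of 2^1.31, roughly 2.5.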
603

Relationships among Attitude Extremity, Polarity, and Intensity

Hebert, Patrick J. 08 1900 (has links)
This study analyzes the implications of statistical correlations between the extremity and intensity variables, as defined by the social judgment instrument, and the polarity variable, as defined by the semantic differential scale.
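The correlational analysis described rests on the Pearson product-moment coefficient. A minimal implementation, with the attitude scores left as hypothetical inputs, might look like:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists,
    the statistic underlying extremity-intensity-polarity relationships."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return cov / (sx * sy)
```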
604

Iterative parameter mixing for distributed large-margin training of structured predictors for natural language processing

Coppola, Gregory Francis January 2015 (has links)
The development of distributed training strategies for statistical prediction functions is important for applications of machine learning generally, and the development of distributed structured prediction training strategies is important for natural language processing (NLP) in particular. With ever-growing data sets, this is true first because it is easier to increase computational capacity by adding more processor nodes than by increasing the power of individual nodes, and second because data sets are often collected and stored in different locations. Iterative parameter mixing (IPM) is a distributed training strategy in which each node in a network of processors optimizes a regularized average loss objective on its own subset of the total available training data, making stochastic (per-example) updates to its own estimate of the optimal weight vector and communicating with the other nodes by periodically averaging these estimates across the network. This algorithm has been contrasted with a close relative, called here the single-mixture optimization algorithm, in which each node stochastically optimizes an average loss objective on its own subset of the training data, operating in isolation until convergence, at which point the average of the independently created estimates is returned. Recent empirical results have suggested that the IPM strategy produces better models than the single-mixture algorithm, and the results of this thesis add to this picture. The contributions of this thesis are as follows. The first contribution is to produce and analyze an algorithm for decentralized stochastic optimization of regularized average loss objective functions.
This algorithm, which we call the distributed regularized dual averaging algorithm, improves over prior work on distributed dual averaging by providing a simpler algorithm (used in the rest of the thesis), better convergence bounds for the case of regularized average loss functions, and certain technical results that are used in the sequel. The central contribution of this thesis is to give an optimization-theoretic justification for the IPM algorithm. While past work has focused primarily on its empirical test-time performance, we give a novel perspective on this algorithm by showing that, in the context of distributed dual averaging, IPM constitutes a convergent optimization algorithm for arbitrary convex functions, while the single-mixture algorithm does not. Experiments confirm that the superior test-time performance of models trained using IPM, compared to single-mixture, correlates with better optimization of the objective value on the training set, a fact not previously reported. Furthermore, our analysis of general non-smooth functions justifies the use of distributed large-margin (support vector machine [SVM]) training of structured predictors, which we show yields better test performance than the IPM perceptron algorithm, the only version of IPM to have previously been given a theoretical justification. Our results confirm that IPM training can reach the same level of test performance as a sequentially trained model, and can reach better accuracies when one has a fixed budget of training time. Finally, we use the reduction in training time that distributed training allows to experiment with adding higher-order dependency features to a state-of-the-art phrase-structure parsing model. We demonstrate that adding these features improves out-of-domain parsing results of even the strongest phrase-structure parsing models, yielding a new state of the art for the popular train-test pairs considered.
In addition, we show that a feature-bagging strategy, in which component models are trained separately and later combined, is sometimes necessary to avoid feature under-training and to get the best performance out of large feature sets.
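The IPM scheme described above, specialized to simple hinge-loss (large-margin) updates on a linear model, can be sketched as follows. The sharding, learning rate, and simulated nodes are illustrative assumptions; the thesis's structured predictors and dual averaging machinery are far richer.

```python
def local_sgd_pass(w, shard, lr=0.1):
    """One stochastic pass of hinge-loss subgradient updates on a shard.
    Each example is (features, label) with label in {-1, +1}."""
    for x, y in shard:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin < 1.0:  # example violates the margin: take a step
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def iterative_parameter_mixing(shards, dim, epochs=20, lr=0.1):
    """IPM: each (simulated) node trains on its own shard, and the nodes'
    weight vectors are averaged after every epoch; the averaged vector is
    the starting point for the next round on every node."""
    w_avg = [0.0] * dim
    for _ in range(epochs):
        locals_ = [local_sgd_pass(list(w_avg), shard, lr) for shard in shards]
        w_avg = [sum(ws) / len(locals_) for ws in zip(*locals_)]
    return w_avg
```

The contrast with single-mixture is exactly the placement of the averaging step: here it happens every epoch and feeds back into all nodes, rather than once at the end of independent runs.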
605

Emulation of random output simulators

Boukouvalas, Alexis January 2010 (has links)
Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not directly amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies subsequent emulation analysis. For random-output, or stochastic, simulators the output dispersion, and thus variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations, and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling replicate-free designs in terms of both model parameter uncertainty and predictive variance.
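For a random-output simulator, the replicated observations mentioned above supply the raw material for a heteroscedastic emulator: at each design point, a sample mean and a log sample variance to which separate mean and variance models can then be fitted. A toy sketch, with a hypothetical stochastic simulator whose noise level grows with the input:

```python
import math
import random
import statistics

def simulator(x, runs, seed=0):
    """Hypothetical stochastic simulator: mean response sin(x), noise
    standard deviation 0.1 + 0.2*x (heteroscedastic). Stands in for an
    expensive random-output code."""
    rng = random.Random(seed)
    return [math.sin(x) + rng.gauss(0.0, 0.1 + 0.2 * x) for _ in range(runs)]

def replicate_summaries(design, runs=50):
    """Per design point: (input, sample mean, log sample variance).
    A heteroscedastic GP emulator would model the means with one process
    and the log-variance surface with another."""
    out = []
    for i, x in enumerate(design):
        ys = simulator(x, runs, seed=i)
        out.append((x, statistics.fmean(ys), math.log(statistics.variance(ys))))
    return out
```

Working with the log of the sample variance keeps the fitted variance surface positive, which is one reason log-variance parameterizations are common in heteroscedastic modelling.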
606

Analysis of failure time data with ordered categories of response

Berridge, Damon M. January 1991 (has links)
No description available.
607

A statistical analysis of education in Ghana 1950-1960

Ohemeng, Edward 01 June 1963 (has links)
No description available.
608

Asymmetric particle systems and last-passage percolation in one and two dimensions

Schmidt, Philipp January 2011 (has links)
This thesis studies three models: multi-type TASEP in discrete time, long-range last-passage percolation on the line, and convoy formation in a travelling servers model. All three models are relatively easy to state but show very rich and interesting behaviour. The TASEP is a basic model for a one-dimensional interacting particle system with non-reversible dynamics. We study some aspects of the TASEP in discrete time and compare the results to recently obtained results for the TASEP in continuous time. In particular we focus on stationary distributions for multi-type models, speeds of second-class particles, collision probabilities and the speed process. We consider various natural update rules.
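A discrete-time TASEP step under the parallel update rule, one natural choice among the update rules mentioned, might be sketched as follows (single-type particles on a ring; the thesis's multi-type analysis is much more involved):

```python
import random

def tasep_parallel_step(config, p, rng):
    """One discrete-time parallel update of the TASEP on a ring:
    every particle (1) whose right neighbour is empty (0) jumps right
    with probability p, all jumps decided simultaneously from the
    pre-step configuration. Note no two jumps can conflict: a site can
    only be jumped into if it was empty before the step."""
    n = len(config)
    jumps = [i for i in range(n)
             if config[i] == 1 and config[(i + 1) % n] == 0
             and rng.random() < p]
    new = list(config)
    for i in jumps:
        new[i], new[(i + 1) % n] = 0, 1
    return new
```

Because jumps are decided from the pre-step configuration, the particle count is conserved, and with p = 1 the alternating configuration simply shifts by one site.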
609

The formulation of the relativistic statistical mechanics

Suh, Kiu Suk. January 1957 (has links)
Call number: LD2668 .T4 1957 S94 / Master of Science
610

An econometric estimation of the demand for clothing in South Africa

11 September 2012 (has links)
M.A. / The purpose of this study is to document and build an econometric model of demand in the South African clothing industry. It is important to study the clothing industry because it is labour-intensive, and thus its growth and development could contribute positively toward eradicating the unemployment problem in South Africa. With the globalization of world economies and South Africa being a signatory to the GATT/WTO, the implications for this industry are manifold. The opening chapter lists the problem statement, identifies the method of research utilised and the relevance of the study. Chapter two looks at demand theory, particularly with regard to the quantitative techniques involved in its estimation; it focuses on regression theory and the evaluation of the results generated. The third chapter gives a background to the South African clothing industry, touching on aspects of current importance such as trade reform, international best practice and the key issues the industry has to deal with. Chapter four looks at the econometric aspects of the study. A near-perfect forecast was obtained, which attests to the stability and superiority of the model presented. The main findings of this study are that supply considerations such as the wage bill and the costs of inputs (e.g. textile materials) play an important part in the survival and prosperity of the industry. It also reveals that low productivity levels could be quickly rectified through the introduction of new organizational practices, human resource development, the development of quick-response relationships, and training to support new organizational practices. Finally, the study asserts that, while trade reform could necessitate painful adjustments, the industry could emerge a stronger world player.
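The demand estimation described in chapters two and four rests on regression theory. In its simplest single-regressor form (e.g. log quantity on log price, so the slope is a price elasticity), ordinary least squares reduces to closed-form expressions; the data below are hypothetical.

```python
def ols(xs, ys):
    """Simple ordinary least squares: returns (slope, intercept) for
    y = slope * x + intercept. With x = log price and y = log quantity,
    the slope is an estimate of the price elasticity of demand."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, ybar - slope * xbar
```

A full demand model of the kind the study describes would add further regressors (income, input costs, the wage bill) and diagnostic tests, but the estimation principle is the same.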
