  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

Reconstruction of foliations from directional information

Yeh, Shu-Ying January 2007 (has links)
In many areas of science, especially geophysics, geography and meteorology, the data are often directions or axes rather than scalars or unrestricted vectors. Directional statistics considers data which are mainly unit vectors lying in two- or three-dimensional space (R² or R³). One way in which directional data arise is as normals to foliations. A (codimension-1) foliation of R^d is a system of non-intersecting (d-1)-dimensional surfaces filling out the whole of R^d. At each point z of R^d, any given codimension-1 foliation determines a unit vector v normal to the surface through z. The problem considered here is that of reconstructing the foliation from observations (z_i, v_i), i = 1, ..., n. One way of doing this is rather similar to fitting smooth splines to data. That is, the reconstructed foliation has to be as close to the data as possible, while the foliation itself is not too rough. A tradeoff parameter is introduced to control the balance between smoothness and closeness. The approach used in this thesis is to take the surfaces to be surfaces of constant values of a suitable real-valued function h on R^d. The problem of reconstructing a foliation is translated into the language of Schwartz distributions, and a deep result in the theory of distributions is used to give the appropriate general form of the fitted function h. The model parameters are estimated by a simplified Newton method. Under appropriate distributional assumptions on v_1, ..., v_n, confidence regions for the true normals are developed and estimates of concentration are given.
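Purely as an illustration (not the thesis' exact criterion), a spline-like tradeoff of the kind described could be written as a penalized fit of the level function h, with λ the tradeoff parameter and J(h) a generic second-derivative roughness penalty:

```latex
% Illustrative only: closeness of the unit normals of h to the observed
% directions plus a roughness penalty, balanced by the tradeoff parameter \lambda.
\min_{h}\; \sum_{i=1}^{n} \Bigl\| v_i - \frac{\nabla h(z_i)}{\|\nabla h(z_i)\|} \Bigr\|^{2}
\;+\; \lambda\, J(h),
\qquad
J(h) = \int_{\mathbb{R}^{d}} \sum_{|\alpha| = 2} \bigl( D^{\alpha} h(z) \bigr)^{2} \, dz
```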
462

Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing

Dunu, Emeka Samuel 05 1900 (has links)
In comparing the means of two normally distributed populations with unknown variances, two tests that are very often used are the two-independent-sample and the paired-sample t tests. There is a possible gain in the power of the significance test by using the paired-sample design instead of the two-independent-samples design.
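The basic power comparison mentioned above can be illustrated by a small simulation (a hedged sketch with arbitrary sample size, effect size and correlation; the thesis' proposed tests for partially missing data are not reproduced):

```python
# Hedged sketch: power of the paired vs. two-independent-sample t test on
# positively correlated normal pairs (complete data only; the thesis' tests
# for partially missing data are not reproduced here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, delta, rho, sigma, alpha, reps = 30, 0.5, 0.6, 1.0, 0.05, 5000
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])

power_paired = power_indep = 0
for _ in range(reps):
    xy = rng.multivariate_normal([0.0, delta], cov, size=n)
    x, y = xy[:, 0], xy[:, 1]
    power_paired += stats.ttest_rel(x, y).pvalue < alpha
    power_indep += stats.ttest_ind(x, y).pvalue < alpha

print(f"paired t test power ≈ {power_paired / reps:.3f}")
print(f"independent t test power ≈ {power_indep / reps:.3f}")
```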
463

Discounting the role of causal attributions in the ANOVA model of attribution

Unknown Date (has links)
For years attribution research has been dominated by the ANOVA model of behavior, which proposes that people construct their dispositional attributions of others by carefully comparing and weighing all situational information, using mental computations similar to the processes researchers use to analyze data. A preliminary experiment successfully determined that participants were able to distinguish differences in variability assessed across persons (high vs. low consensus) and across situations (high vs. low distinctiveness). It was also clear that the participants could evaluate varying levels of situational constraint. A primary experiment, administered to participants immediately after the preliminary study, determined that participants grossly under-utilized those same variables when making dispositional attributions. The results gave evidence against the use of traditional ANOVA models and support for the use of the Behavior Averaging Principle of Attribution. / by Kori A. Hakala. / Thesis (M.A.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
464

Survival Analysis using Bivariate Archimedean Copulas

Chandra, Krishnendu January 2015 (has links)
In this dissertation we solve the nonidentifiability problem of Archimedean copula models based on dependent censored data (see [Wang, 2012]). We give a set of identifiability conditions for a special class of bivariate frailty models. Our simulation results show that our proposed model is identifiable under our proposed conditions. We use the EM algorithm to estimate the unknown parameters, and the proposed estimation approach can be applied to fit dependent censored data when the dependence is of research interest. The marginal survival functions can be estimated using the copula-graphic estimator (see [Zheng and Klein, 1995] and [Rivest and Wells, 2001]) or the estimator proposed by [Wang, 2014]. We also propose two model selection procedures for Archimedean copula models, one for uncensored data and the other for right-censored bivariate data. Our simulation results are similar to those of [Wang and Wells, 2000] and suggest that both procedures work quite well. The idea of our proposed model selection procedure originates from the procedure proposed by [Wang and Wells, 2000] for right-censored bivariate data, which uses the L2 norm corresponding to the Kendall distribution function. A suitable bootstrap procedure is yet to be suggested for our method. We further propose a new parameter estimator and a simple goodness-of-fit test for Archimedean copula models when the bivariate data are under fixed left truncation. Our simulation results suggest that our procedure needs to be improved so that it can be more powerful, reliable and efficient. In our strategy, to obtain estimates for the unknown parameters, we heavily exploit the concept of truncated tau (a measure of association established by [Manatunga and Oakes, 1996] for left-truncated data). The idea of our goodness-of-fit test originates from the goodness-of-fit test for Archimedean copula models proposed by [Wang, 2010] for right-censored bivariate data.
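As a minimal, hedged illustration of the Archimedean copulas and the Kendall-type association measures that the abstract refers to (not the censored- or truncated-data estimators developed in the dissertation), one can simulate from a Clayton copula and check the relation τ = θ/(θ+2):

```python
# Hedged illustration: simulate from a Clayton (Archimedean) copula and check
# the theoretical Kendall's tau = theta / (theta + 2). This is only a toy
# example, not the censored/truncated-data estimators developed in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta, n = 2.0, 20000

# Marshall-Olkin sampling: V ~ Gamma(1/theta, 1); U_j = (1 + E_j / V)^(-1/theta)
v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
e = rng.exponential(size=(n, 2))
u = (1.0 + e / v[:, None]) ** (-1.0 / theta)

tau_hat, _ = stats.kendalltau(u[:, 0], u[:, 1])
print(f"empirical tau ≈ {tau_hat:.3f}, theoretical tau = {theta / (theta + 2):.3f}")
```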
465

Asymptotic Theory and Applications of Random Functions

Li, Xiaoou January 2016 (has links)
Random functions are a central component of many statistical and probabilistic problems. This dissertation presents theoretical analysis and computation for random functions and their applications in statistics. It consists of two parts. The first part is on the topic of classic continuous random fields. We present asymptotic analysis and computation for three non-linear functionals of random fields. In Chapter 1, we propose an efficient Monte Carlo algorithm for computing P{sup_{t∈T} f(t) > b} when b is large and f is a Gaussian random field living on a compact subset T. For each pre-specified relative error ɛ, the proposed algorithm runs in constant time for an arbitrarily large b and computes the probability with relative error ɛ. In Chapter 2, we present the asymptotic analysis for the tail probability of ∫_T e^{σf(t)+μ(t)}dt under the asymptotic regime that σ tends to zero. In Chapter 3, we consider partial differential equations (PDEs) with random coefficients, and we develop an unbiased Monte Carlo estimator with finite variance for computing expectations of the solution to random PDEs. Moreover, the expected computational cost of generating one such estimator is finite. In this analysis, we employ a quadratic approximation to solve random PDEs and perform a precise error analysis of this numerical solver. The second part of this dissertation focuses on topics in statistics. The random functions of interest are likelihood functions, whose maxima play a key role in statistical inference. We present asymptotic analysis for likelihood-based hypothesis tests and sequential analysis. In Chapter 4, we derive an analytical form for the exponential decay rate of the error probabilities of the generalized likelihood ratio test for testing two general families of hypotheses. In Chapter 5, we study asymptotic properties of the generalized sequential probability ratio test, whose stopping rule is the first boundary-crossing time of the generalized likelihood ratio statistic. We show that this sequential test is asymptotically optimal in the sense that it achieves asymptotically the shortest expected sample size as the maximal type I and type II error probabilities tend to zero. These results have important theoretical implications in hypothesis testing, model selection, and other areas where maximum likelihood is employed.
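For context, the quantity P{sup_{t∈T} f(t) > b} in Chapter 1 can always be approximated by crude Monte Carlo on a grid; the sketch below is that naive baseline (with an assumed squared-exponential covariance), not the constant-cost algorithm proposed in the dissertation, whose point is precisely to avoid the cost blow-up of this approach for large b:

```python
# Hedged sketch: crude Monte Carlo for P{ sup_T f(t) > b } with f a Gaussian
# field on a 1-D grid (squared-exponential covariance). This is the naive
# baseline whose cost blows up for large b, not the efficient constant-time
# algorithm developed in the dissertation.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)                       # discretized index set T
cov = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.1)  # squared-exponential kernel
L = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size)) # jitter for stability

b, reps = 2.5, 100000
f = L @ rng.standard_normal((t.size, reps))          # reps field realizations
p_hat = np.mean(f.max(axis=0) > b)
print(f"crude MC estimate of P(sup f > b) ≈ {p_hat:.5f}")
```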
466

A study on model selection of binary and non-Gaussian factor analysis.

January 2005 (has links)
An, Yujia. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 71-76). / Abstracts in English and Chinese.
Table of contents: Abstract (p.ii); Acknowledgement (p.iv)
Chapter 1 Introduction (p.1): 1.1 Background (p.1); 1.1.1 Review on BFA (p.2); 1.1.2 Review on NFA (p.3); 1.1.3 Typical model selection criteria (p.5); 1.1.4 New model selection criterion and automatic model selection (p.6); 1.2 Our contributions (p.7); 1.3 Thesis outline (p.8)
Chapter 2 Combination of B and BI architectures for BFA with automatic model selection (p.10): 2.1 Implementation of BFA using BYY harmony learning with automatic model selection (p.11); 2.1.1 Basic issues of BFA (p.11); 2.1.2 B-architecture for BFA with automatic model selection (p.12); 2.1.3 BI-architecture for BFA with automatic model selection (p.14); 2.2 Local minima in B-architecture and BI-architecture (p.16); 2.2.1 Local minima in B-architecture (p.16); 2.2.2 One unstable result in BI-architecture (p.21); 2.3 Combination of B- and BI-architecture for BFA with automatic model selection (p.23); 2.3.1 Combine B-architecture and BI-architecture (p.23); 2.3.2 Limitations of BI-architecture (p.24); 2.4 Experiments (p.25); 2.4.1 Frequency of local minima occurring in B-architecture (p.25); 2.4.2 Performance comparison for several methods in B-architecture (p.26); 2.4.3 Comparison of local minima in B-architecture and BI-architecture (p.26); 2.4.4 Frequency of unstable cases occurring in BI-architecture (p.27); 2.4.5 Comparison of performance of three strategies (p.27); 2.4.6 Limitations of BI-architecture (p.28); 2.5 Summary (p.29)
Chapter 3 A Comparative Investigation on Model Selection in Binary Factor Analysis (p.31): 3.1 Binary Factor Analysis and ML Learning (p.32); 3.2 Hidden Factors Number Determination (p.33); 3.2.1 Using Typical Model Selection Criteria (p.33); 3.2.2 Using BYY Harmony Learning (p.34); 3.3 Empirical Comparative Studies (p.36); 3.3.1 Effects of Sample Size (p.37); 3.3.2 Effects of Data Dimension (p.37); 3.3.3 Effects of Noise Variance (p.39); 3.3.4 Effects of Hidden Factor Number (p.43); 3.3.5 Computing Costs (p.43); 3.4 Summary (p.46)
Chapter 4 A Comparative Investigation on Model Selection in Non-Gaussian Factor Analysis (p.47): 4.1 Non-Gaussian Factor Analysis and ML Learning (p.48); 4.2 Hidden Factor Determination (p.51); 4.2.1 Using Typical Model Selection Criteria (p.51); 4.2.2 BYY Harmony Learning (p.52); 4.3 Empirical Comparative Studies (p.55); 4.3.1 Effects of Sample Size on Model Selection Criteria (p.56); 4.3.2 Effects of Data Dimension on Model Selection Criteria (p.60); 4.3.3 Effects of Noise Variance on Model Selection Criteria (p.64); 4.3.4 Discussion on Computational Cost (p.64); 4.4 Summary (p.68)
Chapter 5 Conclusions (p.69)
Bibliography (p.71)
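The typical model selection criteria listed in the contents (AIC, BIC and the like) can be illustrated, in a hedged way, on ordinary Gaussian factor analysis; binary and non-Gaussian factor analysis and BYY harmony learning require specialized implementations that are not sketched here:

```python
# Hedged sketch: choosing the number of hidden factors with AIC/BIC for plain
# Gaussian factor analysis (a stand-in; the thesis studies binary/non-Gaussian
# FA and BYY harmony learning, which need specialized code).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n, d, k_true = 500, 10, 3
loadings = rng.normal(size=(d, k_true))
X = rng.normal(size=(n, k_true)) @ loadings.T + 0.5 * rng.normal(size=(n, d))

for k in range(1, 7):
    fa = FactorAnalysis(n_components=k, random_state=0).fit(X)
    loglik = n * fa.score(X)                # score() is mean log-likelihood
    n_par = d * k + d - k * (k - 1) // 2    # loadings + noise variances,
                                            # minus rotational indeterminacy
    aic = -2 * loglik + 2 * n_par
    bic = -2 * loglik + np.log(n) * n_par
    print(f"k={k}: AIC={aic:.1f}, BIC={bic:.1f}")
```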
467

Regression methods in multidimensional prediction and estimation

Björkström, Anders January 2007 (has links)
In regression with nearly collinear explanatory variables, the least squares predictor has large variance. Ordinary least squares regression (OLSR) often leads to unrealistic regression coefficients. Several regularized regression methods have been proposed as alternatives. Well known are principal components regression (PCR), ridge regression (RR) and continuum regression (CR). The latter two involve a continuous metaparameter, offering additional flexibility.

For a univariate response variable, CR incorporates OLSR, partial least squares regression (PLSR) and PCR as special cases, for special values of the metaparameter. CR is also closely related to RR. However, CR can in fact yield regressors that vary discontinuously with the metaparameter, so the relation between CR and RR is not always one-to-one. We develop a new class of regression methods, LSRR, essentially the same as CR but without discontinuities, and prove that any optimization principle will yield a regressor proportional to an RR regressor, provided only that the principle implies maximizing some function of the regressor's sample correlation coefficient and its sample variance. For a multivariate response vector we demonstrate that a number of well-established regression methods are related, in that they are special cases of essentially one general procedure. We try a more general method based on this procedure, with two metaparameters. In a simulation study we compare this method to ridge regression, multivariate PLSR and repeated univariate PLSR. For most types of data studied, all methods do approximately equally well. There are cases where RR and LSRR yield larger errors than the other methods, and we conclude that one-factor methods are not adequate for situations where more than one latent variable is needed to describe the data. Among the methods based on latent variables, none of those tried is superior to the others in any obvious way.
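A hedged sketch of the kind of comparison described, using off-the-shelf ridge, PCR and PLS on simulated near-collinear data (continuum regression and the thesis' LSRR are not available in scikit-learn and are not reproduced):

```python
# Hedged sketch: comparing regularized regressions on near-collinear data
# (ridge, PCR, PLS). Continuum regression and the thesis' LSRR are not in
# scikit-learn and are not reproduced here.
import numpy as np
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p = 200, 20
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))  # collinear
y = latent @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=n)

models = {
    "ridge": Ridge(alpha=1.0),
    "pcr (2 comps)": make_pipeline(PCA(n_components=2), LinearRegression()),
    "pls (2 comps)": PLSRegression(n_components=2),
}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: CV MSE ≈ {mse:.4f}")
```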
468

Weak Convergence of First-Rare-Event Times for Semi-Markov Processes

Drozdenko, Myroslav January 2007 (has links)
In this thesis we study necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes.

In the introduction we give the necessary basic definitions and descriptions of the models considered in the thesis, and we give some examples of situations in which first-rare-event-time methods may be appropriate to use. We also review published results on asymptotic problems for stochastic functionals defined on semi-Markov processes.

In paper A we consider first-rare-event times for semi-Markov processes with a finite set of states. We also give a summary of our results on necessary and sufficient conditions for weak convergence, and discuss possible applications in the actuarial field.

In paper B we present in detail the results announced in paper A and their proofs. We also give necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes with a finite set of states in a non-triangular-array setting, and we describe, by means of Laplace transforms, the class of all possible limit distributions.

In paper C we study conditions for weak convergence of flows of rare events in a non-triangular-array setting. We formulate necessary and sufficient conditions for convergence and describe the class of all possible limit flows. We also apply our results to the asymptotic analysis of the non-ruin probability for perturbed risk processes.

In paper D we give necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes with a finite set of states in a triangular-array setting, and we describe the class of all possible limit distributions. The results extend the conclusions of paper B to a general triangular-array setting.

In paper E we give necessary and sufficient conditions for weak convergence of flows of rare events for semi-Markov processes in a triangular-array setting. This generalizes the results of paper C to a general triangular-array setting. We further give applications of our results to asymptotic problems for perturbed risk processes and to queueing systems with fast service. / In this thesis we study necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes, we describe the class of all possible limit distributions, and we give applications of the results to risk theory and queueing systems.

In paper A, we consider first-rare-event times for semi-Markov processes with a finite set of states, and give a summary of our results concerning necessary and sufficient conditions for weak convergence of first-rare-event times and their actuarial applications.

In paper B, we present in detail the results announced in paper A as well as their proofs. We give necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes with a finite set of states in non-triangular-array mode and describe the class of all possible limit distributions in terms of their Laplace transforms.

In paper C, we study the conditions for weak convergence of flows of rare events for semi-Markov processes with a finite set of states in non-triangular-array mode. We formulate necessary and sufficient conditions of convergence and describe the class of all possible limit stochastic flows. In the second part of the paper, we apply our results to the asymptotic analysis of non-ruin probabilities for perturbed risk processes.

In paper D, we give necessary and sufficient conditions for the weak convergence of first-rare-event times for semi-Markov processes with a finite set of states in triangular-array mode, as well as describing the class of all possible limit distributions. The results of paper D extend the results obtained in paper B to a general triangular-array mode.

In paper E, we give the necessary and sufficient conditions for weak convergence of the flows of rare events for semi-Markov processes with a finite set of states in the triangular-array case. This paper generalizes the results obtained in paper C to a general triangular-array mode. In the second part of the paper, we present applications of our results to asymptotic problems of perturbed risk processes and to queueing systems with quick service.
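As a toy illustration of the flavour of these limit theorems (under much stronger assumptions than the thesis' necessary and sufficient conditions): for a two-state semi-Markov process in which every jump independently triggers a rare event with small probability p, the first-rare-event time, normalized by its mean, is approximately standard exponential:

```python
# Hedged toy illustration: for a two-state semi-Markov process where each jump
# triggers a rare event with small probability p, the first-rare-event time,
# normalized by its mean, is approximately Exp(1) (classical geometric-sum
# limit). The thesis gives general necessary and sufficient conditions; this
# sketch only illustrates the phenomenon.
import numpy as np

rng = np.random.default_rng(5)
P = np.array([[0.3, 0.7], [0.6, 0.4]])   # embedded Markov chain
mean_sojourn = np.array([1.0, 2.5])      # exponential holding-time means
p_rare, reps = 0.005, 5000

def first_rare_event_time() -> float:
    state, t = 0, 0.0
    while True:
        t += rng.exponential(mean_sojourn[state])
        state = rng.choice(2, p=P[state])
        if rng.random() < p_rare:        # rare event attached to this jump
            return t

times = np.array([first_rare_event_time() for _ in range(reps)])
normed = times / times.mean()
# Compare to Exp(1): mean 1, std 1, P(T > 1) = e^(-1) ≈ 0.368
print(f"std ≈ {normed.std():.3f}, P(normed > 1) ≈ {(normed > 1).mean():.3f}")
```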
469

Test Cycle Optimization using Regression Analysis

Meless, Dejen January 2010 (has links)
Industrial robots make up an important part of today's industry and are assigned to a range of different tasks. Needless to say, businesses need to rely on their machine fleet to function as planned, avoiding stops in production due to machine failures. This is where fault detection methods play a very important part. In this thesis a specific fault detection method based on signal analysis is considered. When testing a robot for faults, a specific test cycle (trajectory) is executed in order to be able to compare test data from different test occasions. Furthermore, different test cycles yield different measurements to analyse, which may affect the performance of the analysis. The question posed is: can we find an optimal test cycle so that the fault is best revealed in the test data? The goal of this thesis is to use regression analysis to investigate how the presently executed test cycle in a specific diagnosis method relates to the faults that are monitored (in this case a so-called friction fault) and to decide whether a different one should be recommended. The data also include representations of two disturbances.

The results from the regression show that the variation in the test quantities utilised in the diagnosis method is explained by neither the friction fault nor the test cycle. The disturbances had too large an effect on the test quantities. This made it impossible to recommend a different (optimal) test cycle based on the analysis.
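The regression analysis described can be sketched, in a hedged way, on synthetic data: regress a simulated test quantity on a friction-fault level and a test-cycle indicator, and observe how a dominating disturbance keeps the explained variation low (the actual robot measurements and test quantities of the thesis are not reproduced):

```python
# Hedged sketch: regressing a simulated test quantity on a friction-fault level
# and a test-cycle indicator, with a large unmodelled disturbance. The data are
# synthetic; the thesis works with measurements from industrial robots.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 300
friction = rng.uniform(0.0, 1.0, size=n)        # friction-fault level
cycle = rng.integers(0, 2, size=n)              # which of two test cycles was run
disturbance = rng.normal(scale=5.0, size=n)     # dominating disturbance
y = 0.8 * friction + 0.3 * cycle + disturbance  # simulated test quantity

X = np.column_stack([friction, cycle])
r2 = LinearRegression().fit(X, y).score(X, y)
print(f"R^2 with fault and cycle only ≈ {r2:.3f}  (low: disturbance dominates)")

X_full = np.column_stack([friction, cycle, disturbance])
r2_full = LinearRegression().fit(X_full, y).score(X_full, y)
print(f"R^2 including the disturbance ≈ {r2_full:.3f}")
```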
470

Monitoring portfolio weights by means of the Shewhart method

Mohammadian, Jeela January 2010 (has links)
The distribution of asset returns may undergo structural breaks. These breaks may result in changes of the optimal portfolio weights. For a portfolio investor, the ability to detect any systematic changes in the optimal portfolio weights in a timely manner is of great interest. In this master thesis work, the use of the Shewhart method as a method for detecting a sudden parameter change, the implied change in the multivariate portfolio weights, and its performance are reviewed.
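A hedged sketch of a Shewhart-type chart applied to rolling estimates of a minimum-variance portfolio weight, on simulated returns with a correlation shift (the window length, calibration period and 3σ limits below are arbitrary illustrative choices):

```python
# Hedged sketch: a Shewhart-type chart on a rolling estimate of one minimum-
# variance portfolio weight. Returns are simulated with a shift in correlation
# halfway through; control limits come from an in-control calibration period.
import numpy as np

rng = np.random.default_rng(7)

def simulate_returns(n: int, rho: float) -> np.ndarray:
    cov = 0.01 * np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

returns = np.vstack([simulate_returns(500, 0.2), simulate_returns(500, 0.8)])

def min_var_weight(window: np.ndarray) -> float:
    """First asset's weight in the global minimum-variance portfolio."""
    inv = np.linalg.inv(np.cov(window, rowvar=False))
    return (inv @ np.ones(2) / (np.ones(2) @ inv @ np.ones(2)))[0]

win = 60
weights = np.array([min_var_weight(returns[t - win:t])
                    for t in range(win, len(returns))])

calib = weights[:200]                          # assumed in-control period
center, sigma = calib.mean(), calib.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma
alarms = np.where((weights > ucl) | (weights < lcl))[0]
print(f"first alarm at monitored index {alarms[0] if alarms.size else None}")
```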
