  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Priklausomų normaliųjų dydžių ekstremumų momentai / Moments of extremes of normally distributed values

Burauskaitė, Agnė 09 June 2005 (has links)
The Gaussian distribution is the most widely applied in practice, and for that reason a great amount of work has been done in this area. In this thesis we look at the Gaussian distribution from the point of view of extreme value theory; more concretely, moments of the maximum of normally distributed values are discussed. Methods exist to calculate moments of extremes of independent identically distributed normal values and of values with different variances, along with asymptotic results. In this work the case of dependent variables is analyzed, with the aim of obtaining results similar to those available for independent variables. Continuing the Bachelor's thesis, a formula is presented for calculating moments of the maximum of two dependent normal variables with all parameters different. A formula for odd-order moments of the maximum of three dependent variables is proved, and this result is generalized to random vectors of any length. A theorem is stated according to which moments of the maximum of a length-n vector can be expressed through same-order moments of shorter vectors; unfortunately, because of the requirements on the numbers n and m, no recursive method could be applied. Using a computer, maxima of random vectors of various lengths with dependent components are simulated and their averages analyzed. The experiments reveal a relation between the mean values of the maxima of dependent and independent variables; this relation is stated as a formula and proved for vectors of any length. In this... [to full text]
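The simulation experiment the abstract describes can be sketched in a few lines. This is not the thesis's formula; it is a Monte Carlo illustration under one simple dependence assumption (equicorrelated standard normals with common correlation `rho`), for which the mean of the dependent maximum relates to the independent one by an exact factor of sqrt(1 - rho):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_of_maximum(n, rho, n_sim=200_000):
    """Monte Carlo estimate of E[max(X_1,...,X_n)] for standard normals
    with common pairwise correlation rho (equicorrelated construction)."""
    # X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i shares the factor Z0 across components
    z0 = rng.standard_normal((n_sim, 1))
    z = rng.standard_normal((n_sim, n))
    x = np.sqrt(rho) * z0 + np.sqrt(1.0 - rho) * z
    return x.max(axis=1).mean()

# In this equicorrelated case max X_i = sqrt(rho)*Z0 + sqrt(1-rho)*max Z_i,
# so E[max dependent] = sqrt(1-rho) * E[max independent].
for n in (2, 3, 5):
    dep = mean_of_maximum(n, rho=0.5)
    indep = mean_of_maximum(n, rho=0.0)
    print(n, round(dep, 3), round(np.sqrt(0.5) * indep, 3))
```

For n = 2 independent standard normals the exact value E[max] = 1/sqrt(pi) ≈ 0.564 gives a quick sanity check on the simulation.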
42

Delay estimation in computer networks

Johnson, Nicholas Alexander January 2010 (has links)
Computer networks are becoming increasingly large and complex; more so with the recent penetration of the internet into all walks of life. It is essential to be able to monitor and to analyse networks in a timely and efficient manner; to extract important metrics and measurements and to do so in a way which does not unduly disturb or affect the performance of the network under test. Network tomography is one possible method to accomplish these aims. Drawing upon the principles of statistical inference, it is often possible to determine the statistical properties of either the links or the paths of the network, whichever is desired, by measuring at the most convenient points thus reducing the effort required. In particular, bottleneck-link detection methods in which estimates of the delay distributions on network links are inferred from measurements made at end-points on network paths, are examined as a means to determine which links of the network are experiencing the highest delay. Initially two published methods, one based upon a single Gaussian distribution and the other based upon the method-of-moments, are examined by comparing their performance using three metrics: robustness to scaling, bottleneck detection accuracy and computational complexity. Whilst there are many published algorithms, there is little literature in which said algorithms are objectively compared. In this thesis, two network topologies are considered, each with three configurations in order to determine performance in six scenarios. Two new estimation methods are then introduced, both based on Gaussian mixture models which are believed to offer an advantage over existing methods in certain scenarios. Computationally, a mixture model algorithm is much more complex than a simple parametric algorithm but the flexibility in modelling an arbitrary distribution is vastly increased. Better model accuracy potentially leads to more accurate estimation and detection of the bottleneck. 
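The mixture-model idea can be sketched with a minimal 1-D EM fit; this is a generic Gaussian-mixture estimator, not the thesis's algorithm, and the delay values are synthetic assumptions (a fast mode near 10 and a slow, bottleneck-induced mode near 30, units arbitrary):

```python
import numpy as np

def fit_gmm_1d(x, k=2, n_iter=200):
    """Fit a k-component 1-D Gaussian mixture to delay samples x with EM."""
    mu = np.quantile(x, np.linspace(0.0, 1.0, k))  # spread initial means over the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic end-to-end delays: most packets are fast, a congested
# (bottleneck) link adds a second, slower mode.
rng = np.random.default_rng(1)
delays = np.concatenate([rng.normal(10, 1, 4000), rng.normal(30, 2, 1000)])
w, mu, var = fit_gmm_1d(delays, k=2)
print(np.round(np.sort(mu), 1))
```

The flexibility/complexity trade-off the abstract mentions is visible even here: the EM loop iterates over all samples and components, whereas a single-Gaussian fit is one pass of mean and variance.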
The concept of increasing flexibility is again considered by using a Pearson type-1 distribution as an alternative to the single Gaussian distribution. This increases the flexibility but with a reduced complexity when compared with mixture model approaches, which necessitate the use of iterative approximation methods. A hybrid approach is also considered where the method-of-moments is combined with the Pearson type-1 method in order to circumvent problems with the output stage of the former. This algorithm has a higher variance than the method-of-moments but the output stage is more convenient for manipulation. Also considered is a new approach to detection algorithms which is not dependent on any a priori parameter selection and makes use of the Kullback-Leibler divergence. The results show that it accomplishes its aim but is not robust enough to replace the current algorithms. Delay estimation is then cast in a different role, as an integral part of an algorithm to correlate input and output streams in an anonymising network such as the onion router (TOR). TOR is used in an attempt to conceal network traffic from observation. Breaking the encryption protocols used is not possible without significant effort, but by correlating the un-encrypted input and output streams from the TOR network, it is possible to provide a degree of certainty about the ownership of traffic streams. The delay model is essential as the network is treated as providing a pseudo-random delay to each packet; having an accurate model allows the algorithm to better correlate the streams.
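A Kullback-Leibler-based detector can be sketched with the closed-form KL divergence between univariate Gaussians. This is a simplified illustration, not the thesis's detector: the link names and the baseline "healthy" delay model are invented for the example.

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """Closed-form KL( N(mu0,var0) || N(mu1,var1) ) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Score each (hypothetical) link's fitted delay model against a healthy
# baseline; the link whose model diverges most is flagged as the bottleneck.
baseline = (10.0, 1.0)  # assumed healthy mean and variance
links = {"A": (10.2, 1.1), "B": (25.0, 4.0), "C": (9.8, 0.9)}
scores = {name: kl_gaussian(m, v, *baseline) for name, (m, v) in links.items()}
print(max(scores, key=scores.get))  # → B
```

No threshold or other a priori parameter appears in the ranking itself, which is the appeal of divergence-based detection noted in the abstract.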
43

Noninformative priors for some models useful in reliability and survival analysis

Lee, Gunhee, January 1996 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 1996. / Typescript. Vita. Includes bibliographical references (leaves 105-108). Also available on the Internet.
45

Corrected LM goodness-of-fit tests with application to stock returns

Percy, Edward Richard, January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Includes bibliographical references (p. 263-266).
46

Long range dependence in South African Platinum prices under heavy tailed error distributions

Kubheka, Sihle 11 1900 (has links)
South Africa is rich in platinum group metals (PGMs), and these metals are important in providing jobs as well as investments, some of which have been seen on the Johannesburg Securities Exchange (JSE). This sector has experienced some setbacks in recent times, the most notable being the 2008/2009 global financial crisis and the 2012 major nationwide labour unrest. Worrisomely, these setbacks keep simmering. Such events usually introduce jumps and breaks in the data, which change the structure of the underlying information, thereby inducing spurious long memory (long-range dependence). Thus it is recommended that these two phenomena be addressed together. Further, it is well known that financial returns are dominated by stylized facts. In this thesis we investigated distributional properties of platinum returns, structural changes, long memory and stylized facts in the platinum return and volatility series. To understand the distributional properties of the returns, we used two classes of heavy-tailed distributions, namely the alpha-stable distributions and the generalized hyperbolic distributions. We then investigated structural changes in the platinum return series and changes in long-range dependence and volatility. Using the Akaike information criterion, the ARFIMA-FIAPARCH model under the Student distribution was selected as the best model for platinum, although the ARCH effects were slightly significant, while using the Schwarz information criterion the ARFIMA-FIAPARCH model under the Normal distribution was selected. Further, the ARFIMA-FIEGARCH model under the skewed Student distribution and the ARFIMA-HYGARCH model under the Normal distribution were able to capture the ARCH effects. The best models with respect to prediction excluded the ARFIMA-FIGARCH model and were dominated by the ARFIMA-FIAPARCH model with non-Normal error distributions, which indicates the importance of asymmetry and heavy-tailed error distributions. / Statistics / M. Sc. (Statistics)
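A quick long-memory diagnostic of the kind that motivates ARFIMA-type models is the rescaled-range (R/S) estimate of the Hurst exponent. This is a rough sketch on synthetic data, not the thesis's ARFIMA-FIAPARCH machinery; H near 0.5 suggests short memory, H well above 0.5 suggests long-range dependence (though R/S is known to be biased upward in small samples):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent of series x."""
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range of deviations
            s = chunk.std()                         # chunk standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    # H is the slope of log(R/S) against log(chunk size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.standard_normal(4096)      # i.i.d. returns: no long memory
print(round(hurst_rs(white), 2))       # near 0.5
```

Applying the same estimator to a highly persistent series (e.g. the cumulative sum of the noise) pushes the estimate toward 1, which is the contrast structural breaks can spuriously mimic.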
47

Generating Generalized Inverse Gaussian Random Variates by Fast Inversion

Leydold, Josef, Hörmann, Wolfgang January 2009 (has links) (PDF)
We demonstrate that for the fast numerical inversion of the (generalized) inverse Gaussian distribution two algorithms based on polynomial interpolation are well-suited. Their precision is close to machine precision and they are much faster than the bisection method recently proposed by Y. Lai. / Series: Research Report Series / Department of Statistics and Mathematics
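The inversion idea can be illustrated with a crude numpy-only sketch: tabulate the CDF on a grid once, then map uniforms through the interpolated inverse. The paper's algorithms use polynomial interpolation to near machine precision; the linear interpolation, grid bounds, and the plain inverse Gaussian IG(mu=1, lam=1) target below are simplifying assumptions for the example.

```python
import numpy as np

def ig_pdf(x, mu=1.0, lam=1.0):
    """Density of the inverse Gaussian IG(mu, lam)."""
    return np.sqrt(lam / (2 * np.pi * x ** 3)) * np.exp(
        -lam * (x - mu) ** 2 / (2 * mu ** 2 * x))

# Setup step (done once): tabulate the CDF by numerical integration.
grid = np.linspace(1e-6, 10.0, 20_001)
cdf = np.cumsum(ig_pdf(grid)) * (grid[1] - grid[0])
cdf /= cdf[-1]  # normalise away the truncation error at the right end

def sample_ig(u):
    """Map uniforms u in (0,1) to IG variates via the tabulated inverse CDF."""
    return np.interp(u, cdf, grid)

rng = np.random.default_rng(3)
x = sample_ig(rng.random(100_000))
print(round(x.mean(), 2))  # IG(1, 1) has mean mu = 1
```

Because the expensive work is in the setup step, sampling itself is a single vectorised interpolation, which is what makes inversion-by-interpolation fast compared with per-draw root finding such as bisection.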
48

Model-based clustering based on sparse finite Gaussian mixtures

Malsiner-Walli, Gertraud, Frühwirth-Schnatter, Sylvia, Grün, Bettina January 2016 (has links) (PDF)
In the framework of Bayesian model-based clustering based on a finite mixture of Gaussian distributions, we present a joint approach to estimate the number of mixture components and identify cluster-relevant variables simultaneously as well as to obtain an identified model. Our approach consists in specifying sparse hierarchical priors on the mixture weights and component means. In a deliberately overfitting mixture model the sparse prior on the weights empties superfluous components during MCMC. A straightforward estimator for the true number of components is given by the most frequent number of non-empty components visited during MCMC sampling. Specifying a shrinkage prior, namely the normal gamma prior, on the component means leads to improved parameter estimates as well as identification of cluster-relevant variables. After estimating the mixture model using MCMC methods based on data augmentation and Gibbs sampling, an identified model is obtained by relabeling the MCMC output in the point process representation of the draws. This is performed using K-centroids cluster analysis based on the Mahalanobis distance. We evaluate our proposed strategy in a simulation setup with artificial data and by applying it to benchmark data sets. (authors' abstract)
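The estimator for the number of components described above (the most frequent number of non-empty components across MCMC draws) is simple to sketch. The toy allocation matrix below is a stand-in for real MCMC output, not the authors' sampler:

```python
import numpy as np
from collections import Counter

def estimate_k(allocations):
    """Most frequent number of non-empty components across MCMC draws.
    `allocations` has one row per draw and one column per observation;
    entries are component labels from an overfitting mixture."""
    counts = [len(np.unique(row)) for row in allocations]
    return Counter(counts).most_common(1)[0][0]

# Toy stand-in for MCMC output: 10 components are available, but the
# sparse prior on the weights leaves only 3 occupied in most draws.
rng = np.random.default_rng(4)
draws = rng.choice(3, size=(100, 50))      # most draws occupy 3 components
draws[:5] = rng.choice(4, size=(5, 50))    # a few draws briefly visit a 4th
print(estimate_k(draws))  # → 3
```

Taking the mode rather than the mean makes the estimate insensitive to the occasional draw in which a superfluous component is transiently occupied.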
49

INTEGRATED ANALYSIS OF TEMPORAL AND MORPHOLOGICAL FEATURES USING MACHINE LEARNING TECHNIQUES FOR REAL TIME DIAGNOSIS OF ARRHYTHMIA AND IRREGULAR BEATS

Gawde, Purva R. 06 December 2018 (has links)
No description available.
50

Interference Analysis and Mitigation in a Cellular Network with Femtocells

Dalal, Avani 26 September 2011 (has links)
No description available.
