61 |
A covariate model in finite mixture survival distributions. Soegiarso, Restuti Widayati. January 1992 (has links)
No description available.
|
62 |
Nuclear Modifications of Parton Distribution Functions. Adeluyi, Adeola Adeleke. 17 June 2009 (has links)
No description available.
|
63 |
A numerical method for describing the inverted load duration curve as a sum of two normal probability distributions. Dickson, John S. January 1985 (has links)
No description available.
|
64 |
Clustering Matrix Variate Data Using Finite Mixture Models with Component-Wise Regularization. Tait, Peter A. 11 1900 (has links)
Matrix variate distributions offer an innate way to model random matrices. Realizations of random matrices arise when variables are observed concurrently at different locations or at different time points. We use a finite mixture model composed of matrix variate normal densities to cluster matrix variate data. The data were generated by accelerometers worn by children in a clinical study conducted at McMaster. Each child's acceleration along the three planes of motion over the course of seven days forms their matrix variate observation. We use the resulting clusters to verify existing group membership labels derived from a test of motor-skills proficiency used to assess the children's locomotion. / Thesis / Master of Science (MSc)
|
65 |
Analysis of Three-Way Data and Other Topics in Clustering and Classification. Gallaugher, Michael Patrick Brian. January 2020 (has links)
Clustering and classification is the process of finding underlying group structure in heterogeneous data. With the rise of the "big data" phenomenon, more complex data structures have rendered traditional clustering methods often inadvisable or infeasible. This thesis presents methodology for analyzing three examples of these more complex data types. The first is three-way (matrix variate) data, or data that come in the form of matrices; a large emphasis is placed on clustering skewed three-way data and high-dimensional three-way data. The second is clickstream data, which captures a user's internet search patterns. Finally, co-clustering methodology is discussed for very high-dimensional two-way (multivariate) data. Parameter estimation for all these methods is based on the expectation-maximization (EM) algorithm. Both simulated and real data are used for illustration. / Thesis / Doctor of Philosophy (PhD)
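The EM algorithm named above can be sketched in its simplest setting: a two-component one-dimensional Gaussian mixture. This is a generic illustration, not the thesis's (matrix variate) implementation; initialization and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    # EM for a two-component 1-D Gaussian mixture.
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = np.stack([pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted means, standard deviations, and mixing weights.
        nk = r.sum(axis=1)
        mu = (r @ x) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, sigma, pi = em_gmm_1d(x)
assert np.allclose(sorted(mu), [-3, 3], atol=0.3)
```

The matrix variate and co-clustering models in the thesis follow the same E-step/M-step skeleton; only the component densities and the sufficient statistics in the M-step change.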
|
66 |
Multiple year pricing strategies for corn and soybeans using cash, futures, and options contracts. Beckman, Charles V. 16 June 2009 (has links)
The possibility of profitable multiple year pricing using rollover strategies for corn and soybeans is identified. Historical futures price distributions are generated for both commodities to determine the probability of prices reaching certain levels. The upper 5%, 10%, and 15% of the distributions are determined. Price forecasting models are developed to help producers anticipate high price levels before they occur. Seven different multiple year strategies containing various combinations of cash, futures, and options contracts are established, and six different strategy rules are tested. A total of fifty strategies are then evaluated for each commodity over the 1980-1992 period.
Mean net prices and standard deviations are calculated and the highest return strategies are identified. The strategies are then analyzed based on the two largest risks associated with long-term rollovers: margin calls and spread risk. The tradeoffs between risk and return for the various combinations of cash, futures, and options contracts are discussed. The highest return strategy for both corn and soybeans involves selling three years of production when prices reach the upper 5% of the historical distribution, using cash contracts to price the first year's production and futures contracts to price the final two. Substituting options contracts for futures in the final two years results in a strategy free of margin call risk, but subject to increased spread risk. For corn, a strategy that does not carry the risk of margin calls receives 93.7% of the high return strategy's net price, while for soybeans this percentage is 98.3. / Master of Science
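The trigger mechanism described above, selling when prices enter the upper 5%, 10%, or 15% of the historical distribution, reduces to a percentile computation. This is an illustrative sketch with invented prices, not the thesis's data or strategy code.

```python
import numpy as np

# Hypothetical historical futures prices ($/bu); NOT the thesis data.
prices = np.array([2.10, 2.35, 2.50, 2.62, 2.75, 2.90, 3.05, 3.20, 3.40,
                   3.55, 3.70, 3.85, 4.00, 4.20, 4.45, 4.70, 5.00, 5.40,
                   5.90, 6.50])

# Trigger levels: entering the upper q% of the historical distribution.
triggers = {q: np.percentile(prices, 100 - q) for q in (5, 10, 15)}

def should_sell(price, trigger):
    # Rule sketch: price a year's production once the market hits the tail.
    return price >= trigger

assert triggers[5] >= triggers[10] >= triggers[15]
assert should_sell(6.00, triggers[15]) and not should_sell(3.00, triggers[15])
```

A looser trigger (upper 15%) fires more often but at lower average prices; the thesis's finding that the 5% trigger yields the highest returns reflects exactly this tradeoff.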
|
67 |
Potential flow solution and incompressible boundary layer for a two-dimensional cascade. Bryner, Hans Eugen. 15 July 2010 (has links)
A blade-to-blade computer program, using the method of finite differences, has been written to calculate the velocity distributions on the rotor blade of an axial-flow compressor. The shape of the blade has been approximated in two different ways: a rather elaborate method, and one whose primary goal was simplicity. The resulting velocity distributions were compared and can be judged satisfactory in that they follow expectations and show reasonable behavior, even close to the leading and trailing stagnation points. The latter fact represents an improvement over results obtained in a previous work [ref. 3]; however, the calculations still need to be confirmed by experiment.
In the second part of this thesis, following a recommendation of reference 3, the blade boundary layer effects have been calculated from the velocity distributions of the first part. Under certain assumptions, these results may also be judged satisfactory, and the rather important conclusion may be drawn that turbulent separation, if it occurs at all, takes place close to the rear stagnation point of the blade for the applied range of upstream velocities. A further conclusion may be drawn from the displacement thickness distribution: its values would not greatly affect the potential flow calculation, and hence an iterative procedure between the potential flow field and the blade boundary layer should converge rapidly. The results from the second part also require experimental confirmation. / Master of Science
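The finite-difference approach named above rests on solving Laplace's equation for the flow field. The following toy sketch, entirely independent of the thesis's cascade geometry, iterates the five-point Jacobi scheme for a stream function on a square duct; the grid size, boundary values, and iteration count are arbitrary illustration choices.

```python
import numpy as np

# Jacobi iteration for Laplace's equation on a toy duct: each interior
# point is replaced by the average of its four neighbours.
n = 21
psi = np.zeros((n, n))
psi[:, -1] = 1.0                    # one wall: psi = 1
psi[:, 0] = 0.0                     # opposite wall: psi = 0
psi[0, :] = np.linspace(0, 1, n)    # inlet: linear profile
psi[-1, :] = np.linspace(0, 1, n)   # outlet: linear profile

for _ in range(2000):
    psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                              + psi[1:-1, 2:] + psi[1:-1, :-2])

# The five-point Laplacian residual should be tiny after convergence.
res = np.abs(psi[2:, 1:-1] + psi[:-2, 1:-1] + psi[1:-1, 2:]
             + psi[1:-1, :-2] - 4 * psi[1:-1, 1:-1]).max()
assert res < 1e-6
```

Velocities follow by differencing the converged stream function; the thesis's observation that displacement-thickness corrections barely perturb this field is why a potential-flow/boundary-layer iteration should converge quickly.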
|
68 |
FITTING A DISTRIBUTION TO CATASTROPHIC EVENT. Osei, Ebenezer. 15 December 2010 (has links)
Statistics is a branch of mathematics that is heavily employed in Actuarial Mathematics. This thesis first reviews the importance of statistical distributions in the analysis of insurance problems and the applications of statistics in the area of risk and insurance. The Normal, Log-normal, Pareto, Gamma, standard Beta, Frechet, Gumbel, Weibull, Poisson, binomial, and negative binomial distributions are examined, and the importance of these distributions in general insurance is emphasized. A careful review of the literature is carried out to provide practitioners in the general insurance industry with statistical tools of immediate application, including estimation methods and fit statistics popular in the industry. Finally, the thesis fits statistical distributions to flood loss data for the 50 states of the United States.
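The fitting-and-comparison workflow described above can be sketched with two of the candidate families named in the abstract. The loss data below are synthetic (drawn from a lognormal), not the thesis's flood data, and pinning the location at zero is a simplifying assumption for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic loss amounts for illustration; NOT the thesis's flood data.
losses = stats.lognorm.rvs(s=1.2, scale=5e4, size=2000, random_state=rng)

candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)      # MLE with location pinned at 0
    ll = dist.logpdf(losses, *params).sum()
    aic[name] = 2 * len(params) - 2 * ll   # Akaike information criterion

# The data were generated lognormal, so the lognormal fit should win.
assert aic["lognormal"] < aic["gamma"]
```

In practice the comparison would run over the full set of candidate distributions listed in the abstract, with goodness-of-fit statistics (e.g. Kolmogorov-Smirnov) alongside AIC.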
|
69 |
Optimization of LDPC decoding strategies in impulsive environments: application to sensor and ad hoc networks. Ben Maad, Hassen. 29 June 2011 (has links)
The goal of this PhD thesis is to study the behavior of LDPC codes in an environment where the interference generated by the network is not Gaussian but impulsive. A first quick observation shows that, without precautions, the performance of these codes degrades very significantly. We first study the possible approaches for modeling impulsive noise. In the case of the multiple-access interference that arises in ad hoc and sensor networks, alpha-stable distributions are an appropriate choice: they generalize the Gaussian distribution, are stable under convolution, and can be theoretically justified in several situations.
We then determine the capacity of the α-stable environment and show, using an asymptotic method, that LDPC codes are good in this environment, but that a simple linear operation on the received samples at the decoder input does not yield good performance. We therefore propose several ways to compute the likelihood ratios required at the decoder input. The optimal approach is highly complex to implement. We study several alternative approaches, in particular clipping, for which we determine the optimal parameters.
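The contrast between the linear (Gaussian-optimal) input and its clipped variant can be sketched as follows. This is an illustrative toy, not the thesis's receiver: the alpha value, clipping threshold, and noise scale are arbitrary assumptions, and no actual LDPC decoding is performed.

```python
import numpy as np
from scipy.stats import levy_stable

rng_seed = 1
bits = np.random.default_rng(0).integers(0, 2, 10000)
x = 1.0 - 2.0 * bits                         # BPSK mapping: 0 -> +1, 1 -> -1
# Symmetric alpha-stable noise (alpha = 1.5): heavy-tailed, impulsive.
noise = levy_stable.rvs(1.5, 0, size=x.size, random_state=rng_seed)
y = x + 0.5 * noise

def llr_linear(y, sigma2=0.25):
    return 2.0 * y / sigma2                  # optimal only for Gaussian noise

def llr_clipped(y, c=4.0, sigma2=0.25):
    # Clipping bounds the influence of any single impulsive sample.
    return np.clip(2.0 * y / sigma2, -c, c)

# Impulses can drive the linear LLR far past the clip level c = 4...
assert np.abs(llr_linear(y)).max() > 4.0
# ...while the clipped LLR caps every sample's contribution.
assert np.abs(llr_clipped(y)).max() <= 4.0
```

The thesis's point is that the clipping threshold must be tuned to the noise statistics: too small discards channel information, too large lets impulses corrupt the belief-propagation messages.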
|
70 |
Smooth $*$-Algebras. Peter.Michor@esi.ac.at. 19 June 2001 (has links)
No description available.
|