  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The compounding method for finding bivariate noncentral distributions

Ferreira, Johannes Theodorus 04 1900 (has links)
The univariate and bivariate central chi-square and F distributions have received a decent amount of attention in the literature during the past few decades; the noncentral counterparts of these distributions have been much less present. This study enriches the existing literature by proposing bivariate noncentral chi-square and F distributions via the employment of the compounding method with Poisson probabilities. This method has been used to a limited extent in the field of distribution theory to obtain univariate noncentral distributions; this study extends some results in the literature to the corresponding bivariate setting. The process which is followed to obtain such bivariate noncentral distributions is systematically described and motivated. Some distributions of composites (univariate functions of the dependent components of the bivariate distributions) are derived and studied, in particular the product, ratio, and proportion. The benefit of introducing these bivariate noncentral distributions and their respective composites is demonstrated by graphical representations of their probability density functions. Furthermore, an example of a possible application is given and discussed to illustrate the versatility of the proposed models. / Dissertation (MSc)--University of Pretoria, 2014. / Statistics / MSc / Unrestricted
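The compounding construction can be sketched directly: a univariate noncentral chi-square arises by mixing a central chi-square over a Poisson number of additional degrees of freedom. The stdlib-Python sketch below is illustrative only (the function name and parameter values are invented), not code from the dissertation:

```python
import math
import random

def noncentral_chisq(df, lam, rng=random):
    # Compounding with Poisson probabilities: draw J ~ Poisson(lam/2),
    # then a central chi-square with df + 2J degrees of freedom.
    # Poisson sampling via Knuth's method (fine for modest lam/2).
    limit = math.exp(-lam / 2.0)
    j, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        j += 1
    # A central chi-square with m degrees of freedom is Gamma(m/2, scale=2).
    return rng.gammavariate((df + 2 * j) / 2.0, 2.0)

random.seed(0)
df, lam, n = 4, 3.0, 100_000
draws = [noncentral_chisq(df, lam) for _ in range(n)]
mean = sum(draws) / n
# The noncentral chi-square has E[X] = df + lam, so the sample
# mean should be close to 7.0 here.
print(round(mean, 2))
```

The same Poisson-mixture device extends, as the dissertation describes, to bivariate settings by compounding dependent central components.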
2

Estimates of Statistical Power and Accuracy for Latent Trajectory Class Enumeration in the Growth Mixture Model

Brown, Eric C 09 June 2003 (has links)
This study employed Monte Carlo simulation to investigate the ability of the growth mixture model (GMM) to correctly identify models based on a "true" two-class pseudo-population from alternative models consisting of "false" one- and three-latent-trajectory classes. This ability was assessed in terms of statistical power, defined as the proportion of replications that correctly identified the two-class model as having optimal fit to the data compared to the one-class model, and accuracy, defined as the proportion of replications that correctly identified the two-class model over both the one- and three-class models. Estimates of power and accuracy were adjusted by empirically derived critical values to reflect nominal Type I error rates of α = .05. Six experimental conditions were examined: (a) standardized between-class differences in growth parameters, (b) percentage of total variance explained by growth parameters, (c) correlation between intercepts and slopes, (d) sample size, (e) number of repeated measures, and (f) planned missingness. Estimates of statistical power and accuracy were related to a measure of the degree of separation and distinction between latent trajectory classes (λ²), which approximated a chi-square-based noncentrality parameter. Model selection relied on four criteria: (a) the Bayesian information criterion (BIC), (b) the sample-size-adjusted BIC (ABIC), (c) the Akaike information criterion (AIC), and (d) the likelihood ratio test (LRT). Results showed that power and accuracy of the GMM to correctly enumerate latent trajectory classes were positively related to greater between-class separation, greater proportion of total variance explained by growth parameters, larger sample sizes, greater numbers of repeated measures, and larger negative correlations between intercepts and slopes, and inversely related to greater proportions of missing data.
Results of the Monte Carlo simulations were field-tested using specific design and population characteristics from an evaluation of a longitudinal demonstration project. This test compared estimates of power and accuracy generated via Monte Carlo simulation to estimates predicted from a regression of derived λ² values. Results of this motivating example indicated that knowledge of λ² can be useful in the two-class case for predicting power and accuracy without extensive Monte Carlo simulations.
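The class-enumeration logic described above (preferring the model whose information criterion is best) can be sketched in miniature. This is an illustrative stdlib-Python example with invented toy data, not the study's growth mixture code: it fits one- and two-component univariate Gaussian mixtures by EM and compares their BIC values, whereas the GMM involves latent growth trajectories.

```python
import math
import random

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def loglik_one(xs):
    # Log-likelihood of a single Gaussian with MLE mean and variance.
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return sum(math.log(norm_pdf(x, mu, var)) for x in xs)

def loglik_two(xs, iters=100):
    # EM for a two-component univariate Gaussian mixture.
    n = len(xs)
    mu1, mu2 = min(xs), max(xs)
    var1 = var2 = sum((x - sum(xs) / n) ** 2 for x in xs) / n
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in xs:
            a = w * norm_pdf(x, mu1, var1)
            b = (1 - w) * norm_pdf(x, mu2, var2)
            r.append(a / (a + b))
        n1 = sum(r)
        n2 = n - n1
        # M-step: weighted means, variances and mixing weight.
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        var1 = max(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1, 1e-6)
        var2 = max(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2, 1e-6)
        w = n1 / n
    return sum(math.log(w * norm_pdf(x, mu1, var1)
                        + (1 - w) * norm_pdf(x, mu2, var2)) for x in xs)

random.seed(1)
# Toy "true" two-class population with well-separated means.
xs = ([random.gauss(0.0, 1.0) for _ in range(300)]
      + [random.gauss(4.0, 1.0) for _ in range(300)])
n = len(xs)
bic1 = -2 * loglik_one(xs) + 2 * math.log(n)   # params: mean, variance
bic2 = -2 * loglik_two(xs) + 5 * math.log(n)   # 2 means, 2 variances, 1 weight
print(bic2 < bic1)  # True: the two-class model is selected
```

With well-separated classes the two-class BIC wins easily; as the study reports, the decision becomes harder as between-class separation shrinks.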
3

Multiple hypothesis testing and multiple outlier identification methods

Yin, Yaling 13 April 2010
Traditional multiple hypothesis testing procedures, such as that of Benjamini and Hochberg, fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this thesis it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.

Multiple hypothesis testing can also be applied to regression diagnostics. In this thesis, a Bayesian method is proposed to test multiple hypotheses, of which the i-th null and alternative hypotheses are that the i-th observation is not an outlier versus that it is, for i = 1, ..., m. In the proposed Bayesian model, it is assumed that outliers have a mean shift, where the proportion of outliers and the mean shift respectively follow a Beta prior distribution and a normal prior distribution. It is proved in the thesis that, for the proposed model, when there exists more than one outlier, the marginal distributions of the deletion residual of the i-th observation under both the null and alternative hypotheses are doubly noncentral t distributions.
The outlyingness of the i-th observation is measured by the marginal posterior probability that the i-th observation is an outlier given its deletion residual. An importance sampling method is proposed to calculate this probability. This method requires the computation of the density of the doubly noncentral F distribution, which is approximated using Patnaik's approximation. An algorithm is proposed in this thesis to examine the accuracy of Patnaik's approximation. The comparison of this algorithm's output with Patnaik's approximation shows that the latter can save massive computation time without losing much accuracy.

The proposed Bayesian multiple outlier identification procedure is applied to some simulated data sets. Various simulation and prior parameters are used to study the sensitivity of the posteriors to the priors. The area under the ROC curve (AUC) is calculated for each combination of parameters. A factorial design analysis on AUC is carried out by choosing various simulation and prior parameters as factors. The resulting AUC values are high for the various selected parameters, indicating that the proposed method can identify the majority of outliers within tolerable errors. The results of the factorial design show that the priors do not have much effect on the marginal posterior probability as long as the sample size is not too small.

In this thesis, the proposed Bayesian procedure is also applied to a real data set obtained by Kanduc et al. in 2008. The proteomes of thirty viruses examined by Kanduc et al. are found to share a high number of pentapeptide overlaps with the human proteome. In a linear regression analysis of the level of viral overlaps with the human proteome against the length of the viral proteome, it is reported by Kanduc et al. that among the thirty viruses, human T-lymphotropic virus 1, Rubella virus, and hepatitis C virus present relatively higher levels of overlaps with the human proteome than the predicted level.
The results obtained using the proposed procedure indicate that the four viruses with extremely large sizes (Human herpesvirus 4, Human herpesvirus 6, Variola virus, and Human herpesvirus 5) are more likely to be the outliers than the three reported viruses. The results with the four extreme viruses deleted confirm the claim of Kanduc et al.
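The Benjamini and Hochberg step-up procedure discussed above can be sketched as follows. This is a generic textbook implementation in stdlib Python, not code from the thesis, and the p-values are invented for illustration; Storey's fixed-rejection-region variant additionally estimates the proportion of true nulls and is not shown here.

```python
def benjamini_hochberg(pvals, q=0.05):
    # Step-up procedure: find the largest rank k such that the k-th
    # smallest p-value satisfies p_(k) <= k * q / m, then reject the
    # hypotheses with the k smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.500, 0.900]
decisions = benjamini_hochberg(pvals, q=0.05)
print(sum(decisions))  # 2 hypotheses rejected at FDR level 0.05
```

Note the step-up search scans all ranks rather than stopping at the first failure: p-values above the line can still be rejected if a larger rank passes.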
5

Efficient Confidence Interval Methodologies for the Noncentrality Parameters of Noncentral T-Distributions

Kim, Jong Phil 06 April 2007 (has links)
The problem of constructing a confidence interval for the noncentrality parameter of a noncentral t-distribution based upon one observation from the distribution is an interesting problem with important applications. A general theoretical approach to the problem is provided by the specification and inversion of acceptance sets for each possible value of the noncentrality parameter. The standard method is based upon the arbitrary assignment of equal tail probabilities to the acceptance set, while the choices of the shortest possible acceptance sets and UMP unbiased (UMPU) acceptance sets provide even worse confidence intervals; indeed, since the standard confidence intervals are uniformly shorter than those of the UMPU method, the standard method is itself "biased". However, with the correct choice of acceptance sets it is possible to improve upon the confidence interval lengths of the standard method for all values of the observation. The problem of testing the equality of the noncentrality parameters of two noncentral t-distributions is also considered, which arises naturally from the comparison of two signal-to-noise ratios for simple linear regression models. A test procedure is derived that is guaranteed to maintain the Type I error rate while having only a minimal amount of conservativeness, and comparisons are made with several other approaches to this problem based on variance-stabilizing transformations. In summary, simulations confirm that the new procedure has Type I error probabilities that are guaranteed not to exceed the nominal level, and demonstrate that the new procedure has size and power levels that compare well with the procedures based on variance-stabilizing transformations.
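The inversion of equal-tailed acceptance sets described above can be illustrated with a Monte Carlo sketch. Writing T = (Z + δ)/√(V/df) with Z standard normal and V chi-square, the event T ≤ t is equivalent to δ ≤ d := t·√(V/df) − Z, so the CDF of T in δ is the empirical survival function of d, and the equal-tailed confidence limits are simply quantiles of simulated d. This stdlib-Python sketch (t_obs and df are invented values) is illustrative only and does not reproduce the thesis's improved acceptance sets:

```python
import math
import random

# Equal-tailed interval for the noncentrality parameter delta of a
# noncentral t, from one observation t_obs, by simulating
# d = t_obs * sqrt(V/df) - Z and reading off quantiles.
random.seed(2)
t_obs, df, alpha, n = 3.0, 5, 0.05, 100_000
d = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    v = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    d.append(t_obs * math.sqrt(v / df) - z)
d.sort()
lower = d[int(n * (alpha / 2))]       # solves F(t_obs; delta) = 1 - alpha/2
upper = d[int(n * (1 - alpha / 2))]   # solves F(t_obs; delta) = alpha/2
print(lower < upper)  # True
```

Because the CDF is monotone decreasing in δ, the two quantiles invert the equal-tailed acceptance sets exactly, up to Monte Carlo error.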
6

Spectral-based tests for periodicities

Wei, Lai 18 March 2008 (has links)
No description available.
7

Towards real-time diffusion imaging : noise correction and inference of the human brain connectivity / Imagerie de diffusion en temps-réel : correction du bruit et inférence de la connectivité cérébrale

Brion, Véronique 30 April 2013 (has links)
Most magnetic resonance imaging (MRI) system manufacturers propose a huge set of software applications to post-process the reconstructed MRI data a posteriori, but few of them can run in real time during the ongoing scan. To our knowledge, apart from solutions dedicated to functional MRI allowing relatively simple experiments, or solutions for interventional MRI to perform anatomical scans during surgery, no tool has been developed in the field of diffusion-weighted MRI (dMRI). However, because dMRI scans are extremely sensitive to many hardware- or subject-induced perturbations that corrupt the data, it is interesting to investigate the possibility of processing dMRI data directly during the ongoing scan, and this thesis is dedicated to this challenging topic. The major contribution of this thesis is a set of solutions to denoise dMRI data in real time. Indeed, the diffusion-weighted signal may be corrupted by a significant level of noise which is no longer Gaussian, but Rician or noncentral chi. After a detailed review of the literature on noise in MRI, we extended the linear minimum mean square error (LMMSE) estimator and adapted it to our real-time framework with a Kalman filter. We compared its efficiency to standard Gaussian filtering, which is difficult to implement as it requires a modification of the reconstruction pipeline to insert the filter immediately after the demodulation of the acquired signal in the Fourier space. We also developed a parallel Kalman filter to deal with any noise distribution, and we showed that its efficiency is quite comparable to the non-parallel Kalman filter approach.
Last, we addressed the feasibility of performing tractography in real-time in order to infer the structural connectivity online. We hope that this set of methodological developments will help improving and accelerating a diagnosis in case of emergency to check the integrity of white matter fiber bundles.
8

統計品管中製程能力指標決策程序之研究 / Some Decision Procedures For The Capability Index In Quality Control Process

李仁棻, Lee, Ren Fen Unknown Date (has links)
The process capability index is commonly used to assess the capability of a manufacturing process. It combines the specification limits and the process variation into a single index, making its meaning easy for users to grasp. If one claims that a process capability exceeds a given value, then, controlling the Type I and Type II error rates simultaneously, the critical value and the sample size n can be determined. When several processes all exceed a given capability value and one wishes to select the process with the largest capability, we propose an objective criterion for making this selection. The distinctive feature of this thesis is that the critical value and the sample size n are determined analytically, and that an objective selection criterion is provided for identifying the largest process capability. The study finds that although the analytic approximations to the commonly used statistical table values involve some error, the error is small; hence the methods and conclusions discussed here are suitable for on-line application.
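The basic capability indices combining specification limits and process variation can be sketched as follows. This is a generic stdlib-Python illustration with invented specification limits and simulated data; the thesis's analytic decision procedure for critical values and sample sizes is not reproduced here.

```python
import math
import random

def capability_indices(xs, lsl, usl):
    # Cp compares the specification width to the process spread (6 sigma);
    # Cpk additionally penalizes an off-center process mean.
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

random.seed(4)
# Hypothetical in-spec process: target 10.0, spread 0.5, specs 8.0-12.0.
xs = [random.gauss(10.0, 0.5) for _ in range(2000)]
cp, cpk = capability_indices(xs, lsl=8.0, usl=12.0)
print(cp > 1.0 and cpk > 1.0)  # True: a capable process by the usual rule of thumb
```

Because Cpk ≤ Cp by construction, a large gap between the two flags a process that is tight but badly centered.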
9

Implications of neuronal excitability and morphology for spike-based information transmission

Hesse, Janina 29 November 2017 (has links)
Signal processing in nervous systems is shaped by the connectome as well as the cellular properties of nerve cells.
In this thesis, two cellular properties are investigated with respect to the functional adaptations they provide: It is shown that neuronal morphology can improve signal transmission under energetic constraints, and that even small changes in biophysical parameters can switch spike generation, and thus information encoding. In the first project of the thesis, mathematical modeling and data are deployed to suggest energy-efficient signaling as a major evolutionary pressure behind morphological adaptations of cell body location: In order to save energy, the electrical signal transmission from dendrite to axon can be enhanced if a relatively small cell body is located between dendrite and axon, while a relatively large cell body should be externalized. In the second project, it is shown that biophysical parameters, such as temperature, membrane leak or capacitance, can transform neuronal excitability (i.e., the spike onset bifurcation) and, with that, spike-based information processing. This thesis identifies the so-called saddle-node-loop bifurcation as the transition with particularly drastic functional implications. Besides altering neuronal filters and stimulus locking, the saddle-node-loop bifurcation leads to an increase in network synchronization, which may potentially be relevant for the initiation of seizures in response to increased temperature, such as during febrile seizures.
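The link between single-neuron dynamics and network synchronization can be illustrated, very loosely, with a mean-field Kuramoto phase model. This generic sketch is not the thesis's neuron model; it only demonstrates how an order parameter quantifies synchrony and how synchrony grows with coupling strength (all parameter values are invented):

```python
import math
import random

def order_parameter(phases):
    # Kuramoto order parameter r in [0, 1]: 1 = full synchrony,
    # values near 0 = incoherent phases.
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(coupling, n=100, steps=2000, dt=0.05, seed=5):
    # Euler integration of mean-field Kuramoto phase oscillators with
    # Gaussian-distributed natural frequencies.
    rng = random.Random(seed)
    omega = [rng.gauss(1.0, 0.1) for _ in range(n)]
    phi = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        re = sum(math.cos(p) for p in phi) / n
        im = sum(math.sin(p) for p in phi) / n
        r, psi = math.hypot(re, im), math.atan2(im, re)
        phi = [(p + dt * (w + coupling * r * math.sin(psi - p))) % (2.0 * math.pi)
               for p, w in zip(phi, omega)]
    return order_parameter(phi)

weak, strong = simulate(0.05), simulate(1.0)
print(strong > weak)  # True: stronger coupling yields higher synchrony
```

In the thesis's setting the effective coupling between neurons changes with the spike onset bifurcation rather than with an explicit coupling constant, but the order-parameter diagnostic is the same kind of synchrony measure.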
