141

Quantifying Trust in Deep Learning Ultrasound Models by Investigating Hardware and Operator Variance

Zhu, Calvin January 2021
Ultrasound (US) is the most widely used medical imaging modality due to its low cost, portability, real-time imaging ability, and use of non-ionizing radiation. However, unlike other imaging modalities such as CT or MRI, it is heavily operator dependent, requiring trained expertise to leverage these benefits. Recently there has been an explosion of interest in artificial intelligence (AI) across the medical community, and many are turning to deep learning (DL) models to assist in diagnosis. However, deep learning models do not perform as well when the training data are not fully representative of the data encountered in deployment; this mismatch, known as dataset shift, degrades model performance and can lead to misdiagnosis. Two aims to address dataset shift were proposed. The first was to quantify how US operator skill and hardware affect acquired images. The second was to use this skill quantification method to screen and match data to deep learning models to improve performance. A BLUE phantom from CAE Healthcare (Sarasota, FL) with various mock lesions was scanned by three operators using three different US systems (Siemens S3000, Clarius L15, and Ultrasonix SonixTouch), producing 39,013 images. DL models were trained on a specific set to classify the presence of a simulated tumour and tested with data from differing sets. The Xception, VGG19, and ResNet50 architectures were used to test whether the effects held across network architectures. K-Means clustering was used to separate images by operator and by hardware into clusters. This clustering algorithm was then used to screen incoming images during deployment and match each input to a DL model trained specifically on that type of operator or hardware. Results showed a noticeable difference when models were given data from differing datasets, with the largest accuracy drop being from 81.26% to 31.26%. Overall, operator differences affected DL model performance more significantly. Clustering models had much higher success separating hardware data than operator data. The proposed method reflects this result, with much higher accuracy across the hardware test set than the operator data. / Thesis / Master of Applied Science (MASc)
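A minimal sketch of the screening step described in this abstract, assuming scikit-learn and pre-extracted image feature vectors: a K-Means model acts as a router that sends each incoming image to the classifier trained on the best-matching acquisition source. The cluster count, feature representation, and model lookup are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_router(train_features: np.ndarray, n_sources: int) -> KMeans:
    """Cluster training-image features, aiming for one cluster per
    acquisition source (e.g. n_sources=3 for the three US systems)."""
    return KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit(train_features)

def route(router: KMeans, models: dict, image_features: np.ndarray):
    """Screen one incoming image and return the DL model trained on the
    closest-matching source; 'models' maps cluster index -> classifier."""
    cluster = int(router.predict(image_features.reshape(1, -1))[0])
    return models[cluster]
```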
142

Modélisation de la courbe de variance et modèles à volatilité stochastique / Forward Variance Modelling and Stochastic Volatility Models

Ould Aly, Sidi Mohamed 16 June 2011
The first part of this thesis deals with issues related to Markovian modelling of the forward variance curve. It is divided into three chapters. In the first chapter, we present the general framework of HJM-type modelling for the forward variance curve. We revisit the affine-Markov framework and illustrate it with the model proposed by Bühler (2006). In the second chapter, we propose a new model for the forward variance curve that combines features of the continuous and discrete versions of Bergomi's (2008) model, without reducing to either of them. One of the strengths of this model is that the prices of VIX futures and options can be expressed as expectations of deterministic functions of a Gaussian random variable, which reduces the calibration problem to the inversion of certain monotonic functions. In the third chapter, we propose an approximation method for pricing European options under multi-factor lognormal stochastic volatility models (including the model presented in the second chapter, Bergomi's models, and the Scott (1987) model). We obtain a third-order expansion of the density of the underlying with respect to the volatility-of-volatility parameter. We also propose a control variate method for reducing the variance of Monte Carlo simulations for pricing European options, which uses the explicit approximation we obtain for the distribution function of the underlying. The second part of this thesis studies the monotonicity properties of European option prices with respect to the parameters of the CIR process in the Heston model. It is divided into two chapters. In the first chapter (Chapter 4), we give some general results on the CIR process. We first show that the distribution tails of a combination of the CIR process and its arithmetic mean behave like exponentials. We then study the derivatives of the solution of this process with respect to the parameters of its dynamics. These derivatives are given as solutions of stochastic differential equations, which we solve to obtain representations of the derivatives in terms of the trajectories of the CIR process. Chapter 5 is devoted to the monotonicity of the price of a European put with respect to the CIR parameters and to the correlation in the Heston model. We show that, under certain conditions, European option prices are monotonic with respect to the drift parameters of the CIR process. We then show that the volatility-of-volatility parameter plays the role of a volatility if the realized variance is taken as the underlying; in particular, prices of convex options on realized variance are strictly increasing with respect to the volatility of volatility. Finally, we study the monotonicity of the European put price with respect to the correlation. We show that the put price is increasing in the correlation for small values of the spot and decreasing for large values. We then study the points where the monotonicity changes for short and long maturities.
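For readers unfamiliar with the control-variate technique mentioned in the third chapter, here is a minimal generic sketch for a European call under Black-Scholes dynamics, using the discounted terminal price (whose expectation is known) as the control. The thesis builds a sharper control from its explicit approximation of the distribution function of the underlying; the model and parameters below are simplified stand-ins, not the thesis's method.

```python
import numpy as np

def mc_call_cv(s0, k, r, sigma, T, n, seed=0):
    """Monte Carlo price of a European call with a control variate."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(st - k, 0.0)
    control = np.exp(-r * T) * st              # discounted S_T, known mean s0
    cov = np.cov(payoff, control)
    beta = cov[0, 1] / cov[1, 1]               # optimal control coefficient
    adjusted = payoff - beta * (control - s0)  # variance-reduced estimator
    return adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(n)

# Example: price, stderr = mc_call_cv(100.0, 100.0, 0.02, 0.2, 1.0, 100_000)
```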
143

Robustness of normal theory inference when random effects are not normally distributed

Devamitta Perera, Muditha Virangika January 1900
Master of Science / Department of Statistics / Paul I. Nelson / The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption of normality of both sources of variation. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-theory inference when normality does not hold. This report uses simulation to explore and assess the robustness of the F-test for the presence of an among-treatment variance component, and of the normal-theory confidence interval for the intra-class correlation coefficient, under several non-normal distributions. It was found that the power function of the F-test is robust for moderately heavy-tailed random error distributions. For very heavy-tailed random error distributions, however, power is relatively low, even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
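For reference, the one-way random effects model this report studies is usually written as follows (standard notation, not necessarily the report's own); the intra-class correlation coefficient is the share of total variance attributable to the among-treatment component:

```latex
y_{ij} = \mu + \alpha_i + \varepsilon_{ij}, \qquad
\alpha_i \sim (0, \sigma_\alpha^2), \quad
\varepsilon_{ij} \sim (0, \sigma_e^2), \qquad
\operatorname{Var}(y_{ij}) = \sigma_\alpha^2 + \sigma_e^2, \qquad
\rho = \frac{\sigma_\alpha^2}{\sigma_\alpha^2 + \sigma_e^2}.
```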
144

A computer simulation study for comparing three methods of estimating variance components

Walsh, Thomas Richard January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries
145

Tests for unequal treatment variances in crossover designs

Jung, Yoonsung January 1900
Doctor of Philosophy / Department of Statistics / John E. Boyer Jr., Dallas E. Johnson / A crossover design is an experimental design in which each experimental unit receives a series of experimental treatments over time. The order in which an experimental unit receives its treatments is called a sequence (for example, the sequence AB means that treatment A is given first, followed by treatment B). A period is the time interval during which a treatment is administered to the experimental unit. A period could range from a few minutes to several months depending on the study. Sequences usually involve subjects receiving a different treatment in each successive period. However, treatments may occur more than once in any sequence (for example, ABAB). Treatments and periods are compared within subjects, i.e. each subject serves as his/her own control. Therefore, any effect that is related to subject differences is removed from treatment and period comparisons. Carryover effects are residual effects from a previous treatment manifesting themselves in subsequent periods. Crossover designs, both with and without carryover, are traditionally analyzed assuming that the responses to different treatments have equal variances. The effects of unequal variances on traditional tests for treatment and carryover differences were recently considered in crossover designs in which the responses to treatments have unequal variances with a compound symmetry correlation structure. The likelihood function for the two-treatment/two-sequence crossover design has closed-form maximum likelihood solutions for the parameters both under the null hypothesis, H0 : sigma_A^2 = sigma_B^2, and under the alternative hypothesis, HA : not H0. Under HA : not H0, the method of moments estimators and the maximum likelihood estimators of sigma_A^2, sigma_B^2, and rho are identical. The dual balanced design, ABA/BAB, which is balanced for carryover effects, is also considered. The dual balanced design has a closed-form solution that maximizes the likelihood function under the null hypothesis, H0 : sigma_A^2 = sigma_B^2, but not under the alternative hypothesis, HA : not H0. Similarly, the three-treatment/three-sequence crossover design, ABC/BCA/CAB, has a closed-form solution that maximizes the likelihood function under the null hypothesis, H0 : sigma_A^2 = sigma_B^2 = sigma_C^2, but not under the alternative hypothesis, HA : not H0. An iterative procedure is introduced to estimate the parameters for the two- and three-treatment crossover designs. To check the performance of the likelihood ratio tests, Type I error rates and power comparisons are explored using simulations.
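To make the unequal-variance, compound-symmetry assumption concrete, a plausible way to write the within-subject covariance of the two period responses for a subject in sequence AB is (a reading of the model, not a formula quoted from the dissertation):

```latex
\operatorname{Cov}\!\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}
= \begin{pmatrix}
\sigma_A^2 & \rho\,\sigma_A \sigma_B \\
\rho\,\sigma_A \sigma_B & \sigma_B^2
\end{pmatrix},
```

so that H0 : sigma_A^2 = sigma_B^2 recovers the usual equal-variance analysis.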
146

Imprégnation du bois assistée par des ondes de pression / Pressure wave-assisted impregnation of wood

Lagrandeur, Junior January 2010
This research aimed to develop an impregnation process using shock waves to stain hardwood, and significant progress was made in the project. First, a theoretical model showed that shock waves increase penetration into a porous medium of constant permeability, but only if the liquid used is shear-thinning. Moreover, shock waves appear to attenuate variations in permeability, which would make it possible to stain less permeable species and give the wood a more uniform finish. Second, experimental tests carried out on yellow birch samples in a shock tube confirmed that shock waves impregnate wood effectively. Indeed, it was shown that increasing the number of shots and the amplitude of the shocks improves impregnation under certain conditions. In addition, shock waves can be combined with a microwave pre-treatment of the wood for better impregnation. However, the large variability of the results between each series of measurements does not allow penetration to be predicted as a function of the shock conditions, possibly because the effectiveness of the process depends on the moisture content of the wood. Finally, two mechanisms are proposed to explain the beneficial effect of shock waves on impregnation. First, the large pressure increase caused by the shock wave could damage the microstructure of the wood, increasing its permeability; however, this hypothesis could not be validated. Second, since the stain exhibits shear-thinning behaviour, the shocks possibly reduce its viscosity. This hypothesis is consistent with the conclusions obtained from the theoretical model and explains several of the experimental results.
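The shear-thinning behaviour invoked above is commonly modelled with the power-law (Ostwald-de Waele) form; the abstract does not say which rheological law the theoretical model used, so the following is only the standard version:

```latex
\mu_{\mathrm{eff}}(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1}, \qquad n < 1,
```

under which the high shear rates behind a shock front lower the stain's effective viscosity and ease penetration into the wood.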
147

A FREQUENCY SCAN/FOLLOWING SCAN TWO-WAY CARRIER ACQUISITION METHOD FOR USB SYSTEM

Jiaxing, Liu, Hongjun, Yang 10 1900
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / This paper introduces a frequency scan/following scan two-way carrier acquisition method for USB (Unified S-Band) systems and its following-scan slope decision algorithm. Several measures are used to improve two-way acquisition speed, such as selecting the initial scan direction and returning to zero along the shortest path, both of which can be implemented in software. Theoretical analysis, mathematical expressions, the design method, and experimental results are provided. Practical engineering application shows that two-way acquisition using this new method has advantages such as high speed, low cost, and programmability. The method has been widely used in Chinese USB systems.
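The abstract does not spell out the scan logic, so the sketch below only illustrates the two measures it names: choosing the initial sweep direction and retuning straight back to zero (the shortest path) instead of retracing the sweep. All names and the scan shape are assumptions, not the paper's algorithm.

```python
def scan_offsets(f_max: float, step: float, start_direction: int = +1):
    """Yield carrier-frequency offsets for a two-sided acquisition scan.

    start_direction would come from prior knowledge (e.g. the predicted
    Doppler sign); after exhausting one side, retune directly to zero
    before sweeping the other side.
    """
    f = 0.0
    while abs(f) < f_max:            # sweep the preferred side first
        f += start_direction * step
        yield f
    yield 0.0                        # shortest-path return to zero
    f = 0.0
    while abs(f) < f_max:            # then sweep the opposite side
        f -= start_direction * step
        yield f
```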
148

ANOVA - The Effect of Outliers

Halldestam, Markus January 2016
This bachelor’s thesis focuses on the effect of outliers on one-way analysis of variance, examining whether the estimates in ANOVA are robust and whether the test itself is robust to the influence of extreme outliers. The robustness of the estimates is examined using the breakdown point, while the robustness of the test is examined by simulating the hypothesis test under some extreme situations. This study finds evidence that the estimates in ANOVA are sensitive to outliers, i.e. that the procedure is not robust. Samples with a larger proportion of extreme outliers have a type-I error probability above the nominal level.
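A minimal sketch of the kind of simulation described, estimating the type-I error of the one-way ANOVA F-test under contamination by extreme outliers; the group sizes, outlier fraction, and contamination scale below are arbitrary choices, not the thesis's settings.

```python
import numpy as np
from scipy.stats import f_oneway

def type1_error(n_groups=5, n_per=20, frac_out=0.05, scale_out=20.0,
                n_sims=10_000, alpha=0.05, seed=1):
    """All groups share the same mean, so every rejection is a false positive."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        groups = []
        for _ in range(n_groups):
            x = rng.standard_normal(n_per)
            outlier = rng.random(n_per) < frac_out
            x[outlier] *= scale_out        # inflate a few observations
            groups.append(x)
        if f_oneway(*groups).pvalue < alpha:
            rejections += 1
    return rejections / n_sims
```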
149

Longitudinal analysis on AQI in 3 main economic zones of China

Wu, Kailin 09 October 2014
In modern China, air pollution has become a pressing environmental problem. Over the last two years, the air pollution problem, as measured by PM 2.5 (fine particulate matter), has worsened. My report carries out a longitudinal data analysis of the air quality index (AQI) in three main economic zones of China. Longitudinal data, or repeated measures data, can be viewed as multilevel data with repeated measurements nested within individuals. I arrive at some conclusions about why the three areas have different AQI, attributed mainly to factors such as population, GDP, temperature, humidity, and whether the area is inland or by the sea. The residual variance is partitioned into a between-zone component (the variance of the zone-level residuals) and a within-zone component (the variance of the city-level residuals). The zone residuals represent unobserved zone characteristics that affect AQI. The model building mainly follows the bottom-up procedure described by West et al. (2007), with reference to Singer and Willett (2003) for the non-linear situations. This report also compares a quartic curve model with a piecewise growth model for these data. The final model is a piecewise model with time-level and zone-level predictors and temperature-by-time interactions. / text
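A minimal sketch of the multilevel structure described above, assuming statsmodels: AQI measurements nested within zones, a zone-level random intercept partitioning the residual variance, and a temperature-by-time interaction. The column names ('aqi', 'time', 'temp', 'zone') and the input file are hypothetical, not the report's.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aqi_panel.csv")   # hypothetical long-format panel data
model = smf.mixedlm("aqi ~ time + temp + time:temp",  # temperature-by-time term
                    data=df, groups=df["zone"])       # zone random intercept
result = model.fit()
print(result.summary())  # between-zone vs within-zone variance components
```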
150

Monte Carlo Simulation of Boundary Crossing Probabilities for a Brownian Motion and Curved Boundaries

Drabeck, Florian January 2005
We are concerned with the probability that a standard Brownian motion W(t) crosses a curved boundary c(t) on a finite interval [0, T]. Let this probability be denoted by Q(c(t); T). Due to recent advances in research, a new way of estimating Q(c(t); T) has become feasible: Monte Carlo simulation. Wang and Pötzelberger (1997) derived an explicit formula for the boundary crossing probability of piecewise linear functions, which has the form of an expectation. Based on this formula we proceed as follows: First we approximate the general boundary c(t) by a piecewise linear function c_m(t) on a uniform partition. Then we simulate Brownian sample paths in order to evaluate the expectation in the authors' formula for c_m(t). The bias incurred by estimating Q(c_m(t); T) rather than Q(c(t); T) can be bounded by a formula of Borovkov and Novikov (2005). The variance of the estimator is then the main factor limiting accuracy. The main goal of this dissertation is to find and evaluate variance-reduction techniques in order to enhance the quality of the Monte Carlo estimator for Q(c(t); T). Among the techniques we discuss are antithetic sampling, stratified sampling, importance sampling, control variates, and transformations of the original problem. We analyze each of these techniques thoroughly from a theoretical point of view. Further, we test each technique empirically through simulation experiments on several carefully chosen boundaries. In order to assess our results we set them in relation to a previously established benchmark. As a result of this dissertation we derive some very potent techniques that yield a substantial improvement in accuracy. Further, we provide a detailed record of our simulation experiments. (author's abstract)
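As a concrete illustration, here is a sketch of the estimator together with one of the variance-reduction techniques (antithetic sampling): simulate the Brownian path on a uniform grid, then multiply the conditional Brownian-bridge non-crossing probabilities across subintervals, in the spirit of the Wang and Pötzelberger (1997) representation. Function names and the example boundary are assumptions.

```python
import numpy as np

def crossing_prob(c, T, m, n_paths, antithetic=True, seed=0):
    """Monte Carlo estimate of Q(c(t); T) for a standard Brownian motion,
    approximating the boundary piecewise linearly on a uniform grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, m + 1)
    dt = T / m
    cb = c(t)                                    # boundary values on the grid
    z = rng.standard_normal((n_paths, m))
    estimates = []
    for zb in ([z, -z] if antithetic else [z]):  # antithetic pair: W and -W
        w = np.hstack([np.zeros((n_paths, 1)),
                       np.cumsum(np.sqrt(dt) * zb, axis=1)])
        d = cb - w                               # gap below the boundary
        hit = (d <= 0).any(axis=1)               # crossed at a grid point
        # P(bridge crosses a linear boundary | endpoints) = exp(-2 d_i d_{i+1}/dt)
        stay = 1.0 - np.exp(-2.0 * np.clip(d[:, :-1] * d[:, 1:], 0.0, None) / dt)
        estimates.append(1.0 - np.where(hit, 0.0, stay.prod(axis=1)))
    return float(np.mean(estimates))

# Hypothetical usage: a linear boundary c(t) = 1 + t on [0, 1]
# q = crossing_prob(lambda t: 1.0 + t, T=1.0, m=64, n_paths=100_000)
```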
