About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
161

Design and Analysis Methods for Cluster Randomized Trials with Pair-Matching on Baseline Outcome: Reduction of Treatment Effect Variance

Park, Misook 01 January 2006 (has links)
Cluster randomized trials (CRTs) are comparative studies designed to evaluate interventions in which the unit of randomization and analysis is the cluster, while the units of observation are the individuals within clusters. Typically such designs involve a limited number of clusters, and thus the variation between clusters is left uncontrolled. Experimental designs and analysis strategies that minimize this variance are required. In this work we focus on the CRT with pre-post intervention measures. By incorporating the baseline measure into the analysis, we can effectively reduce the variance of the treatment effect. Well-known methods such as adjustment for baseline as a covariate and analysis of differences of pre and post measures are two ways to accomplish this. An alternative way of incorporating baseline measures in the data analysis is to order the clusters on baseline means, pair-match the two clusters with the smallest means, pair-match the next two, and so on. Our results show that matching on baseline helps to control the between-cluster variation when there is a high correlation between the pre and post measures. Six cases of designs and analyses are evaluated by comparing the variance of the treatment effect and the power of related hypothesis tests. We observed that, given our assumptions, the adjusted analysis for baseline as a covariate without pair-matching is the best choice in terms of variance. Future work may reveal that other matching schemes that reflect the natural clustering of experimental units could reduce the variance and increase the power over the standard methods.
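As a concrete illustration of the pair-matching scheme described in this abstract, here is a minimal sketch: clusters are ordered by baseline mean, adjacent clusters are paired, and treatment is randomized within each pair. The data and names are illustrative assumptions, not taken from the thesis; an even number of clusters is assumed.

```python
import numpy as np

def pair_match_and_randomize(baseline_means, rng):
    """Order clusters by baseline mean, pair adjacent clusters, and randomly
    assign one member of each pair to treatment (the other to control)."""
    assert len(baseline_means) % 2 == 0, "pair-matching needs an even number of clusters"
    order = np.argsort(baseline_means)        # cluster indices sorted by baseline mean
    pairs = order.reshape(-1, 2)              # adjacent clusters form a matched pair
    assignment = np.zeros(len(baseline_means), dtype=int)
    for a, b in pairs:
        assignment[rng.choice([a, b])] = 1    # coin flip within each pair
    return pairs, assignment

rng = np.random.default_rng(0)
baseline = rng.normal(50, 10, size=8)         # illustrative baseline cluster means
pairs, treat = pair_match_and_randomize(baseline, rng)
print(pairs)    # which clusters are matched together
print(treat)    # 1 = treatment, 0 = control
```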
162

Modélisation des flux de carbone, d'énergie et d'eau entre l'atmosphère et des écosystèmes de steppe sahélienne avec un modèle de végétation global / Modelling of carbon, water and energy fluxes between the atmosphere and Sahelian steppe ecosystems with a dynamic global vegetation model

Brender, Pierre 29 May 2012 (has links)
Compte tenu de la vulnérabilité de la population rurale de la région sahélienne aux aléas pluviométriques, et devant les ambitions de certains acteurs d'utiliser le levier de l'usage des terres pour contribuer à l'atténuation du changement climatique, il est important de comprendre les facteurs contribuant à la variabilité de la couverture végétale au Sahel. Une synthèse de la littérature expliquant l'évolution récente de la végétation au Sahel est donc d'abord présentée. Les études s'intéressant au paradigme qui souligne l'impact de l'usage des terres sur les précipitations en Afrique de l'Ouest évaluent principalement ces effets par le couplage de modèles dynamiques globaux de végétation – DGVM – avec des modèles de circulation générale. C'est à l'amélioration d'un tel DGVM, ORCHIDEE, développé à l'Institut Pierre Simon Laplace, que le reste du travail cherche à contribuer. Comme d'autres études ont montré qu'il était possible d'utiliser en première approximation les steppes pâturées et les jachères pour décrire le comportement global de la surface sahélienne, les écarts entre modèle et mesures sont caractérisés pour une jachère située à proximité de Wankama (Niger). Plus précisément, les forces et faiblesses de la paramétrisation et de la structure par défaut du modèle sont diagnostiquées, et l'importance de la réduction d'erreur permise par l'optimisation de certains des paramètres est donnée. En particulier, l'emploi d'une résolution aux différences finies de la diffusion de l'eau dans la colonne de sol est évalué, dans la mesure où cela permet de mieux simuler la réponse rapide du flux évaporatoire aux événements pluvieux que le schéma conceptuel utilisé par défaut dans ORCHIDEE. Le réalisme du modèle est également mesuré à l'échelle régionale, par la comparaison d'observations de NDVI GIMMS_3G à la couverture végétale simulée par le modèle en réponse à différents forçages climatiques. Si les modifications introduites au cours du travail ne permettent pas de mieux décrire les tendances de la végétation au cours des dernières décennies, tirer parti des leçons du présent travail pourra se révéler utile. Il en est de même des conclusions de l'étude de la transitivité des biais conditionnels du modèle réalisée avec Tao Wang et présentée en annexe B. / The evolution of the land-surface conditions is often assessed through the use of “dynamic global vegetation models”, as is shown in a review of the current understanding of the factors of variability and of the recent evolution of the vegetation cover in the Sahel. Such models are also coupled to atmospheric general circulation models to evaluate the land feedback on precipitation in monsoonal climates. Improving the skill of such surface models in simulating the radiative and turbulent fluxes between the land surface and the atmosphere in the Sahel, over a range of scales from hourly to multi-annual, therefore has potentially significant implications. This is especially true considering the vulnerability of the rural population of the region, which largely relies on rainfed agriculture, and the interest in the evolution of the carbon stocks of ecosystems in the context of climate change. Such a work on the ORCHIDEE model is presented here. In addition to croplands, rangelands and fallows represent a large share of Sahelian landscapes and have intermediate characteristics between erosion glacis and acacia bushes. As such, their evolution (in terms of albedo, roughness length, …) may be used to study the behaviour of the Sahelian land surface as a first approximation. Differences between model outputs and field observations are quantified for a fallow close to Wankama (Niger). More precisely, some of the drawbacks of the standard parametrisation and structure of the model are diagnosed, and the reduction of the model-observation mismatch that results from optimizing some of the parameters (plant phenology, …) is given. In particular, the use of a finite-difference resolution of the soil water diffusion is considered, as it enables a better simulation of the fast response of evaporative fluxes to rainfall than the conceptual scheme routinely used in ORCHIDEE. The benefits of such a “physical” hydrological scheme for the different outputs of the surface scheme are evaluated. The realism of the model is also measured at the regional scale, through a comparison with GIMMS_3G NDVI time series over West Africa. Although the modifications introduced in the model do not improve its ability to describe the vegetation cover trends over the last decades in the region, several lessons can be kept from the analysis, especially from the work on the transitivity of state-dependent model biases conducted with Tao Wang, which is presented in annex B.
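The finite-difference treatment of soil water diffusion mentioned in this abstract can be illustrated with a generic explicit scheme on a discretised soil column. This is a minimal sketch with a constant diffusivity and zero-flux boundaries, not the ORCHIDEE implementation; all layer counts, time steps, and moisture values are assumptions.

```python
import numpy as np

nz, dz, dt = 20, 0.05, 60.0          # 20 layers of 5 cm, 60 s time step (illustrative)
D = 1e-7                             # assumed constant diffusivity (m^2/s)
theta = np.full(nz, 0.15)            # volumetric soil moisture profile
theta[0] = 0.30                      # top layer wetted by a rain event

assert D * dt / dz**2 <= 0.5         # stability condition of the explicit scheme

for _ in range(1000):                # integrate the diffusion equation forward in time
    padded = np.pad(theta, 1, mode="edge")            # zero-gradient (no-flux) boundaries
    theta = theta + D * dt / dz**2 * (padded[2:] - 2 * theta + padded[:-2])

print(theta.round(3))                # moisture has diffused downward from the wet top layer
```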
163

Practical usage of optimal portfolio diversification using maximum entropy principle

Chopyk, Ostap January 2015 (has links)
This thesis extends the investigation of the principle of maximum entropy applied to the portfolio diversification problem, when the portfolio consists of stocks. Entropy, as a measure of diversity, is used as the objective function in the optimization problem with given side constraints. The principle of maximum entropy, by its very nature, addresses two problems: it reduces the estimation error of the inputs, as it has a shrinkage interpretation, and it leads to a more diversified portfolio. Furthermore, the portfolio optimization is improved by using design-free estimation of the variance-covariance matrices of stock returns. Design-free estimation has been shown to provide superior estimates of large variance-covariance matrices, particularly for data with heavy-tailed densities. To assess and compare the performance of the portfolios, their out-of-sample Sharpe ratios are used. In nominal terms, the out-of-sample Sharpe ratios are almost always lower for the portfolios created using the maximum entropy principle than for the 'classical' Markowitz efficient portfolio. However, these out-of-sample Sharpe ratios are not statistically different, as tested by constructing studentized time-series...
164

Utilisation d'une assimilation d'ensemble pour modéliser des covariances d'erreur d'ébauche dépendantes de la situation météorologique à échelle convective / Use of an ensemble data assimilation to model flow-dependent background error covariances at convective scale

Ménétrier, Benjamin 03 July 2014 (has links)
L'assimilation de données vise à fournir aux modèles de prévision numérique du temps un état initial de l'atmosphère le plus précis possible. Pour cela, elle utilise deux sources d'information principales : des observations et une prévision récente appelée "ébauche", toutes deux entachées d'erreurs. La distribution de ces erreurs permet d'attribuer un poids relatif à chaque source d'information, selon la confiance que l'on peut lui accorder, d'où l'importance de pouvoir estimer précisément les covariances de l'erreur d'ébauche. Les méthodes de type Monte-Carlo, qui échantillonnent ces covariances à partir d'un ensemble de prévisions perturbées, sont considérées comme les plus efficaces à l'heure actuelle. Cependant, leur coût de calcul considérable limite de facto la taille de l'ensemble. Les covariances ainsi estimées sont donc contaminées par un bruit d'échantillonnage, qu'il est nécessaire de filtrer avant toute utilisation. Cette thèse propose des méthodes de filtrage du bruit d'échantillonnage dans les covariances d'erreur d'ébauche pour le modèle à échelle convective AROME de Météo-France. Le premier objectif a consisté à documenter la structure des covariances d'erreur d'ébauche pour le modèle AROME. Une assimilation d'ensemble de grande taille a permis de caractériser la nature fortement hétérogène et anisotrope de ces covariances, liée au relief, à la densité des observations assimilées, à l'influence du modèle coupleur, ainsi qu'à la dynamique atmosphérique. En comparant les covariances estimées par deux ensembles indépendants de tailles très différentes, le bruit d'échantillonnage a pu être décrit et quantifié. Pour réduire ce bruit d'échantillonnage, deux méthodes ont été développées historiquement, de façon distincte : le filtrage spatial des variances et la localisation des covariances. On montre dans cette thèse que ces méthodes peuvent être comprises comme deux applications directes du filtrage linéaire des covariances. L'existence de critères d'optimalité spécifiques au filtrage linéaire de covariances est démontrée dans une seconde partie du travail. Ces critères présentent l'avantage de n'impliquer que des grandeurs pouvant être estimées de façon robuste à partir de l'ensemble. Ils restent très généraux et l'hypothèse d'ergodicité nécessaire à leur estimation n'est requise qu'en dernière étape. Ils permettent de proposer des algorithmes objectifs de filtrage des variances et pour la localisation des covariances. Après un premier test concluant dans un cadre idéalisé, ces nouvelles méthodes ont ensuite été évaluées grâce à l'ensemble AROME. On a pu montrer que les critères d'optimalité pour le filtrage homogène des variances donnaient de très bons résultats, en particulier le critère prenant en compte la non-gaussianité de l'ensemble. La transposition de ces critères à un filtrage hétérogène a permis une légère amélioration des performances, à un coût de calcul plus élevé cependant. Une extension de la méthode a ensuite été proposée pour les composantes du tenseur de la hessienne des corrélations locales. Enfin, les fonctions de localisation horizontale et verticale ont pu être diagnostiquées, uniquement à partir de l'ensemble. Elles ont montré des variations cohérentes selon la variable et le niveau concernés, et selon la taille de l'ensemble. 
Dans une dernière partie, on a évalué l'influence de l'utilisation de variances hétérogènes dans le modèle de covariances d'erreur d'ébauche d'AROME, à la fois sur la structure des covariances modélisées et sur les scores des prévisions. Le manque de réalisme des covariances modélisées et l'absence d'impact positif pour les prévisions soulèvent des questions sur une telle approche. Les méthodes de filtrage développées au cours de cette thèse pourraient toutefois mener à d'autres applications fructueuses au sein d'approches hybrides de type EnVar, qui constituent une voie prometteuse dans un contexte d'augmentation de la puissance de calcul disponible. / Data assimilation aims at providing an initial state as accurate as possible for numerical weather prediction models, using two main sources of information: observations and a recent forecast called the “background”. Both are affected by systematic and random errors. The precise estimation of the distribution of these errors is crucial for the performance of data assimilation. In particular, background error covariances can be estimated by Monte-Carlo methods, which sample them from an ensemble of perturbed forecasts. Because of computational costs, the ensemble size is much smaller than the dimension of the error covariances, and statistics estimated in this way are spoiled with sampling noise. Filtering is necessary before any further use. This thesis proposes methods to filter the sampling noise of forecast error covariances. The final goal is to improve the background error covariances of the convective-scale model AROME of Météo-France. The first goal is to document the structure of background error covariances for AROME. A large ensemble data assimilation is set up for this purpose. It allows a fine characterization of the highly heterogeneous and anisotropic nature of the covariances. These covariances are strongly influenced by the topography, by the density of assimilated observations, by the influence of the coupling model, and also by the atmospheric dynamics. The comparison of the covariances estimated from two independent ensembles of very different sizes gives a description and quantification of the sampling noise. To damp this sampling noise, two methods have historically been developed in the community: spatial filtering of variances and localization of covariances. We show in this thesis that these methods can be understood as two direct applications of the theory of linear filtering of covariances. The existence of specific optimality criteria for the linear filtering of covariances is demonstrated in the second part of this work. These criteria have the advantage of involving quantities that can be robustly estimated from the ensemble only. They are fully general, and the ergodicity assumption that is necessary for their estimation is required in the last step only. They allow the variance filtering and the covariance localization to be objectively determined. These new methods are first illustrated in an idealized framework. They are then evaluated with various metrics, thanks to the large ensemble of AROME forecasts. It is shown that the optimality criteria for the homogeneous filtering of variances yield very good results, particularly the criterion taking the non-Gaussianity of the ensemble into account. The transposition of these criteria to a heterogeneous filtering slightly improves performance, yet at a higher computational cost.
An extension of the method is proposed for the components of the local correlation Hessian tensor. Finally, horizontal and vertical localization functions are diagnosed from the ensemble itself. They show consistent variations depending on the considered variable and level, and on the ensemble size. Lastly, the influence of using heterogeneous variances in the background error covariance model of AROME is evaluated. We focus first on the description of the modelled covariances using these variances and then on forecast scores. The lack of realism of the modelled covariances and the negative impact on scores raise questions about such an approach. However, the filtering methods developed in this thesis are general. They are likely to lead to other fruitful applications within the framework of hybrid EnVar approaches, which are a promising avenue as available computational resources grow.
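Covariance localization, one of the two filtering approaches discussed in this abstract, can be sketched on a toy 1-D grid: the noisy sample covariance from a small ensemble is multiplied element-wise (Schur product) by a distance-based taper, which damps spurious long-range covariances. The Gaussian taper and the toy grid are assumptions, not the AROME configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ens, L = 50, 10, 5.0                       # grid size, ensemble size, localization length

# "True" covariance: smooth correlations decaying with distance
x = np.arange(n)
dist = np.abs(x[:, None] - x[None, :])
true_cov = np.exp(-(dist / 10.0) ** 2)

# Small ensemble drawn from it -> noisy sample covariance
ens = rng.multivariate_normal(np.zeros(n), true_cov, size=n_ens)
sample_cov = np.cov(ens, rowvar=False)

# Localization: element-wise (Schur) product with a compactly decaying taper
taper = np.exp(-0.5 * (dist / L) ** 2)
localized_cov = sample_cov * taper

for name, C in [("raw", sample_cov), ("localized", localized_cov)]:
    err = np.linalg.norm(C - true_cov) / np.linalg.norm(true_cov)
    print(f"{name:10s} relative error: {err:.2f}")
```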
165

An investigation of temporal variability of CO2 fluxes in a boreal coniferous forest and a bog in central Siberia: from local to regional scale

Park, Sung-Bin 04 July 2019 (has links)
No description available.
166

Navigation Strategies for Improved Positioning of Autonomous Vehicles

Sandmark, David January 2019 (has links)
This report proposes three algorithms using model predictive control (MPC) to improve the positioning accuracy of an unmanned vehicle. The developed algorithms succeed in reducing the uncertainty in position by allowing the vehicle to deviate from a planned path, and can also handle the presence of occluding objects. To achieve this improvement, a compromise is made between following a predefined trajectory and maintaining good positioning accuracy. Due to recently developed threats against systems that use global navigation satellite systems to localise themselves, there is an increased need for localisation methods that can function without relying on signals from distant satellites. One example of such a system is a vehicle using a range-bearing sensor in combination with a map to localise itself. However, a system relying only on these measurements to estimate its position during a mission may get lost or accumulate an unacceptable level of uncertainty in its position estimates. Therefore, this thesis proposes a selection of algorithms developed with the purpose of improving the positioning accuracy of such an autonomous vehicle without changing the available measurement equipment. These algorithms are: (i) a nonlinear MPC solving the full optimisation problem; (ii) a linear MPC using a linear approximation of the positioning uncertainty to reduce the computational complexity; and (iii) a nonlinear MPC using a linear approximation of an underlying component of the positioning uncertainty (henceforth called the approximate MPC), which reduces computational complexity while retaining good performance. The algorithms were evaluated in two types of simulated scenarios in MATLAB. In these simulations, the nonlinear, linear and approximate MPC algorithms reduced the root mean squared positioning error by 20-25 %, 14-18 %, and 23-27 % respectively, compared to following a reference path. The approximate MPC appears to have the best performance of the three algorithms in the examined scenarios, while the linear MPC may be used when the approximate MPC is too computationally costly. The nonlinear MPC solving the full problem is a reasonable choice only when computing power is not limited, or when the approximation used in the approximate MPC is too inaccurate for the application.
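The trade-off encoded by these MPC formulations (track the reference path versus stay where the range-bearing sensor keeps the position estimate sharp) can be sketched with a toy problem: a point vehicle with integrator dynamics, a straight reference path, and a single landmark whose distance serves as a crude uncertainty proxy. This is not any of the three thesis algorithms; the dynamics, horizon, and weights are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt, H = 1.0, 5                                   # time step and prediction horizon
p0 = np.array([0.0, 0.0])                        # current position
ref = np.column_stack([np.arange(1, H + 1) * 1.0, np.zeros(H)])  # straight reference path
landmark = np.array([2.0, 2.0])                  # feature used for localisation
lam = 0.3                                        # weight on the uncertainty proxy

def rollout(u_flat):
    u = u_flat.reshape(H, 2)
    p, traj = p0.copy(), []
    for k in range(H):
        p = p + dt * u[k]                        # simple integrator dynamics
        traj.append(p.copy())
    return np.array(traj)

def cost(u_flat):
    traj = rollout(u_flat)
    track = np.sum((traj - ref) ** 2)            # path-following error
    uncert = np.sum((traj - landmark) ** 2)      # grows when far from the landmark
    return track + lam * uncert

res = minimize(cost, np.zeros(2 * H), method="BFGS")
print(rollout(res.x).round(2))   # planned positions bend toward the landmark vs. the straight path
```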
167

Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Management (CALM) Model Within the Markowitz Framework

Zhang, Yafei 08 May 2014 (has links)
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization. Inherent in any estimation technique is the capacity to inaccurately reflect current market conditions. Markowitz portfolio optimization theory, which we use as the basis for our analysis, typically assumes that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of covariance matrices of asset returns no longer reflect current conditions. We use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness during varying market conditions. We test this technique against CALM (Covariance Adjustment for Liability Management Method), which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred over the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
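For reference, a Ledoit-Wolf shrinkage estimator is available in scikit-learn; the sketch below compares it with the raw sample covariance in a minimum-variance portfolio built from simulated returns. The return model is an illustrative assumption, and the CALM model is not reproduced here.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(3)
T, N = 60, 40                                    # few observations relative to assets
true_cov = 0.0004 * (0.3 * np.ones((N, N)) + 0.7 * np.eye(N))
returns = rng.multivariate_normal(np.zeros(N), true_cov, size=T)

def min_var_weights(cov):
    ones = np.ones(len(cov))
    w = np.linalg.solve(cov, ones)               # w proportional to cov^{-1} 1
    return w / w.sum()

sample_cov = np.cov(returns, rowvar=False)
lw_cov = LedoitWolf().fit(returns).covariance_   # shrinkage-regularised estimate

for name, C in [("sample", sample_cov), ("Ledoit-Wolf", lw_cov)]:
    w = min_var_weights(C)
    realised = w @ true_cov @ w                  # variance under the true covariance
    print(f"{name:12s} realised portfolio variance: {realised:.2e}")
```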
168

Uso de informações de parentesco e modelos mistos para avaliação e seleção de genótipos de cana-de-açúcar / Use of kinship information and mixed models for the evaluation and selection of sugarcane genotypes

Freitas, Edjane Gonçalves de 02 August 2013 (has links)
Nos programas de melhoramento de cana-de-açúcar todos os anos são instalados experimentos com o objetivo de avaliar genótipos que podem eventualmente ser recomendados para o plantio, ou mesmo como genitores. Este objetivo é atingido com o emprego de experimentos em diferentes locais, durante diferentes colheitas. Além disso, frequentemente há grande desbalanceamento, pois nem todos os genótipos são avaliados em todos os experimentos. O emprego de abordagens tradicionais como análise de variância conjunta (ANAVA) é inviável devido à condição de desbalanceamento e ao fato de as pressuposições não modelarem adequadamente o relacionamento entre as observações. O emprego de modelos mistos utilizando a metodologia REML/BLUP é uma alternativa para análise desses experimentos em cana-de-açúcar, permitindo também incorporar a informação de parentesco entre os indivíduos. Nesse contexto, foram analisados 44 experimentos (locais) do programa de melhoramento da cana-de-açúcar do Instituto Agronômico de Campinas (IAC), com 74 genótipos (clones e variedades) e com até 5 colheitas. O delineamento foi o de blocos ao acaso com 2 a 6 repetições. O caráter analisado foi TPH (tonelada de pol por hectare). Foram testados 40 modelos: nos 20 primeiros foram avaliadas diferentes estruturas de VCOV para locais e colheitas, e nos 20 seguintes, além das matrizes de VCOV, foi incorporada a matriz de parentesco genético, A. De acordo com o AIC, verificou-se que o Modelo 11, o qual assume as matrizes FA1, AR1 e ID para locais, colheitas e genótipos, respectivamente, foi o melhor e, portanto, o mais eficiente para a seleção de genótipos superiores. Quando comparado ao modelo tradicional (médias dos experimentos), houve mudanças no ranqueamento dos genótipos. Há correlação entre o modelo tradicional e o Modelo 11 (correlação de 0,63, p-valor < 0,001). A opção de utilizar modelo misto sem ajustar as matrizes de VCOV (Modelo 1) é relativamente melhor do que usar o modelo tradicional. Isto foi evidenciado pela correlação mais alta entre os Modelos 1 e 11 (correlação de 0,87, p-valor < 0,001). Acredita-se que o emprego do Modelo 11, junto com a experiência do melhorista, poderá aumentar a eficiência de seleção em programas de melhoramento de cana-de-açúcar. / In sugarcane breeding programs, experiments are installed every year to evaluate the performance of genotypes, in order to select superior varieties and genitors. The use of ordinary approaches such as joint analysis of variance (ANOVA) is unfeasible due to the unbalanced data and to assumptions that do not adequately model the relationship among the observations. Mixed models fitted by the REML/BLUP methodology are an alternative; they also allow the incorporation of kinship information between individuals. In this context, we analyzed 44 trials (locations) from the sugarcane breeding program of the Agronomic Institute of Campinas (IAC), with 74 genotypes (varieties and clones) and up to 5 harvests. The experimental design was randomized blocks with 2-6 replicates. The trait examined was TPH (tons of pol per hectare). We tested 40 models: the first 20 evaluated different VCOV structures for locations and harvests, and the following 20 additionally incorporated the genetic relationship matrix A. According to the AIC, Model 11, which assumes FA1, AR1 and ID matrices for locations, harvests and genotypes, respectively, was the best. There is a moderate correlation between the traditional model and Model 11 when ranking the genotypes (correlation of 0.63, p-value < 0.001).
Using a mixed model without adjusting the VCOV matrices (Model 1) is better than using the traditional model. This was suggested by the higher correlation between Models 1 and 11 (correlation of 0.87, p-value < 0.001). We believe that the use of Model 11, together with the breeder's experience, can increase the efficiency of selection in sugarcane breeding programs.
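A minimal sketch of an REML mixed-model fit with random genotype effects (BLUPs) can be written with statsmodels. The kinship matrix A and the FA1/AR1 covariance structures used in the thesis go beyond this sketch; all column names and data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
genotypes = [f"G{i:02d}" for i in range(20)]
locations = [f"L{i}" for i in range(5)]

rows = []
for g in genotypes:
    g_eff = rng.normal(0, 2.0)                    # simulated "true" genotype effect
    for loc in locations:
        rows.append({"TPH": 10 + g_eff + rng.normal(0, 1.5),
                     "genotype": g, "location": loc})
data = pd.DataFrame(rows)
data["one_group"] = 1                             # dummy group so genotype acts as a crossed random effect

# Fixed location effects, random genotype effects, fitted by REML
model = smf.mixedlm("TPH ~ C(location)", data, groups="one_group",
                    vc_formula={"genotype": "0 + C(genotype)"})
fit = model.fit(reml=True)
print(fit.summary())                              # REML variance components
blups = list(fit.random_effects.values())[0]      # predicted genotype effects (BLUPs)
print(blups.head())
```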
169

Evaluation of motion compensated ADV measurements for quantifying velocity fluctuations

Unknown Date (has links)
This study assesses the viability of using a towfish-mounted ADV for quantifying water velocity fluctuations in the Florida Current relevant to ocean current turbine performance. For this study, a motion-compensated ADV is operated in a test flume. Water velocity fluctuations are generated by a 1.3 cm pipe suspended in front of the ADV at relative current speeds of 0.9 m/s and 0.15 m/s, giving Reynolds numbers on the order of 1000. An ADV pitching motion of ±2.5° at 0.3 Hz and a heave motion of 0.3 m amplitude at 0.2 Hz are used to evaluate the motion compensation approach. The results show that correction for motion provides up to an order-of-magnitude reduction in turbulent kinetic energy at the frequencies of motion, while the IMU is found to introduce 2% error at 1/30 Hz and 9% error at 1/60 Hz in turbulence intensity. / by James William Lovenbury. / Thesis (M.S.C.S.)--Florida Atlantic University, 2013.
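The motion-compensation principle (remove the IMU-derived platform velocity from the ADV measurement) can be sketched with a synthetic signal. The frequencies and amplitudes below mirror the test conditions described in the abstract, but the signal model, IMU error level, and sign convention are assumptions.

```python
import numpy as np

fs, T = 32.0, 120.0                                  # sampling rate (Hz) and record length (s)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(5)

true_water = 0.9 + 0.02 * rng.standard_normal(t.size)            # mean flow plus turbulence
heave_vel = 0.3 * 2 * np.pi * 0.2 * np.cos(2 * np.pi * 0.2 * t)  # 0.3 m heave at 0.2 Hz, as velocity
measured = true_water + heave_vel                    # ADV senses the flow plus its own motion
imu_estimate = heave_vel * (1 + 0.02 * rng.standard_normal(t.size))  # imperfect IMU estimate

corrected = measured - imu_estimate                  # motion-compensated water velocity

def turbulence_intensity(u):
    return np.std(u - u.mean()) / u.mean()

print(f"raw       TI: {turbulence_intensity(measured):.3f}")
print(f"corrected TI: {turbulence_intensity(corrected):.3f}")
```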
170

Improved estimation of the scale matrix in a one-sample and two-sample problem.

January 1998 (has links)
by Foon-Yip Ng.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 111-115).
Abstract also in Chinese.

Chapter 1 Introduction --- p.1
1.1 Main Problems --- p.1
1.2 The Basic Concept of Decision Theory --- p.4
1.3 The Class of Orthogonally Invariant Estimators --- p.6
1.4 Related Works --- p.8
1.5 Summary --- p.10
Chapter 2 Estimation of the Scale Matrix in a Wishart Distribution --- p.12
2.1 Review of the Previous Works --- p.13
2.2 Some Useful Statistical and Mathematical Results --- p.15
2.3 Improved Estimation of Σ under the Loss L1 --- p.18
2.4 Simulation Study for Wishart Distribution under the Loss L1 --- p.22
2.5 Improved Estimation of Σ under the Loss L2 --- p.25
2.6 Simulation Study for Wishart Distribution under the Loss L2 --- p.28
Chapter 3 Estimation of the Scale Matrix in a Multivariate F Distribution --- p.31
3.1 Review of the Previous Works --- p.32
3.2 Some Useful Statistical and Mathematical Results --- p.35
3.3 Improved Estimation of Δ under the Loss L1 --- p.38
3.4 Simulation Study for Multivariate F Distribution under the Loss L1 --- p.42
3.5 Improved Estimation of Δ under the Loss L2 --- p.46
3.6 Relationship between Wishart Distribution and Multivariate F Distribution --- p.51
3.7 Simulation Study for Multivariate F Distribution under the Loss L2 --- p.52
Chapter 4 Estimation of the Scale Matrix in an Elliptically Contoured Matrix Distribution --- p.57
4.1 Some Properties of Elliptically Contoured Matrix Distributions --- p.58
4.2 Review of the Previous Works --- p.60
4.3 Some Useful Statistical and Mathematical Results --- p.62
4.4 Improved Estimation of Σ under the Loss L3 --- p.63
4.5 Simulation Study for Multivariate-Elliptical t Distributions under the Loss L3 --- p.67
4.5.1 Properties of Multivariate-Elliptical t Distribution --- p.67
4.5.2 Simulation Study for Multivariate-Elliptical t Distributions --- p.70
4.6 Simulation Study for ε-Contaminated Normal Distributions under the Loss L3 --- p.74
4.6.1 Properties of ε-Contaminated Normal Distributions --- p.74
4.6.2 Simulation Study for ε-Contaminated Normal Distributions --- p.76
4.7 Discussions --- p.79
APPENDIX --- p.81
BIBLIOGRAPHY --- p.111
