291 |
Modeling terrestrial carbon cycle during the Last Glacial Maximum / Modélisation du cycle du carbone terrestre au cours du dernier maximum glaciaire. Zhu, Dan, 30 September 2016
Pendant les transitions glaciaire-interglaciaires, on observe une augmentation en partie abrupte de près de 100 ppm du CO2 atmosphérique, indiquant une redistribution majeure entre les réservoirs de carbone des continents, de l'océan et de l'atmosphère. Expliquer les flux de carbone associés à ces transitions est un défi scientifique, qui nécessite une meilleure compréhension du stock de carbone « initial » dans la biosphère terrestre au cours de la période glaciaire. L'objectif de cette thèse est d'améliorer la compréhension du fonctionnement des écosystèmes terrestres et des stocks de carbone au cours du dernier maximum glaciaire (LGM, il y a environ 21 000 ans), à travers plusieurs nouveaux développements dans le modèle global de végétation ORCHIDEE-MICT, pour améliorer la représentation de la dynamique de la végétation, de la dynamique du carbone dans le sol du pergélisol et des interactions entre les grands herbivores et la végétation dans le modèle de surface terrestre. Pour la première partie, la représentation de la dynamique de la végétation dans ORCHIDEE-MICT pour les régions des moyennes et hautes latitudes a été calibrée et évaluée avec un ensemble de données spatiales de classes de végétation, de production primaire brute et de biomasse forestière pour la période actuelle. Des améliorations sont obtenues avec la nouvelle version du modèle dans la distribution des groupes fonctionnels de végétation. Ce modèle a ensuite été appliqué pour simuler la distribution de la végétation au cours du LGM, montrant un accord général avec les reconstructions ponctuelles basées sur des données de pollen et de macro-fossiles de plantes. Une partie du pergélisol (sols gelés en permanence) contient des sédiments épais, riches en glace et en matières organiques, appelés Yedoma, qui contiennent de grandes quantités de carbone organique et sont des reliques des stocks de carbone du Pléistocène. Ces sédiments ont été accumulés sous des climats glaciaires. Afin de simuler l'accumulation du carbone dans les dépôts de Yedoma, j'ai proposé une nouvelle paramétrisation de la sédimentation verticale dans le module de carbone du sol d'ORCHIDEE-MICT. L'inclusion de ce processus a permis de reproduire la distribution verticale de carbone observée sur des sites de Yedoma. Une première estimation du stock de carbone dans le pergélisol au cours du LGM est obtenue, de l'ordre de ~1550 PgC, dont 390-446 PgC sous forme de Yedoma encore intacts aujourd'hui (1,3 million de km2). Potentiellement, une plus grande surface de Yedoma pourrait avoir été présente pendant le LGM et avoir disparu lors de la déglaciation. Pour la troisième partie, à la lumière des impacts écologiques des grands animaux et du rôle potentiel des méga-herbivores comme force ayant maintenu les écosystèmes steppiques pendant les périodes glaciaires, j'ai incorporé un modèle d'herbivores dans ORCHIDEE-MICT, basé sur des équations physiologiques pour l'apport et les dépenses énergétiques, le taux de natalité et le taux de mortalité des grands herbivores sauvages. Le modèle a montré des résultats raisonnables de biomasse des grands herbivores en comparaison avec des observations disponibles aujourd'hui sur des réserves naturelles. Nous avons simulé un biome de prairies très étendu pendant le LGM, avec une densité importante de grands herbivores.
Les effets des grands herbivores sur la végétation et le cycle du carbone du LGM ont été discutés, y compris la réduction de la couverture forestière et la plus grande productivité des prairies. Enfin, j'ai réalisé une estimation préliminaire du stock total de carbone dans le pergélisol pendant le LGM, après avoir tenu compte des effets des grands herbivores et en extrapolant l'étendue spatiale des sédiments de type Yedoma sur la base d'analogues climatiques et topographiques similaires à la région de Yedoma actuelle. / During the repeated glacial-interglacial transitions, there has been a consistent and partly abrupt increase of nearly 100 ppm in atmospheric CO2, indicating major redistributions among the carbon reservoirs of land, ocean and atmosphere. A comprehensive explanation of the carbon fluxes associated with the transitions is still missing, requiring a better understanding of the potential carbon stock in the terrestrial biosphere during the glacial period. In this thesis, I aimed to improve the understanding of terrestrial carbon stocks and the carbon cycle during the Last Glacial Maximum (LGM, about 21,000 years ago), through a series of model developments to improve the representation of vegetation dynamics, permafrost soil carbon dynamics, and interactions between large herbivores and vegetation in the ORCHIDEE-MICT land surface model. For the first part, I improved the parameterization of vegetation dynamics in ORCHIDEE-MICT for the northern mid- to high-latitude regions, which was evaluated against present-day observation-based datasets of land cover, gross primary production, and forest biomass. Significant improvements were shown for the new model version in the distribution of plant functional types (PFTs), including a more realistic simulation of the northern tree limit and of the distribution of evergreen and deciduous conifers in the boreal zone. The revised model was then applied to simulate vegetation distribution during the LGM, showing a general agreement with the point-scale reconstructions based on pollen and plant macrofossil data. Among permafrost (perennially frozen) soils, the thick, ice-rich and organic-rich silty sediments called yedoma deposits hold large quantities of organic carbon, which are remnants of late-Pleistocene carbon accumulated under glacial climates. In order to simulate the buildup of the thick frozen carbon in yedoma deposits, I implemented a sedimentation parameterization in the soil carbon module of ORCHIDEE-MICT. The inclusion of sedimentation allowed the model to reproduce the vertical distribution of carbon observed at the yedoma sites, leading to a several-fold increase in total carbon. The simulated permafrost soil carbon stock during the LGM was ~1550 PgC, of which 390-446 PgC lie within today's known yedoma region (1.3 million km2). This result was still an underestimation, since the potentially larger area of yedoma during the LGM than today was not yet taken into account. For the third part, in light of the growing evidence on the ecological impacts of large animals, and the potential role of mega-herbivores as a driving force that maintained the steppe ecosystems during the glacial periods, I incorporated a dynamic grazing model in ORCHIDEE-MICT, based on physiological equations for energy intake and expenditure, reproduction rate, and mortality rate for wild large grazers.
The model showed reasonable results for today's grazer biomass compared to empirical data in protected areas, and was able to produce an extensive biome with a dominant vegetation of grass and a substantial distribution of large grazers during the LGM. The effects of large grazers on vegetation and the carbon cycle were discussed, including reducing tree cover, enhancing grassland productivity, and increasing the turnover rate of living vegetation biomass. Lastly, I presented a preliminary estimation of the potential LGM permafrost carbon stock, after accounting for the effects of large grazers, as well as extrapolations for the spatial extent of yedoma-like thick sediments based on climatic and topographic features that are similar to the known yedoma region. Since these results were derived under LGM climate and a constant sedimentation rate, a more realistic simulation would, as a next step, need to consider transient climate during the last glacial period and variations in sedimentation rate.
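To make the sedimentation idea above concrete, the following toy Python sketch (not ORCHIDEE-MICT code; the sedimentation rate, litter input, decomposition rate and active-layer depth are all hypothetical) shows how a constant sedimentation rate buries each year's carbon input below the active layer, where it stops decomposing and builds a deep frozen stock of the kind described for yedoma.

```python
# Toy illustration of carbon burial under constant sedimentation.
# All parameter values are invented; this is not the thesis model.
import numpy as np

SED_RATE = 0.7e-3      # m of sediment per year (hypothetical)
LITTER_IN = 0.02       # kgC/m2/yr entering the top layer (hypothetical)
ACTIVE_LAYER = 0.5     # m; carbon only decomposes while above this depth
K = 1e-3               # 1/yr decomposition rate in the thawed layer (hypothetical)
YEARS = 20000          # rough late-Pleistocene accumulation window

# A layer deposited in a given year spends t_freeze years inside the active
# layer before burial pushes it below, after which (in this toy) it is frozen
# and preserved.
t_freeze = ACTIVE_LAYER / SED_RATE            # ~700 yr near the surface
age = np.arange(YEARS)                        # age of each annual layer
exposure = np.minimum(age, t_freeze)          # years each layer spent thawed
carbon = LITTER_IN * np.exp(-K * exposure)    # surviving carbon per layer (kgC/m2)

frozen = exposure >= t_freeze                 # layers now below the active layer
print(f"column depth ~{YEARS * SED_RATE:.0f} m")
print(f"frozen carbon ~{carbon[frozen].sum():.0f} kgC/m2 "
      f"(+ {carbon[~frozen].sum():.0f} kgC/m2 still in the active layer)")
```

With these invented numbers the buried stock dominates the column, which is the qualitative behaviour the sedimentation parameterization is meant to capture.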
|
292 |
A Novel Technique to Improve the Resolution and Contrast of Planar Nuclear Medicine Imaging. Raichur, Rohan, January 2008
No description available.
|
293 |
Identifiering av den invasiva lupinen (Lupinus polyphyllus): Övervakning av blomsterlupiner längs vägkanter med hjälp av högupplösta UAV-data och GIS / Identifying the invasive Lupinus flower (Lupinus polyphyllus): Monitoring Lupinus flower growth along roads using high-resolution UAV images and GIS. Petersen, Pontus, January 2022
Sveriges vägdiken och vägkanter är hem till många blommor och växtarter. Lupinblomman Lupinus polyphyllus är en invasiv växtart som kom till Sverige under 1800-talet. Lupinblommans egenskaper gör att växten konkurrerar ut andra växtarter och negativt påverkar svensk biologisk mångfald. Naturvårdsverket och Trafikverket övervakar och hanterar lupinspridningen i Sverige. Det finns dock inget uppsatt digitalt system för övervakning, utan myndigheterna förlitar sig mycket på inrapportering av lupinblommor. I denna studie utforskades metoder och parametrar för att med hjälp av GIS och klassificering identifiera lupinblommor i högupplösta UAV-foton. Huvudmomenten var att undersöka hur väl klassificeringsmetoderna random forest (RF) och maximum likelihood (MLC) identifierar lupiner, samt vilken flyghöjd för UAV och vilken segmentering vid bildhanteringen som bör väljas. En tidsnotering av hur länge de olika metoderna tog för programmet att bearbeta utfördes även. Endast övervakad klassificering inom programmet ESRI ArcGIS Pro genomfördes. I studien användes rasterdata insamlade via två UAV längs två separata vägsträckor på 200 m, med flyghöjder från 10 till 120 m. Studien utfördes med segmenteringsparametrarna 1, 5, 10, 15 och 20 i spektral detaljnivå över ett mindre testområde med 20 m flyghöjd. På dessa segmenteringar testades klassificeringsmetoderna MLC och RF. Baserat på resultaten från dessa tester valdes en klassificeringsmetod ut, och med denna utfördes tester på flyghöjd för att få fram var den optimala flyghöjden låg. De flyghöjder som testades var 20 m, 50 m och 85 m. Vid varje process noterades även tidsåtgången. Resultaten kontrollerades via en confusion matrix och överklassificering för att identifiera den mest effektiva och noggranna metoden. Resultaten från segmenteringen visade att metoden MLC generellt gav bäst resultat, med en överklassificering mellan +1 % och +3 % och en noggrannhet på +90 %. RF gav resultat som låg på +1 % till +9 % överklassificering, och noggrannheten var även här +90 %. Flyghöjdstesterna visade att 20 m hade en noggrannhet på 97 % och en överklassificering på 4,04 %. 50 m visade en noggrannhet på 99 % och en överklassificering på 8,17 %. 85 m hade en noggrannhet på 53 % och en överklassificering på 4,19 %. Tidskontrollen visade att den objektbaserade metoden var runt 33 % snabbare att utföra än den pixelbaserade. Inga stora skillnader mellan klassificeringsmetoderna hittades. Generellt visade resultaten att en objektbaserad MLC-metod på 20 m gav bäst resultat och är snabbast att utföra. Det är möjligt att 30 eller 40 m ger lika goda resultat, men dessa höjder fanns inte tillgängliga att testa. Skillnaderna mellan klassificeringsnoggrannheterna med RF och MLC var marginella. / Roadsides in Sweden are home to several different plant species. The lupine flower Lupinus polyphyllus is an invasive species originally from North America. Naturvårdsverket and Trafikverket are responsible for monitoring and handling lupine spread in Sweden. This study examined the use of GIS and aerial photos in lupine control, and more specifically which parameters and classification methods are suitable for identifying Lupinus polyphyllus. The two main classification methods were random forest (RF) and maximum likelihood classifiers (MLC). Other factors were the altitude of the UAV collecting the photos and which segmentation parameters were optimal for classification. Processing time for the different parameters and methods was also recorded.
The study used raster data from two drones with altitudes from 10 m to 120 m, and the program used to perform these tests was ArcGIS Pro. The segmentation spectral detail levels tested were 1, 5, 10, 15 and 20; these were tested on a smaller area with a flight altitude of 20 m, and both RF and MLC were tested on all detail levels. Based on these tests, a classification method and segmentation parameters were chosen and tested on differing flight altitudes. These altitudes were 20, 50 and 85 m. A confusion matrix and overestimation of classes were used to determine accuracy and overclassification. Results show that supervised object-based MLC on a raster generated from a 20 m flight altitude generally gave the best results. In this case the accuracy was around 90 % and the overclassification was around 1-3 %. Object-based classification was around 33 % faster than pixel-based classification, but the choice of classification method did not alter the processing time by any noticeable amount. However, it should be noted that a flight altitude of 30 or 40 m might give results as good as 20 m, but those altitudes were not available for testing. It should also be pointed out that the difference between RF and MLC was not large, and the required accuracy and overclassification may be more stringent depending on the needs of the user.
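As an illustration of how accuracy and overclassification can be read off a confusion matrix of the kind used above, here is a hedged Python sketch; the class names and counts are invented and do not come from the thesis.

```python
# Illustrative sketch (not the thesis workflow): overall accuracy and the
# "overclassification" of the lupine class from a confusion matrix.
import numpy as np

# rows = reference class, columns = classified class
# classes: 0 = lupine, 1 = other vegetation, 2 = road/ground (made-up counts)
cm = np.array([[480,  15,   5],
               [ 20, 900,  30],
               [  5,  25, 520]])

overall_accuracy = np.trace(cm) / cm.sum()

reference_total = cm.sum(axis=1)    # true pixels/objects per class
classified_total = cm.sum(axis=0)   # predicted pixels/objects per class

# Overclassification as used above: how much larger the mapped extent of a
# class is than its reference extent, in percent.
overclassification = (classified_total - reference_total) / reference_total * 100

print(f"overall accuracy: {overall_accuracy:.1%}")
print(f"lupine overclassification: {overclassification[0]:+.2f} %")
```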
|
294 |
Bayesian Networks for Modelling the Respiratory System and Predicting Hospitalizations. Lopo Martinez, Victor, January 2023
Bayesian networks can be used to model the respiratory system. Their structure indicates how risk factors, symptoms, and diseases are related, and the Conditional Probability Tables enable predictions about a patient's need for hospitalization. Numerous structure learning algorithms exist for discerning the structure of a Bayesian network, but none can guarantee finding the perfect structure. Employing multiple algorithms can discover relationships between variables that might otherwise remain hidden when relying on a single algorithm. The Maximum Likelihood Estimator is the predominant algorithm for learning the Conditional Probability Tables. However, it faces challenges due to the data fragmentation problem, which can compromise its predictions. Failing to hospitalize patients who require specialized medical care could lead to severe consequences. Therefore, in this thesis, the use of an XGBoost model for this learning task is proposed as a novel and better-performing method, since it does not suffer from data fragmentation. A Bayesian network is constructed by combining several structure learning algorithms, and the predictive performances of the Maximum Likelihood Estimator and XGBoost are compared. XGBoost achieved a maximum accuracy of 86.0% compared to the Maximum Likelihood Estimator, which attained an accuracy of 81.5% in predicting future patient hospitalization. In this way, the predictive performance of Bayesian networks has been enhanced. / Bayesianska nätverk kan användas för att modellera andningssystemet. Deras struktur visar hur riskfaktorer, symtom och sjukdomar är relaterade, och de villkorliga sannolikhetstabellerna möjliggör prognoser om en patients behov av sjukhusvård. Det finns många strukturlärningsalgoritmer för att urskilja strukturen i ett bayesianskt nätverk, men ingen kan garantera att hitta den perfekta strukturen. Genom att använda flera algoritmer kan man upptäcka relationer mellan variabler som annars kan förbli dolda när man bara förlitar sig på en enda algoritm. Maximum Likelihood Estimator är den dominerande algoritmen för att lära sig de villkorliga sannolikhetstabellerna. Men den står inför utmaningar på grund av datafragmenteringsproblemet, vilket kan äventyra dess prognoser. Att inte lägga in patienter som behöver specialiserad medicinsk vård kan leda till allvarliga konsekvenser. Därför föreslås i denna avhandling användningen av en XGBoost-modell för inlärning som en ny och bättre metod eftersom den inte lider av datafragmentering. Ett bayesianskt nätverk byggs genom att kombinera flera strukturlärningsalgoritmer, och den prediktiva prestandan för Maximum Likelihood Estimator och XGBoost jämförs. XGBoost uppnådde en maximal noggrannhet på 86,0% jämfört med Maximum Likelihood Estimator, som uppnådde en noggrannhet på 81,5% för att förutsäga framtida patientinläggning. På detta sätt har den prediktiva prestandan för bayesianska nätverk förbättrats.
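A hedged sketch of the comparison described above: the conditional probability of hospitalization is estimated once by maximum likelihood (plain conditional frequencies, which is where data fragmentation bites when parent configurations are rare) and once with an XGBoost classifier. The variable names, the synthetic data, and the parent set are invented for illustration; this is not the thesis pipeline.

```python
# Toy comparison: maximum-likelihood CPT vs. XGBoost on the same discrete features.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "smoker":  rng.integers(0, 2, n),   # hypothetical binary risk factors
    "dyspnea": rng.integers(0, 2, n),
    "copd":    rng.integers(0, 2, n),
})
# synthetic target: hospitalization more likely with more risk factors
p = 0.1 + 0.25 * df.sum(axis=1)
df["hospitalized"] = (rng.random(n) < p).astype(int)

train, test = train_test_split(df, test_size=0.3, random_state=0)
features = ["smoker", "dyspnea", "copd"]

# Maximum likelihood CPT: P(hospitalized | parents) as observed frequencies.
# Unseen parent configurations fall back to the marginal rate.
cpt = train.groupby(features)["hospitalized"].mean()
marginal = train["hospitalized"].mean()
mle_prob = test[features].apply(
    lambda r: cpt.get(tuple(int(v) for v in r), marginal), axis=1)
mle_pred = (mle_prob >= 0.5).astype(int)

# XGBoost on the same features.
xgb = XGBClassifier(n_estimators=100, max_depth=3)
xgb.fit(train[features], train["hospitalized"])
xgb_pred = xgb.predict(test[features])

print("MLE CPT accuracy:", accuracy_score(test["hospitalized"], mle_pred))
print("XGBoost accuracy:", accuracy_score(test["hospitalized"], xgb_pred))
```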
|
295 |
Maximum flow-based formulation for the optimal location of electric vehicle charging stations. Parent, Pierre-Luc, 08 1900
En raison des effets croissants des changements climatiques, il devient critique d'éliminer les combustibles fossiles. Les véhicules électriques sont un bon moyen de réduire notre dépendance à ces matières polluantes, mais leur adoption est généralement limitée par le manque d'accessibilité à des stations de recharge. Dans cet article, notre but est d'agrandir l'infrastructure liée aux stations de recharge pour fournir une meilleure qualité de service aux usagers (et une meilleure accessibilité aux stations). Nous nous attaquons spécifiquement au contexte urbain. Nous proposons de représenter un modèle d'assignation de la demande de recharge à des stations sous la forme d'un problème de flux maximum. Ce modèle nous sert de base pour évaluer la satisfaction des usagers étant donné l'infrastructure disponible. Par la suite, nous incorporons le modèle de flux maximum à un programme linéaire en nombres entiers mixtes qui a pour but d'évaluer l'installation de nouvelles stations et d'étendre leur disponibilité en ajoutant plus de bornes de recharge. Nous présentons notre méthodologie dans le cas de la ville de Montréal et montrons que notre approche est en mesure de résoudre des instances réalistes. Nous concluons en montrant l'importance de la variation dans le temps et l'espace de la demande de recharge lorsque l'on résout des instances de taille réelle. / With the increasing effects of climate change, the urgency to step away from fossil fuels is greater than ever before. Electric vehicles (EVs) are one way to diminish these effects, but their widespread adoption is often limited by the insufficient availability of charging stations. In this work, our goal is to expand the infrastructure of EV charging stations, in order to provide a better quality of service in terms of user satisfaction (and availability of charging stations). Specifically, our focus is directed towards urban areas. We first propose a model for the assignment of EV charging demand to stations, framing it as a maximum flow problem. This model is the basis for the evaluation of the user satisfaction provided by a given charging infrastructure. Secondly, we incorporate the maximum flow model into a mixed-integer linear program, where decisions on the opening of new stations and on the expansion of their capacity through additional outlets are accounted for. We showcase our methodology for the city of Montreal, demonstrating the scalability of our approach to handle real-world scenarios. We conclude that considering both spatial and temporal variations in charging demand is meaningful when solving realistic instances.
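The assignment step described above can be illustrated with a small maximum flow instance; the sketch below uses networkx, and the zones, stations, capacities and reachability lists are invented rather than taken from the Montreal case study.

```python
# Toy demand-to-station assignment as a maximum flow problem.
import networkx as nx

G = nx.DiGraph()

demand = {"zone_A": 8, "zone_B": 5, "zone_C": 7}     # charging requests (made up)
stations = {"st_1": 6, "st_2": 4, "st_3": 5}         # outlets per station (made up)
reachable = {                                         # zone -> stations within reach
    "zone_A": ["st_1", "st_2"],
    "zone_B": ["st_2"],
    "zone_C": ["st_2", "st_3"],
}

for z, d in demand.items():
    G.add_edge("source", z, capacity=d)
for z, sts in reachable.items():
    for s in sts:
        G.add_edge(z, s, capacity=demand[z])          # no extra limit on this arc
for s, outlets in stations.items():
    G.add_edge(s, "sink", capacity=outlets)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
satisfaction = flow_value / sum(demand.values())
print(f"served {flow_value} of {sum(demand.values())} requests "
      f"({satisfaction:.0%} satisfied)")
```

In the full method this flow evaluation would sit inside the mixed-integer program that decides which stations to open or expand.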
|
296 |
Assessment of Modern Statistical Modelling Methods for the Association of High-Energy Neutrinos to Astrophysical Sources / Bedömning av moderna statistiska modelleringsmetoder för associering av högenergetiska neutriner till astrofysiska källor. Minoz, Valentin, January 2021
The search for the sources of astrophysical neutrinos is a central open question in particle astrophysics. Thanks to substantial experimental efforts, we now have large-scale neutrino detectors in the oceans and polar ice. The neutrino sky seems mostly isotropic, but hints of possible source-neutrino associations have started to emerge, leading to much excitement within the astrophysics community. As more data are collected and future experiments planned, the question of how to statistically quantify point source detection in a robust way becomes increasingly pertinent. The standard approach to null-hypothesis testing leads to reporting the results in terms of a p-value, with detection typically corresponding to surpassing the coveted 5-sigma threshold. While widely used, p-values and significance thresholds are notorious in the statistical community as challenging to interpret and potentially misleading. We explore an alternative Bayesian approach to reporting point source detection and the connections and differences with the frequentist view. In this thesis, two methods for associating neutrino events to candidate sources are implemented on data from a simplified simulation of high-energy neutrino generation and detection. One is a maximum likelihood-based method that has been used in some high-profile articles, and the alternative uses Bayesian hierarchical modelling with Hamiltonian Monte Carlo to sample the joint posterior of key parameters. Both methods are applied to a set of test cases to gauge their differences and similarities when applied to identical data. The comparisons suggest the applicability of this Bayesian approach as an alternative or complement to the frequentist one, and illustrate how the two approaches differ. A discussion is also conducted on the applicability and validity of the study itself, as well as some potential benefits of incorporating a Bayesian framework, with suggestions for additional aspects to analyze. / Sökandet efter källorna till astrofysiska neutriner är en central öppen fråga i astropartikelfysik. Tack vare omfattande experimentella ansträngningar har vi nu storskaliga neutrino-detektorer i haven och polarisen. Neutrinohimlen verkar mestadels isotropisk, men antydningar till möjliga käll-neutrinoassociationer har börjat framträda, vilket har lett till mycket spänning inom astrofysikgemenskapen. När mer data samlas in och framtida experiment planeras, blir frågan om hur man statistiskt kvantifierar punktkälledetektering på ett robust sätt alltmer relevant. Standardmetoden för nollhypotes-testning leder ofta till rapportering av resultat i termer av p-värden, då en specifik tröskel i signifikans eftertraktas. Trots att de är starkt utbredda är p-värden och signifikansgränser mycket omdiskuterade i det statistiska samfundet angående deras tolkning. Vi utforskar en alternativ Bayesisk inställning till utvärderingen av punktkälldetektering och jämför denna med den frekventistiska utgångspunkten. I denna uppsats tillämpas två metoder för att associera neutrinohändelser till kandidatkällor på basis av simulerad data. Den första använder en maximum likelihood-metod anpassad från vissa uppmärksammade rapporter, medan den andra använder Hamiltonsk Monte Carlo för att approximera den gemensamma aposteriorifördelningen hos modellens parametrar. Båda metoderna tillämpas på en uppsättning testfall för att uppskatta deras skillnader och likheter när de tillämpas på identiska data.
Jämförelserna antyder tillämpligheten av den Bayesianska ansatsen som alternativ eller komplement till den klassiska, och illustrerar hur de två metoderna skiljer sig åt. En diskussion förs också om validiteten av studien i sig samt om några potentiella fördelar med att använda ett Bayesiskt ramverk, med förslag på ytterligare aspekter att analysera.
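For readers unfamiliar with the frequentist side of the comparison, the sketch below shows the generic unbinned point-source likelihood (a signal fraction of events clustered around a candidate source against an isotropic background) and its likelihood-ratio test statistic. It is a toy illustration under invented assumptions (event sample, angular resolution, flat sky patch), not the simulation or analysis code of the thesis.

```python
# Toy point-source likelihood: mixture of a Gaussian signal PDF around the
# source and a flat background PDF over a small sky patch.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
N = 500
sigma = np.deg2rad(1.0)                      # assumed angular resolution
half = np.deg2rad(10.0)                      # 20 deg x 20 deg patch around (0, 0)
bkg = rng.uniform(-half, half, size=(N - 10, 2))
sig = rng.normal(0.0, sigma, size=(10, 2))   # 10 injected signal events
events = np.vstack([bkg, sig])

r2 = (events ** 2).sum(axis=1)               # squared angular distance to source
S = np.exp(-r2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)   # signal PDF
B = np.full(N, 1.0 / (2 * half) ** 2)                        # flat background PDF

def neg_log_like(ns):
    # ns = number of signal events attributed to the candidate source
    return -np.sum(np.log(ns / N * S + (1 - ns / N) * B))

res = minimize_scalar(neg_log_like, bounds=(0.0, N / 2), method="bounded")
ts = 2 * (neg_log_like(0.0) - res.fun)       # likelihood-ratio test statistic
print(f"best-fit n_s = {res.x:.1f}, TS = {ts:.1f}")
```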
|
297 |
Voice Characteristics of Preschool Age Children. Schuckman, Melanie, 29 April 2008
No description available.
|
298 |
Estimation and Determination of Carrying Capacity in Loblolly Pine. Yang, Sheng-I, 27 May 2016
Stand carrying capacity is the maximum population size of a species under given environmental conditions. Site resources limit the maximum volume or biomass that can be sustained in forest stands. This study was aimed at estimating and determining the carrying capacity in loblolly pine. Maximum stand basal area (BA) that can be sustained over a long period of time can be regarded as a measure of carrying capacity. To quantify and project stand BA carrying capacity, one approach is to use the estimate from a fitted cumulative BA-age equation; another approach is to obtain BA estimates implied by maximum size-density relationships (MSDRs), denoted implied maximum stand BA. The efficacy of three diameter-based MSDR measures (Reineke's self-thinning rule, the competition-density rule, and Nilson's sparsity index) was evaluated. Estimates from the three MSDR measures were compared with estimates from the Chapman-Richards (C-R) equation fitted to the maximum stand BA observed on plots from spacing trials. The spacing trials, established in the two physiographic regions (Piedmont and Coastal Plain) and at two different scales (operational and miniature), were examined and compared, which provides a sound empirical basis for evaluating potential carrying capacity.
Results showed that the stands with high initial planting density approached the stand BA carrying capacity sooner than the stands with lower initial planting density. The maximum stand BA associated with planting density developed similarly at the two scales. The potential carrying capacity in the two physiographic regions was significantly different. The value of implied maximum stand BA converted from three diameter-based MSDR measures was similar to the maximum stand BA curve obtained from the C-R equation. Nilson's sparsity index was the most stable and reliable estimate of stand BA carrying capacity. The flexibility of Nilson's sparsity index can illustrate the effect of physiographic regions on stand BA carrying capacity.
Because some uncontrollable factors in long-term operational experiments can make estimates of stand BA carrying capacity unreliable for loblolly pine, it is suggested that stand BA carrying capacity could be estimated from stands with high initial planting density over a relatively short period of time, so that the risk of damage and the costs of experiments could be reduced. For estimating carrying capacity, another attractive option is to choose a miniature-scale trial (microcosm), because it shortens the experiment time and greatly reduces costs. / Master of Science
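The two notions of basal-area carrying capacity compared above can be written down compactly; the Python sketch below fits a Chapman-Richards curve to made-up maximum-BA observations and converts Reineke's self-thinning rule into an implied maximum BA. The data points and the Reineke intercept are hypothetical, so the numbers are only illustrative.

```python
# Chapman-Richards asymptote vs. Reineke-implied maximum basal area (toy data).
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(age, a, b, c):
    # a = asymptotic maximum BA (m2/ha), i.e. the carrying-capacity estimate
    return a * (1.0 - np.exp(-b * age)) ** c

age = np.array([5, 10, 15, 20, 25, 30, 35])            # years (made up)
ba_max = np.array([8, 22, 33, 40, 45, 48, 50])         # m2/ha (made up)
(a, b, c), _ = curve_fit(chapman_richards, age, ba_max, p0=(55, 0.08, 1.5))
print(f"Chapman-Richards asymptote (BA carrying capacity): {a:.1f} m2/ha")

# Reineke's rule: ln(N_max) = k - 1.605 * ln(Dq); the implied maximum BA
# follows from N_max and the quadratic mean diameter Dq (cm).
k = 12.0                                                # hypothetical intercept
dq = np.linspace(10, 35, 6)
n_max = np.exp(k - 1.605 * np.log(dq))                  # trees/ha
ba_implied = n_max * np.pi * dq**2 / 40000.0            # m2/ha
for d, ba in zip(dq, ba_implied):
    print(f"Dq = {d:4.1f} cm -> implied max BA = {ba:5.1f} m2/ha")
```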
|
299 |
Non-model based adaptive control of renewable energy systems. Darabi Sahneh, Faryad, January 1900
Master of Science / Department of Mechanical and Nuclear Engineering / Guoqiang Hu / In some types of renewable energy systems such as wind turbines or solar power plants, the optimal operating conditions are influenced by the intermittent nature of these energies. This fact, along with the modeling difficulties of such systems, provides incentive to look for non-model based adaptive techniques to address the maximum power point tracking (MPPT) problem. In this thesis, a novel extremum seeking algorithm is proposed for systems where the optimal point and the optimal value of the cost function are allowed to be time varying. A sinusoidal perturbation based technique is used to estimate the gradient of the cost function. Afterwards, a robust optimization method is developed to drive the system to its optimal point. Since this method does not require any knowledge about the dynamic system or the structure of the input-to-output mapping, it is considered to be a non-model based adaptive technique. The proposed method is then employed for maximizing the energy capture from the wind in a variable speed wind turbine. It is shown that without any measurements of wind velocity or power, the proposed method can drive the wind turbine to the optimal operating point. The generated power is observed to be very close to the maximum possible values.
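A minimal sketch of the sinusoidal-perturbation extremum seeking idea described above, applied to a toy power map whose optimum drifts in time. All gains, frequencies and the power map itself are invented; this is not the turbine model or controller from the thesis.

```python
# Toy sinusoidal-perturbation extremum seeking with a drifting optimum.
import numpy as np

dt, T = 0.01, 200.0
t = np.arange(0.0, T, dt)

a, omega = 0.1, 5.0          # perturbation amplitude and frequency (hypothetical)
k = 2.0                      # adaptation (integrator) gain (hypothetical)
omega_h = 1.0                # high-pass cutoff (hypothetical)
hp = 0.0                     # high-pass filter state

def theta_opt(time):         # slowly drifting optimal operating point (toy)
    return 2.0 + 0.5 * np.sin(0.05 * time)

def power(th, time):         # toy objective to be maximized
    return 10.0 - (th - theta_opt(time)) ** 2

theta_hat = 0.0              # current estimate of the optimum
y_prev = power(theta_hat, 0.0)
for ti in t:
    theta = theta_hat + a * np.sin(omega * ti)   # probe around the estimate
    y = power(theta, ti)
    # first-order high-pass of y removes the slowly varying mean
    hp = hp + dt * (-omega_h * hp) + (y - y_prev)
    y_prev = y
    grad_est = hp * np.sin(omega * ti)           # demodulation ~ gradient
    theta_hat += dt * k * grad_est               # climb the estimated gradient

print(f"final theta_hat = {theta_hat:.2f}, true optimum = {theta_opt(T):.2f}")
```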
|
300 |
Multiple imputation in the presence of a detection limit, with applications: an empirical approach / Shawn Carl Liebenberg. Liebenberg, Shawn Carl, January 2014
Scientists often encounter unobserved or missing measurements that are typically reported as less than a fixed detection limit. This especially occurs in the environmental sciences when detection of low exposures is not possible due to limitations of the measuring instrument, and the resulting data are often referred to as type I and II left-censored data. Observations lying below this detection limit are therefore often ignored, or 'guessed', because they cannot be measured accurately. However, reliable estimates of the population parameters are nevertheless required to perform statistical analysis. The problem of dealing with values below a detection limit becomes increasingly complex when a large number of observations are present below this limit. Researchers thus have interest in developing statistically robust estimation procedures for dealing with left- or right-censored data sets (Singh and Nocerino 2002). The aim of this study focuses on several main components regarding the problems mentioned above. The imputation of censored data below a fixed detection limit is studied, particularly using the maximum likelihood procedure of Cohen (1959), and several variants thereof, in combination with four new variations of the multiple imputation concept found in the literature. Furthermore, the focus also falls strongly on estimating the density of the resulting imputed, 'complete' data set by applying various kernel density estimators. It should be noted that bandwidth selection issues are not of importance in this study, and will be left for further research. In this study, however, the maximum likelihood estimation method of Cohen (1959) will be compared with several variant methods, to establish which of these maximum likelihood estimation procedures for censored data estimates the population parameters of the three chosen Lognormal distributions most reliably in terms of well-known discrepancy measures. These methods will be implemented in combination with four new multiple imputation procedures, respectively, to assess which of these nonparametric methods is most effective at imputing the 12 censored values below the detection limit, with regard to the global discrepancy measures mentioned above. Several variations of the Parzen-Rosenblatt kernel density estimate will be fitted to the complete filled-in data sets obtained from the previous methods, to establish which is the preferred data-driven method to estimate these densities. The primary focus of the current study will therefore be the performance of the four chosen multiple imputation methods, as well as the recommendation of methods and procedural combinations to deal with data in the presence of a detection limit. An extensive Monte Carlo simulation study was performed to compare the various methods and procedural combinations. Conclusions and recommendations regarding the best of these methods and combinations are made based on the study's results. / MSc (Statistics), North-West University, Potchefstroom Campus, 2014
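A hedged sketch of the main ingredients described above: a numerical censored-data maximum likelihood fit in the spirit of Cohen (1959), one imputation draw for the values below the detection limit, and a Parzen-Rosenblatt (Gaussian kernel) density estimate of the completed sample. The data and detection limit are simulated; this is not the study's code, and for multiple imputation the draw step would simply be repeated.

```python
# Censored lognormal MLE, single imputation below the detection limit, and KDE.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(42)
mu_true, sigma_true, DL = 1.0, 0.8, 1.5           # lognormal params, detection limit
x = rng.lognormal(mu_true, sigma_true, size=300)
observed = x[x >= DL]                              # measured values
n_cens = int(np.sum(x < DL))                       # only the count is known

def neg_log_like(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # observed part: lognormal density; censored part: P(X < DL) per censored obs
    ll_obs = stats.lognorm.logpdf(observed, s=sigma, scale=np.exp(mu)).sum()
    ll_cens = n_cens * stats.lognorm.logcdf(DL, s=sigma, scale=np.exp(mu))
    return -(ll_obs + ll_cens)

res = minimize(neg_log_like, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"MLE: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")

# One imputation draw: sample below-DL values from the fitted, left-truncated
# lognormal via inverse-CDF sampling (repeat this step for multiple imputation).
p_dl = stats.lognorm.cdf(DL, s=sigma_hat, scale=np.exp(mu_hat))
u = rng.uniform(0.0, p_dl, size=n_cens)
imputed = stats.lognorm.ppf(u, s=sigma_hat, scale=np.exp(mu_hat))
completed = np.concatenate([observed, imputed])

kde = stats.gaussian_kde(completed)                # Parzen-Rosenblatt estimate
print(f"KDE density at the detection limit: {kde(DL)[0]:.3f}")
```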
|