481 |
Development and Evaluation of Nonparametric Mixed Effects Models. Baverel, Paul, January 2011.
A nonparametric population approach is now accessible to a broader community of modelers thanks to its recent implementation in the popular NONMEM application, which was previously limited to standard parametric approaches for the analysis of pharmacokinetic and pharmacodynamic data. The aim of this thesis was to assess the relative merits and drawbacks of nonparametric models in a nonlinear mixed effects framework, in comparison with a set of parametric models developed in NONMEM on real datasets and in simple experimental settings, and to develop new diagnostic tools adapted to nonparametric models. Nonparametric models as implemented in NONMEM VI showed better overall simulation properties and predictive performance than standard parametric models, with significantly less bias and imprecision in the outcomes of numerical predictive checks (NPC) across 25 real data designs. This evaluation was extended by a simulation study comparing the relative predictive performance of nonparametric and parametric models across three different validation procedures assessed by NPC. The usefulness of a nonparametric estimation step in diagnosing distributional assumptions about model parameters was then demonstrated through the development and application of two bootstrapping techniques aimed at estimating the imprecision of nonparametric parameter distributions. Finally, a novel covariate modeling approach intended for nonparametric models was developed, with good statistical properties for the identification of predictive covariates. In conclusion, by relaxing the classical normality assumption on the distribution of model parameters, and given the set of diagnostic tools developed, the nonparametric approach in NONMEM constitutes an attractive alternative to the routinely used parametric approach and an improvement for efficient data analysis.
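The numerical predictive check (NPC) mentioned above compares observed data with prediction intervals built from model simulations and records how many observations fall outside intervals of several nominal coverages. The sketch below illustrates only this general idea, not the NONMEM implementation or the thesis's models: the one-compartment model, parameter values, sampling times and variability magnitudes are assumptions chosen for illustration.

```python
# Minimal sketch of a numerical predictive check (NPC); illustrative model and
# parameter values only -- not the thesis's NONMEM models.
import numpy as np

rng = np.random.default_rng(1)
times = np.array([0.5, 1, 2, 4, 8, 12, 24.0])       # assumed sampling times (h)
n_subj, n_rep = 50, 1000                            # study size and number of NPC replicates
dose, cl_pop, v_pop = 100.0, 5.0, 30.0              # assumed population PK parameters

def simulate(n):
    """Simulate concentration-time profiles for n subjects (1-compartment IV bolus)."""
    cl = cl_pop * np.exp(rng.normal(0, 0.3, size=(n, 1)))   # lognormal between-subject variability
    v = v_pop * np.exp(rng.normal(0, 0.2, size=(n, 1)))
    conc = dose / v * np.exp(-cl / v * times)
    return conc * (1 + rng.normal(0, 0.1, size=conc.shape))  # proportional residual error

observed = simulate(n_subj)                          # stands in for the real dataset
replicates = np.stack([simulate(n_subj) for _ in range(n_rep)])

for coverage in (0.50, 0.80, 0.90):
    # Prediction interval for each observation, taken across the simulated replicates.
    lo, hi = np.percentile(replicates, [50 - 50 * coverage, 50 + 50 * coverage], axis=0)
    outside = np.mean((observed < lo) | (observed > hi))
    print(f"{coverage:.0%} prediction interval: {outside:.1%} of observations outside "
          f"(expected {1 - coverage:.0%})")
```

Deviations between the observed and expected proportions outside each interval are the bias and imprecision measures that an NPC summarizes.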
|
482 |
Nichtparametrische Analyse diagnostischer Gütemaße bei Clusterdaten / Nonparametric analysis of diagnostic accuracy measurements regarding clustered data. Lange, Katharina, 04 March 2011.
No description available.
|
483 |
Die statistische Auswertung von ordinalen Daten bei zwei Zeitpunkten und zwei Stichproben / The Statistical Analysis of Ordinal Data at two Timepoints and two Groups. Siemer, Alexander, 03 April 2002.
No description available.
|
484 |
Nichtparametrische Analyse von diagnostischen Tests / Nonparametric Analysis of diagnostic trials. Werner, Carola, 07 July 2006.
No description available.
|
485 |
Netiesinių statistikų taikymas atsitiktinių vektorių pasiskirstymo tankių vertinime / Application of nonlinear statistics for distribution density estimation of random vectors. Šmidtaitė, Rasa, 11 August 2008.
In statistics and its applications, one of the most frequently addressed problems is the estimation of multivariate densities. Density estimation is divided into parametric and nonparametric estimation. In parametric estimation it is assumed that the density function f describing the data yi, where i varies from 1 to n, belongs to a rather narrow family of functions f(•;θ) that depends on a small number of parameters θ=(θ1, θ2, …, θk). The parametric density estimate is obtained by first computing an estimate θ0 of the parameter θ and setting f0=f(•;θ0). From a statistical point of view such an approach is very efficient, but if no member of the family f(•;θ) is close to f, the results can be highly inaccurate.
For nonparametric density estimation no parametric assumptions about f are required; instead, other assumptions are made, for example that f is continuous or that f is integrable. The shape of the density function is determined from the available data. Given large samples, the density f can be estimated with sufficient accuracy.
Modern data analysis employs many nonparametric methods for the statistical estimation of the distribution density of multivariate random variables. Kernel estimators are especially widespread; spline-based and semiparametric algorithms are also popular. In practice, most of the popular nonparametric estimation procedures face the problem of choosing their parameters optimally. In the construction of kernel estimators, the most important... [see full text] / Most algorithms work properly if the probability densities of the multivariate vectors are known. Unfortunately, in reality these densities are usually not available, and parametric or non-parametric estimation of the densities becomes critically needed.
In parametric estimation one assumes that the density f underlying the data yi, where i varies from 1 to n, belongs to some rather restricted family of functions f(•;θ) indexed by a small number of parameters θ=(θ1, θ2, …, θk). An example is the family of multivariate normal densities, which is parameterized by the mean vector and the covariance matrix. A density estimate in the parametric approach is obtained by computing from the data an estimate θ0 of θ and setting f0=f(•;θ0). Such an approach is statistically and computationally very efficient but can lead to poor results if none of the family members f(•;θ) is close to f.
In nonparametric density estimation no parametric assumptions about f are made; one assumes instead that f, for example, has some smoothness properties (e.g. two continuous derivatives) or that it is square integrable. The shape of the density estimate is determined by the data and, in principle, given enough data, arbitrary densities f can be estimated accurately. The most popular methods are kernel estimators, which are based on local smoothing of the data. Histospline, semiparametric and projection pursuit algorithms are also quite popular. While constructing various probability density estimation methods the most... [to full text]
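The contrast between the two approaches can be made concrete with a short sketch (an illustration under assumed data and settings, not the estimators studied in the thesis): a parametric Gaussian fit obtained by plugging in the sample mean and covariance versus a kernel density estimate whose shape is driven by the data. The bimodal example data and the evaluation points are assumptions.

```python
# Sketch: parametric (Gaussian plug-in) vs nonparametric (kernel) density estimation
# on a bimodal 2-D sample; data and settings are illustrative only.
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

rng = np.random.default_rng(0)
# True density: mixture of two Gaussians, so no single Gaussian f(.;theta) is close to f.
data = np.vstack([rng.normal([-2, 0], 0.7, size=(500, 2)),
                  rng.normal([+2, 0], 0.7, size=(500, 2))])

# Parametric estimate: theta0 = (sample mean, sample covariance), f0 = f(.;theta0).
f_param = multivariate_normal(mean=data.mean(axis=0), cov=np.cov(data.T))

# Nonparametric estimate: kernel smoother, bandwidth chosen by Scott's rule (scipy default).
f_kde = gaussian_kde(data.T)

grid = np.array([[-2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])     # a few evaluation points
print("parametric density:", f_param.pdf(grid))
print("kernel density    :", f_kde(grid.T))
# Near the two modes the kernel estimate is much higher, and at the saddle between them
# much lower, reflecting the bimodal shape that the single-Gaussian family cannot capture.
```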
|
486 |
Nonparametric criteria for sparse contingency tables / Neparametriniai kriterijai retų įvykių dažnių lentelėms. Samusenko, Pavel, 18 February 2013.
In the dissertation, the problem of nonparametric testing for sparse contingency tables is addressed.
Statistical inference problems caused by the sparsity of contingency tables are widely discussed in the literature. Traditionally, the expected frequency (under the null hypothesis) is required to exceed 5 in almost all cells of the contingency table. If this condition is violated, the χ2 approximations of goodness-of-fit statistics may be inaccurate and the table is said to be sparse. Several techniques have been proposed to tackle the problem: exact tests, alternative approximations, parametric and nonparametric bootstrap, the Bayes approach and other methods. However, they are either not applicable or have limitations in the nonparametric statistical inference of very sparse contingency tables.
In the dissertation, it is shown that, for sparse categorical data, the likelihood ratio statistic and Pearson's χ2 statistic may become noninformative: they no longer measure the goodness of fit of the null hypothesis to the data. Thus, they can be inconsistent even in cases where a simple consistent test exists.
An improvement of the classical criteria for sparse contingency tables is proposed. The improvement is achieved by grouping and smoothing of sparse categorical data, making use of a new sparse-asymptotics model relying on an (extended) empirical Bayes approach. Under general conditions, the consistency of the proposed criteria based on grouping is proved. The finite-sample behavior of the criteria is investigated by Monte Carlo simulation.
The dissertation consists of an introduction, four chapters, a list of references, general conclusions and an appendix. The introduction presents the importance of the scientific problem under study, the aims and objectives of the work, the research methods, the scientific novelty and the practical... [see full text]
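As a small illustration of the sparsity problem described above (not of the grouping-and-smoothing criteria proposed in the dissertation), the following sketch computes Pearson's χ2 and the likelihood-ratio (G2) statistic for a table whose expected cell counts all fall below the usual threshold of 5; the table itself is fabricated for illustration.

```python
# Sketch: Pearson chi-square and likelihood-ratio (G^2) tests on a sparse table;
# the table is fabricated for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# A sparse 2 x 8 contingency table: many cells with counts 0 or 1.
table = np.array([[3, 1, 0, 2, 0, 1, 0, 1],
                  [0, 2, 1, 0, 3, 0, 1, 0]])

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
g2, p_g2, _, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")

print("expected counts below 5:", np.sum(expected < 5), "of", expected.size)
print(f"Pearson chi2 = {chi2:.2f}, p = {p_chi2:.3f} (dof = {dof})")
print(f"G2 (LR)      = {g2:.2f}, p = {p_g2:.3f}")
# With every expected count below 5, the chi-square reference distribution is unreliable,
# which is exactly the setting in which the dissertation's criteria are developed.
```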
|
488 |
Essays on Trade Agreements, Agricultural Commodity Prices and Unconditional Quantile Regression. Li, Na, 03 January 2014.
My dissertation consists of three essays in three different areas: international trade, agricultural markets, and nonparametric econometrics. The first and third essays are theoretical, while the second is empirical.
In the first essay, I develop a political economy model of trade agreements in which the set of policy instruments is endogenously determined, providing a rationale for countervailing duties (CVDs). Trade-related policy intervention is assumed to be largely shaped in response to rent-seeking demand, as is often shown empirically. Consequently, the uncertainty faced during the lifetime of a trade agreement involves both economic and rent-seeking conditions. This setup approximates actual trade policy decisions more closely than the externality hypothesis and thus provides scope for empirical testing.
The second essay tests whether normal mixture (NM) generalized autoregressive conditional heteroscedasticity (GARCH) models adequately capture the relevant properties of agricultural commodity prices. Volatility series were constructed from weekly cash prices of ten agricultural commodities. NM-GARCH models allow for heterogeneous volatility dynamics across different market regimes. Both in-sample fit and out-of-sample forecasting tests confirm that the two-state NM-GARCH approach performs significantly better than the traditional normal GARCH model. For each commodity, an expected negative price change corresponds to higher volatility persistence, while an expected positive price change is associated with a greater responsiveness of volatility.
In the third essay, I propose an estimator for a nonparametric additive unconditional quantile regression model. Unconditional quantile regression can assess the possibly different impacts of covariates on different unconditional quantiles of a response variable. The proposed estimator does not require d-dimensional nonparametric regression and therefore does not suffer from the curse of dimensionality. In addition, the estimator has an oracle property in the sense that the asymptotic distribution of each additive component is the same as when all other components are known. Both numerical simulations and an empirical application suggest that the new estimator performs much better than the alternatives. / the Canadian Agricultural Trade Policy and Competitiveness Research Network, the Structure and Performance of Agriculture and Agri-products Industry Network, and the Institute for the Advanced Study of Food and Agricultural Policy.
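For readers unfamiliar with unconditional quantile regression, the standard linear version regresses the recentered influence function (RIF) of the τ-th unconditional quantile, RIF(y; q_τ) = q_τ + (τ - 1{y <= q_τ}) / f_Y(q_τ), on the covariates by ordinary least squares. The sketch below shows that baseline on simulated data; it is not the nonparametric additive estimator proposed in the third essay, and the data-generating process, sample size and quantile level are assumptions.

```python
# Sketch of linear unconditional quantile (RIF) regression; simulated data, and
# not the nonparametric additive estimator developed in the essay.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
n, tau = 2000, 0.9
x1, x2 = rng.normal(size=n), rng.binomial(1, 0.4, size=n).astype(float)
y = 1.0 + 0.5 * x1 + 1.5 * x2 + rng.standard_t(5, size=n) * (1 + 0.5 * x2)

# Recentered influence function of the tau-th unconditional quantile:
# RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f_Y(q_tau)
q_tau = np.quantile(y, tau)
f_q = gaussian_kde(y)(q_tau)[0]            # kernel density estimate of f_Y at q_tau
rif = q_tau + (tau - (y <= q_tau)) / f_q

# OLS of the RIF on the covariates gives the unconditional quantile partial effects.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, rif, rcond=None)
print(f"tau = {tau}: intercept = {beta[0]:.2f}, effect of x1 = {beta[1]:.2f}, "
      f"effect of x2 = {beta[2]:.2f}")
```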
|
489 |
Precedence-type test based on the Nelson-Aalen estimator of the cumulative hazard function. Galloway, Katherine Anne Forsyth, 03 July 2013.
In reliability studies, the goal is to gain knowledge about a product's failure times or life expectancy. Precedence tests do not require large sample sizes and are used to compare the lifetime distributions of two samples. They are useful because they provide reliable results early in a life-test, and the surviving units can be used in other tests. Ng and Balakrishnan (2010) proposed a precedence-type test based on the Kaplan-Meier estimator of the cumulative distribution function.
In this thesis, a precedence-type test based on the Nelson-Aalen estimator of the cumulative hazard function is proposed. The test is developed for both Type-II right censoring and progressive Type-II right censoring. Numerical results, including illustrative examples, critical values and a power study, are provided, and the results are compared with those from the test based on the Kaplan-Meier estimator.
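The building block of the proposed test is the Nelson-Aalen estimator of the cumulative hazard, H(t) = sum over failure times t_i <= t of d_i / n_i, where d_i is the number of failures at t_i and n_i the number of units at risk just before t_i. The sketch below computes this estimator under Type-II right censoring on simulated exponential lifetimes; the sample size, censoring number and lifetime distribution are assumptions, and the precedence-test construction itself is not reproduced here.

```python
# Sketch of the Nelson-Aalen estimator of the cumulative hazard under Type-II right
# censoring; lifetimes are simulated and the precedence test itself is not implemented.
import numpy as np

def nelson_aalen(failure_times, n_total):
    """Cumulative hazard H(t) = sum over observed failures of d_i / n_i."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    at_risk = n_total - np.arange(len(t))          # n_i just before each failure
    increments = 1.0 / at_risk                     # d_i = 1, assuming no ties
    return t, np.cumsum(increments)

rng = np.random.default_rng(7)
n, r = 20, 12                                      # Type-II censoring: stop after r failures
lifetimes = np.sort(rng.exponential(scale=10.0, size=n))[:r]

t, H = nelson_aalen(lifetimes, n_total=n)
for ti, Hi in zip(t, H):
    print(f"t = {ti:6.2f}   H(t) = {Hi:.3f}")      # compare with the true H(t) = t / 10
```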
|
490 |
Nonparametric Learning in High Dimensions. Liu, Han, 01 December 2010.
This thesis develops flexible and principled nonparametric learning algorithms to explore, understand, and predict high dimensional and complex datasets. Such data appear frequently in modern scientific domains and lead to numerous important applications. For example, exploring high dimensional functional magnetic resonance imaging data helps us better understand brain function; inferring large-scale gene regulatory networks is crucial for new drug design and development; and detecting anomalies in high dimensional transaction databases is vital for corporate and government security.
Our main results include a rigorous theoretical framework and efficient nonparametric learning algorithms that exploit hidden structures to overcome the curse of dimensionality when analyzing massive high dimensional datasets. These algorithms have strong theoretical guarantees and provide high dimensional nonparametric recipes for many important learning tasks, ranging from unsupervised exploratory data analysis to supervised predictive modeling. In this thesis, we address three aspects:
1. Understanding the statistical theories of high dimensional nonparametric inference, including risk, estimation, and model selection consistency;
2. Designing new methods for different data-analysis tasks, including regression, classification, density estimation, graphical model learning, multi-task learning, and spatial-temporal adaptive learning;
3. Demonstrating the usefulness of these methods in scientific applications, including functional genomics, cognitive neuroscience, and meteorology.
In the last part of this thesis, we also present the future vision of high dimensional and large-scale nonparametric inference.
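One of the learning tasks listed above, graphical model learning in high dimensions, is often approached by mapping each variable through a rank-based normalizing transform and then running a sparse Gaussian graph estimator such as the graphical lasso (the Gaussian copula, or "nonparanormal", idea). The sketch below shows that recipe on simulated data; the chain-structured precision matrix, the marginal distortions and the scikit-learn estimator are assumptions used for illustration, not the algorithms or theory developed in the thesis.

```python
# Sketch: rank-based normalizing transform followed by the graphical lasso, a common
# recipe for copula-based nonparametric graph estimation in higher dimensions.
# Simulated data; not the algorithms or theory developed in the thesis.
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(3)
n, d = 200, 20
# Latent Gaussian with a sparse chain-structured precision matrix ...
prec = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
z = rng.multivariate_normal(np.zeros(d), np.linalg.inv(prec), size=n)
# ... observed through monotone nonlinear marginal distortions.
x = np.exp(z / 2) + z ** 3 / 10

# Rank-based transform back to approximately Gaussian margins.
u = (rankdata(x, axis=0) - 0.5) / n
x_gauss = norm.ppf(u)

model = GraphicalLassoCV().fit(x_gauss)
est_edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(d, dtype=bool)
true_edges = (np.abs(prec) > 0) & ~np.eye(d, dtype=bool)
print("edges recovered:", np.sum(est_edges & true_edges) // 2, "of", np.sum(true_edges) // 2)
print("false positives:", np.sum(est_edges & ~true_edges) // 2)
```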
|