351

NONPARAMETRIC ESTIMATION OF DERIVATIVES WITH APPLICATIONS

Hall, Benjamin 01 January 2010 (has links)
We review several nonparametric regression techniques and discuss their various strengths and weaknesses with an emphasis on derivative estimation and confidence band creation. We develop a generalized C(p) criterion for tuning parameter selection when interest lies in estimating one or more derivatives and the estimator is both linear in the observed responses and self-consistent. We propose a method for constructing simultaneous confidence bands for the mean response and one or more derivatives, where simultaneous now refers both to values of the covariate and to all derivatives under consideration. In addition we generalize the simultaneous confidence bands to account for heteroscedastic noise. Finally, we consider the characterization of nanoparticles and propose a method for identifying a proper subset of the covariate space that is most useful for characterization purposes.
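The estimators discussed above are linear in the observed responses; a minimal sketch of that idea, using a kernel-weighted local quadratic fit to estimate the mean response and its first derivative (the Gaussian kernel, bandwidth, and simulated sine curve are illustrative assumptions, not the C(p)-tuned estimator developed in the thesis):

```python
import numpy as np

def local_poly_deriv(x0, x, y, h, degree=2):
    """Estimate f(x0) and f'(x0) by a kernel-weighted local polynomial fit.

    The fit beta = (X'WX)^{-1} X'W y is linear in the responses y, which is
    the structure that C(p)-type tuning criteria exploit.
    """
    X = np.vander(x - x0, degree + 1, increasing=True)   # columns: 1, (x-x0), (x-x0)^2, ...
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)                # Gaussian kernel weights
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]                               # fitted mean and first derivative

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
fhat, dfhat = local_poly_deriv(np.pi / 4, x, y, h=0.3)
print(fhat, dfhat)   # should be near sin(pi/4) ~ 0.71 and cos(pi/4) ~ 0.71
```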
352

SOME CONTRIBUTIONS TO THE CENSORED EMPIRICAL LIKELIHOOD WITH HAZARD-TYPE CONSTRAINTS

Hu, Yanling 01 January 2011 (has links)
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. Owen’s 2001 book contains many important results for EL with uncensored data. However, fewer results are available for EL with right-censored data. In this dissertation, we first investigate a right-censored-data extension of Qin and Lawless (1994). They studied EL with uncensored data when the number of estimating equations is larger than the number of parameters (the over-determined case). We obtain results similar to theirs for the maximum EL estimator and the EL ratio test, for the over-determined case, with right-censored data. We employ hazard-type constraints, which are better able to handle right-censored data. Then we investigate EL with right-censored data and a k-sample mixed hazard-type constraint. We show that the EL ratio test statistic has a limiting chi-square distribution when k = 2. We also study the relationship between the constrained Kaplan-Meier estimator and the corresponding Nelson-Aalen estimator, and we try to prove that they are asymptotically equivalent under certain conditions. Finally, we present simulation studies and examples showing how to apply our theory and methodology to real data.
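For reference, the two unconstrained estimators compared in the last part of the abstract can be computed directly from right-censored data; the following is a minimal sketch on toy data, not the constrained estimators studied in the dissertation:

```python
import numpy as np

def km_and_na(times, events):
    """Kaplan-Meier survival and Nelson-Aalen cumulative hazard at the distinct
    event times (times: observed times, events: 1 = event, 0 = censored)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    out_t, surv, cumhaz = [], [], []
    S, H = 1.0, 0.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        S *= 1.0 - d / at_risk          # Kaplan-Meier product-limit step
        H += d / at_risk                # Nelson-Aalen increment
        out_t.append(t); surv.append(S); cumhaz.append(H)
    return np.array(out_t), np.array(surv), np.array(cumhaz)

t = [3, 5, 5, 8, 10, 12]
d = [1, 1, 0, 1, 0, 1]
times, S, H = km_and_na(t, d)
print(np.column_stack([times, S, np.exp(-H)]))  # exp(-H) tracks S closely
```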
353

Polytopes Arising from Binary Multi-way Contingency Tables and Characteristic Imsets for Bayesian Networks

Xi, Jing 01 January 2013 (has links)
The main theme of this dissertation is the study of polytopes arising from binary multi-way contingency tables and characteristic imsets for Bayesian networks. First, we study three-way tables whose entries are independent Bernoulli random variables with canonical parameters under the no-three-way-interaction generalized linear model. Here, we use the sequential importance sampling (SIS) method with the conditional Poisson (CP) distribution to sample binary three-way tables with the sufficient statistics, i.e., all two-way marginal sums, fixed. Compared with the Markov chain Monte Carlo (MCMC) approach based on a Markov basis (MB), the SIS procedure has the advantage that it does not require expensive or prohibitive pre-computations. Note that this problem can also be viewed as estimating the number of lattice points inside the polytope defined by the zero-one and two-way marginal constraints. The theorems in Chapter 2 give the parameters of the CP distribution for each column as it is sampled. In this chapter, we also present the algorithms, the simulation results, and the results for Samson’s monks data. Bayesian networks, part of the family of probabilistic graphical models, are widely applied in many areas, and much work has been done on model selection for Bayesian networks. The second part of this dissertation investigates the problem of finding the optimal graph by using characteristic imsets, where characteristic imsets are defined as 0-1 vector representations of Bayesian networks that are unique up to Markov equivalence. Characteristic imset polytopes are defined as the convex hull of all characteristic imsets under consideration. It has been shown that the problem of finding the optimal Bayesian network for a specific dataset can be converted to a linear programming problem over the characteristic imset polytope [51]. In Chapter 3, we first consider characteristic imset polytopes for all diagnosis models and show that these polytopes are direct products of simplices. We then give a combinatorial description of all edges and all facets of these polytopes. At the end of the chapter, we generalize these results to the characteristic imset polytopes for all Bayesian networks with a fixed underlying ordering of nodes. Chapter 4 includes discussion and future work on these two topics.
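A conditional Poisson draw can be sketched as independent Bernoulli sampling conditioned on a fixed column sum; the rejection sampler below only illustrates that definition (the weights and target sum are made up), not the recursive CP formulas or the full SIS procedure of Chapter 2:

```python
import numpy as np

def conditional_poisson_rejection(p, total, rng, max_tries=100_000):
    """Sample a 0-1 vector of independent Bernoulli(p_i) entries, conditioned on
    the entries summing to `total` (simple rejection sketch; efficient SIS
    implementations use recursive formulas for the CP distribution instead)."""
    for _ in range(max_tries):
        x = (rng.random(len(p)) < p).astype(int)
        if x.sum() == total:
            return x
    raise RuntimeError("rejection sampler failed; use a recursive CP sampler")

rng = np.random.default_rng(1)
p = np.array([0.2, 0.7, 0.5, 0.4, 0.6])
col = conditional_poisson_rejection(p, total=3, rng=rng)  # column with marginal sum fixed at 3
print(col, col.sum())
```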
354

Multi-time Scales Stochastic Dynamic Processes: Modeling, Methods, Algorithms, Analysis, and Applications

Pedjeu, Jean-Claude 01 January 2012 (has links)
By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model is formulated that leads to a system of multi-time scale stochastic differential equations. The classical Picard-Lindelöf successive approximation scheme is extended to the model validation problem, namely, the existence and uniqueness of the solution process. This naturally leads to the problem of finding closed-form solutions of both linear and nonlinear multi-time scale stochastic differential equations. To illustrate the scope of the ideas and the presented results, multi-time scale stochastic models for ecological and epidemiological processes in population dynamics are exhibited. Without loss of generality, the modeling and analysis of three-time-scale fractional stochastic differential equations is followed by the development of a numerical algorithm for multi-time scale dynamic equations. The development of the numerical algorithm is based on the idea of numerical integration in the context of the notion of multi-time scale integration. The multi-time scale approach is then applied to the study of higher order stochastic differential equations (HOSDE). This study utilizes the variation of constants technique to develop a method for finding closed-form solution processes for classes of HOSDE. The probability distribution of the solution processes is then investigated in the context of second order equations.
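As a point of reference for the numerical-integration idea, a single-time-scale Euler-Maruyama update is sketched below on an assumed Ornstein-Uhlenbeck toy example; the multi-time scale and fractional schemes developed in the dissertation add further integrators on top of this basic step:

```python
import numpy as np

def euler_maruyama(f, g, x0, T, n_steps, rng):
    """Minimal Euler-Maruyama scheme for dX = f(X) dt + g(X) dW on [0, T]."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt))              # Brownian increment
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW
    return x

rng = np.random.default_rng(2)
# Ornstein-Uhlenbeck example: mean-reverting drift, constant diffusion
path = euler_maruyama(f=lambda x: -0.5 * x, g=lambda x: 0.2, x0=1.0,
                      T=10.0, n_steps=1000, rng=rng)
print(path[-1])
```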
355

Trigonometric scores rank procedures with applications to long-tailed distributions

Kravchuk, Olena. January 2005 (has links) (PDF)
Thesis (Ph.D.) - University of Queensland, 2006. Includes bibliography.
356

Optimal Latin Hypercube Designs for Computer Experiments Based on Multiple Objectives

Hou, Ruizhe 22 March 2018 (has links)
Latin hypercube designs (LHDs) have broad applications in constructing computer experiments and in sampling for Monte Carlo integration because of their property that the projection onto each input variable is evenly distributed. LHDs have been combined with commonly used computer experimental design criteria to achieve enhanced design performance. For example, Maximin-LHDs were developed to improve the space-filling property in the full dimension of all input variables, and MaxPro-LHDs were proposed in recent years to obtain better projections onto any subspace of the input variables. This thesis integrates both the space-filling and the projection characteristics of LHDs and develops new algorithms for constructing optimal LHDs that perform well on both criteria by using a Pareto front optimization approach. The new LHDs are evaluated through case studies and compared with traditional methods to demonstrate their improved performance.
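The two basic ingredients, a random LHD and the maximin (minimum pairwise distance) criterion, can be sketched as follows; the crude random search is only an illustration, not the Pareto front algorithms developed in the thesis:

```python
import numpy as np

def random_lhd(n, d, rng):
    """Random n-run, d-factor Latin hypercube design on [0, 1]^d:
    each column is a random permutation of the n strata, jittered within strata."""
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + rng.random((n, d))) / n

def min_pairwise_distance(X):
    """Maximin criterion: the smallest pairwise Euclidean distance (to be maximized)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(X), k=1)].min()

rng = np.random.default_rng(3)
# keep the best of 500 random LHDs under the maximin criterion
best = max((random_lhd(20, 3, rng) for _ in range(500)), key=min_pairwise_distance)
print(min_pairwise_distance(best))
```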
357

Signal Detection of Adverse Drug Reaction using the Adverse Event Reporting System: Literature Review and Novel Methods

Pham, Minh H. 29 March 2018 (has links)
One of the objectives of the U.S. Food and Drug Administration is to protect public health through post-marketing drug safety surveillance, also known as pharmacovigilance. An inexpensive and efficient way to inspect post-marketing drug safety is to apply data mining algorithms to electronic health records to discover associations between drugs and adverse events. The purpose of this study is two-fold. First, we review the methods and algorithms proposed in the literature for identifying associations between drugs and adverse events and discuss their advantages and drawbacks. Second, we adapt several novel methods that have been used in comparable problems, such as genome-wide association studies and the market-basket problem. Most of the common methods for the drug-adverse event problem have a univariate structure and are therefore prone to false positives when certain drugs are frequently co-prescribed. We therefore study the applicability of multivariate methods from the literature, such as Logistic Regression and the Regression-adjusted Gamma-Poisson Shrinkage Model, to these association studies. We also adapt Random Forest and Monte Carlo Logic Regression from genome-wide association studies to our problem because of their ability to detect inherent interactions. We have built a computer program for the Regression-adjusted Gamma-Poisson Shrinkage Model, which was proposed by DuMouchel in 2013 but has not been made available in any public software package. A comparison study between popular methods and the proposed new methods is presented.
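For context, the univariate disproportionality statistics that the multivariate methods aim to improve on can be computed from a 2x2 report table; a minimal sketch with made-up counts (not the Regression-adjusted Gamma-Poisson Shrinkage program described above):

```python
def disproportionality(a, b, c, d):
    """Univariate signal statistics from a 2x2 drug-event report table:
    a = drug & event, b = drug & other events, c = other drugs & event, d = neither."""
    n = a + b + c + d
    expected = (a + b) * (a + c) / n            # expected count of (drug, event) under independence
    rr = a / expected                           # relative reporting ratio
    prr = (a / (a + b)) / (c / (c + d))         # proportional reporting ratio
    return rr, prr

rr, prr = disproportionality(a=40, b=960, c=200, d=98_800)   # hypothetical counts
print(rr, prr)   # values well above 1 suggest a potential signal
```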
358

Distribution Fits for Various Parameters in the Hurricane Model

Oxenyuk, Victoria 20 March 2014 (has links)
The Florida Public Hurricane Loss Model (FPHLM) is the only open, public hurricane loss evaluation model available for assessing the hazard posed by hurricanes to insured residential property in Florida. The model consists of three independent components: the atmospheric science component, the vulnerability component, and the actuarial component. The atmospheric component simulates thousands of storms, their wind speeds, and their decay once on land, on the basis of historical hurricane statistics, and thereby defines the wind risk for all residential zip codes in Florida. The focus of this thesis is to analyze the atmospheric science component of the FPHLM, replicate the statistical procedures used to model its various parameters, and validate the model. I establish the distribution for modeling annual hurricane occurrence, choose the best-fitting distribution for the radius of maximum winds, and compute the expression for the Holland B pressure profile parameter.
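One common textbook form of the Holland (1980) pressure profile parameter is shown below purely as an illustrative sketch with assumed storm inputs; the thesis derives its own expression from the model data:

```python
import math

def holland_b(v_max, p_central, p_env, rho=1.15):
    """Holland (1980) pressure profile parameter in one common textbook form:
    B = rho * e * Vmax^2 / (p_env - p_central), pressures in Pa, Vmax in m/s,
    rho = surface air density in kg/m^3."""
    return rho * math.e * v_max ** 2 / (p_env - p_central)

# hypothetical storm: 50 m/s max winds, 950 hPa central, 1013 hPa environmental pressure
print(holland_b(v_max=50.0, p_central=95_000.0, p_env=101_300.0))  # ~1.2; B typically falls in 1-2.5
```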
359

A Comparison of Some Confidence Intervals for Estimating the Kurtosis Parameter

Jerome, Guensley 15 June 2017 (has links)
Several methods have been proposed to estimate the kurtosis of a distribution. The three common estimators are g2, G2, and b2. This thesis addresses the performance of these estimators by comparing them under the same simulation environments and conditions. The estimators are compared through confidence intervals by determining the average interval width and the probability of capturing the kurtosis parameter of a distribution. We consider and compare classical and non-parametric methods for constructing these intervals. The classical method assumes normality to construct the confidence intervals, while the non-parametric methods rely on bootstrap techniques. The bootstrap techniques used are the Bias-Corrected Standard Bootstrap, Efron’s Percentile Bootstrap, Hall’s Percentile Bootstrap, and the Bias-Corrected Percentile Bootstrap. We found significant differences in the performance of the classical and bootstrap intervals. The parametric method works well in terms of coverage probability when the data come from a normal distribution, while the bootstrap intervals struggle to consistently reach the nominal 95% confidence level. When the sample data come from a distribution with negative kurtosis, both parametric and bootstrap confidence intervals perform well, although the bootstrap methods tend to produce narrower intervals. For positive kurtosis, the bootstrap methods perform slightly better than the classical methods in coverage probability. Among the three kurtosis estimators, G2 performed best, and among the bootstrap techniques, Efron’s Percentile intervals had the best coverage.
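A sketch of the moment-based estimator g2 together with Efron's percentile bootstrap interval is given below; the sample-size-adjusted estimators G2 and b2 and the remaining bootstrap variants compared in the thesis are not reproduced here:

```python
import numpy as np

def g2(x):
    """Excess-kurtosis estimator in moment form: g2 = m4 / m2^2 - 3."""
    x = np.asarray(x, dtype=float)
    m2 = np.mean((x - x.mean()) ** 2)
    m4 = np.mean((x - x.mean()) ** 4)
    return m4 / m2 ** 2 - 3.0

def percentile_bootstrap_ci(x, stat, n_boot=2000, alpha=0.05, rng=None):
    """Efron's percentile bootstrap interval for a statistic of one sample."""
    if rng is None:
        rng = np.random.default_rng()
    boots = np.array([stat(rng.choice(x, size=len(x), replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(4)
x = rng.normal(size=200)          # true excess kurtosis is 0 for a normal sample
print(g2(x), percentile_bootstrap_ci(x, g2, rng=rng))
```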
360

Cybersecurity: Stochastic Analysis and Modelling of Vulnerabilities to Determine the Network Security and Attackers Behavior

Kaluarachchi, Pubudu Kalpani 26 June 2017 (has links)
The development of cybersecurity processes and strategies should take two main approaches. The first is to develop an efficient and effective set of methodologies to identify software vulnerabilities and patch them before they are exploited. The second is to develop a set of methodologies to predict the behavior of attackers and execute defensive techniques based on that behavior. Managing and analyzing vulnerabilities relates directly to the first approach; developing methodologies and models to predict attacker behavior relates to the second. The two approaches are inseparably interconnected. Our effort in this study focuses mainly on developing useful statistical models that can signal the behavior of cyber attackers. Understanding vulnerabilities analytically, from a statistical point of view, helps to develop a set of statistical models that work as a bridge between cybersecurity and abstract statistical and mathematical knowledge. Any such effort should begin with a proper understanding of the nature of vulnerabilities in a computer network system. We therefore begin by analyzing "vulnerability" on the basis of inferences that can be drawn from the National Vulnerability Database (NVD). In the cybersecurity context, we apply a Markov approach to develop predictive models that estimate the minimum number of steps an attacker would take to compromise a security goal, using the concept of the Expected Path Length (EPL). We further develop a non-homogeneous stochastic model by extending the EPL estimate into a time-dependent variable. Applying this approach analytically to a simple model of a computer network with discovered vulnerabilities yields several useful observations that exemplify its applicability to real-world computer systems. The methodology provides a measure of the "risk" associated with the model network as a function of time, informing defenders of the threats they are facing and should anticipate. Furthermore, using an approach similar to the well-known Google PageRank algorithm, we also present a new algorithm that ranks vulnerabilities with respect to time for a computer network system. With better IT resources, the analytical models and methodologies presented in this study can be developed into more generalized versions and applied in real-world computer network environments.
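The Expected Path Length computation can be sketched with the standard absorbing-Markov-chain fundamental matrix; the transition matrix below is a made-up toy attack graph, not the NVD-based model of the study:

```python
import numpy as np

# Toy absorbing Markov chain for an attack graph: states 0-2 are transient
# (attacker positions), state 3 is the absorbing security-goal state.
P = np.array([[0.2, 0.5, 0.3, 0.0],
              [0.0, 0.1, 0.6, 0.3],
              [0.0, 0.0, 0.4, 0.6],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:3, :3]                                  # transitions among transient states only
N = np.linalg.inv(np.eye(3) - Q)               # fundamental matrix
epl = N @ np.ones(3)                           # expected steps to absorption from each state
print(epl)                                     # epl[0] is the EPL from the attacker's entry state
```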
