231

Asymptotics for the maximum likelihood estimators of diffusion models

Jeong, Minsoo 15 May 2009 (has links)
In this paper I derive the asymptotics of the exact, Euler, and Milstein ML estimators for diffusion models, including general nonstationary diffusions. Although many estimators have been proposed for diffusion models, their asymptotic properties have generally been unknown; this is especially true for nonstationary processes, which are usually far from the standard cases. Using a new asymptotic framework with respect to both the time span T and the sampling interval Δ, I obtain the asymptotics of the estimators and derive conditions for their consistency. I show that this framework explains the properties of the estimators more accurately than the existing asymptotics with respect only to the sample size n. I also show, with a couple of examples, that this asymptotic result opens up many possibilities for constructing better estimators, and in the second part of the paper I derive the higher-order asymptotics, which can be used in bootstrap analysis.
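The exact-versus-Euler distinction in the abstract can be made concrete. The sketch below (my own illustration, not the thesis's derivations) simulates an Ornstein-Uhlenbeck diffusion with its exact Gaussian transition, then computes the Euler ML estimates, which reduce to an OLS regression of increments on levels:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, n, delta, seed=0):
    """Simulate dX = theta*(mu - X) dt + sigma dW with the exact
    Gaussian transition density (no discretization error)."""
    rng = random.Random(seed)
    a = math.exp(-theta * delta)
    sd = sigma * math.sqrt((1 - a * a) / (2 * theta))
    x = [x0]
    for _ in range(n):
        x.append(mu + a * (x[-1] - mu) + sd * rng.gauss(0, 1))
    return x

def euler_mle_ou(x, delta):
    """Euler ML estimates of (theta, mu, sigma). The Euler scheme treats
    X_{t+d} - X_t ~ N(theta*(mu - X_t)*delta, sigma^2*delta), so its
    log-likelihood is maximized by regressing increments on levels."""
    n = len(x) - 1
    xs, dx = x[:-1], [x[i + 1] - x[i] for i in range(n)]
    mx, md = sum(xs) / n, sum(dx) / n
    sxx = sum((v - mx) ** 2 for v in xs)
    sxd = sum((xs[i] - mx) * (dx[i] - md) for i in range(n))
    b = sxd / sxx               # slope = -theta*delta under Euler
    a = md - b * mx             # intercept = theta*mu*delta
    theta_hat = -b / delta
    mu_hat = a / (theta_hat * delta)
    resid = [dx[i] - (a + b * xs[i]) for i in range(n)]
    sigma2_hat = sum(r * r for r in resid) / (n * delta)
    return theta_hat, mu_hat, math.sqrt(sigma2_hat)
```

For fixed Δ the Euler estimator carries a discretization bias of order Δ, which is why asymptotics taken jointly in the span T and the interval Δ, as studied in the thesis, matter.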
232

Resolution of Phylogenetic Relationships and Characterization of Y-Linked Microsatellites within the Big Cats, Panthera

Davis, Brian W. August 2009 (has links)
The pantherine lineage of cats diverged from the remainder of modern Felidae less than 11 million years ago. This clade consists of the five big cats of the genus Panthera, the lion, tiger, jaguar, leopard, and snow leopard, as well as the closely related clouded leopard, which diverged from Panthera approximately 6 million years ago. A significant problem exists with respect to the precise phylogeny of these highly threatened great cats. Within the past four years, despite multiple publications on the subject, no two studies have reconstructed the phylogeny of Panthera with the same topology, showing particular discordance with respect to the sister-taxon relationships of the lion and the position of the enigmatic snow leopard. The evolutionary relationships among these cats remain unresolved, partially due to their recent and rapid radiation 3-5 million years ago, individual speciation events occurring within less than 1 million years, and probable introgression between lineages following their divergence. We assembled a 47.6 kb dataset using novel and published DNA sequence data from the autosomes, both sex chromosomes, and the mitochondrial genome. This dataset was analyzed both as a supermatrix and with respect to individual partitions using maximum likelihood and Bayesian phylogeny inference. Since discord may exist among gene segments in a multilocus dataset due to their unique evolutionary histories, inference was also performed using Bayesian estimation of species trees (BEST) to form a robust consensus topology. Incongruent topologies for autosomal loci indicated phylogenetic signal conflict within the corresponding segments. We resequenced four mitochondrial and three nuclear gene segments used in recent attempts to reconstruct felid phylogeny.
The newly generated data were combined with available GenBank sequence data from all published studies to highlight phylogenetic disparities stemming either from the amplification of a mitochondrial-to-nuclear translocation event or from errors in species identification. We provide an alternative, highly supported interpretation of the evolutionary history of the pantherine lineage using 39 single-copy regions of the felid Y chromosome and supportive phylogenetic evidence from a revised mitochondrial partition. These efforts result in a highly corroborated set of species relationships that open up new avenues for the study of speciation genomics and for understanding the historical events surrounding the origin of the members of this lineage.
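As a flavor of the sequence-distance arithmetic underlying ML phylogeny work of this kind, here is the standard Jukes-Cantor corrected distance between two aligned sequences (a textbook formula I chose as an illustration; the thesis itself uses full maximum likelihood and Bayesian inference over partitioned data):

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """Jukes-Cantor corrected evolutionary distance between two aligned
    DNA sequences: d = -(3/4) * ln(1 - 4p/3), where p is the observed
    proportion of differing sites. The correction accounts for multiple
    substitutions hitting the same site."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1 - 4 * p / 3)
```

The corrected distance always exceeds the raw proportion of differences, and the gap widens as sequences diverge, which is one reason model-based inference can resolve rapid radiations better than raw similarity.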
233

Expert System for Numerical Methods of Stochastic Differential Equations

Li, Wei-Hung 27 July 2006 (has links)
In this thesis, we extend the option pricing and virtual asset model system of Cheng (2005) with new simulations and with maximum likelihood estimation of the parameters of the underlying stochastic differential equations. The interface of the original option pricing system is redesigned so that general users can operate it easily, and several stochastic models and methods for pricing and estimation are added to make the system more complete. The system has three major parts: an option pricing system, an asset model simulation system, and a system for estimating the model parameters. Finally, an analysis of data collected from the network is carried out, and the prices produced by the system's estimators are compared with real market prices.
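As a sketch of what such a pricing-plus-simulation system computes, the following compares an Euler-Maruyama Monte Carlo price for a European call under geometric Brownian motion with the closed-form Black-Scholes price (standard formulas; the parameter values below are illustrative, not taken from the thesis):

```python
import math
import random

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    ncdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return s0 * ncdf(d1) - k * math.exp(-r * t) * ncdf(d2)

def mc_call_euler(s0, k, r, sigma, t, steps=50, paths=20000, seed=1):
    """Monte Carlo price using Euler-Maruyama paths of
    dS = r*S dt + sigma*S dW under the risk-neutral measure."""
    rng = random.Random(seed)
    dt = t / steps
    total = 0.0
    for _ in range(paths):
        s = s0
        for _ in range(steps):
            s += r * s * dt + sigma * s * math.sqrt(dt) * rng.gauss(0, 1)
        total += max(s - k, 0.0)
    return math.exp(-r * t) * total / paths
```

With s0 = k = 100, r = 0.05, sigma = 0.2 and t = 1, the Black-Scholes value is about 10.45, and the Monte Carlo estimate should land within sampling error of it.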
234

Non-normal Bivariate Distributions: Estimation And Hypothesis Testing

Qumsiyeh, Sahar Botros 01 November 2007 (has links) (PDF)
When using data to estimate the parameters of a bivariate distribution, the tradition is to assume that the data come from a bivariate normal distribution. If the distribution is not bivariate normal, which is often the case, the maximum likelihood (ML) estimators are intractable and the least squares (LS) estimators are inefficient. Here, we consider two independent sets of bivariate data which come from non-normal populations. We consider two distinct cases: one in which the marginal and conditional distributions are both Generalized Logistic, and one in which the marginal and conditional distributions both belong to the Student's t family. We use the method of modified maximum likelihood (MML) to find estimators of the various parameters in each distribution. We perform a simulation study to show that our estimators are more efficient and robust than the LS estimators even for small sample sizes. We develop hypothesis testing procedures using the LS and the MML estimators and show that the latter are more powerful and robust. Moreover, we compare our tests with another well-known robust test due to Tiku and Singh (1982), which is based on censored normal samples and is quite prominent (Lehmann, 1986), and show that our test is more powerful. We also use our MML estimators to find a more efficient estimator of the Mahalanobis distance. We give real-life examples.
235

Assessment Of Soil

Unutmaz, Berna 01 December 2008 (has links) (PDF)
Although some consensus exists regarding seismic soil liquefaction assessment at free-field soil sites, estimating the liquefaction triggering potential beneath building foundations remains a controversial and difficult issue. Assessing liquefaction triggering potential under building foundations requires estimating the cyclic and static stress state of the soil medium. To assess the effects of the presence of a structure, three-dimensional, finite-difference-based total stress analyses were performed for generic soil, structure, and earthquake combinations. A simplified procedure was proposed to produce unbiased estimates of the representative and maximum soil-structure-earthquake-induced cyclic stress ratio (CSRSSEI) values, eliminating the need to perform 3-D dynamic response assessments of soil and structure systems for conventional projects. Consistent with the available literature, the descriptive (input) parameters of the proposed model were selected as the soil-to-structure stiffness ratio, the spectral acceleration ratio (SA/PGA), and the aspect ratio of the building. The model coefficients were estimated through a maximum likelihood methodology, producing an unbiased match between the predictions of the 3-D analyses and the proposed simplified procedure. Although a satisfactory fit was achieved between the CSR estimates of the numerical seismic response analyses and the simplified procedure, the procedure was further validated against available laboratory shaking table and centrifuge tests and well-documented field case histories. The proposed simplified procedure was shown to capture almost all of the behavioral trends and most of the amplitudes.
As a concluding remark, contrary to the general conclusions of Rollins and Seed (1990), and partially consistent with the observations of Finn and Yogendrakumar (1987), Liu and Dobry (1997), and Mylonakis and Gazetas (2000), it is shown that soil-structure interaction does not always beneficially affect the liquefaction triggering potential of foundation soils, and the proposed simplified model conveniently captures when it is critical.
236

Statistical Inference From Complete And Incomplete Data

Can Mutan, Oya 01 January 2010 (has links) (PDF)
Let X and Y be two random variables such that Y depends on X=x. This is a very common situation in many real-life applications. The problem is to estimate the location and scale parameters in the marginal distributions of X and Y and in the conditional distribution of Y given X=x. We are also interested in estimating the regression coefficient and the correlation coefficient. We have a cost constraint for observing X=x: the larger x is, the more expensive it becomes. The allowable sample size n is governed by a pre-determined total cost. This can lead to a situation where some of the largest X=x observations cannot be observed (Type II censoring). Two general methods of estimation are available, the method of least squares and the method of maximum likelihood. For most non-normal distributions, however, the latter is analytically and computationally problematic. Instead, we use the method of modified maximum likelihood estimation, which is known to be essentially as efficient as maximum likelihood estimation. The method has a distinct advantage: it yields estimators which are explicit functions of the sample observations and are, therefore, analytically and computationally straightforward. In this thesis specifically, the problem is to evaluate the effect of the largest order statistics x(i) (i > n-r) in a random sample of size n (i) on the mean E(X) and variance V(X) of X, (ii) on the cost of observing the x-observations, (iii) on the conditional mean E(Y|X=x) and variance V(Y|X=x), and (iv) on the regression coefficient. It is shown that unduly large x-observations have a detrimental effect on the allowable sample size and on the estimators, both least squares and modified maximum likelihood. The advantages of not observing a few of the largest observations are evaluated. The distributions considered are Weibull, Generalized Logistic, and the scaled Student's t.
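The Type II censoring scheme described, where the largest and most expensive x-observations go unobserved, can be sketched as follows (the Weibull data and the cost-proportional-to-x assumption are my illustrative choices, echoing the distributions named in the abstract):

```python
import random

def type2_censor(sample, r):
    """Type II censoring: the r largest order statistics of the sample
    are never observed; only the n - r smallest values are available."""
    xs = sorted(sample)
    return xs[:len(xs) - r], xs[len(xs) - r:]

# Illustrative Weibull sample where the cost of observing x grows with x.
rng = random.Random(42)
x = [rng.weibullvariate(1.0, 1.5) for _ in range(200)]
observed, unobserved = type2_censor(x, 10)

full_cost = sum(x)             # cost of observing every x
censored_cost = sum(observed)  # cost after skipping the 10 largest
```

Because the skipped observations are exactly the largest ones, the cost saving per censored point is the greatest possible, which is the trade-off the thesis quantifies against the resulting loss of estimator efficiency.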
237

A design of face recognition system

Jiang, Ming-Hong 11 August 2003 (has links)
The design of a face recognition system (FRS) can be separated into two major modules: face detection and face recognition. In the face detection part, we combine image pre-processing techniques with maximum-likelihood estimation to detect the nearest frontal face in a single image. Under limited restrictions, our detection method overcomes some challenging conditions, such as variability in scale, location, orientation, facial expression, occlusion (glasses), and lighting change. In the face recognition part, we use both the Karhunen-Loeve transform and linear discriminant analysis (LDA) to perform feature extraction. In this feature extraction process, the features are calculated from the inner products of the original samples and the selected eigenvectors. In general, as the size of the face database increases, the recognition time increases proportionally. To solve this problem, a hard-limited Karhunen-Loeve transform (HLKLT) is applied to reduce the computation time in our FRS.
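The maximum-likelihood step of the detection stage can be sketched in miniature: fit a Gaussian to a scalar feature for each class by ML (sample mean and variance), then assign a new sample to the class with the higher likelihood. This one-dimensional toy is my stand-in for the system's actual image features:

```python
import math
import random

def fit_gaussian_ml(xs):
    """ML estimates for a 1-D Gaussian: sample mean and (biased) variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((v - mu) ** 2 for v in xs) / n
    return mu, var

def log_likelihood(x, mu, var):
    """Log density of N(mu, var) at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify(x, class_params):
    """Maximum-likelihood decision: pick the class whose fitted
    Gaussian assigns x the highest likelihood."""
    return max(class_params, key=lambda c: log_likelihood(x, *class_params[c]))
```

In the real system the same decision rule operates on high-dimensional image features rather than a single scalar, but the ML principle is identical.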
238

Space-time block codes with low maximum-likelihood decoding complexity

Sinnokrot, Mohanned Omar 12 November 2009 (has links)
In this thesis, we consider the problem of designing space-time block codes that have low maximum-likelihood (ML) decoding complexity. We present a unified framework for determining the worst-case ML decoding complexity of space-time block codes. We use this framework not only to determine the worst-case ML decoding complexity of our own constructions, but also to show that some popular constructions of space-time block codes have lower ML decoding complexity than was previously known. Recognizing the practical importance of the two-transmit, two-receive antenna system, we propose the asymmetric golden code, which is designed specifically for low ML decoding complexity. The asymmetric golden code has the lowest decoding complexity among previous constructions of space-time codes, regardless of whether the channel varies with time. We also propose the embedded orthogonal space-time codes, a family of codes for an arbitrary number of antennas and for any rate up to half the number of antennas. The family of embedded orthogonal space-time codes is the first general framework for the construction of space-time codes with low-complexity decoding, not only for rate one, but for any rate up to half the number of transmit antennas. Simulation results for up to six transmit antennas show that the embedded orthogonal space-time codes are simultaneously lower in complexity and lower in error probability than some of the most important constructions of space-time block codes with the same number of antennas and the same rate greater than one. Having considered the design of space-time block codes with low ML decoding complexity at the transmitter, we also develop efficient ML decoding algorithms for the golden code, the asymmetric golden code, and the embedded orthogonal space-time block codes at the receiver.
Simulations of the bit-error rate performance and decoding complexity of the asymmetric golden code and embedded orthogonal codes are used to demonstrate their attractive performance-complexity tradeoff.
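The complexity gap that low-ML-decoding designs exploit is easiest to see in the classic Alamouti code, where the code's orthogonality lets the joint ML search over |S|^2 symbol pairs collapse into two independent searches over |S| symbols each. The sketch below uses Alamouti with QPSK as my illustrative example (not the thesis's asymmetric golden code):

```python
import itertools

QPSK = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def alamouti_receive(s1, s2, h1, h2, n1=0j, n2=0j):
    """Two received samples over two symbol periods (one receive antenna)."""
    y1 = h1 * s1 + h2 * s2 + n1
    y2 = -h1 * s2.conjugate() + h2 * s1.conjugate() + n2
    return y1, y2

def ml_joint(y1, y2, h1, h2):
    """Exhaustive joint ML search: |S|**2 metric evaluations."""
    def metric(pair):
        c1, c2 = pair
        e1 = y1 - (h1 * c1 + h2 * c2)
        e2 = y2 - (-h1 * c2.conjugate() + h2 * c1.conjugate())
        return abs(e1) ** 2 + abs(e2) ** 2
    return min(itertools.product(QPSK, repeat=2), key=metric)

def ml_decoupled(y1, y2, h1, h2):
    """Orthogonality decouples the two symbols, so the same ML decision
    costs only 2*|S| metric evaluations."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    z1 = h1.conjugate() * y1 + h2 * y2.conjugate()  # equals g*s1 + noise
    z2 = h2.conjugate() * y1 - h1 * y2.conjugate()  # equals g*s2 + noise
    s1 = min(QPSK, key=lambda c: abs(z1 - g * c))
    s2 = min(QPSK, key=lambda c: abs(z2 - g * c))
    return s1, s2
```

Both decoders return the same decision; the point of low-complexity code design is to guarantee this kind of decoupling (fully or partially) for higher rates and more antennas.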
239

Software implementation of modeling and estimation of effect size in multiple baseline designs

Xu, Weiwei 22 April 2014 (has links)
A generalized design-comparable effect size model for multiple baseline designs across individuals has previously been proposed and evaluated with restricted maximum likelihood (REML) estimation in a hierarchical linear model using R. This report evaluates the same modeling and estimation approach in SAS. Three models (MB3, MB4, and MB5) with the same fixed effects and different random effects are estimated by the PROC MIXED procedure with the REML method. The unadjusted and adjusted effect sizes are then calculated with the matrix operation package PROC IML. The estimates of the fixed effects of the three models are similar to each other and to those of R. The variance components estimated by the two software packages are fairly close for MB3 and MB4, but the results differ for MB5, which exhibits boundary conditions for the variance-covariance matrix. This result suggests that the nlme library in R works differently than PROC MIXED with REML in SAS under extreme conditions.
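The effect-size arithmetic that follows the mixed-model fit can be sketched as below. These are the generic design-comparable forms (fixed treatment effect scaled by total between-plus-within variability, with a Hedges-style small-sample correction), not the exact MB3/MB4/MB5 expressions from the report:

```python
import math

def design_comparable_es(treatment_effect, var_between, var_within):
    """Generic design-comparable effect size: the fixed treatment effect
    scaled by the square root of total (between-case + within-case)
    variance, so the scale matches a between-subjects standardized
    mean difference."""
    return treatment_effect / math.sqrt(var_between + var_within)

def small_sample_adjusted(es, dof):
    """Hedges-style small-sample correction J = 1 - 3/(4*dof - 1),
    applied to the unadjusted effect size."""
    return es * (1 - 3 / (4 * dof - 1))
```

The adjusted estimate is always slightly smaller than the unadjusted one, with the shrinkage vanishing as the degrees of freedom grow.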
240

Comparing Cognitive Decision Models of Iowa Gambling Task in Individuals Following Temporal Lobectomy

Jeyarajah, Jenny Vennukkah 19 November 2009 (has links)
This study examined the theoretical basis for the decision-making behavior of patients with right or left temporal lobectomy and a control group as they performed the Iowa Gambling Task. Two cognitive decision models, the Expectancy Valence Model and the Strategy Switching Heuristic Choice Model, were compared for best fit. The better-fitting model was then chosen to provide the basis for parameter estimation (the sources of decision making, i.e., cognitive, motivational, and response processes) and interpretation. Both models outperformed the baseline model; however, a comparison of mean G2 values between the two cognitive decision models showed that the Expectancy Valence Model had the higher mean and was thus the better of the two. Decision parameters were then analyzed for the Expectancy Valence Model; the analysis revealed no significant differences in the parameters across the three groups. Data were also simulated from the baseline model to determine whether the cognitive models differ from baseline.
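For reference, the core of the Expectancy Valence Model is a valence that blends gains and losses with an attention weight, a delta-rule expectancy update, and a softmax choice rule. The sketch below uses generic parameter names (w_loss, phi, theta), not the study's exact notation, and omits the trial-dependent sensitivity schedule:

```python
import math

def ev_update(expectancy, payoff, w_loss, phi):
    """Expectancy Valence update: the valence v weights losses by w_loss
    and gains by 1 - w_loss; the deck expectancy moves toward v at
    learning rate phi."""
    gain = max(payoff, 0.0)
    loss = min(payoff, 0.0)
    v = (1 - w_loss) * gain + w_loss * loss
    return expectancy + phi * (v - expectancy)

def ev_choice_probs(expectancies, theta):
    """Softmax (ratio-of-strengths) choice rule over deck expectancies
    with response sensitivity theta."""
    ws = [math.exp(theta * e) for e in expectancies]
    total = sum(ws)
    return [w / total for w in ws]
```

Fitting the model to a participant's trial-by-trial choices yields the attention, learning, and sensitivity parameters that the study compared across the lobectomy and control groups.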
