221

New Non-Parametric Methods for Income Distributions

Luo, Shan 26 April 2013 (has links)
Low income proportion (LIP), the Lorenz curve (LC) and the generalized Lorenz curve (GLC) are important indexes for describing the inequality of an income distribution, and they have been widely used by governments around the world to measure social stability. Accurate estimation of these indexes is essential for quantifying a country's economic condition. Established statistical inferential methods for these indexes are based on an asymptotic normal distribution, which may perform poorly when real income data are skewed or contain outliers. Nonparametric methods, by contrast, allow researchers to draw inferences without imposing a parametric distributional assumption on the data. For example, existing research proposes plug-in empirical likelihood (EL)-based inference for LIP, LC and GLC. However, this method is computationally intensive and mathematically complex because of the nonlinear constraints in the underlying optimization problem. Moreover, the limiting distribution of the log empirical likelihood ratio is a scaled Chi-square distribution, and estimation of the scale constant affects the overall performance of the plug-in EL method. To improve on the existing inferential methods, this dissertation first proposes kernel estimators for LIP, LC and GLC, together with a cross-validation method for choosing the bandwidth. The kernel estimators are shown to be asymptotically normal. Smoothed jackknife empirical likelihood (SJEL) statistics for LIP, LC and GLC are then defined, and the log-jackknife empirical likelihood ratio statistics are proved to follow the standard Chi-square distribution. Extensive simulation studies evaluate the kernel estimators in terms of mean squared error and asymptotic relative efficiency. Next, SJEL-based confidence intervals and smoothed bootstrap-based confidence intervals are proposed; their coverage probabilities and interval lengths are computed and compared with those of the normal approximation-based intervals. The proposed kernel estimators are found to be competitive, and the proposed inferential methods exhibit better finite-sample performance. All inferential methods are illustrated through real examples.
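The low income proportion is the income CDF evaluated at a poverty line, so the kernel estimator can be read as a smoothed empirical CDF. The sketch below is only an illustration of that idea, assuming a Gaussian kernel, synthetic income data, and a simple leave-one-out bandwidth criterion; the dissertation's actual estimators and cross-validation procedure may differ.

```python
import numpy as np
from scipy.stats import norm

def smoothed_lip(incomes, poverty_line, h):
    """Kernel-smoothed low income proportion: a smoothed empirical CDF
    evaluated at the poverty line, using a Gaussian kernel."""
    return norm.cdf((poverty_line - incomes) / h).mean()

def loo_cv_score(incomes, h, grid):
    """Leave-one-out criterion for the smoothed CDF: average squared distance
    between the indicator 1{X_i <= t} and the leave-one-out smoothed CDF
    over a grid of evaluation points t."""
    n = len(incomes)
    score = 0.0
    for i in range(n):
        rest = np.delete(incomes, i)
        fhat = norm.cdf((grid[:, None] - rest[None, :]) / h).mean(axis=1)
        indicator = (incomes[i] <= grid).astype(float)
        score += np.mean((indicator - fhat) ** 2)
    return score / n

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=300)   # synthetic skewed incomes
poverty_line = 0.5 * np.median(incomes)                   # a common relative poverty line
grid = np.quantile(incomes, np.linspace(0.01, 0.99, 50))

bandwidths = np.std(incomes) * np.array([0.05, 0.1, 0.2, 0.4])
h_best = min(bandwidths, key=lambda h: loo_cv_score(incomes, h, grid))
print("chosen bandwidth:", h_best)
print("smoothed LIP estimate:", smoothed_lip(incomes, poverty_line, h_best))
```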
222

The Effects of Rent Assignment on Long-Lived Public Goods in Exhaustible Resource Economies

Cyan, Musharraf R 15 December 2010 (has links)
Exhaustible resource rents are an important taxable base in many countries, with revenue sharing often part of the scheme; in some cases a large share is retained for the central government. Discussions of exhaustible resource taxation generally consider the assignment of the resource rent tax base and revenue sharing from the limited perspectives of efficiency and stability, and tax assignment and sharing arrangements are assumed to have a neutral effect on the investment of resource rents in long-lived public goods. We attempt to demonstrate that this may not be the case, examining whether rent assignment is neutral with respect to the investment of rents in long-lived public goods, a normative policy objective, and under what conditions such neutrality holds. We test the theoretical propositions with data from the Russian Federation to derive empirical results. The results point toward an important dimension of rent tax assignment in a federation: ceteris paribus, a higher share of rent for the federation may lead to lower investment in long-lived public goods and may be constrained by stability. A further argument for reconsidering rent tax assignment treats assertive ethnic identity as a manifestation of strong ownership claims. Communities with strongly valued identities value ownership over the land and exhaustible resource endowments in their areas, especially where ethnic identity is important to the resource-owning community. The empirical results show that a decrease in the regional share of rent resulted in a fall in investment in the republics and regions with strong ethnic identity. Republics among the producing regions have historical claims to a distinct identity and may have a preference for preserving it; this preference is manifested as higher levels of rent investment. Following this line of argument, it can be concluded that rent assignment, through rent tax or revenue assignment, should favor producing regions within the range of stability in a federation, if the objective is achieving higher investment in long-lived public goods.
223

A Matlab Toolbox for fMRI Data Analysis: Detection, Estimation and Brain Connectivity

Budde, Kiran Kumar January 2012 (has links)
Functional Magnetic Resonance Imaging (fMRI) is one of the foremost neuroimaging techniques and has revolutionized the way brain function is studied. It measures changes in the blood oxygen level-dependent (BOLD) signal, which is related to neuronal activity. The complexity of the data, the presence of several types of noise, and the sheer volume of data make fMRI analysis challenging and demand efficient signal processing and statistical methods. The results of the analysis are used by physicians, neurologists and researchers to better understand brain function. The purpose of this study is to design a toolbox for fMRI data analysis. It includes methods to detect brain activity maps, to estimate the hemodynamic response (HDR) and to assess the connectivity of brain structures. The toolbox detects activated brain regions with a Bayesian estimator, and the results are compared with conventional methods such as the t-test, ordinary least squares (OLS) and weighted least squares (WLS). Brain activation and the HDR are estimated with a linear adaptive model and with a nonlinear method based on a radial basis function (RBF) neural network. A nonlinear autoregressive with exogenous inputs (NARX) neural network is developed to model the dynamics of the fMRI data. The toolbox also provides methods for brain connectivity analysis, namely functional connectivity and effective connectivity. These methods are examined on simulated and real fMRI datasets.
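As a point of reference for the conventional detection methods the toolbox is compared against, the sketch below runs a voxel-wise OLS general linear model and computes a t-statistic map. The design matrix, contrast and synthetic data are hypothetical; the toolbox itself is MATLAB-based and its Bayesian detector is not reproduced here.

```python
import numpy as np

def ols_activation_tmap(Y, X, contrast):
    """Voxel-wise OLS detection for fMRI: Y is (time, voxels), X is the
    design matrix (time, regressors), contrast selects the regressor of
    interest. Returns one t-statistic per voxel."""
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)          # (regressors, voxels)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof                    # residual variance per voxel
    c = np.asarray(contrast, dtype=float)
    var_contrast = c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(sigma2 * var_contrast)

# Hypothetical example: 100 time points, 500 voxels, boxcar task regressor.
rng = np.random.default_rng(1)
n_t, n_vox = 100, 500
task = np.tile([0.0] * 10 + [1.0] * 10, 5)                     # simple on/off paradigm
X = np.column_stack([np.ones(n_t), task])
Y = rng.standard_normal((n_t, n_vox))
Y[:, :50] += 0.8 * task[:, None]                               # first 50 voxels are "active"
tmap = ols_activation_tmap(Y, X, contrast=[0.0, 1.0])
print("mean t in active voxels:", tmap[:50].mean())
```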
224

Generalized Bathtub Hazard Models for Binary-Transformed Climate Data

Polcer, James 01 May 2011 (has links)
In this study, we use hazard-based modeling as an alternative statistical framework to time series methods for climate data. Data collected from the Kentucky Mesonet are used to study the distributional properties of the duration of high- and low-energy wind events relative to an arbitrary threshold. Our objectives were to fit bathtub models proposed in the literature, to propose a generalized bathtub model, to apply these models to Kentucky Mesonet data, and to make recommendations as to the feasibility of wind power generation. Using two different thresholds (1.8 and 10 mph), results show that the Hjorth bathtub model consistently performed better than all other models considered, with R-squared values of 0.95 or higher. However, fewer sites and months could be included in the analysis when the threshold was increased to 10 mph. Based on the 10 mph threshold, Bowling Green (FARM), Hopkinsville (PGHL), and Columbia (CMBA) posted the top three wind duration times in February 2009. Further studies are needed to establish long-term trends.
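For illustration, the sketch below fits one common parameterization of the Hjorth bathtub-hazard distribution to a set of event durations by maximum likelihood. The durations are synthetic and the parameterization (hazard h(t) = δt + θ/(1 + βt)) is an assumption; the thesis's generalized bathtub model and its exact fitting procedure are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def hjorth_neg_loglik(params, t):
    """Negative log-likelihood of the Hjorth (IDB) distribution under one
    common parameterization: hazard h(t) = delta*t + theta/(1 + beta*t),
    survival S(t) = exp(-delta*t^2/2) / (1 + beta*t)**(theta/beta)."""
    delta, theta, beta = np.exp(params)          # optimize on the log scale for positivity
    hazard = delta * t + theta / (1.0 + beta * t)
    log_surv = -delta * t ** 2 / 2.0 - (theta / beta) * np.log1p(beta * t)
    return -(np.log(hazard) + log_surv).sum()

# Hypothetical wind-event durations (hours) above some threshold.
rng = np.random.default_rng(2)
durations = rng.weibull(1.3, size=400) * 5.0

fit = minimize(hjorth_neg_loglik, x0=np.zeros(3), args=(durations,), method="Nelder-Mead")
delta, theta, beta = np.exp(fit.x)
print("fitted Hjorth parameters (delta, theta, beta):", delta, theta, beta)
```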
225

Estimation of Inter-Cell Interference in 3G Communication Systems

Gunning, Dan, Jernberg, Pontus January 2011 (has links)
In this thesis the telecommunication problem known as inter-cell interference is examined. Inter-cell interference originates from users in neighboring cells and affects users in the serving cell. It is worth studying because it limits the maximum data rates achievable in a 3G network: if the inter-cell interference is known, higher data rates can be scheduled without risking cell instability. An expression for the coupling between cells is derived using basic physical principles. Using this expression for the coupling factors, a nonlinear model describing the inter-cell interference is developed from the model of the power control loop commonly used in the base stations. The expression describing the coupling factors depends on the positions of the users, which are unknown. A quasi-decentralized method for estimating the coupling factors from measurements of the total interference power is presented. The estimation results presented in this thesis could probably be improved by using a more advanced nonlinear filter, such as a particle filter or an extended Kalman filter. Different expressions describing the coupling factors could also be considered to improve the results.
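To make the estimation task concrete, the sketch below runs a plain linear Kalman filter that tracks cell-coupling factors from total interference power measurements, assuming the neighbouring cells' transmit powers are known and the couplings follow a random walk. This is a generic illustration, not the quasi-decentralized estimator of the thesis.

```python
import numpy as np

def kalman_coupling_update(g, P_cov, neighbor_powers, interference_meas,
                           process_var=1e-4, meas_var=0.05):
    """One Kalman-filter step for coupling factors g (state vector), assuming
    a random-walk state model and the linear measurement
    I_k = neighbor_powers . g + noise. Returns updated state and covariance."""
    # Prediction: random walk, so the state stays and the covariance grows.
    P_cov = P_cov + process_var * np.eye(len(g))
    # Measurement update.
    H = neighbor_powers.reshape(1, -1)
    S = H @ P_cov @ H.T + meas_var
    K = (P_cov @ H.T) / S
    innovation = interference_meas - float(H @ g)
    g = g + (K * innovation).ravel()
    P_cov = (np.eye(len(g)) - K @ H) @ P_cov
    return g, P_cov

# Hypothetical scenario: 3 neighbouring cells with the true couplings below.
rng = np.random.default_rng(3)
true_g = np.array([0.12, 0.05, 0.20])
g_est, P_cov = np.zeros(3), np.eye(3)
for _ in range(500):
    powers = rng.uniform(1.0, 10.0, size=3)            # known neighbour transmit powers
    meas = powers @ true_g + rng.normal(0.0, 0.2)      # measured total interference
    g_est, P_cov = kalman_coupling_update(g_est, P_cov, powers, meas)
print("estimated couplings:", np.round(g_est, 3))
```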
226

Pricing and Risk Management in Competitive Electricity Markets

Xia, Zhendong 22 November 2005 (has links)
Electricity prices in competitive markets are extremely volatile, with salient features such as mean reversion, jumps and spikes. Modeling electricity spot prices is essential for asset and project valuation as well as risk management. I introduce the mean-reversion feature into a classical variance gamma model to model the electricity price dynamics as a mean-reverting variance gamma (MRVG) process. Derivative pricing formulae are derived through transform analysis, and model parameters are estimated by the generalized method of moments and by Markov chain Monte Carlo. A real options approach is proposed to value a tolling contract incorporating operational characteristics of the generation asset and contractual constraints, and two simulation-based methods are proposed to solve the valuation problem. The effects of different electricity price assumptions on the valuation of tolling contracts are examined. Based on the valuation model, I also propose a heuristic scheme for hedging tolling contracts and demonstrate the validity of the hedging scheme through numerical examples. Autoregressive Conditional Heteroscedasticity (ARCH) and Generalized ARCH (GARCH) models are widely used to model price volatility in financial markets. Considering a GARCH model with heavy-tailed innovations for the electricity price, I characterize the limiting distribution of a Value-at-Risk (VaR) estimator of the conditional electricity price distribution, which corresponds to the extremal quantile of the conditional distribution of the GARCH price process. I propose two methods, the normal approximation method and the data tilting method, for constructing confidence intervals for the conditional VaR estimator and assess their accuracy by simulation studies. The proposed approach is applied to electricity spot price data from the Pennsylvania-New Jersey-Maryland market to obtain confidence intervals for the empirically estimated Value-at-Risk of electricity prices. Several directions that deserve further investigation are pointed out for future research.
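As a small illustration of the VaR component of this work, the sketch below runs a GARCH(1,1) conditional variance recursion with Student-t innovations and reads off a one-step conditional Value-at-Risk. The data and parameters are hypothetical and fixed rather than estimated by the methods of the thesis, and the confidence-interval constructions (normal approximation and data tilting) are not reproduced.

```python
import numpy as np
from scipy.stats import t as student_t

def garch_conditional_var(returns, omega, alpha, beta, nu, level=0.99):
    """One-step conditional Value-at-Risk from a GARCH(1,1) recursion with
    Student-t innovations: sigma2_{t+1} = omega + alpha*r_t^2 + beta*sigma2_t.
    Parameters are assumed to be already estimated."""
    sigma2 = np.var(returns)                      # initialize at the sample variance
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    # Standardized Student-t quantile (unit variance), then scale by sigma.
    q = student_t.ppf(1.0 - level, df=nu) * np.sqrt((nu - 2.0) / nu)
    return -q * np.sqrt(sigma2)                   # reported as a positive loss

# Hypothetical price changes; parameters are illustrative, not estimated.
rng = np.random.default_rng(4)
price_changes = rng.standard_t(df=5, size=1000) * 2.0
var_99 = garch_conditional_var(price_changes, omega=0.1, alpha=0.1, beta=0.85, nu=5.0)
print("one-step 99% conditional VaR:", round(var_99, 2))
```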
227

New results in detection, estimation, and model selection

Ni, Xuelei 08 December 2005 (has links)
This thesis contains two parts: the detectability of convex sets, and a study of regression models. In the first part, we investigate the problem of detecting an inhomogeneous convex region in a Gaussian random field. The first proposed detection method relies on checking a constructed statistic on every convex set within an n × n image, which proves impractical to apply. We therefore consider h(v)-parallelograms as a surrogate, which leads to a multiscale strategy. We prove that 2/9 is the minimum proportion of the maximally embedded h(v)-parallelogram in a convex set; this constant indicates the effectiveness of the multiscale detection method. In the second part, we study robustness, optimality and computation for regression models. First, for robustness, M-estimators in a regression model whose residuals have an unknown but stochastically bounded distribution are analyzed, and an asymptotic minimax M-estimator (RSBN) is derived; simulations demonstrate its robustness and advantages. Second, for optimality, the analysis of least angle regression inspired us to consider the conditions under which a vector solves two optimization problems simultaneously: one that can be solved by certain stepwise algorithms, and one that is the objective function underlying many existing subset selection criteria (including Cp, AIC, BIC, MDL and RIC) and is proven to be NP-hard. Several conditions are derived that tell us when a vector is the common optimizer. Finally, extending this idea to exhaustive subset selection in regression, we improve the widely used leaps-and-bounds algorithm (Furnival and Wilson). The proposed method further reduces the number of subsets that need to be considered in the exhaustive subset search by using not only the residuals but also the model matrix and the current coefficients.
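To make the subset selection problem concrete, the sketch below does a brute-force best-subset search under a Mallows' Cp-style criterion on synthetic data. This is the exhaustive search that leaps-and-bounds style algorithms prune; the pruning itself, and the improvements proposed in the thesis, are not reproduced.

```python
import numpy as np
from itertools import combinations

def best_subset_cp(X, y, sigma2):
    """Exhaustive best-subset selection with a Mallows' Cp-style criterion,
    RSS/sigma2 + 2*k - n. Leaps-and-bounds style algorithms prune this
    search without losing the optimum."""
    n, p = X.shape
    best = (np.inf, ())
    for k in range(1, p + 1):
        for subset in combinations(range(p), k):
            Xs = X[:, subset]
            beta, _, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            cp = rss / sigma2 + 2 * k - n
            if cp < best[0]:
                best = (cp, subset)
    return best

# Hypothetical data: only predictors 0 and 2 matter.
rng = np.random.default_rng(5)
n, p = 80, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(n)
full_beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
sigma2_full = np.sum((y - X @ full_beta) ** 2) / (n - p)   # full-model variance estimate
cp, subset = best_subset_cp(X, y, sigma2_full)
print("selected predictors:", subset, "Cp:", round(cp, 2))
```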
228

Design of Adaptive Block Backstepping Controllers for Perturbed Nonlinear Systems with Input Nonlinearities

Chien, Chia-Wei 01 February 2012 (has links)
Based on the Lyapunov stability theorem, a design methodology for an adaptive block backstepping control scheme is proposed in this thesis for a class of multi-input perturbed nonlinear systems with input nonlinearities, to solve regulation problems. A fuzzy control method is utilized to estimate the unknown inverse input functions in order to facilitate the design of the proposed control scheme, so that the sector condition need not be satisfied. According to the number of blocks m in the plant to be controlled, m−1 virtual input controllers are designed, from the first block to the (m−1)th block, and the proposed robust controller is then designed from the last block. Adaptive mechanisms are employed in the virtual input controllers as well as in the robust controller, so that the least upper bounds of the perturbations and of the estimation errors of the inverse input functions are not required. The resulting control system achieves asymptotic stability. Finally, a numerical example and a practical example are given to demonstrate the feasibility of the proposed control scheme.
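As a toy illustration of the block backstepping idea (without the adaptive and fuzzy components of the thesis), the sketch below stabilizes the two-block chain x1' = x2, x2' = u: a virtual control stabilizes the first block, and the actual control is then designed from the last block.

```python
import numpy as np

def backstepping_sim(x0, k1=2.0, k2=2.0, dt=0.001, steps=10000):
    """Textbook two-block backstepping for x1' = x2, x2' = u. The virtual
    control alpha = -k1*x1 stabilizes the first block; the actual control
    stabilizes z2 = x2 - alpha, giving u = -z1 - k2*z2 + alpha'."""
    x1, x2 = x0
    traj = []
    for _ in range(steps):
        alpha = -k1 * x1                 # virtual input for the first block
        z1, z2 = x1, x2 - alpha
        alpha_dot = -k1 * x2             # time derivative of the virtual input
        u = -z1 - k2 * z2 + alpha_dot    # control designed from the last block
        x1 += dt * x2                    # forward-Euler integration
        x2 += dt * u
        traj.append((x1, x2))
    return np.array(traj)

traj = backstepping_sim((1.0, -0.5))
print("final state (should approach the origin):", np.round(traj[-1], 4))
```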
229

Mixture Modeling and Outlier Detection in Microarray Data Analysis

George, Nysia I. 16 January 2010 (has links)
Microarray technology has become a dynamic tool in gene expression analysis because it allows the simultaneous measurement of thousands of gene expressions. The uniqueness of experimental units and microarray data platforms, together with how gene expressions are obtained, leaves the field open to interesting research questions. In this dissertation, we present investigations of two independent studies related to microarray data analysis. First, we study a recent platform in biology and bioinformatics that compares the quality of genetic information from exfoliated colonocytes in fecal matter with genetic material from mucosa cells within the colon. Using the intraclass correlation coefficient (ICC) as a measure of reproducibility, we assess the reliability of density estimates obtained from preliminary analysis of the fecal and mucosa data sets. Numerical findings clearly show that the distribution comprises two components. For measurements between 0 and 1, it is natural to assume that the data points come from a beta-mixture distribution. We explore whether ICC values should be modeled with a beta mixture or transformed first and fit with a normal mixture. We find that the use of a mixture of normals on the inverse-probit transformed scale is less sensitive to model mis-specification; otherwise a biased conclusion could be reached. By using the normal mixture approach to compare the ICC distributions of fecal and mucosa samples, we observe that the quality of reproducible genes in fecal array data is comparable with that in mucosa arrays. For microarray data, within-gene variance estimation is often challenging because of the high frequency of low-replication studies. Several methodologies have been developed to strengthen variance terms by borrowing information across genes; however, even with such accommodations, variance estimates may be distorted by the presence of outliers. For our second study, we propose a robust modification of optimal shrinkage variance estimation to improve outlier detection. To increase power, we suggest grouping standardized data so that the information shared across genes is similar in distribution. Simulation studies and analysis of real colon cancer microarray data reveal that our methodology is insensitive to outliers, free of distributional assumptions, effective for small sample sizes, and data adaptive.
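The beta-mixture-versus-transformed-normal-mixture comparison can be sketched as follows: map ICC values from (0, 1) to the real line and fit a two-component Gaussian mixture on the transformed scale. The ICC values here are synthetic, and the transform used (the probit, i.e. the inverse normal CDF) is an assumed reading of the "inverse-probit transformed scale" mentioned above.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical ICC values in (0, 1) drawn from a two-component beta mixture.
rng = np.random.default_rng(6)
icc = np.concatenate([rng.beta(2, 8, size=400),     # poorly reproducible genes
                      rng.beta(8, 2, size=200)])    # highly reproducible genes

# Map (0, 1) values to the real line with the probit transform, then fit
# a two-component normal mixture on the transformed scale.
transformed = norm.ppf(np.clip(icc, 1e-6, 1 - 1e-6)).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(transformed)

order = np.argsort(gm.means_.ravel())
print("component means (transformed scale):", gm.means_.ravel()[order])
print("mixing weights:", gm.weights_[order])
# Back-transform the component means for interpretation on the ICC scale.
print("means on ICC scale:", norm.cdf(gm.means_.ravel()[order]))
```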
230

Comparing Model-based and Design-based Structural Equation Modeling Approaches in Analyzing Complex Survey Data

Wu, Jiun-Yu August 2010 (has links)
Conventional statistical methods that assume data sampled under simple random sampling are inadequate for complex survey data with a multilevel structure and non-independent observations. Within the structural equation modeling (SEM) framework, a researcher can either use ad-hoc robust sandwich standard error estimators to correct the standard error estimates (the design-based approach) or perform a multilevel analysis to model the multilevel data structure (the model-based approach) when analyzing dependent data. In a cross-sectional setting, the first study examines the differences between design-based single-level confirmatory factor analysis (CFA) and model-based multilevel CFA in terms of model fit test statistics/fit indices and estimates of the fixed and random effects, with the corresponding statistical inference, when analyzing multilevel data. Several design factors were considered, including cluster number, cluster size, intra-class correlation, and the structural equality of the between- and within-level models. The performance of a maximum modeling strategy, with a saturated higher-level model and the true lower-level model, was also examined. The simulation study showed that the design-based approach provided adequate results only under equal between/within structures; in the unequal-structure scenarios it produced biased fixed and random effect estimates. Maximum modeling generated consistent and unbiased within-level model parameter estimates across all three scenarios. Multilevel latent growth curve modeling (MLGCM) is a versatile tool for analyzing repeated measures obtained by multi-stage sampling, yet researchers often adopt latent growth curve models (LGCM) without considering the multilevel structure. The second study examined the influence of different model specifications on model fit test statistics/fit indices, between- and within-level regression coefficient and random effect estimates, and mean structures. The simulations suggested that a design-based MLGCM incorporating the higher-level covariates produces consistent parameter estimates and statistical inferences comparable to those from the model-based MLGCM, and maintains adequate statistical power even with a small number of clusters.
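The contrast between naive and design-based standard errors can be illustrated outside the SEM machinery. The sketch below computes the standard error of a simple mean two ways for clustered data: the naive i.i.d. formula and a cluster-robust (sandwich) formula, which is the basic idea behind the design-based correction. The data and clustering are hypothetical, and this is not the CFA/MLGCM estimator used in the study.

```python
import numpy as np

def naive_and_cluster_robust_se(y, cluster_ids):
    """Standard error of the sample mean two ways: the naive i.i.d. formula,
    and a cluster-robust (sandwich) formula that sums residuals within
    clusters first, accounting for non-independence within clusters."""
    n = len(y)
    resid = y - y.mean()
    naive_se = resid.std(ddof=1) / np.sqrt(n)
    cluster_sums = np.array([resid[cluster_ids == c].sum()
                             for c in np.unique(cluster_ids)])
    robust_se = np.sqrt((cluster_sums ** 2).sum()) / n
    return naive_se, robust_se

# Hypothetical two-level data: 30 clusters of 20, with a cluster random effect.
rng = np.random.default_rng(7)
n_clusters, cluster_size = 30, 20
cluster_ids = np.repeat(np.arange(n_clusters), cluster_size)
cluster_effect = rng.normal(0.0, 1.0, size=n_clusters)[cluster_ids]
y = 5.0 + cluster_effect + rng.normal(0.0, 1.0, size=n_clusters * cluster_size)

naive, robust = naive_and_cluster_robust_se(y, cluster_ids)
print(f"naive SE: {naive:.3f}   cluster-robust SE: {robust:.3f}")
```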
