
On Parameter Estimation Employing Sinewave Fit and Phase Noise Compensation in OFDM Systems

Negusse, Senay January 2015 (has links)
In today's modern society, we are surrounded by a multitude of digital devices, and their number is set to grow even further. As the trend continues, product life-cycle is a major issue in the mass production of these devices. Testing and verification are responsible for a significant percentage of the production cost of digital devices, so time-efficient procedures for testing and characterization are sought. Moreover, the need for flexible and low-cost solutions in the design architecture of radio-frequency devices, coupled with the demand for high data rates, has presented a challenge caused by interference from the analog circuit parts. The study of digital-signal-processing techniques that alleviate the effects of these analog impairments is therefore a pertinent subject.

In the first part of this thesis, we address parameter estimation based on waveform fitting. We look at the sinewave model for parameter estimation, which is eventually used to characterize the performance of a device. The underlying goal is to formulate and analyze a set of new parameter estimators that provide more accurate estimates than well-known estimators. Specifically, we study the maximum-likelihood (ML) SNR estimator employing the three-parameter sine fit and derive alternative estimators based on its statistical distribution. We show that the mean square error (MSE) of the alternative estimators is lower than the MSE of the ML estimator for small sample sizes, and a few of the new estimators come very close to the Cramér-Rao lower bound (CRB). Simply put, the number of acquired measurement samples translates to measurement time: the fewer samples required for a given accuracy, the faster the test. We also study a sub-sampling approach to the frequency estimation problem in a dual-channel sinewave model with common frequency. A coprime subsampling technique is used, in which the signals from both channels are uniformly subsampled with a coprime pair of sparse samplers. Such a technique is especially beneficial for lowering the sampling frequency required in applications with high bandwidth requirements. The CRB based on the coprime-subsampled data set is derived, and numerical illustrations show the relation between the cost in mean-squared-error performance and the employed coprime factors for a given measurement time.

In the second part of the thesis, we deal with the problem of phase noise (PHN). First, we look at a scheme in an orthogonal frequency-division multiplexing (OFDM) system where pilot subcarriers are employed for joint PHN compensation, channel estimation, and symbol detection. We investigate a method in which the PHN statistics are approximated by a finite number of vectors forming a PHN codebook, and we discuss how to select the codebook element closest to the current PHN realization together with the corresponding channel estimate. We present simulation results showing improved performance compared with state-of-the-art techniques. We also look at a sequential Monte Carlo based method for combined channel-impulse-response and PHN tracking employing known OFDM symbols. Such a technique allows time-domain compensation of PHN, simultaneously cancelling the common phase error and reducing the inter-carrier interference.
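As a concrete illustration of the waveform-fitting theme above, the sketch below (mine, not the thesis code) fits the three-parameter sinewave model by linear least squares at a known frequency and forms an SNR estimate from the fit; the test signal, record length, and function names are invented for the example.

```python
import numpy as np

def three_param_sine_fit(t, y, f):
    """Least-squares fit of y ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C (known f)."""
    D = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    residual = y - D @ coef
    A, B, _ = coef
    snr_hat = (A**2 + B**2) / 2 / np.var(residual)  # signal power over noise power
    return coef, snr_hat

rng = np.random.default_rng(0)
t = np.arange(64) / 1e3                 # a short record: 64 samples at 1 kHz
y = np.sin(2 * np.pi * 50 * t + 0.3) + rng.normal(0, 0.1, t.size)
print(three_param_sine_fit(t, y, f=50.0))
```

The sample-size point can be explored directly here: shrinking the record length raises the variance of snr_hat, which is the small-sample regime the alternative estimators target.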

Estimating Wind Velocities in Atmospheric Mountain Waves Using Sailplane Flight Data

Zhang, Ni January 2012 (has links)
Atmospheric mountain waves form in the lee of mountainous terrain under appropriate conditions of the vertical structure of wind speed and atmospheric stability. Trapped lee waves can extend hundreds of kilometers downwind from the mountain range, and they can extend tens of kilometers vertically into the stratosphere. Mountain waves are of importance in meteorology: they affect the general circulation of the atmosphere, can influence the vertical structure of wind speed and temperature fields, produce turbulence and downdrafts that can be an aviation hazard, and affect the vertical transport of aerosols and trace gases and the ozone concentration. Sailplane pilots make extensive use of mountain lee waves as a source of energy with which to climb. Many sailplane wave flights are conducted every year throughout the world, and they frequently cover large distances and reach high altitudes. Modern sailplanes frequently carry flight recorders that record their position at regular intervals during the flight. There is therefore potential to use this recorded data to determine the 3D wind velocity at positions on the sailplane flight path, which would provide an additional source of information on mountain waves to supplement other measurement techniques. The recorded data are limited, however, and determination of wind velocities is not straightforward.

This thesis is concerned with the development and application of techniques to determine the vector wind field in atmospheric mountain waves using the limited flight data collected during sailplane flights. A detailed study is made of the characteristics, uniqueness, and sensitivity to data errors of the problem of estimating the wind velocities from limited flight data consisting of ground velocities, possibly supplemented by airspeed or heading data. A heuristic algorithm is developed for estimating 3D wind velocities in mountain waves from ground velocity and airspeed data, and the algorithm is applied to flight data collected during "Perlan Project" flights. The problem is then posed as a statistical estimation problem, and maximum likelihood and maximum a posteriori estimators are developed for a variety of different kinds of flight data. These estimators are tested on simulated flight data and data from Perlan Project flights.
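The estimation problem rests on the kinematic relation wind = ground velocity − air velocity. A toy sketch of that relation, assuming heading data are available (the thesis also treats cases where they are not); the function name and example numbers are invented:

```python
import numpy as np

def wind_estimate(v_ground, airspeed, heading_deg):
    """Horizontal wind from ground velocity (east, north, m/s), true airspeed
    (m/s), and heading (degrees clockwise from north)."""
    h = np.deg2rad(heading_deg)
    v_air = airspeed * np.array([np.sin(h), np.cos(h)])  # east, north components
    return np.asarray(v_ground) - v_air

# Example: flying heading 080 deg at 30 m/s with GPS ground velocity (35, 10) m/s
print(wind_estimate([35.0, 10.0], airspeed=30.0, heading_deg=80.0))
```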

Use of Computer Generated Holograms for Optical Alignment

Zehnder, Rene January 2011 (has links)
The necessity to align the multi-component null corrector used to test the 8.4 m off-axis parabola segments of the primary mirror of the Giant Magellan Telescope (GMT) initiated this work. Computer Generated Holograms (CGHs) are often a component of these null correctors, and their capacity for multiple functions allows them not only to contribute to the measurement wavefront but also to support the alignment. The CGH can also be used as an external tool to support the alignment of complex optical systems, although for the applications shown in this work the CGH is always a component of the optical system. In general, CGHs change the shape of the illuminating wavefront, which can then produce optical references. The positional uncertainty of those references depends not only on the positional uncertainty of the CGH with respect to the illuminating wavefront but also on the uncertainty in the shape of the illuminating wavefront. A complete analysis of the uncertainty in the position of the projected references therefore includes the illuminating optical system, which is typically an interferometer. This work provides the relationships needed to calculate the combined propagation of uncertainties onto the projected optical references, including a geometrical-optics description of how light carries position information and how diffraction may alter it. Any optical reference must be transferred to a mechanically tangible quantity for the alignment. The process of obtaining the positions of spheres attached to the CGH relative to the CGH pattern is provided and applied to the GMT null corrector; knowing the location of the spheres relative to the pattern is equivalent to knowing their location with respect to the wavefront the pattern generates. This work provides various tools for the design and analysis of CGH-based optical alignment, including the statistical foundation that goes with it.
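The combined propagation of uncertainties mentioned above is, to first order, a Jacobian sandwich of the input covariance. A generic sketch of that calculation; the sensitivities and variances below are invented placeholders, not GMT values:

```python
import numpy as np

def propagate(J, cov_in):
    """First-order uncertainty propagation: cov_out = J @ cov_in @ J.T."""
    return J @ cov_in @ J.T

# Hypothetical sensitivities of a projected reference position (x, y) to
# CGH decenter (dx, dy) and illuminating-wavefront tilt (tx, ty).
J = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.5]])
cov_in = np.diag([1e-6, 1e-6, 4e-6, 4e-6])  # input variances, illustrative units
print(propagate(J, cov_in))
```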

Modeling Stochastic Processes in Gamma-Ray Imaging Detectors and Evaluation of a Multi-Anode PMT Scintillation Camera for Use with Maximum-Likelihood Estimation Methods

Hunter, William Coulis Jason January 2007 (has links)
Maximum-likelihood estimation and other probabilistic estimation methods are underused in many areas of applied gamma-ray imaging, particularly in biomedicine. In this work, we show how to use our understanding of stochastic processes in a scintillation camera, and of their effect on signal formation, to better estimate gamma-ray interaction parameters such as interaction position or energy. To apply statistical estimation methods, we need an accurate description of the signal statistics as a function of the parameters to be estimated. First, we develop a probability model of the signals conditioned on the parameters to be estimated by carefully examining the signal-generation process. Subsequently, the likelihood model is calibrated by measuring signal statistics for an ensemble of events as a function of the parameters to be estimated. In this work, we investigate the application of ML-estimation methods for three topics. First, we design, build, and evaluate a scintillation camera based on a multi-anode PMT readout for use with ML-estimation techniques. Next, we develop methods for calibrating the response statistics of a thick-detector gamma camera as a function of interaction depth. Finally, we demonstrate the use of ML estimation with a modified clinical Anger camera.
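A schematic of the ML estimation step under Gaussian signal statistics, with per-position means and variances taken as already calibrated (the calibration is the hard part the abstract describes); the position grid, anode count, and names are illustrative only:

```python
import numpy as np

def ml_position(signals, mean_resp, var_resp):
    """Pick the interaction position maximizing the Gaussian log-likelihood.
    signals: (n_pmt,) event data; mean_resp, var_resp: (n_pos, n_pmt) calibration."""
    loglike = -0.5 * np.sum((signals - mean_resp) ** 2 / var_resp
                            + np.log(var_resp), axis=1)
    return int(np.argmax(loglike))

rng = np.random.default_rng(1)
mean_resp = rng.uniform(50, 200, size=(100, 9))  # 100 positions, 9 anode signals
var_resp = mean_resp.copy()                      # Poisson-like: variance = mean
event = rng.normal(mean_resp[42], np.sqrt(var_resp[42]))
print(ml_position(event, mean_resp, var_resp))   # recovers 42 with high probability
```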

Inverse Optical Design and Its Applications

Sakamoto, Julia January 2012 (has links)
We present a new method for determining the complete set of patient-specific ocular parameters, including surface curvatures, asphericities, refractive indices, tilts, decentrations, thicknesses, and index gradients. The data consist of the raw detector outputs of one or more Shack-Hartmann wavefront sensors (WFSs); unlike conventional wavefront sensing, we do not perform centroid estimation, wavefront reconstruction, or wavefront correction. Parameters in the eye model are estimated by maximizing the likelihood. Since a purely Gaussian noise model is used to emulate electronic noise, maximum-likelihood (ML) estimation reduces to nonlinear least-squares fitting between the data and the output of our optical design program. Bounds on the estimate variances are computed with the Fisher information matrix (FIM) for different configurations of the data-acquisition system, thus enabling system optimization. A global search algorithm called simulated annealing (SA) is used for the estimation step, due to multiple local extrema in the likelihood surface. The ML approach to parameter estimation is very time-consuming, so rapid processing techniques are implemented on the graphics processing unit (GPU). We are leveraging our general method of reverse-engineering optical systems in optical shop testing for various applications. For surface profilometry of aspheres, which involves the estimation of high-order aspheric coefficients, we generated a rapid ray-tracing algorithm that is well suited to the GPU architecture. Additionally, reconstruction of the index distribution of GRIN lenses is performed using analytic solutions to the eikonal equation. Another application is parameterized wavefront estimation, in which the pupil phase distribution of an optical system is estimated from multiple irradiance patterns near focus. The speed and accuracy of the forward computations are emphasized, and our approach has been refined to handle large wavefront aberrations and nuisance parameters in the imaging system.
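A bare-bones simulated-annealing loop of the sort the abstract invokes, minimizing a toy nonlinear least-squares cost with local minima; the cooling schedule, step size, and toy model are arbitrary choices for the sketch, not the thesis settings:

```python
import numpy as np

def anneal(cost, x0, steps=5000, t0=1.0, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    c = cost(x)
    best_x, best_c = x.copy(), c
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9                    # linear cooling
        cand = x + rng.normal(0, scale, x.size)               # random perturbation
        cc = cost(cand)
        if cc < c or rng.random() < np.exp((c - cc) / temp):  # Metropolis rule
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x.copy(), c
    return best_x, best_c

# Toy data/model mismatch with two parameters and several local minima.
t = np.linspace(0, 1, 50)
y = 2.0 * np.sin(6.0 * t)
cost = lambda p: np.sum((p[0] * np.sin(p[1] * t) - y) ** 2)
print(anneal(cost, x0=[1.0, 1.0]))
```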

Nonparametric Estimation and Inference for the Copula Parameter in Conditional Copulas

Acar, Elif Fidan 14 January 2011 (has links)
The primary aim of this thesis is the elucidation of covariate effects on the dependence structure of random variables in bivariate or multivariate models. We develop a unified approach via a conditional copula model in which the copula is parametric and its parameter varies with the covariate. We propose a nonparametric procedure based on local likelihood to estimate the functional relationship between the copula parameter and the covariate, derive the asymptotic properties of the proposed estimator, and outline the construction of pointwise confidence intervals. We also contribute a novel conditional copula selection method based on cross-validated prediction errors and a generalized likelihood ratio-type test to determine whether the copula parameter varies significantly. We derive the asymptotic null distribution of the formal test. Using subsets of the Matched Multiple Birth and Framingham Heart Study datasets, we demonstrate the performance of these procedures via analyses of gestational age-specific twin birth weights and the impact of change in body mass index on the dependence between two consecutive pulse pressures taken from the same subject.
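A hedged sketch of the local-likelihood idea: estimate a covariate-dependent copula parameter by maximizing a kernel-weighted log-likelihood at each covariate value. The Clayton family, Gaussian kernel, local-constant fit, and bandwidth below are illustrative assumptions, not the thesis' exact formulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_loglik(theta, u, v):
    """Log-density of the Clayton copula at (u, v) for theta > 0."""
    return (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))

def local_theta(x0, x, u, v, h=0.3):
    """Local-constant estimate of theta at covariate value x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
    obj = lambda th: -np.sum(w * clayton_loglik(th, u, v))
    return minimize_scalar(obj, bounds=(0.05, 20.0), method="bounded").x

# Simulate Clayton data whose dependence strengthens with the covariate.
rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 1, n)
theta = 1 + 4 * x
u, p = rng.uniform(size=n), rng.uniform(size=n)
v = ((p ** (-theta / (1 + theta)) - 1) * u ** -theta + 1) ** (-1 / theta)
print(local_theta(0.8, x, u, v))  # should land near theta(0.8) = 4.2
```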

Models for target detection times

Bae, Deok Hwan January 1989 (has links)
Approved for public release; distribution is unlimited. / Some battlefield models have a component in them which models the time it takes for an observer to detect a target. Different observers may have different mean detection times due to various factors such as the type of sensor used, environmental conditions, fatigue of the observer, etc. Two parametric models for the distribution of time to target detection are considered which can incorporate these factors. Maximum likelihood estimation procedures for the parameters are described. Results of simulation experiments to study the small-sample behavior of the estimators are presented. / http://archive.org/details/modelsfortargetd00baed / Major, Korean Air Force
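As a minimal parametric example of the kind of model described, assume exponentially distributed detection times (one of many possible choices; the models in the thesis incorporate observer and environment factors). The MLE is then closed-form:

```python
import numpy as np

def exp_detection_mle(times):
    """ML estimate of the detection rate for i.i.d. exponential detection times."""
    times = np.asarray(times, float)
    lam_hat = times.size / times.sum()  # MLE: n / sum of observed times
    return lam_hat, 1.0 / lam_hat       # estimated rate and mean detection time

print(exp_detection_mle([4.2, 7.9, 1.3, 10.5, 3.8]))  # toy detection times (s)
```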

Optimal designs for maximum likelihood estimation and factorial structure design

Chowdhury, Monsur 06 September 2016 (has links)
This thesis develops methodologies for the construction of various types of optimal designs, with applications in maximum likelihood estimation and factorial structure design. The methodologies are applied to real data sets throughout the thesis. We start with a broad review of optimal design theory, including various types of optimal designs and some fundamental concepts. We then consider a class of optimization problems and determine the optimality conditions. An important tool is the directional derivative of a criterion function, whose properties we study extensively. To determine the optimal designs, we consider a class of multiplicative algorithms indexed by a function that satisfies certain conditions. The most important and popular design criterion in applications is D-optimality; we construct such designs for various regression models and develop strategies for better convergence of the algorithms. The remainder of the thesis is devoted to important applications of optimal design theory. We first consider the problem of determining maximum likelihood estimates of the cell probabilities under the hypothesis of marginal homogeneity in a square contingency table. We formulate the Lagrangian function, remove the Lagrange parameters by substitution, and transform the problem into one of maximizing several functions of the cell probabilities simultaneously. We apply this approach to real data sets, namely US migration data and data on the grading of unaided distance vision. We solve another estimation problem: determining the maximum likelihood estimates of the parameters of latent variable models, such as the Bradley-Terry model, where the data come from a paired-comparisons experiment. We approach this problem by treating the observed frequencies as binomial and replacing the binomial parameters with optimal design weights, and we apply it to a data set from American League baseball teams. Finally, we construct optimal structure designs for comparing test treatments with a control. We introduce different structure designs and establish their properties using the incidence and characteristic matrices. We also develop methods for obtaining optimal R-type structure designs and show how such designs are trace-, A-, and MV-optimal. / October 2016
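For the D-optimality case mentioned above, one well-known member of the class of multiplicative algorithms rescales each design weight by its normalized variance function. A compact sketch under standard textbook conventions (my implementation, not the thesis code):

```python
import numpy as np

def d_optimal_weights(X, iters=2000):
    """Multiplicative algorithm: X is (n, m) candidate points; returns weights."""
    n, m = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        M = X.T @ (w[:, None] * X)                            # information matrix
        d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)  # variance function
        w *= d / m            # update preserves sum(w) = 1 since sum(w*d) = m
    return w

# Quadratic regression on [-1, 1]: the classical D-optimal design puts
# weight 1/3 on each of the points -1, 0, 1.
t = np.linspace(-1, 1, 21)
X = np.column_stack([np.ones_like(t), t, t ** 2])
w = d_optimal_weights(X)
print(t[w > 0.05], np.round(w[w > 0.05], 3))
```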

Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

Tseng, Hsin-Wu, Fan, Jiahua, Kupinski, Matthew A. 28 July 2016 (has links)
The use of a channelization mechanism in model observers not only makes mimicking human visual behavior possible but also reduces the amount of image data needed to estimate the model-observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. Our aim is to further reduce the number of scans required so that the CHO or CSLO can serve as an image-quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that, as we show, requires fewer samples to match the performance obtained with a large sample size. The mean value and standard deviation of the areas under the ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine task-based QA/QC CT system assessment. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
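A schematic CHO computation under Gaussian channel statistics (illustrative only; the paper's channels, sample sizes, and shuffle-based estimation of ROC/EROC areas are not reproduced here):

```python
import numpy as np

def cho_template(ch_sig, ch_bkg):
    """Hotelling template from (n_images, n_channels) channel outputs per class."""
    S = 0.5 * (np.cov(ch_sig.T) + np.cov(ch_bkg.T))  # pooled channel covariance
    dmean = ch_sig.mean(axis=0) - ch_bkg.mean(axis=0)
    return np.linalg.solve(S, dmean)                 # w = S^{-1} (m1 - m0)

rng = np.random.default_rng(3)
n, c = 200, 10                      # images per class and channel count (invented)
bkg = rng.normal(0.0, 1.0, (n, c))  # signal-absent channel outputs
sig = rng.normal(0.4, 1.0, (n, c))  # signal-present: mean shift in each channel
w = cho_template(sig, bkg)
s1, s0 = sig @ w, bkg @ w           # scalar test statistics
snr = (s1.mean() - s0.mean()) / np.sqrt(0.5 * (s1.var() + s0.var()))
print("detectability SNR:", round(float(snr), 2))
```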

Model-based recursive partitioning

Zeileis, Achim, Hothorn, Torsten, Hornik, Kurt January 2005 (has links) (PDF)
Recursive partitioning is embedded into the general and well-established class of parametric models that can be fitted using M-type estimators (including maximum likelihood). An algorithm for model-based recursive partitioning is suggested, with these basic steps: (1) fit a parametric model to a data set; (2) test for parameter instability over a set of partitioning variables; (3) if there is some overall parameter instability, split the model with respect to the variable associated with the highest instability; (4) repeat the procedure in each of the daughter nodes. The algorithm yields a partitioned (or segmented) parametric model that can effectively be visualized and that subject-matter scientists are used to analyzing and interpreting. / Series: Research Report Series / Department of Statistics and Mathematics
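A stripped-down sketch of the four steps for a linear model with one partitioning variable. The real algorithm tests parameter instability with score-based fluctuation tests; this toy replaces that with a crude split-improvement heuristic, so it illustrates the control flow only:

```python
import numpy as np

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, float(np.sum((y - X @ beta) ** 2))

def mob(X, y, z, depth=0, max_depth=2, min_gain=0.1):
    beta, rss = fit(X, y)                              # (1) fit parametric model
    if depth < max_depth:
        cuts = np.quantile(z, np.linspace(0.2, 0.8, 7))
        gains = [rss - fit(X[z <= c], y[z <= c])[1]    # (2) screen for instability
                     - fit(X[z > c], y[z > c])[1] for c in cuts]
        b = int(np.argmax(gains))
        if gains[b] > min_gain * rss:                  # (3) split at the best cut
            cut = cuts[b]
            l, r = z <= cut, z > cut
            return {"split": float(cut),               # (4) recurse into daughters
                    "left": mob(X[l], y[l], z[l], depth + 1),
                    "right": mob(X[r], y[r], z[r], depth + 1)}
    return {"beta": np.round(beta, 2)}

rng = np.random.default_rng(4)
z = rng.uniform(0, 1, 300)                             # partitioning variable
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = np.where(z < 0.5, X @ [0.0, 1.0], X @ [2.0, -1.0]) + rng.normal(0, 0.3, 300)
print(mob(X, y, z))                                    # recovers the split near 0.5
```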
