21

Estimating Wind Velocities in Atmospheric Mountain Waves Using Sailplane Flight Data

Zhang, Ni January 2012 (has links)
Atmospheric mountain waves form in the lee of mountainous terrain under appropriate conditions of the vertical structure of wind speed and atmospheric stability. Trapped lee waves can extend hundreds of kilometers downwind from the mountain range, and they can extend tens of kilometers vertically into the stratosphere. Mountain waves are of importance in meteorology as they affect the general circulation of the atmosphere, can influence the vertical structure of wind speed and temperature fields, produce turbulence and downdrafts that can be an aviation hazard, and affect the vertical transport of aerosols, trace gases, and ozone. Sailplane pilots make extensive use of mountain lee waves as a source of energy with which to climb. Many sailplane wave flights are conducted every year throughout the world, and they frequently cover large distances and reach high altitudes. Modern sailplanes frequently carry flight recorders that record their position at regular intervals during the flight. There is therefore potential to use these recorded data to determine the 3D wind velocity at positions on the sailplane flight path. This would provide an additional source of information on mountain waves, supplementing other measurement techniques used in mountain wave studies. The recorded data are limited, however, and determining wind velocities from them is not straightforward. This thesis is concerned with the development and application of techniques to determine the vector wind field in atmospheric mountain waves using the limited flight data collected during sailplane flights. A detailed study is made of the characteristics and uniqueness of the problem of estimating the wind velocities from limited flight data consisting of ground velocities, possibly supplemented by airspeed or heading data, and of its sensitivity to errors in the data. A heuristic algorithm is developed for estimating 3D wind velocities in mountain waves from ground velocity and airspeed data, and the algorithm is applied to flight data collected during “Perlan Project” flights. The problem is then posed as a statistical estimation problem, and maximum likelihood and maximum a posteriori estimators are developed for a variety of different kinds of flight data. These estimators are tested on simulated flight data and on data from Perlan Project flights.
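As an illustration of the kind of wind estimation this abstract describes, the sketch below recovers a locally constant horizontal wind from ground-velocity samples and a known airspeed by circle fitting in velocity space. It is a generic sketch, not the heuristic algorithm or the ML/MAP estimators developed in the thesis; the function name and the synthetic data are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_wind(ground_vel, airspeed):
    """Estimate a locally constant horizontal wind (wx, wy) from ground-velocity
    samples (N x 2, m/s) and a known, constant true airspeed (m/s).  In velocity
    space the ground-velocity samples lie on a circle of radius `airspeed`
    centred on the wind vector, so the wind is recovered by circle fitting."""
    def residuals(w):
        return np.hypot(ground_vel[:, 0] - w[0], ground_vel[:, 1] - w[1]) - airspeed
    w0 = ground_vel.mean(axis=0)              # initial guess: mean ground velocity
    return least_squares(residuals, w0).x

# Synthetic check: 10 m/s wind along x, 30 m/s airspeed, varying headings.
rng = np.random.default_rng(0)
headings = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
truth = np.array([10.0, 0.0])
gv = truth + 30.0 * np.column_stack([np.sin(headings), np.cos(headings)])
gv += rng.normal(scale=0.5, size=gv.shape)    # GPS velocity noise
print(estimate_wind(gv, 30.0))                # close to [10, 0]
```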
22

USE OF COMPUTER GENERATED HOLOGRAMS FOR OPTICAL ALIGNMENT

Zehnder, Rene January 2011 (has links)
The necessity to align a multi-component null corrector used to test the 8.4 m off-axis parabola segments of the primary mirror of the Giant Magellan Telescope (GMT) initiated this work. Computer Generated Holograms (CGHs) are often a component of these null correctors, and their capability for multiple functionality allows them not only to contribute to the measurement wavefront but also to support the alignment. The CGH can also be used as an external tool to support the alignment of complex optical systems, although for the applications shown in this work the CGH is always a component of the optical system. In general, CGHs change the shape of the illuminating wavefront, which can then produce optical references. The uncertainty in the position of those references depends not only on the uncertainty in the position of the CGH with respect to the illuminating wavefront but also on the uncertainty in the shape of the illuminating wavefront. A complete analysis of the uncertainty in the position of the projected references therefore includes the illuminating optical system, which is typically an interferometer. This work provides the relationships needed to calculate the combined propagation of uncertainties for the projected optical references. This includes a geometrical-optics description of how light carries position information and how diffraction may alter it. Any optical reference must be transferred to a mechanically tangible quantity for the alignment. The process of obtaining the positions of spheres attached to the CGH relative to the CGH pattern is provided and applied to the GMT null corrector. Knowing the location of the spheres relative to the CGH pattern is equivalent to knowing the location of the spheres with respect to the wavefront the pattern generates. This work provides various tools for the design and analysis of CGHs for optical alignment, including the statistical foundation that goes with them.
23

Modeling Stochastic Processes in Gamma-Ray Imaging Detectors and Evaluation of a Multi-Anode PMT Scintillation Camera for Use with Maximum-Likelihood Estimation Methods

Hunter, William Coulis Jason January 2007 (has links)
Maximum-likelihood estimation and other probabilistic estimation methods are underused in many areas of applied gamma-ray imaging, particularly in biomedicine. In this work, we show how to use our understanding of stochastic processes in a scintillation camera, and their effect on signal formation, to better estimate gamma-ray interaction parameters such as interaction position or energy. To apply statistical estimation methods, we need an accurate description of the signal statistics as a function of the parameters to be estimated. First, we develop a probability model of the signals conditioned on the parameters to be estimated by carefully examining the signal generation process. Subsequently, the likelihood model is calibrated by measuring signal statistics for an ensemble of events as a function of the estimated parameters. In this work, we investigate the application of ML-estimation methods for three topics. First, we design, build, and evaluate a scintillation camera based on a multi-anode PMT readout for use with ML-estimation techniques. Next, we develop methods for calibrating the response statistics of a thick-detector gamma camera as a function of interaction depth. Finally, we demonstrate the use of ML estimation with a modified clinical Anger camera.
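A minimal sketch of the kind of ML position estimation discussed above, assuming independent Poisson statistics for the PMT signals and a calibrated mean detector response; the thesis calibrates the full signal statistics rather than assuming a Poisson model, so this is illustrative only.

```python
import numpy as np

def ml_position_estimate(signals, mdrf, grid_xy):
    """Grid-search maximum-likelihood position estimate for one event.
    signals : (K,) PMT outputs for the event
    mdrf    : (N, K) calibrated mean detector response at N candidate positions
    grid_xy : (N, 2) coordinates of those candidate positions
    Assumes independent Poisson statistics for each PMT signal; terms that do
    not depend on position are dropped from the log-likelihood."""
    eps = 1e-12
    loglike = signals @ np.log(mdrf.T + eps) - mdrf.sum(axis=1)
    return grid_xy[np.argmax(loglike)]
```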
24

Inverse Optical Design and Its Applications

Sakamoto, Julia January 2012 (has links)
We present a new method for determining the complete set of patient-specific ocular parameters, including surface curvatures, asphericities, refractive indices, tilts, decentrations, thicknesses, and index gradients. The data consist of the raw detector outputs of one or more Shack-Hartmann wavefront sensors (WFSs); unlike conventional wavefront sensing, we do not perform centroid estimation, wavefront reconstruction, or wavefront correction. Parameters in the eye model are estimated by maximizing the likelihood. Since a purely Gaussian noise model is used to emulate electronic noise, maximum-likelihood (ML) estimation reduces to nonlinear least-squares fitting between the data and the output of our optical design program. Bounds on the estimate variances are computed with the Fisher information matrix (FIM) for different configurations of the data-acquisition system, thus enabling system optimization. A global search algorithm called simulated annealing (SA) is used for the estimation step, due to multiple local extrema in the likelihood surface. The ML approach to parameter estimation is very time-consuming, so rapid processing techniques are implemented on the graphics processing unit (GPU). We are leveraging our general method of reverse-engineering optical systems in optical shop testing for various applications. For surface profilometry of aspheres, which involves the estimation of high-order aspheric coefficients, we generated a rapid ray-tracing algorithm that is well suited to the GPU architecture. Additionally, reconstruction of the index distribution of GRIN lenses is performed using analytic solutions to the eikonal equation. Another application is parameterized wavefront estimation, in which the pupil phase distribution of an optical system is estimated from multiple irradiance patterns near focus. The speed and accuracy of the forward computations are emphasized, and our approach has been refined to handle large wavefront aberrations and nuisance parameters in the imaging system.
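The reduction of ML estimation to nonlinear least squares under Gaussian noise, and the FIM-based variance bound, can be sketched generically as follows. The exponential forward model is a stand-in for the ray-traced Shack-Hartmann detector model, not part of the original work; all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Stand-in forward model (an exponential decay); theta plays the role of the
# ocular parameters being estimated.
def forward(theta, t):
    return theta[0] * np.exp(-theta[1] * t)

sigma = 0.01                                  # assumed Gaussian noise level
t = np.linspace(0.0, 1.0, 50)
theta_true = np.array([1.0, 2.0])
rng = np.random.default_rng(1)
data = forward(theta_true, t) + rng.normal(scale=sigma, size=t.size)

# With Gaussian noise, maximizing the likelihood is nonlinear least squares.
fit = least_squares(lambda th: (forward(th, t) - data) / sigma, x0=[0.5, 1.0])

# The Fisher information matrix J^T J of the scaled residuals gives the
# Cramer-Rao lower bound on the estimate variances.
J = fit.jac
crlb = np.diag(np.linalg.inv(J.T @ J))
print(fit.x, np.sqrt(crlb))
```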
25

Models for target detection times.

Bae, Deok Hwan January 1989 (has links)
Approved for public release; distribution is unlimited. / Some battlefield models have a component that models the time it takes for an observer to detect a target. Different observers may have different mean detection times due to various factors such as the type of sensor used, environmental conditions, fatigue of the observer, etc. Two parametric models for the distribution of time to target detection are considered which can incorporate these factors. Maximum likelihood estimation procedures for the parameters are described. Results of simulation experiments to study the small-sample behavior of the estimators are presented. / http://archive.org/details/modelsfortargetd00baed / Major, Korean Air Force
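A hedged sketch of ML fitting for a covariate-dependent detection-time model; the exponential form and the log-linear rate are illustrative assumptions, not necessarily the two parametric models studied in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, x, times):
    """Negative log-likelihood of an exponential detection-time model whose
    rate depends on observer covariates through lambda_i = exp(x_i @ beta)."""
    rate = np.exp(x @ beta)
    return -np.sum(np.log(rate) - rate * times)

# Synthetic example: an intercept plus one observer covariate.
rng = np.random.default_rng(2)
n = 200
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-1.0, 0.5])
times = rng.exponential(1.0 / np.exp(x @ beta_true))

fit = minimize(neg_log_likelihood, x0=np.zeros(2), args=(x, times))
print(fit.x)   # close to beta_true for large n
```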
26

Optimal designs for maximum likelihood estimation and factorial structure design

Chowdhury, Monsur 06 September 2016 (has links)
This thesis develops methodologies for the construction of various types of optimal designs, with applications in maximum likelihood estimation and factorial structure design. The methodologies are applied to some real data sets throughout the thesis. We start with a broad review of optimal design theory, including various types of optimal designs along with some fundamental concepts. We then consider a class of optimization problems and determine the optimality conditions. An important tool is the directional derivative of a criterion function, and we study the properties of directional derivatives extensively. In order to determine the optimal designs, we consider a class of multiplicative algorithms indexed by a function that satisfies certain conditions. The most important and popular design criterion in applications is D-optimality. We construct such designs for various regression models and develop some useful strategies for better convergence of the algorithms. The remainder of the thesis is devoted to some important applications of optimal design theory. We first consider the problem of determining maximum likelihood estimates of the cell probabilities under the hypothesis of marginal homogeneity in a square contingency table. We formulate the Lagrangian function and remove the Lagrange parameters by substitution. We then transform the problem to one of maximizing some functions of the cell probabilities simultaneously. We apply this approach to some real data sets, namely US migration data and data on the grading of unaided distance vision. We solve another estimation problem, determining the maximum likelihood estimates of the parameters of latent variable models such as the Bradley-Terry model, where the data come from a paired-comparisons experiment. We approach this problem by considering the observed frequency as having a binomial distribution and then replacing the binomial parameters in terms of optimal design weights. We apply this approach to a data set from American League Baseball teams. Finally, we construct some optimal structure designs for comparing test treatments with a control. We introduce different structure designs and establish their properties using the incidence and characteristic matrices. We also develop methods of obtaining optimal R-type structure designs and show how such designs are trace-, A-, and MV-optimal. / October 2016
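A simple multiplicative algorithm for D-optimal design weights, of the general kind reviewed in this abstract; this textbook variant (weights multiplied by the scaled variance function) is illustrative and is not claimed to be the thesis' specific class of algorithms.

```python
import numpy as np

def d_optimal_weights(X, n_iter=500):
    """Multiplicative algorithm for an approximate D-optimal design.
    X is the (n x p) matrix of candidate design points; the returned weights
    form an approximate design supported on those points."""
    n, p = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                            # information matrix
        d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)  # variance function
        w *= d / p                                            # multiplicative update
        w /= w.sum()
    return w

# Quadratic regression on [-1, 1]: the D-optimal design (approximately
# recovered here) puts weight 1/3 on each of -1, 0 and 1.
t = np.linspace(-1, 1, 21)
X = np.column_stack([np.ones_like(t), t, t ** 2])
w = d_optimal_weights(X)
print(t[w > 0.05], np.round(w[w > 0.05], 3))
```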
27

Model-based recursive partitioning

Zeileis, Achim, Hothorn, Torsten, Hornik, Kurt January 2005 (has links) (PDF)
Recursive partitioning is embedded into the general and well-established class of parametric models that can be fitted using M-type estimators (including maximum likelihood). An algorithm for model-based recursive partitioning is suggested, with the following basic steps: (1) fit a parametric model to a data set, (2) test for parameter instability over a set of partitioning variables, (3) if there is some overall parameter instability, split the model with respect to the variable associated with the highest instability, (4) repeat the procedure in each of the daughter nodes. The algorithm yields a partitioned (or segmented) parametric model that can effectively be visualized and that subject-matter scientists are used to analyzing and interpreting. / Series: Research Report Series / Department of Statistics and Mathematics
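A toy sketch of the partitioning idea for a linear model; note that it greedily splits on the best residual-sum-of-squares improvement rather than performing the parameter-instability tests of steps (2)-(3), so it is a simplification for illustration only, not the published algorithm.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, float(np.sum((y - X @ beta) ** 2))

def mob(X, y, Z, depth=0, max_depth=2, min_size=30):
    """Toy recursive partitioning of a linear model over partitioning
    variables Z: fit, then split on the variable/cutpoint that most reduces
    the residual sum of squares, and recurse in the daughter nodes."""
    beta, rss = fit_ols(X, y)
    if depth >= max_depth or len(y) < 2 * min_size:
        return {"beta": beta}
    best = None
    for j in range(Z.shape[1]):
        for s in np.quantile(Z[:, j], np.linspace(0.1, 0.9, 9)):
            left = Z[:, j] <= s
            if left.sum() < min_size or (~left).sum() < min_size:
                continue
            split_rss = fit_ols(X[left], y[left])[1] + fit_ols(X[~left], y[~left])[1]
            if best is None or split_rss < best[0]:
                best = (split_rss, j, s, left)
    if best is None or best[0] >= rss:
        return {"beta": beta}
    _, j, s, left = best
    return {"var": j, "split": s,
            "left": mob(X[left], y[left], Z[left], depth + 1, max_depth, min_size),
            "right": mob(X[~left], y[~left], Z[~left], depth + 1, max_depth, min_size)}
```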
28

Discrete Weibull regression model for count data

Kalktawi, Hadeel Saleh January 2017 (has links)
Data can be collected in the form of counts in many situations. For example, the number of deaths from an accident, the number of days until a machine stops working, or the number of annual visitors to a city may all be considered interesting variables for study. This study is motivated by two facts: first, the vital role of the continuous Weibull distribution in survival analyses and failure time studies; hence, the discrete Weibull (DW) is introduced analogously to the continuous Weibull distribution (see Nakagawa and Osaki (1975) and Kulasekera (1994)). Second, researchers usually focus on modeling count data, which take only non-negative integer values, as a function of other variables. Therefore, the DW, introduced by Nakagawa and Osaki (1975), is considered to investigate the relationship between count data and a set of covariates. In particular, this DW is generalised by allowing one of its parameters to be a function of covariates. Although Poisson regression can be considered the most common model for count data, it is constrained by its equi-dispersion (the assumption of equal mean and variance). Thus, negative binomial (NB) regression has become the most widely used method for count data regression. However, even though the NB can be suitable for over-dispersion, it cannot be considered the best choice for modeling under-dispersed data. Hence, models are required that deal with the problem of under-dispersion, such as the generalized Poisson regression model (Efron (1986) and Famoye (1993)) and COM-Poisson regression (Sellers and Shmueli (2010) and Sáez-Castillo and Conde-Sánchez (2013)). Generally, all of these models can be considered modifications and developments of Poisson models. However, this thesis develops a model based on a simple distribution with no modification. Thus, if the data do not follow the dispersion pattern of the Poisson or NB, the true structure generating the data should be detected, and applying a model that has the ability to handle different dispersions would be of great interest. Thus, in this study, the DW regression model is introduced. Besides the flexibility of the DW to model under- and over-dispersion, it is a good model for inhomogeneous and highly skewed data, such as those with excessive zero counts, which are more dispersed than Poisson. Although these data can be fitted well using some developed models, namely the zero-inflated and hurdle models, the DW demonstrates a good fit and has less complexity than these modified models. However, there could be cases when a special model that separates the probability of zeros from that of the other positive counts must be applied. Then, to cope with the problem of too many observed zeros, two modifications of the DW regression are developed, namely the zero-inflated discrete Weibull (ZIDW) and hurdle discrete Weibull (HDW) models. Furthermore, this thesis considers another type of data, observed in many experiments, where the response count variable is censored from the right. Applying the standard models to such data without considering the censoring may yield misleading results; thus, the censored discrete Weibull (CDW) model is employed for this case. In addition, this thesis introduces the median discrete Weibull (MDW) regression model for investigating the effect of covariates on the count response through the median, which is more appropriate for the skewed nature of count data. In other words, the likelihood of the DW model is re-parameterized to explain the effect of the predictors directly on the median. Thus, in comparison with the generalized linear models (GLMs), MDW and GLMs both investigate the relationship to a set of covariates via certain location measures; however, GLMs consider the mean, which is not the best way to represent skewed data. These DW regression models are investigated through simulation studies to illustrate their performance. In addition, they are applied to some real data sets and compared with the related count models, mainly Poisson and NB models. Overall, the DW models provide a good fit to the count data as an alternative to the NB models in the over-dispersion case and fit much better than the Poisson models. Additionally, contrary to the NB model, the DW can be applied to the under-dispersion case.
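A sketch of the discrete Weibull pmf of Nakagawa and Osaki (1975) and of a covariate-dependent version fitted by direct numerical maximum likelihood; the logit link for q and log link for beta are assumptions made here for illustration and may differ from the parameterizations used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def dw_logpmf(y, q, beta):
    """log P(Y = y) for the discrete Weibull of Nakagawa and Osaki (1975):
    P(Y = y) = q**(y**beta) - q**((y + 1)**beta),  y = 0, 1, 2, ..."""
    return np.log(q ** (y ** beta) - q ** ((y + 1) ** beta) + 1e-300)

def neg_loglike(params, X, y):
    # Assumed illustrative links: logit for q, log for beta.
    coef, log_beta = params[:-1], params[-1]
    q = expit(X @ coef)
    return -np.sum(dw_logpmf(y, q, np.exp(log_beta)))

# Usage sketch: X is an (n x p) covariate matrix (with intercept column),
# y the observed counts.
# fit = minimize(neg_loglike, x0=np.zeros(X.shape[1] + 1), args=(X, y),
#                method="Nelder-Mead")
```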
29

Land use change in Tirana: investigations based on Landsat TM, Terra ASTER and GIS

Richter, Dietmar January 2007 (has links)
During the 1990s, migration to Tirana led to enormous land consumption at the expense of agricultural land in the surroundings of the Albanian capital. This study documents the development of this rapid land consumption using computer-based methods. The investigation is based on two satellite scenes acquired at different times (1988 and 2000), from which a change analysis is carried out. The aim of the change analysis is to analyze the land use change, to generate data, and to visualize the results in a suitable way. The principal methods of the change analysis are maximum-likelihood classification and a knowledge-based classification approach. The results of the change analysis are presented in change maps and evaluated statistically using GIS software.
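The standard Gaussian maximum-likelihood classifier used in remote sensing can be sketched as follows; this generic example (equal class priors, training statistics supplied by the caller) is not the thesis' specific processing chain.

```python
import numpy as np

def ml_classify(pixels, class_means, class_covs):
    """Gaussian maximum-likelihood classification of multispectral pixels,
    assuming equal class priors.
    pixels      : (n, b) array of pixel spectra with b bands
    class_means : (c, b) per-class mean spectra from training areas
    class_covs  : (c, b, b) per-class covariance matrices"""
    scores = []
    for mu, cov in zip(class_means, class_covs):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)   # Mahalanobis distance
        scores.append(-0.5 * (mahal + np.log(np.linalg.det(cov))))
    return np.argmax(np.stack(scores, axis=1), axis=1)      # most likely class index
```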
30

A Strategy for Earthquake Catalog Relocations Using a Maximum Likelihood Method

Li, Ka Lok January 2012 (has links)
A strategy for relocating earthquakes in a catalog is presented. The strategy is based on the argument that the distribution of the earthquake events in a catalog is reasonable a priori information for earthquake relocation in that region. This argument can be implemented using the method of maximum likelihood for arrival time data inversion, where the a priori probability distribution of the event locations is defined as the sum of the probability densities of all events in the catalog. This a priori distribution is then added to the standard misfit criterion in earthquake location to form the likelihood function. The probability density of an event in the catalog is described by a Gaussian probability density. The a priori probability distribution is, therefore, defined as the normalized sum of the Gaussian probability densities of all events in the catalog, excluding the event being relocated. For a linear problem, the likelihood function can be approximated by the joint probability density of the a priori distribution and the distribution of an unconstrained location due to the misfit alone. After relocating the events according to the maximum of the likelihood function, a modified distribution of events is generated. This distribution should be more densely clustered than before in general since the events are moved towards the maximum of the posterior distribution. The a priori distribution is updated and the process is iterated. The strategy is applied to the aftershock sequence in southwest Iceland after a pair of earthquakes on 29th May 2008. The relocated events reveal the fault systems in that area. Three synthetic data sets are used to test the general behaviour of the strategy. It is observed that the synthetic data give significantly different behaviour from the real data.
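A toy two-dimensional sketch of this relocation strategy: the prior is a sum of Gaussians centred on the other catalog events, a Gaussian stand-in replaces the real arrival-time misfit surface, each event moves to the posterior maximum on a grid, and the prior is then rebuilt from the relocated events; all numerical values are illustrative.

```python
import numpy as np

def relocate_catalog(locs, sigma_prior=2.0, sigma_misfit=1.0, n_iter=3):
    """Toy 2-D sketch of the catalog relocation strategy.  The prior for each
    event is a sum of Gaussians centred on all *other* catalog events (its
    normalization does not affect the maximum), the arrival-time misfit term
    is replaced by a Gaussian around the event's current location, and each
    event is moved to the maximum of the resulting posterior on a grid.  The
    prior is then rebuilt from the relocated events and the process repeats."""
    locs = np.asarray(locs, dtype=float)
    xs = np.linspace(locs[:, 0].min() - 5, locs[:, 0].max() + 5, 200)
    ys = np.linspace(locs[:, 1].min() - 5, locs[:, 1].max() + 5, 200)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    for _ in range(n_iter):
        new_locs = locs.copy()
        for i, loc in enumerate(locs):
            others = np.delete(locs, i, axis=0)
            d2 = ((grid[:, None, :] - others[None, :, :]) ** 2).sum(axis=-1)
            prior = np.exp(-0.5 * d2 / sigma_prior ** 2).sum(axis=1)
            misfit = np.exp(-0.5 * ((grid - loc) ** 2).sum(axis=-1) / sigma_misfit ** 2)
            new_locs[i] = grid[np.argmax(prior * misfit)]
        locs = new_locs          # update the prior with the relocated events
    return locs
```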
