21 |
Social Interactions and Network Formation -- Empirical Modeling and Applications. Hsieh, Chih-Sheng, 09 August 2013 (has links)
No description available.
|
22 |
Direct Utility Models for Asymmetric Complements. Lee, Sanghak, 20 June 2012 (has links)
No description available.
|
23 |
Stochastic Computer Model Calibration and Uncertainty Quantification. Fadikar, Arindam, 24 July 2019 (has links)
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn from observations of the corresponding computer simulation model. These computer models are calibrated against limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional (deterministic) computer model in that repeated executions at the same input produce different outcomes. This additional source of uncertainty must be handled accordingly in any calibration setup.
A Gaussian process (GP) emulator replaces the actual computer simulation when the simulation is expensive to run and the computational budget is limited. However, a traditional GP interpolator models only the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, emulating only the mean and/or variance does not suffice. We present two approaches that address the non-Gaussian behavior of the emulator: (1) incorporating quantile regression into the GP for multivariate output, and (2) approximating the output distribution with a finite mixture of Gaussians. These emulators are also used for calibration and forward prediction in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa.
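As a flavour of the quantile-emulation idea, the following minimal sketch replicates a toy stochastic simulator at a handful of design points, computes empirical output quantiles, and fits one Gaussian process per quantile level with scikit-learn; the toy simulator, design, and quantile levels are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch (not the dissertation's implementation) of emulating a
# stochastic simulator with non-Gaussian output: replicate the simulator at
# each design point, compute empirical quantiles, and fit one GP per quantile.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def stochastic_simulator(x, n_rep=200):
    """Toy stand-in for an expensive stochastic simulation (skewed output)."""
    return rng.gamma(shape=1.0 + 3.0 * x, scale=1.0, size=n_rep)

# Space-filling design and replicated runs (sizes are assumptions)
X = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
levels = [0.05, 0.25, 0.5, 0.75, 0.95]
Q = np.array([np.quantile(stochastic_simulator(x[0]), levels) for x in X])

# One GP emulator per quantile level
kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=1e-3)
emulators = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Q[:, j])
             for j in range(len(levels))]

# Predicted quantile curves at new inputs describe the non-Gaussian output
# distribution more fully than a single mean/variance GP would.
X_new = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
pred = np.column_stack([gp.predict(X_new) for gp in emulators])
print(pred[:3].round(2))
```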
The third approach employs a sequential scheme that periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation. / Doctor of Philosophy / Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model inputs) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e., inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., when multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed for the calibration and prediction of a stochastic disease simulation model that simulates the contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the 2014 Ebola epidemic in West Africa and seasonal flu in New York City, USA.
|
24 |
Real-time Prediction of Dynamic Systems Based on Computer Modeling. Tong, Xianqiao, 15 April 2014 (has links)
This dissertation proposes a novel computer modeling technique (DTFLOP modeling) to predict the real-time behavior of dynamic systems. The proposed DTFLOP modeling splits the computation into a sequential part, conducted on the CPU, and a parallel part, performed on the GPU. It formulates the data transmission between the CPU and the GPU in terms of the memory access speed, and relates the floating point operations to be carried out on the CPU and the GPU to their respective calculation rates. With the help of the proposed DTFLOP modeling, it is possible to estimate the time cost of computing the model that represents a dynamic system on a given computer. The proposed DTFLOP modeling can be used as a general method to analyze the computation of a model of a dynamic system, and two real-life systems are selected to demonstrate its performance: a cooperative autonomous vehicle system and a full-field measurement system.
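One plausible reading of this time-cost formulation can be sketched as a three-term estimate: sequential work divided by the CPU rate, parallel work divided by the GPU rate, and transferred data divided by the transfer bandwidth. The additive split and all rates below are assumptions for illustration, not figures or formulas taken from the dissertation.

```python
# Minimal sketch of a DTFLOP-style time-cost estimate: sequential work on the
# CPU, parallel work on the GPU, and CPU<->GPU data movement. All rates and
# the additive decomposition are illustrative assumptions.

def dtflop_time_estimate(seq_flops, par_flops, bytes_transferred,
                         cpu_rate=5e10,      # sustained CPU flop/s (assumed)
                         gpu_rate=1e12,      # sustained GPU flop/s (assumed)
                         bandwidth=1.2e10):  # CPU<->GPU bytes/s (assumed)
    """Estimate wall-clock time (seconds) for one model update."""
    t_cpu = seq_flops / cpu_rate            # sequential part on the CPU
    t_gpu = par_flops / gpu_rate            # parallel part on the GPU
    t_mem = bytes_transferred / bandwidth   # host<->device data movement
    return t_cpu + t_gpu + t_mem

# Example: hypothetical prediction/correction step of a grid-based estimator
print(f"{dtflop_time_estimate(2e8, 5e10, 4e8):.4f} s")
```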
For the cooperative autonomous vehicle system, a novel parallel grid-based RBE technique is first proposed. The formulations are derived by identifying the parallel computation in the prediction and correction processes of the RBE. A belief fusion technique, which fuses not only the observation information but also the target motion information, is then proposed. The proposed DTFLOP modeling is validated against the GPU implementation of the parallel grid-based RBE technique by comparing the estimated time cost with the actual time cost. The superiority of the proposed parallel grid-based RBE technique over the conventional grid-based RBE technique is investigated through a number of numerical examples. The belief fusion technique is examined in a simulated target search-and-rescue test; it retains more information about the target than the conventional observation fusion technique and leads to better search-and-rescue performance.
For the full-field measurement system, a novel parallel DCT full-field measurement technique for measuring the displacement and strain fields on the deformed surface of a structure is proposed. The technique measures the displacement and strain fields by tracking the centroids of marked dots on the deformed surface. It identifies and develops the parallel computation in the image analysis and field estimation processes, which is then implemented on the GPU to accelerate conventional full-field measurement techniques. The detailed strategy of the GPU implementation is also developed and presented. The corresponding software package, which includes a graphical user interface, and the hardware system, consisting of two digital cameras, LED lights, and adjustable support legs to accommodate indoor or outdoor experimental environments, are also presented. The proposed DTFLOP modeling is applied to the parallel DCT full-field measurement technique to estimate its performance, and the close match with the actual performance further demonstrates the DTFLOP modeling. A number of simulated and real experiments, including tensile, compressive, and bending experiments in laboratory and outdoor environments, are performed to validate and demonstrate the proposed parallel DCT full-field measurement technique. / Ph. D.
|
25 |
Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification. Short, Nathaniel Jackson, 24 October 2012 (has links)
Identification of an individual from discriminating features of the friction ridge surface is one of the oldest and most commonly used biometric techniques. Methods for identification span from tedious, although highly accurate, manual examination to much faster Automated Fingerprint Identification Systems (AFIS). While automatic fingerprint recognition has grown in popularity due to the speed and accuracy of matching minutiae features of good-quality plain-to-rolled prints, the performance is less than impressive when matching partial fingerprints. For some applications, including forensic analysis where partial prints come in the form of latent prints, it is not always possible to obtain high-quality image samples. Latent prints, which are lifted from a surface, are typically of low quality and low fingerprint surface area. As a result, the overlapping region in which to find corresponding features in the genuine matching ten-print is reduced; this in turn reduces identification performance. Image quality can also vary substantially during image capture in applications with a high throughput of subjects having limited training, such as border control. Rushed image capture can yield a sample that is acceptable overall but contains local image regions of low quality.
We propose an improvement to the reliability of features detected in exemplar prints in order to reduce the likelihood of an unreliable overlapping region corresponding to a genuine partial print. A novel approach is proposed for detecting minutiae in low-quality image regions. The approach has demonstrated an increase in match performance for a set of fingerprints from a well-known database. While the method is effective at improving match performance for all of the fingerprint images in the database, a more significant improvement is observed for a subset of low-quality images.
In addition, a novel method for fingerprint analysis using a sequence of fingerprint images is proposed. The approach uses the sequence of images to extract and track minutiae for temporal analysis during a single impression, reducing the variation in image quality during image capture. Instead of choosing a single acceptable image from the sequence based on a global measure, we examine the change in quality on a local level and stitch blocks from multiple images based on the optimal local quality measures. / Ph. D.
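A minimal sketch of the block-wise stitching idea is given below: for every block location, the frame with the highest local quality score is copied into the composite image. The block size and the variance-based quality proxy are assumptions; the dissertation's own minutiae tracking and quality measures are not reproduced here.

```python
# Minimal sketch (assumed details, not the dissertation's algorithm) of
# stitching a composite fingerprint image from a capture sequence by choosing,
# for every block, the frame with the highest local quality. Block variance
# stands in for a real local quality measure.
import numpy as np

def stitch_by_local_quality(frames, block=16):
    """frames: (T, H, W) grayscale sequence; returns an (H, W) composite."""
    T, H, W = frames.shape
    out = np.zeros((H, W), dtype=frames.dtype)
    for r in range(0, H, block):
        for c in range(0, W, block):
            tiles = frames[:, r:r + block, c:c + block]
            # Proxy quality score per frame for this block (assumption)
            scores = tiles.reshape(T, -1).var(axis=1)
            out[r:r + block, c:c + block] = tiles[int(scores.argmax())]
    return out

# Usage with a synthetic 5-frame sequence
frames = np.random.default_rng(1).integers(0, 256, size=(5, 128, 128)).astype(np.uint8)
composite = stitch_by_local_quality(frames)
print(composite.shape)
```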
|
26 |
Deriving Consensus Ratings of the Big Three Rating Agencies. Grün, Bettina; Hofmarcher, Paul; Hornik, Kurt; Leitner, Christoph; Pichler, Stefan, 27 March 2013 (has links) (PDF)
This paper introduces a model framework for dynamic credit rating processes. Our framework aggregates ordinal rating information stemming from a variety of rating sources. The dynamics of the consensus rating capture systematic as well as idiosyncratic changes. In addition, our framework allows us to validate the different rating sources by analyzing the mean/variance structure of the rating deviations.
In an empirical study of the iTraxx Europe companies rated by the big three external rating agencies, we use Bayesian techniques to estimate the consensus ratings for these companies.
The advantages are illustrated by comparing our dynamic rating model to a naive benchmark model. (authors' abstract)
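As a rough illustration of the kind of data-generating process such a framework entails, the sketch below simulates a latent consensus score with systematic and idiosyncratic changes and three agencies reporting ordinal ratings distorted by source-specific bias and noise; the deviation statistics printed at the end correspond to the mean/variance structure used to validate the sources. The rating scale, dynamics, and parameters are assumptions, and no Bayesian estimation is performed.

```python
# Minimal sketch, under assumed dynamics, of a consensus-rating data-generating
# process: a latent consensus score evolves over time, and each agency reports
# an ordinal rating equal to the consensus distorted by agency-specific bias
# and noise. All parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(42)
T, categories = 40, np.arange(1, 8)           # e.g. 1 = best ... 7 = worst (assumed scale)
consensus = np.empty(T)
consensus[0] = 3.0
for t in range(1, T):                          # systematic + idiosyncratic change
    consensus[t] = consensus[t - 1] + rng.normal(0.0, 0.2)

agency_bias = {"A": 0.3, "B": -0.2, "C": 0.0}  # assumed source-specific biases
ratings = {
    name: np.clip(np.rint(consensus + bias + rng.normal(0.0, 0.4, T)),
                  categories.min(), categories.max()).astype(int)
    for name, bias in agency_bias.items()
}

# Mean/variance of deviations from the (here known) consensus: the quantity
# the framework uses to validate the individual rating sources.
for name, r in ratings.items():
    dev = r - consensus
    print(name, round(dev.mean(), 2), round(dev.var(), 2))
```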
|
27 |
DSGE modeling of business cycle properties of Czech labor market. Sentivany, Daniel, January 2016 (has links)
The goal of this thesis is to develop a DSGE model that accounts for the key business cycle properties of the Czech labor market. We used the standard New Keynesian framework for monetary policy analysis and incorporated an elaborate labor market setup with the equilibrium wage derived via an alternating offer bargaining protocol originally proposed by Rubinstein (1982), following the work of Christiano, Eichenbaum and Trabandt (2013), in the following steps. Firstly, we calibrated the closed-economy model to values suited to the Czech economy and found that the model can not only account for the higher volatility of the real wage and unemployment, but can also explain the contemporaneous rise of both wages and employment after an expansionary shock in the economy, the so-called Shimer puzzle (Shimer, 2005a). Secondly, we demonstrated that the alternating offer bargaining sharing rule outperforms the Nash sharing rule under the assumption of hiring costs in our framework (more so when using search costs) and is therefore better suited for use in larger-scale models. Thirdly, we concluded that, after estimating the labor market parameters using Czech data, our model disproved the relatively low values linked to the probabilities of unsuccessful bargaining and job destruction. JEL...
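For readers unfamiliar with the bargaining protocol, the sketch below computes the textbook Rubinstein (1982) alternating-offer split of a unit surplus from the two parties' discount factors; it shows only the classic two-player result, not the full wage-bargaining protocol of Christiano, Eichenbaum and Trabandt (2013) used in the thesis.

```python
# Minimal sketch of the textbook Rubinstein (1982) alternating-offer split
# (not the thesis's wage-bargaining protocol): with discount factors for the
# proposer and the responder, the unique subgame-perfect shares of a unit
# surplus are given below.

def rubinstein_shares(delta_proposer, delta_responder):
    """Equilibrium surplus shares when the proposer moves first."""
    proposer = (1.0 - delta_responder) / (1.0 - delta_proposer * delta_responder)
    responder = 1.0 - proposer
    return proposer, responder

# As the parties become equally patient the split approaches 50/50, whereas a
# symmetric Nash sharing rule gives exactly 0.5 regardless of patience.
for d in (0.90, 0.99, 0.999):
    print(d, tuple(round(s, 4) for s in rubinstein_shares(d, d)))
```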
|
28 |
Discrete Parameter Estimation for Rare Events: From Binomial to Extreme Value Distributions. Schneider, Laura Fee, 26 April 2019 (has links)
No description available.
|
29 |
Statistical methods for blind equalization of communication channels. Bordin Júnior, Claudio José, 23 March 2006 (has links)
In this thesis, we propose and analyze blind equalization methods suitable for linear FIR communication channels, focusing on the development of algorithms based on particle filters, which are recursive methods for approximating Bayesian solutions to stochastic filtering problems. Initially, we propose new equalization methods for signal models with Gaussian additive noise that dispense with the need for differentially encoding the transmitted signals, as opposed to the previously existing methods. Next, we extend these algorithms to deal with non-Gaussian additive noise by deploying artificial parameter evolution techniques. We then develop new joint blind equalization and decoding algorithms, suitable for convolutionally or block-coded communication systems. Via numerical simulations, we show that the proposed algorithms outperform traditional approaches both in terms of mean bit error rate and convergence speed, and closely approach the performance of the optimal (MAP) trained equalizer. Furthermore, we observe that the methods based on deterministic particle filters consistently outperform those based on stochastic approaches, making them preferable when the adopted signal model allows for the analytic marginalization of the unknown channel parameters.
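The particle-filtering idea behind blind equalization can be illustrated with the following toy sketch: each particle carries a hypothesis of the unknown channel taps (kept alive by artificial parameter evolution) and the previous symbol, proposes the next BPSK symbol, and is reweighted by the observation likelihood. The channel length, noise level, and prior proposal are assumptions, and the sketch is far simpler than the thesis's Rao-Blackwellized and deterministic particle-filter algorithms; the usual sign ambiguity of blind schemes is handled by scoring against both signs of the true sequence.

```python
# Toy sketch of particle-filter blind equalization (assumed parameters; not the
# thesis's algorithms): particles jointly hypothesize a 2-tap FIR channel and
# the BPSK symbol stream, with artificial jitter on the channel estimates.
import numpy as np

rng = np.random.default_rng(7)

# --- toy FIR channel and data ------------------------------------------------
h_true = np.array([1.0, 0.5])                 # unknown 2-tap channel (assumed)
T, sigma = 400, 0.1
s_true = rng.choice([-1.0, 1.0], size=T)
s_pad = np.concatenate(([1.0], s_true))       # s_{-1} assumed known/fixed
y = h_true[0] * s_pad[1:] + h_true[1] * s_pad[:-1] + rng.normal(0.0, sigma, T)

# --- particle filter ----------------------------------------------------------
N = 2000
h = rng.normal(0.0, 1.0, size=(N, 2))          # channel hypotheses
s_prev = np.ones(N)                            # matches the assumed s_{-1}
w = np.full(N, 1.0 / N)
s_hat = np.empty(T)

for t in range(T):
    h += rng.normal(0.0, 0.01, size=h.shape)   # artificial parameter evolution
    s_new = rng.choice([-1.0, 1.0], size=N)    # prior proposal for the symbol
    pred = h[:, 0] * s_new + h[:, 1] * s_prev
    w = w * np.exp(-0.5 * ((y[t] - pred) / sigma) ** 2) + 1e-300
    w /= w.sum()
    s_hat[t] = np.sign(np.sum(w * s_new)) or 1.0
    # multinomial resampling (systematic resampling would be the usual refinement)
    idx = rng.choice(N, size=N, p=w)
    h, s_prev, w = h[idx], s_new[idx], np.full(N, 1.0 / N)

ber = min(np.mean(s_hat != s_true), np.mean(s_hat != -s_true))
print(f"bit error rate (up to sign): {ber:.3f}")
```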
|
30 |
Teacher and School Contributions to Student Growth. Anderson, Daniel, 18 August 2015 (has links)
Teachers and schools both play important roles in students' education. Yet the unique contribution of each to students' growth has rarely been explored. In this dissertation, a Bayesian multilevel model was applied in each of Grades 3 to 5, with students' growth estimated across three seasonal (fall, winter, spring) administrations of a mathematics assessment. Variance in students' within-year growth was then partitioned into student-, classroom-, and school-level components. The expected differences in students' growth between classrooms and schools were treated as indicators of the teacher or school "effect" on students' mathematics growth. Results provided evidence that meaningful differences in students' growth lie both between classrooms within schools and between schools.
The distribution of teacher effects between schools was also examined through the lens of access and equity, with systematic sorting of teachers to schools leading to disproportionate student access to classrooms where average growth was above the norm. Further, previous research has documented persistent and compounding teacher effects over time; systematic teacher sorting results in students having differential probabilities of being enrolled in multiple "high"- or "low"-growth classrooms in a row. While clear evidence of teacher sorting was found, the demographic composition of schools was not related to the sorting, contrary to previous research. The persistence of teacher and school effects was also examined from a previously unexplored angle: the effect of students' previous teacher(s) on their subsequent rate of within-year growth. These effects were found to be small, and teacher effects overall were found to decay quite rapidly.
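As a rough sketch of the variance partition at issue, the snippet below simulates within-year growth with student-, classroom-, and school-level components of assumed sizes and reports each level's share of the total growth variance; in the dissertation these shares are estimated from seasonal mathematics scores with a Bayesian multilevel model rather than assumed.

```python
# Minimal sketch, with made-up variance components, of partitioning within-year
# growth into school, classroom (teacher), and student levels. The components
# are known by construction here; the dissertation estimates them from data.
import numpy as np

rng = np.random.default_rng(3)
n_schools, classes_per_school, students_per_class = 50, 6, 25
sd_school, sd_class, sd_student = 0.6, 1.0, 2.0   # assumed growth SDs per level

school_eff = rng.normal(0.0, sd_school, n_schools)
class_eff = rng.normal(0.0, sd_class, (n_schools, classes_per_school))
growth = (school_eff[:, None, None] + class_eff[:, :, None]
          + rng.normal(0.0, sd_student,
                       (n_schools, classes_per_school, students_per_class)))

# Variance shares implied by the generating components (what a multilevel
# model would aim to recover from the observed growth).
total = sd_school**2 + sd_class**2 + sd_student**2
for name, sd in [("school", sd_school), ("classroom", sd_class), ("student", sd_student)]:
    print(f"{name:9s} share of growth variance: {sd**2 / total:.2f}")
print("simulated total growth variance:", round(growth.var(), 2))
```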
|