11

Essays in International Trade

Jiatong Zhong (6997745) 16 August 2019 (has links)
The first chapter quantitatively examines the impact of exporting countries' reputations for product quality on aggregate trade flows. I introduce a novel data set in which recall incidences retrieved from the Consumer Product Safety Commission are matched to U.S. import data from 1990-2009. Using a model of learning, I construct a measure of exporter reputation in which consumers internalize product recalls as bad signals. Structural estimation of the model finds that reputation is important and especially impactful for products used by children. The market share elasticity of an exporter's reputation is around 1.49 across products, similar in magnitude to the average price elasticity of around 1.51. Improving reputation can increase export value, but reputation is sluggish: increasing reputation by 10% can take decades for most exporters. Counterfactual exercises confirm that quality inspection institutions are welfare improving, and that quality inspection is especially important for consumers of toys.

The second chapter summarizes the correlation between the export decisions of Chinese firms and product recalls of Chinese products. I use a new data set in which I link recall data scraped from the CPSC to monthly Chinese Customs data. I find that recalls from previous months correlate negatively with the decision to participate in exporting, but not with export value.

The third chapter, coauthored with Kendall Kennedy and Xuan Jiang, analyzes how China's industrialization and the immediate export growth following the Open Door Policy changed Chinese teenagers' education decisions, which explains the observed decline in education. We find that middle school completion rates increased and high school completion rates decreased in response to export growth, suggesting a tradeoff between education and labor market opportunities in China. These education effects are more pronounced for cohorts who were younger when China's Open Door Policy began, even though these teenagers also faced a stronger education system than earlier cohorts.
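The abstract does not spell out the functional form of the learning model, but the flavor of reputation updating it describes can be illustrated with a minimal beta-Bernoulli sketch in which each recall is internalized as a bad signal; the prior, the shipment/recall counts, and the function name below are hypothetical and not taken from the dissertation.

```python
# Minimal sketch (not the dissertation's actual model): exporter "reputation" as the
# posterior mean of a beta-Bernoulli quality process, where each recall is a bad signal.
def update_reputation(alpha, beta, shipments, recalls):
    """Posterior parameters after observing `recalls` bad signals out of `shipments`."""
    return alpha + (shipments - recalls), beta + recalls

alpha, beta = 2.0, 2.0                       # diffuse prior over the share of safe shipments
history = [(100, 0), (120, 3), (150, 1)]     # hypothetical (shipments, recalls) per year
for shipments, recalls in history:
    alpha, beta = update_reputation(alpha, beta, shipments, recalls)
    print(f"reputation (posterior mean quality): {alpha / (alpha + beta):.3f}")
```

The sluggishness noted in the abstract shows up here too: once many signals have accumulated, a single clean year moves the posterior mean only slightly.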
12

Esparsidade estruturada em reconstrução de fontes de EEG / Structured Sparsity in EEG Source Reconstruction

Francisco, André Biasin Segalla 27 March 2018 (has links)
Functional neuroimaging is an area of neuroscience that aims at developing techniques to map the activity of the nervous system; it has been under constant development in recent decades owing to its importance for clinical applications and research. Commonly applied techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have excellent spatial resolution (~ mm) but limited temporal resolution (~ s), which poses a great challenge to our understanding of the dynamics of higher cognitive functions, whose oscillations can occur on much finer temporal scales (~ ms). This limitation arises because these techniques measure slow biological responses that are only indirectly correlated with the actual electrical activity of the brain. The two main techniques that overcome this shortcoming are electro- and magnetoencephalography (EEG/MEG), non-invasive techniques that measure, respectively, the electric and magnetic fields on the scalp generated by the electrical brain sources. Both have millisecond temporal resolution but typically low spatial resolution (~ cm) due to the highly ill-posed nature of the electromagnetic inverse problem. A huge effort has been made in recent decades to improve their spatial resolution by incorporating relevant information from other imaging modalities and/or biologically inspired constraints, together with the development of sophisticated mathematical methods and algorithms. In this work we focus on EEG, although all techniques presented here can be applied equally to MEG because of their identical mathematical form. In particular, we explore sparsity as a useful mathematical constraint in a Bayesian framework called Sparse Bayesian Learning (SBL), which yields meaningful unique solutions to the source reconstruction problem. Moreover, we investigate how to incorporate different structures as degrees of freedom into this framework, an application of structured sparsity, and show that it is a promising way to improve the source reconstruction accuracy of electromagnetic imaging methods.
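As a rough illustration of the Sparse Bayesian Learning idea referenced in this abstract, the sketch below runs a standard ARD-style EM loop on a synthetic linear inverse problem y = L x + noise; the leadfield, noise level, and source configuration are invented for illustration and do not come from an actual EEG forward model.

```python
# Sparse Bayesian Learning (ARD) sketch for a linear inverse problem y = L @ x + noise,
# with EM hyperparameter updates; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))          # hypothetical leadfield matrix
x_true = np.zeros(n_sources)
x_true[[10, 50, 120]] = [3.0, -2.0, 1.5]                 # a few active sources (sparse)
sigma2 = 0.01                                            # noise variance, assumed known here
y = L @ x_true + np.sqrt(sigma2) * rng.standard_normal(n_sensors)

gamma = np.ones(n_sources)                               # per-source prior variances (ARD)
for _ in range(100):
    G = np.diag(gamma)
    Sigma_y = L @ G @ L.T + sigma2 * np.eye(n_sensors)   # marginal covariance of the data
    K = np.linalg.solve(Sigma_y, L @ G).T                # K = G L^T Sigma_y^{-1}
    mu = K @ y                                           # posterior mean of the sources
    Sigma_diag = gamma - np.einsum('ij,ji->i', K, L @ G) # diagonal of posterior covariance
    gamma = np.maximum(mu**2 + Sigma_diag, 1e-12)        # EM update; most gammas shrink to ~0

print("largest estimated sources:", np.sort(np.argsort(gamma)[-3:]))
```

The automatic pruning of the prior variances is what produces the sparse, unique solutions mentioned in the abstract; structured variants tie groups of gammas together instead of treating each source independently.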
13

Essays on Formal and Informal Long-Term Health Insurance Markets

Woldemichael, Andinet D. 13 August 2013 (has links)
This dissertation consists of two essays examining formal and informal long-term health insurance markets. The first essay analyzes heterogeneity among Long-Term Care Insurance policyholders in their lapse decisions and how their ex-ante and ex-post subjective beliefs about the probability of needing Long-Term Care affect those decisions. In this essay, I develop a model of the lapse decision in a two-period insurance framework with a Bayesian learning process and implement several empirical specifications of the model using longitudinal data from the Health and Retirement Study. The results show that policyholders' ex-ante point predictions of their probabilities, and their uncertainties about them, have a persistent but declining impact on lapse decisions. Those who believe that their risk is higher are indeed more likely to remain insured. However, as the uncertainty surrounding their ex-ante point predictions increases, their chances of lapsing increase regardless of their initial perception biases. These results are heterogeneous across cohorts and policyholders and, in particular, show that those in the older group near the average age of Nursing Home entry predict their risk levels more precisely than the younger cohort. Policy simulations show that a more informed initial purchase decision reduces the chance of lapsing down the road. The second essay examines the extent to which informal risk sharing arrangements provide insurance against health shocks. I develop a comprehensive model of an informal risk sharing contract with two-sided limited commitment which extends the standard model to a regime with the following features: information regarding the nature of realized health shocks is imperfect, and individuals' health capital stock serves both as a storage technology and as a factor of production. The theoretical results show that, in such a regime, Pareto optimal allocations are history dependent even if participation constraints do not bind. I perform numerical analysis to show that risk sharing against health shocks is less likely to be sustainable among non-altruistic individuals with different levels of biological survival rates and health capital productivity. The results also show that optimal allocations vary depending on the set of information available to individuals. Using panel data on households from villages in rural Ethiopia, I test the main predictions of the theoretical model. While there is negative history dependence in transfers among non-altruistic partners, history dependence is positive when risk sharing occurs along bloodlines and kinship. However, neither short-term nor long-term health shocks are insured through informal risk sharing arrangements among non-altruistic individuals.
14

Essays on the optimal policy response to climate change

Kaufman, Noah 17 June 2011 (has links)
Unchecked anthropogenic climate change has the potential to destroy human lives and wealth on an unprecedented scale. This dissertation analyzes, from an economic perspective, various public policy options to correct the market failures caused by climate change. The widespread adoption of environmentally friendly consumer products can reduce the impacts of climate change. The first chapter analyzes various methods of encouraging the market performance of these products. I build a model of observational learning in which a "green" consumer good enters a market to challenge an established "dirty" product. Among other results, I provide conditions for when financial incentives or informational campaigns should be more effective at encouraging the market performance of green products. I also provide a discussion and an empirical analysis of the performance of compact fluorescent light bulbs in the U.S. residential market, and compare the findings to the predictions of the theoretical model. The second chapter provides a critique of the macroeconomic models economists have used to determine optimal climate change abatement policies. I build a model that can incorporate more realistic ranges of uncertainty for both the occurrence of catastrophic events and societal risk aversion than economists have used in the past. Numerical simulations are then used to calculate a range of risk premiums whose magnitude shows that previous calculations of optimal carbon dioxide taxes are too imprecise to support any particular policy recommendation. Government-backed energy-efficiency programs have become popular as components of local and national strategies to combat climate change. The effectiveness of such policies hinges on whether they provide the appropriate incentives to both energy consumers and program implementers. The third chapter analyzes evaluations of California's energy-efficiency programs to assess their effectiveness at improving our understanding of the programs' performance and providing a check on utility incentives to overstate energy savings. We find, among other results, that evaluations are useful tools for achieving both of these goals, because the programs largely did not meet their energy-savings projections and the utility savings estimates are systematically higher than the third-party estimates of the evaluations.
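The chapter's calibrated model is not reproduced here, but the basic mechanics of a risk premium under catastrophic risk and CRRA utility can be shown with a short worked example; the probability, loss fraction, and risk-aversion values below are illustrative assumptions only.

```python
# Illustrative risk-premium calculation (not the chapter's calibrated model): a catastrophic
# loss of a fraction `loss` of wealth occurs with probability p; the risk premium is expected
# wealth minus the certainty equivalent under CRRA utility.
import math

def crra_utility(c, gamma):
    return math.log(c) if gamma == 1 else c**(1 - gamma) / (1 - gamma)

def certainty_equivalent(eu, gamma):
    return math.exp(eu) if gamma == 1 else ((1 - gamma) * eu) ** (1 / (1 - gamma))

def risk_premium(p, loss, gamma, wealth=1.0):
    eu = p * crra_utility(wealth * (1 - loss), gamma) + (1 - p) * crra_utility(wealth, gamma)
    return (wealth - p * loss * wealth) - certainty_equivalent(eu, gamma)

for gamma in (1.5, 3.0, 6.0):   # higher risk aversion -> larger premium for the same tail risk
    print(f"gamma={gamma}: risk premium = {risk_premium(p=0.01, loss=0.5, gamma=gamma):.4f}")
```

Even this toy example shows how strongly the premium depends on the assumed risk aversion, which is the imprecision the chapter emphasizes for optimal carbon tax calculations.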
15

Using Peak Intensity and Fragmentation Patterns in Peptide SeQuence IDentification (SQID) - A Bayesian Learning Algorithm for Tandem Mass Spectra

Ji, Li January 2006 (has links)
As DNA sequence information becomes increasingly available, researchers are now tackling the great challenge of characterizing and identifying peptides and proteins from complex mixtures. Automatic database searching algorithms have been developed to meet this challenge. This dissertation is aimed at improving these algorithms to achieve more accurate and efficient peptide and protein identification with greater confidence by incorporating peak intensity information and peptide cleavage patterns obtained in gas-phase ion dissociation research. The underlying hypothesis is that these algorithms can benefit from knowledge about the molecular-level fragmentation behavior of particular amino acid residues or residue combinations. SeQuence IDentification (SQID), developed in this dissertation research, is a novel Bayesian learning-based method that attempts to incorporate intensity information from peptide cleavage patterns in a database searching algorithm. It directly makes use of estimated peak intensity distributions for cleavage at amino acid pairs, derived from probability histograms generated from experimental MS/MS spectra. Rather than assuming amino acid cleavage patterns artificially or disregarding intensity information, SQID aims to take advantage of knowledge of observed fragmentation intensity behavior. In addition, SQID avoids generating a theoretical spectrum prediction for each candidate sequence, as needed by other sequencing methods including SEQUEST. As a result, computational efficiency is significantly improved. Extensive testing has been performed to evaluate SQID, using datasets from the Pacific Northwest National Laboratory, the University of Colorado, and the Institute for Systems Biology. The computational results show that by incorporating peak intensity distribution information, the program's ability to distinguish correct peptides from incorrect matches is greatly enhanced. This observation is consistent with experiments involving various peptides and searches against larger databases with distraction proteins, which indirectly verifies that peptide dissociation behaviors determine peptide sequencing and protein identification in MS/MS. Furthermore, testing SQID using previously identified clusters of spectra associated with unique chemical structure motifs leads to the following conclusions: (1) the improvement in identification confidence is observed with a range of peptides displaying different fragmentation behaviors; (2) the magnitude of improvement is in agreement with the peptide cleavage selectivity; that is, more significant improvements are observed with more selective peptide cleavages.
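SQID's actual scoring function is not given in the abstract; the toy sketch below only illustrates the general idea of rewarding candidate peptides whose flanking residue pairs predict the observed fragment-intensity bins, with an invented probability table and hypothetical function names.

```python
# Toy sketch of intensity-aware peptide scoring in the spirit described above (not SQID's
# actual scoring function): each cleavage site contributes the log-probability of the
# observed intensity bin given its flanking residue pair; the probability table is invented.
import math

# hypothetical learned distributions over intensity bins (low, medium, high) per residue pair
PAIR_INTENSITY_PROB = {
    ("D", "P"): (0.1, 0.2, 0.7),   # e.g. cleavage next to proline assumed intense here
    ("K", "L"): (0.5, 0.3, 0.2),
}
DEFAULT = (1 / 3, 1 / 3, 1 / 3)    # uninformative fallback for unseen residue pairs

def score_candidate(peptide, observed_bins):
    """observed_bins[i] is the intensity bin (0, 1, 2) of the fragment broken after residue i."""
    score = 0.0
    for i, bin_idx in enumerate(observed_bins):
        probs = PAIR_INTENSITY_PROB.get((peptide[i], peptide[i + 1]), DEFAULT)
        score += math.log(probs[bin_idx])
    return score

# the candidate whose expected cleavage behavior matches the observed intensities scores higher
print(score_candidate("ADPK", observed_bins=[1, 2, 0]))
print(score_candidate("AKLK", observed_bins=[1, 2, 0]))
```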
16

Dynamic Models of Human Capital Accumulation

Ransom, Tyler January 2015 (has links)
This dissertation consists of three separate essays that use dynamic models to better understand the human capital accumulation process. First, I analyze the role of migration in human capital accumulation and how migration varies over the business cycle. An interesting trend in the data is that, over the period of the Great Recession, overall migration rates in the US remained close to their respective long-term trends. However, migration evolved differently by employment status: unemployed workers were more likely to migrate during the recession and employed workers less likely. To isolate mechanisms explaining this divergence, I estimate a dynamic, non-stationary search model of migration using a national longitudinal survey from 2004-2013. I focus on the role of employment frictions on migration decisions in addition to other explanations in the literature. My results show that a divergence in job offer and job destruction rates caused differing migration incentives by employment status. I also find that migration rates were muted because of the national scope of the Great Recession. Model simulations show that spatial unemployment insurance in the form of a moving subsidy can help workers move to more favorable markets.

In the second essay, my coauthors and I explore the role of information frictions in the acquisition of human capital. Specifically, we investigate the determinants of college attrition in a setting where individuals have imperfect information about their schooling ability and labor market productivity. We estimate a dynamic structural model of schooling and work decisions, where high school graduates choose a bundle of education and work combinations. We take into account the heterogeneity in schooling investments by distinguishing between two- and four-year colleges and graduate school, as well as science and non-science majors for four-year colleges. Individuals may also choose whether to work full-time, part-time, or not at all. A key feature of our approach is to account for correlated learning through college grades and wages, thus implying that individuals may leave or re-enter college as a result of the arrival of new information on their ability and/or productivity. We use our results to quantify the importance of informational frictions in explaining the observed school-to-work transitions and to examine sorting patterns.

In the third essay, my coauthors and I investigate the evolution over the last two decades in the wage returns to schooling and early work experience. Using data from the 1979 and 1997 panels of the National Longitudinal Survey of Youth, we isolate changes in skill prices from changes in composition by estimating a dynamic model of schooling and work decisions. Importantly, this allows us to account for the endogenous nature of the changes in educational and accumulated work experience over this time period. We find an increase over this period in the returns to working in high school, but a decrease in the returns to working while in college. We also find an increase in the incidence of working in college, but that any detrimental impact of in-college work experience is offset by changes in other observable characteristics. Overall, our decomposition of the evolution in skill premia suggests that both price and composition effects play an important role. The role of unobserved ability is also important.
17

Bayesian Learning Under Nonnormality

Yilmaz, Yildiz Elif 01 December 2004 (has links) (PDF)
The naive Bayes classifier and maximum likelihood hypotheses in Bayesian learning are considered when the errors have a non-normal distribution. For the location and scale parameters, efficient and robust estimators obtained with the modified maximum likelihood (MML) estimation technique are used. In the naive Bayes classifier, the error distributions are assumed to be non-identical from class to class and from feature to feature, and the Generalized Secant Hyperbolic (GSH) and Generalized Logistic (GL) distribution families are used in place of the normal distribution. It is shown that the non-normal naive Bayes classifier obtained in this way classifies the data more accurately than the one based on the normality assumption. Furthermore, maximum likelihood (ML) hypotheses are obtained under the assumption of non-normality, which also produce better results compared to the conventional ML approach.
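As a hedged sketch of the general approach (not the thesis's estimator), the code below builds a naive Bayes classifier whose class- and feature-specific densities come from SciPy's generalized logistic family, fitted by ordinary maximum likelihood rather than the MML technique used in the thesis; the data are synthetic.

```python
# Naive Bayes with non-normal class-conditional densities, in the spirit of the approach above.
# Simplification: SciPy's genlogistic family and plain MLE fits stand in for the GSH/GL + MML
# machinery of the thesis.
import numpy as np
from scipy import stats

class NonNormalNaiveBayes:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        # one generalized-logistic fit per (class, feature): non-identical error distributions
        self.params_ = {c: [stats.genlogistic.fit(X[y == c, j]) for j in range(X.shape[1])]
                        for c in self.classes_}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            logp = np.log(self.priors_[c])
            logp = logp + sum(stats.genlogistic.logpdf(X[:, j], *self.params_[c][j])
                              for j in range(X.shape[1]))
            scores.append(logp)
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.gumbel(0, 1, (100, 2)), rng.gumbel(2, 1, (100, 2))])  # skewed, non-normal data
y = np.array([0] * 100 + [1] * 100)
print("training accuracy:", np.mean(NonNormalNaiveBayes().fit(X, y).predict(X) == y))
```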
19

Statistical Learning with Artificial Neural Network Applied to Health and Environmental Data

Sharaf, Taysseer 01 January 2015 (has links)
The current study illustrates the utilization of artificial neural networks in statistical methodology, specifically in survival analysis and time series analysis, both of which have important and wide-ranging applications in real life. We start our discussion by utilizing artificial neural networks in survival analysis. In the literature there exist two important methodologies for utilizing artificial neural networks in survival analysis based on the discrete survival time method. We illustrate the idea of the discrete survival time method and show how one can estimate the discrete model using an artificial neural network. We present a comparison between the two methodologies and extend one of them to estimate survival times under competing risks. Fitting a model with an artificial neural network requires attention to two parts: the neural network architecture and the learning algorithm. Usually neural networks are trained using a non-linear optimization algorithm such as a quasi-Newton-Raphson algorithm; other learning algorithms are based on Bayesian inference. In this study we present a new learning technique that mixes the two available methodologies for using Bayesian inference in the training of neural networks. We have performed our analysis using real-world data on patients diagnosed with skin cancer in the United States, drawn from the SEER database maintained under the supervision of the National Cancer Institute. The second part of this dissertation presents the utilization of artificial neural networks in time series analysis. We present a new method of training recurrent artificial neural networks with Hybrid Monte Carlo sampling and compare our findings with the popular auto-regressive integrated moving average (ARIMA) model. For this comparison we used monthly average carbon dioxide emission data collected from NOAA.
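The dissertation's network architecture and training scheme are not described in detail here, but the discrete survival time idea it builds on can be sketched as follows: expand each subject into person-period rows and train a classifier whose predicted probability is the discrete hazard. The data, covariate layout, and network size below are invented for illustration.

```python
# Sketch of the discrete survival time method mentioned above (not the dissertation's exact
# architecture): each subject becomes one row per period survived, with a binary indicator for
# whether the event occurred in that period; a small neural network estimates the hazard h(t|x).
import numpy as np
from sklearn.neural_network import MLPClassifier

def expand_person_periods(times, events, covariates):
    rows, labels = [], []
    for t, e, x in zip(times, events, covariates):
        for period in range(1, t + 1):
            rows.append([period, *x])                   # the period index is itself an input
            labels.append(1 if (e and period == t) else 0)
    return np.array(rows, dtype=float), np.array(labels)

rng = np.random.default_rng(0)
n = 300
x = rng.standard_normal((n, 2))                         # hypothetical covariates
times = rng.integers(1, 10, size=n)                     # synthetic observed follow-up periods
events = rng.random(n) < 0.7                            # True = event observed, False = censored

X_pp, y_pp = expand_person_periods(times, events, x)
hazard_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
hazard_net.fit(X_pp, y_pp)

# discrete hazard for a new subject across periods; survival is the running product of (1 - h)
new_x = np.array([0.5, -1.0])
grid = np.column_stack([np.arange(1, 10), np.tile(new_x, (9, 1))])
hazards = hazard_net.predict_proba(grid)[:, 1]
print("S(t) estimates:", np.cumprod(1 - hazards).round(3))
```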
20

Three Essays on a Longitudinal Analysis of Business Start-ups using the Kauffman Firm Survey

Khurana, Indu 05 November 2012 (has links)
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey. The first essay used data from 2004-2008 and examined the simultaneous relationship between a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which was used as a proxy for innovation. The paper demonstrated that employee well-being positively affects the firm's innovation, while a higher leverage ratio has a negative impact on innovation. No significant relation was found between leverage and employee well-being. The second essay used data from 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a link between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator on repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because the speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals he receives from the market. There was also no reason to expect the speed of learning to be related to the growth of the firm in one direction over another. The third essay used data from 2004-2010 and determined the timing of diversification activities by business start-ups. It captured when a start-up diversified for the first time and explored the association between an early diversification strategy adopted by a firm and its survival rate. A semi-parametric Cox proportional hazards model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
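As an illustration of the survival step in the third essay, the sketch below fits a semi-parametric Cox proportional hazards model with the lifelines library on synthetic firm data; the column names and the simulated relationship between early diversification and exit are hypothetical, not the Kauffman Firm Survey variables or results.

```python
# Cox proportional hazards sketch on synthetic start-up data (hypothetical columns, not KFS).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 500
early_diversifier = rng.integers(0, 2, n)
# synthetic exit times in which early diversifiers face a lower hazard (longer survival)
baseline = rng.exponential(5.0, n)
firm_age_at_exit = baseline * np.where(early_diversifier == 1, 1.5, 1.0)
exited = (firm_age_at_exit < 7).astype(int)              # firms still alive at year 7 are censored
firm_age_at_exit = np.minimum(firm_age_at_exit, 7)

df = pd.DataFrame({"firm_age_at_exit": firm_age_at_exit,
                   "exited": exited,
                   "early_diversifier": early_diversifier})
cph = CoxPHFitter()
cph.fit(df, duration_col="firm_age_at_exit", event_col="exited")
cph.print_summary()   # a hazard ratio below 1 for early_diversifier indicates higher survival
```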
