1

Natural Disasters and Human Capital Accumulation

Crespo Cuaresma, Jesus, January 2010
The empirical literature on the relationship between natural disaster risk and investment in education is inconclusive. Model averaging methods in a framework of cross-country and panel regressions show an extremely robust negative partial correlation between secondary school enrollment and natural disaster risk. This result is driven exclusively by geologic disasters. Exposure to natural disaster risk is a robust determinant of differences in secondary school enrollment between countries, but not necessarily within countries. Keywords: natural disasters, human capital, education, school enrollment, Bayesian model averaging.
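Several entries in this listing rely on the same Bayesian model averaging machinery, so a minimal sketch of the standard quantities may be useful; the notation (y the data, M_1, ..., M_K the candidate models) is generic and not taken from the thesis itself:

```latex
% Posterior model probability: prior weight times marginal likelihood.
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{l=1}^{K} p(y \mid M_l)\, p(M_l)}

% Model-averaged posterior for a coefficient \beta:
p(\beta \mid y) = \sum_{k=1}^{K} p(\beta \mid y, M_k)\; p(M_k \mid y)

% Posterior inclusion probability of regressor x_j: the summed weight
% of all sampled models that contain x_j.
\mathrm{PIP}(x_j) = \sum_{k \,:\, x_j \in M_k} p(M_k \mid y)
```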
2

Model uncertainty in matrix exponential spatial growth regression models

Piribauer, Philipp, Fischer, Manfred M., 06 1900
This paper considers the most important aspects of model uncertainty for spatial regression models, namely the appropriate spatial weight matrix to be employed and the appropriate explanatory variables. We focus on the spatial Durbin model (SDM) specification, which nests most models used in the regional growth literature, and develop a simple Bayesian model averaging approach that provides a unified and formal treatment of these aspects of model uncertainty for SDM growth models. The approach expands on the work of LeSage and Fischer (2008) by reducing the computational costs through the use of Bayesian information criterion model weights and a matrix exponential specification of the SDM. The spatial Durbin matrix exponential model has theoretical and computational advantages over the spatial autoregressive specification, owing to the ease of inversion, differentiation and integration of the matrix exponential. In particular, the matrix exponential has a simple determinant, whose logarithm vanishes for the case of a spatial weight matrix with a trace of zero (LeSage and Pace 2007). This allows a larger domain of spatial growth regression models to be analysed with this approach, including models based on different classes of spatial weight matrices. The working of the approach is illustrated for the case of 32 potential determinants and three classes of spatial weight matrices (contiguity-based, k-nearest neighbor and distance-based spatial weight matrices), using a dataset of income per capita growth for 273 European regions. (authors' abstract)
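The determinant property invoked above follows from a standard identity for matrix exponentials; a short derivation in generic notation (α the scalar dependence parameter, W the spatial weight matrix), independent of the paper's particular setup:

```latex
% For any square matrix A:  \det(e^{A}) = e^{\mathrm{tr}(A)}.
% With A = \alpha W and a weight matrix normalised so that tr(W) = 0:
\det\!\left(e^{\alpha W}\right) = e^{\alpha\,\mathrm{tr}(W)} = e^{0} = 1
% Hence \ln\det(e^{\alpha W}) = 0: the log-determinant term drops out
% of the log-likelihood, avoiding the repeated determinant evaluations
% that spatial autoregressive specifications require.
```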
3

Compiler optimisations and relaxed memory consistency models

Morisset, Robin, 05 April 2017
Modern multiprocessor architectures and programming languages exhibit weakly consistent memories. Their behaviour is formalised by the memory model of the architecture or programming language; it precisely defines which values can be returned by each shared-memory read. This is not always the latest store to the same variable, because of optimisations in the processors, such as speculative execution of instructions, the complex effects of caches, and optimisations in the compilers. In this thesis we focus on the C11 memory model, defined by the 2011 edition of the C standard. Our contributions are threefold. First, we focused on the theory surrounding the C11 model, formally studying which compiler optimisations it enables. We show that many common compiler optimisations are allowed, but, surprisingly, some important ones are forbidden. Secondly, building on these results, we developed a random testing methodology for detecting when mainstream compilers such as GCC or Clang perform an optimisation that is incorrect with respect to the memory model. We found several bugs in GCC, all promptly fixed. We also implemented a novel optimisation pass in LLVM that looks for special instructions restricting processor optimisations, called fence instructions, and eliminates the redundant ones. Finally, we developed a user-level scheduler for lightweight threads communicating through first-in first-out single-producer single-consumer queues. This programming model is known as Kahn process networks, and we show how to implement it efficiently using C11 synchronisation primitives. This demonstrates that, despite its flaws, C11 can be used in practice.
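To make the last contribution concrete, below is a minimal sketch of a single-producer single-consumer ring buffer built on C11 atomics, the kind of channel a Kahn-network runtime rests on. The names (spsc_queue, spsc_push, spsc_pop) and the fixed power-of-two capacity are illustrative assumptions, not the thesis's actual implementation:

```c
/* Minimal SPSC ring buffer sketch using C11 atomics (illustrative,
 * not the thesis's implementation). Zero-initialise before use,
 * e.g.  spsc_queue q = {0};  */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define CAP 1024                /* capacity, must be a power of two */

typedef struct {
    _Atomic size_t head;        /* advanced by the consumer */
    _Atomic size_t tail;        /* advanced by the producer */
    int buf[CAP];
} spsc_queue;

/* Producer side: returns false if the queue is full. */
bool spsc_push(spsc_queue *q, int v) {
    size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == CAP)
        return false;
    q->buf[t & (CAP - 1)] = v;
    /* release: publish the slot write before the new tail is seen */
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false if the queue is empty. */
bool spsc_pop(spsc_queue *q, int *v) {
    size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (t == h)
        return false;
    *v = q->buf[h & (CAP - 1)];
    /* release: free the slot before the producer can reuse it */
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}
```

The acquire/release pairs are exactly the kind of C11 synchronisation primitives the abstract refers to: they order the slot accesses relative to the index updates without requiring sequentially consistent fences.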
4

Macroeconomic Applications of Bayesian Model Averaging

Moser, Mathias, 02 1900
Bayesian Model Averaging (BMA) is a common econometric tool for assessing the uncertainty regarding model specification and parameter inference, and is widely applied in fields where no strong theoretical guidelines are present. Its major advantage over single-equation models is the combination of evidence from a large number of specifications. The three papers included in this thesis all investigate model structures in the BMA model space. The first contribution evaluates how priors can be chosen to enforce model structures in the presence of interaction terms and multicollinearity. This is linked to a discussion in the Journal of Applied Econometrics regarding the question of whether being a Sub-Saharan African country makes a difference for growth modelling. The second essay is concerned with clusters of different models in the model space. We apply Latent Class Analysis to the set of sampled models from BMA and identify different subsets (kinds) of models for two well-known growth data sets. The last paper focuses on the application of "jointness", which tries to find bivariate relationships between regressors in BMA. Accordingly, this approach attempts to identify substitutes and complements, linking the econometric discussion on this subject to the field of Machine Learning. (author's abstract)
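For readers unfamiliar with jointness: one commonly cited measure (the cross-product-ratio statistic associated with Doppelhofer and Weeks) scores a pair of regressors by the posterior odds of their appearing together rather than apart. The sketch below uses generic notation and is not taken from the thesis:

```latex
% Let p(i \cap j) denote the posterior probability that regressors
% x_i and x_j are jointly included, p(\bar\imath \cap \bar\jmath)
% that both are excluded, and so on. The jointness statistic is
J_{ij} = \ln \frac{p(i \cap j)\; p(\bar\imath \cap \bar\jmath)}
                  {p(i \cap \bar\jmath)\; p(\bar\imath \cap j)}
% J_{ij} > 0: the regressors tend to enter together (complements);
% J_{ij} < 0: they tend to substitute for one another.
```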
5

Spatial Regression-Based Model Specifications for Exogenous and Endogenous Spatial Interaction

LeSage, James P., Fischer, Manfred M., 18 March 2014
The focus here is on the log-normal version of the spatial interaction model. In this context, we consider spatial econometric specifications that can be used to accommodate two types of dependence scenarios, one involving endogenous interaction and the other exogenous interaction. These model specifications replace the conventional assumption of independence between origin-destination flows with formal approaches that allow for two different types of spatial dependence in flow magnitudes. Endogenous interaction reflects situations where there is reaction to feedback regarding flow magnitudes from regions neighboring origin and destination regions. This type of interaction can be modeled using specifications proposed by LeSage and Pace (2008), who use spatial lags of the dependent variable to quantify the magnitude and extent of feedback effects, hence the term endogenous interaction. Exogenous interaction represents a situation where spillovers arise from nearby (or perhaps even distant) regions, and these need to be taken into account when modeling observed variation in flows across the network of regions. In contrast to endogenous interaction, these contextual effects do not generate reaction to the spillovers, leading to a model specification that can be interpreted without considering changes in the long-run equilibrium state of the system of flows. We discuss issues pertaining to the interpretation of estimates from these two types of model specification, and provide an empirical illustration. (authors' abstract)
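As a hedged sketch of what an endogenous-interaction specification looks like, in the spirit of the cited LeSage and Pace (2008) work, with y the stacked vector of origin-destination flows and W_o, W_d, W_w weight matrices capturing origin-, destination- and origin-to-destination-neighbor dependence (the paper's exact specification may differ):

```latex
y = \rho_o W_o y + \rho_d W_d y + \rho_w W_w y + X\beta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I_{n^2})
% Exogenous interaction instead enters through spatial lags of the
% regressors (terms such as W_o X \theta), so no feedback loop in y
% arises and no long-run equilibrium adjustment needs interpreting.
```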
6

Output specific efficiencies. The case of UK private secondary schools.

Gstach, Dieter, Somers, Andrew, Warning, Susanne, January 2003
Based on regularly published data we quantitatively assess the efficiency of UK private secondary schools in providing quantity versus quality of graduates on a per-output basis. In economic terms the primary question is whether an increase in the quantity of graduates with the observed inputs would indeed be associated with a deterioration in the average quality of graduates. The estimation framework is a new, statistically enriched type of Data Envelopment Analysis, as detailed in Gstach (2002), that accounts for output-specific efficiencies. The results indicate that quantity clearly dominates quality as the performance-distinguishing criterion amongst the sample schools, i.e. on average quantity efficiency is low while quality efficiency is high. The results also provide evidence that the abilities of schools to provide quantity and quality, respectively, are positively correlated. These findings indicate considerable scope for increasing the number of graduates without sacrificing average graduation quality through improved school management. (author's abstract) / Series: Department of Economics Working Paper Series
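For orientation, the standard output-oriented DEA programme that such frameworks extend; this is the textbook formulation, not Gstach's (2002) output-specific variant, which modifies it to allow a separate efficiency score per output:

```latex
% Efficiency of school 0 with inputs x_0 and outputs y_0, given
% observed peers (x_j, y_j), j = 1, ..., n:
\max_{\phi, \lambda} \ \phi
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_j \le x_0, \qquad
\sum_{j=1}^{n} \lambda_j y_j \ge \phi\, y_0, \qquad
\lambda_j \ge 0
% \phi \ge 1 is the factor by which all outputs could be expanded
% jointly; output-specific variants replace the single \phi with one
% \phi_r per output r (here: quantity and quality of graduates).
```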
7

Adaptive Shrinkage in Bayesian Vector Autoregressive Models

Feldkircher, Martin, Huber, Florian, 03 1900
Vector autoregressive (VAR) models are frequently used for forecasting and impulse response analysis. For both applications, shrinkage priors can help improve inference. In this paper we derive the shrinkage prior of Griffin and Brown (2010) for the VAR case, along with its relevant conditional posterior distributions. This framework imposes a set of normally distributed priors on the autoregressive coefficients and the covariances of the VAR, along with Gamma priors on a set of local and global prior scaling parameters. This prior setup is then generalized by introducing another layer of shrinkage with scaling parameters that push certain regions of the parameter space to zero. A simulation exercise shows that the proposed framework yields more precise estimates of the model parameters and impulse response functions. In addition, a forecasting exercise applied to US data shows that the proposed prior outperforms other specifications in terms of point and density predictions. (authors' abstract) / Series: Department of Economics Working Paper Series
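A sketch of the normal-gamma hierarchy this builds on, in one common parameterisation (the paper's exact parameterisation may differ); each coefficient gets a local scale while a global parameter shrinks everything toward zero:

```latex
% Global-local shrinkage on a VAR coefficient \beta_j:
\beta_j \mid \tau_j^2 \sim N\!\left(0, \tau_j^2\right), \qquad
\tau_j^2 \mid \theta, \lambda \sim \mathcal{G}\!\left(\theta, \tfrac{\theta\lambda}{2}\right), \qquad
\lambda \sim \mathcal{G}(c_0, c_1)
% Small \theta places prior mass near zero while keeping heavy tails,
% so negligible coefficients are shrunk aggressively and genuinely
% large ones are left nearly untouched.
```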
8

A Bayesian approach to identifying and interpreting regional convergence clubs in Europe

Fischer, Manfred M., LeSage, James P., 10 1900
This study suggests a two-step approach to identifying and interpreting regional convergence clubs in Europe. The first step involves identifying the number and composition of clubs using a space-time panel data model for annual income growth rates in conjunction with Bayesian model comparison methods. A second step uses a Bayesian space-time panel data model to assess how changes in the initial endowments of the variables that explain growth impact regional income levels over time. These dynamic trajectories of changes in regional income levels allow us to draw inferences regarding the timing and magnitude of regional income responses to changes in the initial conditions for the clubs identified in the first step. This is in contrast to conventional practice, which involves setting the number of clubs ex ante, selecting the composition of the potential convergence clubs according to some a priori criterion (such as initial per capita income thresholds), and using cross-sectional growth regressions for estimation and interpretation. (authors' abstract)
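For concreteness, a generic dynamic space-time panel specification of the sort such two-step exercises build on; this is illustrative notation (y_t the vector of regional incomes, W a spatial weight matrix, μ regional effects), not the paper's exact model:

```latex
y_t = \phi\, y_{t-1} + \rho\, W y_t + X_t \beta + \mu + \varepsilon_t,
\qquad \varepsilon_t \sim N(0, \sigma^2 I_n), \quad t = 1, \dots, T
% Bayesian model comparison over candidate club configurations (and
% over weight matrices W) then proceeds via the models' marginal
% likelihoods, i.e. their posterior model probabilities.
```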
9

Threshold cointegration and adaptive shrinkage

Huber, Florian, Zörner, Thomas, 06 1900
This paper considers Bayesian estimation of the threshold vector error correction model (TVECM) in moderate to large dimensions. Using the lagged cointegrating error as a threshold variable gives rise to additional difficulties that are typically solved by relying on large-sample approximations. Relying on Markov chain Monte Carlo methods, we circumvent these issues by avoiding computationally prohibitive estimation strategies such as grid search. Due to the proliferation of parameters, we use novel global-local shrinkage priors in the spirit of Griffin and Brown (2010). We illustrate the merits of our approach in an application to five exchange rates vis-à-vis the US dollar and assess whether a given currency is over- or undervalued. Moreover, we perform a forecasting comparison to investigate whether it pays off to adopt a non-linear modelling approach relative to a set of simpler benchmark models. / Series: Department of Economics Working Paper Series
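A two-regime TVECM of the kind estimated here can be written as follows, with z_{t-1} = β'y_{t-1} the lagged cointegrating error serving as both error-correction term and threshold variable, and γ the threshold; this is generic notation, with higher-order lag terms suppressed for brevity:

```latex
\Delta y_t =
\begin{cases}
\alpha_1 z_{t-1} + \Gamma_1 \Delta y_{t-1} + \varepsilon_t, & z_{t-1} \le \gamma \\[4pt]
\alpha_2 z_{t-1} + \Gamma_2 \Delta y_{t-1} + \varepsilon_t, & z_{t-1} > \gamma
\end{cases}
\qquad z_{t-1} = \beta' y_{t-1}
% The adjustment speeds \alpha_1, \alpha_2 may differ across regimes,
% capturing, e.g., error correction that only kicks in once a
% currency's misalignment crosses the threshold.
```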
10

Decomposing Income Differentials Between Roma and Non-Roma in South East Europe

Milcher, Susanne, January 2011
The paper decomposes average income differentials between Roma and non-Roma in South East Europe into the component that can be explained by group differences in income-related characteristics (the characteristics effect) and the component due to differing returns to these characteristics (the coefficients, or discrimination, effect). The decomposition analysis is based on Blinder (1973) and Oaxaca (1973) and uses three weighting matrices, reflecting different assumptions about the income structure that would prevail in the absence of discrimination. Heckman (1979) estimators control for selectivity bias. Using microdata from the 2004 UNDP household survey on Roma minorities, the paper finds that a large share of the average income differential between Roma and non-Roma is explained by human capital differences. Nevertheless, significant labour market discrimination is found in Kosovo for all weight specifications, and in Bulgaria and Serbia for two weight specifications. (author's abstract)
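The decomposition at work, in generalized Oaxaca-Blinder form, where the weighting matrix Ω determines the assumed non-discriminatory coefficient vector; generic notation rather than the paper's own:

```latex
% Mean income gap between non-Roma (N) and Roma (R):
\bar{y}_N - \bar{y}_R =
\underbrace{(\bar{x}_N - \bar{x}_R)'\beta^{*}}_{\text{characteristics effect}}
+ \underbrace{\bar{x}_N'(\hat\beta_N - \beta^{*})
            + \bar{x}_R'(\beta^{*} - \hat\beta_R)}_{\text{coefficients (discrimination) effect}}
% with \beta^{*} = \Omega \hat\beta_N + (I - \Omega)\hat\beta_R.
% \Omega = I and \Omega = 0 recover the two classic Oaxaca (1973)
% weightings; intermediate choices of \Omega correspond to other
% assumed non-discriminatory income structures.
```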
