451 |
Essays in Macroeconomics with Frictions and Uncertainty Shocks. Kang, Taesu. January 2012 (has links)
Thesis advisor: Fabio Ghironi / This dissertation consists of three essays on macroeconomics with frictions and uncertainty shocks. The first essay is "Collateral Constrained Workers' Unemployment". The financial market and the labor market are closely interconnected in the sense that unemployed workers have difficulty not only in taking out new loans but also in repaying outstanding ones. In addition, if unemployment entails losses from default and the loss of access to new loans, credit-constrained workers will accept lower wages to avoid losing their jobs. In this paper, we investigate the role of the interaction between the financial market and the labor market over the business cycle. To do so, we assume that credit-constrained workers can borrow against their houses and repay outstanding loans only when they are employed. We also introduce a labor search-and-matching framework into our model to consider unemployment and the wage bargaining process explicitly. With this setup, we find that an adverse housing preference shock has a substantial negative impact on the labor market by reducing the benefit of keeping a job. As a result, high unemployment significantly amplifies the business cycle by reducing the supply of loans and increasing default. This result is helpful for understanding the recent "Great Recession", which originated in the collapse of the housing market and was accompanied by high unemployment and default rates. The second essay is "International Financial Business Cycles". The recent international macroeconomics literature on global imbalances explains the persistent U.S. current account deficit and emerging countries' surplus, i.e., the U.S. is the borrower. Little research has been done at the banking-sector level, where U.S. banks are lenders to banks in emerging countries. We build a two-country framework in which banks are explicitly modeled to investigate how lending in the banking sector can affect the international macroeconomy during the recent crisis.
In steady state, banks in the developing country borrow from U.S. banks. When borrowers in the U.S. pay back less than contractually agreed and damage the balance sheets of U.S. banks, then in the presence of a bank capital requirement constraint, U.S. banks raise lending rates and decrease loans both to U.S. borrowers and to banks in the developing country. The results are a sharp increase in the lending spread, a reduction in output, and a depreciation of the developing country's real exchange rate. These match the experience of many emerging Asian markets following the U.S. financial crisis that started in late 2007. Another feature of our model captures an empirical fact, documented by Devereux and Yetman (2010), that across economies, countries with lower financial ratings can suffer more when the lending country deleverages. The third essay is "Uncertainty, Collateral Constrained Borrowers, and Business Cycle". The standard RBC model fails to generate the co-movement of key macro variables under an uncertainty shock because the precautionary saving motive decreases consumption but increases investment and labor. To fill this gap, we build a DSGE model with collateral-constrained borrowers who can borrow against housing and capital. With modest risk aversion, the model generates the desired co-movement of key macro variables under an uncertainty shock, and this co-movement comes from the collateral constraint channel through a drop in housing prices. Under an uncertainty shock, highly indebted borrowers sell collateral to avoid uncertainty in future consumption. As a result, housing prices fall, which causes a credit crunch for borrowers through the collateral constraint channel. The negative effect of an uncertainty shock is stronger in an economy with more highly indebted borrowers. / Thesis (PhD) — Boston College, 2012. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
|
452 |
Essays in Applied Microeconomic Theory. Raykov, Radoslav S. January 2012 (has links)
Thesis advisor: Utku Unver / This dissertation consists of three essays in microeconomic theory: two focusing on insurance theory and one on matching theory. The first chapter is concerned with catastrophe insurance. Motivated by the aftermath of Hurricane Katrina, it studies a strategic model of catastrophe insurance in which consumers know that they may not get reimbursed if too many other people file claims at the same time. The model predicts that the demand for catastrophe insurance can "bend backwards" to zero, resulting in multiple equilibria and, in particular, in market failure, which is always an equilibrium. This shows that a catastrophe market can fail entirely for demand-driven reasons, a result new to the literature. The model suggests that pricing is key to the credibility of catastrophe insurers: instead of increasing demand, price cuts may backfire and cause a "race to the bottom." However, small amounts of extra liquidity can restore the system to a stable equilibrium, highlighting the importance of a functioning reinsurance market for large risks. These results remain robust both for expected utility consumer preferences and for expected utility's most popular alternative, rank-dependent expected utility. The second chapter develops a model of quality differentiation in insurance markets, focusing on two of their specific features: the fact that costs are uncertain, and the fact that firms are averse to risk. Cornerstone models of price competition predict that firms specialize in products of different quality (differentiate their products) as a way of softening price competition. However, real-world insurance markets feature very little differentiation. This chapter offers an explanation for this phenomenon by showing that cost uncertainty fundamentally alters the nature of price competition among risk-averse firms by creating a drive against differentiation.
This force becomes particularly pronounced when consumers are picky about quality, and is capable of reversing standard results, leading to minimum differentiation instead. The chapter concludes with a study of how the costs of quality affect differentiation by considering two benchmark cases: when quality is costless and when quality costs are convex (quadratic). The third chapter focuses on the theory of two-sided matching. Its main topic is the inefficiencies that arise when agent preferences permit indifferences. It is well known that two-sided matching under weak preferences can result in matchings that are stable but not Pareto efficient, which creates bad incentives for inefficiently matched agents to stay together. In this chapter I show that in one-to-one matching with weak preferences, the fraction of inefficiently matched agents decreases with market size if agents are sufficiently diverse; in particular, the proportion of agents who can Pareto improve in a randomly chosen stable matching approaches zero as the number of agents goes to infinity. This result shows that the relative degree of inefficiency vanishes in sufficiently large markets, but it does not provide a "cure-all" solution in absolute terms, because inefficient individuals remain even when their fraction is vanishing. Agent diversity is represented by the diversity of each person's preferences, which are assumed randomly drawn, i.i.d. from the set of all possible weak preferences. To demonstrate its main result, the chapter relies on the combinatorial properties of random weak preferences. / Thesis (PhD) — Boston College, 2012. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
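The vanishing-inefficiency claim in the matching chapter lends itself to a quick numerical illustration. The sketch below is my own construction, not the chapter's actual proof technique: weak preferences are drawn as random integer utility levels (ties arise when levels coincide), a stable matching is produced by random tie-breaking followed by man-proposing deferred acceptance, and inefficiency is proxied by the fraction of agents who could strictly gain from a pairwise partner swap that makes nobody worse off.

```python
import random

def random_weak_prefs(n, levels=3, rng=random):
    # Each agent assigns each potential partner an integer "utility level";
    # equal levels encode indifference (a weak preference with ties).
    return [[rng.randrange(levels) for _ in range(n)] for _ in range(n)]

def gale_shapley(m_prefs, w_prefs, rng=random):
    # Break ties uniformly at random, then run man-proposing deferred
    # acceptance on the resulting strict preference lists.
    n = len(m_prefs)
    def strict_order(prefs):
        orders = []
        for u in prefs:
            idx = list(range(n))
            rng.shuffle(idx)                  # random tie-breaking
            idx.sort(key=lambda j: -u[j])     # stable sort keeps shuffled ties
            orders.append(idx)
        return orders
    m_order = strict_order(m_prefs)
    w_rank = [[0] * n for _ in range(n)]
    for w, order in enumerate(strict_order(w_prefs)):
        for rank, m in enumerate(order):
            w_rank[w][m] = rank               # lower rank = more preferred
    next_prop, wife, husband = [0] * n, [None] * n, [None] * n
    free = list(range(n))
    while free:
        m = free.pop()
        w = m_order[m][next_prop[m]]
        next_prop[m] += 1
        if husband[w] is None:
            husband[w], wife[m] = m, w
        elif w_rank[w][m] < w_rank[w][husband[w]]:
            free.append(husband[w])           # displaced man proposes again
            husband[w], wife[m] = m, w
        else:
            free.append(m)
    return wife

def improvable_fraction(m_prefs, w_prefs, wife):
    # Fraction of men/women involved in a pairwise swap that is weakly
    # better for all four agents and strictly better for at least one
    # (a conservative proxy for full Pareto improvability).
    n = len(m_prefs)
    bad = set()
    for m1 in range(n):
        for m2 in range(m1 + 1, n):
            w1, w2 = wife[m1], wife[m2]
            gains = [m_prefs[m1][w2] - m_prefs[m1][w1],
                     m_prefs[m2][w1] - m_prefs[m2][w2],
                     w_prefs[w1][m2] - w_prefs[w1][m1],
                     w_prefs[w2][m1] - w_prefs[w2][m2]]
            if all(g >= 0 for g in gains) and any(g > 0 for g in gains):
                bad.update([m1, m2])
    return len(bad) / n
```

Running this for growing market sizes (averaging over many draws) is a cheap way to eyeball the decreasing-fraction claim before attempting the combinatorial argument.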
|
453 |
Estudo sobre a determinação de antimônio em amostras ambientais pelo método de análise por ativação com nêutrons. Validação da metodologia e determinação da incerteza da medição / A study on antimony determination in environmental samples by neutron activation analysis. Validation of the methodology and determination of the uncertainty of the measurement. Matsubara, Tassiane Cristina Martins. 09 September 2011 (has links)
O antimônio é um elemento encontrado em baixas concentrações no meio ambiente. No entanto, a sua determinação tem despertado grande interesse devido ao conhecimento de sua toxicidade e da crescente aplicação na indústria. A determinação de antimônio tem sido um desafio para os pesquisadores uma vez que o elemento é encontrado em baixas concentrações, o que faz de sua análise uma tarefa difícil. Portanto, embora a análise por ativação de nêutrons (NAA) seja um método adequado para a determinação de vários elementos em diferentes tipos de matriz, no caso de Sb, a análise apresenta algumas dificuldades. A principal dificuldade é devido às interferências espectrais. O objetivo desta pesquisa foi validar o método de NAA para a determinação de Sb em amostras ambientais. Para estabelecer condições adequadas para a determinação de Sb, ensaios preliminares foram realizados para posterior análise de materiais de referência certificados (MRC). O procedimento experimental consistiu em irradiar amostras juntamente com padrão sintético de Sb por períodos de 8 ou 16 horas no reator nuclear de pesquisa IEA-R1, seguido de espectrometria de raios gama. A quantificação de Sb foi realizada pela medição dos radioisótopos de 122Sb e 124Sb. Ensaios preliminares indicaram a presença de Sb em papel de filtro Whatman, utilizado no preparo do padrão, porém em teor muito baixo, podendo ser considerado desprezível. No caso do material plástico utilizado como invólucro para a irradiação da amostra, foi verificado que ele deve ser escolhido cuidadosamente, pois dependendo do plástico, este pode conter Sb. A análise da estabilidade da solução padrão diluída de Sb, dentro do período de oito meses, mostrou que não há alteração significativa na concentração deste elemento. 
Os resultados obtidos nas análises dos materiais de referência certificados indicaram a formação de radioisótopos de 76As e também de 134Cs e 152Eu, podendo interferir na determinação de Sb pela medição de 122Sb, devido à proximidade de energias dos raios gama emitidos. Além disso, a alta atividade do 24Na pode mascarar o pico do 122Sb e dificultar a sua detecção. As análises dos MRC indicaram que a exatidão e a precisão dos resultados de Sb dependem principalmente do tipo e composição da matriz, da sua concentração na amostra, do radioisotopo medido e do tempo de decaimento utilizado para a medição. A avaliação dos componentes que contribuem para a medição da incerteza da concentração de Sb, mostrou que a maior contribuição da incerteza é dada pela estatística de contagem da amostra. Os resultados da avaliação da incerteza indicaram também que o valor da incerteza padrão combinada depende do radioisótopo medido e do tempo de decaimento utilizado para as contagens. Este estudo mostrou que a NAA é um método bastante adequado na determinação de Sb em amostras ambientais, possibilitando a obtenção de resultados com baixos valores de incerteza e por ser uma técnica puramente instrumental, permite a análise de um grande número de amostras. / Antimony is an element found in low concentrations in the environment. However, its determination has attracted great interest due to the knowledge of its toxicity and increasing application in industry. The determination of antimony has been a challenge for researchers since this element is found in low concentrations which make its analysis a difficult task. Therefore, although neutron activation analysis (NAA) is an appropriate method for the determination of various elements in different types of matrix, in the case of Sb its analysis presents some difficulties, mainly due to spectral interferences. The objective of this research was to validate the NAA method for Sb determination in environmental samples. 
To establish appropriate conditions for Sb determination, preliminary assays were carried out before the analysis of certified reference materials (CRM). The experimental procedure was to irradiate samples together with a synthetic Sb standard for periods of 8 or 16 hours in the IEA-R1 nuclear research reactor, followed by gamma-ray spectrometry. The quantification of Sb was performed by measuring the radioisotopes 122Sb and 124Sb. The results of preliminary assays indicated the presence of Sb in the Whatman No. 40 filter paper used in the preparation of the synthetic standard, but at very low concentrations that could be considered negligible. The plastic material used in bags for sample irradiation should be chosen carefully because, depending on the plastic, it may contain Sb. The analysis of the stability of the diluted Sb standard solution showed no change in the Sb concentration within eight months of its preparation. Results obtained in the analysis of certified reference materials indicated the interference of 76As and also of 134Cs and 152Eu in Sb determinations based on measuring 122Sb, due to the proximity of their gamma-ray energies. The high activity of 24Na can also mask the peak of 122Sb, hindering its detection. The analyses of the CRMs indicated that the accuracy and precision of the results depend on the type of matrix analyzed, the Sb concentration in the sample, the radioisotope measured, and the decay time used for the measurements. The analysis of the components that contribute to the uncertainty of the Sb concentration indicated that the largest contribution comes from the counting statistics of the sample. The findings also showed that the value of the combined standard uncertainty depends on the Sb radioisotope measured and the decay time used for counting. This study showed that NAA is a very adequate method for Sb determination in environmental samples, furnishing results with low uncertainty values.
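The dominance of sample counting statistics in the combined uncertainty follows directly from Poisson counting: the relative uncertainty of a net peak area N is roughly 1/sqrt(N), and components combine in quadrature. A minimal sketch (the function name and the illustrative non-counting components are my assumptions, not values from the thesis):

```python
import math

def combined_rel_uncertainty(n_sample, n_standard, u_rel_other=(0.005, 0.010)):
    """Combine relative uncertainty components in quadrature (GUM-style).

    n_sample, n_standard: net peak counts for sample and comparator standard.
    u_rel_other: illustrative non-counting components (e.g. geometry,
    standard preparation) -- placeholder values, not measured ones.
    """
    u_sample = 1.0 / math.sqrt(n_sample)      # Poisson counting, sample
    u_standard = 1.0 / math.sqrt(n_standard)  # Poisson counting, standard
    components = [u_sample, u_standard, *u_rel_other]
    return math.sqrt(sum(u * u for u in components)), u_sample

# A low-count 122Sb sample peak against a well-counted standard:
# the sample counting term dominates the combined uncertainty.
u_comb, u_count = combined_rel_uncertainty(n_sample=400, n_standard=50_000)
```

With 400 net counts the sample term alone is 5% relative, which is why longer counting times or decay times chosen to reduce the 24Na background directly shrink the combined uncertainty.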
|
454 |
Essays in International Economics. Brabant, Dominique. January 2019 (has links)
Thesis advisor: James E. Anderson / Thesis advisor: Rosen Valchev / The effect of uncertainty on firms' behavior and on the macroeconomy is generally negative in the literature. Extensive research has also demonstrated that financial frictions limit the extent of firms' activities and growth prospects. In the first two chapters of this dissertation, I study both empirically and theoretically how a specific type of uncertainty, exchange rate uncertainty, interacts with financial frictions to affect the behavior of exporting firms. In line with the existing literature, I find in the first chapter that exports of manufacturing sectors in which firms are more financially constrained decrease by more in times of high uncertainty. Having more tangible capital, which can potentially be used as collateral, makes the effect of uncertainty less negative, especially in sectors where firms are large. Relying more on external financing, on the other hand, makes the effect more negative and affects sectors with small firms more. Current theoretical models have little to say about the effect of uncertainty on heterogeneous firms. To address this issue, I introduce in the second chapter a model of financially constrained heterogeneous exporting firms in which credit conditions depend on the degree of exchange rate uncertainty. Firms in different sectors face different types of financial constraints and are therefore affected differently by uncertainty. I use the calibrated model to evaluate potential policies that could alleviate the negative effect of exchange rate uncertainty on exports. The uncovered interest parity (UIP) puzzle is the empirical finding that countries with higher risk-free interest rates tend to see their currencies appreciate in the short run. Typical two-country macroeconomic models instead predict that high interest-rate currencies depreciate, with arbitrage eliminating profitable carry trade strategies.
The international finance literature has responded by providing several alternative theoretical models that can explain the puzzle. In the third chapter of this dissertation, I study how the predictions of two of these models - the habit model of Verdelhan (2010) and the distorted beliefs model of Gourinchas and Tornell (2004) - are affected when recast in a standard dynamic stochastic general equilibrium framework. I investigate how the mechanisms rely on specific parameter values in order to find under which conditions, if any, they can explain the UIP puzzle. In addition, I obtain business cycle moments from model simulations and compare them to the moments obtained from a standard two-country DSGE model and from the data. My results show that for the first model, the habit model, the UIP results disappear under realistic calibrations. For the second model, the distorted beliefs model, the UIP properties survive under some calibrations, and the business cycle predictions remain close to the empirical evidence. / Thesis (PhD) — Boston College, 2019. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
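The UIP puzzle described above is usually measured with a "Fama regression" of the realized exchange-rate change on the lagged interest differential: UIP predicts a slope of one, while the data typically deliver a slope near zero or negative. The sketch below uses simulated data only, with UIP imposed by construction, so the estimated slope comes out near one; it is an illustration of the measurement, not of the thesis's models.

```python
import random

def ols_slope(x, y):
    # Slope of the Fama regression Delta s_{t+1} = a + b * (i_t - i*_t) + e.
    # Under UIP (expected depreciation = interest differential), b = 1.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

rng = random.Random(0)
diff = [rng.gauss(0.0, 1.0) for _ in range(5000)]      # i - i*, in percent
# UIP-consistent world: depreciation equals the differential plus noise.
ds = [d + rng.gauss(0.0, 1.0) for d in diff]
beta = ols_slope(diff, ds)
```

In actual currency data the same regression routinely yields beta well below one, which is the empirical content of the puzzle the habit and distorted-beliefs models try to match.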
|
455 |
Determinação do teor médio e da incerteza em depósitos fosfáticos estratiformes no sul do Estado do Mato Grosso / Determination of mean grade and uncertainty in stratiform phosphatic deposits in southern Mato Grosso State. Guilherme José Rampazzo. 17 November 2017 (has links)
A avaliação de recursos minerais em depósitos estratiformes, que possuem dimensões horizontais relativamente muito maiores que na direção vertical, pode acarretar algumas dificuldades durante os procedimentos de interpolação e estimativa, podendo inclusive exigir a utilização de sub-blocos. Tal fato pode ser ainda mais agravado se na geometria do depósito ocorrerem regiões com grande variabilidade de espessura, falhas ou dobramentos, fazendo com que eventos deposicionais de idades diferentes sejam postos muito próximos e que, consequentemente, esses dados sejam correlacionados durante as estimativas de pontos não amostrados. Uma maneira de se minimizar esse efeito em depósitos minerais de grandes extensões e relativa pequena espessura é utilizar métodos indiretos de avaliação através de variáveis de serviço, como a Acumulação (Acc) e a Espessura Mineralizada (T), onde o Teor Médio (G) é obtido pela razão dessas variáveis (Acc/T). Entretanto, uma lacuna matemática é identificada quando Dowd & Milton (1987) afirmam que esta razão se trata somente de uma aproximação, pois não leva em consideração as incertezas associadas às variáveis de serviço, e quando Journel & Huijbregts (1978) demonstram a dificuldade em se calcular a incerteza local associada a uma razão de duas variáveis. Yamamoto et al. (2017), ao identificarem essa lacuna, desenvolveram equações capazes de calcular valores provenientes da razão de duas variáveis considerando suas incertezas, que neste caso correspondem ao teor médio dos blocos estimados e à variância ou incerteza local associada às estimativas, aplicando o operador esperança matemática à função f(x, y) = X/Y e expandindo-a até segunda ordem pela Série de Taylor. Os objetivos propostos por esta Dissertação foram fundamentados nessas lacunas matemáticas e no teste desta nova metodologia de avaliação indireta de depósitos minerais estratiformes, utilizando pioneiramente dados reais provenientes de um projeto de pesquisa em fosfato de titularidade da BEMISA - Brasil Exploração Mineral S.A. no Estado do Mato Grosso. Para o desenvolvimento deste trabalho utilizaram-se diversas ferramentas estatísticas e geoestatísticas, a partir das quais foi possível contemplar recursos fosfáticos de aproximadamente sete milhões de toneladas a um teor médio de 3,21% de P2O5, considerando um teor de corte de 1,8%. / The evaluation of mineral resources in stratiform deposits, whose horizontal dimensions are much larger than the vertical one, can lead to some difficulties during interpolation and estimation procedures and may even require the use of sub-blocks. This is further aggravated if regions of great variability in thickness, faults or folding occur in the deposit geometry, causing depositional events of different ages to be placed very close together so that, consequently, these data are correlated during the estimation of unsampled points. One way to minimize this effect in mineral deposits of large extent and relatively small thickness is to use indirect methods of evaluation through service variables, such as Accumulation (Acc) and Mineralized Thickness (T), where the Average Grade (G) is obtained as the ratio of these variables (Acc/T). However, a mathematical gap is identified when Dowd & Milton (1987) argue that this ratio is only an approximation because it does not take into account the uncertainties associated with the service variables, and when Journel & Huijbregts (1978) demonstrate the difficulty of calculating the local uncertainty associated with a ratio of two variables. Having identified this gap, Yamamoto et al. (2017) developed equations capable of calculating values derived from the ratio of two variables considering their uncertainties, which in this case are the average grade of the estimated blocks and the local variance or uncertainty associated with the estimates, by applying the mathematical expectation operator to the function f(x, y) = X/Y and expanding it to second order in a Taylor series. The objectives of this Dissertation were based on these mathematical gaps and on testing this new methodology for the indirect evaluation of stratiform mineral deposits, using for the first time real data from a phosphate exploration project owned by BEMISA - Brasil Exploração Mineral S.A. in the State of Mato Grosso. For the development of this work, a number of statistical and geostatistical tools were used, from which it was possible to estimate phosphatic resources of approximately seven million tonnes at an average grade of 3.21% P2O5, considering a cut-off grade of 1.8%.
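The second-order Taylor treatment of G = Acc/T can be sketched numerically. The expressions below are the standard textbook expansion of a ratio of correlated variables, not the specific equations of Yamamoto et al. (2017), and the block statistics are made-up values, not the BEMISA data:

```python
import math
import random

def ratio_mean_var_taylor(mu_x, mu_y, var_x, var_y, cov_xy):
    """Second-order Taylor moments of G = X / Y about (mu_x, mu_y):
       E[G]   ~ mu_x/mu_y - cov/mu_y**2 + mu_x*var_y/mu_y**3
       Var[G] ~ (mu_x/mu_y)**2 * (var_x/mu_x**2 - 2*cov/(mu_x*mu_y)
                                  + var_y/mu_y**2)"""
    r = mu_x / mu_y
    mean = r - cov_xy / mu_y ** 2 + mu_x * var_y / mu_y ** 3
    var = r * r * (var_x / mu_x ** 2 - 2 * cov_xy / (mu_x * mu_y)
                   + var_y / mu_y ** 2)
    return mean, var

# Illustrative block: accumulation Acc ~ N(10, 1), thickness T ~ N(4, 0.4),
# correlation 0.5 between the two service variables.
mu_x, mu_y, sd_x, sd_y, rho = 10.0, 4.0, 1.0, 0.4, 0.5
m_taylor, v_taylor = ratio_mean_var_taylor(
    mu_x, mu_y, sd_x ** 2, sd_y ** 2, rho * sd_x * sd_y)

# Monte Carlo check of the approximation with correlated normal draws.
rng = random.Random(1)
total, n = 0.0, 200_000
for _ in range(n):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x = mu_x + sd_x * z1
    y = mu_y + sd_y * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    total += x / y
m_mc = total / n
```

The Monte Carlo mean lands close to the Taylor value here because the coefficient of variation of T is small; the approximation degrades as T becomes more variable relative to its mean, which is exactly the thin-deposit regime the thesis is concerned with.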
|
456 |
The effects of perceived environmental uncertainty, information asymmetry and evaluative style on compensation scheme: an agency study in China. January 1993 (has links)
by Zhou Jun. / Includes "Chinese version of the questionaire". / Thesis (M.B.A.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 43-45). / ABSTRACT / TABLE OF CONTENTS / ACKNOWLEDGEMENT / CHAPTER / Chapter I. --- INTRODUCTION --- p.1 / Chapter II. --- DEVELOPMENT OF PROPOSITIONS --- p.7 / Chapter III. --- METHODOLOGY --- p.19 / Chapter IV. --- DATA ANALYSIS --- p.25 / Chapter V. --- RESULTS --- p.30 / Chapter VI. --- DISCUSSION AND CONCLUSION --- p.34 / APPENDICES / BIBLIOGRAPHY
|
457 |
Hybrid simulation and optimization approach for green intermodal transportation problem with travel time uncertainty. Hrusovsky, Martin; Demir, Emrah; Jammernegg, Werner; van Woensel, Tom. 09 1900 (has links) (PDF)
The increasing volume of road transportation contributes to congestion on roads, which leads to delays and other negative impacts on the reliability of transportation. Moreover, transportation is one of the main contributors to the growth of carbon dioxide equivalent (CO2e) emissions, to which road transportation contributes significantly. Therefore, governmental organizations and private commercial companies are looking for greener transportation solutions to eliminate the negative externalities of road transportation. In this paper, we present a novel solution framework to support operational-level decisions for intermodal transportation networks using a combination of an optimization model and simulation. The simulation model includes stochastic elements in the form of uncertain travel times, whereas the optimization model is a deterministic, linear multi-commodity service network design formulation. The intermodal transportation plan can be optimized according to different objectives, including cost, time and CO2e emissions. The proposed approach is successfully applied to real-life scenarios in which differences in transportation plans for alternative objectives are presented. The solutions for transportation networks with up to 250 services and 20 orders show that the approach is capable of delivering reliable solutions and identifying possible disruptions and alternatives for adapting unreliable transportation plans.
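The role of the simulation half of such a framework can be illustrated with a toy reliability check: take a plan from the deterministic optimizer, sample travel-time disruptions, and estimate the chance that a fixed intermodal connection is missed. All numbers, names and the delay distribution below are hypothetical placeholders, not the paper's travel-time model:

```python
import random

def missed_connection_prob(buffer_hours, delay_sd=0.75, n_runs=20_000, seed=3):
    """Share of simulation runs in which a sampled road delay exceeds the
    buffer between planned truck arrival and scheduled train departure.
    The delay law max(0, N(0, delay_sd)) is an illustrative assumption."""
    rng = random.Random(seed)
    missed = sum(
        1 for _ in range(n_runs)
        if max(0.0, rng.gauss(0.0, delay_sd)) > buffer_hours
    )
    return missed / n_runs

# A tight 30-minute buffer versus a generous 2-hour buffer before the
# connecting rail service departs.
tight = missed_connection_prob(buffer_hours=0.5)
slack = missed_connection_prob(buffer_hours=2.0)
```

Feeding such per-connection reliability estimates back into the service network design is what lets the hybrid approach trade off cost, time and emissions against the risk of a disrupted plan.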
|
458 |
Uncertainty quantification of engineering systems using the multilevel Monte Carlo method. Unwin, Helena Juliette Thomasin. January 2018 (has links)
This thesis examines the quantification of uncertainty in real-world engineering systems using the multilevel Monte Carlo method. It is often infeasible to use the traditional Monte Carlo method to investigate the impact of uncertainty because computationally it can be prohibitively expensive for complex systems. Therefore, the newer multilevel method is investigated and the cost of this method is analysed in the finite element framework. The Monte Carlo and multilevel Monte Carlo methods are compared for two prototypical examples: structural vibrations and buoyancy-driven flows through porous media. In the first example, the impact of random mass density is quantified for structural vibration problems in several dimensions using the multilevel Monte Carlo method. Comparable eigenvalues and energy density approximations are found for the traditional Monte Carlo method and the multilevel Monte Carlo method, but for certain problems the expectation and variance of the quantities of interest can be computed over 100 times faster using the multilevel Monte Carlo method. It is also tractable to use the multilevel method for three dimensional structures, where the traditional Monte Carlo method is often prohibitively expensive. In the second example, the impact of uncertainty in buoyancy-driven flows through porous media is quantified using the multilevel Monte Carlo method. Again, comparable results are obtained from the two methods for diffusion dominated flows and the multilevel method is orders of magnitude cheaper. The finite element models for this investigation are formulated carefully to ensure that spurious numerical artefacts are not added to the solution and are compared to an analytical model describing the long term sequestration of CO2 in the presence of a background flow. Additional cost reductions are achieved by solving the individual independent samples in parallel using the new podS library.
This library schedules the Monte Carlo and multilevel Monte Carlo methods in parallel across different computer architectures for the two examples considered in this thesis. Nearly linear cost reductions are obtained as the number of processes is increased.
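The telescoping idea behind the multilevel estimator can be sketched on a standard toy problem: Euler discretization of geometric Brownian motion, with fine and coarse paths driven by the same Brownian increments. This is the generic Giles-style construction on a one-dimensional SDE, chosen for brevity; it is not the thesis's finite element setting, and all parameter values are illustrative.

```python
import random

def gbm_euler_pair(level, rng, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Euler scheme for dS = mu*S dt + sigma*S dW with 2**level steps.

    Returns (fine, coarse): terminal values on the fine grid and on a
    twice-coarser grid driven by the SAME Brownian increments, so the
    level correction fine - coarse has small variance."""
    n_fine = 2 ** level
    dt_fine = T / n_fine
    s_fine, s_coarse, dw_sum = s0, s0, 0.0
    for i in range(n_fine):
        dw = rng.gauss(0.0, dt_fine ** 0.5)
        s_fine += mu * s_fine * dt_fine + sigma * s_fine * dw
        dw_sum += dw
        if level > 0 and i % 2 == 1:      # two fine steps = one coarse step
            s_coarse += mu * s_coarse * (2 * dt_fine) + sigma * s_coarse * dw_sum
            dw_sum = 0.0
    return s_fine, (s_coarse if level > 0 else 0.0)

def mlmc_estimate(max_level, n_per_level, seed=42):
    """Telescoping sum E[P_0] + sum_l E[P_l - P_(l-1)] estimating E[S_T]."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(max_level + 1):
        n = n_per_level[level]
        total = 0.0
        for _ in range(n):
            fine, coarse = gbm_euler_pair(level, rng)
            total += (fine - coarse) if level > 0 else fine
        estimate += total / n
    return estimate

# Exact answer is exp(mu*T) ~ 1.0513. Most samples sit on the cheap coarse
# level; only a few are needed at the expensive fine levels, which is the
# source of the multilevel cost savings.
estimate = mlmc_estimate(max_level=3, n_per_level=[20000, 5000, 2000, 1000])
```

The same decreasing sample-count schedule across mesh levels is what the thesis exploits in the finite element framework, with the per-level solves being the independent samples that podS schedules in parallel.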
|
459 |
The effects of day-to-day interaction via social network sites on interpersonal relationships. Houghton, David J. January 2013 (has links)
The current research identifies the impact of sharing day-to-day information on social network sites (SNS) on the relationships we hold within and outside of them. Stemming from the literature on self-disclosure, uncertainty reduction, personal relationships, privacy and computer-mediated communication (CMC), a concurrent triangulation research strategy is adopted to identify the patterns of relationship development and interaction in SNS. Using a mixed-methods approach, five studies were conducted to determine how young adults interact via SNS. Empirical findings suggest that SNS users are driven by the need to reduce uncertainty and gather information about their interaction partners. An interaction between several factors was found to affect relationships between communication partners: the frequency of information sharing; the content of the shared information; the type of relationship held between the sender and recipient; the stage of relationship development; the medium of communication; and an expected social contract. A conceptual model of interpersonal interaction within SNS environments is proposed, identifying the links between sharing, certainty and relationship quality, and manifested communication behaviour throughout relationship development. Implications for the fields of communication science, CMC, and social and behavioural psychology are discussed.
|
460 |
Model Validation in Fire Protection Engineering. Lantz, Renee Vaillancourt. 24 August 2001 (has links)
"In the prediction of phenomenon behavior there is a presupposition that a similarity exists between model and phenomenon. Success of application is derived from that similarity. An example of this approach is the use of similarity conditions such as Reynolds number in flow problems or Fourier number in heat transfer problems. The advent of performance-based codes has opened up opportunities for many diverse avenues of fire model implementation. The reliability of models depends upon model correspondence uncertainty. Model correspondence uncertainty is incomplete and distorted information introduced into a simulation by a modeling scheme. It manifests itself as 1) the uncertainty associated with the mathematical relationships hypothesized for a particular model, and 2) the uncertainty of the predictions obtained from the model. Improving model implementation by providing a method for rank-ordering models is the goal of the Model Validity Criterion (MVC) method. MVC values can be useful as a tool to objectively and quantitatively choose a model for an application or as part of a model improvement program. The MVC method calculates the amount of model correspondence uncertainty introduced by a modeling scheme. Model choice is based upon the strategy of minimizing correspondence uncertainty and therefore provides the model that best corresponds to the phenomenon. The MVC value for a model is quantified as the sum of the length of two files. These files are individual measures of model structure correspondence uncertainty and model behavior correspondence uncertainty. The combination of the two uncertainty components gives an objective and structured evaluation of the relative validity of each model from a set of likely candidate models. The model with the smallest uncertainty files has the lowest MVC value and is the model with the most validity. Ultimately the value of such a method is only realized from its utility. 
Example applications of the MVC method are demonstrated. The examples evaluate the rank-ordering of plume physics options used within the computer zone model WPI-Fire when validated against upper-layer temperature data from compartment-fire test scenarios. The results show how the candidate models of a set may be discriminated among based on validity. These results are powerful in that they allow the user to establish a quantitative measure for the level of model performance and/or choose the most valid model for an application."
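The rank-ordering strategy can be mimicked with a crude two-component score: a behavior term from the misfit to measured upper-layer temperatures and a structure term penalizing model complexity. This is an MDL-flavored stand-in for the MVC's two uncertainty "file lengths", not the method's actual encoding; the function names, the score formula and the toy data are all my assumptions.

```python
import math

def mvc_like_score(predictions, observations, n_params, weight=1.0):
    """Behavior component: residual misfit (log-RMSE, scaled by sample
    size); structure component: parameter-count penalty. Both are
    illustrative surrogates for the MVC's two uncertainty file lengths."""
    n = len(observations)
    rmse = math.sqrt(sum((p - o) ** 2
                         for p, o in zip(predictions, observations)) / n)
    behavior = n * math.log(rmse + 1e-12)
    structure = weight * n_params * math.log(n)
    return behavior + structure

def rank_models(candidates, observations):
    # candidates: {name: (predictions, n_params)}; lowest score ranks first,
    # mirroring "smallest uncertainty files = lowest MVC value".
    scores = {name: mvc_like_score(preds, observations, k)
              for name, (preds, k) in candidates.items()}
    return sorted(scores, key=scores.get)

# Toy data: measured upper-layer temperatures (degC) and two hypothetical
# plume options; "plume_A" fits better but uses one more parameter.
observations = [20.0, 150.0, 320.0, 410.0, 460.0]
candidates = {
    "plume_A": ([25.0, 145.0, 330.0, 400.0, 465.0], 3),
    "plume_B": ([60.0, 100.0, 250.0, 480.0, 400.0], 2),
}
ranking = rank_models(candidates, observations)
```

The interesting trade-off the real MVC formalizes is visible even here: a better-fitting model only wins if its extra structure costs less than the behavior uncertainty it removes.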
|