371

Homomorphic encryption and coding theory

Půlpánová, Veronika January 2012
Title: Homomorphic encryption and coding theory Author: Veronika Půlpánová Department: Department of algebra Supervisor: RNDr. Michal Hojsík, Ph.D., Department of algebra Abstract: The current mainstream in fully homomorphic encryption is the approach that uses the theory of lattices. The thesis explores alternative approaches to homomorphic encryption. First we present a code-based homomorphic encryption scheme by Armknecht et al. and study its properties. Then we describe the family of cryptosystems commonly known as Polly Cracker and identify its problematic aspects. The main contribution of this thesis is the design of a new fully homomorphic symmetric encryption scheme based on Polly Cracker. It proposes a new approach to overcoming the complexity of the simple Polly Cracker-based cryptosystems. It uses Gröbner bases to generate zero-dimensional ideals of polynomial rings over finite fields whose factor rings are then used as the rings of ciphertexts. Gröbner bases equip these rings with a multiplicative structure that is easily algorithmized, thus providing an environment for a fully homomorphic cryptosystem. Keywords: Fully homomorphic encryption, Polly Cracker, coding theory, zero-dimensional ideals
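As a rough illustration of the factor-ring arithmetic the abstract alludes to (not the thesis's actual scheme), the sketch below uses SymPy to multiply two representatives and reduce the product modulo a Gröbner basis of a zero-dimensional ideal over GF(2); the ideal and the "ciphertext" polynomials are hypothetical toy choices.

```python
from sympy import symbols, groebner

x, y = symbols("x y")

# Hypothetical zero-dimensional ideal over GF(2); its factor ring is finite.
G = groebner([x**2 + x, y**2 + y, x*y + x + y + 1],
             x, y, modulus=2, order="lex")

# Elements of the factor ring are represented by remainders modulo G.
c1 = x + y        # toy "ciphertext" representative
c2 = x*y + 1      # another representative

# Multiplying representatives and reducing modulo the Groebner basis yields
# the canonical representative of the product in the quotient ring, which is
# the multiplicative structure the abstract refers to.
_, product = G.reduce(c1 * c2)
print(product)
```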
372

Global Spillover Effects from Unconventional Monetary Policy During the Crisis

Solís González, Brenda January 2015
This work investigates the international spillover effects and transmission channels of Unconventional Monetary Policy (UMP) of major central banks in the United States, the United Kingdom, Japan and Europe to Latin American countries. A Global VAR model is estimated to analyze the impact on output, inflation, credit, equity prices and money growth in the selected countries. Results suggest that there are indeed international spillovers to the region, with money growth, stock prices and international reserves as the main transmission channels. In addition, outcomes differ between countries and variables, implying not only that transmission channels are not the same across the region but also that the effects of the monetary policy are not distributed equally. Furthermore, evidence is found that for some countries transmission channels may have changed due to the crisis. Finally, the effects of UMP during the crisis were in general positive, with the exception of Japan, indicating that policies from this country brought more costs than benefits to the region. Keywords: Zero Lower Bound, Unconventional Monetary Policy, International Spillovers, Global VAR, GVAR.
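A GVAR links country-level VAR models through foreign variables built from trade weights. As a minimal, hypothetical sketch of the country-level building block only (not the thesis's full GVAR), a standard VAR and its impulse responses can be estimated with statsmodels; the series names and data below are stand-ins.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical stand-in data: output growth, inflation and equity returns for
# one country (the thesis uses actual Latin American and G4 series).
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(200, 3)),
                    columns=["output", "inflation", "equity"])

model = VAR(data)
res = model.fit(maxlags=4, ic="aic")   # lag order chosen by AIC

# Impulse responses trace how a one-standard-deviation shock (e.g. a foreign
# money-growth shock in the GVAR setting) propagates over 12 periods.
irf = res.irf(12)
print(irf.irfs.shape)   # indexed by horizon, responding variable, shock
```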
373

Treatment of persistent organic pollutants in wastewater with combined advanced oxidation

Badmus, Kassim Olasunkanmi January 2019
Philosophiae Doctor - PhD / Persistent organic pollutants (POPs) are very tenacious wastewater contaminants with a negative impact on the ecosystem. The two major sources of POPs are wastewater from textile industries and pharmaceutical industries. They are known for their recalcitrance and circumvention of nearly all known wastewater treatment procedures. However, wastewater treatment methods that apply advanced oxidation processes (AOPs) are documented for their successful remediation of POPs. AOPs are a group of water treatment technologies centered on the generation of OH radicals for the purpose of oxidizing the recalcitrant organic contaminants in wastewater to their inert end products. The reported demerits of AOPs, such as low degradation efficiency, generation of toxic intermediates, massive sludge production, high energy expenditure and operational cost, can be circumvented through the application of combined AOPs in the wastewater treatment procedure. The resultant mineralisation of the POPs content of wastewater is due to the synergistic effect of the OH radicals produced in the combined AOPs. Hydrodynamic cavitation is the application of the pressure variation in a liquid flowing through venturi or orifice plates. This results in the generation, growth and implosion of cavities and the subsequent production of OH radicals in the liquid matrix. The OH radicals generated in a jet loop hydrodynamic cavitation system were applied as a form of advanced oxidation process in combination with hydrogen peroxide, iron (II) oxides or the synthesized green nano zero valent iron (gnZVI) for the treatment of simulated textile and pharmaceutical wastewater.
374

Development of a novel rate-modulated fixed dose analgesic combination for the treatment of mild to moderate pain

Hobbs, Kim Melissa 17 September 2010
MSc (Med), Dept of Pharmacy and Pharmacology, Faculty of Health Sciences, University of the Witwatersrand / Pain is the net effect of multidimensional mechanisms that engage most parts of the central nervous system (CNS) and the treatment of pain is one of the key challenges in clinical medicine (Le Bars et al., 2001; Miranda et al., 2008). Polypharmacy is seen as a barrier to analgesic treatment compliance, signifying the necessity for the development of fixed dose combinations (FDCs), which allow the number of tablets administered to be reduced, with no associated loss in efficacy or increase in the prevalence of side effects (Torres Morera, 2004). FDCs of analgesic drugs with differing mechanisms of nociceptive modulation offer benefits including synergistic analgesic effects, where the individual agents act in a greater than additive manner, and a reduced occurrence of side effects (Raffa, 2001; Camu, 2002). This study aimed to produce a novel, rate-modulated, fixed-dose analgesic formulation for the treatment of mild to moderate pain. The fixed-dose combination (FDC) rationale of paracetamol (PC), tramadol hydrochloride (TM) and diclofenac potassium (DC) takes advantage of the previously reported analgesic synergy of PC and TM as well as extending the analgesic paradigm with the addition of the anti-inflammatory component, DC. The study involved the development of a triple-layered tablet delivery system with the desired release characteristics of approximately 60% of the PC and TM being made available within 2 hours to provide an initial pain relief effect, and then sustained zero-order release of DC over a period of 24 hours to combat the ongoing effects of any underlying inflammatory conditions. The triple-layered tablet delivery system would thus provide both rapid onset of pain relief as well as potentially address an underlying inflammatory cause. The design of a novel triple-layered tablet allowed the desired release characteristics to be attained. During initial development work on the polymeric matrix it was discovered that the 24 hour zero-order release of DC could be attained only with the optimized ratio of the release-retarding polymer polyethylene oxide (PEO) in combination with electrolytic crosslinking activity, provided by the biopolymer sodium alginate and zinc gluconate. It was also necessary for this polymeric matrix to be bordered on both sides by the cellulosic polymers containing PC and TM. Thus the application of multi-layered tableting technology in the form of a triple-layered tablet was capable of attaining the rate-modulated release objectives set out in the study. The induced barriers provided by the three layers also served to physically separate TM and DC, reducing the likelihood of the bioavailability-diminishing interaction noted in United States Patent 6,558,701 and detected in the DSC analysis performed as part of this study. The designed system provided significant flexibility in modulation of release kinetics for drugs of varying solubility. The suitability of the designed triple-layered tablet delivery system was confirmed by a Design of Experiments (DoE) statistical evaluation, which revealed that Formulation F4 related closest to the desired more immediate release for PC and TM and the zero-order kinetics for DC. The results were confirmed by comparing Formulation F4 to typical release kinetic mechanisms described by Noyes-Whitney, Higuchi, Power Law, Peppas-Sahlin and Hopfenberg.
Using f1 and f2 fit factors, Formulation F4 compared favourably to each of the criteria defined for these kinetic models. The Ultra Performance Liquid Chromatographic (UPLC) assay method developed displayed superior resolution of the active pharmaceutical ingredient (API) combinations, and the linearity plots produced indicated that the method was sufficiently sensitive to detect the concentrations of each API over the concentration ranges studied. The method was successfully validated and hence appropriate to simultaneously detect the three APIs as well as 4-aminophenol, the degradation product related to PC. Textural profile analysis in the form of swelling as well as matrix hardness analysis revealed that an increase in the penetration distance was associated with an increase in hydration time of the tablet and also an increase in gel layer thickness. The swelling complexities observed in the delivery system, in terms of the PEO, the crosslinking sodium alginate and both cellulose polymers, together with the fact that the three layers of the tablet swell simultaneously, suggest further intricacies in the release kinetics of the three drugs from this tablet configuration. Modified release dosage forms, such as the one developed in this study, have gained widespread importance in recent years and offer many advantages including flexible release kinetics and improved therapy and patient compliance.
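For reference, the f1 (difference) and f2 (similarity) fit factors mentioned above are the standard dissolution-profile comparison metrics; a minimal sketch with hypothetical dissolution data follows (the percentages are illustrative, not the study's results).

```python
import numpy as np

def f1_f2(reference, test):
    """Difference (f1) and similarity (f2) factors for two dissolution profiles
    sampled at the same time points (standard regulatory definitions)."""
    R, T = np.asarray(reference, float), np.asarray(test, float)
    n = len(R)
    f1 = 100 * np.sum(np.abs(R - T)) / np.sum(R)
    f2 = 50 * np.log10(100 / np.sqrt(1 + np.sum((R - T) ** 2) / n))
    return f1, f2

# Hypothetical cumulative %-released at matched time points
ref  = [18, 35, 52, 68, 80, 90]
test = [20, 33, 50, 66, 79, 91]
print(f1_f2(ref, test))   # f2 > 50 is conventionally read as "similar"
```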
375

Cost control-zero base budgeting and cost drivers

Borthwick, John Alistair Stewart 17 August 2016
A research report submitted to the Faculty of Commerce of the University of the Witwatersrand in partial completion of the degree of Master of Commerce, by coursework / No abstract
376

Diamagnétisme des gaz quantiques quasi-parfaits / Diamagnetism of quasi-perfect quantum gases

Savoie, Baptiste 24 November 2010
The main part of this thesis deals with the zero-field diamagnetic susceptibility of a Bloch electron gas at fixed temperature and fixed density in the limit of low temperatures. For a free electron gas (that is, when the periodic potential is zero), the steady diamagnetic susceptibility was computed by L. Landau in 1930; the result is known as the Landau formula. As for Bloch electrons, E. R. Peierls showed in 1933 that under the tight-binding approximation the formula for the diamagnetic susceptibility remains the same but with the mass of the electron replaced by its "effective mass"; this result is known as the Landau-Peierls formula. Since then, there have been many attempts to clarify the assumptions of validity of the Landau-Peierls formula. The main result of this thesis establishes rigorously that at zero temperature, as the density of electrons tends to zero, the leading contribution to the diamagnetic susceptibility is given by the Landau-Peierls formula with the effective mass of the lowest Bloch energy band.
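For reference, a standard statement of the two formulas discussed in the abstract (free-electron gas, Gaussian units, Fermi wave vector k_F), taking the abstract's description of the Landau-Peierls result at face value:

```latex
% Landau (1930), free electrons:
\chi_{\mathrm{Landau}} \;=\; -\,\frac{e^{2}k_{F}}{12\pi^{2}mc^{2}}
\;=\; -\tfrac{1}{3}\,\chi_{\mathrm{Pauli}},
\qquad
% Landau--Peierls (1933), tight-binding limit: the same expression with the
% bare mass m replaced by the effective mass m^{*} of the lowest Bloch band:
\chi_{\mathrm{LP}} \;=\; -\,\frac{e^{2}k_{F}}{12\pi^{2}m^{*}c^{2}}.
```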
377

Modelos assimétricos inflacionados de zeros / Zero-inflated asymmetric models

Dias, Mariana Ferreira 28 November 2014
The main motivation of this study is the analysis of the amount of blood received in transfusion (standardized by weight) by children with liver problems. This amount has an asymmetric distribution and includes values equal to zero for the children who did not receive a transfusion. The usual generalized linear models for positive variables do not allow the inclusion of zeros. For the positive data, such models with gamma and inverse Gaussian distributions were fitted; the log-normal distribution was also considered. Analysis of the standardized residuals indicated heteroscedasticity, and therefore the extra variability was modelled using the GAMLSS class of models. The third approach consists of models based on a mixture of zeros and distributions for positive values, recently included in the family of GAMLSS models. These models combine an asymmetric distribution for the positive data with the probability of occurrence of zeros. In the analysis of the transfusion data, the inverse Gaussian distribution showed the best fit, as it accommodates data with stronger asymmetry than the other distributions considered. The effects of the explanatory variables Kasai (occurrence of a previous operation) and PELD (a four-level measure of patient severity), as well as their interaction effects, on the mean and on the variability of the amount of blood received were significant. The possibility of including explanatory variables to model the dispersion parameter allows the extra variability, beyond its dependence on the mean, to be better explained and improves the fit of the model. The probability of not receiving a transfusion depends significantly only on PELD. The proposal of a single model that combines the presence of zeros and several asymmetric distributions facilitates model fitting and residual analysis. Its results are equivalent to the approach in which the occurrence or not of a transfusion is analyzed with a logistic model, independently of the modelling of the positive data with asymmetric distributions.
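A compact way to write the mixture underlying this model class (zero-adjusted distributions in the GAMLSS framework), with pi the probability of no transfusion and g a positive continuous density such as the gamma, log-normal or inverse Gaussian used above; covariates such as Kasai and PELD can enter through link functions for pi, mu and sigma:

```latex
f(y \mid \pi, \mu, \sigma) \;=\;
\pi\,\mathbb{1}\{y = 0\}
\;+\;
(1-\pi)\, g(y;\mu,\sigma)\,\mathbb{1}\{y > 0\},
\qquad 0 < \pi < 1 .
```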
378

Caracterização da transformação martensítica em temperaturas criogênicas. / Characterization of the martensitic transformation at cryogenic temperatures.

Apaza Huallpa, Edgar 29 March 2011
Martensitic transformations are of special interest both as an academic topic and as a technological issue, due to the importance of steels and cast irons with martensitic structures. Studies of martensitic transformation phenomena involve researchers all over the world and specific conferences and meetings, such as ICOMAT and ESOMAT. The present work followed the martensitic transformation using different experimental techniques, during cooling to cryogenic temperatures, in samples of an AISI D2 cold work tool steel and of an Fe-Ni-C alloy, both previously austenitized. There are plenty of references in the literature suggesting that sub-zero cooling treatments can improve the properties of quenched and tempered steels. The Magnetic Barkhausen Noise (MBN) method was applied during cooling to sub-zero temperatures of austenitic samples of the AISI D2 cold work tool steel (previously quenched from 1200 °C) and of an Invar-type Fe-Ni-C alloy. MBN is a non-destructive technique based on the detection of the signal generated when ferromagnetic materials are subjected to an oscillating external magnetic field. In order to study the austenite-to-martensite transformation, three different configurations were tested: conventional Barkhausen noise using an oscillating magnetic field; a method proposed by Okamura, which uses a fixed (DC) magnetic field; and a new method that detects spontaneous magnetic emission (SME) in the absence of any applied magnetic field. Other phenomena associated with the transformation were followed using electrical resistivity measurements, optical microscopy and X-ray diffraction. MBN measurements on the AISI D2 cold work tool steel, austenitized at 1473 K (1200 °C) and quenched to room temperature, made during further cooling to liquid nitrogen temperature, presented a clear change of signal intensity near 225 K (-48 °C), corresponding to the Ms temperature, as confirmed by resistivity measurements. The SME measurements made in situ during cooling of samples in liquid nitrogen were able to detect single burst (landslide nucleation and growth) phenomena, in a manner similar to Acoustic Emission (AE) measurements; these results were also confirmed with measurements on an Fe-Ni-C alloy. The new Spontaneous Magnetic Emission (SME) characterization method can be considered a new experimental tool for the study of martensitic transformations in ferrous alloys. The transformation start temperatures detected using SME, electrical resistivity and MBN were compared with estimates of the Ms temperature from the Andrews empirical equation (linear, 1965). The effect of the austenite grain size on the start of the martensitic transformation was studied using SME, since it is known that the Ms temperature depends on the austenite grain size.
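The Andrews (1965) linear relation referred to above is commonly quoted in the following form (alloy content in wt.%); it is reproduced here for reference only, with the caveat that several variants circulate in the literature:

```latex
M_s(^{\circ}\mathrm{C}) \;=\; 539 \;-\; 423\,(\%\mathrm{C}) \;-\; 30.4\,(\%\mathrm{Mn})
\;-\; 17.7\,(\%\mathrm{Ni}) \;-\; 12.1\,(\%\mathrm{Cr}) \;-\; 7.5\,(\%\mathrm{Mo}) .
```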
379

Estudo do ponto invariante com a temperatura (ZTC) em SOI-FinFETs tensionados e radiados. / Study of the zero temperature coefficient (ZTC) in strained and irradiated SOI-FinFETs.

Nascimento, Vinicius Mesquita do 17 February 2017
This work studies the zero temperature coefficient (ZTC) bias point of SOI FinFET transistors with respect to mechanical stress and irradiation effects, using experimental data and an analytical model. The basic parameters of threshold voltage and transconductance, on which the whole model is based, were analyzed first, and the influence of stress and irradiation on them was verified, in order to analyze the behavior of the gate voltage at the ZTC point in n-type devices. Devices with three different fin widths (20 nm, 120 nm and 370 nm) and a channel length of 150 nm were used, compared with 900 nm long devices, on four different wafers: with/without mechanical stress and/or with/without irradiation. The threshold voltage is strongly influenced by the stress, while irradiation has a smaller effect on the threshold voltage in the range studied, becoming more significant in the strained devices with larger fin widths. The transconductance is also more strongly influenced by the stress effect, the change caused by irradiation being much smaller for this parameter. These two parameters, however, give rise to two other parameters that are essential for the ZTC analysis and are obtained from their variation with temperature. The variation of the threshold voltage with temperature and the degradation of the transconductance with temperature (the c factor: mobility degradation with temperature) directly influence any variation of the ZTC point with temperature. When these influences are small, or act so as to compensate each other, the result is a ZTC value that is more constant with temperature. The threshold voltage influences the amplitude of the ZTC voltage directly and proportionally, while the mobility (transconductance) degradation acts mainly on the constancy of the ZTC with temperature. Based on these same parameters, and with the necessary adjustments to the model, devices with the same physical characteristics but of p type were studied; the results reflected the operating characteristics of this device type, making clear the inversion of the significance of the effects with respect to temperature variation. The simple analytical model used for the ZTC study was validated for this technology, since the error between experimental and calculated values was at most 13% over the whole temperature range, including the stress and irradiation effects; discrepant values appeared only for some of the larger fin widths, which showed a small conduction at the channel/buried-oxide interface before conduction at the first interface, not predicted by the model.
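The compensation described above (threshold-voltage drift versus mobility degradation, the c factor) can be made explicit with a first-order sketch in the linear region, assuming mu(T) = mu_0 (T/T_0)^(-c); this is a generic textbook-style derivation, not the thesis's full analytical model:

```latex
I_D \;\propto\; \mu(T)\,\bigl(V_{GS}-V_T(T)\bigr)\,V_{DS},
\qquad
\left.\frac{\partial I_D}{\partial T}\right|_{V_{GS}=V_{ZTC}} = 0
\;\;\Longrightarrow\;\;
V_{ZTC} \;=\; V_T(T) \;-\; \frac{T}{c}\,\frac{dV_T}{dT}.
```

Since dV_T/dT is negative, the two temperature dependences offset each other at a gate voltage somewhat above V_T, which is why the ZTC bias stays nearly constant when the two effects compensate.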
380

Monetary Policy and the Great Recession

Bundick, Brent January 2014
Thesis advisor: Susanto Basu / The Great Recession is arguably the most important macroeconomic event of the last three decades. Prior to the collapse of national output during 2008 and 2009, the United States experienced a sustained period of good economic outcomes with only two mild and short recessions. In addition to the severity of the recession, several characteristics of this recession signify it as a unique event in the recent economic history of the United States. Some of these unique features include the following: Large Increase in Uncertainty About the Future: The Great Recession and its subsequent slow recovery have been marked by a large increase in uncertainty about the future. Uncertainty, as measured by the VIX index of implied stock market volatility, peaked at the end of 2008 and has remained volatile over the past few years. Many economists and the financial press believe the large increase in uncertainty may have played a role in the Great Recession and subsequent slow recovery. For example, Kocherlakota (2010) states, "I've been emphasizing uncertainties in the labor market. More generally, I believe that overall uncertainty is a large drag on the economic recovery." In addition, Nobel laureate economist Peter Diamond argues, "What's critical right now is not the functioning of the labor market, but the limits on the demand for labor coming from the great caution on the side of both consumers and firms because of the great uncertainty of what's going to happen next." Zero Bound on Nominal Interest Rates: The Federal Reserve plays a key role in offsetting the negative impact of fluctuations in the economy. During normal times, the central bank typically lowers nominal short-term interest rates in response to declines in inflation and output. Since the end of 2008, however, the Federal Reserve has been unable to lower its nominal policy rate due to the zero lower bound on nominal interest rates. Prior to the Great Recession, the Federal Reserve had not encountered the zero lower bound in the modern post-war period. The zero lower bound represents a significant constraint on monetary policy's ability to fully stabilize the economy. Unprecedented Use of Forward Guidance: Even though the Federal Reserve remains constrained by the zero lower bound, the monetary authority can still affect the economy through expectations about future nominal policy rates. By providing agents in the economy with forward guidance on the future path of policy rates, monetary policy can stimulate the economy even when current policy rates remain constrained. Throughout the Great Recession and the subsequent recovery, the Federal Reserve provided the economy with explicit statements about the future path of monetary policy. In particular, the central bank has discussed the timing and macroeconomic conditions necessary to begin raising its nominal policy rate. Using this policy tool, the Federal Reserve continues to respond to the state of the economy at the zero lower bound. Large Fiscal Expansion: During the Great Recession, the United States engaged in a very large program of government spending and tax reductions. The massive fiscal expansion was designed to raise national income and help mitigate the severe economic contraction. A common justification for the fiscal expansion is the reduced capacity of the monetary authority to stimulate the economy at the zero lower bound.
Many economists argue that the benefits of increasing government spending are significantly higher when the monetary authority is constrained by the zero lower bound. The goal of this dissertation is to better understand how these various elements contributed to the macroeconomic outcomes during and after the Great Recession. In addition to understanding each of the elements above in isolation, a key component of this analysis focuses on the interaction between the above elements. A key unifying theme between all of the elements is the role of monetary policy. In modern models of the macroeconomy, the monetary authority is crucial in determining how a particular economic mechanism affects the macroeconomy. In the first and second chapters, I show that monetary policy plays a key role in offsetting the negative effects of increased uncertainty about the future. My third chapter highlights how assumptions about monetary policy can change the impact of various shocks and policy interventions. For example, suppose the fiscal authority wants to increase national output by increasing government spending. A key calculation in this situation is the fiscal multiplier, which is the dollar increase in national income for each dollar of government spending. I show that fiscal multipliers are dramatically affected by the assumptions about monetary policy even if the monetary authority is constrained by the zero lower bound. The unique nature of the elements discussed above makes analyzing their contribution difficult using standard macroeconomic tools. The most popular method for analyzing dynamic, stochastic general equilibrium models of the macroeconomy relies on linearizing the model around its deterministic steady state and examining the local dynamics around that approximation. However, the nature of the unique elements above makes it impossible to fully capture dynamics using local linearization methods. For example, the zero lower bound on nominal interest rates often occurs far from the deterministic steady state of the model. Therefore, linearization around the steady state cannot capture the dynamics associated with the zero lower bound. The overall goal of this dissertation is to use and develop tools in computational macroeconomics to help better understand the Great Recession. Each of the chapters outlined below examines at least one of the topics listed above and its impact in explaining the macroeconomics of the Great Recession. In particular, the essays highlight the role of the monetary authority in generating the observed macroeconomic outcomes over the past several years. Can increased uncertainty about the future cause a contraction in output and its components? In joint work with Susanto Basu, my first chapter examines the role of uncertainty shocks in a one-sector, representative-agent, dynamic, stochastic general-equilibrium model. When prices are flexible, uncertainty shocks are not capable of producing business-cycle comovements among key macroeconomic variables. With countercyclical markups through sticky prices, however, uncertainty shocks can generate fluctuations that are consistent with business cycles. Monetary policy usually plays a key role in offsetting the negative impact of uncertainty shocks. If the central bank is constrained by the zero lower bound, then monetary policy can no longer perform its usual stabilizing function and higher uncertainty has even more negative effects on the economy.
We calibrate the size of uncertainty shocks using fluctuations in the VIX and find that increased uncertainty about the future may indeed have played a significant role in worsening the Great Recession, which is consistent with statements by policymakers, economists, and the financial press. In sole-authored work, the second chapter continues to explore the interactions between the zero lower bound and increased uncertainty about the future. From a positive perspective, the essay further shows why increased uncertainty about the future can reduce a central bank's ability to stabilize the economy. The inability to offset contractionary shocks at the zero lower bound endogenously generates downside risk for the economy. This increase in risk induces precautionary saving by households, which causes larger contractions in output and inflation and prolongs the zero lower bound episode. The essay also examines the normative implications of uncertainty and shows how monetary policy can attenuate the negative effects of higher uncertainty. When the economy faces significant uncertainty, optimal monetary policy implies further lowering real rates by committing to a higher price-level target. Under optimal policy, the monetary authority accepts higher inflation risk in the future to minimize downside risk when the economy hits the zero lower bound. In the face of large shocks, raising the central bank's inflation target can attenuate much of the downside risk posed by the zero lower bound. In my third chapter, I examine how assumptions about monetary policy affect the economy at the zero lower bound. Even when current policy rates are zero, I argue that assumptions regarding the future conduct of monetary policy are crucial in determining the effects of real fluctuations at the zero lower bound. Under standard Taylor (1993)-type policy rules, government spending multipliers are large, improvements in technology cause large contractions in output, and structural reforms that decrease firm market power are bad for the economy. However, these policy rules imply that the central bank stops responding to the economy at the zero lower bound. This assumption is inconsistent with recent statements and actions by monetary policymakers. If monetary policy endogenously responds to current economic conditions using expectations about future policy, then spending multipliers are much smaller and increases in technology and firm competitiveness remain expansionary. Thus, the model-implied benefits of higher government spending are highly sensitive to the specification of monetary policy. / Thesis (PhD) — Boston College, 2014. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
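The "standard Taylor (1993)-type policy rule" and the zero-lower-bound constraint discussed in the abstract above can be written compactly as follows (r* is the equilibrium real rate, pi* the inflation target, y_t the output gap; Taylor's original calibration sets the response coefficients to 0.5 with r* and pi* at 2 percent):

```latex
i_t \;=\; \max\Bigl\{\,0,\;\; r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\, y_t \Bigr\},
\qquad \phi_{\pi} = \phi_{y} = 0.5 .
```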
