31 |
Essays in Macroeconomics. Bouscasse, Paul. January 2022.
In chapter 1, I ask whether an exchange rate depreciation depresses trading partners' output. I address this question through the lens of a classic episode: the currency devaluations of the 1930s. From 1931 to 1936, many of the biggest economies in the world successively left the gold standard or devalued, leading to a depreciation of their currency by more than 30% against gold. In theory, the effect is ambiguous for countries that did not devalue: expenditure switching can lower their output, but the monetary stimulus to demand might raise it. I use cross-sectional evidence to discipline the strength of these two mechanisms in a multi-country model. This evidence comes in two forms: (i) causal inference of the effect of devaluation on country-level variables, (ii) new product-level data to estimate parameters that are essential to discipline the response of trade --- the international elasticity of substitution among foreign varieties, and the pass-through of the exchange rate to international prices. Contrary to the popular narrative in modern policy debates, devaluation did not dramatically lower the output of trading partners in this context. The expenditure switching effect was mostly offset by the monetary stimulus to foreign demand.
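To make the second ingredient concrete, here is a minimal sketch of a pass-through regression of the kind the product-level data could support: product-level import price changes regressed on exchange-rate changes. The simulated data, column names, and specification are illustrative assumptions, not the thesis's actual dataset or estimator.

```python
# Minimal sketch of an exchange-rate pass-through regression on
# product-level trade prices. Data and names are illustrative
# assumptions, not the thesis's dataset or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
d_log_e = rng.normal(0, 0.05, n)                   # log change in exchange rate
d_log_p = 0.4 * d_log_e + rng.normal(0, 0.02, n)   # log change in import price;
                                                   # true pass-through here is 0.4
df = pd.DataFrame({"d_log_p": d_log_p, "d_log_e": d_log_e})

# OLS of price changes on exchange-rate changes; the slope is the
# pass-through elasticity.
fit = smf.ols("d_log_p ~ d_log_e", data=df).fit()
print(fit.params["d_log_e"])  # estimate close to 0.4
```

In the chapter's setting, a coefficient like this slope, estimated on 1930s product-level trade data, is what disciplines the strength of the expenditure switching channel in the multi-country model.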
In chapter 2, Emi Nakamura, Jon Steinsson, and I provide new estimates of the evolution of productivity in England from 1250 to 1870. Real wages over this period were heavily influenced by plague-induced swings in the population. We develop and implement a new methodology for estimating productivity that accounts for these Malthusian dynamics. In the early part of our sample, we find that productivity growth was zero. Productivity growth began in 1600---almost a century before the Glorious Revolution. Post-1600 productivity growth had two phases: an initial phase of modest growth of 4% per decade between 1600 and 1810, followed by a rapid acceleration at the time of the Industrial Revolution to 18% per decade. Our evidence helps distinguish between theories of why growth began. In particular, our findings support the idea that broad-based economic change preceded the bourgeois institutional reforms of 17th century England and may have contributed to causing them. We also estimate the strength of Malthusian population forces on real wages. We find that these forces were sufficiently weak to be easily overwhelmed by post-1800 productivity growth.
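To see why the Malthusian correction matters for measurement, consider a minimal sketch in which the real wage satisfies w_t = A_t N_t^(-beta), so productivity can be backed out of wage and population series. The functional form, the value beta = 0.6, and all numbers below are illustrative assumptions, not the paper's estimates.

```python
# Minimal sketch of backing productivity out of real wages under an
# assumed Malthusian wage equation w_t = A_t * N_t^(-beta).
# Functional form, beta, and the series are illustrative assumptions.
import numpy as np

beta = 0.6
years = np.array([1250, 1350, 1450, 1600, 1700, 1810])
wages = np.array([1.0, 1.8, 1.6, 1.2, 1.3, 1.5])   # real wage index
pop   = np.array([4.8, 2.6, 2.3, 4.1, 5.2, 9.7])   # population, millions

# log A_t = log w_t + beta * log N_t : a wage spike after a plague
# need not signal higher productivity once population is accounted for.
log_A = np.log(wages) + beta * np.log(pop)
growth_per_decade = 100 * np.diff(log_A) / (np.diff(years) / 10)
for y0, y1, g in zip(years[:-1], years[1:], growth_per_decade):
    print(f"{y0}-{y1}: {g:+.1f}% per decade")
```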
In chapter 3, Carlo Altavilla, Miguel Boucinha, and I propose a new methodology to identify aggregate demand and supply shocks in the bank loan market. We present a model of sticky bank-firm relationships, estimate its structural parameters in euro area credit register data, and infer aggregate shocks based on those estimates. To achieve credible identification, we leverage banks' exposure to various sectors' heterogeneous liquidity needs during the COVID-19 pandemic. We find that developments in lending volumes following the pandemic were largely explained by demand shocks. Fluctuations in lending rates were instead mostly determined by bank-driven supply shocks and borrower risk. A by-product of our analysis is a structural interpretation of two-way fixed effects regressions in loan-level data: according to our framework, firm- and bank-time fixed effects only separate demand from supply under certain parametric assumptions. In the data, the conditions are satisfied for supply but not for demand: bank-time fixed effects identify true supply shocks up to a time constant, while firm-time fixed effects are contaminated by supply forces. Our methodology overcomes this limitation: we identify supply and demand shocks at the aggregate and individual levels.
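For readers unfamiliar with the regression being reinterpreted, here is a minimal sketch of a two-way fixed effects regression on simulated loan-level data; all names and magnitudes are illustrative assumptions. Note that with both sets of interactions included, the individual effects are only identified up to normalizations, which is the "up to a time constant" caveat in the abstract.

```python
# Minimal sketch of the bank-time / firm-time fixed effects regression
# on simulated loan-level data. Names and magnitudes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
banks, firms, periods = 5, 40, 4
bank_fe = rng.normal(0, 0.5, (banks, periods))   # supply shifters
firm_fe = rng.normal(0, 0.5, (firms, periods))   # demand shifters
rows = []
for t in range(periods):
    for f in range(firms):
        # identification needs firms borrowing from several banks per period
        for b in rng.choice(banks, size=2, replace=False):
            rate = 3.0 + bank_fe[b, t] + firm_fe[f, t] + rng.normal(0, 0.1)
            rows.append({"bank": b, "firm": f, "t": t, "rate": rate})
df = pd.DataFrame(rows)

# Bank-time and firm-time dummies absorb supply and demand components;
# the two dummy sets are collinear in pure time effects, so each set is
# pinned down only up to a per-period constant.
fit = smf.ols("rate ~ C(bank):C(t) + C(firm):C(t)", data=df).fit()
print(fit.rsquared)
```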
In chapter 4, I study how the fiscal side of the US government reacts to monetary policy. I estimate the response of several fiscal variables to monetary shocks. Following an interest rate hike, tax receipts fall, outlays excluding interest payments are constant, and interest payments and debt increase. The fall in output that follows a monetary tightening --- not legislated changes in marginal tax rates --- drives the response of receipts. The fiscal authority therefore responds passively to monetary shocks, keeping expenditures constant and letting debt adjust to satisfy its budget constraint. In heterogeneous agent models, this scenario dampens output's response to monetary policy.
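The abstract does not name the estimator; a Jordà-style local projection is one standard way to trace out such responses, sketched here on simulated data under assumed dynamics.

```python
# Hedged sketch: tracing the response of tax receipts to an identified
# monetary shock via Jorda-style local projections. The data-generating
# process and horizon are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 240
shock = rng.normal(0, 1, T)          # identified monetary shock series
receipts = np.zeros(T)
for t in range(1, T):
    # receipts fall persistently after a contractionary shock
    receipts[t] = 0.7 * receipts[t - 1] - 0.5 * shock[t] + rng.normal(0, 0.3)

# One regression per horizon h: receipts_{t+h} on shock_t.
irf = []
for h in range(13):
    y = receipts[h:]
    x = sm.add_constant(shock[: T - h])
    irf.append(sm.OLS(y, x).fit().params[1])
print(np.round(irf, 2))  # negative, gradually decaying response
```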
|
32 |
The expertise of James Laurence Laughlin at the service of U.S. monetary and banking unification, 1870-1913: from the defense of the gold standard to the design of the Federal Reserve Act (1913). Andre-Aigret, Constance. 13 May 2019.
This Ph.D. dissertation studies James Laurence Laughlin's (1850-1913) participation in American monetary and banking unification from 1870 to 1913. Histories of the American monetary and banking debates of the late nineteenth and early twentieth centuries give this author little space, even though he is unavoidable. Laughlin became a renowned academic economist as the first Head Professor at the University of Chicago and as founder of the Journal of Political Economy in 1892. He established himself as an economic expert through a money-doctoring mission in Santo Domingo in 1894 and through his participation in the Indianapolis Monetary Commission of 1897-98. The commission's final report, written by Laughlin, was used in drafting the Gold Standard Act, passed in 1900, which legally established a gold standard system in the United States. He subsequently took part in designing the Federal Reserve Act of 1913 alongside his former student Henry Parker Willis. Laughlin's monetary theory was intended as a critique of the quantity theory of money and a defense of the adoption of a gold standard system, drawing on elements from the theory of the English Banking School authors. He explained the formation of prices by non-monetary determinants and incorporated credit and speculation into his theory by distinguishing "normal" from "abnormal" credit.
|
33 |
Evaluation of two word alignment systems. Wang, Xiaoyang. January 2004.
This project evaluates two different systems that generate word alignments on English-Swedish data: the Giza++ system, which can generate a variety of statistical translation models, and the I*Trix system developed at IDA/NLPLab, which generates word pairs with frequencies. The file formats of the two systems, the way of running them, and the differences between them are addressed in this paper. The evaluation considers a variety of parameters, such as corpus size, characteristics of the corpus, and the effect of linguistic knowledge. The paper ends with conclusions from the evaluation of the two systems: in general, Giza++ performs better on large corpora, while I*Trix performs better on small corpora, especially corpora with a high statistical ratio or special resources.
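The abstract does not list the evaluation metrics; precision, recall, and alignment error rate (AER) are the standard yardsticks for scoring system alignments against a hand-aligned gold standard, as in this hedged sketch with toy alignments.

```python
# Hedged sketch of standard word-alignment scoring. The alignments
# below are toy examples, not the thesis's actual evaluation data.
def aer(sure, possible, system):
    """Alignment Error Rate: sure links S, possible links P (S is a
    subset of P), system links A. AER = 1 - (|A&S| + |A&P|) / (|A| + |S|)."""
    a_s = len(system & sure)
    a_p = len(system & possible)
    return 1.0 - (a_s + a_p) / (len(system) + len(sure))

sure = {(0, 0), (1, 2), (2, 1)}            # (english_idx, swedish_idx)
possible = sure | {(3, 3)}
system = {(0, 0), (1, 2), (3, 3), (4, 4)}  # links proposed by a system

precision = len(system & possible) / len(system)   # 0.75
recall = len(system & sure) / len(sure)            # ~0.67
print(precision, recall, aer(sure, possible, system))
```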
|
34 |
Assessing Binary Measurement Systems. Danila, Oana Mihaela. January 2012.
Binary measurement systems (BMS) are widely used in both manufacturing industry and medicine. In industry, a BMS is often used to measure various characteristics of parts and then classify them as pass or fail, according to some quality standards. Good measurement systems are essential both for problem solving (i.e., reducing the rate of defectives) and to protect customers from receiving defective products. As a result, it is desirable to assess the performance of the BMS as well as to separate the effects of the measurement system and the production process on the observed classifications. In medicine, BMSs are known as diagnostic or screening tests, and are used to detect a target condition in subjects, thus classifying them as positive or negative. Assessing the performance of a medical test is essential in quantifying the costs due to misclassification of patients, and in the future prevention of these errors.
In both industry and medicine, the most commonly used characteristics to quantify the performance of a BMS are the two misclassification rates, defined as the chance of passing a nonconforming (non-diseased) unit, called the consumer's risk (false positive), and the chance of failing a conforming (diseased) unit, called the producer's risk (false negative). In most assessment studies, it is also of interest to estimate the conforming (prevalence) rate, i.e., the probability that a randomly selected unit is conforming (diseased).
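A minimal numeric illustration of these two rates, with invented counts:

```python
# Illustrative counts only: estimating the two misclassification rates
# from a study where the true state of each unit is known.
passed_given_nonconforming = 12   # nonconforming units the BMS passed
nonconforming_total = 200
failed_given_conforming = 35      # conforming units the BMS failed
conforming_total = 1800

consumers_risk = passed_given_nonconforming / nonconforming_total  # 0.060
producers_risk = failed_given_conforming / conforming_total        # ~0.019
print(consumers_risk, producers_risk)
```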
There are two main approaches for assessing the performance of a BMS. Both approaches involve measuring a number of units one or more times with the BMS. The first one, called the "gold standard" approach, requires the use of a gold-standard measurement system that can determine the state of units with no classification errors. When a gold standard does not exist, or is too expensive or time-consuming to use, another option is to repeatedly measure units with the BMS, and then use a latent class approach to estimate the parameters of interest. In industry, for both approaches, the standard sampling plan involves randomly selecting parts from the population of manufactured parts.
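A hedged sketch of the latent class idea for the "no gold standard" case: with each part measured several times, a short EM loop can recover the conforming rate and both misclassification rates from pass counts alone. The model below (constant rates, conditional independence) matches the fixed-effects assumptions introduced later; the data are simulated, and this illustrates the approach rather than the thesis's estimator.

```python
# Latent class sketch: part is conforming with prob pi; conforming
# parts pass each measurement with rate 1 - beta, nonconforming parts
# pass with rate alpha. EM recovers (pi, alpha, beta) from pass counts.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
n_meas, n_parts = 5, 2000
pi_true, alpha_true, beta_true = 0.9, 0.10, 0.05
conforming = rng.random(n_parts) < pi_true
p_pass = np.where(conforming, 1 - beta_true, alpha_true)
y = rng.binomial(n_meas, p_pass)          # passes per part

pi, alpha, beta = 0.8, 0.2, 0.2           # crude starting values
for _ in range(200):
    like_c = binom.pmf(y, n_meas, 1 - beta)   # conforming likelihood
    like_n = binom.pmf(y, n_meas, alpha)      # nonconforming likelihood
    w = pi * like_c / (pi * like_c + (1 - pi) * like_n)   # E-step
    pi = w.mean()                                          # M-step
    beta = 1 - (w @ y) / (n_meas * w.sum())
    alpha = ((1 - w) @ y) / (n_meas * (1 - w).sum())
print(round(pi, 3), round(alpha, 3), round(beta, 3))
```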
In this thesis, we focus on a specific context commonly found in the manufacturing industry. First, the BMS under study is nondestructive. Second, the BMS is used for 100% inspection or any kind of systematic inspection of the production yield. In this context, we are likely to have available a large number of previously passed and failed parts. Furthermore, the inspection system typically tracks the number of parts passed and failed; that is, we often have baseline data about the current pass rate, separate from the assessment study. Finally, we assume that during the time of the evaluation, the process is under statistical control and the BMS is stable.
Our main goal is to investigate the effect of using sampling plans that involve random selection of parts from the available populations of previously passed and failed parts, i.e. conditional selection, on the estimation procedure and the main characteristics of the estimators. Also, we demonstrate the value of combining the additional information provided by the baseline data with the data collected in the assessment study, in improving the overall estimation procedure. We also examine how the availability of baseline data and using a conditional selection sampling plan affect recommendations on the design of the assessment study.
In Chapter 2, we give a summary of the existing estimation methods and sampling plans for a BMS assessment study in both industrial and medical settings, that are relevant in our context. In Chapters 3 and 4, we investigate the assessment of a BMS in the case where we assume that the misclassification rates are common for all conforming/nonconforming parts and that repeated measurements on the same part are independent, conditional on the true state of the part, i.e. conditional independence. We call models using these assumptions fixed-effects models. In Chapter 3, we look at the case where a gold standard is available, whereas in Chapter 4, we investigate the "no gold standard" case. In both cases, we show that using a conditional selection plan, along with the baseline information, substantially improves the accuracy and precision of the estimators, compared to the standard sampling plan.
In Chapters 5 and 6, we investigate the case where we allow for possible variation in the misclassification rates within conforming and nonconforming parts, by proposing some new random-effects models. These models relax the fixed-effects model assumptions regarding constant misclassification rates and conditional independence. As in the previous chapters, we focus on investigating the effect of using conditional selection and baseline information on the properties of the estimators, and give study design recommendations based on our findings.
In Chapter 7, we discuss other potential applications of the conditional selection plan, where the study data are augmented with the baseline information on the pass rate, especially in the context where there are multiple BMSs under investigation.
|
36 |
Political transition and economic policy in the Brazilian Empire, 1853-1862. Almeida, José Tadeu de. 16 August 2018.
Advisor: Pedro Paulo Zahluth Bastos. Master's dissertation, Universidade Estadual de Campinas, Instituto de Economia, 2010.
This dissertation examines the management of Brazil's macroeconomic structures during the Second Empire (1840-1889), especially between 1853 and 1862, a period of strong political leadership by the Conservative Party, whose members engineered a rapprochement with the more moderate members of the Liberal Party, the movement known as the Conciliação. The movement aimed to form coalition governments, reduce the influence of party dissidents, and secure the approval of projects favorable to the country's development. Economic policy in this period was conducted by the Conservatives, who pursued a balanced budget and a strong exchange rate in order to avoid issuing money (and debt paper) to cover public expenditures, in line with the precepts of the gold standard, the international parity system adopted by the Brazilian authorities in 1846. The dissertation thus seeks a better understanding of this conservative model of economic policy, emphasizing the peripheral character of Brazil's insertion into the gold-sterling standard, the vulnerability of the Brazilian monetary system in the nineteenth century, and the impact of that situation on the social order. Finally, it stresses the need for fresh reflection on the management of the Empire's affairs, grounded in what contemporaries saw as the pressing need to build a nation-state. Master's in Economic Development (Economic History).
|
37 |
Teamwork in Distributed Agile Software Development. Gurram, Chaitanya; Bandi, Srinivas Goud. January 2013.
Context: Distributed software development has become a highly desirable way of developing software. Applying agile methodologies in distributed environments is a growing trend, owing to their benefits for communication and collaboration. Teamwork is a central concept that agile methodologies facilitate and a potential determinant of team performance, yet it has received little attention in distributed agile software development.
Objectives: This research sheds light on teamwork in the context of distributed agile software development. The objectives are to identify the factors contributing to teamwork in distributed agile teams, along with the dependencies between those factors, and, since working in unity in a heterogeneous environment is not without challenges, to identify the challenges related to each teamwork factor along with mitigation strategies.
Methods: A systematic literature review (SLR) was employed to identify the teamwork factors, their dependencies, and the corresponding challenges and mitigation strategies from the state-of-the-art literature. The quasi-gold-standard method was employed as the search strategy to find primary studies addressing the objectives under investigation. A survey was then conducted with industrial practitioners working on distributed agile projects to validate the findings from the literature.
Results: A total of 13 teamwork factors (team orientation, shared leadership, mutual performance monitoring, backup behavior, feedback, team autonomy, team learning, coordination, communication, trust, collective culture, ease of use of technology, and team familiarity), nine dependencies between the teamwork factors, and 45 challenges and 41 mitigation strategies related to those factors were identified in the literature. In the survey, communication, coordination, trust, and team orientation were identified as the four most important teamwork factors for distributed agile teams. Of the nine dependencies, seven were supported and two were not supported by practitioners of distributed agile projects. An additional nine challenges and 12 mitigation strategies were identified through the survey.
Conclusions: We conclude that communication is the single most important factor for successful teamwork in distributed agile teams. Unlike its prime importance in distributed software development generally, trust ranked only third for distributed agile teams, and, as in studies of co-located agile teams, team autonomy ranked least important. The dependency results show that future research should explore all the dependencies between the teamwork factors. Furthermore, for some teamwork factors no challenges or mitigation strategies were identified in the literature, yet the survey showed practitioners facing challenges for those very factors. Although this study identified those missed challenges, given the limited number of survey participants we cannot conclude that these are the only teamwork challenges faced.
Hence, a dedicated investigation is needed to explore all the challenges and mitigation strategies, which would help distributed agile teams attain fruitful interactions.
|
39 |
State intervention in the economy: the Central Bank and the execution of monetary and credit policies. Ladeira, Florinda Figueiredo Borges. 01 June 2010.
This work examines the appropriateness of the execution of monetary policy by central banks, with special focus on the Central Bank of Brazil and the normative framework currently in force on the matter. The need to develop this theme arose from the observation, especially over the last two decades, of a growing distance between central banks and the guidelines of the executive branch. Inflation targeting, held up as the ideal regime for guiding central bank action and ensuring the stability of the currency, was adopted as the ultimate goal of monetary policy, at the expense of the constitutional provisions on promoting the country's balanced development, pursuing full employment, and reducing social inequalities. The analysis starts from the historical method, which makes it possible to trace, from the nineteenth century to the present, how central banks emerged and rose to prominence as agents of the State charged with intervening in the economy for purposes of social advancement, especially after the emergence and consolidation of economic law as the legal science legitimating state intervention. The work then explores the functions of central banks, the instruments at their disposal for conducting monetary policy, and the fit between the objectives of that policy and the economic policy pursued by the State. Finally, it analyzes the Central Bank of Brazil and the evolution of its execution of monetary policy, in parallel with the country's social and political achievements, with particular emphasis on the 1960s and 1990s.
|
40 |
The Scandinavian Currency Union 1873-1924: studies in monetary integration and disintegration. Talia, Krim. January 2004.
This thesis studies the history of the Scandinavian Currency Union, 1873-1924. It is divided into four analytical chapters, each dealing with a different aspect of the Union and each written as a separate paper. The conclusions of the thesis challenge existing views of the Union and examine new aspects of this episode in monetary history, posing new questions and exploiting and evaluating new sources. The first paper offers an original interpretation of the role of Scandinavianism in the monetary reform of 1873-1875, arguing that its importance has been both exaggerated and misinterpreted: the monetary integration of those years was principally motivated by economic considerations. The second paper deals with inter-Scandinavian monetary cooperation during the period 1873-1914. It argues that the process of monetary integration, later followed by disintegration, during these decades is best understood in the context of a trade-off between financial efficiency and national economic vulnerability. It provides a comprehensive analysis of the motives behind the principal extensions of the Union's institutional framework, including the formation, cancellation, and renegotiation of the formal, Union-based clearing agreement, as well as the process leading to the free circulation of all Scandinavian notes throughout the currency area. The third paper studies the level of integration and efficiency of the Scandinavian foreign exchange market throughout the period, applying theories and methods from modern economics and finance to a new set of historical financial data. It concludes that the currency union generally, and the clearing agreement in particular, significantly increased the degree of market integration, and that during most of the period the Scandinavian foreign exchange market was characterized by perfect arbitrage and efficiency. The final paper challenges the prevailing scholarly view of the dissolution of the Union, arguing that the break-up resulted from the asymmetric shocks that the three countries experienced during World War I. These shocks, which differed as a result of varying national economic policies and structures, created tensions that required exchange-rate adjustments to resolve. / Diss. Stockholm: Handelshögskolan, 2004.
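One simple integration diagnostic in the spirit of the third paper, sketched on simulated data: compare the dispersion of quoted inter-Scandinavian exchange rates around par before and after the clearing agreement. The series, the par normalization, the noise levels, and the 1885 break date are all illustrative assumptions.

```python
# Hedged sketch: do quotes stay in tighter bands around par after an
# assumed 1885 clearing agreement? Simulated series, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1874, 1914)
par = 100.0                                   # kronor per 100 kronor
noise = np.where(years < 1885, 0.40, 0.10)    # tighter band post-clearing
quote = par + rng.normal(0, noise)

dev = np.abs(quote - par)                     # deviation from parity
print("mean abs deviation pre-1885 :", dev[years < 1885].mean().round(3))
print("mean abs deviation post-1885:", dev[years >= 1885].mean().round(3))
```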
|