About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

The Uncovered Interest Rate Parity at the Turn of the 20th Century

Davies, Orlan 01 January 2013 (has links)
High interest rate currencies tend to appreciate, contrary to what is implied by uncovered interest rate parity (UIP). UIP is thought to fail because of various risks, transaction costs, liquidity issues, and monetary policies. There have been extensive studies into the causes of this phenomenon, yet none has examined the period before the formation of the Federal Reserve in 1913. This study examines whether UIP holds between the UK, the US, France, Germany, the Netherlands, Belgium, Italy, Spain, and Portugal during this period, to determine whether the absence of capital controls and active monetary policy allows UIP to hold. In the end, none of the 213 regressions testing all country pairs across varying horizons came close to supporting UIP.
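The standard empirical test of UIP (not spelled out in the abstract) is a Fama-style regression of the exchange-rate change on the interest differential, with UIP implying a slope of one. A minimal sketch on synthetic data, assuming the usual log-depreciation form; variable names are illustrative, not the study's:

```python
import numpy as np

# Synthetic data: under UIP the depreciation equals the interest
# differential (beta = 1) plus unforecastable noise.
rng = np.random.default_rng(0)
T = 500
differential = rng.normal(0.0, 0.01, T)                       # i_t - i_t* (per period)
depreciation = 1.0 * differential + rng.normal(0.0, 0.02, T)  # s_{t+1} - s_t

# OLS: depreciation = alpha + beta * differential + error
X = np.column_stack([np.ones(T), differential])
alpha_hat, beta_hat = np.linalg.lstsq(X, depreciation, rcond=None)[0]
```

UIP is rejected when the estimated slope differs significantly from one; the 213 regressions in the study run this kind of test for each country pair and horizon.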
62

Equivocation of Eve using two edge type LDPC codes for the binary erasure wiretap channel

Andersson, Mattias, Rathi, Vishwambhar, Thobaben, Ragnar, Kliewer, Joerg, Skoglund, Mikael January 2010 (has links)
We consider transmission over a binary erasure wiretap channel using the code construction method introduced by Rathi et al., based on two-edge-type Low-Density Parity-Check (LDPC) codes and the coset encoding scheme. By generalizing the method of computing conditional entropy for standard LDPC ensembles, introduced by Méasson, Montanari, and Urbanke, to two-edge-type LDPC ensembles, we show how the equivocation of the wiretapper can be computed. We find that relatively simple constructions give very good secrecy performance, close to the secrecy capacity.

Copyright 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
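The coset encoding scheme mentioned in the abstract can be illustrated with a toy code: the secret message is embedded as the syndrome of the transmitted word, and the remaining freedom within the coset is filled randomly, which is what creates equivocation for the wiretapper. A brute-force sketch (the matrix `H` is a made-up toy, not a two-edge-type LDPC code):

```python
import itertools
import random
import numpy as np

# Toy parity-check matrix H (2 x 6) over GF(2); the message is the syndrome.
H = np.array([[1, 0, 1, 1, 0, 1],
              [0, 1, 1, 0, 1, 1]])

def coset_encode(m, rng=random.Random(0)):
    # Enumerate every length-6 binary vector whose syndrome H x^T equals
    # the message m, then transmit a uniformly random member of that coset.
    coset = [x for x in itertools.product([0, 1], repeat=6)
             if np.array_equal(H @ np.array(x) % 2, m)]
    return np.array(rng.choice(coset)), len(coset)

x, coset_size = coset_encode(np.array([1, 0]))
```

The legitimate receiver recovers the message by computing the syndrome of the received word; a wiretapper who sees erasures is left uncertain about which coset member was sent.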
63

On Constructing Low-Density Parity-Check Codes

Ma, Xudong January 2007 (has links)
This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communication over packet networks. We investigate two code design issues that are important in this setting: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterations. The brute-force approach to this optimization is numerically intractable, because it involves a difficult discrete optimization problem. In this thesis, we derive an asymptotic approximation to the number of decoding iterations. Based on this approximation, we propose an approximate optimization framework for finding near-optimal code parameters that minimize the number of decoding iterations. The approximate approach is numerically tractable. Numerical results confirm that the proposed optimization has excellent numerical properties and yields codes with excellent performance in terms of the number of decoding iterations: codes obtained by the proposed design approach can require as few as one-fifth of the decoding iterations of some previously well-known codes. The results also show that the asymptotic approximation is generally tight even for cases far from the asymptotic limit. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on $2$-lifts. Based on stopping-set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem with previously constructed capacity-approaching codes, preventing them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained with the proposed construction scheme and design criteria. Whereas codes from previous standard construction schemes have error floors at levels of $10^{-3}$ to $10^{-4}$, codes from the proposed approach show no observable error floors at levels above $10^{-7}$. Their error floors are also significantly lower than those of codes from previous approaches to constructing low-error-floor codes.
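A $2$-lift of a parity-check matrix replaces each nonzero entry of a base matrix with one of the two $2 \times 2$ permutation matrices (identity or swap) and each zero with the $2 \times 2$ zero block, doubling the code length while preserving the degree structure. A minimal sketch with an illustrative base matrix (the thesis's actual base codes and design criteria are not reproduced here):

```python
import numpy as np

def two_lift(H_base, rng=np.random.default_rng(0)):
    # Replace each 1 of the base matrix by a random 2x2 permutation
    # (identity or swap) and each 0 by the 2x2 zero block.
    m, n = H_base.shape
    I = np.eye(2, dtype=int)
    P = np.array([[0, 1], [1, 0]])  # the swap permutation
    H = np.zeros((2 * m, 2 * n), dtype=int)
    for i in range(m):
        for j in range(n):
            if H_base[i, j]:
                H[2*i:2*i+2, 2*j:2*j+2] = P if rng.integers(2) else I
    return H

H_base = np.array([[1, 1, 0, 1],
                   [0, 1, 1, 1]])
H = two_lift(H_base)
```

Because each lifted edge carries a permutation, every lifted row and column keeps the weight of its base row or column, so the degree distribution (and hence the density-evolution behavior) of the base ensemble is preserved.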
65

Purchasing power parity and the dynamic adjusting behavior of short-term nominal exchange rate

Chen, I-Hsiu 05 July 2010 (has links)
Purchasing power parity (PPP) is considered an important theory for explaining how exchange rates vary in the long run. Most past empirical studies adopted linear cointegration methods to test purchasing power parity. However, some papers point out that exchange rates exhibit non-linear cointegration, and that unexplainable bias may arise in tests of the purchasing power parity theory when a linear cointegration test is used. The methodology of this study applies the ESTR ECM proposed by Kapetanios et al. to address the inadequacy of the linear cointegration test. We analyze the dynamic adjustment behavior of the short-term nominal exchange rate with the ESTR ECM model when non-linear cointegration exists. The empirical results confirm purchasing power parity between Taiwan and its major trading partners. Among these, the United States, Japan, and Hong Kong are suited to the linear error-correction model, while Singapore and Korea are suited to the non-linear error-correction model.
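The abstract does not reproduce the model; a stylized form of the exponential smooth transition (ESTR) error-correction model in the spirit of Kapetanios et al. is:

```latex
\Delta s_t \;=\; \alpha \;+\; \gamma\, z_{t-1}\!\left[\,1 - \exp\!\left(-\theta\, z_{t-1}^{2}\right)\right]
\;+\; \sum_{i=1}^{p} \beta_i\, \Delta s_{t-i} \;+\; \varepsilon_t,
\qquad \theta > 0,
```

where $s_t$ is the log nominal exchange rate and $z_{t-1}$ is the lagged deviation from the long-run PPP relation. With $\gamma < 0$, the error correction is negligible for small deviations but strengthens smoothly as $|z_{t-1}|$ grows, which is the non-linear adjustment the study exploits.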
66

Re-examine the Purchasing Power Parity in sPVAR Model

Chen, Ching-po 10 August 2005 (has links)
Studies of exchange rate theory in international finance are divided into several schools. Purchasing Power Parity (PPP) is an important hypothesis both in monetary exchange rate theory and in the main open-economy macroeconomic models. Although many models are founded upon the existence of PPP, it still has not been confirmed empirically, which is why examining the existence of PPP matters. In the past, statistical analyses were performed directly on the models, since all variables were assumed stationary. However, regressing two non-stationary variables on each other may result in spurious regression. Unit root tests and cointegration tests were developed to avoid this problem, and should therefore be applied to the variables before estimation in regression analyses. Because of the low power of unit root and cointegration tests, many studies have adopted panel data models that combine time series and cross sections in order to improve power and mitigate small-sample limitations; panel unit root and panel cointegration tests were developed accordingly to avoid spurious regression. However, these tests require long time series and large cross sections, and obtaining data has always been the toughest difficulty in empirical research, let alone data covering long periods and many units. Such panel data models can only be applied to long-period studies, not to short ones. To avoid these problems, Binder, Hsiao and Pesaran (2004) developed the Short Panel Vector Autoregressions (sPVAR) model, a panel data model for short time series and large cross sections. This paper therefore examines Purchasing Power Parity under the sPVAR model for 30 countries since the introduction of the Euro (1998 to 2004).
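For reference, the PPP hypothesis being tested can be stated in log form as:

```latex
s_t \;=\; p_t - p_t^{*},
\qquad
q_t \;\equiv\; s_t - p_t + p_t^{*},
```

where $s_t$ is the log nominal exchange rate and $p_t$, $p_t^{*}$ are the domestic and foreign log price levels. PPP holds if the real exchange rate $q_t$ is stationary, i.e. deviations from parity die out, which is what unit root and cointegration tests (panel or otherwise) are used to check.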
67

none

Kao, Hsiao-feng 21 August 2008 (has links)
none
68

Nested low-density lattice codes based on non-binary LDPC codes

Ghiya, Ankit 20 December 2010 (has links)
A family of low-density lattice codes (LDLC) is studied based on Construction A for lattices. The family of Construction-A codes is already known to contain a large capacity-achieving subset. Parallels are drawn between coset non-binary low-density parity-check (LDPC) codes and nested low-density Construction-A lattice codes. Most related research in the LDPC domain assumes optimal power allocation to the encoded codeword. The source coding problem of mapping a message to a power-optimal codeword for an arbitrary LDPC code is, in general, NP-hard. In this thesis, we present a novel method for encoding and decoding lattices based on non-binary LDPC codes using message-passing algorithms.
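Construction A, referenced in the abstract, turns a linear code over $\mathbb{Z}_q$ into a lattice: a lattice point is a codeword plus $q$ times any integer vector. A toy sketch (the code over $\mathbb{Z}_5$ is illustrative, not an LDPC code from the thesis):

```python
import numpy as np

q = 5  # alphabet size of the non-binary code
# Toy generator matrix of a length-4 linear code over Z_5.
G = np.array([[1, 0, 2, 3],
              [0, 1, 1, 4]])

def encode_lattice(msg, shift):
    # Construction A: lattice point = codeword + q * (integer shift vector).
    c = msg @ G % q
    return c + q * shift

x = encode_lattice(np.array([2, 3]), np.array([1, -1, 0, 2]))
```

Reducing a lattice point modulo $q$ recovers the underlying codeword, which is why decoding can reuse the non-binary LDPC message-passing machinery on the mod-$q$ channel outputs.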
69

Price Discovery Across Option and Equity Prices

Kane, Hayden January 2014 (has links)
This paper measures the channels by which private information is incorporated into prices in the equity and option markets. Using a mispricing-events approach and conditioning on the option market being the cause of the mispricing event, I analyse the subsequent behaviour of both the options and equity markets, and I find that options markets play an important role in the price discovery process. When conditioning on option-caused mispricing events, the equity price adjusts towards the options price to reconcile the two. I find that around 40% of option-caused mispricing events contain information, and that equity prices adjust by 35-40% of the maximum discrepancy, depending on the exchange, before prices reconcile. When the equity market causes the mispricing, the option market follows due to the autoquote mechanism. Additionally, I use Monte Carlo simulation to assess the suitability of the Hasbrouck (1995) Information Share and Gonzalo-Granger (1995) Component Share measures in the option-equity context. I find that neither metric is suitable, but that the Putnins (2013) Information Leadership metric is; by this metric, the options market has on average a 35% information leadership share.
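The abstract does not spell out how option-equity mispricing is measured; one standard way to infer an option-implied equity price is European put-call parity. A hedged sketch with an illustrative deviation threshold (the paper's actual detection rule may differ):

```python
import math

def implied_equity_price(call, put, strike, r, T):
    # European put-call parity: C - P = S - K * exp(-r T)  =>  S implied:
    return call - put + strike * math.exp(-r * T)

def flag_mispricing(S_market, call, put, strike, r, T, threshold=0.01):
    # Flag an event when the option-implied price deviates from the
    # traded equity price by more than the (illustrative) threshold.
    S_impl = implied_equity_price(call, put, strike, r, T)
    return abs(S_impl - S_market) / S_market > threshold, S_impl

flagged_ok, s_ok = flag_mispricing(100.0, 7.0, 2.123, 100.0, 0.05, 1.0)
flagged_bad, s_bad = flag_mispricing(100.0, 9.0, 2.123, 100.0, 0.05, 1.0)
```

Conditioning on which market moved first when such an event opens up is what lets the paper attribute the mispricing to the option or the equity side.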
70

Multilateral approaches to the theory of international comparisons

Armstrong, Keir G. 11 1900 (has links)
The present thesis provides a definite answer to the question of how comparisons of certain aggregate quantities and price levels should be made across two or more geographic regions. It does so from the viewpoint of both economic theory and the "test" (or "axiomatic") approach to index-number theory. Chapter 1 gives an overview of the problem of multilateral interspatial comparisons and introduces the rest of the thesis. Chapter 2 focuses on a particular domain of comparison involving consumer goods and services, countries and households, developing a theory of international comparisons in terms of the (Konüs-type) cost-of-living index. To this end, two new classes of purchasing power parity measures are set out and the relationship between them is explored. The first is the many-household analogue of the (single-household) cost-of-living index and, as such, is rooted in the theory of group cost-of-living indexes. The second consists of sets of (nominal) expenditure-share deflators, each corresponding to a system of (real) consumption shares for a group of countries. Using this framework, a rigorous exact index-number interpretation of Diewert's "own-share" system of multilateral quantity indexes is provided. Chapter 3 develops a novel multilateral test approach to the problem at hand by generalizing Eichhorn and Voeller's bilateral counterpart in a sensible manner. The equivalence of this approach to an extended version of Diewert's multilateral test approach is exploited in an assessment of the relative merits of several alternative multilateral comparison formulae motivated outside the test-approach framework. Chapter 4 undertakes an empirical comparison of the formulae examined on theoretical grounds in Chapter 3, using an appropriate cross-sectional data set constructed by the Eurostat-OECD Purchasing Power Parity Programme. The principal aim of this comparison is to ascertain the magnitude of the effect of choosing one formula over another. In aid of this, a new indicator is proposed which facilitates the measurement of the difference between two sets of purchasing power parities, each computed using a different multilateral index-number formula.
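As one concrete example of the kind of multilateral formula compared in Chapters 3 and 4, the GEKS method (used in the Eurostat-OECD programme) makes bilateral Fisher indexes transitive by taking geometric means over all bridge countries. A sketch with made-up price and quantity data for three countries and two goods (not the thesis's data or necessarily its preferred formula):

```python
import numpy as np

def fisher(p_base, q_base, p_comp, q_comp):
    # Fisher ideal index: geometric mean of Laspeyres and Paasche.
    laspeyres = (p_comp @ q_base) / (p_base @ q_base)
    paasche = (p_comp @ q_comp) / (p_base @ q_comp)
    return np.sqrt(laspeyres * paasche)

def geks(prices, quantities):
    # GEKS parity between j and k: geometric mean over every bridge
    # country l of the chained Fisher comparison j -> l -> k.
    M = len(prices)
    F = np.array([[fisher(prices[j], quantities[j], prices[k], quantities[k])
                   for k in range(M)] for j in range(M)])
    G = np.ones((M, M))
    for j in range(M):
        for k in range(M):
            G[j, k] = np.prod([F[j, l] * F[l, k] for l in range(M)]) ** (1.0 / M)
    return G

prices = [np.array([1.0, 2.0]), np.array([1.5, 2.5]), np.array([2.0, 3.0])]
quantities = [np.array([10.0, 5.0]), np.array([8.0, 6.0]), np.array([9.0, 4.0])]
G = geks(prices, quantities)
```

Unlike raw bilateral Fisher indexes, the GEKS parities are transitive (comparing A to C directly equals comparing A to B and B to C), which is the defining multilateral requirement the test approach formalizes.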
