11 |
Nonlinear Analysis of the Uncovered Interest Parity in Latin American Countries / Leng, Chuan-chiang, 04 August 2008
Abstract
Most of literature and studies on prediction of exchange rate focus on main industrial countries with few discussions on the exchange rate of the developing countries. For model residual differences can be found in a linear model, so the linear model will adjust to find equilibrium at a fixed speed. However, it is difficult for the linear model to capture the character of dynamic adjustment behavior if a non-linear adjustment relationship exists (Sarno, 2002). Moreover, in case the trading costs exist in the foreign exchange market or the technical analysis is widely used among traders, then the deviations from equilibrium exchange rate may present a non-linear adjustment trend. In view of this, this study employed the STAR (smooth transition autoregression) model developed by Granger and Terasvirta (1993) to discuss the dynamic adjustment process of the deviations from UIP in the seven countries in Latin America. In most of the experimental studies conducted in the past, it was found difficult to establish the assumptions of uncovered interest parity (UIP). Therefore, this study is aimed to verify the experimental studies on UIP in the Latin America under the non-linear framework by means of non-linear model analysis.
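The nonlinear adjustment the abstract describes can be sketched in a few lines. The snippet below simulates an exponential STAR (ESTAR) process in which the autoregressive coefficient moves smoothly from near 1 (small deviations persist like a random walk) toward a strongly mean-reverting value as the deviation grows; all parameter values are made up for the illustration and are not taken from the thesis.

```python
import numpy as np

def effective_rho(q, gamma=3.0, rho_inner=1.0, rho_outer=0.3):
    """AR(1) coefficient as a smooth function of the lagged deviation q.

    G(q) = 1 - exp(-gamma * q**2) is the exponential transition function:
    G ~ 0 near equilibrium (random-walk behaviour), G -> 1 far from it
    (fast mean reversion).
    """
    g = 1.0 - np.exp(-gamma * q ** 2)
    return rho_inner + (rho_outer - rho_inner) * g

def simulate_estar(T=2000, sigma=0.1, seed=0):
    """Simulate T periods of deviations from parity under ESTAR dynamics."""
    rng = np.random.default_rng(seed)
    q = np.zeros(T)
    for t in range(1, T):
        q[t] = effective_rho(q[t - 1]) * q[t - 1] + sigma * rng.normal()
    return q

q = simulate_estar()
print(effective_rho(0.05))  # close to 1: small deviations persist
print(effective_rho(1.0))   # close to rho_outer: large deviations revert fast
```

The asymmetry between small and large deviations is exactly what a fixed-coefficient linear AR model cannot represent, which is the abstract's motivation for the STAR family.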
|
12 |
Single parity check product codes and iterative decoding / Rankin, David Michael, January 2001
The aim of coding theory is to design codes which can achieve the fundamental limits of communication [52] and yet are simple to implement. On average, randomly constructed codes can achieve this goal, but with a decoding complexity that is impractical. Consequently, highly structured codes with practical decoding algorithms have been extensively studied. Unfortunately, the vast majority of these codes do not approach capacity. Recent advances involving simple 'random-like' codes with practical iterative decoding algorithms have closely approached capacity as the blocklength increases. This thesis investigates single parity check (SPC) product codes and introduces the class of randomly interleaved (RI) SPC product codes. It will be shown that RI SPC product codes can asymptotically force the probability of error to zero, at code rates up to capacity, for almost all codewords. Furthermore, the structure of these codes allows a very simple, sub-optimal, iterative decoding algorithm to be used. This thesis also derives an asymptotic analysis of SPC product codes from the decoding point of view. It is shown that the probability of error can be driven to zero, as the blocklength increases, for signal-to-noise ratios within 2 dB of capacity on the additive white Gaussian noise (AWGN) channel. Simulation results for both SPC and RI SPC product codes in an AWGN channel are presented. These results indicate that RI SPC product codes perform very well, typically within 1.5 dB of capacity over a wide range of blocklengths and code rates. Further analysis of the weight enumerator of finite-length RI SPC product codes is used to confirm the error floor of these codes. Extensions to parallel and serially concatenated SPC product codes are also investigated. Simulation results show an advantageous trade-off between code rate, blocklength and performance for three-dimensional parallel concatenated SPC product codes.
The design of irregular SPC product codes is also considered, and some simulation results are presented.
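The SPC product construction itself is simple enough to sketch directly. The toy below builds a two-dimensional SPC product code (every row and column carries an even-parity bit) and corrects a single flipped bit by intersecting the failing row and column checks; it illustrates the code structure only, not the iterative soft decoder analyzed in the thesis, and the 4x4 message size is arbitrary.

```python
import numpy as np

def spc_encode(info):
    """Encode a k x k information array into a (k+1) x (k+1) SPC product
    codeword: each row and each column ends with an even-parity bit."""
    k = info.shape[0]
    cw = np.zeros((k + 1, k + 1), dtype=int)
    cw[:k, :k] = info
    cw[:k, k] = info.sum(axis=1) % 2   # row parity bits
    cw[k, :k] = info.sum(axis=0) % 2   # column parity bits
    cw[k, k] = info.sum() % 2          # parity on the parities
    return cw

def spc_correct_single_error(cw):
    """Correct at most one flipped bit: a failing row check and a failing
    column check intersect at the error position."""
    rows = cw.sum(axis=1) % 2          # nonzero syndrome = check fails
    cols = cw.sum(axis=0) % 2
    bad_rows, bad_cols = np.flatnonzero(rows), np.flatnonzero(cols)
    fixed = cw.copy()
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        fixed[bad_rows[0], bad_cols[0]] ^= 1
    return fixed

rng = np.random.default_rng(1)
info = rng.integers(0, 2, size=(4, 4))
cw = spc_encode(info)
corrupted = cw.copy()
corrupted[2, 3] ^= 1                   # flip one bit
print(np.array_equal(spc_correct_single_error(corrupted), cw))  # True
```

The same row/column check structure is what the iterative decoder exploits: each received bit sits on one row constraint and one column constraint, and soft messages are exchanged between the two.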
|
13 |
Low-density Parity-check Codes for Wireless Relay Networks / Zhou, Xinsheng, January 2013
In wireless networks, it has always been a challenge to satisfy high traffic throughput demands, due to limited spectrum resources. In past decades, various techniques, including cooperative communications, have been developed to achieve higher communication rates.
This thesis addresses the challenges imposed by cooperative wireless networks, in particular focusing on practical code constructions and designs for wireless relay networks. The thesis is divided into the following four topics: 1) constructing and designing low-density parity-check (LDPC) codes for half-duplex three-phase two-way relay channels, 2) extending LDPC code constructions to half-duplex three-way relay channels, 3) proposing maximum-rate relay selection algorithms and LDPC code constructions for the broadcast problem in wireless relay networks, and 4) proposing an iterative hard interference cancellation decoder for LDPC codes in 2-user multiple-access channels.
Under the first topic, we construct codes for half-duplex three-phase two-way relay channels, where two terminal nodes exchange information with the help of a relay node. Constructing codes for such channels is challenging, especially when messages are encoded into multiple streams and a destination node receives signals from multiple nodes. We first prove an achievable rate region by random coding. Next, a systematic LDPC code is constructed at the relay node, where relay bits are generated from two source codewords. At the terminal nodes, messages are decoded from the signals of the source node and the relay node. To analyze the performance of the codes, discretized density evolution is derived. Based on the discretized density evolution, degree distributions are optimized by iterative linear programming in three steps. The optimized codes obtained are 26% longer than the theoretical ones.
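The thesis derives discretized density evolution for the AWGN relay setting; on the binary erasure channel the same idea collapses to a one-line recursion, which makes the threshold concept easy to illustrate. The sketch below finds the erasure threshold of a (3, 6)-regular LDPC ensemble by bisecting on the channel erasure probability; the ensemble choice and iteration counts are illustrative and unrelated to the codes constructed in the thesis.

```python
def de_threshold(dv=3, dc=6, iters=2000, tol=1e-9):
    """Binary-search the BEC erasure threshold of a (dv, dc)-regular LDPC
    ensemble via density evolution on the erasure probability x:

        x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1)) ** (dv - 1)

    Decoding succeeds iff the recursion drives x to zero."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid        # decodable: threshold is at least mid
        else:
            hi = mid
    return lo

print(de_threshold())
```

The known threshold for the (3, 6) ensemble is about 0.4294; the finite-iteration estimate lands just below it. Discretized density evolution for the AWGN channel tracks a full quantized message density instead of this single scalar, but the fixed-point logic is the same.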
For the second topic, we extend LDPC code constructions from half-duplex three-phase two-way relay channels to half-duplex three-way relay channels. An achievable rate region of half-duplex three-way relay channels is first proved. Next, LDPC codes for each sub-region of the achievable rate region are constructed, where relay bits can be generated only from a received codeword or from both the source codeword and received codewords.
Under the third topic, we study relay selection and code constructions for the broadcast problem in wireless relay networks. We start with the system model, followed by a theorem stating that a node can decode a message by jointly decoding multiple blocks of received signals. Next, the maximum rate is given when a message is decoded hop-by-hop or decoded by a set of nodes in a transmission phase. Furthermore, optimal relay selection algorithms are proposed for the two relay schemes. Finally, LDPC codes are constructed for the broadcast problem in wireless relay networks.
For the fourth topic, an iterative hard interference cancellation decoder for LDPC codes in 2-user multiple-access channels is proposed. The decoder is based on log-likelihood ratios (LLRs). Interference is estimated, quantized and subtracted from channel outputs. To analyze the codes, density evolution is derived. We show that the required signal-to-noise ratio (SNR) for the proposed low-complexity decoder is 0.2 dB higher than that for an existing sub-optimal belief propagation decoder at code rate 1/3.
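The hard interference cancellation idea can be shown with an uncoded two-user toy: make a hard decision on the strong user, subtract its quantized contribution from the channel output, then decide the weak user. The thesis performs this inside an LDPC LLR decoder; the channel gains and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
h1, h2, sigma = 1.0, 0.5, 0.3        # user 1 is the stronger user

b1 = rng.integers(0, 2, n) * 2 - 1   # BPSK symbols in {-1, +1}
b2 = rng.integers(0, 2, n) * 2 - 1
y = h1 * b1 + h2 * b2 + sigma * rng.normal(size=n)

# Hard interference cancellation: decide the strong user, subtract its
# hard (quantized) estimate, then decide the weak user on the residual.
b1_hat = np.where(y >= 0, 1, -1)
residual = y - h1 * b1_hat
b2_hat = np.where(residual >= 0, 1, -1)

ber_ic = np.mean(b2_hat != b2)                       # weak user, with cancellation
ber_naive = np.mean(np.where(y >= 0, 1, -1) != b2)   # weak user, no cancellation
print(ber_ic < ber_naive)  # True: cancellation makes the weak user decodable
```

Without cancellation the weak user is buried under the strong user's signal; after subtraction the residual is essentially a single-user channel, which is what lets the low-complexity LLR decoder in the thesis come within 0.2 dB of the belief propagation baseline.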
|
14 |
Puncturing, mapping, and design of low-density parity-check codes / Richter, Gerd, January 2008
Also published as: dissertation, University of Ulm, 2008
|
15 |
The relationship between adherence to a graded pelvic muscle exercise program and pelvic muscle strength in primiparous women: a report submitted in partial fulfillment ... for the degree of Master of Science, Parent-Child Nursing ... / Hanson, Elizabeth G., January 1995
Thesis (M.S.)--University of Michigan, 1995.
|
17 |
Terms of trade effects on PPP and incomes of primary-commodity exporting countries / Koya, Sharmistha N., January 1994
Thesis (Ph. D.)--Virginia Polytechnic Institute and State University, 1994. / Vita. Abstract. Includes bibliographical references (leaves 166-174). Also available via the Internet.
|
18 |
A Duty to Deceive? Using New and Deceptive Technologies to Enhance the Lives of Dementia Patients / Nofal, Jacob Ramsey, 24 April 2023
In this paper I propose a new type of therapy for dementia patients called AIIT. AIIT involves using artificially intelligent programs to mimic the likeness of a dementia patient's spouse or relative, in the hope of providing comfort and an alternative to constantly reliving grief or being lied to in unsatisfactory ways. I establish the moral permissibility of AIIT through the moral parity claim, a conditional claim stating that if current dementia care practices are permissible, then AIIT ought to be as well. This means that AIIT is no more morally problematic than current dementia care practices.
To make this claim I evaluate and compare AIIT to current practices from three perspectives, each a potential harm. I first find AIIT to be less harmful to dementia patients than current practices, since AIIT better preserves dementia patients' beliefs, emotions, and desires. I then conclude that AIIT does not pose a unique harm to the impersonated person, since 1) many theories of wellbeing do not support the possibility of the deceased being harmed, and 2) people with dementia do not commonly form new impressions, so harms to the impersonated person are extremely unlikely. Finally, I claim that AIIT would not cause additional harms to society, given that current practices already harm relatives in a similar manner, already have the potential to pose problems if used outside of dementia care, and do not differ from AIIT with respect to affecting trust in the medical system.
Having established moral parity, I conclude with a push for a stronger claim, the superior option claim, which states that AIIT is morally permissible by arguing for the antecedent of the moral parity claim. I argue for this by denying that we have an obligation to not deceive dementia patients since they have special conditions that do not allow them to apprehend the world accurately. / Master of Arts / AI is becoming increasingly popular, especially within the healthcare industry. I explore the intersection of AI and dementia care. Dementia is a debilitating illness that affects millions of people worldwide. In this paper, I propose a new therapy for dementia patients called AIIT. AIIT involves using artificially intelligent programs to simulate the likeness of a dementia patient's spouse or relative, with the goal of providing comfort and an alternative solution to their grief. I argue that AIIT is morally permissible on the grounds that it is not any worse than current practices in dementia care. At the end of the paper, I also argue that AIIT may have significant benefits and so we should employ AIIT when it is applicable.
|
19 |
An Analysis of the Parity Violating Asymmetry of Polarized Neutron Capture in Hydrogen from the NPDGamma Experiment / Tang, Elise, 01 January 2015
The NPDGamma Experiment is used to study the n + p → d + γ reaction for the purpose of examining the hadronic weak interaction. The nucleon-nucleon interaction is overwhelmingly mediated by the strong force; however, the weak part can be extracted through a study of its parity-violating manifestations. When neutrons are incident on protons, deuterons and 2.2 MeV gamma rays are produced. If the incoming neutrons are polarized, the parity-violating weak interaction gives rise to a measurable spatial asymmetry, A_γ, in the outgoing gamma rays, since s_n · k_γ is parity odd.
At low energies, the weak nucleon-nucleon interaction can be modeled as meson exchange and characterized with six parameters. NPDGamma is sensitive to one of these parameters, h_π^1. Previous measurements that extrapolate h_π^1 from more complicated interactions disagree with one another and with the theoretically reasonable range. Additionally, a previous iteration of the NPDGamma Experiment, performed at Los Alamos National Lab, was statistics-limited in its measurement of A_γ. For this reason, a new measurement was performed at the high-neutron-flux Spallation Neutron Source at Oak Ridge National Lab.
In the experiment, a high flux of cold neutrons was polarized to ~95% by a supermirror polarizer, the spins were flipped in a defined sequence by a radio-frequency spin rotator, and the neutrons were captured on a 16 L liquid para-hydrogen target, which emits gamma rays asymmetrically upon capture. The gamma rays are detected in a 3π array of 48 CsI crystal detectors. This thesis discusses the NPDGamma Experiment in detail, and includes an analysis of a subset of the NPDGamma data that has unique timing and data acquisition properties that preclude it from being analyzed with the combined data set. A_γ was extracted, with a result of (6.254 ± 37.694) × 10^-9.
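The asymmetry extraction itself reduces to a ratio of spin-up and spin-down detector yields. The toy below mimics this: an eight-step spin-flip sequence cancels detector drifts that are linear in time, and the up/down yield ratio recovers the injected asymmetry. The injected asymmetry is deliberately exaggerated (~10^-3 rather than the ~10^-8 scale of the real effect) so a short simulation can resolve it; the counts, drift rate, and run length are all made up.

```python
import numpy as np

rng = np.random.default_rng(3)
A_true = 1e-3        # exaggerated asymmetry (the real effect is ~1e-8)
N0 = 1e8             # mean gamma yield per pulse in one detector (made up)

# Eight-step spin sequence (+ - - + - + + -): within each octet the up and
# down pulses occupy index sets with equal sums, so any detector drift
# that is linear in time cancels in the up/down difference.
spins = np.array([+1, -1, -1, +1, -1, +1, +1, -1])
pulses = np.tile(spins, 50_000)

drift = 1.0 + 1e-6 * np.arange(len(pulses))        # slow gain drift
yields = N0 * drift * (1.0 + A_true * pulses)
yields = yields + rng.normal(scale=np.sqrt(N0), size=len(pulses))  # counting noise

up = pulses == +1
y_up, y_dn = yields[up].mean(), yields[~up].mean()
A_meas = (y_up - y_dn) / (y_up + y_dn)
print(abs(A_meas - A_true) < 1e-4)  # True: drift cancels, noise here is ~1e-7
```

At the real 10^-8 scale the same estimator works, but the counting-noise floor forces the enormous statistics that motivated moving the experiment to the Spallation Neutron Source.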
|
20 |
Analytical and numerical models of accretion disks / Caunt, Stuart Edward, January 1998
No description available.
|