281

Joint Equalization and Decoding via Convex Optimization

Kim, Byung Hak May 2012 (has links)
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (or magnetic recording systems), where the required error rate (after decoding) is too low to be verifiable by simulation, are among the most important applications of this research. Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry, and some hard-disk drives have started using iterative decoding. Despite progress in the area of reduced-complexity detection and decoding algorithms, there has been some resistance to the deployment of turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates. To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem. In particular, it is based on the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis). Since general-purpose LP solvers are too complex to make the joint LP decoder feasible for practical purposes, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE. The second part of this dissertation considers the matrix completion problem: the recovery of a data matrix from incomplete, or even corrupted, entries of an unknown matrix. Recommender systems are good representatives of this problem, and this research is important for the design of information retrieval systems that require very high scalability. We show that our IMP algorithm reduces the well-known cold-start problem associated with collaborative filtering systems in practice.
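As a rough illustration of the LP-decoding idea behind the first part, here is a minimal sketch of Feldman-style LP decoding for a tiny code over a memoryless channel using a generic solver. The (7,4) Hamming code, the BSC channel model, and the use of scipy's linprog are illustrative assumptions; this is not the joint finite-state-channel formulation or the dual-domain iterative solver developed in the dissertation.

```python
# Hypothetical sketch: Feldman-style LP decoding of a tiny code over a
# memoryless BSC (not the joint FSC formulation of the thesis).
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the (7,4) Hamming code, used here as a stand-in code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def lp_decode(llr):
    """Minimize sum(llr_i * x_i) over the fundamental polytope of H."""
    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        # One cut per odd-sized subset S of the check's neighborhood:
        #   sum_{i in S} x_i - sum_{i in N\S} x_i <= |S| - 1
        for k in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, k):
                a = np.zeros(n)
                a[nbrs] = -1.0
                a[list(S)] = 1.0
                A_ub.append(a)
                b_ub.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, 1)] * n, method="highs")
    return res.x  # fractional entries indicate pseudo-codewords

# Example: all-zero codeword sent over a BSC(p = 0.1), bit 2 flipped.
p = 0.1
received = np.array([0, 0, 1, 0, 0, 0, 0])
# LLR = log P(y|x=0)/P(y|x=1): positive means the bit "looks like" a 0.
llr = np.where(received == 0, 1.0, -1.0) * np.log((1 - p) / p)
print(np.round(lp_decode(llr), 3))
```

Fractional entries in the LP solution correspond to pseudo-codewords, which is the property that makes the error-floor prediction mentioned above possible.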
282

Parallel VLSI Architectures for Multi-Gbps MIMO Communication Systems

January 2011 (has links)
In wireless communications, the use of multiple antennas at both the transmitter and the receiver is a key technology for enabling high-data-rate transmission without additional bandwidth or transmit power. Multiple-input multiple-output (MIMO) schemes are widely used in many wireless standards, allowing higher throughput through spatial multiplexing techniques. MIMO soft detection poses significant challenges to MIMO receiver design because the detection complexity increases exponentially with the number of antennas. As next-generation wireless systems push for multi-Gbps data rates, there is a great need for high-throughput, low-complexity soft-output MIMO detectors. A brute-force implementation of the optimal MIMO detection algorithm would consume enormous power and is not feasible with current technology. We propose a reduced-complexity soft-output MIMO detector architecture based on a trellis-search method. We convert the MIMO detection problem into a shortest-path problem. We introduce a path-reduction and a path-extension algorithm to reduce the search complexity while still maintaining sufficient soft information values for the detection. We avoid the missing counter-hypothesis problem by keeping multiple paths during the trellis-search process. The proposed trellis-search algorithm is a data-parallel algorithm and is very suitable for high-speed VLSI implementation. Compared with conventional tree-search based detectors, the proposed trellis-based detector achieves a significant improvement in detection throughput and area efficiency. The proposed MIMO detector has great potential for next-generation Gbps wireless systems, achieving very high throughput and good error performance. The soft information generated by the MIMO detector is processed by a channel decoder, e.g. a low-density parity-check (LDPC) decoder or a Turbo decoder, to recover the original information bits. The channel decoder is another computationally intensive block in a MIMO receiver SoC (system-on-chip). We present high-performance LDPC decoder architectures and Turbo decoder architectures that achieve 1+ Gbps data rates. Further, a configurable decoder architecture that can be dynamically reconfigured to support both LDPC codes and Turbo codes is developed to support multiple 3G/4G wireless standards. We present ASIC and FPGA implementation results for various MIMO detectors, LDPC decoders, and Turbo decoders, and discuss in detail the computational complexity and throughput performance of these detectors and decoders.
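As a much-simplified illustration of turning detection into a layer-by-layer path search, the sketch below runs a breadth-limited search over a tiny 2x2 QPSK system after QR decomposition. The antenna count, constellation, survivor count K, and hard-output-only decisions are illustrative assumptions; this is not the path-reduction/path-extension soft-output detector of the thesis.

```python
# Hypothetical sketch: hard-output MIMO detection via a breadth-limited
# layer-by-layer search after QR decomposition (a generic stand-in for a
# trellis/tree search; no soft LLRs are produced).
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def kbest_detect(H, y, K=2):
    """Detect x from y = H @ x + n, searching layers from the last antenna down."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    nt = H.shape[1]
    # Partial path = (accumulated metric, symbols for the layers decided so far).
    paths = [(0.0, [])]
    for level in range(nt - 1, -1, -1):
        extended = []
        for metric, syms in paths:
            for s in QPSK:
                full = [s] + syms                    # symbols for layers level..nt-1
                interference = sum(R[level, level + 1 + j] * full[1 + j]
                                   for j in range(len(syms)))
                e = z[level] - R[level, level] * s - interference
                extended.append((metric + abs(e) ** 2, full))
        # Path reduction: keep only the K best partial paths at this layer.
        paths = sorted(extended, key=lambda t: t[0])[:K]
    return np.array(paths[0][1])                     # best full path

# Example: 2x2 MIMO with QPSK and light noise.
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = QPSK[[0, 3]]
y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(kbest_detect(H, y), "vs sent", x)
```

Keeping several surviving paths per layer is what preserves the counter-hypotheses needed for soft output in the full detector; the hard-output sketch only illustrates the search structure.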
283

  • Carry Trading & Uncovered Interest Rate Parity: An overview and empirical study of its applications

Tafazoli, Farid, Westman, Mathias January 2011 (has links)
The thesis examines whether uncovered interest rate parity has held over a 10-year period between Japan and Australia/Norway/USA. Monthly data collected from February 2001 to December 2010 are used, through regression and correlation analysis, to test whether the theory holds. The thesis also includes a simulated portfolio that shows how a carry-trading strategy could have been executed, and it shows that an investor can indeed profit from this kind of trade with low risk. The thesis concludes that the theory of uncovered interest rate parity does not hold in the long term and that some opportunities for profit with low risk do exist. / Translated from the Swedish abstract: The thesis examines whether the uncovered interest rate parity condition has held over a 10-year period between Japan and Australia/Norway/USA. Monthly data from February 2001 to December 2010 are used, through regression analysis and examination of correlations, to see whether the relationship holds. The study also includes a simulated portfolio that shows what a carry-trading portfolio could have looked like during the period examined and how one can profit from this type of trading with low risk. The study concludes that the theory of uncovered interest rate parity does not hold in the long run and that certain opportunities for profit exist.
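For readers unfamiliar with how such a test is typically set up, the following sketch runs the standard UIP regression on synthetic data; the data-generating numbers and the statsmodels workflow are illustrative assumptions, not the thesis's dataset or results.

```python
# Illustrative sketch (synthetic data): the standard UIP regression
#   delta_s(t+1) = alpha + beta * (i_t - i*_t) + e,  where UIP predicts alpha = 0, beta = 1.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120                                    # ten years of monthly observations
diff = rng.normal(0.003, 0.001, n)         # monthly interest-rate differential
# Stylized carry-trade fact: realized depreciation underreacts (beta < 1).
ds = 0.2 * diff + rng.normal(0, 0.02, n)   # realized log exchange-rate change

X = sm.add_constant(diff)
res = sm.OLS(ds, X).fit()
print(res.params)                          # [alpha, beta]
# Wald test of H0: beta = 1 (the UIP null); rejection is evidence against UIP.
print(res.t_test("x1 = 1"))
```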
284

Efficient Decoding Algorithms for Low-Density Parity-Check Codes / Effektiva avkodningsalgoritmer för low density parity check-koder

Blad, Anton January 2005 (has links)
Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory, which is a problem for practical implementations. We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and to stop updating the messages for a bit whenever its reliability is higher than the threshold. We also consider some modifications to this scheme, including a dynamic threshold more suitable for codes with cycles, and a scheme with soft thresholds which allows a decision that has proved wrong to be reversed. By exploiting the bits' different rates of convergence we are able to achieve an efficiency of up to 50% at a bit error rate of less than 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.
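A minimal sketch of the threshold idea, assuming a toy parity-check matrix, min-sum message passing, and a single fixed threshold (the dynamic and soft-threshold variants described above are not shown):

```python
# Sketch (assumed details, not the thesis implementation): min-sum LDPC decoding
# where a bit is "frozen" once the magnitude of its posterior LLR exceeds a
# fixed reliability threshold, so its outgoing messages stop being updated.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])            # toy (2,3)-regular parity-check matrix

def decode(llr_ch, threshold=8.0, max_iter=20):
    m, n = H.shape
    msg_v2c = H * llr_ch                       # variable-to-check messages, init to channel LLRs
    frozen = np.zeros(n, dtype=bool)
    llr_post = llr_ch.copy()
    for _ in range(max_iter):
        # Check-to-variable update (min-sum): sign product and minimum magnitude.
        msg_c2v = np.zeros_like(msg_v2c, dtype=float)
        for j in range(m):
            idx = np.flatnonzero(H[j])
            for i in idx:
                others = [k for k in idx if k != i]
                sign = np.prod(np.sign(msg_v2c[j, others]))
                msg_c2v[j, i] = sign * np.min(np.abs(msg_v2c[j, others]))
        # Posterior LLRs; frozen bits keep their previous value.
        llr_new = llr_ch + msg_c2v.sum(axis=0)
        llr_post = np.where(frozen, llr_post, llr_new)
        frozen |= np.abs(llr_post) > threshold          # freeze reliable bits
        hard = (llr_post < 0).astype(int)
        if not np.any(H @ hard % 2):                    # all parity checks satisfied
            break
        # Variable-to-check update, skipped for frozen bits to save work.
        for i in np.flatnonzero(~frozen):
            for j in np.flatnonzero(H[:, i]):
                msg_v2c[j, i] = llr_post[i] - msg_c2v[j, i]
    return hard

# Example: all-zero codeword over an AWGN-like channel, one unreliable bit.
llr_ch = np.array([4.0, 3.5, -0.5, 5.0, 4.2, 3.8])
print(decode(llr_ch))
```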
285

  • The role of unobserved heterogeneity in transition to higher parity: evidence from Italy using the Multiscopo survey

Carioli, Alessandra January 2009 (has links)
The paper uses data from the 2003 Italian Multiscopo Survey to estimate the effects of education on fertility and, in particular, to determine how and to what degree unobserved heterogeneity influences the estimated effects, that is, how unobserved heterogeneity might bias estimates of the effect of education on the transition to first, second and third births. The peculiarity of this study is the implementation of a multiprocess approach, which allows a broader and more efficient view of the phenomenon by studying jointly the transitions to first, second and third or higher order births. In doing this I use control variables, in particular the educational level of the mother and of her family members (i.e. partner and grandmother), to detect possible influences of education on childbearing timing. Moreover, this topic has not yet been analysed using Italian data, in particular Multiscopo Survey data, and it may produce interesting comparisons with other European countries, where the topic has already been addressed. In this study I show that the number of siblings is the variable with a significant and relevant effect in all the models considered, and that the education of the woman's partner has an up-and-down (non-monotonic) effect on the transition to childbearing. Moreover, the inclusion of unobserved characteristics of women plays an important role in understanding the transition to childbearing, the estimated effect being positive and significant.
286

International Fisher Effect: A Reexamination within Co-integration and DSUR Frameworks

Ersan, Eda 01 December 2008 (has links) (PDF)
The International Fisher Effect (IFE) is a theory in international finance which asserts that the spot exchange rate between two countries should move in the opposite direction to the interest rate differential between those countries. The aim of this thesis is to analyze whether differences in nominal interest rates between countries and the movements of the spot exchange rates between their currencies tend to move together over the long run. The presence of IFE is tested among the G-5 countries and Turkey for the period from 1985:1 to 2007:12. The long-run relationship is estimated with the Johansen co-integration method, and supportive evidence is found for all country pairs. Individually modeled equations are further tested with the Dynamic SUR (DSUR) method. The DSUR equations that include the Turkish currency provide supportive evidence for IFE, namely that higher interest rates in favor of Turkey would cause depreciation of the Turkish Lira. The magnitude of the effect is found to be lower than expected, which indicates that there might be other factors in the economy, such as inflation rates, that affect exchange rate movements.
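As an illustration of the co-integration step, the sketch below applies the Johansen test to two synthetic series that share a stochastic trend; the series, sample length, and test settings are assumptions, not the thesis data or specification.

```python
# Illustrative sketch (synthetic series): Johansen co-integration test between a
# log spot exchange rate and an interest-rate differential, a stand-in for the
# country-pair tests described above.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
T = 276                                          # 1985:1-2007:12, monthly
common = np.cumsum(rng.normal(0, 1, T))          # shared stochastic trend
spot = common + rng.normal(0, 0.3, T)            # log spot exchange rate
idiff = 0.5 * common + rng.normal(0, 0.3, T)     # interest-rate differential

data = np.column_stack([spot, idiff])
res = coint_johansen(data, det_order=0, k_ar_diff=1)
# Trace statistics vs. 90/95/99% critical values; a rank-0 trace statistic above
# the 95% column indicates at least one co-integrating relation.
print("trace stats:", res.lr1)
print("95% critical values:", res.cvt[:, 1])
```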
287

Impact of general purchasing power accounting on Greek accounts

Baralexis, Spyridon K. January 1989 (has links)
This study addressed the inflation accounting problem with respect to Greece. The problem had been unaddressed despite the serious implications it may have for micro- and macro-decision making, given the high and persistent inflation Greece has sustained since 1973. To accomplish the above purpose, the general significance of inflation accounting, as well as its specific significance for Greece, was established by means of the existing inflation accounting literature and the economic setting of Greece. Following this, the relevance of GPPA rather than CCA to Greek financial reporting was established by means of correspondence between specific features of GPPA and specific characteristics of the Greek setting. After having established the a priori relevance of GPPA for Greece, the potential usefulness of GPPA to the Greek users of accounts was established on an empirical basis as well. For this purpose the impact of GPPA on Greek accounts was approximated ex ante through detailed restatement procedures and estimation techniques. It was found that inflation has a serious impact on earnings and especially on such important (for decision making) financial parameters as the tax rate, the dividend payout ratio, and the return on capital employed. This impact of inflation on earnings does not seem to be systematic, and hence it cannot be estimated from HCA numbers. Therefore, GPPA should be adopted, at least on a supplementary (to HCA) basis, if the inflation rate in the future continues to be as high as it was in the period examined by the study (i.e. 25% or so). In addition to the main conclusion above, other conclusions drawn from the empirical findings are as follows: 1. The Composite Age Technique used (mainly in the USA) for the restatement of fixed assets and depreciation does not work at all in the Greek case. In contrast, the Dichotomous Year Technique in the first place, and the Equal Additions Technique in the second, may be used for adjusting fixed assets not only in developing countries like Greece but perhaps in developed countries as well. 2. The operating costs of GPPA can be reduced by restating fixed assets and depreciation on an annual rather than a monthly basis. 3. The Greek government should perhaps reconsider the taxes imposed on corporate net profits in times of high inflation, because it was found that the effective tax rate is substantially different from the nominal one. 4. There are serious implications for Greek businesses in the finding that, in real terms, dividends are paid out of capital rather than out of income. 5. The profitability of Greek companies is low when measured in real terms; hence, businessmen should exercise every effort to improve it. On the other hand, the Greek government should reconsider the price controls imposed.
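The core GPPA restatement is a simple index-ratio adjustment; the sketch below illustrates it with hypothetical figures that are not drawn from the study.

```python
# Toy illustration (hypothetical figures): general purchasing power restatement
# of a fixed asset's historical cost and annual depreciation using the ratio of
# the general price index at the balance-sheet date to the index at acquisition.
def gppa_restate(historical_cost, index_at_acquisition, index_now):
    return historical_cost * index_now / index_at_acquisition

cost = 1_000_000                        # historical cost at acquisition (assumed)
index_then, index_now = 100.0, 225.0    # general price index then and now (assumed)
restated_cost = gppa_restate(cost, index_then, index_now)
annual_depreciation = restated_cost / 10    # assumed 10-year straight-line life
print(f"restated cost: {restated_cost:,.0f}")
print(f"restated annual depreciation: {annual_depreciation:,.0f}")
```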
288

Performance Evaluation of Low Density Parity Check Forward Error Correction in an Aeronautical Flight Environment

Temple, Kip October 2014 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / In some flight test scenarios the telemetry link is noise limited at long slant ranges or during signal fade events caused by antenna pattern nulls. In these situations, a mitigation technique such as forward error correction (FEC) can add several decibels to the link margin. The particular FEC code discussed in this paper is a variant of a low-density parity-check (LDPC) code and is coupled with SOQPSK modulation in the hardware tested. This paper briefly covers lab testing of the flight-ready hardware, then presents flight test results comparing a baseline uncoded telemetry link with an LDPC-coded telemetry link. This is the first known test dedicated to this specific FEC code in a real-world test environment, with a flight profile tailored to assess the viability of an LDPC-coded telemetry link.
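As a back-of-the-envelope illustration of how a coding gain translates into link margin, the sketch below runs a simple link-budget calculation; all figures (EIRP, antenna gain, receiver sensitivity, coding gain, range, frequency) are assumed values, not the paper's measurements.

```python
# Back-of-the-envelope sketch (illustrative numbers, not measured results): how an
# FEC coding gain adds to the link margin of a telemetry link.
import math

def free_space_path_loss_db(range_km, freq_mhz):
    # FSPL (dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.45
    return 20 * math.log10(range_km) + 20 * math.log10(freq_mhz) + 32.45

eirp_dbm = 40.0                 # transmit EIRP (assumed)
rx_gain_db = 30.0               # ground-station antenna gain (assumed)
required_dbm_uncoded = -95.0    # receiver sensitivity, uncoded SOQPSK (assumed)
coding_gain_db = 6.0            # assumed LDPC coding gain at the target BER

loss = free_space_path_loss_db(range_km=200, freq_mhz=2250)   # S-band telemetry
received_dbm = eirp_dbm + rx_gain_db - loss
print(f"received power: {received_dbm:.1f} dBm")
print(f"margin uncoded: {received_dbm - required_dbm_uncoded:.1f} dB")
print(f"margin coded:   {received_dbm - (required_dbm_uncoded - coding_gain_db):.1f} dB")
```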
289

Coded Modulation for High Speed Optical Transport Networks

Batshon, Hussam George January 2010 (has links)
At a time when almost 1.75 billion people around the world use the Internet on a regular basis, optical communication over optical fiber, which is used in long-distance and high-demand applications, has to be capable of providing higher communication speed and reliability. In recent years, strong demand has been driving the dense wavelength division multiplexing network upgrade from 10 Gb/s per channel to more spectrally efficient 40 Gb/s or 100 Gb/s per wavelength channel, and beyond. 100 Gb/s Ethernet is currently under standardization, and in a couple of years 1 Tb/s Ethernet is going to be standardized as well for different applications, such as local area networks (LANs) and wide area networks (WANs). The major concern with such high data rates is the degradation in signal quality due to linear and non-linear impairments, in particular polarization mode dispersion (PMD) and intrachannel nonlinearities. Moreover, higher-speed transceivers are expensive, so the required rates are preferably achieved using commercially available components operating at lower speeds. In this dissertation, different LDPC-coded modulation techniques are presented that offer higher spectral efficiency and/or power efficiency, in addition to offering aggregate rates that can go up to 1 Tb/s per wavelength. These modulation formats are based on bit-interleaved coded modulation (BICM) and include: (i) three-dimensional LDPC-coded modulation using hybrid direct and coherent detection, (ii) multidimensional LDPC-coded modulation, (iii) subcarrier-multiplexed four-dimensional LDPC-coded modulation, (iv) hybrid subcarrier/amplitude/phase/polarization LDPC-coded modulation, and (v) iterative polar quantization based LDPC-coded modulation.
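A much-simplified sketch of the BICM mapping stage these formats build on is given below; the random bits standing in for LDPC encoder output, the interleaver, the 16-QAM constellation, and the Gray labeling are all illustrative assumptions.

```python
# Simplified sketch of a bit-interleaved coded modulation (BICM) transmitter stage:
# coded bits are interleaved and then mapped, four at a time, onto Gray-labeled 16-QAM.
import numpy as np

rng = np.random.default_rng(7)
GRAY_PAM = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}   # Gray-labeled 4-PAM levels

def bicm_map(coded_bits, interleaver):
    bits = coded_bits[interleaver]                    # bit interleaving
    bits = bits.reshape(-1, 4)                        # 4 bits per 16-QAM symbol
    symbols = []
    for b in bits:
        i = GRAY_PAM[(b[0] << 1) | b[1]]              # in-phase from first two bits
        q = GRAY_PAM[(b[2] << 1) | b[3]]              # quadrature from last two bits
        symbols.append((i + 1j * q) / np.sqrt(10))    # normalize to unit average energy
    return np.array(symbols)

coded_bits = rng.integers(0, 2, 64)                   # stand-in for LDPC encoder output
interleaver = rng.permutation(coded_bits.size)
tx = bicm_map(coded_bits, interleaver)
print(tx[:4])
```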
290

Robust High Throughput Space-Time Block Coded MIMO Systems

Pau, Nicholas January 2007 (has links)
In this thesis, we present a space-time coded system which achieves high throughput and good performance with low processing delay, using low-complexity detection and decoding. Initially, Hamming codes are used in a simple interleaved bit-mapped coded modulation (BMCM) structure. This is concatenated with Alamouti's orthogonal space-time block codes. The good performance achieved by this system indicates that higher throughput is possible while maintaining performance. An analytical bound for the performance of this system is presented. We also develop a class of low-density parity-check codes which allows flexible "throughput versus performance" tradeoffs. We then focus on a rate-2 quasi-orthogonal space-time block code structure which enables us to achieve an overall throughput of 5.6 bits/symbol period with good performance and relatively simple decoding using iterative parallel interference cancellation. We show that this can be achieved through the use of a bit-mapped coded modulation structure using parallel short low-density parity-check codes. The absence of interleavers here reduces processing delay significantly. The proposed system is shown to perform well on flat Rayleigh fading channels with a wide range of normalized fade rates, and to be robust to channel estimation errors. A comparison with bit-interleaved coded modulation (BICM) is also provided.
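For reference, here is a small sketch of the Alamouti orthogonal space-time block code used as the space-time layer, under the usual single-receive-antenna, flat-fading, known-channel assumptions; the outer BMCM/LDPC structure of the thesis is not shown.

```python
# Sketch of Alamouti encoding and linear combining (one receive antenna, flat
# fading, perfect channel knowledge assumed).
import numpy as np

def alamouti_encode(s1, s2):
    # Rows = time slots, columns = transmit antennas.
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(y1, y2, h1, h2):
    # Linear combining that decouples the two symbols (up to a |h1|^2 + |h2|^2 gain).
    s1_hat = np.conj(h1) * y1 + h2 * np.conj(y2)
    s2_hat = np.conj(h2) * y1 - h1 * np.conj(y2)
    return s1_hat, s2_hat

rng = np.random.default_rng(3)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)   # h1, h2
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 + 1j) / np.sqrt(2)                    # QPSK symbols

X = alamouti_encode(s1, s2)
y = X @ h + 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2)) # two time slots
s1_hat, s2_hat = alamouti_combine(y[0], y[1], h[0], h[1])
gain = np.abs(h[0])**2 + np.abs(h[1])**2
print(s1_hat / gain, s2_hat / gain)    # close to the transmitted s1, s2
```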
