  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Parallel VLSI Architectures for Multi-Gbps MIMO Communication Systems

January 2011 (has links)
In wireless communications, the use of multiple antennas at both the transmitter and the receiver is a key technology for enabling high data rate transmission without additional bandwidth or transmit power. Multiple-input multiple-output (MIMO) schemes are widely used in many wireless standards, allowing higher throughput through spatial multiplexing techniques. MIMO soft detection poses significant challenges to receiver design because the detection complexity increases exponentially with the number of antennas. As next-generation wireless systems push toward multi-Gbps data rates, there is a great need for high-throughput, low-complexity soft-output MIMO detectors. A brute-force implementation of the optimal MIMO detection algorithm would consume enormous power and is not feasible in current technology. We propose a reduced-complexity soft-output MIMO detector architecture based on a trellis-search method: we convert the MIMO detection problem into a shortest-path problem and introduce path reduction and path extension algorithms that reduce the search complexity while still maintaining sufficient soft information for detection. We avoid the missing counter-hypothesis problem by keeping multiple paths during the trellis search. The proposed trellis-search algorithm is data-parallel and is therefore very suitable for high-speed VLSI implementation. Compared with conventional tree-search based detectors, the proposed trellis-based detector offers a significant improvement in detection throughput and area efficiency, and has great potential for next-generation Gbps wireless systems by achieving very high throughput with good error performance. The soft information generated by the MIMO detector is processed by a channel decoder, e.g. a low-density parity-check (LDPC) decoder or a Turbo decoder, to recover the original information bits.
The channel decoder is another computation-intensive block in a MIMO receiver system-on-chip (SoC). We present high-performance LDPC decoder and Turbo decoder architectures that achieve data rates above 1 Gbps. Further, a configurable decoder architecture that can be dynamically reconfigured to support both LDPC codes and Turbo codes is developed to support multiple 3G/4G wireless standards. We present ASIC and FPGA implementation results for the various MIMO detectors, LDPC decoders, and Turbo decoders, and discuss in detail their computational complexity and throughput performance.
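The trellis-search idea can be illustrated with a toy hard-output detector. This is a minimal sketch, not the dissertation's architecture: it assumes the channel has been QR-triangularized so the search cost decomposes per antenna, and it keeps only the single best partial path per constellation node at each stage (the thesis's soft-output detector keeps multiple paths per node to preserve counter-hypotheses). All names are illustrative.

```python
import numpy as np

def trellis_detect(y, R, symbols):
    """Hard-output MIMO detection as a shortest-path search on a trellis.

    y: received vector after QR preprocessing (y = R x + noise),
    R: upper-triangular channel matrix, symbols: real constellation points.
    Antennas are trellis stages; at each stage only the best partial path
    ending in each constellation node survives (the 'path reduction' idea).
    """
    n = len(y)
    paths = [(0.0, [])]  # (accumulated cost, symbols for antennas i..n-1)
    for i in range(n - 1, -1, -1):  # R is upper triangular: detect bottom-up
        best = {}
        for cost, hist in paths:
            for k, s in enumerate(symbols):
                # interference from the already-hypothesized symbols x_{i+1..n-1}
                interf = sum(R[i, i + 1 + j] * hist[j] for j in range(len(hist)))
                c = cost + (y[i] - R[i, i] * s - interf) ** 2
                if k not in best or c < best[k][0]:
                    best[k] = (c, [s] + hist)
        paths = list(best.values())  # one surviving path per trellis node
    return min(paths)[1]  # shortest surviving path
```

In the noiseless case the true transmit vector accumulates zero cost and always survives the per-node pruning, so the search recovers it exactly.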
282

Carry Trading & Uncovered Interest Rate Parity : An overview and empirical study of its applications

Tafazoli, Farid, Westman, Mathias January 2011 (has links)
The thesis examines whether uncovered interest rate parity held over a 10-year period between Japan and Australia/Norway/USA. Monthly data collected from February 2001 to December 2010 are used, through regression and correlation analysis, to test whether the theory holds. The thesis also includes a simulated portfolio showing how a carry trading strategy could have been exercised over the period, demonstrating that an investor can indeed profit from such trades at low risk. The thesis concludes that uncovered interest rate parity does not hold in the long term and that some opportunities for low-risk profit do exist.
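The mechanics behind such a portfolio can be sketched in a few lines. This is an illustration of the one-period carry-trade return identity, not the thesis's actual portfolio construction; the rates and quoting convention are hypothetical.

```python
def carry_return(i_target, i_fund, spot_t, spot_t1):
    """One-period carry-trade excess return: borrow at the funding rate
    i_fund (e.g. JPY), invest at the target rate i_target (e.g. AUD).
    Spot rates are quoted as funding currency per unit of target
    currency, so a rising spot helps the carry trader. Rates are
    per-period (e.g. monthly) decimals."""
    return (1 + i_target) * spot_t1 / spot_t - (1 + i_fund)
```

Under uncovered interest rate parity the expected spot move exactly offsets the differential, so the expected excess return is zero; the thesis's finding that UIP fails in the long run is what leaves room for profit.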
283

Efficient Decoding Algorithms for Low-Density Parity-Check Codes / Effektiva avkodningsalgoritmer för low density parity check-koder

Blad, Anton January 2005 (has links)
Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory. We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and to stop updating the messages for a bit whenever its reliability exceeds the threshold. We also consider some modifications to this scheme, including a dynamic threshold more suitable for codes with cycles, and a scheme with soft thresholds that allows a decision that has proved wrong to be revoked. By exploiting the bits' different rates of convergence we are able to achieve an efficiency of up to 50% at a bit error rate below 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.
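The stopping rule can be grafted onto a standard iterative decoder. Below is a minimal min-sum decoder with the reliability-threshold idea: a bit whose posterior |LLR| exceeds the threshold is frozen and its messages are no longer updated. The thesis's reliability measure, scheduling, and threshold details may differ; this is a sketch with illustrative names.

```python
import numpy as np

def decode_minsum_freeze(H, llr_ch, threshold, iters=20):
    """Min-sum LDPC decoding with threshold-based bit freezing.
    H: parity-check matrix (0/1 array, row weight >= 2),
    llr_ch: channel LLRs (positive means bit 0 more likely)."""
    m, n = H.shape
    msg_vc = np.tile(llr_ch, (m, 1)) * H        # variable-to-check messages
    frozen = np.zeros(n, dtype=bool)
    post = llr_ch.astype(float)
    for _ in range(iters):
        # check-to-variable: sign product and min magnitude of the others
        msg_cv = np.zeros_like(msg_vc)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = [msg_vc[i, k] for k in idx if k != j]
                sign = np.prod(np.sign(others))
                msg_cv[i, j] = sign * min(abs(v) for v in others)
        # posterior LLRs; frozen bits keep their previous posterior
        post_new = llr_ch + msg_cv.sum(axis=0)
        post = np.where(frozen, post, post_new)
        frozen |= np.abs(post) > threshold
        # variable-to-check update only for bits still active
        for j in np.flatnonzero(~frozen):
            for i in np.flatnonzero(H[:, j]):
                msg_vc[i, j] = post[j] - msg_cv[i, j]
        hard = (post < 0).astype(int)
        if not (H @ hard % 2).any():  # all parity checks satisfied
            break
    return hard
```

Freezing converged bits is what yields the reported efficiency: late iterations touch only the slowly converging bits, saving computation and memory traffic.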
284

The role of unobserved heterogeneity in transition to higher parity : evidence from Italy using Multiscopo survey

Carioli, Alessandra January 2009 (has links)
The paper uses data from the 2003 Italian Multiscopo Survey to estimate the effects of education on fertility and, in particular, to determine how and to what degree unobserved heterogeneity influences the estimated effects, that is, how unobserved heterogeneity might bias estimates of the effect of education on the transition to first, second and third births. The peculiarity of this study is the implementation of a multiprocess approach, which allows a broader and more efficient view of the phenomenon by jointly studying the transitions to first, second and third or higher order births. In doing this I use control variables, in particular the educational level of the mother and of close family members (i.e. partner and grandmother), to detect possible influences of education on childbearing timing. Moreover, this topic has not yet been analysed using Italian data, in particular Multiscopo Survey data, and it may produce interesting comparisons with other European countries, where the topic has already been addressed. I show that the number of siblings is the variable with a significant and relevant effect in all the models considered, and that the education of the woman's partner has a non-monotonic effect on the transition to childbearing. Moreover, the inclusion of unobserved characteristics of women, which enter positively and significantly, plays an important role in understanding the transition to childbearing.
285

International Fisher Effect: A Reexamination Within Co-integration and DSUR Frameworks

Ersan, Eda 01 December 2008 (has links) (PDF)
The International Fisher Effect (IFE) is a theory in international finance which asserts that the spot exchange rate between two countries should move in the opposite direction of the interest rate differential between those countries. The aim of this thesis is to analyze whether differences in nominal interest rates between countries and the movements of the spot exchange rates between their currencies tend to move together over the long run. The presence of IFE is tested among the G-5 countries and Turkey for the period 1985:1 to 2007:12. The long-run relationship is estimated with the Johansen co-integration method, and supportive evidence is found for all country pairs. Individually modelled equations are further tested with the Dynamic SUR (DSUR) method. The DSUR equations that include the Turkish currency provide supportive evidence for IFE, in that higher interest rates in favor of Turkey would cause depreciation of the Turkish Lira. The magnitude of the effect is found to be lower than expected, which indicates that there might be other factors in the economy, such as inflation rates, that affect exchange rate movements.
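The regression form of the hypothesis is easy to state in code. The sketch below simulates data in which IFE holds exactly up to noise and recovers a slope near one; it is illustrative only and uses none of the thesis's data.

```python
import numpy as np

# Testable IFE form: regress the change in the log spot rate on the
# nominal interest differential; under IFE the slope should be close to 1.
rng = np.random.default_rng(0)
diff = rng.normal(0.0, 0.02, 240)              # monthly interest differentials
ds = 1.0 * diff + rng.normal(0.0, 0.001, 240)  # spot change: IFE plus noise
beta, alpha = np.polyfit(diff, ds, 1)          # slope, intercept
```

The thesis's Johansen and DSUR estimations are the rigorous versions of this idea, handling non-stationarity and cross-equation correlation that plain OLS ignores.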
286

Impact of general purchasing power accounting on Greek accounts

Baralexis, Spyridon K. January 1989 (has links)
This study addressed the inflation accounting problem with respect to Greece. This problem had been unaddressed despite the serious implications it may have for micro- and macro-decision making, given the high and persistent inflation Greece has sustained since 1973. To accomplish the above purpose, the general significance of inflation accounting, as well as its specific significance for Greece, was established by means of the existing inflation accounting literature and the economic setting of Greece. Following this, the relevance of GPPA rather than CCA to Greek financial reporting was established through the correspondence between specific features of GPPA and specific characteristics of the Greek setting. After having established the a priori relevance of GPPA for Greece, the potential usefulness of GPPA to Greek users of accounts was established on an empirical basis as well. For this purpose the impact of GPPA on Greek accounts was approximated ex ante through detailed restatement procedures and estimation techniques. It was found that inflation has a serious impact on earnings, and especially on such important (for decision making) financial parameters as the tax rate, the dividend payout ratio, and the return on capital employed. This impact of inflation on earnings does not seem to be systematic, and hence it cannot be estimated from HCA numbers. Therefore, GPPA should be adopted, at least on a supplementary (to HCA) basis, if the inflation rate continues to be as high as it was in the period examined by the study (i.e. 25% or so). In addition to the main conclusion above, other conclusions drawn from the empirical findings are as follows: 1. The Composite Age Technique used (mainly in the USA) for the restatement of fixed assets and depreciation does not work at all in the Greek case. In contrast, the Dichotomous Year Technique in the first place, and the Equal Additions Technique in the second place, may be used for adjusting fixed assets not only in developing countries like Greece but perhaps in developed countries as well. 2. The operating costs of GPPA can be reduced by restating fixed assets and depreciation on an annual rather than monthly basis. 3. The Greek government should perhaps reconsider the taxes imposed on corporate net profits in times of high inflation, because the effective tax rate was found to differ substantially from the nominal one. 4. There are serious implications for Greek businesses in the finding that, in real terms, dividends are paid out of capital rather than out of income. 5. The profitability of Greek companies is low when measured in real terms; businessmen should therefore make every effort to improve it, while the Greek government should reconsider the price controls it imposes.
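The core GPPA restatement step is a simple price-index ratio. The sketch below shows it with hypothetical figures; the index values and amounts are not from the study.

```python
def restate_gppa(historical_cost, index_at_acquisition, index_current):
    """General purchasing power accounting: restate a historical-cost
    amount into current monetary units using a general price index.
    Restating annually rather than monthly, as the study suggests,
    reduces the operating cost of the procedure."""
    return historical_cost * index_current / index_at_acquisition
```

For example, an asset bought for 1,000,000 monetary units when the index stood at 100 is restated to 1,250,000 once the index reaches 125.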
287

Performance Evaluation of Low Density Parity Check Forward Error Correction in an Aeronautical Flight Environment

Temple, Kip 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / In some flight test scenarios the telemetry link is noise-limited at long slant ranges or during signal fade events caused by antenna pattern nulls. In these situations, a mitigation technique such as forward error correction (FEC) can add several decibels to the link margin. The particular FEC code discussed in this paper is a variant of a low-density parity-check (LDPC) code and is coupled with SOQPSK modulation in the hardware tested. This paper briefly covers lab testing of the flight-ready hardware and then presents flight test results comparing a baseline uncoded telemetry link with an LDPC-coded telemetry link. This is the first known test dedicated to this specific FEC code in a real-world environment, with a flight profile tailored to assess the viability of an LDPC-coded telemetry link.
288

Coded Modulation for High Speed Optical Transport Networks

Batshon, Hussam George January 2010 (has links)
At a time when almost 1.75 billion people around the world use the Internet on a regular basis, optical communication over optical fibers, used in long-distance and high-demand applications, has to be capable of providing higher communication speed and reliability. In recent years, strong demand has been driving the dense wavelength division multiplexing network upgrade from 10 Gb/s per channel to more spectrally efficient 40 Gb/s or 100 Gb/s per wavelength channel, and beyond. 100 Gb/s Ethernet is currently under standardization, and in a couple of years 1 Tb/s Ethernet is expected to be standardized as well for applications such as local area networks (LANs) and wide area networks (WANs). The major concern at such high data rates is the degradation in signal quality due to linear and nonlinear impairments, in particular polarization mode dispersion (PMD) and intrachannel nonlinearities. Moreover, higher-speed transceivers are expensive, so the required rates are preferably achieved using commercially available components operating at lower speeds. In this dissertation, different LDPC-coded modulation techniques are presented that offer higher spectral efficiency and/or power efficiency, in addition to aggregate rates of up to 1 Tb/s per wavelength. These modulation formats are based on bit-interleaved coded modulation (BICM) and include: (i) three-dimensional LDPC-coded modulation using hybrid direct and coherent detection, (ii) multidimensional LDPC-coded modulation, (iii) subcarrier-multiplexed four-dimensional LDPC-coded modulation, (iv) hybrid subcarrier/amplitude/phase/polarization LDPC-coded modulation, and (v) iterative polar quantization based LDPC-coded modulation.
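The BICM principle underlying formats (i) through (v) can be sketched compactly: coded bits are interleaved and then mapped, in groups, onto a Gray-labelled constellation. The toy QPSK mapper below is illustrative only; the dissertation's formats use larger, multidimensional constellations.

```python
# Gray-labelled QPSK: adjacent constellation points differ in one bit,
# so a nearest-neighbour symbol error corrupts only a single coded bit.
GRAY_QPSK = {
    (0, 0): 1 + 1j, (0, 1): -1 + 1j,
    (1, 1): -1 - 1j, (1, 0): 1 - 1j,
}

def bicm_map(coded_bits, interleaver):
    """Bit-interleaved coded modulation in miniature: permute the coded
    bits through the interleaver, then map each bit pair to a QPSK symbol."""
    permuted = [coded_bits[i] for i in interleaver]
    return [GRAY_QPSK[(permuted[k], permuted[k + 1])]
            for k in range(0, len(permuted), 2)]
```

In the full scheme the interleaver sits between the LDPC encoder and the mapper, decorrelating the bit errors seen by the iterative decoder.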
289

Robust High Throughput Space-Time Block Coded MIMO Systems

Pau, Nicholas January 2007 (has links)
In this thesis, we present a space-time coded system which achieves high throughput and good performance with low processing delay, using low-complexity detection and decoding. Initially, Hamming codes are used in a simple interleaved bit-mapped coded modulation (BMCM) structure, concatenated with Alamouti's orthogonal space-time block codes. The good performance achieved by this system indicates that higher throughput is possible while maintaining performance. An analytical bound on the performance of this system is presented. We also develop a class of low-density parity-check codes which allows flexible throughput-versus-performance tradeoffs. We then focus on a rate-2 quasi-orthogonal space-time block code structure which enables us to achieve an overall throughput of 5.6 bits/symbol period with good performance and relatively simple decoding using iterative parallel interference cancellation. We show that this can be achieved through a bit-mapped coded modulation structure using parallel short low-density parity-check codes. The absence of interleavers here reduces processing delay significantly. The proposed system is shown to perform well on flat Rayleigh fading channels with a wide range of normalized fade rates, and to be robust to channel estimation errors. A comparison with bit-interleaved coded modulation (BICM) is also provided.
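Alamouti's scheme, used in the first system above, is standard and compact enough to show directly. This sketch encodes two symbols over two antennas and two symbol periods, and shows the linear combining that decouples them at a single receive antenna (perfect channel knowledge assumed; noise omitted).

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti's rate-1 orthogonal STBC for two transmit antennas:
    rows are symbol periods, columns are antennas.
    Period 1 sends (s1, s2); period 2 sends (-s2*, s1*)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining of the two received samples r1, r2 given channel
    gains h1, h2. Orthogonality makes the two estimates decouple: each
    equals (|h1|^2 + |h2|^2) times the corresponding transmitted symbol."""
    s1_est = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_est = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_est, s2_est
```

The decoupling is what gives the scheme full transmit diversity with only linear processing, which is why it pairs well with the low-complexity decoding the thesis targets.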
290

Breast cancer screening with mammography of women 40-49 years in Sweden / Mammografiscreening i ålder 40-49 år i Sverige

Hellquist, Barbro Numan January 2014 (has links)
Background: The debate regarding the lower age limit for mammography service screening is old and lively, a product in part of the lower breast cancer risk at younger ages as well as the limited data available for studies of the younger age group. Recently the idea of inviting only high-risk groups has gained momentum; however, high risk might not be equivalent to greater benefit from screening. There is therefore a need for information on the effectiveness of screening as it relates to young women and to specific risk groups. To this end, this thesis evaluates mammography screening for women 40 to 49 years old in terms of breast cancer mortality reduction, both in total and in subgroups based on breast cancer risk factors. Overdiagnosis from mammography screening is also evaluated for women 40 to 49 years old. In addition, the thesis presents a statistical method to estimate this effectiveness and to test for differences in effectiveness between subgroups, adjusted for non-compliance and contamination.
Methods: The studies of this thesis are based on data from the Screening of Young Women (SCRY) database, which contains detailed information on diagnosis, death, screening exposure and risk factors for breast cancer cases, together with population size by year (1986 to 2005) and municipality, for women in Sweden between 40 and 49 years old. The material was divided into a study group consisting of the counties that invited women aged 40-49 to mammography screening, and a contemporaneous control group consisting of the counties that did not. Effectiveness was estimated in terms of rate ratios for two different exposures (invitation to screening and participation in screening), and overdiagnosis from subsequent screening was estimated with adjustment for lead time bias. Defining a reference period enabled adjustment for possible underlying differences in breast cancer mortality and incidence. A statistical model for adjusting for non-compliance and contamination in randomised controlled trials was further developed to allow for such adjustment in cohort studies, using a Poisson model with a log-linear structure for exposure and background risk.
Results: During the study period (1986-2005), there were 619 and 1205 breast cancer deaths, and 6047 and 7790 breast cancer cases, in the study group and the control group, respectively. For women between 40 and 49 years old, the breast cancer mortality reduction was estimated at 26% [95% CI, 17 to 34%] for those invited to screening and 29% [95% CI, 20 to 38%] for those attending screening. Subgroup-specific effectiveness was also estimated: the RR estimates for the high-risk groups based on the risk factors parity, age at birth of first child, and socio-economic status were equal to or higher than those of the low-risk groups. The new statistical method showed that the decrease in effectiveness with parity was not a statistically significant trend. Overdiagnosis from subsequent screening for 40 to 49 year old women was estimated at 1% [95% CI, -6 to 8%], i.e. not statistically significant.
Conclusion: The relative effectiveness of screening for breast cancer with mammography for women aged 40 to 49 appears to be comparable to that for older women. These findings, and the fact that there was no statistically significant overdiagnosis from subsequent screening, support inviting women 40 to 49 years old to screening. High-risk screening, for example of nulliparous women aged 40 to 49, might be an alternative in countries where population-based screening of all women between 40 and 49 is not possible. However, the matter of risk factors and the effect of their combinations is complex, and risk-group screening presents ethical and practical difficulties. The new statistical model is a useful tool for analysing cohorts with exposed and non-exposed populations where non-compliance and contamination are a potential source of bias.
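The headline estimates are rate ratios with confidence intervals on the log scale. The helper below shows the standard large-sample form of that computation; it is a generic sketch with made-up numbers and does not reproduce the thesis's adjustments for non-compliance and contamination.

```python
import math

def rate_ratio_ci(deaths_study, py_study, deaths_control, py_control, z=1.96):
    """Rate ratio of study vs control group with a 95% CI, using the
    usual log-scale normal approximation: SE(log RR) = sqrt(1/d1 + 1/d0).
    deaths_*: event counts; py_*: person-years at risk."""
    rr = (deaths_study / py_study) / (deaths_control / py_control)
    se = math.sqrt(1 / deaths_study + 1 / deaths_control)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)
```

A rate ratio of about 0.74 with a CI excluding 1 would correspond to the 26% mortality reduction reported for invited women.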
