141

Price Discovery in the Natural Gas Markets of the United States and Canada

Olsen, Kyle, December 2010
The dynamics of the U.S. and Canadian natural gas spot markets are evolving through deregulation policies and technological advances. Economic theory suggests that these markets will be integrated; the key question is the extent of integration among them. This thesis characterizes the degree of dynamic integration among 11 major natural gas markets, six from the U.S. and five from Canada, and determines each individual market's role in price discovery. It is the first study to include numerous Canadian markets in a North American natural gas market study. Causal flow modeling using directed acyclic graphs, in conjunction with time series analysis, is used to explain the relationships among the markets. Daily gas price data from 1994 to 2009 are used. The 11 natural gas market prices are tied together by nine long-run cointegrating relationships. All markets enter the cointegration space, providing evidence that the markets are integrated, though results show the degree of integration varies by region. Further results indicate no clear price leader exists among the 11 markets. The Dawn market is exogenous in contemporaneous time, while the Sumas market is an information sink. Henry Hub plays a significant role in the price discovery of markets in the U.S. Midwest and Northeast, but contributes little to markets in the West. The uncertainty of a market's price depends primarily on markets located in nearby regions. Policy makers may use information on market integration for important policy matters in efforts to attain efficiency, and gas traders benefit from knowing the price discovery relationships.
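
A sense of the machinery involved can be given with a short sketch. The following uses the Johansen trace test and a vector error-correction model (VECM) from Python's statsmodels on simulated daily hub prices; the hub names, cointegrating rank, and data-generating process are illustrative assumptions, not the thesis's actual data or results.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 1000  # roughly four years of daily prices

# Simulate three cointegrated log-price series sharing one common trend.
common = np.cumsum(rng.normal(0, 0.02, n))      # shared stochastic trend
prices = pd.DataFrame({
    "henry_hub": common + rng.normal(0, 0.01, n),
    "dawn":      common + rng.normal(0, 0.01, n),
    "sumas":     common + rng.normal(0, 0.01, n),
})

# Johansen trace test: how many long-run cointegrating relationships?
res = coint_johansen(prices, det_order=0, k_ar_diff=2)
for r, (stat, crit) in enumerate(zip(res.lr1, res.cvt[:, 1])):
    print(f"rank <= {r}: trace stat {stat:.1f} vs 5% critical value {crit:.1f}")

# Fit a VECM with the chosen rank. The alpha (loading) matrix shows how each
# market adjusts toward long-run equilibrium; a near-zero row suggests that
# market is weakly exogenous, i.e., a candidate price leader.
vecm = VECM(prices, k_ar_diff=2, coint_rank=2, deterministic="co").fit()
print(vecm.alpha)
```
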
142

Fine Granularity Video Compression Technique and Its Application to Robust Video Transmission over Wireless Internet

Su, Yih-ching, 22 December 2003
This dissertation deals with (a) a fine granularity video compression technique and (b) its application to robust video transmission over the wireless Internet. First, two wavelet-domain motion estimation algorithms, HMRME (Half-pixel Multi-Resolution Motion Estimation) and HSDD (Hierarchical Sum of Double Difference Metric), are proposed to give a wavelet-based FGS (Fine Granularity Scalability) video encoder either low-complexity or high-performance characteristics. Second, a VLSI-friendly, high-performance embedded coder, ABEC (Array-Based Embedded Coder), is built to encode the motion compensation residue as a bitstream with fine granularity scalability. Third, an analysis of loss-rate prediction over a Gilbert channel with loss-rate feedback is presented, together with several optimal FEC (Forward Error Correction) assignment schemes applicable to any real-time FGS video transmission system. In addition to this theoretical work, and with a view toward future study of embedded systems for wireless FGS video transmission, an initial FPGA-based MPEG-4 video encoder has also been implemented.
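
The Gilbert channel mentioned above is a two-state Markov packet-loss model: packets are lost with different probabilities in the Good and Bad states, and the channel switches states with transition probabilities p (Good to Bad) and q (Bad to Good). A minimal simulation sketch, with illustrative parameter values rather than values from the dissertation:

```python
import numpy as np

def gilbert_losses(n_packets, p=0.05, q=0.4, loss_good=0.01, loss_bad=0.5,
                   seed=0):
    """Return a boolean array: True where a packet is lost."""
    rng = np.random.default_rng(seed)
    lost = np.empty(n_packets, dtype=bool)
    bad = False  # start in the Good state
    for i in range(n_packets):
        lost[i] = rng.random() < (loss_bad if bad else loss_good)
        # Good -> Bad with prob p; Bad stays Bad with prob 1 - q
        bad = (rng.random() < p) if not bad else (rng.random() >= q)
    return lost

losses = gilbert_losses(100_000)
print(f"empirical loss rate: {losses.mean():.4f}")

# Stationary probability of the Bad state is p / (p + q), which predicts
# the long-run loss rate and is the quantity loss-rate feedback estimates.
pi_bad = 0.05 / (0.05 + 0.4)
print(f"predicted loss rate: {pi_bad * 0.5 + (1 - pi_bad) * 0.01:.4f}")
```
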
143

none

Huang, Yi-Hsuan, 27 June 2007
With the liberalization of financial markets, the prevalence of international trade, and the prosperity of foreign exchange markets, investors can hedge, speculate, or engage in interest arbitrage in these markets. Market efficiency is therefore worthy of extensive investigation and analysis in international finance. According to the simple market efficiency hypothesis, there should be a long-run relationship between the spot exchange rate and the forward exchange rate if the foreign exchange market is efficient. Accordingly, this study first examines whether a long-run relationship exists between the spot and forward exchange rates using linear cointegration theory, and thereby tests whether the simple market efficiency hypothesis holds in practice. Next, using a nonlinear threshold cointegration approach, it investigates whether there is an apparent threshold effect among the variables and examines the adjustment behavior in the long-run equilibrium process. The results show that there is an apparent threshold effect and that the adjustment behavior in the long-run equilibrium process is not uniform across regimes.
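
A minimal sketch of the two-step idea on simulated data: estimate the long-run spot/forward relationship, then fit a two-regime threshold model to the equilibrium error so the adjustment speed may differ across regimes. The fixed threshold of zero is a simplification; a full threshold cointegration study would search for the threshold over a grid.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
forward = np.cumsum(rng.normal(0, 0.01, n))        # random-walk forward rate
spot = 0.002 + forward + rng.normal(0, 0.005, n)   # cointegrated spot rate

# Step 1: long-run regression; z is the equilibrium error
z = sm.OLS(spot, sm.add_constant(forward)).fit().resid

# Step 2: two-regime threshold error-correction regression:
#   dz_t = rho_hi * z_{t-1} * 1[z_{t-1} >= tau]
#        + rho_lo * z_{t-1} * 1[z_{t-1} <  tau] + e_t
dz, z1 = np.diff(z), z[:-1]
tau = 0.0                                          # fixed threshold (simplification)
X = np.column_stack([z1 * (z1 >= tau), z1 * (z1 < tau)])
fit = sm.OLS(dz, X).fit()
print("adjustment speeds above/below the threshold:", fit.params)
```
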
144

Modeling Monthly Electricity Demand In Turkey For 1990-2006

Kucukbahar, Duygu, 01 February 2008
Factors such as economic development, rapid population growth, and climate change have increased electricity demand in Turkey, as in other countries. Thus, using correct methods to estimate short-, medium-, and long-term electricity demand forms a basis for countries to develop their energy strategies. In this study, Turkey's monthly electricity demand is estimated. First, the effects of natural gas prices and consumption on electricity demand, and the associated elasticities, are investigated with a simple regression model. Although natural gas is known as a substitute for electricity, natural gas consumption and the natural-gas-to-electricity price ratio are found to be nearly inelastic. The second part includes two models: in the first, a cointegration relation is investigated among the nonstationary series of the industrial production index, electricity consumption per capita, and electricity prices. An error correction model is then formed with an additional average-temperature variable, and electricity demand is forecast 12 months ahead. In the second, heating degree-days and cooling degree-days are used instead of the average-temperature variable, and a new error correction model is formed. The first model performs better than the second, reflecting the seasonality of electricity consumption over the year. The results of both models are also compared with previous studies to investigate the effect of different weather variables.
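
Degree-day variables of the kind used in the second model can be computed directly from a daily mean-temperature series. A minimal sketch, assuming the common 18 °C base temperature (the thesis's exact convention may differ) and a synthetic temperature series for illustration:

```python
import numpy as np
import pandas as pd

BASE = 18.0  # base temperature in Celsius (assumed convention)

def monthly_degree_days(daily_temp: pd.Series) -> pd.DataFrame:
    """daily_temp: daily mean temperatures indexed by date."""
    hdd = (BASE - daily_temp).clip(lower=0)   # heating demand proxy
    cdd = (daily_temp - BASE).clip(lower=0)   # cooling demand proxy
    out = pd.DataFrame({"HDD": hdd, "CDD": cdd})
    return out.resample("MS").sum()           # monthly totals

# Illustrative seasonal temperature cycle over the study period
dates = pd.date_range("1990-01-01", "2006-12-31", freq="D")
temp = pd.Series(12 + 10 * np.sin(2 * np.pi * (dates.dayofyear - 100) / 365.25),
                 index=dates)
print(monthly_degree_days(temp).head())
```
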
145

A Segmentation-Based Multiple-Baseline Stereo (SMBS) Scheme for Acquisition of Depth in 3-D Scenes

TANIMOTO, Masayuki, FUJII, Toshiaki, TOUJI, Bunpei, KIMOTO, Tadahiko, IMORI, Takashi, 20 February 1998
No description available.
146

Study the relationship between real exchange rate and interest rate differential – United States and Sweden

Wang, Zhiyuan, January 2007
This paper uses the cointegration method and an error-correction model to re-examine the relationship between the real exchange rate and expected interest rate differentials, including the cumulated current account balance, over floating exchange rate periods. As indicated by the dynamic model, I find a long-run relationship among the variables using the Johansen cointegration method. The final conclusion is that empirical evidence shows the error-correction model leads to a good real exchange rate forecast.
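
A minimal sketch of the error-correction step on simulated data: the change in the real exchange rate is regressed on the lagged equilibrium error from the long-run regression plus the short-run change in the interest differential. Variable names and the data-generating process are illustrative assumptions, not the paper's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
diff_i = np.cumsum(rng.normal(0, 0.1, n))      # interest rate differential
rer = 0.5 * diff_i + rng.normal(0, 0.2, n)     # real exchange rate

# Long-run (cointegrating) regression; u is the equilibrium error
u = sm.OLS(rer, sm.add_constant(diff_i)).fit().resid

# ECM: d(rer)_t = c + a * u_{t-1} + b * d(diff_i)_t + e_t
# A negative a means the rate reverts toward long-run equilibrium.
y = np.diff(rer)
X = np.column_stack([u[:-1], np.diff(diff_i)])
ecm = sm.OLS(y, sm.add_constant(X)).fit()
print(ecm.params)  # [const, error-correction coef, short-run coef]
```
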
147

SOQPSK with LDPC: Spending Bandwidth to Buy Link Margin

Hill, Terry; Uetrecht, Jim, October 2013
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Over the past decade, SOQPSK has been widely adopted by the flight test community, and low-density parity-check (LDPC) codes are now in widespread use in many applications. This paper defines the waveform and presents the bit error rate (BER) performance of SOQPSK coupled with a rate-2/3 LDPC code. The scheme described here expands the transmission bandwidth by approximately 56% (which is still 22% less than the legacy PCM/FM modulation), for the benefit of improving link margin by over 10 dB at BER = 10⁻⁶.
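
As a quick consistency check using only the abstract's own figures: a rate-2/3 code alone implies 1/(2/3) = 1.5x (50%) expansion, so the stated 56% suggests a few percent of additional overhead (an assumption here, e.g. framing or sync), and the quoted numbers imply PCM/FM occupies about twice the bandwidth of uncoded SOQPSK.

```python
coded_expansion = 1.56   # coded SOQPSK bandwidth vs. uncoded SOQPSK (from abstract)
vs_pcmfm = 1 - 0.22      # coded SOQPSK is 22% narrower than PCM/FM (from abstract)

# Implied PCM/FM-to-uncoded-SOQPSK bandwidth ratio
print(f"implied PCM/FM vs. uncoded SOQPSK: {coded_expansion / vs_pcmfm:.2f}x")
# 1/(2/3) = 1.50, so 56% expansion implies ~4% overhead beyond the code rate.
print(f"overhead beyond code rate: {coded_expansion / 1.5 - 1:.1%}")
```
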
148

Qubit Implementation and Quantum Code Error Correction

Χιώτης, Γιώργος, 09 October 2014
The construction of a full-scale quantum computer is a challenge for modern science. The quantum computer gives us hope that sometime in the near future we will be able to solve problems faster and more efficiently than a classical computer does today. For example, Shor's quantum factoring algorithm [3] achieves an exponential speedup over classical methods, which means that the RSA encryption protocol will no longer be as secure as it is today. This will result in large changes in communications and transactions in the near future. This thesis describes the principles a quantum system must satisfy to be considered a quantum computer, how a qubit, the unit of quantum information, is implemented, and finally how quantum information is encoded so that errors in it can be corrected. We begin by formulating the principles of quantum mechanics as they emerge from experiment. We continue with superconductivity, the phenomenon that allows us to manipulate the quantum properties of matter macroscopically, along with related phenomena such as the Meissner effect, which enable us to build the circuit that implements the qubit. Finally, we describe theoretically a universal set of quantum gates and the error-correcting circuits of a quantum code.
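
The simplest instance of the encoding-for-correction idea sketched above is the 3-qubit bit-flip code: encode a|0> + b|1> as a|000> + b|111>, read the stabilizer syndrome (Z1Z2, Z2Z3) after an error, and flip the implicated qubit back. A minimal numpy simulation, illustrative rather than drawn from the thesis:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def on_qubit(op, k):
    """Embed a 1-qubit operator on qubit k (0-indexed) of a 3-qubit state."""
    ops = [I2, I2, I2]
    ops[k] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

a, b = 0.6, 0.8                        # arbitrary normalized amplitudes
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b      # encoded logical qubit

state = on_qubit(X, 1) @ state         # bit-flip error on qubit 1

# The corrupted state is a +/-1 eigenstate of the stabilizers Z0Z1 and Z1Z2,
# so their expectation values read out the syndrome deterministically.
s1 = np.real(state.conj() @ on_qubit(Z, 0) @ on_qubit(Z, 1) @ state)
s2 = np.real(state.conj() @ on_qubit(Z, 1) @ on_qubit(Z, 2) @ state)
syndrome = (int(round(s1)), int(round(s2)))
flip = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[syndrome]
if flip is not None:
    state = on_qubit(X, flip) @ state  # apply the correction

print(state[0b000], state[0b111])      # back to (a, b): error corrected
```
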
149

Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes

Planjery, Shiva Kumar, January 2013
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by the optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. 
The new adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability close to the theoretical limit achieved by ML decoding.
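
The FAID update maps themselves are specific to the dissertation, but the flavor of hard-decision iterative decoding on the BSC can be illustrated with a classic bit-flipping decoder, a simple ancestor of these schemes. The (7,4) Hamming parity-check matrix below is a toy stand-in for an LDPC code.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding of BSC output y with checks H."""
    est = y.copy()
    for _ in range(max_iter):
        unsat = (H @ est) % 2            # which parity checks fail
        if not unsat.any():
            return est, True             # all checks satisfied: done
        votes = H.T @ unsat              # failing checks touching each bit
        est[np.argmax(votes)] ^= 1       # flip the most-implicated bit
    return est, False

# (7,4) Hamming code as a small example parity-check matrix
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[0] ^= 1                                # one BSC error on the zero codeword
print(bit_flip_decode(H, y))             # recovers the all-zero codeword
```
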
150

Detection and Decoding for Magnetic Storage Systems

Radhakrishnan, Rathnakumar, January 2009
The hard-disk storage industry is at a critical juncture: current technologies are incapable of achieving densities beyond 500 Gb/in², a limit that will be reached in a few years. Many radically new storage architectures have been proposed, which, along with advanced signal processing algorithms, are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems. Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement the algebraic codes in current use. Two methods are described to improve their performance in such systems. In the first method, the detector is modified to incorporate auxiliary LDPC parity checks; using graph-theoretic algorithms, a method to incorporate the maximum number of such checks for a given complexity is provided. In the second method, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than on the input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory greater than 1, which are the most important in practice. This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it is capable of making magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components such as receivers, equalizers, and detectors for this channel, the equivalence of this system to a two-track/two-head system is established and its performance is analyzed. Consequently, it is shown that it is preferable to store information using this system than using a binary system with inter-track interference. Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of the laser spot on transition characteristics and non-linear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
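
Detection for the ISI channels that magnetic recording presents is typically trellis-based. A textbook sketch, not the dissertation's detector: a dicode channel y_k = x_k - x_{k-1} with inputs in {-1, +1}, detected by a 2-state Viterbi algorithm.

```python
import numpy as np

def viterbi_dicode(y, x_init=-1):
    """ML sequence detection for y_k = x_k - x_{k-1} + n_k, x_k in {-1,+1}."""
    states = (-1, +1)                     # state = previous input symbol
    cost = {s: (0.0 if s == x_init else np.inf) for s in states}
    path = {s: [] for s in states}
    for obs in y:
        new_cost, new_path = {}, {}
        for s_new in states:
            # Best predecessor for a branch whose noiseless output is s_new - s_old
            s_old = min(states,
                        key=lambda s: cost[s] + (obs - (s_new - s)) ** 2)
            new_cost[s_new] = cost[s_old] + (obs - (s_new - s_old)) ** 2
            new_path[s_new] = path[s_old] + [s_new]
        cost, path = new_cost, new_path
    return np.array(path[min(cost, key=cost.get)])

rng = np.random.default_rng(3)
x = rng.choice([-1, 1], size=24)
y = x - np.concatenate(([-1], x[:-1])) + rng.normal(0.0, 0.3, x.size)
print(np.array_equal(viterbi_dicode(y), x))   # True: sequence recovered
```
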
