About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Embedded Computer for Space Applications suitable for Linux / Linuxanpassad inbyggnadsdator för rymdbruk

Dahlberg, Johan. January 2003
This report briefly describes the special requirements on a computer board for use in space. In particular, component selection and ways of mitigating soft and hard errors are discussed. Furthermore, one implementation of a low-cost, relatively high-performance computer that will work in the harsh space environment is presented. The report is primarily intended for those familiar with digital design who need an introduction to the construction of space or other high-reliability hardware.

As the quality (resolution) of imagers, spectrometers and other data sources in scientific satellite payloads increases, there is an increasing demand for more processing power to compress or otherwise process the data before transmitting it over the limited-bandwidth microwave downlink to Earth. Scientific instruments are usually mission-specific and have rather low budgets, so there is a need for a powerful computer board that can be used for a number of missions in order to keep engineering costs down.
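The soft-error mitigation mentioned above has a standard software-level counterpart. As a generic illustration (not a technique the report necessarily uses), the following minimal Python sketch shows triple modular redundancy (TMR): a computation is run three times and the results are majority-voted, so a single soft error corrupts at most one copy and is masked.

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant results; masks one bad copy."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    # All three copies disagree: an uncorrectable multiple error.
    raise RuntimeError("TMR vote failed: no two copies agree")

def run_with_tmr(computation, *args):
    """Run a computation three times and vote on the results."""
    return tmr_vote(*(computation(*args) for _ in range(3)))

# Hypothetical example: checksumming a telemetry frame.
frame = bytes(range(32))
print(run_with_tmr(lambda f: sum(f) & 0xFFFF, frame))
```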
312

Efficient Message Passing Decoding Using Vector-based Messages

Grimnell, Mikael; Tjäder, Mats. January 2005
The family of Low Density Parity Check (LDPC) codes is a strong candidate for Forward Error Correction (FEC) in future communication systems due to its strong error correction capability. Most LDPC decoders use the Message Passing algorithm, an iterative algorithm that passes messages between the code's variable nodes and check nodes. Only recently has computation power become sufficient to make Message Passing on LDPC codes feasible. Although locally simple, LDPC codes are usually large, which increases the required computation power. Earlier work on LDPC codes concentrated on the binary Galois field, GF(2), but it has been shown that codes over higher-order fields have better error correction capability. However, the complexity of the most efficient LDPC decoder, the Belief Propagation decoder, grows quadratically when moving to higher-order Galois fields. Transmission with M-PSK signalling is a common technique to increase spectral efficiency; the information is transmitted as the phase angle of the signal.

The focus of this Master's Thesis is on simplifying Message Passing decoding for inputs from M-PSK signals transmitted over an AWGN channel. Symbols from higher-order Galois fields were mapped to M-PSK signals, since M-PSK is very bandwidth-efficient and the information can be found in the angle of the signal. Several simplifications of Belief Propagation have been developed and tested. The most promising is the Table Vector Decoder, a Message Passing decoder that uses a table lookup technique for the check node operations and vector summation for the variable node operations. The table lookup approximates the check node operation of a Belief Propagation decoder, while vector summation is an equivalent of the variable node operation. Monte Carlo simulations have shown that the Table Vector Decoder can achieve performance close to that of Belief Propagation. Its capability depends on the number of reconstruction points and their placement. The main advantage of the Table Vector Decoder is that its complexity is unaffected by the Galois field used; instead, there is a memory requirement that depends on the desired number of reconstruction points.
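The thesis's Table Vector Decoder operates on vector messages over higher-order fields and is not reproduced here. For orientation only, the following is a minimal sketch of the widely used min-sum variant of Message Passing for a binary (GF(2)) LDPC code — the parity-check matrix, LLR sign convention, and iteration limit are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=50):
    """Min-sum Message Passing for a binary LDPC code.

    H   : (m, n) parity-check matrix over GF(2), entries 0/1
    llr : length-n channel log-likelihood ratios (> 0 favours bit 0)
    Returns the hard-decision codeword estimate.
    """
    m, n = H.shape
    msg_cv = np.zeros((m, n))            # check-to-variable messages
    hard = np.zeros(n, dtype=int)
    for _ in range(max_iters):
        # Variable-to-check: total belief minus the incoming edge (extrinsic).
        total = llr + msg_cv.sum(axis=0)
        msg_vc = np.where(H == 1, total - msg_cv, 0.0)
        # Check-to-variable: min-sum approximation of the tanh rule --
        # product of the other edges' signs times their minimum magnitude.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            mags = np.abs(msg_vc[i, idx])
            signs = np.sign(msg_vc[i, idx])
            sign_prod = signs.prod()
            for t, j in enumerate(idx):
                others = np.delete(mags, t)
                msg_cv[i, j] = sign_prod * signs[t] * others.min()
        hard = ((llr + msg_cv.sum(axis=0)) < 0).astype(int)
        if not (H @ hard % 2).any():     # all parity checks satisfied
            break
    return hard

# Toy example: three checks on six bits, all-zero codeword over BPSK/AWGN.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
rng = np.random.default_rng(0)
llr = 2.0 + rng.normal(size=6)           # noisy LLRs favouring all-zero
print(min_sum_decode(H, llr))            # expect [0 0 0 0 0 0]
```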
313

Konsumausgaben und Aktienmarktentwicklung in Deutschland : ein kointegriertes vektorautoregressives Modell

Nastansky, Andreas; Strohe, Hans Gerhard. January 2011
Vector error correction models (VECM) allow dependencies between the changes of several potentially endogenous variables to be modelled simultaneously. The idea of modelling a long-run equilibrium together with the short-run dynamics generalises the single-equation error correction model (ECM) to a multi-equation approach (VECM) for vectors of variables. The number of cointegration relations and the coefficient matrices are estimated with the Johansen procedure. The estimation and workings of a VECM for consumption, income and stock prices in Germany are demonstrated using a generalised consumption function. Applying the Beveridge-Nelson (BN) decomposition to vector-autoregressive processes further makes it possible to extract the cyclical components of the cointegrated time series and to estimate the degree of co-movement between these transitory components.
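For reference, the VECM described here has a standard form (textbook notation, not copied from the paper). With $y_t$ the vector of consumption, income and stock prices,

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i} + \varepsilon_t ,
```

where $\beta$ holds the cointegrating vectors defining the long-run equilibrium, $\alpha$ the error-correction (adjustment) coefficients, and the $\Gamma_i$ the short-run dynamics; the Johansen procedure estimates the rank of $\Pi = \alpha\beta'$, i.e. the number of cointegration relations.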
314

Staatsverschuldung und Inflation : eine empirische Analyse für Deutschland

Mehnert, Alexander; Nastansky, Andreas. January 2012
This study analyses the relation between public debt and inflation. Theoretical transmission channels from public debt via the money supply and long-term interest rates to inflation are presented. Building on these considerations, the variables public debt, consumer price index, money supply M3 and long-term interest rates are analysed within a vector error correction model, with all variables entering the model as potentially endogenous. The empirical analysis covers Germany from the first quarter of 1991 to the fourth quarter of 2010. The cointegration relations are identified and the vector error correction model is estimated using the Johansen procedure.
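Both of these papers determine the cointegration rank and estimate the VECM with the Johansen procedure. A minimal sketch of that workflow in Python with statsmodels, run on simulated data — the series names, lag order, and deterministic terms are illustrative assumptions, not the papers' specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(0)
T = 160  # e.g. 40 years of quarterly data

# Simulate two I(1) series sharing a common stochastic trend, plus an
# independent random walk -- by construction one cointegration relation.
trend = np.cumsum(rng.normal(size=T))
data = pd.DataFrame({
    "debt": trend + rng.normal(scale=0.5, size=T),
    "cpi":  0.8 * trend + rng.normal(scale=0.5, size=T),
    "rate": np.cumsum(rng.normal(size=T)),
})

# Johansen trace test: how many cointegration relations?
rank_test = select_coint_rank(data, det_order=0, k_ar_diff=2,
                              method="trace", signif=0.05)
print(rank_test.summary())

# Fit the VECM with the selected rank.
res = VECM(data, k_ar_diff=2, coint_rank=rank_test.rank,
           deterministic="ci").fit()
print("loading matrix alpha:\n", res.alpha)
print("cointegrating vectors beta:\n", res.beta)
```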
315

Transmitting Quantum Information Reliably across Various Quantum Channels

Ouyang, Yingkai. January 2013
Transmitting quantum information across quantum channels is an important task. However, quantum information is delicate and easily corrupted. We address the task of protecting quantum information from an information-theoretic perspective: we encode message qudits into a quantum code, send the encoded quantum information across the noisy quantum channel, then recover the message qudits by decoding. In this dissertation, we discuss the coding problem from several perspectives.

The noisy quantum channel is one of the central aspects of the quantum coding problem, and hence quantifying the noisy quantum channel from the physical model is an important problem. We work with an explicit physical model -- a pair of initially decoupled quantum harmonic oscillators interacting with a spring-like coupling, where the bath oscillator is initially in a thermal-like state. In particular, we treat the completely positive and trace-preserving map on the system as a quantum channel, and study the truncation of the channel by truncating its Kraus set. We thereby derive the matrix elements of the Choi-Jamiolkowski operator of the corresponding truncated channel, which are truncated transition amplitudes. Finally, we give a computable approximation for these truncated transition amplitudes with explicit error bounds, and numerically perform a case study of the oscillators in the off-resonant and weakly coupled regime.

In the context of truncated noisy channels, we revisit the notion of approximate error correction of finite-dimension codes. We derive a computationally simple lower bound on the worst-case entanglement fidelity of a quantum code when the truncated recovery map of Leung et al. is rescaled. As an application, we apply our bound to construct a family of multi-error-correcting amplitude damping codes that are permutation-invariant. This gives an explicit example where the specific structure of the noisy channel allows code design outside the stabilizer formalism via purely algebraic means.

We study lower bounds on the quantum capacity of adversarial channels, where we restrict the selection of quantum codes to the set of concatenated quantum codes. An adversarial channel is a quantum channel where an adversary corrupts a fixed fraction of the qudits sent across it in the most malicious way possible. The best known rates for communicating over adversarial channels are given by the quantum Gilbert-Varshamov (GV) bound, which is known to be attainable with random quantum codes. We generalize the classical result of Thommesen to the quantum case, thereby demonstrating the existence of concatenated quantum codes that asymptotically attain the quantum GV bound. The outer codes are quantum generalized Reed-Solomon codes, and the inner codes are independently chosen random stabilizer codes, where the rates of the inner and outer codes lie in a specified feasible region.

We next study upper bounds on the quantum capacity of some low-dimension quantum channels. The quantum capacity of a quantum channel is the maximum rate at which quantum information can be transmitted reliably across it, given arbitrarily many uses. While random quantum codes are known to attain the quantum capacity, the quantum capacity of many classes of channels is undetermined, even for channels of low input and output dimension; for example, depolarizing channels are important quantum channels, yet tight numerical bounds on their capacity are not known.

We obtain upper bounds on the quantum capacity of some unital and non-unital channels -- two-qubit Pauli channels, two-qubit depolarizing channels, two-qubit locally symmetric channels, shifted qubit depolarizing channels, and shifted two-qubit Pauli channels -- using the coherent information of some degradable channels. We make extensive use of channel twirling and of Smith and Smolin's method of constructing degradable extensions of quantum channels. The degradable channels we introduce, study and use are two-qubit amplitude damping channels. Exploiting the notion of covariant quantum channels, we give sufficient conditions for the quantum capacity of a degradable channel to be the optimal value of a concave program with linear constraints, and show that our two-qubit degradable amplitude damping channels have this property.
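The capacity bounds above rest on the coherent information of degradable channels. As a toy illustration of the quantity involved — a single-qubit amplitude damping channel, not the two-qubit channels constructed in the thesis — the sketch below computes the coherent information I_c(ρ, N) = S(N(ρ)) − S(N^c(ρ)) from the Kraus operators and maximises it over diagonal inputs; the damping parameter is an illustrative assumption.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy of a density matrix, in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log2(vals)).sum())

def coherent_information(kraus, rho):
    """I_c(rho, N) = S(N(rho)) - S(N^c(rho)), from the Kraus operators.

    The environment state output by the complementary channel has
    entries sigma_ij = Tr(K_i rho K_j^dagger).
    """
    out = sum(K @ rho @ K.conj().T for K in kraus)
    env = np.array([[np.trace(Ki @ rho @ Kj.conj().T) for Kj in kraus]
                    for Ki in kraus])
    return entropy(out) - entropy(env)

gamma = 0.2   # damping probability (illustrative)
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

# Amplitude damping is degradable for gamma < 1/2, so maximising the
# coherent information over inputs yields the quantum capacity;
# diagonal input states suffice by symmetry.
best = max(coherent_information([K0, K1], np.diag([1 - p, p]))
           for p in np.linspace(0.001, 0.999, 999))
print(f"capacity estimate for gamma = {gamma}: {best:.4f} qubits/use")
```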
316

Information flow at the quantum-classical boundary

Beny, Cedric. January 2008
The theory of decoherence aims to explain how macroscopic quantum objects become effectively classical. Understanding this process could help in the search for the quantum theory underlying gravity, and suggest new schemes for preserving the coherence of technological quantum devices. The process of decoherence is best understood in terms of information flow within a quantum system, and between the system and its environment. We develop a novel way of characterizing this information, and give a sufficient condition for its classicality. These results generalize previous models of decoherence, clarify the process by which a phase-space based on non-commutative quantum variables can emerge, and provide a possible explanation for the universality of the phenomenon of decoherence. In addition, the tools developed in this approach generalize the theory of quantum error correction to infinite-dimensional Hilbert spaces. We characterize the nature of the information preserved by a quantum channel by the observables which exist in its image (in the Heisenberg picture). The sharp observables preserved by a channel form an operator algebra which can be characterized in terms of the channel's elements. The effect of the channel on these observables can be reversed by another physical transformation. These results generalize the theory of quantum error correction to codes characterized by arbitrary von Neumann algebras, which can represent hybrid quantum-classical information, continuous variable systems, or certain quantum field theories. The preserved unsharp observables (positive operator-valued measures) allow for a finer characterization of the information preserved by a channel. We show that the only type of information which can be duplicated arbitrarily many times consists of coarse-grainings of a single POVM. Based on these results, we propose a model of decoherence which can account for the emergence of a realistic classical phase-space. This model supports the view that the quantum-classical correspondence is given by a quantum-to-classical channel, which is another way of representing a POVM.
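The closing claim — that the quantum-classical correspondence is given by a quantum-to-classical channel, i.e. another representation of a POVM — has a very concrete form: the channel measures a POVM {E_k} and writes the outcome into a classical register, ρ ↦ Σ_k Tr(E_k ρ) |k⟩⟨k|. A minimal sketch, where the particular unsharp POVM chosen is an illustrative assumption:

```python
import numpy as np

def quantum_to_classical(povm, rho):
    """Apply the q->c channel rho -> sum_k Tr(E_k rho) |k><k|."""
    probs = np.array([np.trace(E @ rho).real for E in povm])
    return np.diag(probs)  # diagonal = classical probability vector

# An unsharp qubit POVM: a smeared Z measurement with sharpness s.
s = 0.6
I2, Z = np.eye(2), np.diag([1.0, -1.0])
povm = [0.5 * (I2 + s * Z), 0.5 * (I2 - s * Z)]
assert np.allclose(sum(povm), I2)  # completeness: elements sum to identity

# |+><+| input: the smeared Z measurement is blind to phase,
# so both outcomes are equally likely.
plus = np.full((2, 2), 0.5)
print(np.diag(quantum_to_classical(povm, plus)))  # [0.5 0.5]
```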
319

Diversity and Reliability in Erasure Networks: Rate Allocation, Coding, and Routing

Fashandi, Shervan. January 2012
Erasure networks have recently received significant attention in the literature, as they model both wireless and wireline packet-switched networks. Many packet-switched data networks, such as wireless mesh networks, the Internet, and peer-to-peer networks, can be modeled as erasure networks. In any erasure network, path diversity works by setting up multiple parallel connections between the end points using the topological path redundancy of the network. Our analysis of diversity over erasure networks studies the problem of rate allocation (RA) across multiple independent paths, coding over erasure channels, and the trade-off between rate and diversity gain, in three consecutive chapters.

In Chapter 2, Forward Error Correction (FEC) is applied across multiple independent paths to enhance end-to-end reliability. We prove that the probability of irrecoverable loss (P_E) decays exponentially with the number of paths. Furthermore, the RA problem across independent paths is studied. Our objective is to find the optimal RA, i.e. the allocation which minimizes P_E. Using a memoization technique, a heuristic suboptimal algorithm with polynomial runtime is proposed for RA over a finite number of paths. This algorithm converges to the asymptotically optimal RA when the number of paths is large. For a practical number of paths, simulation results demonstrate the close-to-optimal performance of the proposed algorithm.

Chapter 3 addresses the problem of lower-bounding the probability of error (P_E) for any block code over an input-independent channel. We derive a lower bound on P_E for a general input-independent channel and find the necessary and sufficient condition to meet this bound with equality. The rest of the chapter applies this lower bound to three special input-independent channels: the erasure channel, the super-symmetric Discrete Memoryless Channel (DMC), and the q-ary symmetric DMC. It is proved that Maximum Distance Separable (MDS) codes achieve the minimum probability of error over any erasure channel (with or without memory).

Chapter 4 addresses a fundamental trade-off between the rate and the diversity gain of an end-to-end connection in erasure networks. We prove that there exist general erasure networks for which any conventional routing strategy fails to achieve the optimum diversity-rate trade-off. However, for any general erasure graph, we show that there exists a linear network coding strategy which achieves the optimum diversity-rate trade-off. Unlike previous works, which suggest the potential benefit of linear network coding in the error-free multicast scenario (in terms of the achievable rate), our result demonstrates the benefit of linear network coding in the erasure single-source single-destination scenario (in terms of the diversity gain).
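Chapter 2's setting — MDS-style FEC spread over independent lossy paths, where the block is lost only if fewer than k of the n packets arrive — is easy to simulate, and comparing allocations is exactly the RA problem the chapter optimises. A minimal Monte Carlo sketch; the path count, loss rates, and allocations below are illustrative assumptions:

```python
import numpy as np

def irrecoverable_loss_prob(alloc, loss, k, trials=200_000, seed=1):
    """Monte Carlo estimate of P_E for an (n, k) MDS code over paths.

    alloc : packets sent on each path (sums to n)
    loss  : per-packet erasure probability of each path
    k     : packets needed to reconstruct the block (MDS property)
    """
    rng = np.random.default_rng(seed)
    received = np.zeros(trials, dtype=int)
    for n_i, e_i in zip(alloc, loss):
        # Independent per-packet erasures on this path.
        received += rng.binomial(n_i, 1.0 - e_i, size=trials)
    return float(np.mean(received < k))

loss = [0.05, 0.10, 0.20]   # per-path erasure probabilities (illustrative)
print(irrecoverable_loss_prob([10, 10, 10], loss, k=24))  # spread over 3 paths
print(irrecoverable_loss_prob([30, 0, 0], loss, k=24))    # best path only
```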
320

Applications of Random Graphs to Design and Analysis of LDPC Codes and Sensor Networks

19 August 2005
This thesis investigates a graph- and information-theoretic approach to the design and analysis of low-density parity-check (LDPC) codes and wireless networks. In this work, both LDPC codes and wireless networks are considered as random graphs. The thesis proposes solutions to important theoretical and practical open problems in LDPC coding, and for the first time introduces a framework for the analysis of finite wireless networks.

LDPC codes are considered one of the best classes of error-correcting codes. In this thesis, several problems in this area are studied. First, an improved decoding algorithm for LDPC codes is introduced. Compared to standard iterative decoding, the proposed algorithm can achieve several orders of magnitude lower bit error rates while having almost the same complexity. Second, this work presents a variety of bounds on the achievable performance of different LDPC coding scenarios. Third, it studies rate-compatible LDPC codes, establishes fundamental properties of these codes, and gives guidelines for their optimal design. Finally, it studies non-uniform and unequal error protection using LDPC codes and explores their applications to data storage systems and communication networks. It presents a new error-control scheme for volume holographic memory (VHM) systems and shows that the new method can increase the storage capacity by more than fifty percent compared to previous schemes.

This work also investigates the application of random graphs to the design and analysis of wireless ad hoc and sensor networks. It introduces a framework for the analysis of finite wireless networks; such a framework was previously lacking in the literature. Using this framework, network properties such as capacity, connectivity, coverage, and routing and security algorithms are studied. Finally, the connectivity properties of large-scale sensor networks are investigated, showing how the unreliability of sensors, link failures, and non-uniform distribution of nodes affect connectivity.
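The sensor-network analysis treats the network as a random graph. A minimal sketch of the standard model — a random geometric graph on the unit square, where the node count, connection radius, and failure probability are illustrative assumptions — estimates how sensor unreliability degrades connectivity:

```python
import networkx as nx
import numpy as np

def connectivity_prob(n, radius, p_fail, trials=500, seed=2):
    """Monte Carlo estimate that the surviving sensors stay connected."""
    rng = np.random.default_rng(seed)
    connected = 0
    for _ in range(trials):
        # Sensors placed uniformly at random; link if within `radius`.
        g = nx.random_geometric_graph(n, radius,
                                      seed=int(rng.integers(1 << 31)))
        # Each sensor independently fails with probability p_fail.
        g.remove_nodes_from([v for v in list(g) if rng.random() < p_fail])
        if g.number_of_nodes() > 0 and nx.is_connected(g):
            connected += 1
    return connected / trials

for p_fail in (0.0, 0.1, 0.3):
    print(p_fail, connectivity_prob(n=100, radius=0.18, p_fail=p_fail))
```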
