211 |
ECC Video: An Active Second Error Control Approach for Error Resilience in Video Coding / Du, Bing Bing, January 2003 (has links)
Supporting video communication in mobile environments has long been an objective of telecommunication network engineers, and it is a basic requirement of third-generation mobile communication systems. This dissertation explores the possibility of optimizing the utilization of shared, scarce radio channels for live video transmission over a GSM (Global System for Mobile telecommunications) network and realizing error-resilient video communication in unfavorable channel conditions, especially mobile radio channels. The main contribution is the adoption of an SEC (Second Error Correction) approach using ECC (Error Correction Coding) based on a punctured convolutional coding scheme to cope with residual errors at the application layer and enhance the error resilience of a compressed video bitstream. The approach is developed further for improved performance in different circumstances, with additional enhancements involving intra-frame relay and interleaving, and with a combination of the approach with packetization. Simulation results of applying the various techniques to the test video sequences Akiyo and Salesman are presented and analyzed for performance comparison with the conventional video coding standard; the proposed approach shows consistent improvements under these conditions. For instance, to cope with random residual errors, the simulation results show that when the residual BER (Bit Error Rate) reaches 10^-4, the video output reconstructed from a bitstream protected by the standard resynchronization approach is of unacceptable quality, while the proposed scheme delivers an error-free video output more efficiently. When the residual BER reaches 10^-3, the standard approach fails to deliver a recognizable video output, while the SEC scheme can still correct all the residual errors with a modest bit-rate increase.
In bursty residual error conditions, the proposed scheme also outperforms the resynchronization approach. Future works to extend the scope and applicability of the research are suggested in the last chapter of the thesis.
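To see why even a small residual BER overwhelms a compressed bitstream without application-layer protection, a quick back-of-the-envelope simulation can be sketched (the frame size below is a hypothetical figure chosen for illustration, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
frame_bits = 200_000          # hypothetical size of one compressed frame, in bits

# Count corrupted bits per frame at each residual BER
counts = {}
for ber in (1e-4, 1e-3):
    flips = rng.random(frame_bits) < ber   # Bernoulli bit-flip events
    counts[ber] = int(flips.sum())
    print(f"BER {ber:g}: ~{counts[ber]} bit errors per frame")
```

At 10^-4 this already corrupts tens of bits per frame, and at 10^-3 hundreds, which is why the standard resynchronization approach degrades so sharply between those two operating points.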
|
212 |
On Network Coding and Network-Error Correction / Prasad, Krishnan, January 2013 (has links) (PDF)
The paradigm of network coding was introduced as a means to conserve bandwidth (or, equivalently, to increase throughput) in information flow networks. Network coding exploits the fact that, unlike physical commodities, information can be replicated and coded together at the nodes of the network. As a result, routing can be strictly suboptimal compared to network coding in many classes of information flow networks. Network-error correction is the art of designing network codes such that the sinks of the network can decode the required information in the presence of errors on the edges of the network, known as network-errors. The network coding problem on a network with given sink demands has the following three major subproblems, which naturally extend to the study of network-error correcting codes, since these can be viewed as a special class of network codes: (a) existence of a network code that satisfies the demands; (b) efficient construction of such a network code; (c) the minimum alphabet size required for such a network code to exist.
This thesis primarily considers linear network coding and error correction and investigates solutions to these issues for certain classes of network coding and error correction problems in acyclic networks. Our contributions are broadly summarised as follows.
(1) We propose the use of convolutional codes for multicast network-error correction. Depending upon the number of network-errors to be corrected, convolutional codes are designed at the source of the multicast network so that these errors can be corrected at the sinks of the network as long as they are separated by a certain number of time instants (for which we give a bound). In contrast to block codes for network-error correction, which require large field sizes, using convolutional codes enables the field size of the network code to be small. We discuss the performance of such networks under the BSC edge error model.
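The thesis designs its codes for the network setting; as background, a standalone rate-1/2 convolutional encoder with the widely used (7, 5) octal generators can be sketched as follows (this is a generic textbook encoder, not the specific construction of the thesis):

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators 7,5 octal)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # slide the 3-bit input window
        out.append(bin(state & g1).count("1") % 2)   # parity against generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity against generator 2
    return out

# The impulse response reproduces the generator taps: g1 = 111, g2 = 101
print(conv_encode([1, 0, 0]))   # [1, 1, 1, 0, 1, 1]
```

Puncturing, as used in the SEC-style schemes the thesis cites, would simply delete agreed-upon output positions from this stream to raise the code rate.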
(2) Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. In our work, we give an algorithm which, starting from a given network-error correcting code, obtains another network code over a small field with the same error-correcting capability as the original code. The major step in our algorithm is to find a least-degree irreducible polynomial which is coprime to another, large-degree polynomial. We utilize the algebraic properties of finite fields to implement this step so that it becomes much faster than the brute-force method. A recently proposed algorithm for network coding using small fields can be seen as the special case of our algorithm with no network-errors.
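The abstract contrasts its fast algebraic step with the brute-force method; that baseline can be sketched minimally, with GF(2) polynomials encoded as integer bitmasks (all function names here are illustrative, not from the thesis):

```python
def gf2_mod(a, b):
    """Remainder of a divided by b, both GF(2)[x] polynomials as int bitmasks."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)   # cancel the leading term
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm over GF(2)[x]."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p):
    """Brute-force irreducibility: trial division by every lower-degree polynomial."""
    d = p.bit_length() - 1
    return d >= 1 and all(gf2_mod(p, q) != 0 for q in range(2, 1 << d))

def least_irreducible_coprime(big, max_degree=8):
    """Smallest irreducible polynomial coprime to `big` -- the brute-force search
    that the thesis's finite-field techniques are designed to avoid."""
    for p in range(2, 1 << (max_degree + 1)):
        if is_irreducible(p) and gf2_gcd(p, big) == 1:
            return p
    return None

# x^3 + x = x (x+1)^2 over GF(2): both degree-1 irreducibles divide it,
# so the least irreducible coprime polynomial is x^2 + x + 1 (0b111).
print(bin(least_irreducible_coprime(0b1010)))   # 0b111
```

The cost of this search grows exponentially with the candidate degree, which is what motivates the algebraic shortcut described in the abstract.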
(3) Matroids are discrete mathematical objects which generalize the notion of linear independence of sets of vectors. It has recently been observed that matroids and network coding share a deep connection, and several important results in network coding have been obtained using these connections to matroid theory. In our work, we establish that matroids with certain special properties correspond to networks with error-detecting and error-correcting properties. We call such networks matroidal error-detecting (or, equivalently, error-correcting) networks. We show that a network has a scalar linear network-error detecting (or correcting) code if and only if it is associated with a representable matroid with certain special properties. We also use these ideas to construct matroidal error-correcting networks along with their associated matroids. In the case of representable matroids, these algorithms give rise to scalar linear network-error correcting codes on such networks. Finally, we show that linear network coding is not sufficient for the general network-error detection (and correction) problem with arbitrary demands.
(4) Problems related to network coding for acyclic, instantaneous networks have been extensively dealt with in the past. In contrast, not much attention has been paid to networks with delays. In our work, we elaborate on the existence, construction and minimum field size issues of network codes for networks with integer delays. We show that the delays associated with the edges of the network cannot be ignored, and in fact turn out to be advantageous, disadvantageous or immaterial, depending on the topology of the network and the network coding problem considered. In the process, we also show multicast network codes which involve only delaying the symbols arriving at the nodes of the networks and coding the delayed symbols over a binary field, thereby making coding operations at the nodes less complex.
(5) In the usual network coding framework, for a given set of network demands over an arbitrary acyclic network with integer delays assumed for the links, the output symbols at the sink nodes at any given time instant are an Fq-linear combination of input symbols generated at different time instants, where Fq denotes the field over which the network operates. The sinks therefore have to use sufficiently many memory elements in order to decode the entire stream of demanded information symbols simultaneously. We propose a scheme using a ν-point finite-field discrete Fourier transform (DFT) which converts the output symbols at the sink nodes at any given time instant into an Fq-linear combination of the input symbols generated during the same time instant, without making use of memory at the intermediate nodes. We call this transforming the acyclic network with delay into ν instantaneous networks (for sufficiently large ν). We show that, under certain conditions, there exists a network code satisfying the sink demands in the usual (non-transform) approach if and only if there exists a network code satisfying the sink demands in the transform approach.
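The transform approach rests on the convolution theorem: a DFT turns (circular) convolution by the delay polynomials into pointwise multiplication, one independent "instantaneous" network per frequency. The thesis works over a finite field Fq; the same identity can be illustrated over the complex field with NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
h = rng.standard_normal(n)   # stand-in for an edge transfer (delay) polynomial
x = rng.standard_normal(n)   # one block of input symbols

# Circular convolution computed directly in the "time" domain...
y = np.array([sum(h[k] * x[(t - k) % n] for k in range(n)) for t in range(n)])

# ...becomes pointwise multiplication in the DFT domain.
y_dft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

assert np.allclose(y, y_dft)
```

In the finite-field version, the n-th roots of unity of C are replaced by an element of multiplicative order ν in an extension of Fq, but the algebraic structure of the argument is identical.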
|
213 |
Prevendo a taxa de juros no Brasil: uma abordagem combinada entre o modelo de correção de erros e o modelo de fatores / Maeda Junior, Tomoharu, 14 August 2012 (has links)
O objetivo do presente trabalho é verificar se o modelo que combina correção de erros e fatores extraídos de grandes conjuntos de dados macroeconômicos produz previsões mais precisas das taxas de juros do Brasil em relação aos modelos VAR, VECM e FAVAR. Para realizar esta análise, foi utilizado o modelo sugerido por Banerjee e Marcellino (2009), o FAVECM, que consiste em agregar o mecanismo de correção de erros ao modelo proposto por Bernanke, Boivin e Eliasz (2005), o FAVAR. A hipótese é que o FAVECM possui uma formulação teórica mais geral. Os resultados mostram que para o mercado brasileiro o FAVECM apresentou ganhos significativos de previsão para as taxas mais longas e horizontes de previsão maiores. / The objective of the present work is to examine whether a model that combines error correction and factors extracted from large macroeconomic data sets offers higher forecasting accuracy for interest rates in Brazil when compared to the VAR, VECM and FAVAR models. To conduct this analysis, we use the econometric methodology introduced by Banerjee and Marcellino (2009), the FAVECM, which adds error correction terms to the model introduced by Bernanke, Boivin and Eliasz (2005), the FAVAR. The hypothesis is that the FAVECM has several conceptual advantages, given that it is a nesting (more general) specification. The results show that, for the Brazilian market, the FAVECM presented significant forecasting gains for longer-maturity rates and longer forecast horizons.
|
214 |
Grassmannian Fusion Frames for Block Sparse Recovery and Its Application to Burst Error Correction / Mukund Sriram, N, January 2013 (has links) (PDF)
Fusion frames and block sparse recovery are of interest in signal processing and communication applications. In these applications it is required that the fusion frame have some desirable properties. One such requirement is that the fusion frame be tight and its subspaces form an optimal packing in a Grassmannian manifold. Such fusion frames are called Grassmannian fusion frames.
Grassmannian frames are known to be optimal dictionaries for sparse recovery as they have minimum coherence. By analogy Grassmannian fusion frames are potential candidates as optimal dictionaries in block sparse processing. The present work intends to study fusion frames in finite dimensional vector spaces assuming a specific structure useful in block sparse signal processing.
The main focus of our work is the design of Grassmannian fusion frames and their implication in block sparse recovery. We will consider burst error correction as an application of block sparsity and fusion frame concepts.
We propose two new algebraic methods for designing Grassmannian fusion frames. The first method uses a Fourier matrix and difference sets to obtain a partial Fourier matrix which forms a Grassmannian fusion frame. This fusion frame has a specific structure, and the parameters of the fusion frame are determined by the type of difference set used.
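The full construction yields fusion frames (collections of subspaces); the underlying vector-frame idea can be sketched with the classical (7, 3, 1) quadratic-residue difference set, whose partial Fourier matrix gives seven unit vectors in C^3 that are equiangular at the Welch bound (the specific numbers here are a textbook example, not necessarily the parameters used in the thesis):

```python
import numpy as np

v, D = 7, [1, 2, 4]                 # (7,3,1) quadratic-residue difference set mod 7
omega = np.exp(2j * np.pi / v)

# Partial Fourier matrix: keep only the rows of the 7x7 DFT matrix indexed by D.
F = np.array([[omega ** (d * j) for j in range(v)] for d in D]) / np.sqrt(len(D))

G = np.abs(F.conj().T @ F)          # magnitudes of pairwise inner products
off = G[~np.eye(v, dtype=bool)]
welch = np.sqrt((v - len(D)) / (len(D) * (v - 1)))   # Welch bound for 7 vectors in C^3

print(np.allclose(off, welch))      # True: equiangular, meeting the Welch bound
```

The difference-set property is what makes every off-diagonal inner product have the same magnitude, which is exactly the Grassmannian (minimum-coherence) condition.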
The second method involves constructing Grassmannian fusion frames from Grassmannian frames which meet the Welch bound. This method uses existing constructions of optimal Grassmannian frames. The method, while fairly general, requires that the dimension of the vector space be divisible by the dimension of the subspaces.
A lower bound which is an analog of the Welch bound is derived for the block coherence of dictionaries, along with conditions that must be satisfied to meet the bound. From these results we conclude that the matrices we construct are optimal for block sparse recovery from a block coherence viewpoint.
There is a strong relation between sparse signal processing and error control coding, and burst errors are known to be block sparse in nature. Here we therefore attempt to solve the burst error correction problem using block sparse signal recovery methods. Using the Grassmannian fusion frames we constructed as optimal dictionaries allows correction of the maximum possible number of errors when used in conjunction with reconstruction algorithms that exploit block sparsity. We also suggest a modification to improve the applicability of the technique, and point out its relationship with a method which appeared previously in the literature.
As an application example, we consider the use of the burst error correction technique for impulse noise cancellation in an OFDM system. Impulse noise is bursty in nature and severely degrades OFDM performance. The Grassmannian fusion frames constructed from a Fourier matrix and difference sets are ideal for use in this application, as they can be easily incorporated into the OFDM system.
|
215 |
De svenska hushållens sparande : Vilka faktorer påverkar sparkvoten? En reflektion under den rådande Corona-pandemin. / Hillefors, Hanna; Isaksson, Nathalie, January 2021 (has links)
The savings ratio for Swedish households is record-breaking, and Sweden, together with the rest of the world, is currently in the middle of a pandemic. What drives individuals to save is based on a number of factors identified by previous research. The purpose of this study is, with previous research as a basis, to investigate which factors affect the savings ratio of Swedish households. Quarterly data for the years 1982–2020 is analyzed in a time series by first testing for unit roots and then for cointegration. The data is then estimated in a multiple linear regression in the form of an Error Correction Model, with the intention of investigating both the short-term and long-term relationships. The results of the study indicate that the variables with a significant impact on the change in the household savings ratio are GDP per capita, inflation, unemployment and consumption, while public savings and the development of the stock market have a significant but less considerable effect. The economic theories that the study finds support for are the theory of precautionary savings and the standard buffer-stock model. / Sparkvoten hos svenska hushåll är rekordhög och Sverige, tillsammans med resten av världen, befinner sig för närvarande mitt i en pandemi. Vad som driver individer till att spara grundar sig i en rad olika faktorer som tidigare forskning kommit fram till. Syftet med denna studie är att, med tidigare forskning som grund, undersöka vilka faktorer som påverkar sparkvoten för svenska hushåll. Kvartalsdata för åren 1982–2020 analyseras i en tidsserie genom att först behandlas för enhetsrötter och sedan kointegration. Därefter skattas de i en multipel linjär regressionsanalys i form av en ”Error Correction Model”, med avsikt att utreda både det kortsiktiga- och långsiktiga sambandet.
Resultatet av studien indikerar att de variabler som har en signifikant betydande påverkan på förändringen i hushållens sparkvot är BNP per capita, inflation, arbetslöshet samt konsumtion, medan offentligt sparande och utveckling av aktiemarknaden har en signifikant men mindre betydande effekt. De ekonomiska teorier som studien finner stöd i är teorin om försiktighetssparandet samt standard buffertlager-modellen.
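The unit-root/cointegration/ECM pipeline described in the abstract can be sketched on synthetic data using the two-step Engle-Granger procedure (the variables, coefficients and noise levels below are invented for illustration; the study's own estimation is richer):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 400
x = np.cumsum(rng.standard_normal(T))          # I(1) driver, e.g. log income
y = 0.5 * x + rng.standard_normal(T) * 0.3     # cointegrated "savings ratio"

# Step 1 (Engle-Granger): estimate the long-run relation, keep its residuals.
beta = np.polyfit(x, y, 1)                     # [slope, intercept]
ect = y - np.polyval(beta, x)                  # error-correction term

# Step 2: regress the short-run change on the driver's change and the lagged residual.
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(T - 1), dx, ect[:-1]])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)

print(coef[2])   # adjustment speed: negative, pulling y back toward equilibrium
```

A significantly negative coefficient on the lagged residual is the signature of error correction: deviations from the long-run relation shrink each period.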
|
216 |
Accurate modeling of noise in quantum error correcting circuits / Gutierrez Arguedas, Mauricio, 07 January 2016 (has links)
A universal, scalable quantum computer will require the use of quantum error correction in order to achieve fault tolerance. The assessment and comparison of error-correcting strategies is performed by classical simulation. However, due to the prohibitive exponential scaling of general quantum circuits, simulations are restricted to specific subsets of quantum operations. This creates a gap between accuracy and efficiency, which is particularly problematic when modeling noise, because most realistic noise models are not efficiently simulable on a classical computer. We have introduced extensions to the Pauli channel, the traditional error channel employed to model noise in simulations of quantum circuits. These expanded error channels are still computationally tractable to simulate, but result in more accurate approximations to realistic error channels at the single-qubit level. Using the Steane [[7,1,3]] code, we have also investigated the behavior of these expanded channels at the logical, error-corrected level. We have found that this behavior depends strongly on whether the error is incoherent or coherent. In general, the Pauli channel will be an excellent approximation to incoherent channels, but an unsatisfactory one for coherent channels, especially because it severely underestimates the magnitude of the error. Finally, we also studied the honesty and accuracy of the expanded channels at the logical level. Our results suggest that these measures can be employed to generate lower and upper bounds to a quantum code's threshold under the influence of a specific error channel.
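The underestimation the abstract mentions can be seen in the simplest case: Pauli-twirling a coherent over-rotation keeps only the diagonal chi-matrix weights, so a rotation by angle θ becomes a bit-flip channel with probability sin^2(θ/2) ≈ θ^2/4, even though the coherent error amplitude scales as θ. A minimal sketch (single qubit, X-axis rotation; this illustrates the standard Pauli-twirl construction, not the thesis's expanded channels):

```python
import numpy as np

theta = 0.1                       # small coherent over-rotation about X
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X   # exp(-i theta X / 2)

# Pauli-twirl approximation: keep only the diagonal chi-matrix weights
# p_k = |Tr(P_k U)|^2 / 4 for P_k in {I, X, Y, Z}.
paulis = [I2, X, np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]
p = np.array([abs(np.trace(P.conj().T @ U)) ** 2 / 4 for P in paulis])

print(p)         # ~[cos^2(theta/2), sin^2(theta/2), 0, 0]
print(1 - p[0])  # Pauli error weight ~ theta^2/4, second order in theta
```

The twirled channel is honest about the error *probability* but discards the off-diagonal (coherent) terms, which is exactly where the magnitude underestimate comes from.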
|
217 |
Financial crisis and household indebtedness in South Africa : an econometric analysis / Meniago, Christelle, January 2012 (has links)
The 2007-2008 US subprime mortgage crisis evolved into a financial crisis that negatively affected many economies in the world, and it was therefore widely referred to as the global financial crisis. Since the beginning of this 2008-2009 financial crisis, South Africa has experienced a significant increase in its household debt-to-income ratio. The main aim of this dissertation is to investigate the prominent factors contributing to the rise in the level of household debt in South Africa. We also study the response of household debt to various shocks originating from the aforementioned crisis. Additionally, within our timeline (1985 Q1-2012 Q1), we extrapolate possible graphical trends in the rise and fall of household indebtedness in South Africa associated with various crises.
Working from past research papers and a theoretical framework developed by Franco Modigliani and Milton Friedman, seven macroeconomic variables are considered to examine the rise of household borrowing relative to income, namely: the real house price index, the consumer price index, real income, the real prime rate, real household consumption expenditure, real gross domestic product and real household savings. Both a long-run cointegration analysis and a short-run error correction model are used to evaluate the relationship between household debt and the chosen variables by estimating a Vector Error Correction Model. Furthermore, Variance Decomposition and the Generalized Impulse Response Function are utilized to assess the response of household debt to various shocks emanating from the 2008-2009 financial crisis. The different models and tests conducted in this research are executed using the statistical software package EViews 7.
Based on the results, household debt was seen to have been fairly affected by the 2008-2009 financial crisis. The cointegration analysis maintains that, in the long run, household borrowing is positively and significantly determined by the consumer price index and real household consumption. In addition, it confirms that household borrowing is negatively affected by real household income and real GDP; the remaining variables were found insignificant. Nevertheless, the short-run error correction model reveals that about 3.6% of the disequilibrium is corrected each quarter for the equilibrium state to be restored. The Variance Decomposition results confirm that South African household debt is mostly affected by shocks from the real house price index, real household income, real household consumption and real household savings, respectively. Furthermore, the Generalized Impulse Response Function results establish a significant positive response of household debt to shocks from the real house price index and real household consumption. The response of debt to shocks from the consumer price index, real household savings and real income is negative, an outcome confirmed by theory. However, the response of debt shows fluctuating behaviour in response to shocks from LRIN, LRPR and LRGDP over the estimated period.
In conclusion, our econometric investigation highlights the main causes of the high levels of household debt in South Africa, both in the short and the long run. The Generalized Impulse Response Functions confirm that shocks like the 2007-2008 financial crisis have a significant impact on the real house price index, the consumer price index, real household consumption and real household savings. The Engle-Granger results show that there exists no significant relationship between household debt and unemployment in South Africa over the period 1980 to 2010; however, we propose that this result might have been significant had quarterly unemployment data been available and included in the main data set. Finally, based on the stability, validity and reliability of our model, we recommend its use to facilitate policy analysis and decision-making regarding household debt levels in South Africa. / Thesis (M.Com. (Economics)), North-West University, Mafikeng Campus, 2012
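A quick implication worth spelling out: an adjustment speed of 3.6% per quarter means deviations from the long-run debt equilibrium die out slowly. The half-life of a shock follows directly:

```python
import math

alpha = 0.036   # share of the disequilibrium corrected each quarter
half_life = math.log(0.5) / math.log(1 - alpha)
print(round(half_life, 1))   # ~18.9 quarters, i.e. almost five years
```

This slow mean reversion is consistent with the dissertation's finding that household indebtedness remained elevated well after the crisis shock itself.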
|
218 |
COMMUNICATIONS OVER AIRCRAFT POWER LINES: A PRACTICAL IMPLEMENTATION / Tian, Hai; Trojak, Tom; Jones, Charles H., 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / This paper presents a practical implementation of a hardware design for transmission of data over aircraft power lines. The intent of such hardware is to significantly reduce the wiring in the aircraft instrumentation system. Potential uses of this technology include pulse code modulation (PCM), Ethernet and other forms of data communications. Details of the field-programmable gate array (FPGA) and printed circuit board (PCB) designs of the digital and analog front end are discussed. The power line is not designed for data transmission: it exhibits considerable noise, multipath effects, and time-varying impedance. Spectral analysis data from an aircraft is presented to indicate the difficulty of the problem at hand. A robust modulation is required to overcome this harsh environment and provide reliable transmission. Orthogonal frequency division multiplexing (OFDM) has been used in the power line communication industry with a great deal of success, and it has been deemed the most appropriate technology for high-speed data transmission on aircraft power lines. Additionally, forward error correction (FEC) techniques are discussed.
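The core OFDM idea the paper relies on, modulating many narrowband subcarriers at once via an inverse FFT and guarding against multipath with a cyclic prefix, can be sketched as an ideal-channel round trip (the subcarrier count, QPSK mapping and prefix length are illustrative choices, not the paper's hardware parameters):

```python
import numpy as np

rng = np.random.default_rng(7)
n_sub = 64                                   # subcarriers per OFDM symbol
bits = rng.integers(0, 2, 2 * n_sub)
qpsk = (1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])   # QPSK mapping

tx = np.fft.ifft(qpsk)                       # OFDM modulation (one symbol)
tx_cp = np.concatenate([tx[-16:], tx])       # cyclic prefix absorbs multipath

rx = tx_cp[16:]                              # receiver drops the prefix
rx_sym = np.fft.fft(rx)                      # demodulation back to subcarriers
assert np.allclose(rx_sym, qpsk)             # ideal channel: symbols recovered
```

On a real power line, each subcarrier would additionally see a complex gain from the time-varying channel, which is why the paper pairs OFDM with forward error correction.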
|
219 |
PERFORMANCE TRADE-OFFS WHEN IMPLEMENTING TURBO PRODUCT CODE FORWARD ERROR CORRECTION FOR AIRBORNE TELEMETRY / Temple, Kip, 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Hardware implementing forward error correction (FEC) is currently available for use by the airborne telemetry system designer. This paper discusses the potential benefits and drawbacks of this technology. Laboratory testing is supplemented with real-world flight testing. Performance results comparing FEC and non-FEC systems are presented for both IRIG-106 Pulse Code Modulation/Frequency Modulation (PCM/FM, or Continuous Phase Frequency Shift Keying, CPFSK, with filtering, or ARTM Tier 0) and Shaped Offset Quadrature Phase Shift Keying, Telemetry Group version (SOQPSK-TG, or ARTM Tier I) waveforms.
|
220 |
THE DESIGN OF A 21st CENTURY TELEMETRY SYSTEM WITH SOQPSK MODULATION AND INTEGRATED CONTROL / Wegener, John A.; Roche, Michael C., 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / This paper describes a telemetry system developed for the EA-18G Flight Test program. The program requires transmission of a number of data streams in IRIG-106 Chapter 4 PCM, Chapter 8 Mux-All 1553, Ethernet, and Fibre Channel formats; the initially requested data rate was in excess of 30 Mbits/sec. The telemetry system must operate at ranges up to about 120 miles, at several test ranges, and with several different aircraft maneuvering configurations. To achieve these requirements, the Flight Test Instrumentation group at Boeing Integrated Defense Systems in Saint Louis developed a telemetry system in conjunction with industry partners and test range customers. The system transmits two telemetry streams with a total aggregate rate on the order of 20 Mbits/sec. Each telemetry stream consists of up to four PCM streams, combined in a Teletronics Technology Corporation (TTC) Miniature Adaptable Real-Time Multiplexer Unit (MARM) data combiner. It uses Nova Engineering multi-mode transmitters capable of transmitting PCM-FM or Shaped Offset Quadrature Phase Shift Keying (SOQPSK). The transmitter also provides Turbo-Product Code (TPC) Forward Error Correction (FEC) to enhance range and improve link performance. Data collection units, purchased from outside vendors or developed by Saint Louis Flight Test Instrumentation, translate Ethernet and Fibre Channel information into traditional PCM streams. A control system developed by Boeing Flight Test Instrumentation provides flexible selection of the streams to be combined into each telemetry stream, and functional control of antenna selection and transmitter operation.
|