About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
591

Investigation of Forward Error Correction Coding Schemes for a Broadcast Communication System

Wang, Xiaohan Sasha January 2013
This thesis investigates four FEC (forward error correction) coding schemes and their suitability for a broadcast system with one energy-rich transmitter and many energy-constrained receivers experiencing a variety of channel conditions. The four schemes are: repetition codes (the baseline scheme); Reed-Solomon (RS) codes; Luby Transform (LT) codes; and a concatenation of RS and LT codes. The schemes were tested on their ability to achieve both a high average probability of successful data reception and a short data reception time at the receivers (a consequence of their limited energy). The code rate (Rc) is fixed at either 1/2 or 1/3. Two statistical channel models were employed: the memoryless channel and the Gilbert-Elliott channel. The investigation considered only the data-link-layer behaviour of the schemes. In the course of the investigation, an improvement to the original LT encoding process was made; the improved method is named LTAM (LT codes with Added Memory). LTAM codes reduce the overhead needed for decoding short messages, with gains observed for messages of up to 10,000 user packets and a maximum overhead reduction of 10% over the original LT codes. The LT-type codes were found to combine a high probability of successful data reception with a flexible switch-off time for the receivers, and to adapt well to different channel characteristics; they are therefore a prototype of the ideal coding scheme this project seeks. This scheme was then developed further by applying an RS code as an inner code to raise the packet-reception success probability. The results show that the LT&RS code tolerates channel errors significantly better than the LT codes alone; the trade-off is a slightly longer reception time and greater decoding complexity.
This LT&RS code was determined to be the scheme that best fulfils the aim of this project: to find a coding scheme with both a high overall probability of data reception and a short overall data reception time. Compared with the baseline repetition code, the improvement is threefold. Firstly, the LT&RS code maintains a full success rate over channels with approximately two orders of magnitude more errors than the repetition code can tolerate, for both channel models and both code rates tested. Secondly, the LT&RS code performs exceptionally well on burst-error channels, maintaining a success rate above 70% on long-burst channels where both the repetition code and the RS code have almost zero success probability. Thirdly, alongside the improved success rates, the data reception time, measured as the number of packets that must be received at the receiver, is reduced by up to 58% for Rc = 1/2 and 158% for Rc = 1/3 relative to both the repetition code and the RS code, at the worst channel error rate for which the LT&RS code still maintains almost 100% success probability.
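The LT mechanism underlying these results can be sketched compactly: each encoded packet is the XOR of a randomly chosen subset of source packets, and the receiver decodes by "peeling" degree-1 packets. The following is a minimal, illustrative sketch only; the degree distribution is a toy one (not the robust soliton distribution of standard LT codes, nor the LTAM modification the thesis proposes), and payloads are single integers rather than byte blocks.

```python
import random

def lt_encode(source, n_packets, rng):
    """Each encoded packet is the XOR of a random subset of source
    packets; the subset size comes from a degree distribution."""
    k = len(source)
    out = []
    for _ in range(n_packets):
        d = rng.choice([1, 1, 2, 2, 3, 4])  # toy distribution, NOT robust soliton
        idx = set(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= source[i]
        out.append((idx, val))
    return out

def lt_decode(packets, k):
    """Peeling decoder: take any degree-1 packet, record its symbol,
    substitute it into every other packet, and repeat until stuck."""
    recovered = {}
    work = [[set(idx), val] for idx, val in packets]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for p in work:
            idx, val = p
            for i in [j for j in idx if j in recovered]:
                idx.discard(i)
                val ^= recovered[i]
            p[1] = val
            if len(idx) == 1:
                (i,) = idx
                if i not in recovered:
                    recovered[i] = val
                    progress = True
    return recovered
```

Decoding succeeds once enough degree-1 packets appear and the substitutions cascade; the overhead required for that cascade on short messages is exactly what the thesis's LTAM modification reduces.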
592

Liquid crystal point diffraction interferometer.

Mercer, Carolyn Regan. January 1995
A new instrument, the liquid crystal point diffraction interferometer (LCPDI), has been developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer (PDI) and adds to it phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction. This dissertation describes the theory of both the PDI and liquid crystal phase control. The design considerations for the LCPDI are presented, including manufacturing considerations. The operation and performance of the LCPDI are discussed, including sections regarding alignment, calibration, and amplitude modulation effects. The LCPDI is then demonstrated using two phase objects: a defocus difference wavefront, and a temperature distribution across a heated chamber filled with silicone oil. The measured results are compared to theoretical or independently measured results and show excellent agreement. A computer simulation of the LCPDI was performed to verify the source of observed periodic phase measurement error. The error stems from intensity variations caused by dye molecules rotating within the liquid crystal layer. Methods are discussed for reducing this error. Algorithms are presented which reduce this error; they are also useful for any phase-stepping interferometer that has unwanted intensity fluctuations, such as those caused by unregulated lasers. It is expected that this instrument will have application in the fluid sciences as a diagnostic tool, particularly in space-based applications where autonomy, robustness, and compactness are desirable qualities. It should also be useful for the testing of optical elements, provided a master is available for comparison.
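The phase-stepping capability added to the PDI enables quantitative analysis because the wrapped phase follows from simple per-pixel arithmetic on the stepped intensity frames. As an illustration only, here is the standard four-step algorithm with pi/2 steps; the dissertation's LCPDI calibration and error-compensating algorithms are more involved than this.

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four frames with pi/2 phase steps:
    I_n = A + B*cos(phi + n*pi/2), n = 0..3."""
    # i3 - i1 = 2B*sin(phi), i0 - i2 = 2B*cos(phi)
    return math.atan2(i3 - i1, i0 - i2)

# synthetic fringe intensities for a known phase (no noise)
A, B, phi = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi + n * math.pi / 2) for n in range(4)]
est = four_step_phase(*frames)  # recovers phi exactly for phi in (-pi, pi]
```

The unwanted intensity fluctuations the dissertation discusses violate the constant-A, constant-B assumption above, which is what produces the periodic phase error the presented algorithms compensate.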
593

LDPC Coding for Magnetic Storage: Low Floor Decoding Algorithms, System Design and Performance Analysis

Han, Yang January 2008
Low-density parity check (LDPC) codes have experienced tremendous popularity due to their capacity-achieving performance. In this dissertation, several different aspects of LDPC coding and its applications to magnetic storage are investigated. One of the most significant issues that impedes the use of LDPC codes in many systems is the error-rate floor phenomenon associated with their iterative decoders. By delineating the fundamental principles, we extend to partial response channels algorithms for predicting the error rate performance in the floor region for the binary-input AWGN channel. We develop three classes of decoding algorithms for mitigating the error floor by directly tackling the cause of the problem: trapping sets. In our experiments, these algorithms provide multiple orders of improvement over conventional decoders at the cost of various implementation complexity increases. Product codes are widely used in magnetic recording systems where errors are both isolated and bursty. A dual-mode decoding technique for Reed-Solomon-code-based product codes is proposed, where the second decoding mode involves maximum-likelihood erasure decoding of the binary images of the Reed-Solomon codewords. By exploring a tape storage application, we demonstrate that this dual-mode decoding system dramatically improves the performance of product codes. Moreover, the complexity added by the second decoding mode is manageable. We also show the performance of this technique on a product code which has an LDPC code in the columns. Run-length-limited (RLL) codes are ubiquitous in today's disk drives. Using RLL codes has enabled drive designers to pack data very efficiently onto the platter surface by ensuring stable symbol-timing recovery. We consider a concatenation system design with an LDPC code and an RLL code as components to simultaneously achieve desirable features such as: soft information availability to the LDPC decoder, the preservation of the LDPC code's structure, and the capability of correcting long erasure bursts. We analyze the performance of the LDPC-coded magnetic recording channel in the presence of media noise. We employ advanced signal processing for the pattern-dependent-noise-predictive channel detectors, and demonstrate that a gain of over 1 dB, or a linear density gain of about 8%, relative to a comparable Reed-Solomon system is attainable by using an LDPC code.
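As a hand-checkable illustration of iterative LDPC decoding, the sketch below implements Gallager-style bit flipping, one of the simplest hard-decision decoders. It uses the (7,4) Hamming parity-check matrix as a small stand-in for a real sparse LDPC matrix, and it is far simpler than the trapping-set-aware, low-floor algorithms the dissertation develops.

```python
def bit_flip_decode(H, bits, max_iters=20):
    """Parallel bit flipping: flip every bit that participates in
    more unsatisfied parity checks than satisfied ones."""
    m, n = len(H), len(H[0])
    bits = list(bits)
    for _ in range(max_iters):
        syndrome = [sum(H[r][c] * bits[c] for c in range(n)) % 2
                    for r in range(m)]
        if not any(syndrome):
            break  # all parity checks satisfied
        for c in range(n):
            checks = [r for r in range(m) if H[r][c]]
            unsatisfied = sum(syndrome[r] for r in checks)
            if 2 * unsatisfied > len(checks):
                bits[c] ^= 1
    return bits

# (7,4) Hamming parity-check matrix: a dense stand-in, not a true LDPC code
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Trapping sets, the cause of the error floor discussed above, are exactly the configurations on which decoders of this family stall without converging to a codeword.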
594

A PERFORMANCE EVALUATION FOR CONSTRAINED ITERATIVE SIGNAL EXTRAPOLATION METHODS.

Omel, Randall Russ. January 1984
No description available.
595

On adaptive MMSE receiver strategies for TD-CDMA

Garcia-Alis, Daniel January 2001
No description available.
596

Financial and risk assessment and selection of health monitoring system design options for legacy aircraft

Esperon Miguez, Manuel January 2013
Aircraft operators demand an ever increasing availability of their fleets with constant reduction of their operational costs. With the age of many fleets measured in decades, the options to face these challenges are limited. Integrated Vehicle Health Management (IVHM) uses data gathered through sensors in the aircraft to assess the condition of components to detect and isolate faults or even estimate their Remaining Useful Life (RUL). This information can then be used to improve the planning of maintenance operations and even logistics and operational planning, resulting in shorter maintenance stops and lower cost. Retrofitting health monitoring technology onto legacy aircraft has the capability to deliver what operators and maintainers demand, but working on aging platforms presents numerous challenges. This thesis presents a novel methodology to select the combination of diagnostic and prognostic tools for legacy aircraft that best suits the stakeholders’ needs based on economic return and financial risk. The methodology is comprised of different steps in which a series of quantitative analyses are carried out to reach an objective solution. Beginning with the identification of which components could bring higher reduction of maintenance cost and time if monitored, the methodology also provides a method to define the requirements for diagnostic and prognostic tools capable of monitoring these components. It then continues to analyse how combining these tools affects the economic return and financial risk. Each possible combination is analysed to identify which of them should be retrofitted. Whilst computer models of maintenance operations can be used to analyse the effect of retrofitting IVHM technology on a legacy fleet, the number of possible combinations of diagnostic and prognostic tools is too big for this approach to be practicable. 
Nevertheless, computer models can go beyond the economic analysis performed thus far, and simulations are used as part of the methodology to gain insight into other effects of retrofitting the chosen toolset.
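A brute-force version of the combination analysis, workable only for very small toolsets, might look like the following sketch. All figures and tool names here are hypothetical, and treating tool savings as statistically independent (so risks add in quadrature) is an assumption; the thesis's point is precisely that exhaustive enumeration like this does not scale to realistic fleets.

```python
import math
from itertools import combinations

# hypothetical per-tool figures: (name, expected saving, cost, saving std-dev)
tools = [("vibration_hm", 120.0, 60.0, 30.0),
         ("oil_debris_hm", 80.0, 30.0, 20.0),
         ("acoustic_hm", 50.0, 45.0, 25.0)]

def evaluate(subset):
    """Net expected return, and risk assuming independent tool savings."""
    net = sum(saving - cost for _, saving, cost, _ in subset)
    risk = math.sqrt(sum(sd ** 2 for *_, sd in subset))
    return net, risk

def select(tools, max_risk):
    """Exhaustively score every toolset; keep the highest-return one
    whose financial risk stays within the stakeholder's tolerance."""
    best, best_net = (), 0.0
    for r in range(1, len(tools) + 1):
        for subset in combinations(tools, r):
            net, risk = evaluate(subset)
            if risk <= max_risk and net > best_net:
                best, best_net = subset, net
    return [name for name, *_ in best]
```

With n candidate tools there are 2^n - 1 non-empty subsets, which is why the methodology screens components first and uses simulation only for the surviving combinations.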
597

A Bayesian expected error reduction approach to Active Learning

Fredlund, Richard January 2011
There has been growing recent interest in the field of active learning for binary classification. This thesis develops a Bayesian approach to active learning which aims to minimise the objective function on which the learner is evaluated, namely the expected misclassification cost. We call this the expected cost reduction approach to active learning. In this form of active learning, queries are selected by performing a 'lookahead' to evaluate the associated expected misclassification cost.

Firstly, we introduce the concept of a query density to explicitly model how new data is sampled. An expected cost reduction framework for active learning is then developed which allows the learner to sample data according to arbitrary query densities. The model makes no assumption of independence between queries, instead updating model parameters on the basis of both which observations were made and how they were sampled. This approach is demonstrated on the probabilistic high-low game, a non-separable extension of the high-low game presented by Seung et al. (1993). The results indicate that the Bayesian expected cost reduction approach performs significantly better than passive learning even when there is considerable overlap between the class distributions, covering 30% of input space. For the probabilistic high-low game, however, narrow queries appear to consistently outperform wide queries. We therefore conclude the first part of the thesis by investigating whether this is always the case, demonstrating examples where sampling broadly is preferable to a single input query.

Secondly, we explore the Bayesian expected cost reduction approach to active learning in the pool-based setting, where learning is limited to a finite pool of unlabelled observations from which the learner may select observations to be queried for class labels. Our implementation of this approach uses Gaussian process classification with the expectation propagation approximation to make the necessary inferences. The implementation is demonstrated on six benchmark data sets and again shows superior performance to passive learning.
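The one-step lookahead at the heart of the expected cost reduction approach can be sketched on a discrete version of the high-low game: hypotheses are candidate thresholds, and the learner queries the pool point minimising the expected post-query Bayes risk. This toy assumes uniform 0/1 costs and noiseless labels, unlike the thesis's general cost and query-density machinery.

```python
def risk(posterior, hyps, xs):
    """Bayes risk on xs: expected 0/1 error of predicting the label
    that is more probable under the posterior over thresholds."""
    total = 0.0
    for x in xs:
        p1 = sum(p for h, p in zip(hyps, posterior) if x >= h)  # P(label=1)
        total += min(p1, 1.0 - p1)
    return total / len(xs)

def best_query(posterior, hyps, pool, xs):
    """One-step lookahead: expected Bayes risk after each candidate query."""
    def post_after(x, y):
        # condition the posterior on observing label y at x
        w = [p if (1 if x >= h else 0) == y else 0.0
             for h, p in zip(hyps, posterior)]
        s = sum(w)
        return [wi / s for wi in w] if s else posterior
    def expected_risk(x):
        p1 = sum(p for h, p in zip(hyps, posterior) if x >= h)
        return ((1.0 - p1) * risk(post_after(x, 0), hyps, xs)
                + p1 * risk(post_after(x, 1), hyps, xs))
    return min(pool, key=expected_risk)
```

With a uniform prior over thresholds, the lookahead favours queries that split the hypothesis mass most informatively, which is the intuition behind the narrow-versus-wide query comparison above.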
598

A Systolic Array Based Reed-Solomon Decoder Realised Using Programmable Logic Devices

Biju, S., Narayana, T. V., Anguswamy, P., Singh, U. S. November 1995
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / This paper describes the development of a Reed-Solomon (RS) Encoder-Decoder which implements the RS segment of the telemetry channel coding scheme recommended by the Consultative Committee for Space Data Systems (CCSDS)[1]. The Euclidean algorithm has been chosen for the decoder implementation, the hardware realization taking a systolic array approach. The fully pipelined decoder runs on a single clock and the operating speed is limited only by the Galois Field (GF) multiplier's delay. The circuit has been synthesised from VHDL descriptions and the hardware is being realised using programmable logic chips. This circuit was simulated for functional operation and found to perform correction of error patterns exactly as predicted by theory.
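Since the decoder's clock rate is bounded by the GF multiplier's delay, that multiplier is the natural unit to examine. Below is a bitwise GF(2^8) multiply sketched in software using the common primitive polynomial 0x11D; note that the CCSDS-recommended RS code is defined over a different field generator polynomial (and a dual basis), so this is illustrative only, not the paper's hardware.

```python
def gf256_mul(a, b, prim=0x11D):
    """Shift-and-add multiply in GF(2^8), reducing by the primitive
    polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree reached 8: reduce modulo prim
            a ^= prim
    return r
```

A systolic hardware realisation pipelines this shift-and-reduce structure so that one product emerges per clock, which is why the multiplier's combinational delay sets the decoder's maximum clock.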
599

Probability of Bit Error on a Standard IRIG Telemetry Channel Using the Aeronautical Fading Channel Model

Nelson, N. Thomas October 1994
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / This paper analyzes the probability of bit error for PCM-FM over a standard IRIG channel subject to multipath interference modeled by the aeronautical fading channel. The aeronautical channel model assumes a mobile transmitter and a stationary receiver and specifies the correlation of the fading component. This model describes fading which is typical of that encountered at military test ranges. An expression for the bit error rate on the fading channel with a delay line demodulator is derived and compared with the error rate for the Gaussian channel. The increase in bit error rate over that of the Gaussian channel is determined along with the power penalty caused by the fading. In addition, the effects of several channel parameters on the probability of bit error are determined.
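The qualitative gap between Gaussian and fading channels can be illustrated with closed-form BPSK error rates: Q(sqrt(2*Eb/N0)) for AWGN versus the Rayleigh-averaged expression. These are not the paper's PCM-FM delay-line-demodulator expressions, and a fully correlated aeronautical fading model behaves differently from flat Rayleigh fading; this is only a sketch of why fading dominates the error budget.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk_awgn(ebn0):
    """BPSK bit error rate on the Gaussian channel (ebn0 is linear, not dB)."""
    return q_func(math.sqrt(2.0 * ebn0))

def ber_bpsk_rayleigh(ebn0):
    """BPSK bit error rate averaged over flat Rayleigh fading
    with mean Eb/N0 equal to ebn0."""
    return 0.5 * (1.0 - math.sqrt(ebn0 / (1.0 + ebn0)))
```

On the Gaussian channel the error rate falls exponentially with Eb/N0, while the fading average falls only as roughly 1/(4*Eb/N0); the gap between the two at a target BER is the power penalty the paper quantifies.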
600

La responsabilidad del Estado juez : la infracción al derecho a ser juzgado en un plazo razonable como título atributivo de responsabilidad

Barraza González, Andrea Inés January 2012
Thesis (Licentiate in Legal and Social Sciences) / This analysis of the liability of the State acting as judge is based on a study of the legislation in force, case law, and doctrine, both national and international. The exposition is divided into two parts. The First Part sets out the regulation that existed under the 1925 and 1980 Constitutions (CPR). Comparative law, the constitutionalisation of international law, and respect for human rights demand consideration here: over the last century a new conception of the human being and of the State took shape, under which the State is at the service of the person. For these reasons, the Second Part presents the grounds on which the broad view of State liability for judicial activity finds a place in the national legal order, drawing both on constitutional norms and on international treaties, and on the interpretation they must be given in light of humanist constitutionalism. As a general overview, I address the conditions under which State liability for judicial activity arises, in order to clarify one particular case, apparently excluded from the current rules, which I consider of great interest: the infringement of the right to be tried within a reasonable time as a ground for attributing such liability, a question already widely debated in Europe. As this thesis seeks to establish, the Chilean legal order clearly contains the foundations on which a citizen may demand that the State make good the harm its judicial activity has caused, whatever the sphere in which that activity takes place. Moreover, international case law has been consistent and emphatic in protecting the right of citizens to be tried within a reasonable time.
In this regard, I review rulings of the Inter-American Court of Human Rights (COIDH) and its Commission aimed at securing respect for the international instruments signed by the American states in favour of their own inhabitants.
