581 |
AN INTRODUCTION TO LOW-DENSITY PARITY-CHECK CODES / Moon, Todd K., Gunther, Jacob H.
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Low-Density Parity-Check (LDPC) codes are powerful codes capable of nearly achieving the Shannon channel capacity. This paper presents a tutorial introduction to LDPC codes, with a detailed description of the decoding algorithm. The algorithm propagates information about bit and check probabilities through a tree obtained from the Tanner graph for the code. This paper may be useful as a supplement in a course on error-control coding or digital communication.
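The message-passing procedure the abstract refers to can be sketched compactly. Below is a minimal illustration of sum-product (belief-propagation) decoding on the Tanner graph of a small parity-check matrix; the matrix H, the channel log-likelihood ratios, and the variable names are assumptions made for the example, not material from the paper.
```python
import numpy as np

# Minimal sketch of sum-product (belief-propagation) decoding in the LLR domain.
# The small example H and channel LLRs below are illustrative assumptions.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def decode(llr_channel, H, n_iter=20):
    m, n = H.shape
    msg_v2c = np.tile(llr_channel, (m, 1)) * H        # variable-to-check messages (LLRs)
    for _ in range(n_iter):
        # Check-node update: tanh rule over the other neighbours of each check.
        t = np.tanh(np.clip(msg_v2c, -30, 30) / 2.0)
        msg_c2v = np.zeros_like(msg_v2c)
        for i in range(m):
            idx = np.where(H[i])[0]
            for j in idx:
                others = [k for k in idx if k != j]
                prod = np.prod(t[i, others])
                msg_c2v[i, j] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Variable-node update and tentative hard decision.
        total = llr_channel + msg_c2v.sum(axis=0)
        msg_v2c = (total - msg_c2v) * H
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):                 # all parity checks satisfied
            break
    return hard

# Example: all-zero codeword over an AWGN channel (positive LLR means bit 0);
# the weak second bit is corrected and the all-zero codeword is recovered.
llr = np.array([2.1, -0.4, 1.8, 2.5, 0.3, 1.2])
print(decode(llr, H))
```
Each iteration alternates a check-node update with a variable-node update, which is the propagation of bit and check probabilities that the abstract describes.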
|
582 |
EXTENDING THE RANGE OF PCM/FM USING A MULTISYMBOL DETECTOR AND TURBO CODING / Geoghegan, Mark
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / It has been shown that a multi-symbol detector can improve the detection efficiency of PCM/FM by 3 dB when compared to traditional methods without any change to the transmitted waveform. Although this is a significant breakthrough, further improvements are possible with the addition of Forward Error Correction (FEC). Systematic redundancy can be added by encoding the source data prior to the modulation process, thereby allowing channel errors to be corrected using a decoding circuit. Better detection efficiency translates into additional link margin that can be used to extend the operating range, support higher data throughput, or significantly improve the quality of the received data. This paper investigates the detection efficiency that can be achieved using a multisymbol detector and turbo product coding. The results show that this combination can improve the detection performance by nearly 9 dB relative to conventional PCM/FM systems. The increase in link margin is gained at the expense of a small increase in bandwidth and the additional complexity of the encoding and decoding circuitry.
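For a sense of what such gains buy, free-space path loss grows as 20 log10(distance), so extra detection efficiency can be read as a range-extension factor. The snippet below is a back-of-the-envelope illustration under that free-space assumption, not a figure from the paper.
```python
# Rough illustration (not from the paper): convert a detection-efficiency gain
# in dB into a free-space range-extension factor, where received power falls
# off as 20*log10(distance) and all of the extra margin is spent on range.
def range_extension_factor(gain_db: float) -> float:
    return 10 ** (gain_db / 20.0)

for gain in (3.0, 9.0):   # multi-symbol detection alone, and combined with turbo coding
    print(f"{gain:.0f} dB of extra margin -> ~{range_extension_factor(gain):.1f}x range")
# Roughly 1.4x range for 3 dB and 2.8x for 9 dB, ignoring other link-budget terms.
```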
|
583 |
Simultaneous Tracking of Multiple Signals Using a Thinned Array Antenna System / Kaiser, Julius A., Herold, Fredrick W.
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Multiple same-frequency signals including direct/multipath signals are distinguished and individually tracked by measuring phase differences between sum and error channels of thinned array systems.
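The abstract gives no implementation detail, but the sum/error phase measurement it mentions can be illustrated for a simple two-element case; the geometry, signal model, and names below are assumptions for illustration only.
```python
import numpy as np

# Illustrative sketch only: for a two-element interferometer, the sum and
# difference (error) channels of a plane wave arriving with electrical phase
# offset phi have magnitudes proportional to cos(phi/2) and sin(phi/2), so the
# phase relation between the two channels recovers phi and hence the angle.
d_over_lambda = 0.5                  # assumed element spacing in wavelengths
theta_true = np.deg2rad(12.0)        # assumed arrival angle of one signal
phi = 2 * np.pi * d_over_lambda * np.sin(theta_true)

x1 = np.exp(1j * 0.0)                # element 1 sample (unit amplitude)
x2 = np.exp(1j * phi)                # element 2 sample
sum_ch = x1 + x2
err_ch = x1 - x2

# The ratio err/sum equals -1j*tan(phi/2); its imaginary part gives phi.
phi_est = -2 * np.arctan(np.imag(err_ch / sum_ch))
theta_est = np.arcsin(phi_est / (2 * np.pi * d_over_lambda))
print(np.rad2deg(theta_est))         # ~12 degrees
```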
|
584 |
Estimating measurement error in blood pressure, using structural equations modelling / Kepe, Lulama Patrick January 2004
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Any branch of science experiences measurement error to some extent. This may be due to the conditions under which measurements are taken, which include the subject, the observer, the measurement instrument, and the data collection method. The inexactness (error) can be reduced to some extent through the study design, but at some level further reduction becomes difficult or impractical. It then becomes important to determine or evaluate the magnitude of measurement error and perhaps evaluate its effect on the relationships under investigation. All of this is particularly true for blood pressure measurement.
The gold standard for measuring blood pressure (BP) is a 24-hour ambulatory measurement. However, this technology is not available in primary care clinics in South Africa, and a set of three mercury-based BP measurements is the norm for a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of each of the diastolic and systolic measurements and determining optimal weights for the combination of measurements, which will give a better estimate of the patient's true BP. The optimal weights can be determined through the method of structural equations modelling (SEM), which allows a richer model than the standard repeated-measures ANOVA. SEM models are less restrictive and give more detail than the traditional approaches.
Structural equations modelling, which is a special case of covariance structure modelling, has proven useful in the social sciences over the years. Its appeal stems from the fact that it includes multiple regression and factor analysis as special cases. Multi-type multi-time (MTMT) models are a specific type of structural equations model that suits the modelling of BP measurements. These designs (MTMT models) constitute a variant of repeated-measurement designs and are based on Campbell and Fiske's (1959) suggestion that the quality of methods (time, in our case) can be determined by comparing them with other methods in order to reveal both the systematic and random errors. MTMT models also showed superiority over other data analysis methods because they accommodate the theory of BP. In particular, they proved to be a strong alternative for the analysis of BP measurements whenever repeated measures are available, even when such measures do not constitute equivalent replicates. This thesis focuses on SEM and its application to BP studies conducted in a community survey of
Mamre and the Mitchells Plain hypertensive clinic population. / AFRIKAANSE OPSOMMING: Every branch of science is subject to measurement error to a greater or lesser extent. This is a consequence of the conditions under which measurements are made, such as the unit being measured, the observer, the measuring instrument and the data collection method. Measurement error can be reduced through the study design, but at a certain point further improvement in precision becomes difficult and impractical. It is then important to determine the magnitude of the measurement error and to investigate its effect on relationships. These aspects are especially true for the measurement of blood pressure in humans.
The gold standard for measuring blood pressure is a continuous 24-hour measurement. This technology is, however, not available in primary health clinics in South Africa, and a set of three mercury-based blood pressure measurements is the norm at a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of the diastolic and systolic blood pressure measurements. Determining optimal weights for the linear combination of the measurements leads to a better estimate of the patient's true blood pressure. The weights can be calculated with the method of structural equations modelling (SEM), which offers a richer class of models than the standard repeated-measures analysis-of-variance models. This model has fewer restrictions and therefore gives more information than the traditional approaches.
Structural equations modelling, which is a special case of covariance structure modelling, has been usefully applied in the social sciences over the years. Its appeal stems from the fact that multiple linear regression and factor analysis are also special cases of the method. Multi-type multi-time (MTMT) models are a specific type of structural equations model suited to the modelling of blood pressure. This type of model is a variant of the repeated-measures design and is based on Campbell and Fiske's (1959) suggestion that the quality of different methods can be determined by comparing them with other methods so as to distinguish systematic and random errors. The MTMT model also fits well with the underlying physiological aspects of blood pressure and its measurement. It is therefore a good alternative for studies in which the repeated measurements are not equivalent replicates.
This thesis focuses on the structural equations model and its application in hypertension studies conducted in the Mamre community and a hypertensive clinic population in Mitchells Plain.
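A small numerical illustration of the weighted-combination idea described above: once each of the three clinic readings has an estimated error variance (for example, from an SEM fit), the minimum-variance combination weights each reading by the inverse of its error variance. The variances and readings below are invented for illustration; the thesis's SEM-derived weights would generally differ.
```python
import numpy as np

# Illustration only (numbers invented, not from the thesis): inverse-variance
# weighting of three repeated systolic readings of the same patient, assuming
# independent, unbiased measurement errors with known error variances.
error_var = np.array([64.0, 36.0, 25.0])     # assumed error variances (mmHg^2)
readings  = np.array([142.0, 138.0, 136.0])  # one patient's three systolic readings

weights = (1.0 / error_var) / np.sum(1.0 / error_var)
estimate = np.sum(weights * readings)
print(weights.round(3), round(estimate, 1))
# Readings with smaller error variance get more weight than the first reading,
# unlike the usual unweighted mean of the three measurements.
```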
|
585 |
Strategic Error as Style: Finessing the Grammar Checker / Smith, Sarah 12 August 2016
Composition studies lacks a comprehensive theory of error, one which successfully defines error in writing and offers a pedagogical response to ostensible errors that neither ignores nor pathologizes them. Electronic text-critiquing technologies offer some promise of helping writers notice and correct errors, but they are under-researched in composition and rarely well-integrated into pedagogical praxis. This research on the grammar and style checker in Microsoft Word considers the program as an electronic checklist for making decisions about what counts as an error in a given rhetorical situation. This study also offers a theory of error grounded in the idea of attention, or cognitive load, some of which an electronic checker can relieve in the areas of its greatest effectiveness, which this research quantifies. The proposed theory of error forms the basis for a pedagogy of register, understood as typified style, and establishes that error itself can be a strategic style move.
|
586 |
Bayesian analysis of errors-in-variables in generalized linear models / 鄧沛權, Tang, Pui-kuen. January 1992
Published or final version / Statistics / Doctoral / Doctor of Philosophy
|
587 |
Adaptive unequal error protection for wireless video transmissions / Yang, Guanghua, 楊光華 January 2006
Published or final version / Abstract / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
|
588 |
A study on low complexity near-maximum likelihood spherical MIMO decoders / Liang, Ying, 梁瑩 January 2010
Published or final version / Electrical and Electronic Engineering / Master / Master of Philosophy
|
589 |
Numerical errors in subfilter scalar variance models for large eddy simulation of turbulent combustion / Kaul, Colleen Marie, 1983- 03 September 2009
Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation (LES) of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with numerical errors due to their implementation using finite difference methods. Because of the prevalence of grid-based filtering in practical LES, the smallest filtered scales are generally under-resolved. These scales, however, are often important in determining the values of subfilter models. A priori tests on data from direct numerical simulation (DNS) of homogeneous isotropic turbulence are performed to evaluate the numerical implications of specific model forms in the context of practical LES evaluated with finite differences. As with other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues stem from making discrete approximations to the chain rule manipulations used to derive convective and diffusive terms in the variance transport equation associated with the square of the filtered scalar. This set of approximations can be avoided by solving the equation for the second moment of the scalar, suggesting that model's numerical superiority.
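To make the numerical point concrete, the sketch below evaluates a gradient-type algebraic subfilter-variance model, var ≈ C Δ² |∇Z̃|², with a second-order central difference on a marginally resolved one-dimensional field. The coefficient, grid, and field are assumptions rather than the study's configuration; the example reproduces the qualitative result that finite differences underpredict the filtered-scalar gradient and hence the modeled variance.
```python
import numpy as np

# Illustrative sketch (assumed 1-D field, coefficient, and grid) of the algebraic
# model  var_sfs ~ C * Delta^2 * |grad(Z_filtered)|^2  evaluated with a
# second-order central difference and compared against the exact gradient.
C = 0.1                                   # assumed, non-dynamic model coefficient
N = 8                                     # grid points per wavelength: marginally resolved
L = 2 * np.pi
x = np.arange(N) * L / N
delta = L / N                             # filter width tied to the grid spacing
z = np.sin(x)                             # stand-in for the filtered scalar field

grad_exact = np.cos(x)
grad_fd = (np.roll(z, -1) - np.roll(z, 1)) / (2 * delta)   # periodic central difference

var_exact = C * delta**2 * grad_exact**2
var_fd = C * delta**2 * grad_fd**2
print(np.mean(var_fd) / np.mean(var_exact))   # < 1: modeled variance underpredicted
```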
|
590 |
Optimal communications system design for array-based electric generation / Orozco, Ricardo 03 November 2011
The world's demand for energy is an ongoing challenge that has yet to be overcome. Efforts to find clean energy alternatives to fossil fuels have been hampered by a lack of investment in technology and research. Among these clean energy alternatives are ocean waves and wind. Wind power is generated through wind generators that harness the wind's kinetic energy; it has gained worldwide popularity as a large-scale energy source, but provides less than one percent of global energy consumption. Due to infrastructure limitations on installing wind turbines at locations where high winds exist, wind energy faces critical challenges in continuing to improve electricity generation. Ocean wave energy, on the other hand, appears to be a promising adjunct to wind energy. Ocean energy comes in a variety of forms, such as marine currents, tidal currents, geothermal vents and waves; most of today's research, however, is based on wave energy. It has been estimated that approximately 257 terawatt-hours per year (TWh/year) could be extracted from ocean waves alone, which could be enough to meet U.S. energy demands of 28 TWh/year. Technologies such as point absorbers, attenuators and overtopping devices are examples of wave energy converters. Point absorbers use a floating structure with components that move relative to each other due to wave action; the relative motion is used to drive electromechanical or hydraulic energy converters. The total energy throughput of a single point absorber, however, does not justify the great engineering cost and effort required of researchers; hence the need to explore alternative approaches to wave conversion that add no extra cost yet increase throughput.
Our research focuses on exploring a novel method to maximize the wave energy conversion of an array-based point-absorber wave farm. Unlike previous research, our method incorporates a predictive control algorithm that provides the wave farm with a prediction of the wave dynamics and an optimal control trajectory over a finite time and space horizon. By using a predictive control algorithm, wave energy conversion throughput can be increased relative to a system without one. The algorithm requires that the characteristics of the incoming wave be provided in advance for appropriate processing.
This thesis focuses on designing an efficient and reliable wireless communications system capable of delivering wave information such as speed, height and direction to each point absorber in the network for further processing by the predictive control algorithm. This must take place under harsh environmental conditions in which the random shape of the waves and the moving sea surface can further degrade the communication channel. In this work we focus on the physical layer, where the transmission of bits over the wireless medium takes place. Specifically, we are interested in reducing the bit error rate with a unique relaying protocol to increase packet transmission reliability. We make use of cooperative diversity and existing protocols to achieve this goal and improve end-to-end system performance. / Graduation date: 2012
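As a simplified illustration of why cooperative relaying improves packet reliability at the physical layer (the per-link bit error rates and packet size below are invented, and the success-if-either-path-delivers rule is a simplification rather than the thesis's specific protocol), the end-to-end failure probability is the product of the failure probabilities of the direct and relayed paths:
```python
# Illustration only: independent bit errors per link, and a packet is recovered
# if the direct source->destination link succeeds or both hops of the relayed
# path (source->relay, relay->destination) succeed.
def packet_success(ber: float, packet_bits: int) -> float:
    """Probability that a packet of packet_bits bits arrives error-free on one link."""
    return (1.0 - ber) ** packet_bits

ber_direct, ber_sr, ber_rd = 1e-4, 1e-5, 1e-5   # assumed per-link bit error rates
bits = 1024                                      # assumed packet length

p_direct = packet_success(ber_direct, bits)
p_relayed = packet_success(ber_sr, bits) * packet_success(ber_rd, bits)
p_coop = 1.0 - (1.0 - p_direct) * (1.0 - p_relayed)   # either path delivers the packet

print(f"direct only: {p_direct:.4f}")
print(f"with relay:  {p_coop:.4f}")
```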
|