641 |
Adaptive unequal error protection for wireless video transmissions. Yang, Guanghua, January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
|
642 |
Equalization and coding for the two-dimensional intersymbol interference channel. Cheng, Taikun, January 2007 (has links) (PDF)
Thesis (Ph. D.)--Washington State University, December 2007. / Includes bibliographical references (p. 74-80).
|
643 |
DIGITAL GAIN ERROR CORRECTION TECHNIQUE FOR 8-BIT PIPELINE ADC. Javeed, Khalid January 2010 (has links)
An analog-to-digital converter (ADC) is a link between the analog and digital domains and plays a vital role in modern mixed-signal processing systems. There are several architectures, for example flash ADCs, pipeline ADCs, sigma-delta ADCs, successive approximation (SAR) ADCs and time-interleaved ADCs. Among these, the pipeline ADC offers a favorable trade-off between speed, power consumption, resolution, and design effort. Common applications of pipeline ADCs include high-quality video systems, radio base stations, Ethernet, cable modems and high-performance digital communication systems. Unfortunately, static errors such as comparator offset errors, capacitor mismatch errors and gain errors degrade the performance of the pipeline ADC. Hence, there is a need for accuracy enhancement techniques. The conventional way to overcome these errors is to calibrate the pipeline ADC after fabrication, using so-called post-fabrication calibration techniques. But environmental changes such as temperature variations and device aging necessitate recalibration at regular intervals, resulting in a loss of time and money. A lot of effort can be saved if the digital outputs of the pipeline ADC can be used for the estimation and correction of these errors; such techniques are further classified as foreground and background techniques. In this thesis work, an algorithm is proposed that can estimate 10% inter-stage gain errors in a pipeline ADC without any need for a special calibration signal. The efficiency of the proposed algorithm is investigated on an 8-bit pipeline ADC architecture. The first seven stages are implemented using the 1.5-bit/stage architecture while the last stage is a one-bit flash ADC. The ADC and error correction algorithm are simulated in Matlab and the signal-to-noise-and-distortion ratio (SNDR) is calculated to evaluate efficiency.
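To make the error mechanism concrete, the following is a minimal behavioural sketch (written in Python rather than the thesis's Matlab, and not the thesis code): seven 1.5-bit stages plus a final 1-bit flash, with an assumed 10% deficit in the first-stage residue gain, evaluated by the SNDR of a coherently sampled sine wave. The record length, signal bin and input amplitude are illustrative assumptions.

```python
# Hypothetical behavioural model (not the thesis code): an 8-bit pipeline ADC
# built from seven 1.5-bit stages and a final 1-bit flash, with an assumed
# 10 % deficit in the first-stage residue gain, evaluated by SNDR.
import numpy as np

VREF = 1.0  # full-scale input range is [-VREF, +VREF]

def stage_1p5bit(v, gain_err=0.0):
    """One 1.5-bit stage: coarse digit d in {-1, 0, +1} and amplified residue."""
    if v > 0.25 * VREF:
        d = 1
    elif v < -0.25 * VREF:
        d = -1
    else:
        d = 0
    residue = 2.0 * (1.0 + gain_err) * v - d * VREF
    return d, residue

def pipeline_adc(v, gain_errs):
    """Convert one sample; gain_errs holds the per-stage inter-stage gain errors."""
    digits = []
    for eps in gain_errs:                      # seven 1.5-bit stages
        d, v = stage_1p5bit(v, eps)
        digits.append(d)
    digits.append(1 if v > 0 else -1)          # final 1-bit flash
    # Digital reconstruction assumes the nominal residue gain of exactly 2,
    # so any real gain error shows up as distortion in the output code.
    return sum(d / 2.0 ** (i + 1) for i, d in enumerate(digits[:-1])) + digits[-1] / 2.0 ** 8

def sndr_db(signal_bin, out):
    p = np.abs(np.fft.fft(out)) ** 2
    sig = p[signal_bin]
    noise_dist = p[1:len(p) // 2].sum() - sig  # positive bins, minus DC and signal
    return 10.0 * np.log10(sig / noise_dist)

if __name__ == "__main__":
    n, k = 4096, 67                            # coherent sampling: k cycles in n samples
    vin = 0.95 * VREF * np.sin(2 * np.pi * k * np.arange(n) / n)
    for gain_errs in ([0.0] * 7, [-0.10] + [0.0] * 6):
        out = np.array([pipeline_adc(v, gain_errs) for v in vin])
        print("first-stage gain error %+.2f -> SNDR %.1f dB" % (gain_errs[0], sndr_db(k, out)))
```

With all gain errors at zero the sketch lands near the textbook 8-bit limit of roughly 50 dB, while the uncorrected first-stage gain error pulls the SNDR well below it; that gap is the degradation the proposed estimation algorithm is meant to remove.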
|
644 |
Hardware Accelerator for Duo-binary CTC decoding : Algorithm Selection, HW/SW Partitioning and FPGA Implementation. Bjärmark, Joakim, Strandberg, Marco January 2006 (has links)
Wireless communication is always struggling with errors in the transmission. The digital data received from the radio channel is often erroneous due to thermal noise and fading. The error rate can be lowered by using higher transmission power or by using an effective error-correcting code. Power consumption and limits on electromagnetic radiation are two of the main problems with handheld devices today, and an efficient error-correcting code will lower the transmission power and therefore also the power consumption of the device.

Duo-binary CTC is an improvement of the innovative turbo codes presented in 1996 by Berrou and Glavieux and is in use in many of today's standards for radio communication, e.g. IEEE 802.16 (WiMAX) and DVB-RCS. This report describes the development of a duo-binary CTC decoder and the different problems that were encountered during the process, including design issues and algorithm choices made during the design.

An implementation in VHDL has been written for Altera's Stratix II S90 FPGA and a reference model has been made in Matlab. The model has been used to simulate bit error rates for different implementation alternatives and as a bit-true reference for the hardware verification.

The final result is a duo-binary CTC decoder compatible with Altera's Stratix II designs and a reference model that can be used when simulating the decoder alone or the whole signal processing chain. Some of the features of the hardware are that block sizes, puncture rates and the number of iterations are dynamically configured between blocks. Before synthesis it is possible to choose how many decoders will work in parallel and with how many bits the soft input will be represented. The circuit has been run at 100 MHz in the lab, which gives a throughput of around 50 Mbit/s with four decoders working in parallel. This report describes the implementation, including its development, background and future possibilities.
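As a rough plausibility check of how the reported clock rate, parallelism and throughput fit together, the following back-of-envelope sketch is one possible accounting (it is not from the thesis; the iteration count and one-symbol-per-clock assumption are purely illustrative, and interleaver and memory overhead are ignored).

```python
# Hypothetical back-of-envelope throughput estimate (not from the thesis):
# assumes a MAP core that processes one duo-binary symbol (two information
# bits) per clock cycle per half-iteration, ignoring interleaving overhead.
f_clk = 100e6          # clock frequency [Hz], as run in the lab
n_decoders = 4         # decoders working in parallel
iterations = 8         # assumed number of turbo iterations per block
bits_per_symbol = 2    # duo-binary: two information bits per trellis step

# Every half-iteration (one constituent decoder pass) revisits each symbol once.
half_iterations = 2 * iterations
throughput = n_decoders * f_clk * bits_per_symbol / half_iterations
print(f"estimated throughput: {throughput / 1e6:.0f} Mbit/s")   # ~50 Mbit/s
```

Under these assumed numbers the estimate comes out at about 50 Mbit/s, consistent with the figure reported for four parallel decoders at 100 MHz.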
|
645 |
Native Swedish Speakers’ Problems with English Prepositions. Jansson, Hanna January 2007 (has links)
This essay investigates native Swedish speakers’ problems in the area of prepositions. A total of 19 compositions, including 678 prepositions, written by native Swedish senior high school students were analysed. All the prepositions in the material were judged as either basic, systematic or idiomatic. Then all the errors of substitution, addition and omission were counted and corrected. As hypothesised, the fewest errors were found in the category of basic prepositions and the most errors were found in the category of idiomatic prepositions. However, the small difference between the categories of systematic and idiomatic prepositions suggests that the learners have greater problems with systematic prepositions than was first thought. Basic prepositions cause few or no problems. Systematic prepositions, i.e. those that are rule-governed or whose usage is somehow generalisable, seem to be quite problematic for native Swedish speakers. Idiomatic prepositions seem to be learnt as ‘chunks’: the learners either know the whole constructions or do not use them at all. They too cause some problems for Swedish speakers. Since prepositions are often perceived as rather arbitrary, with no rules that describe them sufficiently, these conclusions might not be surprising to teachers, students and language learners. The greatest cause of error was found to be interference from Swedish, while a few errors could be explained as intralingual errors. It seems as if the learners’ knowledge of their mother tongue strongly influences the acquisition of English prepositions.
|
646 |
How do I pronounce this word? : Strategies used among Swedish learners of English when pronouncing unfamiliar words. Jaime, Ruti January 2008 (has links)
This study aimed to identify some of the strategies students use when pronouncing unfamiliar words. Questionnaires were handed out to 94 students in the 9th grade in a medium-sized Swedish town. In addition, two teachers and 13 students were interviewed. The results indicate that the students had acquired some basic knowledge of the English sound system from phonetic training in their past education. However, there seemed to be a tendency among the students to rely on a trial-and-error strategy rather than on tools such as phonetic transcription to work out the pronunciation of a word. The results also show that the teachers did not teach planned lessons on pronunciation; instead it was more common that they responded to errors made by students. In conclusion, the results show that the students' knowledge of pronunciation was in general limited. In addition, there seemed to be a connection between the way the students and the teachers approached pronunciation and the students' ability to solve pronunciation issues.
|
647 |
Housing Investment in Germany : an Empirical Test. Holm, Hanna January 2006 (has links)
In this thesis I study the German housing market, specifically the level of housing investment. First, a theoretical background to housing market dynamics is presented; then I test whether there is a relationship between housing investment and GDP, the size of the population, Tobin’s Q and construction costs. An Error Correction Model is estimated, and the result is that the equilibrium level of housing investment is restored less than two quarters after a change in one of the explanatory variables. The estimation indicates that GDP, the size of the population and construction costs affect the level of construction in the short run. In the long run, however, the only significant effect comes from changes in construction costs.
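For readers unfamiliar with the method, the following is a minimal two-step (Engle-Granger) error correction sketch in Python; it is not the thesis's estimation or data, and the synthetic series and coefficients are placeholders standing in for the German variables named above.

```python
# Illustrative two-step (Engle-Granger) error correction estimation, not the
# thesis's actual model or data: synthetic quarterly series stand in for the
# German variables, and all coefficients are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                          # 30 years of quarterly data
gdp = np.cumsum(rng.normal(0.5, 1.0, n))         # placeholder trending series
pop = np.cumsum(rng.normal(0.1, 0.2, n))
tobin_q = rng.normal(1.0, 0.1, n)
cost = np.cumsum(rng.normal(0.3, 0.5, n))
inv = 0.6 * gdp + 0.3 * pop - 0.4 * cost + rng.normal(0, 1.0, n)   # housing investment

df = pd.DataFrame({"inv": inv, "gdp": gdp, "pop": pop, "q": tobin_q, "cost": cost})

# Step 1: long-run (cointegrating) relation between investment and its drivers.
long_run = sm.OLS(df["inv"], sm.add_constant(df[["gdp", "pop", "q", "cost"]])).fit()
df["eq_error"] = long_run.resid                  # deviation from equilibrium

# Step 2: short-run dynamics in differences, with the lagged equilibrium error.
d = df[["inv", "gdp", "pop", "q", "cost"]].diff()
d["eq_error_lag"] = df["eq_error"].shift(1)
d = d.dropna()
short_run = sm.OLS(d["inv"], sm.add_constant(d.drop(columns="inv"))).fit()

# The coefficient on eq_error_lag is the adjustment speed: a value around -0.6
# would mean roughly 60 % of any disequilibrium is closed each quarter, i.e.
# equilibrium is essentially restored within about two quarters.
print(short_run.params["eq_error_lag"])
```

The coefficient on the lagged equilibrium error is what translates into the adjustment time reported in the abstract: the closer it is to minus one, the faster investment returns to its long-run level.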
|
648 |
Toward the estimation of errors in cloud cover derived by threshold methods. Chang, Fu-Lung 01 July 1991 (has links)
The accurate determination of cloud cover amount is important for characterizing
the role of cloud feedbacks in the climate system. Clouds have a large influence on
the climate system through their effect on the earth's radiation budget. As indicated
by the NASA Earth Radiation Budget Experiment (ERBE), the change in the earth's
radiation budget brought about by clouds is ~-15 Wm⁻² on a global scale, which
is several times the ~4 Wm⁻² gain in energy to the troposphere-surface system that
would arise from a doubling of CO₂ in the atmosphere. Consequently, even a small
change in global cloud amount may lead to a major change in the climate system.
Threshold methods are commonly used to derive cloud properties from satellite
imagery data. Here, in order to quantify errors due to thresholds, cloud cover is
obtained using three different values of thresholds. The three thresholds are applied to
the 11 μm, (4 km)² NOAA-9 AVHRR GAC satellite imagery data over four oceanic
regions. Regional cloud-cover fractions are obtained for two different scales, (60 km)²
and (250 km)². The spatial coherence method for obtaining cloud cover from imagery
data is applied to coincident data. The differences between cloud cover derived by the
spatial coherence method and by the threshold methods depend on the setting of the
threshold. Because the spatial coherence method is believed to provide good estimates
of cloud cover for opaque, single-layered cloud systems, this study is limited to such
systems, and the differences in derived cloud cover are interpreted as errors due to the
application of thresholds. The threshold errors are caused by pixels that are partially
covered by clouds, and the errors depend on the regional-scale cloud cover.
The errors can be derived from the distribution of pixel-scale cloud cover.
Two simple models which assume idealized distributions for pixel-scale cloud
cover are constructed and used to estimate the threshold errors. The results show
that these models, though simple, perform rather well in estimating the differences
between cloud cover derived by the spatial coherence method and those obtained by
threshold methods. / Graduation date: 1992
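The following short Python simulation is one way to picture the mechanism described above; it is not the thesis code, and the clear/cloudy temperatures and the pixel-cover distribution are illustrative assumptions rather than values from the AVHRR data.

```python
# Illustrative simulation of the threshold-error mechanism (not the thesis
# code): pixels that are only partially cloud-filled are counted as entirely
# clear or entirely cloudy by an 11-um brightness-temperature threshold, so
# the derived regional cloud fraction is biased relative to the true mean
# coverage, with the sign and size of the bias set by the threshold.
import numpy as np

rng = np.random.default_rng(1)
T_CLEAR, T_CLOUD = 290.0, 250.0     # assumed clear-sky and opaque-cloud temperatures [K]

def threshold_cloud_fraction(threshold, pixel_cover):
    """Fraction of pixels colder than the threshold, i.e. counted as cloudy."""
    t_pixel = (1 - pixel_cover) * T_CLEAR + pixel_cover * T_CLOUD   # mixed-pixel temperature
    return np.mean(t_pixel < threshold)

# Pixel-scale cloud cover drawn from an idealized distribution, standing in
# for the distributions assumed by the two simple error models.
pixel_cover = rng.beta(0.5, 0.5, size=100_000)
true_fraction = pixel_cover.mean()

for threshold in (285.0, 270.0, 255.0):     # three threshold settings
    derived = threshold_cloud_fraction(threshold, pixel_cover)
    print(f"threshold {threshold:5.1f} K: derived {derived:.3f}, "
          f"true {true_fraction:.3f}, error {derived - true_fraction:+.3f}")
```

A warm threshold counts many partially filled pixels as cloudy and overestimates the regional fraction, while a cold threshold misses them and underestimates it, which is the threshold dependence the abstract describes.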
|
649 |
Semiparametric maximum likelihood for regression with measurement error. Suh, Eun-Young 03 May 2001 (has links)
Semiparametric maximum likelihood analysis allows inference in errors-in-variables
models with small loss of efficiency relative to full likelihood analysis but
with significantly weakened assumptions. In addition, since no distributional
assumptions are made for the nuisance parameters, the analysis more nearly
parallels that for usual regression. These highly desirable features and the high
degree of modelling flexibility permitted warrant the development of the approach
for routine use. This thesis does so for the special cases of linear and nonlinear
regression with measurement errors in one explanatory variable. A transparent and
flexible computational approach is developed, the analysis is exhibited on some
examples, and finite sample properties of estimates, approximate standard errors,
and likelihood ratio inference are clarified with simulation. / Graduation date: 2001
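To illustrate the problem being addressed (not the semiparametric maximum likelihood method itself), the following Python sketch shows the attenuation bias that arises when one explanatory variable is measured with error, together with a classical method-of-moments correction that assumes a known error variance; all numbers are illustrative.

```python
# Small simulation of the measurement-error problem the thesis addresses, NOT
# of its semiparametric maximum likelihood method: ordinary least squares on a
# covariate measured with error is attenuated toward zero, while the classical
# method-of-moments correction (known error variance) recovers the true slope.
import numpy as np

rng = np.random.default_rng(2)
n, beta = 5_000, 2.0
sigma_u = 0.8                                   # assumed known measurement-error std dev

x_true = rng.normal(0, 1, n)                    # unobserved true covariate
w = x_true + rng.normal(0, sigma_u, n)          # observed, error-contaminated covariate
y = 1.0 + beta * x_true + rng.normal(0, 1, n)

cov_wy = np.cov(w, y)[0, 1]
slope_naive = cov_wy / np.var(w, ddof=1)                     # biased toward zero
slope_corrected = cov_wy / (np.var(w, ddof=1) - sigma_u**2)  # moment correction

print(f"true {beta:.2f}  naive {slope_naive:.2f}  corrected {slope_corrected:.2f}")
```

The semiparametric approach developed in the thesis targets the same bias but, unlike this moment correction, does so without distributional assumptions on the nuisance parameters.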
|
650 |
The expressive power and declarative attributes of exception handling in Forms/3. Agrawal, Anurag 14 July 1997 (has links)
Exception handling is a programming language feature that can help increase the
reliability of programs. However, not much work has been done on exception handling in
visual programming languages. We present an approach for improving the exception
handling mechanism in Forms/3, a declarative visual programming language based on the
spreadsheet paradigm. We show how this approach can be added without sacrificing
referential transparency and lazy evaluation in Forms/3. We then present a comparison of
the Forms/3 exception handling mechanism with the mechanisms available in Java, C++,
Prograph, Haskell and Microsoft Excel, based on their expressive power. / Graduation date: 1998
|