281

Improvement Potential and Equalization Circuit Solutions for Multi-drop DRAM Memory Buses

Fredriksson, Henrik January 2008 (has links)
Digital computers have changed human society in a profound way over the last 50 years. Key properties that contribute to the success of the computer are flexible programmability and fast access to large amounts of data and instructions. Effective access to algorithms and data is a fundamental property that limits the capabilities of computer systems. In PC computers, the main memory consists of dynamic random access memory (DRAM). Communication between memory and processor has traditionally been performed over a multi-drop bus. Signal frequencies on these buses have gradually increased in order to keep up with the progress in integrated circuit data processing capabilities. Increased signal frequencies have exposed the inherent signal degradation effects of a multi-drop bus structure. To date, the main approach to tackling these effects has been to reduce the number of endpoints of the bus structure. Although improvements in DRAM technology have increased the available memory size at each endpoint, the increase has not fully met the demand for larger system memory capacity. Various changes to the bus structure have been used to overcome this problem; all are compromises between access latency, data transmission capacity, memory capacity, and implementation cost. In this thesis we focus on using the signal processing capabilities of modern integrated circuit technology as an alternative to structural changes to the bus. This approach has the potential to give low latency, high memory capacity, and relatively high data transmission capacity, at an additional cost limited to integrated circuit blocks. We first use information theory to estimate the unexplored potential of existing multi-drop bus structures, thereby showing that the reduction of the number of endpoints of multi-drop buses is by no means dictated by the fundamental limit on the data transmission capacity of the bus structure. Two test chips have been designed and fabricated to experimentally demonstrate the feasibility of data rates of several Gb/s over multi-drop buses, with limited cost overhead and no latency penalty. The test chips implement decision feedback equalization adapted for high-speed multi-drop use. The equalizers feature digital filter implementations which, in combination with high-speed DACs, enable the use of long digital filters for high-speed decision feedback equalization. Blind adaptation has also been implemented to demonstrate extraction of channel characteristics during data transmission. Single-sided equalization is proposed in order to limit the equalization hardware to the host side of a DRAM memory bus. Furthermore, we propose to utilize the reciprocity of the communication channel to ensure that single-sided equalization can be performed without any channel characterization hardware on the memory chips. Finally, issues related to the evaluation of high-speed channels are addressed, and the on-chip structures used for channel evaluation in this project are presented.
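The decision feedback equalization and blind adaptation mentioned above can be illustrated with a small behavioral model. The sketch below is not Fredriksson's circuit implementation (which realizes long digital filters with high-speed DACs); it is a minimal symbol-rate DFE with decision-directed LMS adaptation, assuming binary NRZ signaling and post-cursor ISI only. All names and parameter values are illustrative.

```python
import numpy as np

def dfe_binary(rx, n_fb, mu=0.005):
    """Behavioral decision feedback equalizer for a binary (NRZ) stream.

    rx   : received signal sampled once per symbol
    n_fb : number of feedback taps (post-cursor ISI span)
    mu   : LMS step size for blind, decision-directed adaptation
    """
    fb = np.zeros(n_fb)                 # feedback filter coefficients
    past = np.zeros(n_fb)               # recent hard decisions, newest first
    out = np.empty(len(rx))
    for n, r in enumerate(rx):
        z = r - fb @ past               # subtract estimated post-cursor ISI
        d = 1.0 if z >= 0 else -1.0     # slicer (hard decision)
        fb += mu * (z - d) * past       # LMS update toward the channel taps
        past = np.roll(past, 1)
        past[0] = d
        out[n] = d
    return out, fb

# Quick demo on a synthetic channel with two post-cursor reflections
rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=5000)
rx = bits.copy()
rx[1:] += 0.4 * bits[:-1]               # assumed first post-cursor tap
rx[2:] += 0.2 * bits[:-2]               # assumed second post-cursor tap
decisions, taps = dfe_binary(rx, n_fb=2)
print(taps)                             # converges toward [0.4, 0.2]
```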
282

High-Speed Link Modeling: Analog/Digital Equalization and Modulation Techniques

Lee, Keytaek May 2012 (has links)
High-speed serial input-output (I/O) links require advanced equalization and modulation techniques to mitigate the inter-symbol interference (ISI) caused by multi-Gb/s signaling over band-limited channels. Growing constraints on transceiver power and area have driven ongoing interest in analog-to-digital converter (ADC) based links, which allow for robust equalization and flexible adaptation to advanced signaling. With diverse options for ISI control, link performance analysis for complicated transceiver architectures is very important. This work presents advanced statistical modeling of ADC-based links, a performance comparison of existing modulation and equalization techniques, and a proposed hybrid ADC-based receiver that achieves further power savings in digital equalization. Statistical analysis precisely estimates high-speed link margins under given implementation constraints and at low target bit-error rates (BER), typically ranging from 1e-12 to 1e-15, by applying proper statistical bounds on noise and distortion. The proposed statistical ADC-based link model utilizes the bounded probability density function (PDF) of the limited quantization distortion (4-6 bits) through digital feed-forward and decision feedback equalizers (FFE-DFE) to improve low target BER estimation. Based on this statistical modeling, this work surveys the impact of insufficient equalization, jitter, and crosstalk on the choice among two- and four-level pulse amplitude modulation (PAM-2 and PAM-4, respectively) and duobinary signaling, and the reduction of ADC resolution enabled by a partial analog equalizer (PAE). While the channel loss at the effective Nyquist frequency and the signaling constellation loss initially guide modulation selection, the statistical analysis results show that PAM-4 best tolerates jitter and crosstalk, and duobinary requires the least equalization complexity. Meanwhile, despite robust digital equalization, high-speed ADC complexity and power consumption remain a critical bottleneck, so a PAE is needed to reduce the ADC resolution requirement. The statistical analysis shows that up to 8 bits of resolution are required for 12.5 Gb/s data communication over a channel with 46 dB of loss without a PAE, while a 5-bit ADC is enough with a 3-tap FFE PAE. For optimal ADC resolution reduction by PAE, digital equalizer complexity also increases to provide enough margin to tolerate the significant quantization distortion. The proposed hybrid receiver defines unreliable signal thresholds by statistical analysis and selectively applies additional digital equalization to save the otherwise increasing dynamic power consumption of the digital circuitry. Simulation results show that the hybrid receiver saves at least 64% of the digital equalization power with a 3-tap FFE PAE at a 12.5 Gb/s data rate over channels with up to 46 dB of loss. Finally, this work shows that the use of an embedded-DFE ADC in the hybrid receiver is limited by error propagation.
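The statistical link analysis described above can be sketched compactly. The snippet below is a generic illustration of the core idea (exactly enumerating the residual-ISI amplitude distribution and averaging a Gaussian noise tail over it), not Lee's model, which additionally carries a bounded quantization-distortion PDF through the FFE-DFE; the tap values and noise level are made up for the demo.

```python
import numpy as np
from scipy.special import erfc

def ber_statistical(isi_taps, sigma_noise, main_cursor=1.0):
    """Statistical BER estimate for binary signaling with residual ISI.

    Each residual tap contributes +h or -h with probability 1/2; the
    full ISI distribution is enumerated exactly (fine for a handful of
    taps; production tools bin amplitudes on a grid instead).
    """
    amps = np.array([0.0])
    probs = np.array([1.0])
    for h in isi_taps:
        amps = np.concatenate([amps + h, amps - h])
        probs = np.concatenate([probs, probs]) * 0.5
    # For a transmitted '+1', an error occurs when noise drives the
    # sample below zero: P = Q((main_cursor + isi) / sigma). By symmetry
    # the '-1' case is identical.
    q = 0.5 * erfc((main_cursor + amps) / (sigma_noise * np.sqrt(2.0)))
    return float(np.sum(probs * q))

# Illustrative numbers: three small residual taps, moderate noise
print(ber_statistical([0.08, -0.05, 0.03], sigma_noise=0.12))  # ~1e-12
```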
283

Am I rich enough to do well in math? Math test scores and socio-economic status

Southworth, Brian May 2006 (has links)
This study investigated the relationship between math test scores and socio-economic status (SES). The model for this study was separated into three segments: student role performance (SRP), school, and family. SES was divided into three categories: low, mid, and high. This study used data from the Educational Longitudinal Study of 2002 (ELS). It was found that students with a higher SES have higher math test scores. It was also found that students from large families of low and mid SES scored lower on math tests. Further study is needed to determine which aspects of SES have the greatest effect on math test scores. / Thesis (M.A.)--Wichita State University, College of Liberal Arts and Sciences. / "May 2006." / Includes bibliographic references (leaves 27-32)
284

Equalization and the offshore accords of 2005

Metz, Ashley Corinne 16 October 2006
The ad hoc Offshore Accords signed between the Martin government and each of Newfoundland and Nova Scotia have fundamentally altered the landscape of regional redistribution in Canada. The fallout from the Accords has had an immediate impact on the functioning of the Equalization Program and on the political factors that inform debate over future reforms. This thesis examines the factors that led to the February 2005 Offshore Accords. It also examines the case of Saskatchewan's treatment under the Equalization Program.
285

An equalization technique for high rate OFDM systems

Yuan, Naihua 05 December 2003
In a typical orthogonal frequency division multiplexing (OFDM) broadband wireless communication system, a guard interval using a cyclic prefix is inserted to avoid inter-symbol interference and inter-carrier interference. This guard interval must be at least as long as the maximum channel delay spread. The method is very simple, but it reduces the transmission efficiency, and the efficiency becomes very low in systems that exhibit a long channel delay spread with a small number of sub-carriers, such as the IEEE 802.11a wireless LAN (WLAN). To increase the transmission efficiency, a time domain equalizer (TEQ) is commonly included in an OFDM system to shorten the effective channel impulse response to within the guard interval. Many TEQ algorithms have been developed for low-rate OFDM applications such as the asymmetric digital subscriber line (ADSL); their drawback is a high computational load, and most of the popular TEQ algorithms are not suitable for the IEEE 802.11a system, a high data rate wireless LAN based on the OFDM technique. In this thesis, a TEQ algorithm based on the minimum mean square error criterion is investigated for the high-rate IEEE 802.11a system. The algorithm has comparatively low computational complexity, making it practical for high data rate OFDM systems. In forming the model used to design the TEQ, a reduced convolution matrix is exploited to lower the computational complexity. Mathematical analysis and simulation results are provided to show the validity and the advantages of the algorithm. In particular, it is shown that a high performance gain at a data rate of 54 Mbps can be obtained with a moderate-order TEQ finite impulse response (FIR) filter. The algorithm is implemented in a field programmable gate array (FPGA). The characteristics and regularities of the matrix elements are further exploited to reduce the hardware complexity of the matrix multiplication. The optimum TEQ coefficients can be found in less than 4 µs for a 7th-order TEQ FIR filter; this is the interval of one OFDM symbol in the IEEE 802.11a system. To compensate for the effective channel impulse response, a 64-point radix-4 pipelined fast Fourier transform block is implemented in the FPGA to perform zero-forcing equalization in the frequency domain. The deviations between the hardware implementation and the mathematical calculations are reported and analyzed, and the system performance loss introduced by the hardware implementation is tested. Hardware implementation output and simulation results verify that the chips function properly and satisfy the requirements of a system running at a data rate of 54 Mbps.
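The zero-forcing frequency-domain equalization mentioned at the end of the abstract reduces, once the TEQ has confined the effective channel within the guard interval, to one complex division per subcarrier. Below is a minimal NumPy model of that step, assuming a 64-point FFT and 16-sample cyclic prefix as in IEEE 802.11a and a channel already shorter than the prefix; names and values are illustrative, not the thesis's FPGA design.

```python
import numpy as np

def ofdm_zf_equalize(rx_blocks, h, n_fft=64, cp_len=16):
    """Per-subcarrier zero-forcing equalization for OFDM.

    rx_blocks : received time-domain symbols, shape (n_sym, n_fft + cp_len)
    h         : channel impulse response, assumed shorter than the prefix
    """
    H = np.fft.fft(h, n_fft)              # channel frequency response
    data = rx_blocks[:, cp_len:]          # strip the cyclic prefix
    return np.fft.fft(data, axis=1) / H   # one complex divide per bin

# Round-trip check with one QPSK OFDM symbol over a short multipath channel
rng = np.random.default_rng(1)
syms = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
tx = np.fft.ifft(syms)
tx_cp = np.concatenate([tx[-16:], tx])    # prepend 16-sample cyclic prefix
h = np.array([1.0, 0.35 + 0.2j, 0.1])     # illustrative channel, len < CP
rx = np.convolve(tx_cp, h)[: len(tx_cp)]
assert np.allclose(ofdm_zf_equalize(rx[None, :], h)[0], syms)
```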
287

FPGA-based DOCSIS upstream demodulation

Berscheid, Brian Michael 02 September 2011
In recent years, the state of the art in field programmable gate array (FPGA) technology has been advancing rapidly. Consequently, the use of FPGAs is being considered in many applications which have traditionally relied upon application-specific integrated circuits (ASICs). FPGA-based designs have a number of advantages over ASIC-based designs, including lower up-front engineering design costs, shorter time-to-market, and the ability to reconfigure devices in the field. However, ASICs have a major advantage in terms of computational resources. As a result, computationally expensive high-performance ASIC algorithms must be redesigned to fit the limited resources available in an FPGA.

Concurrently, coaxial cable television and internet networks have been undergoing significant upgrades, driven largely by a sharp increase in the use of interactive applications. This has intensified demand for the so-called upstream channels, which allow customers to transmit data into the network. The format and protocol of the upstream channels are defined by a set of standards, known as DOCSIS 3.0, which govern the flow of data through the network.

Critical to DOCSIS 3.0 compliance is the upstream demodulator, which is responsible for the physical layer reception from all customers. Although upstream demodulators have typically been implemented as ASICs, the design of an FPGA-based upstream demodulator is an intriguing possibility, as FPGA-based demodulators could potentially be upgraded in the field to support future DOCSIS standards. Furthermore, the lower non-recurring engineering costs associated with FPGA-based designs could provide an opportunity for smaller companies to compete in this market.

The upstream demodulator must contain complicated synchronization circuitry to detect, measure, and correct for channel distortions. Unfortunately, many of the synchronization algorithms described in the open literature are not suitable for either upstream cable channels or FPGA implementation. In this thesis, computationally inexpensive and robust synchronization algorithms are explored; in particular, algorithms for frequency recovery and equalization are developed.

The many data-aided feedforward frequency offset estimators analyzed in the literature have not considered the intersymbol interference (ISI) caused by micro-reflections in the channel. It is shown in this thesis that many prominent frequency offset estimation algorithms become biased in the presence of ISI. A novel high-performance frequency offset estimator suitable for implementation in an FPGA is derived from first principles. Additionally, a rule is developed for predicting whether a frequency offset estimator will become biased in the presence of ISI; this rule is used to establish a channel excitation sequence which ensures the proposed frequency offset estimator is unbiased.

Adaptive equalizers that compensate for the ISI take a relatively long time to converge, necessitating a lengthy training sequence. The convergence time is reduced using a two-step technique to seed the equalizer: first, the ISI-equivalent model of the channel is estimated in response to a specific short excitation sequence; then, the estimated channel response is inverted with a novel algorithm to initialize the equalizer. It is shown that the proposed technique, while inexpensive to implement in an FPGA, can decrease the length of the required equalizer training sequence by up to 70 symbols.

Finally, it is shown that a preamble segment consisting of repeated 11-symbol Barker sequences, which is well suited to timing recovery, can also be used effectively for frequency recovery and channel estimation. By performing these three functions sequentially using a single set of preamble symbols, the overall length of the preamble may be further reduced.
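To make the preamble-based frequency recovery concrete, the sketch below implements the classical delay-and-correlate estimator over a preamble of repeated 11-symbol Barker sequences. This is the textbook (Moose-style) estimator, not the ISI-robust estimator derived in the thesis, and the symbol rate and noise level are assumed for the demo.

```python
import numpy as np

BARKER11 = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1], dtype=float)

def freq_offset_estimate(rx, period=11, symbol_rate=5.12e6):
    """Delay-and-correlate frequency offset estimate from a periodic preamble.

    Correlating the signal with a copy delayed by one period turns the
    frequency offset into a phase rotation: angle = 2*pi*df*period/symbol_rate.
    The unambiguous range is |df| < symbol_rate / (2 * period).
    """
    c = np.vdot(rx[:-period], rx[period:])    # sum of r[n+L] * conj(r[n])
    return np.angle(c) * symbol_rate / (2 * np.pi * period)

# Demo: 8 Barker repetitions, 20 kHz offset, mild additive noise
rng = np.random.default_rng(2)
rs = 5.12e6                                   # assumed upstream symbol rate
n = np.arange(8 * 11)
pre = np.tile(BARKER11, 8) * np.exp(2j * np.pi * 20e3 * n / rs)
pre += 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
print(freq_offset_estimate(pre, symbol_rate=rs))   # close to 2.0e4
```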
289

Behavior Modeling of a Digital Video Broadcasting System and the Evaluation of its Equalization Methods

Jian, Wang; Yan, Xie January 2010 (has links)
In this thesis, a single-carrier ATSC DTV baseband transmitter and part of the receiver (including the channel estimator and channel equalizer) were modeled. Since multi-path-induced ISI (inter-symbol interference) has the most significant impact on the performance of single-carrier DTV reception, the modeling and implementation of the single-carrier channel estimator and channel equalizer have been the focus of the thesis. We started with an investigation of channel estimation methods. Afterwards, several channel estimators and equalizers were modeled, and the performance of each channel equalization method was evaluated in different scenarios. Our results show that the frequency domain equalizer can achieve low computing cost and handle long delay paths. Another important issue to be considered in block equalization is inter-block interference (IBI). The impact of IBI was investigated via behavior modeling. In the last part of the thesis, two methods for IBI cancellation are compared and a proposal for hardware implementation is given.
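As a baseline for the channel estimation step discussed above, the classical least-squares estimate from a known training sequence fits in a few lines. This is only a generic sketch, not one of the estimators compared in the thesis; the training length and tap count below are arbitrary.

```python
import numpy as np

def ls_channel_estimate(train, rx, n_taps):
    """Least-squares channel estimate from known training symbols.

    train  : transmitted training sequence known at the receiver
    rx     : corresponding received samples (same length as train)
    n_taps : assumed channel impulse response length
    """
    # Convolution (Toeplitz) matrix: column k is the training sequence
    # delayed by k samples, so A @ h reproduces the channel output.
    cols = [np.concatenate([np.zeros(k), train[: len(train) - k]])
            for k in range(n_taps)]
    A = np.stack(cols, axis=1)
    h, *_ = np.linalg.lstsq(A, rx, rcond=None)
    return h

# Demo: recover a 3-tap channel from 100 random training symbols
rng = np.random.default_rng(3)
x = rng.choice([-1.0, 1.0], 100)
y = np.convolve(x, [1.0, 0.4, -0.2])[:100] + 0.01 * rng.standard_normal(100)
print(ls_channel_estimate(x, y, 3))       # close to [1.0, 0.4, -0.2]
```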
290

Fingerprint Segmentation

Jomaa, Diala January 2009 (has links)
In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint from the image under consideration. The algorithm uses three features: mean, variance, and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to improve the quality of the image. Finally, a post-processing technique is implemented to counter undesirable effects in the segmented image. Fingerprint recognition is one of the oldest biometric techniques. Everyone has a unique and unchangeable fingerprint, and based on this uniqueness and distinctness, fingerprint identification has been used in many applications for a long time. A fingerprint image is a pattern consisting of two regions: foreground and background. The foreground contains all the important information needed by an automatic fingerprint recognition system, whereas the background is a noisy region that contributes to the extraction of false minutiae. To avoid extracting false minutiae, several steps should be followed, such as preprocessing and enhancement. One of these steps is the transformation of the fingerprint image from a gray-scale image to a black-and-white image; this transformation is called segmentation or binarization. The aim of fingerprint segmentation is to separate the foreground from the background. Due to the nature of fingerprint images, segmentation is an important and challenging task. The proposed algorithm is applied to the FVC2000 database. Manual examination by human experts shows that the proposed algorithm provides efficient segmentation results, as demonstrated in diverse experiments.
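A block-wise segmentation driven by the three features named above (mean, variance, coherence) might look like the sketch below. The thresholds and the decision rule are illustrative placeholders; they stand in for the thesis's rule system and omit its split-and-merge, modified Otsu, and post-processing stages.

```python
import numpy as np

def segment_fingerprint(img, block=16, var_thresh=150.0, coh_thresh=0.4):
    """Foreground mask from per-block mean, variance, and coherence.

    Coherence of the gradient structure tensor is near 1 where gradients
    share one orientation (ridges) and near 0 in isotropic background.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)
    g_mean = img.mean()
    mask = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = img[i:i+block, j:j+block]
            bx = gx[i:i+block, j:j+block]
            by = gy[i:i+block, j:j+block]
            gxx, gyy, gxy = (bx * bx).sum(), (by * by).sum(), (bx * by).sum()
            denom = gxx + gyy
            coh = np.hypot(gxx - gyy, 2 * gxy) / denom if denom > 1e-9 else 0.0
            # Illustrative rule: ridge blocks are darker than the global
            # mean, high-variance, and strongly oriented.
            if blk.mean() < g_mean and blk.var() > var_thresh and coh > coh_thresh:
                mask[i:i+block, j:j+block] = True
    return mask
```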
