321 |
Robust Coding Methods For Space-Time Wireless Communications / Hong, Zhihong 17 January 2002 (has links)
HONG, ZHIHONG. Robust Coding Methods For Space-Time Wireless Communications. (Under the direction of Dr. Brian L. Hughes.) Space-time coding can exploit the presence of multiple transmit and receive antennas to increase diversity, spectral efficiency, and received power, thereby improving the performance of wireless communication systems. Thus far, most work on space-time coding has assumed highly idealized channel fading conditions (e.g., quasi-static or ideal fast fading) as well as perfect channel state information at the receiver. Both of these assumptions are often questionable in practice. In this dissertation, we present a new and general coding architecture for multi-antenna communications, which is designed to perform well under a wide variety of channel fading conditions and which (when differentially encoded) does not require accurate channel estimates at the receiver. The architecture combines serial concatenation of short, full-diversity space-time block codes with bit-interleaved coded modulation. Under slow fading conditions, we show that codes constructed in this way achieve full diversity and perform close to the best known space-time trellis codes of comparable complexity. Under fast fading conditions, we show that these same codes can achieve higher diversity than all previously known codes of the same complexity. When used with differential space-time modulation, these codes can be reliably detected with or without channel estimates at the transmitter or receiver. Moreover, when iterative decoding is applied, the performance of these codes can be further improved.
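For readers who want a concrete anchor for the full-diversity space-time block codes mentioned here, the sketch below simulates the classical two-antenna Alamouti scheme over one quasi-static flat-fading channel realization in NumPy. It is only a generic STBC illustration, not the concatenated, bit-interleaved architecture of the dissertation, and the QPSK constellation, noise level, and block length are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def alamouti_tx(symbols):
    """Map symbol pairs onto two antennas over two symbol periods.
    Row k of the returned array holds [antenna1, antenna2] at time k."""
    s1, s2 = symbols[0::2], symbols[1::2]
    t1 = np.stack([s1, s2], axis=1)                      # time 2k
    t2 = np.stack([-np.conj(s2), np.conj(s1)], axis=1)   # time 2k+1
    return np.stack([t1, t2], axis=1).reshape(-1, 2)

def alamouti_rx(r, h):
    """Combine two received samples r = [r1, r2] with channel h = [h1, h2]."""
    r1, r2 = r
    h1, h2 = h
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# QPSK symbols, one quasi-static channel realization, additive noise.
bits = rng.integers(0, 2, size=(200, 2))
syms = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx = alamouti_tx(syms)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(len(tx)) + 1j * rng.standard_normal(len(tx)))
r = tx @ h + noise

detected = []
for k in range(0, len(r), 2):
    s1_hat, s2_hat = alamouti_rx(r[k:k + 2], h)
    detected += [s1_hat, s2_hat]

# Combined estimates are scaled by |h1|^2 + |h2|^2 (the diversity gain).
gain = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
err = np.mean(np.abs(np.array(detected) / gain - syms) ** 2)
print(f"combining gain {gain:.2f}, mean-square symbol error {err:.4f}")
```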
|
322 |
ELECTROMAGNETIC MODELING FOR THE OPTIMIZED DESIGN OF SPATIAL POWER AMPLIFIERS WITH HARD HORN FEEDS / Ozkar, Mete 05 March 2002 (has links)
OZKAR, METE. Electromagnetic modeling for the optimized design of spatial power amplifiers with hard horn feeds. (Under the direction of Amir Mortazawi.) An electromagnetic analysis tool for spatial power amplifiers excited with hard horn feeds is developed. The analysis is performed using a generalized scattering matrix (GSM) approach, which employs different numerical techniques for analyzing the modules that make up the system. The GSM of the overall system is obtained by cascading the GSMs of the individual modules. Numerical techniques such as the finite-difference time-domain (FDTD) method and mode matching are used here for an efficient and accurate analysis of a waveguide-based power combiner/divider structure. This analysis makes it possible to further investigate the effect of the design parameters on the performance of waveguide-based power combiners. The fault tolerance of spatial power combiners against device failures is also investigated through electromagnetic simulation of an already-built spatial amplifier array. This particular simulation is achieved through a combination of different techniques: an existing method-of-moments code and a commercial finite element method (FEM) solver.
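As a minimal sketch of the GSM bookkeeping described here, the function below cascades two module scattering matrices using the standard two-port block formulas, where the matrix resolvents account for multiple reflections between the modules. The 2-mode random matrices are placeholders, not data from the thesis; in the actual analysis each block would come from FDTD or mode matching.

```python
import numpy as np

def cascade_gsm(SA, SB):
    """Cascade two generalized scattering matrices given as block dicts with
    keys '11', '12', '21', '22' (each an n_modes x n_modes array).
    Module A's output port is connected to module B's input port."""
    I = np.eye(SA['22'].shape[0])
    GA = np.linalg.inv(I - SB['11'] @ SA['22'])   # reflections seen from A
    GB = np.linalg.inv(I - SA['22'] @ SB['11'])   # reflections seen from B
    return {
        '11': SA['11'] + SA['12'] @ GA @ SB['11'] @ SA['21'],
        '12': SA['12'] @ GA @ SB['12'],
        '21': SB['21'] @ GB @ SA['21'],
        '22': SB['22'] + SB['21'] @ GB @ SA['22'] @ SB['12'],
    }

# Toy example: two 2-mode modules filled with arbitrary placeholder numbers.
rng = np.random.default_rng(1)

def random_gsm(n):
    return {key: 0.4 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
            for key in ('11', '12', '21', '22')}

S_total = cascade_gsm(random_gsm(2), random_gsm(2))
print(np.round(S_total['21'], 3))   # overall transmission block of the cascade
```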
|
323 |
Surface kinetics of common semiconductor materials in FC plasmas / Nelson, Caleb Timothy, January 2007 (has links)
Thesis (M.S.) -- University of Texas at Dallas, 2007. / Includes vita. Includes bibliographical references (leaves 63-66).
|
324 |
A MULTI-GIGABIT NETWORK PACKET INSPECTION AND ANALYSIS ARCHITECTURE FOR INTRUSION DETECTION AND PREVENTION UTILIZING PIPELINING AND CONTENT-ADDRESSABLE MEMORY / Repanshek, Jacob J. 28 January 2005 (has links)
Increases in network traffic volume and transmission speeds have given rise to the need for extremely fast packet processing. Many traditional processor-based network devices are no longer sufficient to handle tasks such as packet analysis and intrusion detection at multi-gigabit rates. This thesis proposes two novel pipelined hardware architectures to relieve the computational load of a processor within network switches and routers. First, the Embedded Protocol Analyzer Pre-Processor (ePAPP) is capable of taking an unclassified packet byte stream directly off a network cable at line speed and separating the data into individually classified protocol fields. Second, the CAM-Assisted Signature-Matching Architecture (CASMA) uses ternary content-addressable memory to perform stateless intrusion-detection signature matching. The Snort open-source software network intrusion detection system is used as a model for intrusion detection functionality. Structured-ASIC synthesis results show that ePAPP supports speeds of 2.89 Gb/s using less than 1% of available logic cells, and that CASMA supports 1.25 Gb/s using less than 6% of available logic cells. CASMA is shown to implement 1729 of the 1993 attack signatures (86.8%), or rules, packaged with Snort version 2.1.2.
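A rough software picture of the ternary matching that CASMA performs in hardware: each TCAM entry is emulated below as a (value, mask) byte pair and slid across a packet payload one byte per "cycle". The entry width, rules, and payload are invented for illustration and are not taken from the Snort rule set or from the thesis.

```python
# Software emulation of ternary content-addressable memory (TCAM) matching:
# each entry stores a byte pattern plus a mask, where mask byte 0xFF means
# "must match exactly" and 0x00 means "don't care".

ENTRY_WIDTH = 8  # bytes compared per lookup; an arbitrary choice here

def make_entry(pattern: bytes):
    """Pad a pattern to ENTRY_WIDTH; padding bytes become don't-cares."""
    value = pattern.ljust(ENTRY_WIDTH, b'\x00')
    mask = b'\xff' * len(pattern) + b'\x00' * (ENTRY_WIDTH - len(pattern))
    return value, mask

def tcam_match(window: bytes, entries):
    """Return indices of entries whose masked value equals the masked window."""
    hits = []
    for idx, (value, mask) in enumerate(entries):
        if all((w & m) == (v & m) for w, v, m in zip(window, value, mask)):
            hits.append(idx)
    return hits

def scan_payload(payload: bytes, entries):
    """Slide an ENTRY_WIDTH window over the payload, one byte per 'cycle'."""
    alerts = []
    for offset in range(len(payload) - ENTRY_WIDTH + 1):
        for idx in tcam_match(payload[offset:offset + ENTRY_WIDTH], entries):
            alerts.append((offset, idx))
    return alerts

entries = [make_entry(b"/etc/pass"[:ENTRY_WIDTH]), make_entry(b"cmd.exe")]
payload = b"GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir HTTP/1.0"
print(scan_payload(payload, entries))   # -> [(offset, rule_index), ...]
```

In an actual TCAM all entries are compared in parallel in a single lookup, which is what makes the hardware approach fast at line rate; the Python loop above is only a functional model of that comparison.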
|
325 |
Transmitting Biological Waveforms Using a Cellular Phone / Roche, Paul A. 28 January 2005 (has links)
There exists a need to remotely monitor fully mobile patients in their natural environments. Monitoring a patient's biological waveforms can track the patient's vital signs or facilitate the diagnosis of a disease, which could then be treated to help prolong and/or improve the subject's life. If a patient must be monitored without the delay associated with delivering data stored on a recording device, biotelemetry is necessary. Biotelemetry entails transmitting biological waveforms to a remote site for recording, processing and analysis. Because of the limitations of the currently popular methods of biotelemetry, this thesis proposes the use of the increasingly prevalent cellular phone system. An adapter design is developed to facilitate biotelemetry using the most common features of a cell phone, without requiring modification of the phone, so that the approach remains affordable. Because cell phones notoriously confound sensitive medical equipment, especially patient-connected devices, they are normally kept at a distance from such equipment. Using a cell phone to transmit biological waveforms, however, requires that it operate in close proximity to patient-connected devices. The adapter must therefore amplify the waveforms while rejecting cell phone interference to achieve an adequate signal-to-noise ratio. Because the frequency range of most biological data does not conform to the passband of the phone system, the adapter must also modulate the biological data. To limit the adapter's size and weight, the design draws on the cell phone's battery power. Methods are also introduced to receive and reconstruct high-fidelity representations of the original biological waveform.
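The passband mismatch the adapter must bridge can be illustrated with a simple frequency-modulation sketch: a slowly varying waveform (a synthetic heartbeat-like signal here, purely for illustration) is shifted onto an audio-band carrier that a phone's voice channel can pass, then recovered by FM demodulation. The carrier frequency, deviation, and sample rate below are assumptions, not values from the thesis.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000                      # sample rate typical of telephone audio
t = np.arange(0, 4.0, 1 / fs)

# Synthetic low-frequency "biological" waveform (illustrative only).
biosignal = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.sin(2 * np.pi * 2.6 * t)
biosignal /= np.max(np.abs(biosignal))

fc = 1500.0                    # carrier inside the ~300-3400 Hz voice passband
deviation = 400.0              # peak frequency deviation in Hz (assumed)

# FM modulation: instantaneous phase is the running sum of instantaneous frequency.
phase = 2 * np.pi * np.cumsum(fc + deviation * biosignal) / fs
tx_audio = np.cos(phase)       # what would be fed into the phone's audio path

# Crude FM demodulation at the far end: instantaneous frequency from the
# unwrapped phase of the analytic signal.
analytic = hilbert(tx_audio)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
recovered = (inst_freq - fc) / deviation

interior = slice(1000, -1000)  # ignore Hilbert-transform edge effects
err = np.mean(np.abs(recovered[interior] - biosignal[1:][interior]))
print(f"mean reconstruction error over the interior: {err:.3e}")
```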
|
326 |
NEW CHANGE DETECTION MODELS FOR OBJECT-BASED ENCODING OF PATIENT MONITORING VIDEO / Liu, Qiang 21 June 2005 (has links)
The goal of this thesis is to find a highly efficient algorithm to compress patient monitoring video. This type of video mainly contains local motions and a large percentage of idle periods. To exploit these features, we present an object-based approach that decomposes the input video into three objects representing the background, the slow-motion foreground and the fast-motion foreground. Encoding these three video objects with different temporal scalabilities significantly improves the coding efficiency in terms of bitrate versus visual quality.

The video decomposition is built upon change detection, which identifies content changes between video frames. To improve the robustness of capturing small changes, we contribute two new change detection models. The first, built upon Markov random field theory, discriminates the foreground containing the patient being monitored. The second, called the covariance test method, identifies constantly changing content by exploiting temporal correlation across multiple video frames. Both models show great effectiveness in constructing the defined video objects. We present detailed algorithms for video object construction, as well as experimental results on the object-based coding of patient monitoring video.
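As a simplified stand-in for the change-detection ideas above (not the Markov-random-field or covariance-test models developed in the thesis), the sketch below labels each pixel of a synthetic frame stack as background, slow-motion foreground, or fast-motion foreground from windowed temporal variance. The thresholds and the synthetic footage are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def synthetic_frames(n_frames=60, h=48, w=64):
    """Static background + a slowly drifting patch + a rapidly flickering patch.
    Purely synthetic stand-in for patient-monitoring footage."""
    frames = np.full((n_frames, h, w), 100.0)
    for k in range(n_frames):
        top = 10 + k // 10                      # slow motion: shifts every 10 frames
        frames[k, top:top + 8, 10:18] = 180.0
        if k % 2 == 0:                          # fast motion: flickers every frame
            frames[k, 30:38, 40:48] = 220.0
    return frames + rng.normal(0, 2.0, size=(n_frames, h, w))   # sensor noise

def classify_pixels(frames, win=8, t_change=25.0, t_fast=0.5):
    """Label each pixel 0=background, 1=slow foreground, 2=fast foreground.
    A pixel is 'changing' in a window if its temporal std exceeds t_change;
    it is 'fast' if it is changing in at least a fraction t_fast of windows."""
    n = frames.shape[0]
    changing = []
    for start in range(0, n - win + 1, win):
        block = frames[start:start + win]
        changing.append(block.std(axis=0) > t_change)
    changing = np.stack(changing)                  # (n_windows, h, w) booleans
    frac = changing.mean(axis=0)
    labels = np.zeros(frames.shape[1:], dtype=int)
    labels[frac > 0] = 1                           # changed at least once: slow
    labels[frac >= t_fast] = 2                     # changing most of the time: fast
    return labels

labels = classify_pixels(synthetic_frames())
print("background, slow, fast pixel counts:", np.bincount(labels.ravel(), minlength=3))
```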
|
327 |
Low Bit-rate Color Video Compression using Multiwavelets in Three Dimensions / Chien, Jong-Chih 21 June 2005 (has links)
In recent years, wavelet-based video compression has become a major focus of research because of the advantages it provides. More recently, a growing body of studies has explored the use of multiple scaling functions and multiple wavelets with desirable properties in various fields, from image de-noising to compression. In terms of data compression, multiple scaling functions and wavelets offer greater flexibility in coefficient quantization at high compression ratios than a comparable single wavelet. The purpose of this research is to investigate the possible improvement of scalable wavelet-based color video compression at low bit-rates by using three-dimensional multiwavelets. The first part of this work included the development of the spatio-temporal decomposition process for multiwavelets and the implementation of an efficient 3-D SPIHT encoder/decoder as a common platform for performance evaluation of two well-known multiwavelet systems against a comparable single wavelet in low bit-rate color video compression. The second part involved the development of a motion-compensated 3-D compression codec and a modified SPIHT algorithm designed specifically for this codec by incorporating an advantage of the 2-D SPIHT design into the 3-D SPIHT coder. In an experiment comparing their performances, the 3-D motion-compensated codec with unmodified 3-D SPIHT showed gains of 0.3 dB to 4.88 dB over a conventional 2-D wavelet-based motion-compensated codec using 2-D SPIHT in the coding of 19 endoscopy sequences at a 1/40 compression ratio. The effectiveness of the modified SPIHT algorithm was verified in a second experiment, in which it was used to re-encode the 4 of the 19 sequences with the lowest performance gains and improved them by 0.5 dB to 1.0 dB. The last part of the investigation examined the effect of multiwavelet packets on 3-D video compression, as well as the effect of coding multiwavelet packets based on the frequency order and energy content of individual subbands.
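For orientation, the sketch below performs one level of a separable 3-D Haar decomposition of a tiny video volume in plain NumPy, just to show how the eight spatio-temporal subbands arise. It uses a single scalar wavelet rather than the multiwavelet systems studied in the thesis, and it omits motion compensation and SPIHT entirely; the toy "video" is an assumption for illustration.

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the Haar transform along a given axis (even-length axis)."""
    x = np.moveaxis(x, axis, 0)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    dif = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return np.moveaxis(avg, 0, axis), np.moveaxis(dif, 0, axis)

def haar_3d(volume):
    """One level of separable 3-D Haar analysis over (t, y, x).
    Returns a dict of the eight spatio-temporal subbands, e.g. 'LLL', 'HLL',
    where the first letter is the temporal band."""
    subbands = {'': volume}
    for axis in range(3):                    # t, then y, then x
        new = {}
        for name, band in subbands.items():
            lo, hi = haar_1d(band, axis)
            new[name + 'L'] = lo
            new[name + 'H'] = hi
        subbands = new
    return subbands

# Tiny synthetic "video": 8 frames of 16x16 with a moving bright square.
video = np.zeros((8, 16, 16))
for k in range(8):
    video[k, 4:8, k:k + 4] = 1.0

bands = haar_3d(video)
for name in sorted(bands):
    print(name, bands[name].shape, f"energy={np.sum(bands[name] ** 2):.2f}")
```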
|
328 |
A Complete Characterization of Nash Solutions in Ordinal Games / Peterson, Joshua Michael 21 June 2005 (has links)
The traditional field of cardinal game theory requires that the objective functions, which map the control variables of each player into a decision space on the real numbers, be well defined. Often in economics, business, and political science, these objective functions are difficult, if not impossible, to formulate mathematically. The theory of ordinal games has been developed, in part, to overcome this problem.
Ordinal games define the decision space in terms of player preferences, rather than objective function values. This concept allows the techniques of cardinal game theory to be applied to ordinal games. Not surprisingly, an infinite number of cardinal games of a given size exist. However, only a finite number of corresponding ordinal games exist.
This thesis seeks to explore and characterize this finite set of ordinal games. We first present a general formula for the number of two-player ordinal games of an arbitrary size. We then completely characterize each 2x2 and 3x3 ordinal game based on its relationship to the Nash solution. This categorization partitions the finite space of ordinal games into three sectors: games with a single, unique Nash solution; games with multiple Nash solutions; and games with no Nash solution. This characterization approach, however, is not scalable to games larger than 3x3 because of the exponentially increasing dimensionality of the search space. The results for both 2x2 and 3x3 ordinal games are therefore codified in an algorithm capable of characterizing ordinal games of arbitrary size. The output of this algorithm, implemented on a PC, is presented for games as large as 6x6; larger games require a more powerful computer. Finally, two applications of this characterization are presented to illustrate the usefulness of our approach.
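A minimal sketch of the Nash-based classification idea, assuming each player's ordinal preferences are given as rank matrices (larger rank preferred): pure Nash outcomes are the cells that are simultaneously best responses for both players, and a game is labelled by whether it has one, several, or no such outcome. The example games are arbitrary ordinal analogues of familiar 2x2 games, not cases drawn from the thesis.

```python
import numpy as np

def pure_nash_outcomes(row_pref, col_pref):
    """Return the set of (i, j) cells that are pure-strategy Nash outcomes.
    row_pref[i, j] and col_pref[i, j] are ordinal ranks (larger = preferred)."""
    row_pref, col_pref = np.asarray(row_pref), np.asarray(col_pref)
    nash = set()
    for i in range(row_pref.shape[0]):
        for j in range(row_pref.shape[1]):
            row_best = row_pref[i, j] >= row_pref[:, j].max()   # no better row reply
            col_best = col_pref[i, j] >= col_pref[i, :].max()   # no better column reply
            if row_best and col_best:
                nash.add((i, j))
    return nash

def classify(row_pref, col_pref):
    n = len(pure_nash_outcomes(row_pref, col_pref))
    return "unique Nash" if n == 1 else ("no Nash" if n == 0 else "multiple Nash")

# Ordinal analogue of a Prisoner's-Dilemma-like game (ranks 1..4): unique Nash.
pd_row = [[3, 1], [4, 2]]
pd_col = [[3, 4], [1, 2]]
# Ordinal matching-pennies-like game: no pure Nash outcome.
mp_row = [[2, 1], [1, 2]]
mp_col = [[1, 2], [2, 1]]

print(classify(pd_row, pd_col))   # -> unique Nash
print(classify(mp_row, mp_col))   # -> no Nash
```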
|
329 |
Identification of Transient Speech Using Wavelet Transforms / Rasetshwane, Daniel Motlotle 21 June 2005 (has links)
It is generally believed that abrupt stimulus changes, which in speech may be time-varying frequency edges associated with consonants, transitions between consonants and vowels, and transitions within vowels, are critical to the perception of speech by humans and to speech recognition by machines. Noise affects speech transitions more than it affects quasi-steady-state speech. I believe that identifying and selectively amplifying speech transitions may enhance the intelligibility of speech in noisy conditions. The purpose of this study is to evaluate the use of wavelet transforms to identify speech transitions. Using wavelet transforms may be computationally efficient and allow for real-time applications. The discrete wavelet transform (DWT), stationary wavelet transform (SWT) and wavelet packets (WP) are evaluated. Wavelet analysis is combined with variable frame rate processing to improve the identification process. Variable frame rate processing can identify time segments when speech feature vectors are changing rapidly and when they are relatively stationary. Energy profiles for words, which show the energy in each node of a speech signal decomposed using wavelets, are used to identify nodes that include predominantly transient information and nodes that include predominantly quasi-steady-state information, and these are used to synthesize transient and quasi-steady-state speech components. These speech components are estimates of the tonal and non-tonal speech components, which Yoo et al. identified using time-varying band-pass filters. Comparison of spectra, a listening test, and mean-squared errors between the transient components synthesized using wavelets and Yoo's non-tonal components indicated that wavelet packets gave the best estimates of Yoo's components. An algorithm that incorporates variable frame rate analysis into wavelet packet analysis is proposed. Developing this algorithm involves choosing a wavelet function and a decomposition level. The algorithm itself has four steps: wavelet packet decomposition; classification of terminal nodes; incorporation of variable frame rate processing; and synthesis of speech components. Combining wavelet analysis with variable frame rate analysis provides the best estimates of Yoo's speech components.
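As a small, self-contained illustration of the node-energy profiles mentioned above, the sketch below builds a full three-level Haar wavelet-packet tree for a short synthetic tone-plus-click signal and reports the energy share of each terminal node. The wavelet, depth, and signal are assumptions for illustration; the thesis works with real speech, other wavelet functions, and variable-frame-rate processing on top of this.

```python
import numpy as np

def haar_step(x):
    """One analysis step: (low-pass, high-pass) halves of an even-length signal."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def wavelet_packet(x, depth):
    """Full wavelet-packet decomposition; returns {node_path: coefficients},
    where node_path is a string of 'L'/'H' choices from the root."""
    nodes = {'': np.asarray(x, dtype=float)}
    for _ in range(depth):
        nxt = {}
        for path, coeffs in nodes.items():
            lo, hi = haar_step(coeffs)
            nxt[path + 'L'], nxt[path + 'H'] = lo, hi
        nodes = nxt
    return nodes

fs = 8000
t = np.arange(0, 0.128, 1 / fs)                     # 1024 samples
signal = np.sin(2 * np.pi * 300 * t)                # quasi-steady-state tone
signal[512:520] += 2.0 * np.hanning(8)              # short transient "click"

leaves = wavelet_packet(signal, depth=3)
profile = {path: float(np.sum(c ** 2)) for path, c in leaves.items()}
total = sum(profile.values())
for path in sorted(profile):
    print(f"node {path}: {100 * profile[path] / total:5.1f}% of energy")
```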
|
330 |
Speech Decomposition and Enhancement / Yoo, Sungyub 14 October 2005 (has links)
The goal of this study is to investigate the roles of steady-state speech sounds and transitions between these sounds in the intelligibility of speech. The motivation for this approach is that the auditory system may be particularly sensitive to time-varying frequency edges, which in speech are produced primarily by transitions between vowels and consonants and within vowels. The possibility that selectively amplifying these edges may enhance speech intelligibility is examined.
Computer algorithms to decompose speech into two different components were developed. One component, which is defined as a tonal component, was intended to predominately include formant activity. The second component, which is defined as a non-tonal component, was intended to predominately include transitions between and within formants.
The approach to the decomposition is to use a set of time-varying filters whose center frequencies and bandwidths are controlled to identify the strongest formant components in speech. Each center frequency and bandwidth is estimated based on FM and AM information of each formant component. The tonal component is composed of the sum of the filter outputs. The non-tonal component is defined as the difference between the original speech signal and the tonal component.
The relative energy and intelligibility of the tonal and non-tonal components were compared to those of the original speech. Psychoacoustic growth functions were used to assess intelligibility. Most of the speech energy was in the tonal component, but this component had a significantly lower maximum word recognition than either the original speech or the non-tonal component. The non-tonal component averaged only 2% of the original speech energy, yet its maximum word recognition was almost equal to that of the original speech.

The non-tonal component was amplified and recombined with the original speech to generate enhanced speech. The energy of the enhanced speech was adjusted to equal that of the original speech, and the intelligibility of the enhanced speech was compared to that of the original speech in background noise. The enhanced speech showed significantly higher recognition scores at lower SNRs, while the original and enhanced speech showed similar recognition scores at higher SNRs. These results suggest that amplification of transient information can enhance speech in noise, and that this enhancement method is most effective in severe noise conditions.
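A crude frame-based caricature of the tonal/non-tonal split described above (not the FM/AM-tracked time-varying filters of this study): per frame, the strongest DFT bins are kept as a "tonal" estimate and the residual is taken as "non-tonal". The frame length, number of peaks, and synthetic vowel-like signal are assumptions for illustration only.

```python
import numpy as np

def tonal_split(signal, frame_len=256, n_peaks=4):
    """Per-frame decomposition: 'tonal' keeps the n_peaks strongest DFT bins,
    'non-tonal' is the residual. Frames are non-overlapping and rectangular,
    which is deliberately simplistic."""
    n = len(signal) // frame_len * frame_len
    tonal = np.zeros(n)
    for start in range(0, n, frame_len):
        frame = signal[start:start + frame_len]
        spec = np.fft.rfft(frame)
        keep = np.argsort(np.abs(spec))[-n_peaks:]   # strongest formant-like bins
        pruned = np.zeros_like(spec)
        pruned[keep] = spec[keep]
        tonal[start:start + frame_len] = np.fft.irfft(pruned, n=frame_len)
    nontonal = signal[:n] - tonal
    return tonal, nontonal

# Synthetic vowel-like signal: two steady "formants" plus a brief noise burst.
fs = 8000
t = np.arange(0, 0.512, 1 / fs)
speech = 0.8 * np.sin(2 * np.pi * 500 * t) + 0.4 * np.sin(2 * np.pi * 1500 * t)
speech[2000:2064] += 0.6 * np.random.default_rng(3).standard_normal(64)

tonal, nontonal = tonal_split(speech)
energy = lambda x: np.sum(x ** 2)
print(f"tonal energy share: {energy(tonal) / energy(speech[:len(tonal)]):.3f}")
print(f"non-tonal energy share: {energy(nontonal) / energy(speech[:len(nontonal)]):.3f}")
```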
|