81

Generative Processes for Audification

Jackson, Judith 14 December 2018 (has links)
No description available.
82

A Structured ASIC Approach to a Radiation Hardened by Design Digital Single Sideband Modulator for Digital Radio Frequency Memories

Pemberton, Thomas B. 30 June 2010 (has links)
No description available.
83

A digital signal processing approach to analyze the effects of multiple reflections between highway noise barriers

Ghent, Jeremy E. January 2003 (has links)
No description available.
84

Digital Signal Processing of Neurocardiac Signals in Patients with Congestive Heart Failure / DSP of Neurocardiac Signals in Patients with CHF

Capogna, Joshua 08 1900 (has links)
Recent work has found that frequency-domain and time-domain analysis of heart rate variability (HRV) signals can provide significant insights into the function of the heart in healthy subjects and in patients with heart disease. Congestive heart failure is an important clinical problem, and it is hoped that this work will contribute towards gaining knowledge of this debilitating pathological condition. Our laboratory has recently acquired more than three thousand 24-hour ECG tapes recorded during the Study of Left Ventricular Dysfunction (SOLVD). The SOLVD trial was conducted between 1987 and 1990 to test the efficacy of a medication, Enalapril, for treating patients with heart failure. There were equal numbers of patients with (group A) and without (group B) overt heart failure. The work reported in this thesis describes the development of a hardware and software framework used to analyze the ECG signals recorded on these tapes. The primary objective of this work was to develop and test a system that would assist in analyzing these tapes so as to examine whether there are differences between the two groups in HRV parameters from both the frequency and time domains. The research was conducted in three steps: hardware design, software and algorithm development, and finally a validation phase to test the usefulness of the overall system. The tapes were replayed on a tape recorder and the ECG was digitized at a rate corresponding to 500 samples/second; LabVIEW software was used for this task. Secondly, a set of algorithms was developed to perform QRS detection and QT-interval identification. The detection algorithms involved placing critical ECG fiducials onto the ECG waveform through the use of a trained model. The model construction used patient-specific pre-annotated data coupled with statistical and genetic algorithm techniques. The beat-to-beat HRV signal was thus generated from the annotation data. Frequency-domain indices were obtained using power spectral computation algorithms, while time-domain statistical indices were computed using standard methods. The QT-interval algorithms were tested on a set of manually and automatically tagged beats from a sample of subjects. For the third part of this research, i.e. the validation phase, we set up a test pool of 200 tapes each from patients with overt heart failure and with no heart failure, recorded at baseline before the subjects entered the study. This phase of the study was conducted with the help of a statistician in a blinded fashion. Our results suggest that there are significant differences in the frequency-domain and time-domain parameters computed from the HRV signals of subjects in group A and group B. The group A patients had many ectopic beats and were challenging to analyze. These results provide a confirmation of our analytical procedures using real clinical data. The QT analysis of the ECG signals suggests that automatic analysis of this interval is feasible using the algorithms developed in this study. / Thesis / Master of Applied Science (MASc)
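For context, the frequency- and time-domain HRV indices mentioned above are standard quantities; a minimal Python sketch of how they are typically computed from a beat-to-beat RR series follows. The band edges and resampling rate are conventional defaults, not values taken from this thesis.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_indices(rr_ms, fs_resample=4.0):
    """Common time- and frequency-domain HRV indices from RR intervals (ms).
    LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) follow the usual conventions."""
    rr_ms = np.asarray(rr_ms, dtype=float)

    # Time domain: overall and beat-to-beat variability.
    sdnn = np.std(rr_ms, ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))

    # Frequency domain: resample the irregularly spaced RR series onto a
    # uniform grid, then estimate the power spectrum with Welch's method.
    t_beats = np.cumsum(rr_ms) / 1000.0
    t_grid = np.arange(t_beats[0], t_beats[-1], 1.0 / fs_resample)
    rr_interp = interp1d(t_beats, rr_ms, kind="cubic")(t_grid)

    f, psd = welch(rr_interp - rr_interp.mean(), fs=fs_resample, nperseg=256)
    lf = np.trapz(psd[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(psd[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])

    return {"SDNN": sdnn, "RMSSD": rmssd, "LF": lf, "HF": hf, "LF/HF": lf / hf}

# Example with synthetic RR intervals around 800 ms
rr = 800 + 40 * np.random.randn(1000)
print(hrv_indices(rr))
```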
85

Post-Processing Method for Determining Peaks in Noisy Strain Gauge Data with a Low Sampling Frequency

Hill, Peter Lee 06 July 2017 (has links)
The Virginia Tech Transportation Institute is recognized as a pioneer in naturalistic driving studies. These studies characterize driving behavior, and its correlation to safety-critical events, by equipping participants' vehicles with data acquisition systems and recording them for a period of time. Drivers' habits and responses to certain scenarios and events are analyzed to determine trends and opportunities to improve overall driver safety. One of these studies installed strain gauges on the front and rear brake levers of motorcycles to record the frequency and magnitude of brake presses. The recorded data was sampled at 10 Hz and contained a significant amount of noise introduced by temperature and electromagnetic interference. This thesis proposes a peak detection algorithm, written in MATLAB, that can parallel process the 40,000 trips recorded in this naturalistic driving study. The algorithm uses an iterative LOWESS regression to eliminate the offset from zero when the strain gauge is not stressed, as well as a cumulative sum and statistical concepts to separate brake activations from the rest of the noisy signal. The algorithm was verified by comparing its brake activation detections to brake activations that were manually identified through video reduction. The algorithm had difficulty accurately identifying activations in files where the amplitude of the noise was close to the amplitude of the brake activations, but such files made up only 2% of the sampled data. For the rest of the files, the peak detection algorithm had an accuracy of over 90%. / Master of Science
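A rough Python analogue of the two ingredients the abstract names, baseline removal by LOWESS and separation of activations from noise, is sketched below. The thesis implementation is in MATLAB and uses a cumulative-sum statistic; here a simple robust-threshold step stands in for it, and all parameter values are illustrative assumptions.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def detect_activations(strain, fs=10.0, frac=0.3, k=4.0):
    """Remove the drifting zero offset with a LOWESS fit, then flag samples
    whose detrended amplitude exceeds k robust standard deviations."""
    t = np.arange(len(strain)) / fs

    # Slowly varying baseline (temperature drift etc.); frac sets smoothness.
    baseline = lowess(strain, t, frac=frac, return_sorted=False)
    detrended = strain - baseline

    # Robust noise estimate from the median absolute deviation.
    sigma = 1.4826 * np.median(np.abs(detrended - np.median(detrended)))
    active = detrended > k * sigma

    # Group consecutive flagged samples into activation intervals.
    padded = np.concatenate(([False], active, [False]))
    edges = np.diff(padded.astype(int))
    starts = np.where(edges == 1)[0]
    stops = np.where(edges == -1)[0] - 1
    return list(zip(t[starts], t[stops]))

# Synthetic example: drifting baseline, noise, and two brake presses
t = np.arange(0, 60, 0.1)
signal = 0.5 * np.sin(2 * np.pi * t / 120) + 0.05 * np.random.randn(t.size)
signal[200:250] += 1.0   # brake press 1
signal[400:430] += 0.8   # brake press 2
print(detect_activations(signal))
```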
86

Automatic Generation of Efficient Parallel Streaming Structures for Hardware Implementation

Koehn, Thaddeus E. 30 November 2016 (has links)
Digital signal processing systems demand higher computational performance and more operations per second than ever before, and this trend is not expected to end any time soon. Processing architectures must adapt in order to meet these demands. The two techniques most prevalent for meeting throughput constraints are parallel processing and stream processing. By combining these techniques, significant throughput improvements have been achieved. These preliminary results apply to specific applications, however, and general tools for automation are in their infancy. In this dissertation, techniques are developed to automatically generate efficient parallel streaming hardware architectures. / Ph. D.
87

FPGA Implementation of a Pseudo-Random Aggregate Spectrum Generator for RF Hardware Test and Evaluation

Baweja, Randeep Singh 09 October 2020 (has links)
Test and evaluation (TandE) is a critically important step before in-the-field deployment of radio-frequency (RF) hardware in order to assure that the hardware meets its design requirements and specifications. Typically, TandE is performed either in a lab setting utilizing a software simulation environment or through real-world field testing. While the former approach is typically limited by the accuracy of the simulation models (particularly of the anticipated hardware effects) and by non-real-time data rates, the latter can be extremely costly in terms of time, money, and manpower. To build upon the strengths of these approaches and to mitigate their weaknesses, this work presents the development of an FPGA-based TandE tool that allows for real-time pseudo-random aggregate signal generation for testing RF receiver hardware (such as communication receivers, spectrum sensors, etc.). In particular, a framework is developed for an FPGA-based implementation of a test signal emulator that generates randomized aggregate spectral environments containing signals with random parameters such as center frequencies, bandwidths, start times, and durations, as well as receiver and channel effects such as additive white Gaussian noise (AWGN). To test the accuracy of the developed spectrum generation framework, the randomization properties of the framework are analyzed to assure correct probability distributions and independence. Additionally, FPGA implementation decisions, such as bit precision versus accuracy of the generated signal and the impact on the FPGA's hardware footprint, are analyzed. This analysis allows the test signal engineer to make informed decisions while designing a hardware-based RF test system. This framework is easily extensible to other signal types and channel models, and can be used to test a variety of signal-based applications. / Master of Science / Test and evaluation (TandE) is a critically important step before in-the-field deployment of radio-frequency signal hardware in order to assure that the hardware meets its design requirements and specifications. Typically, TandE is performed either in a lab setting utilizing a software simulation or through real-world field testing. While the former approach is typically limited by the accuracy of the simulation models and by slower data rates, the latter can be extremely costly in terms of time, money, and manpower. To address these issues, a hardware-based signal generation approach that takes the best of both methods mentioned above is developed in this thesis. This approach allows the user to accurately model a radio-frequency system without requiring expensive equipment. This work presents the development of a hardware-based TandE tool that allows for real-time random signal generation for testing radio-frequency receiver hardware (such as communication receivers). In particular, a framework is developed for an implementation of a test signal emulator that allows for user-defined randomization of test signal parameters such as frequencies, signal bandwidths, start times, and durations, as well as communications receiver effects. To test the accuracy of the developed emulation framework, the randomization properties of the framework are analyzed to assure correct probability distributions and independence. Additionally, hardware implementation decisions such as bit precision versus quality of the generated signal and the impact on the hardware footprint are analyzed.
Ultimately, it is shown that this framework is easily extensible to other signal types and communication channel models.
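A minimal floating-point model of the kind of randomized aggregate spectrum described above can be sketched in Python. The thesis targets an FPGA implementation; this sketch only illustrates the signal model (random center frequencies, bandwidths, start times, and durations plus AWGN), and every parameter range below is an assumption for illustration.

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(seed=1)

def aggregate_spectrum(fs=10e6, duration=1e-3, n_signals=5, snr_db=20):
    """Complex-baseband test waveform containing several bursts with
    randomized center frequency, bandwidth, start time, and duration,
    embedded in additive white Gaussian noise."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    agg = np.zeros(n, dtype=complex)

    for _ in range(n_signals):
        bw = rng.uniform(0.01, 0.1) * fs               # random bandwidth
        fc = rng.uniform(-0.4, 0.4) * fs               # random center frequency
        start = rng.integers(0, n // 2)                # random start sample
        length = rng.integers(n // 10, n // 2)         # random duration
        stop = min(start + length, n)

        # Band-limited noise burst: white noise through a low-pass FIR,
        # then mixed up to the chosen center frequency.
        taps = firwin(64, bw / 2, fs=fs)
        burst = lfilter(taps, 1.0, rng.standard_normal(stop - start)
                        + 1j * rng.standard_normal(stop - start))
        agg[start:stop] += burst * np.exp(2j * np.pi * fc * t[start:stop])

    # AWGN at the requested SNR relative to unit aggregate signal power.
    agg /= np.sqrt(np.mean(np.abs(agg) ** 2))
    noise_power = 10 ** (-snr_db / 10)
    agg += np.sqrt(noise_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return agg

waveform = aggregate_spectrum()
print(waveform.shape, waveform.dtype)
```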
88

Improving Object Classification in X-ray Luggage Inspection

Shi, Xinhua 27 July 2000 (has links)
X-ray detection methods have increasingly been used as an effective means for the automatic detection of explosives. While a number of devices are now commercially available, most of these technologies are not yet mature. The purpose of this research has been to investigate methods for using x-ray dual-energy transmission and scatter imaging technologies more effectively. Following an introduction and brief overview of x-ray detection technologies, a model of a prototype x-ray scanning system built at Virginia Tech is given. This model has primarily been used for system analysis, design, and simulation. Then, an algorithm is developed to correct the non-uniformity of the transmission detectors in the prototype scanning system. The x-ray source output in the prototype scanning system is not monochromatic, resulting in two problems: spectrum overlap and output signal imbalance between the high and low energy levels, which degrade the performance of dual-energy x-ray sensing. A copper filter has been introduced, and a numerical optimization method has been developed to remove the thickness effect of objects and improve system performance. The back-scattering and forward-scattering signals are functions of the solid angles between the object and the detectors. A given object may be randomly placed anywhere on the conveyor belt, resulting in a variation in the detected signals. Both an adaptive modeling technique and a least-squares method are used to decrease this distance effect. Finally, discriminant function methods have been studied experimentally, and classification rules have been obtained to separate explosives from other types of materials. In laboratory tests on various scenarios in which six explosive simulants were inserted, we observed classification accuracy improvements from 60% to 80%, depending on the complexity of the luggage bags. / Ph. D.
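The discriminant-function classification step can be illustrated with a generic dual-energy sketch: log attenuations at the two energies serve as features for a linear discriminant. This is standard dual-energy practice rather than the specific discriminant rules developed in the dissertation, and the training data below are synthetic placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dual_energy_features(i_low, i_high, i0_low=1.0, i0_high=1.0):
    """Per-pixel features from dual-energy transmission measurements:
    low- and high-energy log attenuations plus their ratio, which tracks
    effective atomic number largely independently of thickness."""
    mu_low = np.log(i0_low / np.asarray(i_low))
    mu_high = np.log(i0_high / np.asarray(i_high))
    return np.column_stack([mu_low, mu_high, mu_low / mu_high])

# Illustrative (synthetic) intensities for benign vs. threat materials
rng = np.random.default_rng(0)
benign = dual_energy_features(rng.uniform(0.3, 0.8, 200), rng.uniform(0.5, 0.9, 200))
threat = dual_energy_features(rng.uniform(0.1, 0.4, 200), rng.uniform(0.4, 0.7, 200))

X = np.vstack([benign, threat])
y = np.concatenate([np.zeros(200), np.ones(200)])

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```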
89

Power Reduction of Digital Signal Processing Systems using Subthreshold Operation

Henry, Michael Brewer 15 July 2009 (has links)
Over the past couple of decades, the capabilities of battery-powered electronics have expanded dramatically. What started out as large, bulky two-way radios, wristwatches, and simple pacemakers has evolved into pocket-sized smartphones, digital cameras, personal digital assistants, and implantable biomedical chips that can restore hearing and prevent heart attacks. With this increase in complexity comes an increase in the amount of processing, which runs on a limited energy source such as a battery or scavenged energy. It is therefore desirable to make the hardware as energy efficient as possible. Many battery-powered systems require digital signal processing, which often makes up a large portion of the total energy consumption. The digital signal processing of a battery-powered system is therefore a good target for power reduction techniques. One method of reducing the power consumption of digital signal processing is to operate the circuit in the subthreshold region, where the supply voltage is lower than the threshold voltage of the transistors. Subthreshold operation greatly reduces the power and energy consumption, but also decreases the maximum operating frequency. Many digital signal processing applications have real-time throughput requirements, so various architectural-level techniques, such as pipelining and parallelism, must be used in order to achieve the required performance. This thesis investigates the use of parallelization and subthreshold operation to lower the power consumption of digital signal processing applications while still meeting throughput requirements. Using an off-the-shelf fast Fourier transform architecture, it is shown that through parallelization and subthreshold operation, a 70% reduction in power consumption can be achieved, all while matching the performance of a nominal-voltage single-core architecture. Even better results can be obtained when an architecture is specifically designed for subthreshold operation. A novel discrete wavelet transform architecture is presented that is designed to eliminate the need for memory banks, and a power reduction of 26x is achieved compared to a reference nominal-voltage architecture that uses memory banks. Issues such as serial-to-parallel data distribution, dynamic throughput scaling, and memory usage are also explored in this thesis. Finally, voltage scaling greatly increases the design space, so power and timing analysis can be very slow due to long SPICE simulation times. A simulation framework is presented that can characterize subthreshold circuits accurately using only fast gate-level design automation tools. / Master of Science
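The core trade-off, exchanging clock frequency for parallelism at a reduced supply voltage, can be checked with simple first-order arithmetic. The sketch below models dynamic power only (proportional to C·V²·f) and takes the subthreshold slowdown factor as a given input; the numbers are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def parallel_subthreshold_power(v_sub, slowdown, v_nom=1.0):
    """Back-of-the-envelope check of the strategy described above: run many
    copies of a datapath at a subthreshold supply and recover throughput
    through parallelism.  `slowdown` is the frequency penalty at v_sub (it
    would come from circuit characterization; the value used below is only
    an illustrative assumption).  Only dynamic power is modeled; leakage and
    duplicated-logic overheads are ignored."""
    cores = int(np.ceil(slowdown))        # copies needed to match nominal throughput
    per_core_f = 1.0 / slowdown           # each core's clock relative to nominal
    power = cores * (v_sub / v_nom) ** 2 * per_core_f
    return cores, power

# Example: suppose characterization shows a 12x slowdown at 0.4 V (assumed numbers)
cores, rel_power = parallel_subthreshold_power(v_sub=0.4, slowdown=12.0)
print(f"{cores} parallel cores, relative dynamic power {rel_power:.2f} "
      f"({(1 - rel_power) * 100:.0f}% reduction)")
```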
90

Optimization and Verification of an Integrated DSP

Svensson, Markus, Österholm, Thomas January 2008 (has links)
There are many applications for DSPs (digital signal processors) in the industry's most rapidly growing areas right now, as wireless communication along with audio and video products are becoming more and more popular. In this report, a DSP developed at the Division of Computer Engineering at the University of Linköping is optimized and verified.
Register forwarding was implemented at the general architecture level to avoid data hazards that may arise when implementing instruction pipelining in a processor.
The very common FFT algorithm is also optimized, but at the instruction set level. That means the algorithm is carefully analyzed to find operations that may execute in parallel, and new instructions are created for these parallel operations. The optimization is concentrated on the butterfly operation, as it is such a major part of the FFT computation. Comparing the accelerated butterfly with the unaccelerated one gives an improvement of 30% in terms of clock cycles needed for the computation.
The report also discusses the benefits and drawbacks of changing from a hardware to a software stack, mostly in terms of interrupts and the return instruction.
Another important property of the processor is scalability; that is, it is possible to attach extra peripherals to the core, which accelerate certain tasks. An interface towards these peripherals is developed, along with two template designs that may be used to develop other peripherals.
After all these modifications, a new test bench is developed to verify the functionality.
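The butterfly operation singled out above is small enough to write down directly; the Python sketch below shows the radix-2 decimation-in-time butterfly and notes which of its real-valued operations are mutually independent and therefore candidates for parallel issue. The grouping is a general dataflow observation, not the specific instruction encoding used in this processor.

```python
import numpy as np

def radix2_butterfly(a, b, w):
    """One radix-2 decimation-in-time butterfly: the core FFT operation.
    The complex multiply b*w expands into four real multiplications and two
    additions that are independent of one another and so can issue in the
    same cycle on parallel MAC units."""
    # Complex multiply written out as real operations:
    #   re = b.re*w.re - b.im*w.im,  im = b.re*w.im + b.im*w.re
    t_re = b.real * w.real - b.imag * w.imag
    t_im = b.real * w.imag + b.imag * w.real
    # The output sum and difference are likewise independent of each other.
    return (a.real + t_re) + 1j * (a.imag + t_im), \
           (a.real - t_re) + 1j * (a.imag - t_im)

# Quick check against NumPy's FFT for a 2-point transform
x = np.array([1.0 + 2.0j, 3.0 - 1.0j])
print(radix2_butterfly(x[0], x[1], w=1.0 + 0.0j))  # expect (x0 + x1, x0 - x1)
print(np.fft.fft(x))
```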
