101

Development of a real-time FPGA-embedded instrumentation system for measuring mechanical vibration in rotating machines /

Costa, Cesar da. January 2011 (has links)
Advisor: Mauro Hugo Mathias / Committee: José Elias Tomazini / Committee: Samuel Euzédice de Lucena / Committee: Jussara Pimenta Matos / Committee: Francisco Carlos Parquet Bizarria / Abstract: The real-time monitoring of events in an industrial plant is an advanced technique that presents the real operating conditions of the machinery responsible for the manufacturing process. A predictive maintenance program for rotating machinery encompasses various machine condition-monitoring techniques to determine the onset of faults. To increase operational reliability and reduce preventive maintenance, an efficient tool for analysis and process monitoring is needed that enables the detection of incipient faults in real time.
Over the past few years there have been major technological developments in digital systems, including innovations in both hardware and software. These innovations enable new design methodologies that take into account the ease of future modifications, upgrades, and expansions of the designed system. This work presents a study of new design tools for embedded digital systems based on an open hardware architecture with reconfigurable logic. A case study in fault detection in rotating machinery is discussed, along with its implementation and testing. / Doctorate
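The fault-detection case study above rests on spectral analysis of vibration signals: faults such as imbalance or bearing defects appear as peaks at characteristic frequencies. A minimal Python sketch of this idea (not the thesis's FPGA implementation; the signal parameters and threshold are invented for illustration):

```python
import numpy as np

def spectral_peaks(signal, fs, threshold=0.1):
    """Return frequencies of spectral peaks above a relative threshold.

    Fault signatures in rotating machinery (imbalance, misalignment,
    bearing defects) show up as peaks at characteristic frequencies.
    """
    window = np.hanning(len(signal))                    # reduce spectral leakage
    mag = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mag = mag / mag.max()                               # normalise to strongest peak
    return [float(freqs[i]) for i in range(1, len(mag) - 1)
            if mag[i] > threshold and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]

# Simulated accelerometer trace: 30 Hz shaft rotation plus a 90 Hz fault harmonic
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
print(spectral_peaks(x, fs))  # [30.0, 90.0]
```

An FPGA implementation would replace the floating-point FFT with a fixed-point pipeline, but the detection logic is the same.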
102

Computerised GRBAS assessment of voice quality

Jalalinajafabadi, Farideh January 2016 (has links)
Vocal cord vibration is the source of voiced phonemes in speech. Voice quality depends on the nature of this vibration. Vocal cords can be damaged by infection, neck or chest injury, tumours and more serious diseases such as laryngeal cancer. This kind of physical damage can cause loss of voice quality. To support the diagnosis of such conditions and also to monitor the effect of any treatment, voice quality assessment is required. Traditionally, this is done ‘subjectively’ by Speech and Language Therapists (SLTs) who, in Europe, use a well-known assessment approach called ‘GRBAS’. GRBAS is an acronym for a five dimensional scale of measurements of voice properties. The scale was originally devised and recommended by the Japanese Society of Logopedics and Phoniatrics and several European research publications. The properties are ‘Grade’, ‘Roughness’, ‘Breathiness’, ‘Asthenia’ and ‘Strain’. An SLT listens to and assesses a person’s voice while the person performs specific vocal manoeuvres. The SLT is then required to record a discrete score for the voice quality in the range 0 to 3 for each GRBAS component. In requiring the services of trained SLTs, this subjective assessment makes the traditional GRBAS procedure expensive and time-consuming to administer. This thesis considers the possibility of using computer programs to perform objective assessments of voice quality conforming to the GRBAS scale. To do this, Digital Signal Processing (DSP) algorithms are required for measuring voice features that may indicate voice abnormality. The computer must be trained to convert DSP measurements to GRBAS scores and a ‘machine learning’ approach has been adopted to achieve this. This research was made possible by the development, by Manchester Royal Infirmary (MRI) Hospital Trust, of a ‘speech database’ with the participation of clinicians, SLTs, patients and controls.
The participation of five SLT scorers allowed norms to be established for GRBAS scoring which provided ‘reference’ data for the machine learning approach.
To support the scoring procedure carried out at MRI, a software package, referred to as GRBAS Presentation and Scoring Package (GPSP), was developed for presenting voice recordings to each of the SLTs and recording their GRBAS scores. A means of assessing intra-scorer consistency was devised and built into this system. Also, the assessment of inter-scorer consistency was advanced by the invention of a new form of the ‘Fleiss Kappa’ which is applicable to ordinal as well as categorical scoring. The means of taking these assessments of scorer consistency into account when producing ‘reference’ GRBAS scores are presented in this thesis. Such reference scores are required for training the machine learning algorithms. The DSP algorithms required for feature measurements are generally well known and available as published or commercial software packages. However, an appraisal of these algorithms and the development of some DSP ‘thesis software’ was found to be necessary. Two ‘machine learning’ regression models have been developed for mapping the measured voice features to GRBAS scores. These are K Nearest Neighbor Regression (KNNR) and Multiple Linear Regression (MLR). Our research is based on sets of features, sets of data and prediction models that are different from the approaches in the current literature. The performance of the computerised system is evaluated against reference scores using a Normalised Root Mean Squared Error (NRMSE) measure. The performances of MLR and KNNR for objective prediction of GRBAS scores are compared and analysed ‘with feature selection’ and ‘without feature selection’. It was found that MLR with feature selection was better than MLR without feature selection and KNNR with and without feature selection, for all five GRBAS components. It was also found that MLR with feature selection gives scores for ‘Asthenia’ and ‘Strain’ which are closer to the reference scores than the scores given by all five individual SLT scorers.
The best objective score for ‘Roughness’ was closer to the reference scores than the scores given by two SLTs, roughly equal to the score of one SLT and worse than the scores of the other two SLTs. The best objective scores for ‘Breathiness’ and ‘Grade’ were further from the reference scores than the scores produced by all five SLT scorers. However, the worst ‘MLR with feature selection’ result has a normalised RMS error only about 3% worse than the worst SLT scoring. The results obtained indicate that objective GRBAS measurements have the potential for further development towards a commercial product that may at least be useful in augmenting the subjective assessments of SLT scorers.
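The regression and evaluation machinery described above (KNNR prediction and NRMSE scoring) can be sketched in a few lines. The feature vectors and reference scores below are invented for illustration; the thesis's actual feature sets, training data and MLR models are not reproduced here:

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3):
    """Predict a continuous score as the mean score of the k nearest
    training examples (Euclidean distance in feature space)."""
    d = np.linalg.norm(X_train - x, axis=1)
    return float(y_train[np.argsort(d)[:k]].mean())

def nrmse(pred, ref, score_range=3.0):
    """Root-mean-squared error normalised by the 0-3 GRBAS score range."""
    err = np.asarray(pred, float) - np.asarray(ref, float)
    return float(np.sqrt(np.mean(err ** 2)) / score_range)

# Invented feature vectors (e.g. jitter, shimmer) with reference scores
X = np.array([[0.10, 0.20], [0.90, 0.80], [0.15, 0.25], [0.85, 0.90]])
y = np.array([0.0, 3.0, 0.5, 2.5])
pred = knn_regress(X, y, np.array([0.12, 0.22]), k=2)
print(pred)  # 0.25: the mean of the two nearest (mildest) voices
print(nrmse([pred], [0.0]))
```

A real system would use many more acoustic features and cross-validated model selection, as the thesis describes.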
103

A New QRS Detection and ECG Signal Extraction Technique for Fetal Monitoring

Janjarasjitt, Suparerk 07 April 2006 (has links)
No description available.
104

Generative Processes for Audification

Jackson, Judith 14 December 2018 (has links)
No description available.
105

A Structured ASIC Approach to a Radiation Hardened by Design Digital Single Sideband Modulator for Digital Radio Frequency Memories

Pemberton, Thomas B. 30 June 2010 (has links)
No description available.
106

A digital signal processing approach to analyze the effects of multiple reflections between highway noise barriers

Ghent, Jeremy E. January 2003 (has links)
No description available.
107

Digital Signal Processing of Neurocardiac Signals in Patients with Congestive Heart Failure / DSP of Neurocardiac Signals in Patients with CHF

Capogna, Joshua 08 1900 (has links)
Recent work has found that frequency-domain and time-domain analysis of heart rate variability (HRV) signals can provide significant insights into the function of the heart in healthy subjects and in patients with heart disease. Congestive heart failure is an important clinical health issue, and it is hoped that this work will contribute towards gaining knowledge of this debilitating pathological condition. Our laboratory has recently acquired more than three thousand 24-hour ECG tapes recorded during the Study of Left Ventricular Dysfunction (SOLVD). The SOLVD trial was conducted between 1987-1990 to test the efficacy of a medication called Enalapril in treating patients with heart failure. There were an equal number of patients with (group A) and without (group B) overt heart failure. The work reported in this thesis describes the development of a hardware and software framework used to analyze the ECG signals recorded on these tapes. The primary objective of this work was to develop and test a system to assist in analyzing these tapes, so as to examine whether there are differences between the two groups in HRV parameters from both the frequency and time domains. The research was conducted in three steps: hardware design, software and algorithm development, and finally a validation phase to test the usefulness of the overall system. The tapes were replayed on a tape recorder and the ECG was digitized at a rate corresponding to 500 samples/second; LabVIEW software was used for this task. Second, a set of algorithms was developed to perform QRS detection and QT-interval identification. The detection algorithms involved placing critical ECG fiducials onto the ECG waveform through the use of a trained model. The model construction used patient-specific pre-annotated data coupled with statistical and genetic algorithm techniques. The beat-to-beat HRV signal was thus generated using the annotation data from the ECG.
Frequency-domain indices were obtained using power spectral computation algorithms, while time-domain statistical indices were computed using standard methods. The QT-interval algorithms were tested using a set of manually and automatically tagged beats from a sample of subjects. For the third part of this research, the validation phase, we set up a test pool of 200 tapes each from patients with overt heart failure and with no heart failure, recorded at baseline before the subjects entered the study. This phase of the study was conducted in a blinded fashion with the help of a statistician. Our results suggest that there is a significant difference between the frequency-domain and time-domain parameters computed from the HRV signals of subjects in group A and group B. The group A patients had many ectopic beats and were challenging to analyze. These results provide a confirmation of our analytical procedures using real clinical data. The QT analysis of the ECG signals suggests that automatic analysis of this interval is feasible using the algorithms developed in this study. / Thesis / Master of Applied Science (MASc)
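The time-domain HRV indices mentioned above are standard quantities computed from the beat-to-beat RR-interval series. A minimal sketch with invented RR values (not SOLVD data), computing the indices commonly reported in HRV work:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV indices from beat-to-beat RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)  # successive RR differences
    return {
        "mean_rr": float(rr.mean()),                   # mean RR interval (ms)
        "sdnn": float(rr.std(ddof=1)),                 # overall variability
        "rmssd": float(np.sqrt(np.mean(d ** 2))),      # beat-to-beat variability
        "pnn50": float(np.mean(np.abs(d) > 50) * 100), # % successive diffs > 50 ms
    }

rr = [812, 790, 830, 805, 900, 780, 815]  # invented RR series (ms)
print(hrv_time_domain(rr))
```

Frequency-domain indices would follow from a power spectral estimate (e.g. Welch's method) of a resampled version of the same RR series; ectopic beats, as noted above for group A, must be edited out before either analysis.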
108

Improving Object Classification in X-ray Luggage Inspection

Shi, Xinhua 27 July 2000 (has links)
X-ray detection methods have increasingly been used as an effective means for the automatic detection of explosives. While a number of devices are now commercially available, most of these technologies are not yet mature. The purpose of this research has been to investigate methods for using x-ray dual-energy transmission and scatter imaging technologies more effectively. Following an introduction and a brief overview of x-ray detection technologies, a model for a prototype x-ray scanning system, built at Virginia Tech, is given. This model has primarily been used for system analysis, design and simulation. An algorithm is then developed to correct the non-uniformity of the transmission detectors in the prototype scanning system. The x-ray source output in the prototype scanning system is not monochromatic, resulting in two problems: spectrum overlap and output signal imbalance between the high and low energy levels, which degrade the performance of dual-energy x-ray sensing. A copper filter has been introduced, and a numerical optimization method has been developed to remove the thickness effect of objects and improve system performance. The backscattering and forward-scattering signals are functions of the solid angles between the object and the detectors; a given object may be placed anywhere on the conveyor belt, resulting in variation in the detected signals. Both an adaptive modeling technique and a least-squares method are used to decrease this distance effect. Finally, discriminant function methods have been studied experimentally, and classification rules have been obtained to separate explosives from other types of materials. In laboratory tests on various scenarios with six inserted explosive simulants, we observed classification accuracies improving from 60% to 80%, depending on the complexity of the luggage bags. / Ph. D.
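Discriminant-function classification over dual-energy features can be illustrated with a Fisher-style linear discriminant: project the features onto the direction that best separates the two classes, then threshold. The attenuation values below are invented, and this is a generic sketch rather than the classification rules derived in the thesis:

```python
import numpy as np

def fit_discriminant(X, labels):
    """Fisher-style linear discriminant for two classes: solve for the
    projection direction w, then place the threshold midway between
    the projected class means."""
    X = np.asarray(X, float)
    m0, m1 = X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)
    Sw = np.cov(X[labels == 0].T) + np.cov(X[labels == 1].T)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = ((X[labels == 0] @ w).mean() + (X[labels == 1] @ w).mean()) / 2
    return w, thresh

# Invented dual-energy features: (low-energy attenuation, high-energy attenuation)
X = np.array([[2.0, 1.0], [2.2, 1.1], [1.0, 0.9], [1.1, 1.0]])
labels = np.array([1, 1, 0, 0])  # 1 = explosive simulant, 0 = benign material
w, thresh = fit_discriminant(X, labels)
print((X @ w > thresh).astype(int))  # [1 1 0 0]: recovers the training labels
```

The ratio of low- to high-energy attenuation carries information about effective atomic number, which is why dual-energy features separate organic explosives from common benign materials.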
109

Power Reduction of Digital Signal Processing Systems using Subthreshold Operation

Henry, Michael Brewer 15 July 2009 (has links)
Over the past couple of decades, the capabilities of battery-powered electronics have expanded dramatically. What started out as large, bulky two-way radios, wristwatches, and simple pacemakers has evolved into pocket-sized smartphones, digital cameras, personal digital assistants, and implantable biomedical chips that can restore hearing and prevent heart attacks. With this increase in complexity comes an increase in the amount of processing, which runs on a limited energy source such as a battery or scavenged energy. It is therefore desirable to make the hardware as energy efficient as possible. Many battery-powered systems require digital signal processing, which often makes up a large portion of the total energy consumption; the digital signal processing of a battery-powered system is therefore a good target for power reduction techniques. One method of reducing the power consumption of digital signal processing is to operate the circuit in the subthreshold region, where the supply voltage is lower than the threshold voltage of the transistors. Subthreshold operation greatly reduces power and energy consumption, but also decreases the maximum operating frequency. Many digital signal processing applications have real-time throughput requirements, so various architectural-level techniques, such as pipelining and parallelism, must be used to achieve the required performance. This thesis investigates the use of parallelization and subthreshold operation to lower the power consumption of digital signal processing applications while still meeting throughput requirements. Using an off-the-shelf fast Fourier transform architecture, it will be shown that through parallelization and subthreshold operation, a 70% reduction in power consumption can be achieved while matching the performance of a nominal-voltage single-core architecture. Even better results can be obtained when an architecture is specifically designed for subthreshold operation.
A novel Discrete Wavelet Transform architecture is presented that is designed to eliminate the need for memory banks, and a power reduction of 26x is achieved compared to a reference nominal-voltage architecture that uses memory banks. Issues such as serial-to-parallel data distribution, dynamic throughput scaling, and memory usage are also explored in this thesis. Finally, voltage scaling greatly increases the design space, so power and timing analysis can be very slow due to long SPICE simulation times. A simulation framework is presented that can characterize subthreshold circuits accurately using only fast gate-level design automation tools. / Master of Science
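The arithmetic behind trading parallelism for supply voltage follows from the dynamic-power relation P ∝ C·V²·f. The voltages and frequencies below are illustrative only, and leakage power (which becomes relatively more important in subthreshold) is deliberately ignored in this sketch:

```python
import math

def parallel_power_ratio(v_nom, v_sub, f_nom, f_sub):
    """Dynamic-power ratio of N voltage-scaled cores vs. one nominal core
    at equal throughput. Dynamic power scales as C * V^2 * f, and
    N = ceil(f_nom / f_sub) slow cores match one fast core's throughput.
    Leakage power is ignored here."""
    n_cores = math.ceil(f_nom / f_sub)
    return (n_cores * v_sub ** 2 * f_sub) / (v_nom ** 2 * f_nom)

# e.g. one 1.2 V / 100 MHz core vs. ten 0.4 V / 10 MHz cores
print(parallel_power_ratio(1.2, 0.4, 100e6, 10e6))  # ~0.11: roughly 89% saving
```

The quadratic dependence on V is why voltage scaling plus parallelism wins even though more cores switch in total; the area cost and the leakage of the extra cores set the practical limit.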
110

Automatic Generation of Efficient Parallel Streaming Structures for Hardware Implementation

Koehn, Thaddeus E. 30 November 2016 (has links)
Digital signal processing systems demand higher computational performance and more operations per second than ever before, and this trend is not expected to end any time soon. Processing architectures must adapt in order to meet these demands. The two techniques most prevalent for achieving throughput constraints are parallel processing and stream processing. By combining these techniques, significant throughput improvements have been achieved. These preliminary results apply to specific applications, and general tools for automation are in their infancy. In this dissertation techniques are developed to automatically generate efficient parallel streaming hardware architectures. / Ph. D.
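The combination of stream and parallel processing described above can be illustrated with a behavioural software model of a p-parallel streaming FIR filter, where each "cycle" consumes a block of p samples. This is a generic sketch of the parallel-streaming idea, not the dissertation's hardware-generation method:

```python
import numpy as np

def parallel_fir(x, h, p=4):
    """Behavioural model of a p-parallel streaming FIR: each 'cycle'
    consumes a block of p input samples and produces p outputs, with
    len(h) - 1 samples of state carried between blocks."""
    x, h = np.asarray(x, float), np.asarray(h, float)
    y = np.zeros(len(x))
    state = np.zeros(len(h) - 1)  # tail of the previous block
    for start in range(0, len(x), p):
        block = x[start:start + p]
        ext = np.concatenate([state, block])
        for i in range(len(block)):
            # y[n] = sum_k h[k] * x[n - k]; in hardware these p dot
            # products run concurrently in one cycle
            y[start + i] = ext[i:i + len(h)] @ h[::-1]
        state = ext[len(block):]  # carry the last len(h)-1 samples forward
    return y

x = np.arange(1.0, 9.0)        # streaming input 1..8
h = np.array([0.5, 0.3, 0.2])  # example filter taps
print(np.allclose(parallel_fir(x, h), np.convolve(x, h)[:len(x)]))  # True
```

In a hardware realization the inner loop unrolls into p multiply-accumulate datapaths, and the carried state becomes a small register chain rather than a memory bank.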
