211

Efficient derivation and use of reference Volterra filters for the evaluation of non-linear formalisms

José Henrique de Morais Goulart 03 August 2012
The mathematical modeling of physical systems is essential for several digital signal processing (DSP) applications. In many problems faced in this context, if a model is to be useful, it must represent its physical analog accurately and possess characteristics that favor implementation, such as stability and compactness. Obtaining a model that meets these requirements depends on choosing an appropriate mathematical formalism. For the modeling of (significantly) nonlinear systems, this choice is particularly challenging, since many formalisms with different properties have been proposed in the literature. Essentially, this is due to the absence of a complete and general theory for nonlinear systems, unlike the linear case. Nevertheless, in several works that deal with applications in which nonlinear devices must be modeled, some representation is adopted without clear justifications grounded in the physical characteristics of the system to be modeled. Instead, this important aspect is discussed only superficially, based on informal or heuristic arguments. Additionally, certain structural characteristics of a model that have great influence on its performance are frequently defined in a non-systematic manner, which hinders a precise understanding of the potential of the underlying formalism. To assist the choice of an adequate formalism in DSP applications, this work proposes a methodology for evaluating the performance of nonlinear formalisms that relies on physical considerations. For this purpose, a physical model of the system of interest is used as a reference. Specifically, the adopted strategy applies the Carleman bilinearization method to that model, with a set of typical parameter values, to obtain a set of reference Volterra kernels. Once the reference kernels are available, one can estimate, for instance, the minimum order and memory length that a conventional Volterra filter must have to achieve a desired precision level, which allows assessing whether models of this type are feasible in terms of computational cost. When they are not, the information provided by the kernels can be exploited to choose another representation, such as a modular structure or an alternative Volterra structure. Furthermore, the reference kernels are also useful for quantitatively evaluating the performance of the chosen structure and comparing it with that of a conventional Volterra filter. To compute the reference kernels, an algorithm that efficiently implements the Carleman method is proposed. This algorithm and the basic idea of the developed methodology constitute the main contributions of this work. As a case study, a physical model for loudspeakers available in the literature is employed to assess the suitability of several structures for modeling devices of this kind. This example demonstrates the utility of the reference kernels for the aforementioned purposes.
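For context, a truncated Volterra filter of order P and memory length M builds its output from products of up to P delayed input samples, so the number of kernel coefficients grows roughly as M^P; this growth is exactly why the feasibility assessment above matters. A minimal second-order sketch in Python, illustrative only and not the algorithm derived in the thesis:

```python
import numpy as np

def volterra2_output(x, h1, h2):
    """Output of a truncated second-order Volterra filter.

    x  : 1-D input signal (array-like)
    h1 : first-order kernel, shape (M,)
    h2 : second-order kernel, shape (M, M)
    In the methodology above, kernel values would come from Carleman
    bilinearization of a physical reference model; here they are
    simply given.
    """
    M = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # the M most recent input samples x[n], x[n-1], ..., zero-padded
        xn = np.array([x[n - m] if n - m >= 0 else 0.0 for m in range(M)])
        y[n] = h1 @ xn + xn @ h2 @ xn  # linear term + quadratic term
    return y
```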
212

Event detection algorithm for low-cost single-phase electronic billing meters

Francisco Pereira Júnior 06 November 2014
This thesis presents an algorithm able to detect events that affect power quality and to quantify levels of harmonic distortion using little memory and few processing resources. The reduced processing load is compatible with the processors used in low-cost single-phase billing meters, whereas other detection methods, which require greater processing power, may compromise the operation of such meters. The proposed algorithm takes the sampled single-phase signal and generates the signals of an equivalent virtual polyphase system, capable of detecting short-duration voltage variations (sags and swells) and low-frequency oscillatory transients. Harmonic distortion present in the sampled signal can be quantified in groups related to the number of virtual phases created. The algorithms were simulated with 10- and 12-bit A/D conversion, and the results are reported. A test meter based on a low-cost processor with a 10-bit A/D converter was programmed with the algorithm, and its results are compared with the simulations.
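The virtual-polyphase scheme itself is specific to the thesis; as a generic illustration of the kind of lightweight event detection a billing-meter processor can afford, the sketch below flags sags and swells from a sliding one-cycle RMS. The 0.9/1.1 per-unit thresholds follow common power-quality practice and are not taken from the thesis.

```python
import numpy as np

def detect_sag_swell(samples, fs, f0=60.0, v_nominal=127.0,
                     sag=0.9, swell=1.1):
    """Flag voltage sags/swells from a one-cycle sliding RMS.

    samples : 1-D numpy array of instantaneous voltage samples
    fs      : sampling rate in Hz
    Generic textbook approach for illustration; the thesis's own
    virtual-polyphase method differs.
    """
    n_cycle = int(round(fs / f0))  # samples per fundamental cycle
    events = []
    # advance by half a cycle, as common power-quality practice does
    for start in range(0, len(samples) - n_cycle, n_cycle // 2):
        window = samples[start:start + n_cycle]
        vrms = np.sqrt(np.mean(window ** 2)) / v_nominal  # per unit
        if vrms < sag:
            events.append((start / fs, "sag", vrms))
        elif vrms > swell:
            events.append((start / fs, "swell", vrms))
    return events
```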
213

Human asymmetry in multibiometric recognition

Rodolfo Vertamatti 13 October 2011
The combination of non-redundant biometric sources in multibiometrics surpasses the accuracy of each individual source (monobiometrics). Moreover, two problems in biometrics, noise and impostor attacks, can be minimized by the use of multiple sensors and multimodal biometrics. However, if similarities are present in all traits, as in monozygotic (MZ) twins, processing multiple sources does not improve performance. To distinguish extreme similitude, epigenetic and environmental influences are more important than inherited DNA. This thesis examines phenotypic plasticity in human asymmetry as a tool to improve multibiometrics. The Bilateral Processing (BP) technique is introduced to analyze discordances between the left and right sides of biometric traits. BP was tested on visible- and infrared-spectrum images using cross-correlation, wavelets, and artificial neural networks. The selected traits were teeth, ears, irises, fingerprints, nostrils, and cheeks. Acoustic BP was also implemented to evaluate vibration asymmetry during voiced sounds and compared to a speaker recognition system parameterized via MFCC (Mel Frequency Cepstral Coefficients) and classified by vector quantization. For image and acoustic BP, 20 samples per biometric trait were collected over one year from nine adult male brothers. For test purposes, left-side biometrics served as impostors to right-side biometrics of the same individual and vice versa, yielding 18 entities to be identified per trait. The results achieved total identification in all biometrics treated with BP, compared to a maximum of 44% correct identification without BP. This thesis concludes that bilateral peculiarities improve multibiometric performance and can complement any recognition approach.
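As a rough sketch of the cross-correlation variant of BP mentioned above, a left/right similarity score can be computed by mirroring one side and correlating; low scores indicate stronger asymmetry. This assumes equally sized, pre-aligned grayscale images, and the thesis's wavelet, neural-network, and acoustic pipelines are not reproduced here.

```python
import numpy as np

def asymmetry_score(left, right):
    """Normalized cross-correlation between a left-side trait image
    and the mirrored right-side image (both 2-D float arrays of the
    same shape). Values near 1 mean high left/right similarity;
    lower values mean more bilateral asymmetry to exploit.
    """
    r = np.fliplr(right)                    # mirror so the sides align
    l0 = (left - left.mean()) / left.std()
    r0 = (r - r.mean()) / r.std()
    return float(np.mean(l0 * r0))          # in [-1, 1]
```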
214

Design of Programmable Baseband Processors

Tell, Eric January 2005
The world of wireless communications is under constant change. Radio standards evolve and new standards emerge, and more and more functionality is put into wireless terminals. For example, mobile phones need to handle both second- and third-generation mobile telephony as well as Bluetooth, and will soon also support wireless LAN functionality, reception of digital audio and video broadcasting, etc. These developments have led to an increased interest in software defined radio (SDR), i.e. radio devices that can be reconfigured via software. SDR would provide benefits such as low cost for multi-mode devices, reuse of the same hardware in different products, and increased product lifetime via software updates. One essential part of any software defined radio is a programmable baseband processor that is flexible enough to handle different types of modulation, different channel coding schemes, and different trade-offs between data rate and mobility. So far, programmable baseband solutions have mostly been used in high-end systems such as mobile telephony base stations, since the cost and power consumption have been considered too high for handheld terminals. In this work a new low-power, low-silicon-area programmable baseband processor architecture aimed at multi-mode terminals is presented. The architecture is based on a customized DSP core and a number of hardware accelerators connected via a configurable network. It offers a good trade-off between flexibility and performance through an optimized instruction set, efficient hardware acceleration of carefully selected functions, low memory cost, and low control overhead. One main contribution of this work is a study of important issues in programmable baseband processing such as software-hardware partitioning, instruction-level acceleration, low-power design, and memory issues. Further contributions are a unique optimized instruction set architecture, a unique architecture for efficient integration of hardware accelerators in the processor, and the mapping of complete baseband applications to the presented architecture. The architecture has been proven in a manufactured demonstrator chip for wireless LAN applications: wireless LAN firmware has been developed and run on the chip at full speed, and silicon area and measured power consumption have proven to be similar to those of a non-programmable ASIC solution.
215

Audio-video based handwritten mathematical content recognition

Vemulapalli, Smita 12 November 2012
Recognizing handwritten mathematical content is a challenging problem, and more so when such content appears in classroom videos. However, given that in such videos the handwritten text and the accompanying audio refer to the same content, a combination of video- and audio-based recognizers has the potential to significantly improve recognition accuracy. This dissertation, using a combination of video- and audio-based recognizers, focuses on improving the recognition accuracy associated with handwritten mathematical content in such videos. Our approach makes use of a video recognizer as the primary recognizer, and a multi-stage assembly, developed as part of this research, is used to facilitate effective combination with an audio recognizer. Specifically, we address the following challenges related to audio-video based handwritten mathematical content recognition: (1) Video Preprocessing - generates a timestamped sequence of segmented characters from the classroom video in the face of occlusions and shadows caused by the instructor; (2) Ambiguity Detection - determines the subset of input characters that may have been incorrectly recognized by the video-based recognizer and forwards this subset for disambiguation; (3) A/V Synchronization - establishes correspondence between the handwritten characters and the spoken content; (4) A/V Combination - combines the synchronized outputs from the video- and audio-based recognizers and generates the final recognized character; and (5) Grammar-Assisted A/V-Based Mathematical Content Recognition - utilizes a base mathematical speech grammar for both character and structure disambiguation. Experiments conducted using videos recorded in a classroom-like environment demonstrate the significant improvements in recognition accuracy that can be achieved using our techniques.
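As a toy illustration of step (4), two recognizers can be combined by late fusion of their per-character confidence scores; the weight and score format below are hypothetical, and the dissertation's multi-stage assembly is considerably more elaborate.

```python
def combine_av(video_scores, audio_scores, w_video=0.7):
    """Late fusion of per-character confidence scores.

    video_scores, audio_scores : dicts mapping candidate characters
    to confidences in [0, 1]; w_video is a hypothetical tuning weight.
    Returns the fused best candidate.
    """
    candidates = set(video_scores) | set(audio_scores)
    fused = {c: w_video * video_scores.get(c, 0.0)
                + (1 - w_video) * audio_scores.get(c, 0.0)
             for c in candidates}
    return max(fused, key=fused.get)

# e.g. video confuses 'x' with the multiplication sign; audio breaks the tie
print(combine_av({"x": 0.5, "×": 0.5}, {"x": 0.9}))
```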
216

Carrier frequency offset recovery for zero-IF OFDM receivers

Mitzel, Michael 13 February 2009
As trends in broadband wireless communications applications demand faster development cycles, smaller sizes, lower costs, and ever increasing data rates, engineers continually seek new ways to harness evolving technology. The zero intermediate frequency receiver architecture has now become popular, as it has both economic and size advantages over the traditional superheterodyne architecture.

Orthogonal Frequency Division Multiplexing (OFDM) is a popular multi-carrier modulation technique with the ability to provide high data rates over echo-laden channels. It has excellent robustness to impairments caused by multipath, including frequency-selective fading. Unfortunately, OFDM is very sensitive to the carrier frequency offset (CFO) introduced by the downconversion process. The objective of this thesis is to develop and analyze an algorithm for blind CFO recovery suitable for use with a practical zero intermediate frequency (zero-IF) OFDM telecommunications system.

A blind CFO recovery algorithm based upon characteristics of the received signal's power spectrum is proposed. The algorithm's error performance is mathematically analyzed, and the theoretical results are verified with simulations, which show that the performance of the proposed algorithm agrees with the mathematical analysis.

A number of other CFO recovery techniques are compared to the proposed algorithm. The proposed algorithm performs well in comparison and does not suffer from many of the disadvantages of existing blind CFO recovery techniques. Most notably, its performance is not significantly degraded by noisy, frequency-selective channels.
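The standard baseband model behind this sensitivity multiplies the received signal by a complex exponential whose rate is the normalized CFO. The short sketch below shows how even a small offset leaks a subcarrier's energy into neighbouring bins (inter-carrier interference); the thesis's power-spectrum-based blind estimator is not reproduced here.

```python
import numpy as np

def apply_cfo(ofdm_time, eps, n_fft):
    """Apply a normalized CFO of eps subcarrier spacings:
    r[n] = s[n] * exp(j*2*pi*eps*n/N), the usual baseband CFO model."""
    n = np.arange(len(ofdm_time))
    return ofdm_time * np.exp(2j * np.pi * eps * n / n_fft)

n_fft = 64
symbols = np.zeros(n_fft, dtype=complex)
symbols[10] = 1.0                          # one active subcarrier
tx = np.fft.ifft(symbols) * n_fft          # time-domain OFDM symbol
rx = np.fft.fft(apply_cfo(tx, 0.1, n_fft)) / n_fft
print(np.abs(rx[9:12]))                    # energy leaked into bins 9 and 11
```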
218

Using helicopter noise to prevent brownout crashes: an acoustic altimeter

Freedman, Joseph Saul 08 July 2010
This thesis explores one possible method of preventing helicopter crashes caused by brownout: using the noise generated by the helicopter rotor as an altimeter. The hypothesis under consideration is that the helicopter's height, velocity, and obstacle locations with respect to the helicopter can be determined by comparing incident and reflected rotor noise signals, provided adequate bandwidth and signal-to-noise ratio. Heights can be determined by measuring the cepstrum of the reflected helicopter noise. The velocity can be determined by measuring small amounts of Doppler distortion using the Mellin-Scale Transform. Height and velocity detection algorithms are developed, optimized for this application, and tested using a microphone array. The algorithms and array are tested in a hemianechoic chamber and outdoors in Georgia Tech's Burger Bowl. Height and obstacle detection are determined to be feasible with the existing array; velocity detection and surface mapping are not successfully accomplished.
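The height measurement exploits a classic property: an echo delayed by tau adds a peak at quefrency tau in the real cepstrum, and for a microphone near the rotor tau is roughly 2h/c. A minimal sketch of that idea, not the optimized algorithm developed in the thesis:

```python
import numpy as np

def height_from_cepstrum(mic_signal, fs, c=343.0, min_height=1.0):
    """Estimate height above ground from the real cepstrum of rotor
    noise plus its ground reflection; h = c * tau / 2 with tau the
    quefrency of the echo peak. min_height excludes the low-quefrency
    region dominated by the source's own spectral structure.
    """
    spectrum = np.fft.rfft(mic_signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    q0 = int(2 * min_height / c * fs)      # smallest plausible echo delay
    peak = q0 + np.argmax(cepstrum[q0:len(cepstrum) // 2])
    tau = peak / fs
    return c * tau / 2.0
```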
219

Analysis, modeling and wide-area spatiotemporal control of low-frequency sound reproduction

Hill, Adam J. January 2012
This research aims to develop a low-frequency response control methodology capable of delivering a consistent spectral and temporal response over a wide listening area. Low-frequency room acoustics is naturally plagued by room modes, which result from standing waves that form at frequencies whose half-wavelengths fit an integer number of times into one or more room dimensions. The standing wave pattern is different for each modal frequency, causing a complicated sound field that exhibits a highly position-dependent frequency response. Enhanced systems with multiple degrees of freedom (independently controllable sound-radiating sources) are investigated to provide adequate low-frequency response control. The proposed solution, termed a chameleon subwoofer array (CSA), adopts the most advantageous aspects of existing room-mode correction methodologies while emphasizing efficiency and practicality. Multiple degrees of freedom are ideally achieved by employing what is designated a hybrid subwoofer, which provides four orthogonal degrees of freedom configured within a modest-sized enclosure. The CSA software algorithm integrates both objective and subjective measures to address listener preferences, including the possibility of individual real-time control. CSAs and existing techniques are evaluated within a novel acoustical modeling system (an FDTD simulation toolbox) developed to meet the requirements of this research. Extensive virtual development of CSAs has led to experimentation using a prototype hybrid subwoofer. The resulting performance is in line with the simulations, whereby variance across a wide listening area is reduced by over 50% with only four degrees of freedom. A supplemental novel correction algorithm addresses correction issues at select narrow frequency bands. These frequencies are filtered from the signal and replaced using virtual bass, a psychoacoustical effect giving the impression of low-frequency content, so that all aural information is maintained. The virtual bass is synthesized using an original hybrid approach that combines two mainstream synthesis procedures while suppressing each method's inherent weaknesses. This algorithm is demonstrated to improve CSA output efficiency while maintaining acceptable subjective performance.
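The room-mode problem the CSA attacks follows from the standard rigid-wall formula for a rectangular room, f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2); a short sketch listing the modes below some frequency makes the density of the problem concrete. This is textbook acoustics for an idealized room, not output from the thesis's FDTD toolbox.

```python
import numpy as np
from itertools import product

def room_modes(lx, ly, lz, f_max=120.0, c=343.0):
    """Modal frequencies (Hz) of a rigid-walled rectangular room below
    f_max, with mode indices (nx, ny, nz)."""
    n_max = int(2 * f_max * max(lx, ly, lz) / c) + 1
    modes = []
    for nx, ny, nz in product(range(n_max + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (c / 2) * np.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        if f <= f_max:
            modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# e.g. a 5 m x 4 m x 3 m room: first axial mode at c/(2*5) ~ 34 Hz
print(room_modes(5.0, 4.0, 3.0)[:5])
```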
220

Control of the acoustics of enclosed spaces using adapted acoustic elements

Πολυχρονόπουλος, Σπύρος 05 February 2015
Sound is a rather old field of research, yet many of its aspects remain unexplored to this day, so it continues to be an attractive research area for many scientists. Some of the modern scientific fields of acoustics are: room acoustics, psychoacoustics, musical acoustics, speech analysis, electroacoustics, digital audio signal processing, underwater acoustics, acoustic ecology, environmental acoustics, and architectural acoustics, among others. The main object of this thesis is the study of Helmholtz resonators. For this purpose, simulations were carried out in a finite-element environment, the contribution of the resonators to the acoustics of ancient theatres was studied, and models were implemented with the aid of digital filters. The different approaches to the study of the resonators are of great interest, both for furthering knowledge of the principles of their operation and for creating new computational tools that predict their behavior in the acoustic field with fair accuracy without requiring large computational power.
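For reference, the lumped-element approximation puts a Helmholtz resonator's resonance at f = (c / 2*pi) * sqrt(A / (V * L_eff)), with neck area A, cavity volume V, and an effective neck length including end corrections. The sketch below uses a common correction of about 1.7 neck radii; the finite-element and digital-filter models in the thesis capture behavior this idealization misses.

```python
import numpy as np

def helmholtz_frequency(neck_radius, neck_length, volume, c=343.0):
    """Lumped-element resonance frequency of a Helmholtz resonator
    (SI units). L_eff = L + 1.7*r is a common end correction for a
    flanged neck; real resonators deviate from this idealization."""
    area = np.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius
    return (c / (2 * np.pi)) * np.sqrt(area / (volume * l_eff))

# e.g. a 1-litre cavity with a 5 cm long, 1 cm radius neck -> ~118 Hz
print(f"{helmholtz_frequency(0.01, 0.05, 1e-3):.1f} Hz")
```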
