51

Blind multi-user cancellation using the constant modulus algorithm

De Villiers, Johan Pieter 21 September 2005 (has links)
Please read the abstract in the section 00front of this document / Dissertation (M Eng (Electronic Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
52

Two Bayesian learning approaches to image processing

Wang, Yiqing 02 March 2015 (has links)
This work examines two patch-based image processing methods in a Bayesian risk minimization framework. We describe a Gaussian mixture of factor analyzers for modeling the prior distribution of patches within a single image, and apply it to denoising and inpainting. We also study multilayer neural networks from a probabilistic perspective, as a tool for approximating the conditional expectation, which suggests ways to reduce their size and training cost.
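To make the Bayesian risk minimization concrete: for a single Gaussian prior component (one factor analyzer of the mixture) and additive white Gaussian noise, the minimum-mean-squared-error patch estimate is the Wiener filter. A minimal sketch, with all sizes and values illustrative rather than taken from the thesis:

```python
# MMSE patch denoising under a single Gaussian prior component, assuming
# additive white Gaussian noise. In a mixture-of-factor-analyzers prior,
# each patch would first be assigned to a component; the per-component
# estimate has this form.
import numpy as np

def mmse_denoise_patch(y, mu, cov, sigma2):
    """Wiener / MMSE estimate E[x | y] for y = x + n, x ~ N(mu, cov),
    n ~ N(0, sigma2 * I)."""
    d = y.shape[0]
    # Posterior mean: mu + cov (cov + sigma2 I)^{-1} (y - mu)
    gain = cov @ np.linalg.inv(cov + sigma2 * np.eye(d))
    return mu + gain @ (y - mu)

# Hypothetical usage on 8x8 patches flattened to length-64 vectors:
rng = np.random.default_rng(0)
mu = np.zeros(64)
A = rng.standard_normal((64, 8))       # factor loadings (rank 8)
cov = A @ A.T + 0.01 * np.eye(64)      # low-rank-plus-noise prior, as in factor analysis
x = rng.multivariate_normal(mu, cov)
y = x + 0.5 * rng.standard_normal(64)  # noisy observation, sigma2 = 0.25
x_hat = mmse_denoise_patch(y, mu, cov, 0.25)
```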
53

The RAMSES platform for triple display: application to automatic generation of interactive television

Royer, Julien 16 December 2009 (has links)
With the digital revolution, the use of video has evolved considerably over recent decades, moving from cinema to television and then to the web, from fictional narrative to documentary, and from editorial production to user-generated content. Media are the vehicles for exchanging information, knowledge, personal reports, and emotions. The automatic enrichment of multimedia documents has been a research topic since the advent of these media. In this context, we first propose a model of the concepts and actors involved in automatically analyzing multimedia documents in order to dynamically deploy interactive services related to the media content. We define the concepts of analyzer, interactive service, and multimedia document description, together with the functions needed to make them interact. The resulting analysis model stands apart from the literature by proposing a modular, open, and evolutive architecture: rather than attempting a full description of the media (in effect, trying to describe the world through a tree of concepts and relationships, which time and computing limitations make impossible), the platform hosts, combines, and shares existing multimedia analyzers, and selects from them only the elements required by the user's services. We then present an implementation of these concepts in a demonstration prototype, with an implementation and recommendations detailed for each model. The platform uses the MPEG-7 standard for description, MPEG-4 BIFS for interactive scenes, and OSGi for the overall architecture, and we present several examples of interactive services integrated into the platform to verify its capacity to adapt to the needs of one or more services. The main implemented example is an interactive mobile TV application covering parliamentary sessions, which automatically inserts interactive content (complementary information, the subject of the current session, and so on) into the original TV program. We also demonstrate the platform's capacity to adapt to multiple application domains through a set of simple interactive services (goodies, games, and the like).
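A minimal sketch of the plug-in analyzer idea described above, not the RAMSES implementation itself; all names (AnalyzerRegistry, the "duration" descriptor, the toy analyzer) are illustrative:

```python
# A registry of media analyzers: a service requests only the descriptors
# it needs, and the platform runs the matching plug-in analyzers instead
# of producing a full description of the media.
from typing import Callable, Dict, List

Analyzer = Callable[[bytes], dict]  # media bytes -> partial description

class AnalyzerRegistry:
    def __init__(self) -> None:
        self._analyzers: Dict[str, Analyzer] = {}

    def register(self, descriptor: str, analyzer: Analyzer) -> None:
        self._analyzers[descriptor] = analyzer

    def describe(self, media: bytes, wanted: List[str]) -> dict:
        # Run only the analyzers required by the requested service.
        description = {}
        for name in wanted:
            if name in self._analyzers:
                description[name] = self._analyzers[name](media)
        return description

registry = AnalyzerRegistry()
registry.register("duration", lambda m: {"seconds": len(m) / 1_000_000})  # toy analyzer
print(registry.describe(b"\x00" * 2_000_000, wanted=["duration"]))
```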
54

Mixture of Factor Analyzers with Information Criteria and the Genetic Algorithm

Turan, Esra 01 August 2010 (has links)
In this dissertation, we develop and combine several statistical techniques in Bayesian factor analysis (BAYFA) and mixtures of factor analyzers (MFA) to overcome shortcomings of the existing methods. Information criteria are brought into the BAYFA model as a decision rule for choosing the number of factors m, along with the Press and Shigemasu method, Gibbs sampling, and iterated conditional modes deterministic optimization. Because BAYFA is sensitive to prior information on the factor pattern structure, the prior factor pattern structure is learned adaptively and directly from the sample observations using the Sparse Root algorithm. Clustering and dimensionality reduction have long been considered two of the fundamental problems in unsupervised learning and statistical pattern recognition. We introduce a novel statistical learning technique, viewing MFA as a method for model-based density estimation that clusters high-dimensional data while simultaneously carrying out factor analysis to reduce the curse of dimensionality within an expert data mining system. The typical EM algorithm can get trapped in one of many local maxima; it is slow to converge, may never reach the global optimum, and is highly dependent on initial values. We extend the EM algorithm for MFA proposed by Ghahramani and Hinton (1997) with intelligent initialization techniques, K-means and a regularized Mahalanobis distance, and introduce a new genetic EM algorithm (GEM) for MFA to overcome these shortcomings. Another limitation of the EM algorithm for MFA is the assumption that the error-vector variance and the number of factors are the same for every mixture component. We propose a two-stage GEM algorithm for MFA that relaxes this constraint and obtains a different number of factors for each population. Our approach integrates statistical modeling procedures based on information criteria as a fitness function to determine the number of mixture clusters and, at the same time, to choose the number of factors that can be extracted from the data.
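The abstract uses information criteria as the decision rule for the number of factors m. A minimal sketch of that idea, using plain maximum-likelihood factor analysis and BIC rather than the dissertation's BAYFA/GEM machinery; scikit-learn and the rough parameter count are assumptions:

```python
# Choosing the number of factors m by minimizing BIC over candidate models.
import numpy as np
from sklearn.decomposition import FactorAnalysis

def bic_for_m(X, m):
    """BIC for a factor analysis model with m factors."""
    n, p = X.shape
    fa = FactorAnalysis(n_components=m, random_state=0).fit(X)
    loglik = fa.score(X) * n   # score() returns the mean log-likelihood
    n_params = p * m + p       # loadings + diagonal noise (rough count)
    return -2.0 * loglik + n_params * np.log(n)

rng = np.random.default_rng(0)
# Synthetic data with 3 true factors in 10 dimensions.
Z = rng.standard_normal((500, 3))
W = rng.standard_normal((3, 10))
X = Z @ W + 0.1 * rng.standard_normal((500, 10))
best_m = min(range(1, 7), key=lambda m: bic_for_m(X, m))
print("chosen number of factors:", best_m)  # expected: 3
```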
55

A new proposed method of contingency ranking

Gossman, Stephanie Mizzell 18 May 2010 (has links)
Security analysis of a power system requires a process called contingency analysis, which analyzes the results of every possible single contingency (i.e., outage) in the system. Contingency analysis requires defining a parameter that monitors a certain aspect of the system, called a performance index. Traditional performance index definitions have been highly nonlinear, and in some cases their results have not accurately predicted the outcome of the performance index. These incorrect results are referred to as misrankings, since contingency results are usually placed in order of severity so that the most severe cases are evident. This thesis considers a new contingency ranking approach using a more linear definition of the performance index. Both the proposed definition and the classic definition compare the current loading of circuits in the system with their rated values: the proposed definition measures the difference between the two quantities, while the more nonlinear definition uses their ratio raised to a higher power. A small four-bus test system is used to demonstrate the benefits of the new, more linear definition. The average percent error over all single-line contingencies decreased by over 9.5% using the proposed definition compared with the previous one. This decrease in error allows the performance index to monitor a similar parameter (comparing current loading with the current rating of the lines) while achieving a higher degree of accuracy. Further linearizing the proposed definition reduces the average percent error by an additional 22%, so that, compared with the original highly nonlinear definition, the average error is reduced by almost 30%. With a more linear performance index definition, the results are more accurate and misrankings are less likely to occur in the security analysis process.
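A hedged sketch contrasting the two performance-index styles described above. The exact exponents and any weighting used in the thesis are not stated in the abstract, so the forms below (a loading/rating ratio raised to 2n versus a simple loading-rating difference) are assumptions:

```python
# Two performance-index forms over post-contingency branch currents.
import numpy as np

def pi_classic(I, I_rated, n=2):
    """Traditional, highly nonlinear index: sum of (loading / rating)^(2n)."""
    return np.sum((I / I_rated) ** (2 * n))

def pi_linear(I, I_rated):
    """More linear index based on the difference between loading and rating."""
    return np.sum(I - I_rated)

# Post-contingency branch currents vs. ratings (arbitrary example values):
I = np.array([0.9, 1.1, 0.7])
I_rated = np.array([1.0, 1.0, 1.0])
print(pi_classic(I, I_rated), pi_linear(I, I_rated))
```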
56

Probabilistic modeling of neural data for analysis and synthesis of speech

Matthews, Brett Alexander 13 August 2012 (has links)
This research consists of probabilistic modeling of speech audio signals and deep-brain neurological signals in brain-computer interfaces. A significant portion of this research is a collaborative effort with Neural Signals Inc., Duluth, GA, and Boston University to develop an intracortical neural prosthetic system for speech restoration in a human subject living with Locked-In Syndrome, i.e., he is paralyzed and unable to speak. The work is carried out in three major phases. We first use kernel-based classifiers to detect evidence of articulation gestures and phonological attributes in speech audio signals, and demonstrate that articulatory information can be used to decode speech content from them. In the second phase of the research, we use neurological signals collected from a human subject with Locked-In Syndrome to predict intended speech content. The neural data were collected with a microwire electrode surgically implanted in the speech motor cortex of the subject's brain, with the implant location chosen to capture extracellular electric potentials related to speech motor activity. The data include extracellular traces and firing occurrence times for neural clusters in the vicinity of the electrode, identified by an expert. We compute continuous firing rate estimates for the ensemble of neural clusters using several rate estimation methods and apply statistical classifiers to the rate estimates to predict intended speech content. We use Gaussian mixture models to classify short frames of data into 5 vowel classes and to discriminate intended speech activity in the data from non-speech. We then perform a series of data collection experiments with the subject, designed to test explicitly for several speech articulation gestures, and decode the data offline. Finally, in the third phase of the research, we develop an original probabilistic method for the task of spike-sorting in intracortical brain-computer interfaces, i.e., identifying and distinguishing action potential waveforms in extracellular traces. Our method uses both action potential waveforms and their occurrence times to cluster the data, and we apply it to semi-artificial data and partially labeled real data. We then classify neural spike waveforms, modeled with single multivariate Gaussians, using the method of minimum classification error for parameter estimation. Finally, we apply our joint waveform and occurrence time spike-sorting method to neurological data in the context of a neural prosthesis for speech.
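A minimal sketch of the decoding pipeline described above: binned firing-rate estimates for an ensemble of neural clusters, then Gaussian-mixture classifiers over short frames. The class labels, frame sizes, and data are illustrative, not the study's actual protocol; scikit-learn is assumed:

```python
# Firing-rate estimation followed by per-class GMM likelihood classification.
import numpy as np
from sklearn.mixture import GaussianMixture

def firing_rates(spike_times, t_end, bin_s=0.05):
    """Histogram-based rate estimate (spikes/s) for one neural cluster."""
    bins = np.arange(0.0, t_end + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, bins=bins)
    return counts / bin_s

print(firing_rates(np.array([0.12, 0.18, 0.81]), t_end=1.0))

rng = np.random.default_rng(0)
# Two toy "vowel" classes of rate-vector frames (5 clusters per frame).
X0 = rng.normal(10.0, 2.0, size=(200, 5))
X1 = rng.normal(20.0, 2.0, size=(200, 5))
models = [GaussianMixture(n_components=2, random_state=0).fit(X) for X in (X0, X1)]

def classify(frame):
    # Pick the class whose mixture assigns the highest log-likelihood.
    scores = [m.score_samples(frame[None, :])[0] for m in models]
    return int(np.argmax(scores))

print(classify(rng.normal(20.0, 2.0, size=5)))  # expected: 1
```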
57

Efficient radio frequency power amplifiers for wireless communications

Cui, Xian. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Full text release at OhioLINK's ETD Center delayed at author's request
58

Design of an electronic system with simultaneous recording of pulse amplitude and event time, applied to the 4πβ-γ coincidence method

TOLEDO, FABIO de 09 October 2014 (has links)
Dissertation (Master's) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
59

Removal of dyes from textile wastewater by surfactant-modified zeolite from coal ash and evaluation of toxic effects

FERREIRA, PATRICIA C. 10 December 2015 (has links)
Thesis (Doctorate in Nuclear Technology) / IPEN/T / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
60

Sound pressure and noise dose meter

Gazzoni, Fernando Estevam 26 February 2009 (has links)
CAPES; CNPq / The study of sound and its influence on human beings has intensified in recent decades owing to the large number of vehicles and industries in the centers of big cities. The instrument used to characterize sound and verify that it is within technical standards is the sound level meter. The standards cover equipment ranging from meters that measure only the sound pressure level to meters that also show its frequency spectrum and the average pressure to which an operator was exposed during the workday. In Brazil, most sound level meters on the market that do more than measure sound pressure are imported. The present work aims to build a prototype sound level meter capable of measuring sound pressure, characterizing the measured signal in the frequency domain, and calculating the noise dose to which an individual is exposed. A type 2 sound level meter was developed with slow, fast, and impulse response curves, octave-band frequency analysis, and A and C weighting curves. The developed software and the response of the assembled circuit were tested using the C weighting curve, which is nearly linear and therefore best suited to verifying the frequency response of the designed electronic circuit; the A weighting curve was used in the noise dose tests. For tests with periodic waves, the slow, fast, and impulse response curves produced identical results, as expected under IEC 651. The firmware showed good frequency resolution in the tests and responded efficiently to variations in the amplitude and frequency of the input sound signal. Bench tests compared the prototype against a commercial sound level meter; for some sound signals, the measurements showed a large spread between minimum and maximum values. This error is attributed to the background noise of the test room, the microphone used, and errors intrinsic to the fast Fourier transform (FFT) process, such as spectral leakage due to the discontinuity between the beginning and end of the sampling window and the number of samples in the window. The decimation filter intensified the errors around 250 Hz. The noise dose calculated by the prototype was proportional to the increase in the sound intensity of the source, as also recorded by a commercial dosimeter, though it always registered a value higher than expected.
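A hedged sketch of the noise dose computation the meter performs. The criterion level (85 dB(A)), reference duration (8 h), and 5 dB exchange rate follow Brazil's NR-15 convention; treat them as assumed parameters, since the abstract does not state them:

```python
# Noise dose as a percentage of the permitted daily exposure.
def allowed_hours(level_db, criterion=85.0, t_ref=8.0, exchange=5.0):
    """Maximum permitted exposure time at a given A-weighted level;
    every `exchange` dB above the criterion halves the allowed time."""
    return t_ref / (2.0 ** ((level_db - criterion) / exchange))

def noise_dose(exposures):
    """Dose in percent for a list of (level_dB, hours) exposures;
    100% means the daily limit was fully consumed."""
    return 100.0 * sum(t / allowed_hours(level) for level, t in exposures)

# Example: 4 h at 85 dB(A) plus 2 h at 95 dB(A) -> 50% + 100% = 150%.
print(noise_dose([(85.0, 4.0), (95.0, 2.0)]))
```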
