1

Avaliação de redes neurais competitivas em tarefas de quantização vetorial: um estudo comparativo / Evaluation of competitive neural networks in tasks of vector quantization (VQ): a comparative study

Magnus Alencar da Cruz 06 October 2007 (has links)
The main goal of this master's thesis was to carry out a comparative study of the performance of unsupervised competitive neural network algorithms on vector quantization (VQ) problems and related applications, such as cluster analysis and image compression. The study is motivated by the relative scarcity of systematic comparisons between neural and non-neural algorithms for VQ in the specialized literature. A total of seven algorithms are evaluated, namely: K-means and the WTA, FSCL, SOM, Neural-Gas, FuzzyCL and RPCL networks. Of particular interest is the problem of selecting an adequate number of neurons for a given vector quantization problem. Since no single method works satisfactorily for all applications, the remaining alternative is to evaluate the influence that each type of evaluation metric has on the algorithm under study. For example, the aforementioned vector quantization algorithms are widely used in clustering tasks. In this type of application, cluster validation is based on indices that quantify the compactness and separability of the clusters found, such as the Dunn index and the Davies-Bouldin (DB) index. In image compression tasks, by contrast, a vector quantization algorithm is evaluated in terms of the quality of the reconstructed information, so the most common metrics are the mean squared quantization error (MSE) and the peak signal-to-noise ratio (PSNR). This work verifies empirically that, while the DB index favors architectures with few prototypes and the Dunn index favors architectures with many, the MSE and PSNR metrics always favor even larger numbers. None of these metrics takes the number of model parameters into account. This thesis therefore proposes the use of Akaike's information criterion (AIC) and Rissanen's minimum description length (MDL) criterion to select the optimal number of prototypes. This type of metric proves useful in the search for a number of prototypes that simultaneously satisfies conflicting criteria, i.e. those favoring the lowest possible reconstruction error (MSE and PSNR) versus those favoring more compact and cohesive clusters (Dunn and DB indices). As a consequence, the number of prototypes suggested by AIC and MDL is generally an intermediate value: neither as low as suggested by the Dunn and DB indices, nor as high as suggested by the MSE and PSNR metrics. Another important conclusion is that the most sophisticated models, such as the SOM and Neural-Gas networks, do not necessarily perform best in clustering and vector quantization tasks. The FSCL and FuzzyCL algorithms present the best results in vector quantization tasks, with the FSCL network offering the best cost-benefit ratio due to its lower computational cost. As a final remark, it is worth emphasizing that whatever algorithm is chosen, if its parameters are suitably tuned and its performance fairly evaluated, the differences in performance among these prototype-based algorithms are negligible, with computational cost serving as the tie-breaker.
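To make the model-selection idea concrete, here is a minimal sketch, not taken from the thesis itself: it trains a plain K-means quantizer (standing in for the seven algorithms compared above) for several codebook sizes and scores each size with one common AIC/MDL formulation for a quantizer with k*dim free parameters. The dissertation's exact penalty definitions may differ; only NumPy is assumed.

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        """Plain Lloyd's algorithm: returns (codebook, mean squared quantization error)."""
        rng = np.random.default_rng(seed)
        codebook = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Assign each sample to its nearest prototype.
            d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # Move each prototype to the centroid of its assigned samples.
            for j in range(k):
                if np.any(labels == j):
                    codebook[j] = X[labels == j].mean(axis=0)
        d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return codebook, d2.min(axis=1).mean()

    def aic_mdl(mse, n, k, dim):
        """One common AIC/MDL variant for a k-prototype quantizer:
        log-distortion term plus a penalty on the k*dim codebook parameters."""
        n_params = k * dim
        aic = n * np.log(mse) + 2 * n_params
        mdl = n * np.log(mse) + 0.5 * n_params * np.log(n)
        return aic, mdl

    # Toy data: three Gaussian blobs in 2-D, so the "right" answer is near k=3.
    rng = np.random.default_rng(42)
    X = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [3, 0], [0, 3])])

    for k in (2, 3, 5, 8, 16):
        _, mse = kmeans(X, k)
        aic, mdl = aic_mdl(mse, len(X), k, X.shape[1])
        print(f"k={k:2d}  MSE={mse:.4f}  AIC={aic:8.1f}  MDL={mdl:8.1f}")

On data like this, MSE keeps shrinking as k grows, while AIC and MDL turn upward once extra prototypes stop paying for themselves, which is exactly the intermediate behaviour between the Dunn/DB and MSE/PSNR extremes that the abstract describes.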
2

Probabilistic models in noisy environments: and their application to a visual prosthesis for the blind

Archambeau, Cédric 26 September 2005 (has links)
In recent years, probabilistic models have become fundamental techniques in machine learning. They are successfully applied in various engineering problems, such as robotics, biometrics, brain-computer interfaces and artificial vision, and will gain in importance in the near future. This work deals with the difficult, but common, situation where the data is either very noisy or scarce compared to the complexity of the process to be modeled. We focus on latent variable models, which can be formalized as probabilistic graphical models and learned by the expectation-maximization algorithm or its variants (e.g., variational Bayes).

After carefully studying a non-exhaustive list of multivariate kernel density estimators, we established that in most applications locally adaptive estimators should be preferred. Unfortunately, these methods are usually sensitive to outliers and often have too many parameters to set. Therefore, we focus on finite mixture models, which do not suffer from these drawbacks provided some structural modifications are made.

Two questions are central in this dissertation: (i) how to make mixture models robust to noise, i.e. deal efficiently with outliers, and (ii) how to exploit side-channel information, i.e. additional information intrinsic to the data. To tackle the first question, we extend the training algorithms of the popular Gaussian mixture models to Student-t mixture models. The Student-t distribution can be viewed as a heavy-tailed alternative to the Gaussian distribution, the robustness being tuned by an extra parameter, the number of degrees of freedom. Furthermore, we introduce a new variational Bayesian algorithm for learning Bayesian Student-t mixture models. This algorithm leads to very robust density estimation and clustering. To address the second question, we introduce manifold-constrained mixture models. This new technique exploits the information that the data lives on a manifold of lower dimension than the feature space. Taking the implicit geometrical arrangement of the data into account results in better generalization on unseen data.

Finally, we show that the latent variable framework used for learning mixture models can be extended to construct probabilistic regularization networks, such as the Relevance Vector Machines. Subsequently, we make use of these methods in the context of an optic nerve visual prosthesis to restore partial vision to blind people whose optic nerve is still functional. Although visual sensations can be induced electrically in the blind's visual field, the coding scheme of the visual information along the visual pathways is poorly known. Therefore, we use probabilistic models to link the stimulation parameters to the features of the visual perceptions. Both black-box and grey-box models are considered. The grey-box models take advantage of known neurophysiological information and are more instructive to medical doctors and psychologists.
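The robustness mechanism behind Student-t models can be illustrated with a short sketch, again not taken from the thesis: in the EM updates for a Student-t model, each sample receives a latent scale weight that shrinks with its Mahalanobis distance, so outliers are automatically down-weighted. The example below assumes a single component with fixed degrees of freedom (not Archambeau's variational Bayesian mixture algorithm) and uses only NumPy.

    import numpy as np

    def student_t_em(X, dof=3.0, n_iter=50):
        """EM for a single multivariate Student-t with fixed dof.
        Returns a robust mean, covariance and the per-sample weights."""
        n, d = X.shape
        mu, cov = X.mean(axis=0), np.cov(X.T)
        for _ in range(n_iter):
            # E-step: expected latent scale u_i, small for far-away samples.
            diff = X - mu
            maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
            u = (dof + d) / (dof + maha)
            # M-step: weighted mean and weighted covariance.
            mu = (u[:, None] * X).sum(axis=0) / u.sum()
            diff = X - mu
            cov = (u[:, None] * diff).T @ diff / n
        return mu, cov, u

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 2))
    X[:10] += 25.0                                  # plant a few gross outliers

    mu_t, _, u = student_t_em(X)
    print("sample mean   :", X.mean(axis=0))        # dragged toward the outliers
    print("Student-t mean:", mu_t)                  # stays near the bulk of the data
    print("outlier vs inlier weights:", u[:10].mean(), u[10:].mean())

The outliers end up with weights far below those of the inliers, which is the sense in which the heavy tails, tuned by the degrees of freedom, make the estimator robust.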
