About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Classificação e reconhecimento de padrões em imagens tridimensionais utilizando Redes Neurais Artificiais (RNAs)

Kuester Neto, Paulo (24 April 2009)
This project is part of the research line Collective Intelligence and Interactive Environments and aims to investigate modes of pattern recognition and classification in three-dimensional images using Artificial Neural Networks. To achieve this, three-dimensional images are submitted to a connectionist system based on Artificial Neural Networks, with a backpropagation algorithm used as the basis for training, in order to obtain patterns that are common among these images. The work aims to contribute to image analysis applied to research areas ranging from forest mapping and the construction of virtual worlds to prognosis and diagnosis in health-related fields, where, because of variances and imperfections in images said to be similar, simple algorithms that recognize similarities between them cannot be used. In light of the theoretical presuppositions discussed in chapter 2 and the state of the art reviewed in chapter 3, the characteristics, organization modes, learning algorithms and free parameters of the neural model that best suit the nature of the research are defined. The work involves a simulation environment, the framework for experimenting with neural models and verifying results, chosen according to characteristics such as reliability, viability and adequacy to hardware conditions and limitations. The environment must also be capable of dealing with the research object itself, that is, the analysis and classification of three-dimensional forms and their recognition through adjustments to the parameters of the neural model. The research is divided into two phases. The first is network training, in which some images sharing common characteristics are arbitrarily chosen from an image base and used to adjust the Neural Network. In the second phase, after testing and training, the network must be capable of handling the rest of the selected image base. The system must also deal effectively with exceptions and with variation in characteristics such as lighting, positioning and color. The challenge is to make the training of the neural network as generic as possible, so that it can handle these variations and offer a degree of reliability without a substantial loss of effectiveness.
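The abstract describes a backpropagation-trained connectionist classifier applied in two phases: training on an arbitrarily chosen, labelled subset of volumes, then evaluation on the rest of the image base. The sketch below is only a rough illustration of that workflow, not the thesis' implementation: a tiny NumPy multilayer perceptron is trained with plain backpropagation on flattened synthetic 8x8x8 volumes and then scored on a held-out remainder. The volume size, layer widths, labels and data are all invented for the example.

```python
import numpy as np

# Minimal two-phase workflow sketched from the abstract: (1) train a small
# feedforward network with backpropagation on a labelled subset of volumes,
# (2) evaluate it on the remaining volumes. Shapes and data are illustrative.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """One-hidden-layer perceptron trained with plain backpropagation."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)       # hidden activations
        self.y = sigmoid(self.h @ self.W2 + self.b2)  # output activations
        return self.y

    def backward(self, X, target):
        # Squared-error gradient at the output, propagated back to the input layer.
        d_out = (self.y - target) * self.y * (1 - self.y)
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_hid
        self.b1 -= self.lr * d_hid.sum(axis=0)

# Phase 1: train on an arbitrarily chosen, labelled subset of 8x8x8 volumes.
volumes = rng.random((200, 8, 8, 8))          # stand-in for the image base
labels = (volumes.mean(axis=(1, 2, 3)) > 0.5).astype(float).reshape(-1, 1)
X = volumes.reshape(len(volumes), -1)         # flatten voxels to feature vectors
net = TinyMLP(n_in=X.shape[1], n_hidden=16, n_out=1)
for epoch in range(500):
    net.forward(X[:150])
    net.backward(X[:150], labels[:150])

# Phase 2: the trained network classifies the remaining volumes.
pred = (net.forward(X[150:]) > 0.5).astype(float)
print("held-out accuracy:", (pred == labels[150:]).mean())
```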
2

Ultrasonic stochastic localization of hidden discontinuities in composites using multimodal probability beliefs

Warraich, Daud Sana, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW (January 2009)
This thesis presents a technique to stochastically estimate the location of hidden discontinuities in carbon fiber composite materials. Composites pose a challenge to signal processing because speckle noise, resulting from reflections off the impregnated laminas, masks useful information and impedes detection of hidden discontinuities. Although digital signal processing techniques have been exploited to lessen speckle noise and help localize discontinuities, uncertainty in ultrasonic wave propagation and broadband-frequency inspection of composites still make this a difficult task. The technique proposed in this thesis estimates the location of hidden discontinuities stochastically in one and two dimensions based on statistical data from A-Scans and C-Scans. Multiple experiments were performed on carbon fiber reinforced plastics containing artificial delaminations and porosity at different depths through the thickness of the material. The method uses a probabilistic approach that localizes discontinuities precisely in both high- and low-amplitude signals. Compared to conventional techniques, the proposed technique is more reliable, detecting discontinuities in lower-intensity signals by exploiting the repeated amplitudes in multiple sensor observations obtained from one-dimensional A-Scan or two-dimensional C-Scan data sets. The thesis presents the methodology behind the proposed technique and the implementation of a system that processes real ultrasonic signals and images for effective discontinuity detection and localization.
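The abstract's central idea is that repeated amplitude observations can be fused into a probabilistic belief about where a discontinuity lies. The sketch below is a generic illustration of that idea, not the thesis' multimodal formulation: each simulated A-scan's rectified amplitude profile is treated as a pseudo-likelihood over the depth axis, and multiplying the profiles across scans concentrates probability mass at depths that echo consistently, even when individual echoes are weak. The signal model, noise levels and depth axis are illustrative assumptions.

```python
import numpy as np

# Illustrative fusion of repeated A-scan observations into a location belief.
rng = np.random.default_rng(1)

depth = np.linspace(0.0, 10.0, 500)            # mm, illustrative depth axis
true_flaw_depth = 6.3                          # hidden discontinuity (mm)

def simulated_ascan(flaw_depth, noise=0.8):
    """Weak flaw echo buried in speckle-like noise (stand-in for real data)."""
    echo = 0.5 * np.exp(-((depth - flaw_depth) ** 2) / (2 * 0.05 ** 2))
    speckle = noise * np.abs(rng.normal(0.0, 0.3, depth.size))
    return echo + speckle

def fuse_scans(scans):
    """Multiply per-scan pseudo-likelihoods in log space, normalise to a posterior."""
    log_belief = np.zeros(depth.size)
    for scan in scans:
        likelihood = scan / scan.sum()          # amplitude -> pseudo-likelihood
        log_belief += np.log(likelihood + 1e-12)
    belief = np.exp(log_belief - log_belief.max())
    return belief / belief.sum()

scans = [simulated_ascan(true_flaw_depth) for _ in range(20)]
posterior = fuse_scans(scans)
estimate = depth[np.argmax(posterior)]
print(f"estimated flaw depth: {estimate:.2f} mm (true: {true_flaw_depth} mm)")
```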
3

Parallel distributed-memory particle methods for acquisition-rate segmentation and uncertainty quantifications of large fluorescence microscopy images

Afshar, Yaser (08 November 2016)
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit into the main memory of a single computer. Another issue is the information loss during image acquisition due to limitations of the optical imaging system: analysis of the acquired images may therefore find multiple solutions (or no solution) because of imaging noise, blurring, and other uncertainties introduced during acquisition. In this thesis, we address the processing-time and memory issues by developing a distributed parallel algorithm for the segmentation of large fluorescence-microscopy images. The method is based on the versatile Discrete Region Competition algorithm (Cardinale et al., 2012), which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels) but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data inspection and interactive experiments. Second, we estimate the segmentation uncertainty on large images that do not fit the main memory of a single computer. We therefore develop a distributed parallel algorithm for efficient Markov-chain Monte Carlo Discrete Region Sampling (Cardinale, 2013). The parallel algorithm provides a measure of segmentation uncertainty in a statistically unbiased way, approximating the posterior probability density over the high-dimensional space of segmentations around the previously found segmentation.
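The abstract's first contribution rests on decomposing a large image into sub-images held on different computers and keeping their shared faces consistent via network communication. The mpi4py sketch below illustrates that pattern generically, and is not the thesis' distributed Discrete Region Competition: each rank owns a block of z-slices plus one ghost slice per face, exchanges the ghost slices with its neighbours each iteration, and runs a toy smoothing-and-thresholding update on its block. The sub-image size, the toy update rule and the script name in the run command are assumptions.

```python
import numpy as np
from mpi4py import MPI

# 1-D domain decomposition with ghost-layer exchange: each rank holds a block of
# z-slices of a large volume and keeps its boundary slices consistent with its
# neighbours. Run with e.g.: mpiexec -n 4 python halo_demo.py
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nz_local, ny, nx = 32, 64, 64                      # illustrative sub-image size
rng = np.random.default_rng(rank)
sub = rng.random((nz_local + 2, ny, nx))           # slices 0 and -1 are ghosts

lower = rank - 1 if rank > 0 else MPI.PROC_NULL    # neighbour below in z
upper = rank + 1 if rank < size - 1 else MPI.PROC_NULL

def exchange_ghosts(field):
    """Swap boundary slices with both neighbours (no-ops at the domain ends)."""
    comm.Sendrecv(np.ascontiguousarray(field[1]), dest=lower, sendtag=0,
                  recvbuf=field[0], source=lower, recvtag=1)
    comm.Sendrecv(np.ascontiguousarray(field[-2]), dest=upper, sendtag=1,
                  recvbuf=field[-1], source=upper, recvtag=0)

# Toy "segmentation" loop: smooth along z, re-threshold, keep faces consistent.
labels = (sub > 0.5).astype(np.float64)
for _ in range(5):
    exchange_ghosts(labels)
    smoothed = (labels[:-2] + labels[1:-1] + labels[2:]) / 3.0
    labels[1:-1] = (smoothed > 0.5).astype(np.float64)

local_fg = labels[1:-1].sum()
total_fg = comm.reduce(local_fg, op=MPI.SUM, root=0)
print(f"rank {rank}: local foreground voxels = {local_fg:.0f}")
if rank == 0:
    print(f"total foreground voxels across {size} ranks = {total_fg:.0f}")
```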
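For the uncertainty-quantification part, the following sketch shows a plain single-site Metropolis-Hastings sampler over a binary label field; it is a generic stand-in, not the thesis' Discrete Region Sampling. Starting from an initial segmentation, pixel-label flips are proposed under a Gaussian data term with an Ising-style smoothness prior, and the accepted label fields are averaged into a per-pixel marginal that can be read as an uncertainty map. The image, noise level and prior strength are illustrative.

```python
import numpy as np

# Metropolis-Hastings over pixel labels around an initial segmentation.
rng = np.random.default_rng(2)

image = rng.normal(0.0, 0.3, (16, 16))
image[4:12, 4:12] += 1.0                       # bright object on dark background

def log_posterior(labels, img, mu_bg=0.0, mu_fg=1.0, sigma=0.3, beta=0.5):
    """Gaussian data term plus an Ising-style smoothness prior."""
    mu = np.where(labels == 1, mu_fg, mu_bg)
    data = -0.5 * ((img - mu) / sigma) ** 2
    smooth = beta * ((labels[:-1] == labels[1:]).sum()
                     + (labels[:, :-1] == labels[:, 1:]).sum())
    return data.sum() + smooth

labels = (image > 0.5).astype(int)             # initial segmentation
current = log_posterior(labels, image)
samples = np.zeros_like(image)
n_sweeps, burn_in = 150, 50
for sweep in range(n_sweeps):
    for _ in range(image.size):
        i = rng.integers(0, image.shape[0])
        j = rng.integers(0, image.shape[1])
        labels[i, j] ^= 1                      # propose a single label flip
        proposed = log_posterior(labels, image)
        if np.log(rng.random()) < proposed - current:
            current = proposed                 # accept the flip
        else:
            labels[i, j] ^= 1                  # reject: undo the flip
    if sweep >= burn_in:
        samples += labels

marginal = samples / (n_sweeps - burn_in)      # per-pixel P(label == 1)
uncertainty = 1.0 - np.abs(2 * marginal - 1.0) # high where the posterior is split
print("mean per-pixel uncertainty:", uncertainty.mean())
```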
