  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Stochastic Nested Aggregation for Images and Random Fields

Wesolkowski, Slawomir Bogumil 27 March 2007 (has links)
Image segmentation is a critical step in building a computer vision algorithm that is able to distinguish between separate objects in an image scene. Image segmentation is based on two fundamentally intertwined components: pixel comparison and pixel grouping. In the pixel comparison step, pixels are determined to be similar or different from each other. In pixel grouping, those pixels which are similar are grouped together to form meaningful regions which can later be processed. This thesis makes original contributions to both of those areas. First, given a Markov Random Field framework, a Stochastic Nested Aggregation (SNA) framework for pixel and region grouping is presented and thoroughly analyzed using a Potts model. This framework is applicable in general to graph partitioning and discrete estimation problems where pairwise energy models are used. Nested aggregation reduces the computational complexity of stochastic algorithms such as Simulated Annealing to order O(N) while at the same time allowing local deterministic approaches such as Iterated Conditional Modes to escape most local minima in order to become a global deterministic optimization method. SNA is further enhanced by the introduction of a Graduated Models strategy which allows an optimization algorithm to converge to the model via several intermediary models. A well-known special case of Graduated Models is the Highest Confidence First algorithm which merges pixels or regions that give the highest global energy decrease. Finally, SNA allows us to use different models at different levels of coarseness. For coarser levels, a mean-based Potts model is introduced in order to compute region-to-region gradients based on the region mean and not edge gradients. Second, we develop a probabilistic framework based on hypothesis testing in order to achieve color constancy in image segmentation. We develop three new shading invariant semi-metrics based on the Dichromatic Reflection Model. 
An RGB image is transformed into an R'G'B' highlight invariant space to remove any highlight components, and only the component representing color hue is preserved to remove shading effects. This transformation is applied successfully to one of the proposed distance measures. The probabilistic semi-metrics show similar performance to vector angle on images without saturated highlight pixels; however, for saturated regions, as well as very low intensity pixels, the probabilistic distance measures outperform vector angle. Third, for interferometric Synthetic Aperture Radar image processing we apply the Potts model using SNA to the phase unwrapping problem. We devise a new distance measure for identifying phase discontinuities based on the minimum coherence of two adjacent pixels and their phase difference. As a comparison we use the probabilistic cost function of Carballo as a distance measure for our experiments.
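The Potts-model grouping and the deterministic ICM optimization mentioned above can be illustrated with a toy sketch. This is not the thesis's SNA algorithm — it is a minimal pairwise Potts energy with a plain Iterated Conditional Modes sweep on a small label grid, with function names and the 4-neighbour setup chosen by us for illustration:

```python
import numpy as np

def potts_energy(labels, beta=1.0):
    """Sum of pairwise Potts penalties over 4-connected neighbours:
    each unlike-labelled neighbour pair costs +beta."""
    e = 0.0
    e += beta * np.sum(labels[:, :-1] != labels[:, 1:])  # horizontal pairs
    e += beta * np.sum(labels[:-1, :] != labels[1:, :])  # vertical pairs
    return e

def icm_sweep(labels, n_labels, beta=1.0):
    """One ICM sweep: greedily set each site to the label that minimizes
    its local Potts energy given its current neighbours. Each greedy
    update only touches that site's pair terms, so the total energy is
    non-increasing."""
    h, w = labels.shape
    out = labels.copy()
    for i in range(h):
        for j in range(w):
            neigh = []
            if i > 0:     neigh.append(out[i - 1, j])
            if i < h - 1: neigh.append(out[i + 1, j])
            if j > 0:     neigh.append(out[i, j - 1])
            if j < w - 1: neigh.append(out[i, j + 1])
            costs = [beta * sum(n != k for n in neigh) for k in range(n_labels)]
            out[i, j] = int(np.argmin(costs))
    return out
```

Plain ICM like this gets stuck in local minima; the thesis's contribution is precisely to embed such local moves in a stochastic nested-aggregation scheme so they can escape most of them.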
13

Improvement of automatic recognition techniques of marine mines by analyzing echo in high resolution sonar images

Elbergui, Ayda 10 December 2013 (has links)
Underwater target classification is mainly based on the analysis of acoustic shadows. The new generation of imaging sonars provides a more accurate description of the acoustic wave scattered by targets, so combining the analysis of shadows and echoes is a promising way to improve automated target classification. Some reliable schemes for automated target classification rely on model-based learning instead of using only experimental samples of the target acoustic response to train the classifier. With this approach, a good level of classification performance can be obtained if the modeling of the target acoustic response is accurate enough. The implementation of the classification method therefore first consists in precisely modeling the acoustic response of the targets. The result of the modeling process is a simulator called SIS (Sonar Image Simulator). As imaging sonars operate at high or very high frequency, the core of the model is based on acoustic ray tracing. Several phenomena are taken into account to increase the realism of the acoustic response (multi-path propagation, interaction with the surrounding seabed, edge diffraction, etc.). The first stage of the classifier uses a model-based approach. The useful information in the acoustic signature of the target is called the "A-scan": in practice, the A-scan of the detected target is compared with a set of A-scans generated by SIS under the same operational conditions. These templates (A-scans) are created by modeling manmade objects of simple and complex shapes (mine-like objects or not). This stage incorporates matched filtering to allow a more flexible classification result, providing a degree of match based on the maximum correlation coefficient. With this approach, the training set can be progressively extended to improve classification when classes are strongly correlated. If the difference between the correlation coefficients of the most likely classes is not sufficient, the result is considered ambiguous. A second stage is proposed to discriminate between these classes by adding new features and/or extending the initial training set with more A-scans in new configurations derived from the ambiguous ones. This classification process is assessed mainly on simulated side-scan sonar data and on a limited set of real data. The use of A-scans achieves good classification performance in a mono-view configuration and improves classification results for some recurring confusions left by methods based only on shadow analysis.
14

Ten Years of Biomass Research at the DBFZ (Zehn Jahre Biomasseforschung am DBFZ)

Trainer, Paul 10 October 2019 (has links)
No description available.
15

Precise Mapping for Retinal Photocoagulation in SLIM (Slit-Lamp Image Mosaicing)

Prokopetc, Kristina 10 November 2017 (has links)
This thesis arises from a Convention Industrielle de Formation par la REcherche (CIFRE) agreement between the Endoscopy and Computer Vision (EnCoV) research group at Université Clermont Auvergne and the company Quantel Medical (www.quantel-medical.fr), which specializes in the development of innovative ultrasound and laser products in ophthalmology. It presents research directed at computer-aided diagnosis and treatment of retinal diseases using the TrackScan industrial prototype developed at Quantel Medical. More specifically, it contributes to the problem of precise Slit-Lamp Image Mosaicing (SLIM) and of automatic multi-modal registration of SLIM with Fluorescein Angiography (FA) to assist navigated pan-retinal photocoagulation. We address three different problems.
The first is the accumulation of registration errors in SLIM, namely mosaicing drift. A common approach to image mosaicing is to compute transformations only between temporally consecutive images in a sequence and then compose them to obtain the transformation between non-consecutive views. Many existing algorithms follow this approach. Despite its low computational cost and simplicity, the 'chaining' nature of this method causes alignment errors to accumulate, making images drift in the mosaic. We propose to use recent advances in key-frame Bundle Adjustment and present a drift-reduction framework specifically designed for SLIM, together with a new local refinement procedure.
Secondly, we tackle the various types of light-related imaging artifacts common in SLIM, which significantly degrade the geometric and photometric quality of the mosaic. Existing solutions deal with strong glares that corrupt the retinal content entirely while leaving aside the correction of semi-transparent specular highlights and lens flare, which introduces ghosting and information loss; moreover, related generic methods do not produce satisfactory results in SLIM. We therefore propose a better alternative: a method based on a fast single-image technique to remove glares and semi-transparent specular highlights, and on motion cues for intelligent correction of lens flare.
Finally, we solve the problem of automatic multi-modal registration of FA and SLIM. A number of related works address multi-modal registration of various retinal image modalities, but most require feature-point detection in both image modalities, which is very difficult for SLIM and FA, and they do not account for accurate registration in the macular area, the priority landmark. Moreover, no fully automatic solution has been developed for SLIM and FA. We propose the first method able to register these two modalities without manual input, detecting retinal features on only one image and ensuring accurate registration in the macular area.
Extensive experiments demonstrate the effectiveness of each of the proposed methods. Our results show that (i) our new local refinement procedure significantly reduces drift, improving precision over the current solution employed in the TrackScan; (ii) the proposed methodology for correcting light-related artifacts is efficient and significantly outperforms related work in SLIM; and (iii) although our solution for multi-modal registration builds on existing methods, with the specific modifications made it is fully automatic, effective, and improves on the baseline registration method currently used in the TrackScan.
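The drift caused by chaining pairwise transformations can be made concrete with a toy example. The sketch below is our own construction, not code from the thesis: it composes 3x3 homographies that are pure translations carrying a small per-pair estimation bias, and shows the end-of-chain position error growing linearly with the number of links — exactly the accumulation that key-frame Bundle Adjustment is meant to suppress:

```python
import numpy as np

def compose_chain(pairwise):
    """Compose pairwise 3x3 homographies H_{i->i+1} into transforms from
    frame 0 to each frame k: H_{0->k} = H_{k-1->k} @ ... @ H_{0->1}."""
    acc = [np.eye(3)]
    for H in pairwise:
        acc.append(H @ acc[-1])
    return acc

def translation_drift(n, step=1.0, err=0.01):
    """Chained unit translations, each estimated with a small bias `err`.
    After n links the bias accumulates to n * err: the mosaicing drift."""
    H = np.eye(3)
    H[0, 2] = step + err              # true step plus estimation bias
    chain = compose_chain([H.copy() for _ in range(n)])
    true_x = n * step
    est_x = chain[-1][0, 2]
    return est_x - true_x
```

Bundle adjustment avoids this by re-estimating all transforms jointly against shared observations instead of trusting each pairwise link in isolation.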
16

INTERTEXTUALIZAÇÃO NA OBRA DE MARINA COLASANTI: O TEAR E O TECIDO

Costa, Ivonete Ferreira da 23 March 2016 (has links)
The text analyzes aspects of literary discourse, such as the processes of scene construction and the magical universe in which Marina Colasanti's narratives unfold, drawing its sample from tales in the collections Doze reis e a moça no labirinto do vento (2006): "A mulher ramada", and Uma ideia toda azul (2006): "Além do bastidor", "Entre as folhas do verde ó", and "Fio após fio". The general and specific objectives are to highlight and distinguish the encompassing and generic scenes present in the narratives, to identify the nature of the verbal sign in its relation to the non-verbal sign, and to analyze resources such as intertext and paratext as artistic procedures. The narrative planes, in which the characters are realized mimetically, are approached from the initial assumption formulated by Dominique Maingueneau. Non-verbal language is an invitation to read verbal language, and vice versa; both are associated with the signs constructed through the textual writing: loom and fabric. They can be seen either explicitly or implicitly, and are placed in the service of a power realized through the act of reading. Thus, in the narrative text, there are traces of a discourse in which the text is staged.
17

Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries

Teng, Sin Yong January 2020 (has links)
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation push these traditional plants toward shutdown and decommissioning. Process improvement and retrofit projects are therefore essential to sustaining their operational performance. Current approaches to process improvement are mainly process integration, process optimization, and process intensification. These fields generally rely on mathematical optimization, practitioner experience, and operational heuristics, and they serve as the foundation for process improvement; their performance, however, can be further enhanced by modern computational intelligence. The purpose of this thesis is thus to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis approaches this problem through simulation of industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modeling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for identifying and removing problematic parts of a system, together with an extension that handles multi-dimensional problems via a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks, and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operational management; (vi) implementation of an artificial neural network within an interface to the conventional process graph (P-graph); and (vii) a highlight of the future of artificial intelligence and process engineering in biosystems through a commercially oriented multi-omics paradigm.
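Contribution (ii) — dimensionality reduction before multi-unit, multi-objective optimization — can be sketched with a minimal PCA via SVD. This is an illustrative stand-in under our own assumptions, not the thesis's implementation: many correlated process variables are compressed to a few scores before being handed to an optimizer.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (samples x process variables) onto the top-k
    principal components, computed from the SVD of the mean-centered data.
    Returns the k-dimensional scores and the fraction of variance kept."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                          # k-dimensional scores
    explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
    return Z, explained
```

An autoencoder, the other technique named in (ii), plays the same role but learns a nonlinear compression instead of this linear projection.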
