11.
Development of a vision-based local positioning system for weed detection. Fontaine, Veronique, 18 May 2004.
Herbicide applications could be reduced if they were targeted. Targeting the applications requires prior identification and quantification of the weed population, a task that could be done by a weed scout robot. The ability to position a camera over the inter-row space of densely seeded crops will help to simplify the task of automatically quantifying weed infestations. As part of the development of an autonomous weed scout, a vision-based local positioning system for weed detection was developed and tested in a laboratory setting. Four line-detection algorithms were tested, and a robotic positioning device, or XYZtheta-table, was developed and tested.

The line-detection algorithms were based on stripe analysis, blob analysis, linear regression, and the Hough transform, respectively; the last two also included an edge-detection step. Images of parallel line patterns representing crop rows were collected at different angles, with and without weed-simulating noise, and processed by the four programs. The ability of the programs to determine the angle of the rows and the location of an inter-row space centreline was evaluated in a laboratory setting. All algorithms behaved approximately the same when determining the row angle in the noise-free images, with a mean error of 0.5°. In the same situation, all algorithms could find the centreline of an inter-row space within 2.7 mm. The mean errors generally increased when noise was added to the images, up to 1.1° and 8.5 mm for the linear regression algorithm. Specific spatial dispersions of the weeds were identified as possible causes of the increased error in noisy images. Because of its insensitivity to noise, the stripe analysis algorithm was considered the best overall. The fastest program was the blob analysis algorithm, with a mean processing time of 0.35 s per image. Future work involves evaluating the line-detection algorithms on field images.

The XYZtheta-table consisted of rails allowing movement of a camera in the three orthogonal directions and of a rotational table that could rotate the camera about a vertical axis. The ability of the XYZtheta-table to accurately move the camera within the XY-space and rotate it to a desired angle was evaluated in a laboratory setting. The XYZtheta-table was able to move the camera to within 7 mm of a target and to rotate it with a mean error of 0.07°. The positioning accuracy could be improved by simple mechanical modifications to the XYZtheta-table.
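Of the four approaches, the Hough-transform variant translates most directly into code. The sketch below shows how a dominant row angle could be estimated from an edge map; it is a minimal illustration assuming OpenCV, not the thesis' implementation, and the Canny and vote thresholds are arbitrary choices.

```python
import cv2
import numpy as np

def estimate_row_angle(gray, votes=100):
    """Estimate the dominant crop-row angle (degrees) from a grayscale
    image using edge detection followed by a standard Hough transform."""
    edges = cv2.Canny(gray, 50, 150)                      # edge-detection step
    lines = cv2.HoughLines(edges, 1, np.pi / 180, votes)  # (rho, theta) pairs
    if lines is None:
        return None
    thetas = lines[:, 0, 1]
    # the median over all detected lines is robust to weed-simulating noise
    return float(np.degrees(np.median(thetas)))
```

The inter-row centreline could then be read off the rho values of the lines whose theta agrees with the estimated angle.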
12.
On-line non-intrusive partial discharge detection in aeronautical systems. Abadie, Cédric, 3 April 2017.
The development of power electronics in recent years has led to increased power density and decreased cost of pulse-width modulation (PWM) voltage inverters. These developments have expanded the use of power converters for variable-speed drive applications, which enabled the concept of the "more electric aircraft": replacing one of the energy carriers (pneumatic or hydraulic) with electrical energy. However, the deployment of electrical energy has increased the onboard voltage, which leads to premature aging of onboard electrical equipment. The applied PWM voltage consists of pulse trains. With the application of these pulses, the voltage is no longer distributed homogeneously along the winding, and large potential differences can appear between turns of the same phase, or even between two phases of the winding. Another important factor is the type of motor winding used in industry. Random winding is the most common technique for low-voltage motors because of its low cost; the risk it introduces is that the first and one of the last turns of the first coil may end up facing one another. In that case, up to 80% of the voltage is supported by a few tens of microns of enamel, and existing insulation systems are not designed to withstand such severe stresses. The use of long cables connecting the inverter to the motor can also cause significant overvoltages at the motor terminals, because the cable behaves as a transmission line whose impedance is not matched to the motor windings. Moreover, these large potential differences, combined with the low pressures found in the depressurized areas of the aircraft, can lead to partial discharges: electrical discharges that partially short-circuit the gap between two conductors. Many detection methods are well established for AC and DC voltages; detection under PWM-like voltage in low-voltage motors is much more complex, however, because the partial discharge signals are embedded in the electromagnetic noise generated by the switching. The aim of this thesis is therefore to develop a detection method and a filtering method enabling non-intrusive, on-line partial discharge detection in the aeronautical field, in order to qualify the electrical insulation systems used in aircraft.
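As context for the detection problem: PD pulses are much faster than the switching transients that mask them, so a common prototyping approach is band-pass filtering followed by robust thresholding. The sketch below illustrates only that generic approach, not the filtering method developed in the thesis; the pass band and threshold factor are assumed values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_pd_pulses(signal, fs, band=(5e6, 20e6), k_sigma=5.0):
    """Illustrative PD pulse detector: band-pass away the lower-frequency
    PWM switching noise, then threshold the residual with a robust scale."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)                    # zero-phase band-pass
    thr = k_sigma * np.median(np.abs(filtered)) / 0.6745   # robust noise estimate
    return np.flatnonzero(np.abs(filtered) > thr)          # sample indices of pulses
```

In practice the pass band would be chosen from the measured spectra of the switching noise and of reference discharges.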
13.
Schistosoma mansoni egg detection from contour detection. Edwin Delgado Huaynalaya, 25 April 2012.
Schistosoma mansoni is one of the parasites that cause schistosomiasis. According to the Brazilian Ministry of Health, several million people in the country are currently affected by it. One way of diagnosing schistosomiasis is egg identification in microscope slides of fecal material. This task is extremely time-consuming and tiring, especially in cases of low endemicity, when only a few eggs are present. In such cases, a computational approach to help detect the eggs would greatly facilitate diagnosis. Schistosome eggs have an oval shape, a translucent membrane and a spike, and their color is slightly yellowish. However, not all of these features are observed in every egg, and some are visible only at an adequate microscopic magnification. Furthermore, the visual aspect of the fecal material varies widely from person to person in terms of color and the presence of various artifacts (such as particles not disintegrated by the digestive system), making egg detection difficult. In this work we investigate, in particular, the problem of detecting the lines that delimit the contours of the eggs. We propose a method comprising two phases. The first phase detects line-like structures using morphological operators and is divided into three steps: (i) line enhancement, (ii) line detection, and (iii) refinement of the result to eliminate line segments that are not of interest. The output of this phase is a set of line segments. The second phase detects subsets of line segments arranged in an elliptical shape, using an algorithm based on the Hough transform. The detected ellipses are strong candidates for the contours of S. mansoni eggs. Experimental results show that the proposed approach has the potential to be used as a component of a computer-aided egg detection system.
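The two-phase structure (morphological line enhancement, then elliptical grouping via a Hough-based algorithm) can be approximated with off-the-shelf building blocks. The sketch below uses scikit-image's ellipse Hough transform as a stand-in for the thesis' algorithm; the structuring-element size, Canny sigma and ellipse size bounds are illustrative assumptions.

```python
from skimage.morphology import white_tophat, disk
from skimage.feature import canny
from skimage.transform import hough_ellipse

def find_egg_candidates(gray, min_size=40, max_size=120):
    """Illustrative two-phase pipeline: enhance thin bright structures,
    detect edges, then vote for ellipses over the edge map."""
    enhanced = white_tophat(gray, disk(5))   # phase 1: line/ridge enhancement
    edges = canny(enhanced, sigma=2.0)       # binary edge/line map
    # phase 2: returns records (accumulator, yc, xc, a, b, orientation)
    candidates = hough_ellipse(edges, accuracy=20, threshold=50,
                               min_size=min_size, max_size=max_size)
    candidates.sort(order='accumulator')     # strongest candidates last
    return candidates[::-1]
```

The top-ranked ellipses would then be passed to a later verification stage, since fecal-material artifacts can also produce elliptical edge groups.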
14.
High Voltage Power Line Detection Based on Intersection Point Algorithm. Du, Zijun, 24 September 2018.
This thesis introduces the challenge of high voltage power line detection and discusses existing methods for solving similar problems. To obtain better results, a new set of methods is developed for detecting and tracking high voltage power lines in the context of power line inspection using an Unmanned Aerial Vehicle (UAV). With the fast development of automated technology, real-time detection and tracking of high voltage power lines can be performed by a UAV instead of by human operators. The main task is to establish the usability of the Intersection Point Algorithm for detecting the power lines in the preprocessed image.

The preprocessed image contains many lines in different directions, which cross each other many times. To eliminate the false lines, some invariant features are needed for the Intersection Point Algorithm. Intersection points that lie inside a small region and have very similar directions can be considered candidates for the intersection point of the power lines. Therefore, three methods are considered for grouping the points that conform to these features; there should be only one concentrated area, representing both the power lines and their heading direction. Method one selects points based on the distance between them. Method two selects the overlap region of circles based on the number of overlapping layers. Method three searches the overlapping layers using a sliding window.
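A grouping step of this kind is easy to prototype. The sketch below computes pairwise line intersections and then applies a distance-based grouping in the spirit of method one; it is an illustrative reconstruction, not the thesis' algorithm, and the grouping radius is an arbitrary choice.

```python
import numpy as np

def line_intersections(lines):
    """All pairwise intersections of lines given in Hesse normal form
    (rho, theta): x*cos(theta) + y*sin(theta) = rho."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i], lines[j]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) > 1e-9:          # skip near-parallel pairs
                pts.append(np.linalg.solve(A, [r1, r2]))
    return np.array(pts)

def densest_cluster(points, radius=10.0):
    """Method one, sketched: keep the point whose neighbourhood (within
    `radius` pixels) is most populated, together with its neighbours."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (d < radius).sum(axis=1)
    return points[d[np.argmax(counts)] < radius]
```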
Results in Project APOLI are evaluated using the Hit, Miss, Fail standard.
15.
Segmentation of heterogeneous document images: an approach based on machine learning, connected components analysis, and texture analysis. Bonakdar Sakhi, Omid, 6 December 2012.
Document page segmentation is one of the most crucial steps in document image analysis. Ideally, it aims to recover the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although several attempts at achieving correct page segmentation have been made to date, many difficulties remain. The leader of the project in the framework of which this PhD work was funded (*) uses a complete processing chain in which page segmentation mistakes are corrected manually by human operators. Aside from the costs this represents, it demands the tuning of a large number of parameters, and some segmentation mistakes still escape the vigilance of the operators. Current automated page segmentation methods perform acceptably on clean printed documents, but they often fail to separate regions in handwritten documents, when the document layout is loosely defined, or when side notes are present on the page. Moreover, tables and advertisements bring additional challenges for region segmentation algorithms. Our method addresses these problems. It is divided into four parts:

1. Unlike most popular page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, to separate columns of text in a two-dimensional conditional random field framework.
3. A text line detection method based on piecewise projection profiles is then applied to detect text lines with respect to text region boundaries (sketched below).
4. Finally, a new paragraph detection method, trained on common paragraph models, is applied to the text lines to find paragraphs based on the geometric appearance of the text lines and their indentation.

Our contribution over existing work lies essentially in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve difficult cases. Indeed, we demonstrate a number of improvements: on separating text columns when one is situated very close to another; on preventing the contents of a table cell from being merged with the contents of adjacent cells; and on preventing regions inside a frame from being merged with surrounding text regions, especially side notes, even when the latter are written in a font similar to that of the text body. Quantitative assessment, and comparison of the performance of our method with competing algorithms using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent.

(*) This PhD thesis was funded by the Conseil Général de Seine-Saint-Denis, through the FUI6 project Demat-Factory, led by Safig SA.
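Step 3 is the most mechanical part of the pipeline and is easy to sketch. The following is a minimal illustration of piecewise projection-profile line detection on a binarized text region (assuming ink pixels are nonzero); the slice count and minimum line height are arbitrary, and the thesis' actual implementation may differ.

```python
import numpy as np

def slice_text_lines(binary, n_slices=5, min_height=3):
    """Illustrative piecewise projection-profile line detection: split the
    region into vertical slices and, in each slice, read text lines off the
    runs of non-empty rows in the horizontal projection profile."""
    h, w = binary.shape
    result = []
    for s in range(n_slices):
        strip = binary[:, s * w // n_slices:(s + 1) * w // n_slices]
        ink = np.pad(strip.sum(axis=1) > 0, 1)       # pad so every run is closed
        edges = np.flatnonzero(np.diff(ink.astype(int)))
        starts, ends = edges[0::2], edges[1::2]      # paired run boundaries
        result.append([(a, b) for a, b in zip(starts, ends)
                       if b - a >= min_height])      # (start, end) rows per line
    return result
```

Working per slice rather than on the whole-region profile is what lets the method follow lines that are skewed or locally curved.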
16.
Straight line detection revisited: through the eyes of the spider. Costa, Felipe Miney Gonçalves da, 6 July 1999.
Vision is a process that involves a large amount of information, which needs to be optimized somehow to allow efficient processing. Most of the visual information is contained in the contours of an image, and a considerable reduction in the amount of data can be achieved by finding and processing these contours. The next step in compressing the visual data is to find straight segments and represent the contours in terms of these entities. Straight-line segment detection is performed by the human visual system, as well as by other creatures. Among terrestrial invertebrates, the best visual system is that of the Salticidae family of spiders, also known as jumping spiders, and it presents characteristics that facilitate the detection of straight lines. This work proposes a new method for straight-line detection, based on the visual system of the jumping spiders and using linear windows, which approaches the problem from an optimization point of view as yet unexplored in the literature. Detection is carried out in a parameter space, using the Downhill Simplex function maximization algorithm. The method takes into account the discrete nature of both the image and the parameter space, and this work includes a detailed study of these discrete spaces. To deal adequately with the peculiarities of the problem, the method incorporates features such as simulated annealing and adaptive window width. The performance of the method depends on a set of parameters whose behavior is hard to predict, and an adequate set was chosen using a genetic algorithm. The work also includes the design and construction of a prototype for testing the method. The results were analyzed with respect to the precision of the line detection, the processing time, and the movement of the linear windows, which reflects the effort spent searching for lines.
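The core idea, treating line detection as maximization of an image response over an (angle, offset) parameter space with the Downhill Simplex algorithm, can be sketched as follows. This simplified illustration uses scipy's Nelder-Mead implementation and omits the simulated annealing and adaptive-width features described above; the linear-window response function is an assumed stand-in.

```python
import numpy as np
from scipy.optimize import minimize

def line_response(params, image, length=200, samples=200):
    """Sum of pixel intensities along a linear window through the image,
    parameterized by (theta, rho); higher values mean more line evidence."""
    theta, rho = params
    n = np.array([np.cos(theta), np.sin(theta)])    # line normal
    t = np.array([-np.sin(theta), np.cos(theta)])   # line direction
    center = np.array(image.shape[::-1]) / 2.0
    pts = center + rho * n + np.outer(np.linspace(-length / 2, length / 2, samples), t)
    xs = np.clip(pts[:, 0].astype(int), 0, image.shape[1] - 1)
    ys = np.clip(pts[:, 1].astype(int), 0, image.shape[0] - 1)
    return float(image[ys, xs].sum())

def find_line(edge_image, theta0=0.0, rho0=0.0):
    """Downhill Simplex search for the (theta, rho) maximizing the
    response; negated because scipy minimizes."""
    res = minimize(lambda p: -line_response(p, edge_image),
                   x0=[theta0, rho0], method="Nelder-Mead")
    return res.x
```

Because the response surface over a discrete image is full of local maxima, a practical version needs the annealing-style restarts the thesis describes.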
17.
Modeling of anisotropic textures by the monogenic wavelet transform. Polisano, Kévin, 12 December 2017.
Texture analysis is a component of image processing that attracts considerable interest because of the variety of applications it covers. In medical imaging, signals recorded as images, such as bone X-rays or mammograms, show a highly irregular micro-architecture, which invites us to consider the formation of these textures as the realization of a random field. Following Benoit Mandelbrot's pioneering work, many models derived from the fractional Brownian field have been proposed to characterize the fractal behavior of images and to synthesize textures with prescribed roughness. In particular, the estimation of the parameters of these models has made it possible to link the fractal dimension of images to the detection of alterations of the bone structure such as those observed in osteoporosis. More recently, other models, known as anisotropic random fields, have been used to describe phenomena with preferred directions, for example to detect abnormalities in mammary tissue.

This thesis deals with the development of new models of anisotropic fields that allow the anisotropy of textures to be controlled locally. A first contribution consisted in defining a generalized anisotropic fractional Brownian field (GAFBF), and a second model based on a deformation of elementary fields (WAFBF), both allowing the local orientation of the texture to be prescribed. The local structure of these fields is studied using the formalism of tangent fields. Simulation procedures are implemented to observe their behavior concretely and to serve as a benchmark for validating anisotropy detection tools. Indeed, the investigation of local orientation and anisotropy in the context of textures still raises many mathematical problems, starting with the rigorous definition of this orientation. Our second contribution lies in this perspective. By transposing orientation detection methods based on the monogenic wavelet transform, we were able, for a wide class of random fields, to define an intrinsic notion of orientation. In particular, the study of the two new anisotropic field models introduced previously allowed this notion of orientation to be formally linked to the anisotropy parameters of those models. Connections with directional statistics are also established, in order to characterize the probability distribution of orientation estimators.

Finally, a third part of the thesis is devoted to the problem of detecting lines in images. The underlying model is a superposition of noisy diffracted lines (i.e., lines convolved with a blur kernel), whose position and intensity parameters must be recovered with sub-pixel precision. To this end we developed a method based on the super-resolution paradigm. Reformulating the problem in terms of 1-D atoms leads to a constrained optimization problem and makes it possible to reconstruct the lines with the desired precision. The algorithms used to perform the minimization belong to the family of proximal algorithms. The formalization of this inverse problem and its resolution constitute a proof of concept, opening perspectives for a revisited Hough transform for the 'continuous' detection of lines in images.
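As background to the fractional-field models discussed above, textures with prescribed roughness are commonly synthesized by spectral filtering: white noise is shaped by a power-law spectrum whose exponent is tied to the Hurst parameter H, then inverse-transformed. The sketch below generates an approximately isotropic fractional Brownian field; it is a standard textbook construction, not the GAFBF/WAFBF simulation procedures developed in the thesis.

```python
import numpy as np

def fractional_brownian_field(n, hurst, seed=0):
    """Approximate spectral synthesis of an n x n isotropic fractional
    Brownian field: shape complex white noise by an amplitude spectrum
    |f|^-(hurst+1), i.e. a power spectrum |f|^-(2*hurst+2)."""
    rng = np.random.default_rng(seed)
    fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # placeholder to avoid divide-by-zero
    amplitude = radius ** (-(hurst + 1.0))
    amplitude[0, 0] = 0.0                    # remove the DC (mean) component
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    field = np.real(np.fft.ifft2(amplitude * noise))
    return field - field.mean()              # roughness decreases as hurst grows
```

Making the exponent, and hence the roughness and orientation, vary with position is precisely what the anisotropic models above formalize.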
19.
Point and Line Parameterizations Using Parallel Coordinates for Hough Transform. Juránková, Markéta, date unknown.
This doctoral thesis focuses on the use of parallel coordinates for the parameterization of lines and points. A parallel coordinate system has coordinate axes that are parallel to each other. In parallel coordinates, a point in two-dimensional space maps to a line, and a line maps to a point. This can be exploited for the Hough transform, a method in which points of interest vote in a parameter space for a given hypothesis. Parameterization using parallel coordinates requires only the rasterization of line segments and is therefore very fast and accurate. In the thesis, this parameterization is demonstrated on the detection of matrix codes and vanishing points.
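The point-to-line mapping behind this speed is simple to sketch: with two parallel axes a distance d apart, an image point (x, y) votes by rasterizing the segment from (0, x) to (d, y), and collinear points produce segments that meet in a single accumulator cell (in the "straight" half of the space; a full implementation also needs a twisted space for steep lines). The following is a simplified illustration, not the thesis' code, and the accumulator dimensions are arbitrary.

```python
import numpy as np

def pclines_accumulate(points, size=256, d=256):
    """Simplified parallel-coordinate Hough accumulator: a point (x, y)
    becomes the segment from (0, x) to (d, y), so collinear image points
    yield segments intersecting in one accumulator cell."""
    acc = np.zeros((size, d + 1), dtype=np.int32)  # rows: v value, cols: u in [0, d]
    u = np.arange(d + 1)
    for x, y in points:
        v = x + (y - x) * u / d                    # interpolate along the segment
        rows = np.clip(np.round(v).astype(int), 0, size - 1)
        acc[rows, u] += 1                          # rasterize the vote segment
    return acc
```

Peaks in `acc` then correspond to lines in the image, exactly as rho-theta peaks do in the classical Hough transform, but the voting needs only integer segment rasterization.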
20.
Stratified-medium sound speed profiling for CPWC ultrasound imaging. D'Souza, Derrell, 13 July 2020.
Coherent plane-wave compounding (CPWC) ultrasound is an important modality enabling ultrafast biomedical imaging. To perform CPWC image reconstruction for a stratified (horizontally layered) medium, one needs to know how the speed of sound (SOS) varies with propagation depth. Incorrect sound speed and layer thickness assumptions can cause focusing errors, degraded spatial resolution, and significant geometrical distortions, resulting in poor image reconstruction. We aim to determine the speed of sound and thickness of each horizontal layer in order to locate the recorded reflection events at their true positions within the medium. Our CPWC image reconstruction process is based on phase-shift migration (PSM), which requires the user to specify the speed of sound and thickness of each layer in advance. Prior to performing phase-shift migration (one layer at a time, starting from the surface), we first estimate the speed of sound of a given layer using a cosine similarity metric, based on data obtained by a multi-element transducer array for two different plane-wave emission angles. We then use this speed estimate to identify the layer thickness via end-of-layer boundary detection. A low-cost alternative that reconstructs images with fewer phase shifts (i.e., fewer complex multiplications) by applying a spectral energy threshold is also proposed in this thesis. Our evaluation results, based on CPWC imaging simulation of a three-layer medium, show that our sound speed and layer thickness estimates are within 4% of their true values (i.e., those used to generate the simulated data). We also confirmed the accuracy of the speed and layer thickness estimation separately, using two experimental datasets representing two special cases. For speed estimation, we used a CPWC imaging dataset for a constant-speed (i.e., single-layer) medium, yielding estimates within 1% of their true values. For layer thickness estimation, we used a monostatic (i.e., single-element) synthetic-aperture (SA) imaging dataset of the three-layer medium, also yielding estimates within 1% of their true values. For the low-cost alternative, our evaluation showed a 93% reduction in complex multiplications for the three-layer CPWC imaging dataset and 76% for the three-layer monostatic SA imaging dataset, producing images nearly identical to those obtained using the original PSM methods.
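Phase-shift migration extrapolates the recorded wavefield downward one layer at a time in the frequency-wavenumber domain, which is where the per-layer speed and thickness estimates above are consumed. A minimal sketch of one downward-continuation step follows; the sign convention and the handling of evanescent energy are illustrative assumptions, not the thesis' exact formulation.

```python
import numpy as np

def psm_extrapolate(data_fk, freqs, kx, c, dz):
    """One phase-shift migration step: continue the f-k spectrum of the
    recorded wavefield downward through a homogeneous layer of sound
    speed c and thickness dz by multiplying with exp(-1j * kz * dz)."""
    w = 2 * np.pi * freqs[:, None]              # angular frequency (rows)
    kz_sq = (w / c) ** 2 - kx[None, :] ** 2     # vertical wavenumber squared
    propagating = kz_sq > 0                     # drop evanescent components
    kz = np.sqrt(np.where(propagating, kz_sq, 0.0))
    return data_fk * np.exp(-1j * kz * dz) * propagating
```

Under this sketch, the low-cost variant would skip the multiplications for (f, kx) cells whose spectral energy falls below a threshold, which is consistent with the reported 93% and 76% savings in complex multiplications.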