  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Análise de cenas de pomares de laranjeiras através de segmentação de imagens e reconhecimento de padrões / Orange orchard scene analysis with image segmentation and pattern recognition

Cavani, Felipe Alves 05 November 2007
Os sistemas automáticos são normalmente empregados na indústria com o objetivo de otimizar a produção. Na agro-indústria, estes sistemas são usados com o mesmo propósito, sendo que dentre estes sistemas é possível destacar os que empregam a visão computacional, pois esta tem sido usada para inspeção de lavouras, colheita mecanizada, guiagem de veículos e robôs entre outras aplicações. No presente trabalho, técnicas de visão computacional foram utilizadas para segmentar e classificar elementos presentes em imagens obtidas de pomares de laranjeiras. Uma arquitetura modular foi utilizada na qual a imagem é segmentada automaticamente e, posteriormente, os segmentos são classificados. Nesta arquitetura, o algoritmo de segmentação e o classificador podem ser alterados sem prejudicar a flexibilidade do sistema implementado. Foram realizados experimentos com um banco de imagens composto por 658 imagens. Estas imagens foram obtidas sob diferentes condições de iluminação durante o período que as frutas estavam maduras. Estes experimentos foram realizados para avaliar, no contexto da arquitetura desenvolvida, o algoritmo de segmentação JSEG, vetores de características derivados dos espaços de cores RGB e HSV, além de três tipos de classificadores: bayesiano, classificador ingênuo de Bayes e classificador baseado no perceptron multicamadas. Finalmente, foram construídos os mapas de classes. As funções de distribuição de probabilidades foram estimadas com o algoritmo de Figueiredo-Jain. Dos resultados obtidos, deve-se destacar que o algoritmo de segmentação mostrou-se adequado aos propósitos deste trabalho e o classificador bayesiano mostrou-se mais prático que o classificador baseado no perceptron multicamadas. Por fim, a arquitetura mostrou-se adequada para o reconhecimento de cenas obtidas em pomares de laranjeiras. / Automation systems are commonly used in industry to optimize production, and in the agro-industry they serve the same purpose.
Among them are systems that use computer vision for crop inspection, mechanized harvesting, vehicle and robot guidance, and other applications. In the present work, computer vision techniques were used to segment and classify elements in images of orange orchards. A modular architecture was adopted in which the image is automatically segmented and the resulting segments are then classified; the segmentation algorithm and the classifier can be replaced without loss of flexibility. Experiments were carried out on a database of 658 images, acquired under different illumination conditions during the period when the fruits were ripe. These experiments evaluated, in the context of the developed architecture, the JSEG segmentation algorithm, feature vectors derived from the RGB and HSV color spaces, and three classifiers: a Bayes classifier, a naive Bayes classifier, and a multilayer perceptron classifier. Finally, class maps were constructed. The probability distribution functions were estimated with the Figueiredo-Jain algorithm. The results show that the segmentation algorithm is adequate for the purposes of this work, that the Bayes classifier is more practical than the multilayer perceptron classifier, and that the architecture is suitable for recognizing scenes from orange orchards.
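The abstract above evaluates Bayesian classifiers on color feature vectors extracted from image segments. As an illustration only, not the thesis's actual code, a minimal Gaussian Bayes classifier over toy HSV-like features might look like the sketch below; the class names, feature means and spreads are all invented for the example:

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Estimate a mean vector and covariance matrix per class label."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def bayes_classify(x, params, priors=None):
    """Assign x to the class with the highest Gaussian posterior."""
    best, best_score = None, -np.inf
    for c, (mu, cov) in params.items():
        d = x - mu
        # log N(x; mu, cov) up to a constant shared by all classes
        score = -0.5 * (d @ np.linalg.solve(cov, d)
                        + np.log(np.linalg.det(cov)))
        if priors is not None:
            score += np.log(priors[c])
        if score > best_score:
            best, best_score = c, score
    return best

# Toy HSV-like features: "fruit" segments are orange-ish, "leaf" green-ish.
rng = np.random.default_rng(0)
fruit = rng.normal([0.08, 0.9, 0.8], 0.03, size=(200, 3))   # hue ~ orange
leaf = rng.normal([0.30, 0.7, 0.4], 0.03, size=(200, 3))    # hue ~ green
X = np.vstack([fruit, leaf])
y = np.array(["fruit"] * 200 + ["leaf"] * 200)

params = fit_gaussian_classes(X, y)
print(bayes_classify(np.array([0.09, 0.88, 0.79]), params))  # prints: fruit
```

In a pipeline like the one described, each segment produced by JSEG would contribute one such feature vector, and the per-segment labels would then be assembled into a class map.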
372

Segmentation d'images par combinaison adaptative couleur-texture et classification de pixels. : Applications à la caractérisation de l'environnement de réception de signaux GNSS / Image segmentation by adaptive color/texture combination and classification of pixels : Application to characterization of the reception environment of GNSS signals

Attia, Dhouha 03 October 2013
En segmentation d’images, les informations de couleur et de texture sont très utilisées. Le premier apport de cette thèse se situe au niveau de l’utilisation conjointe de ces deux sources d’informations. Nous proposons alors une méthode de combinaison couleur/texture, adaptative et non paramétrique, qui consiste à combiner un (ou plus) gradient couleur et un (ou plus) gradient texture pour ensuite générer un gradient structurel utilisé comme image de potentiel dans l’algorithme de croissance de régions par LPE. L’originalité de notre méthode réside dans l’étude de la dispersion d’un nuage de points 3D dans l’espace, en utilisant une étude comparative des valeurs propres obtenues par une analyse des composantes principales de la matrice de covariance de ce nuage de points. L’approche de combinaison couleur/texture proposée est d’abord testée sur deux bases d’images, à savoir la base générique d’images couleur de BERKELEY et la base d’images de texture VISTEX. Cette thèse s’inscrivant dans le cadre des projets ViLoc (RFC) et CAPLOC (PREDIT), le deuxième apport de celle-ci se situe au niveau de la caractérisation de l’environnement de réception des signaux GNSS pour améliorer le calcul de la position d’un mobile en milieu urbain. Dans ce cadre, nous proposons d’exclure certains satellites (NLOS dont les signaux sont reçus par réflexion, voire totalement bloqués par les obstacles environnants) dans le calcul de la position d’un mobile. Deux approches de caractérisation, basées sur le traitement d’images, sont alors proposées. La première approche consiste à appliquer la méthode de combinaison couleur/texture proposée sur deux bases d’images réelles acquises en mobilité, à l’aide d’une caméra fisheye installée sur le toit du véhicule de laboratoire, suivie d’une classification binaire permettant d’obtenir les deux classes d’intérêt « ciel » (signaux LOS) et « non ciel » (signaux NLOS). 
Afin de satisfaire la contrainte temps réel exigée par le projet CAPLOC, nous avons proposé une deuxième approche basée sur une simplification de l’image couplée à une classification pixellaire adaptée. Le principe d’exclusion des satellites NLOS permet d’améliorer la précision de la position estimée, mais uniquement lorsque les satellites LOS (dont les signaux sont reçus de manière directe) sont géométriquement bien distribués dans l’espace. Dans le but de prendre en compte cette connaissance relative à la distribution des satellites, et par conséquent, améliorer la précision de localisation, nous avons proposé une nouvelle stratégie pour l’estimation de position, basée sur l’exclusion des satellites NLOS (identifiés par le traitement d’images), conditionnée par l’information DOP, contenue dans les trames GPS. / Color and texture are two main sources of information used in image segmentation. The first contribution of this thesis focuses on the joint use of color and texture information by developing a robust, non-parametric method combining color and texture gradients. The proposed color/texture combination defines a structural gradient that is used as the potential image in a watershed algorithm. The originality of the proposed method lies in studying a 3D point cloud generated by color and texture descriptors, followed by an eigenvalue analysis. The color/texture combination method is first tested and compared with well-known methods from the literature on two databases (the generic BERKELEY database of color images and the VISTEX database of texture images). The applied part of the thesis falls within the ViLoc project (funded by the RFC regional council) and the CAPLOC project (funded by PREDIT). In this framework, the second contribution of the thesis concerns the characterization of the environment of GNSS signal reception.
In this part, we aim to improve the estimated position of a mobile receiver in urban environments by excluding NLOS satellites (for which the signal is masked or received after reflections on obstacles surrounding the antenna). To this end, we propose two image-processing approaches to characterize the environment of GNSS signal reception. The first consists in applying the proposed color/texture combination to images acquired in mobility with a fisheye camera located on the roof of a vehicle and oriented toward the sky; the segmentation step is followed by a binary classification that extracts the two classes « sky » (LOS signals) and « not sky » (NLOS signals). The second approach is proposed to satisfy the real-time constraint required by the application and is based on image simplification and adaptive pixel classification. Excluding NLOS satellites improves the precision of the estimated position, but only when the LOS satellites (for which the signals are received directly) are geometrically well distributed in space. To take this knowledge of the satellite distribution into account and further increase positioning precision, we propose a new position-estimation strategy based on the exclusion of NLOS satellites (identified by the image-processing step), conditioned on the DOP information provided in the GPS data.
373

Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique / Moving object segmentation by RGB-D fusion and color constancy

Murgia, Julian 24 May 2016
Cette thèse s'inscrit dans un cadre de vidéo-surveillance, et s'intéresse plus précisément à la détection robuste d'objets mobiles dans une séquence d'images. Une bonne détection d'objets mobiles est un prérequis indispensable à tout traitement appliqué à ces objets dans de nombreuses applications telles que le suivi de voitures ou de personnes, le comptage des passagers de transports en commun, la détection de situations dangereuses dans des environnements spécifiques (passages à niveau, passages piéton, carrefours, etc.), ou encore le contrôle de véhicules autonomes. Un très grand nombre de ces applications utilise un système de vision par ordinateur. La fiabilité de ces systèmes demande une robustesse importante face à des conditions parfois difficiles souvent causées par les conditions d'illumination (jour/nuit, ombres portées), les conditions météorologiques (pluie, vent, neige) ainsi que la topologie même de la scène observée (occultations). Les travaux présentés dans cette thèse visent à améliorer la qualité de détection d'objets mobiles en milieu intérieur ou extérieur, et à tout moment de la journée. Pour ce faire, nous avons proposé trois stratégies combinables : i) l'utilisation d'invariants colorimétriques et/ou d'espaces de représentation couleur présentant des propriétés invariantes ; ii) l'utilisation d'une caméra stéréoscopique et d'une caméra active Microsoft Kinect en plus de la caméra couleur afin de reconstruire l'environnement 3D partiel de la scène, et de fournir une dimension supplémentaire, à savoir une information de profondeur, à l'algorithme de détection d'objets mobiles pour la caractérisation des pixels ; iii) la proposition d'un nouvel algorithme de fusion basé sur la logique floue permettant de combiner les informations de couleur et de profondeur tout en accordant une certaine marge d'incertitude quant à l'appartenance du pixel au fond ou à un objet mobile. 
/ This PhD thesis falls within the scope of video surveillance, and more precisely focuses on the detection of moving objects in image sequences. In many applications, good detection of moving objects is an indispensable prerequisite to any processing applied to these objects, such as people or car tracking, passenger counting, detection of dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or control of autonomous vehicles. The reliability of computer-vision-based systems requires robustness against difficult conditions often caused by lighting conditions (day/night, shadows), weather conditions (rain, wind, snow...) and the topology of the observed scene (occlusions...). The work detailed in this thesis aims at reducing the impact of illumination conditions by improving the quality of the detection of moving objects in indoor or outdoor environments and at any time of the day. Thus, we propose three strategies that work in combination to improve the detection of moving objects: i) using colorimetric invariants and/or color spaces that provide invariant properties; ii) using a passive stereoscopic camera (in outdoor environments) and the Microsoft Kinect active camera (in indoor environments) in order to partially reconstruct the 3D environment, providing an additional dimension (depth information) to the background/foreground subtraction algorithm for pixel characterization; iii) a new fusion algorithm based on fuzzy logic in order to combine color and depth information with a certain level of uncertainty in the pixel classification.
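The fuzzy-logic fusion of color and depth cues in strategy iii) can be illustrated with standard t-norm and s-norm operators. The membership functions, scales and threshold below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def fuzzy_and(a, b):
    """Product t-norm: both sources must agree for strong membership."""
    return a * b

def fuzzy_or(a, b):
    """Probabilistic sum s-norm: either source may raise membership."""
    return a + b - a * b

def fuse_foreground(color_diff, depth_diff, c_scale=30.0, d_scale=0.3):
    """Map per-pixel color/depth differences from a background model to
    fuzzy 'foreground' memberships, then fuse them.

    The scales here are illustrative, not values from the thesis.
    """
    mu_color = 1.0 - np.exp(-color_diff / c_scale)
    mu_depth = 1.0 - np.exp(-depth_diff / d_scale)
    # OR keeps a pixel foreground if either cue is confident, which
    # tolerates missing depth (e.g. sensor holes) or shadows in color.
    return fuzzy_or(mu_color, mu_depth)

color_diff = np.array([2.0, 80.0, 5.0])   # per-pixel |I - B| in gray levels
depth_diff = np.array([0.0, 0.9, 0.8])    # per-pixel |Z - Zb| in metres
m = fuse_foreground(color_diff, depth_diff)
labels = m > 0.5                          # defuzzify with a 0.5 cut
print(labels)                             # pixels 1 and 2 are foreground
```

The margin of uncertainty mentioned in the abstract corresponds to memberships near the 0.5 cut, where neither cue is decisive.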
374

Sky detection in images for solar exposure prediction

Laungrungthip, Nuchjira January 2008
This project describes a technique for segmenting the regions of sky in an image from the remainder of the image. This segmentation technique is part of a method for predicting the solar exposure at a location of interest from a set of photographs. Given the latitude and longitude of the position, and the direction and field of view of the camera, it is possible to calculate the position of the sun in the image at a particular time on a particular day. If that position is in a sky region of the image, then the location will be exposed to the sun at that time. Critical to the success of this method for determining solar exposure is the image processing used to separate the sky from the rest of the image. This work is concerned with finding a technique which can do this for images taken under different weather conditions. The general approach to separating the sky from the rest of the image is to use the Canny edge detector and a morphological closing algorithm to find the regions in the image. The brightness and area of each region are then used to determine which regions are sky. The FloodFill algorithm is applied to identify all pixels in each sky region. An extensive empirical study is used to find a set of threshold values for the Canny edge detector, applied to the blue colour channel, which allow successful identification of the sky regions in a wide range of images. Tests using different camera filters show that they do not usefully increase the contrast between the sky and the rest of the image when a standard compact camera is used. The work reported in this thesis shows that this approach of finding edges to identify possible sky regions works successfully on a wide range of images, although there will always be situations, such as when the image is taken directly into the sun, where manual adjustment of the identified regions may be required.
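The geometric step described above, locating the sun in the image from its azimuth/elevation and the camera's direction and field of view, can be sketched with a simple linear field-of-view model. This is a rough stand-in for a calibrated camera projection; the angles and image size below are made up:

```python
def sun_pixel(sun_az, sun_el, cam_az, cam_el, hfov, vfov, width, height):
    """Map the sun's azimuth/elevation (degrees) to image coordinates
    under a simple linear field-of-view model. Returns None when the
    sun falls outside the frame."""
    dx = (sun_az - cam_az + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    dy = sun_el - cam_el
    if abs(dx) > hfov / 2 or abs(dy) > vfov / 2:
        return None
    u = (dx / hfov + 0.5) * (width - 1)
    v = (0.5 - dy / vfov) * (height - 1)             # image y grows downward
    return round(u), round(v)

# Sun 10 deg right of and 5 deg above the optical axis of a 60x40 deg view.
print(sun_pixel(190.0, 35.0, 180.0, 30.0, 60.0, 40.0, 640, 480))  # (426, 180)
```

Once the sky mask from the segmentation step is available, exposure at a given time reduces to testing whether the returned pixel lies inside a sky region.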
375

Road Extraction From High Resolution Satellite Images Using Adaptive Boosting With Multi-resolution Analysis

Cinar, Umut 01 September 2012
Road extraction from satellite or aerial imagery is a popular topic in remote sensing, and many road extraction algorithms have been suggested by various researchers. However, the need for reliable remotely sensed road information persists, as no sufficiently robust road extraction algorithm exists yet. In this study, we explore the road extraction problem taking advantage of multi-resolution analysis and adaptive-boosting-based classifiers. That is, we propose a new road extraction algorithm exploiting both spectral and structural features of high-resolution multi-spectral satellite images. The proposed model is composed of three major components: feature extraction, classification and road detection. Well-known spectral band ratios are utilized to represent the reflectance properties of the data, whereas a segmentation operation followed by an elongatedness scoring technique provides a structural evaluation of the road parts within the multi-resolution analysis framework. The extracted features are fed into the Adaptive Boosting (AdaBoost) learning procedure, which iteratively combines decision trees to acquire a classifier with high accuracy. The road network is identified from the probability map constructed by the classifier suggested by AdaBoost. The algorithm is designed to be modular and extensible: new road descriptor features can be easily integrated into the existing model. The empirical evaluation of the proposed algorithm suggests that it is capable of extracting the majority of the road network and shows promising performance.
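The elongatedness scoring of segments can be approximated from the second central moments of a region's pixel coordinates. The sketch below uses the eigenvalue ratio of the coordinate covariance as the score; the exact measure used in the thesis may differ, and the test regions are synthetic:

```python
import numpy as np

def elongatedness(mask):
    """Score how road-like (elongated) a binary segment is, using the
    eigenvalues of the second central moments of its pixel coordinates.
    Returns a major/minor axis-length ratio: a compact blob scores ~1,
    a thin strip scores much higher."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    cov = np.cov(pts, rowvar=False)
    l_max, l_min = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return np.sqrt(l_max / max(l_min, 1e-9))

blob = np.zeros((40, 40), bool); blob[10:20, 10:20] = True   # 10x10 square
strip = np.zeros((40, 40), bool); strip[18:21, 2:38] = True  # 3x36 strip
print(elongatedness(blob) < 2, elongatedness(strip) > 5)     # True True
```

In a pipeline like the one described, such a score would join the spectral band ratios as one more feature handed to the boosted decision trees.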
376

Automatic Bayesian Segmentation Of Human Facial Tissue Using 3D MR-CT Fusion By Incorporating Models Of Measurement Blurring, Noise And Partial Volume

Sener, Emre 01 September 2012
Segmentation of the human head in medical images is an important process in a wide array of applications such as diagnosis, facial surgery planning, prosthesis design, and forensic identification. In this study, a new Bayesian method for the segmentation of facial tissues is presented. Segmentation classes include muscle, bone, fat, air and skin. The method incorporates a model to account for image blurring during data acquisition, a prior that helps to reduce noise, and a partial volume model. Regularization based on isotropic and directional Markov Random Field priors is integrated into the algorithm, and their effects on segmentation accuracy are investigated. The Bayesian model is solved iteratively, yielding tissue class labels at every voxel of an image. Sub-methods are generated as variations of the main method by switching combinations of the models on or off. Testing of the sub-methods is performed on two patients using single-modality three-dimensional (3D) images as well as registered multi-modal 3D images (Magnetic Resonance and Computerized Tomography). Numerical, visual and statistical analyses of the methods are conducted. Improved segmentation accuracy is obtained through the use of the proposed image models and multi-modal data. The methods are also compared with the Level Set method and an adaptive Bayesian segmentation method proposed in a previous study.
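A toy version of MAP labelling with an MRF prior is iterated conditional modes (ICM) on a 1-D signal: a Gaussian likelihood per class plus a Potts penalty that favours agreement with neighbouring labels. This is a didactic stand-in (1-D, hand-picked parameters), not the thesis's 3-D solver or its blurring/partial-volume models:

```python
import numpy as np

def icm_segment(signal, class_means, sigma=1.0, beta=2.0, n_iter=10):
    """Iterated conditional modes for a 1-D MAP labelling: Gaussian
    likelihood per class plus an isotropic Potts (label-agreement)
    prior over neighbours."""
    labels = np.argmin(
        (signal[:, None] - class_means[None, :]) ** 2, axis=1)  # ML init
    for _ in range(n_iter):
        for i in range(len(signal)):
            costs = (signal[i] - class_means) ** 2 / (2 * sigma ** 2)
            for j in (i - 1, i + 1):                  # neighbour penalty
                if 0 <= j < len(signal):
                    costs = costs + beta * (np.arange(len(class_means))
                                            != labels[j])
            labels[i] = np.argmin(costs)
    return labels

# Noisy two-class signal with one outlier voxel the prior should smooth away.
signal = np.array([0.1, -0.2, 0.0, 2.2, 0.3, 2.1, 1.9, 2.0])
means = np.array([0.0, 2.0])
print(icm_segment(signal, means))  # the 2.2 outlier at index 3 becomes class 0
```

Raising `beta` strengthens the smoothing effect of the prior, which is the trade-off the abstract's regularization experiments investigate in 3-D.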
377

Contrast-enhanced magnetic resonance liver image registration, segmentation, and feature analysis for liver disease diagnosis

Oh, Ji Hun 13 November 2012
The global objectives of this research are to develop liver-specific magnetic resonance (MR) image registration and segmentation algorithms and to find highly correlated MR imaging features that help automatically score the severity of chronic liver disease (CLD). For a concise analysis of liver disease, time sequences of 3-D MR images should be preprocessed through image registration to compensate for patient motion, respiration, or tissue motion. To register contrast-enhanced MR image volume sequences, we propose a novel version of the demons algorithm based on a bi-directional local correlation coefficient (Bi-LCC) scheme. This scheme improves the speed at which the iteration converges to the optimum state and achieves higher accuracy. Furthermore, the simple and parallelizable hierarchy of the Bi-LCC demons can be implemented on a graphics processing unit (GPU) using OpenCL. To automate the segmentation of the liver parenchyma regions, an edge function-scaled region-based active contour (ESRAC), which hybridizes gradient and regional statistical information with approximate partitions of the liver, is proposed. Next, a significant purpose in grading liver disease is to assess the level of remaining liver function and to estimate regional liver function. On motion-corrected and segmented liver parenchyma regions, for quantitative analysis of the hepatic extraction of a liver-specific MRI contrast agent, the liver signal intensity change is evaluated from the hepatobiliary phases (3-20 minutes), and parenchymal texture features are derived from the equilibrium (3-minute) phase. To build a classifier using texture features, a set of training inputs and outputs, scored for malignancy by experts, trains a supervised learning algorithm using a multivariate normal distribution model and a maximum a posteriori (MAP) decision rule. We validate the classifier by assessing its prediction accuracy on a set of testing data.
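The local correlation coefficient that drives the Bi-LCC demons force can be illustrated in 1-D: unlike a plain intensity difference, LCC is invariant to a local affine rescaling of intensities, which is why it suits contrast-enhanced sequences where brightness changes between phases. The window size and signals below are arbitrary, and the real method works on 3-D volumes:

```python
import numpy as np

def local_correlation(f, m, i, radius):
    """Local correlation coefficient between fixed image f and moving
    image m over a window centred at index i (1-D sketch)."""
    lo, hi = max(0, i - radius), min(len(f), i + radius + 1)
    a, b = f[lo:hi], m[lo:hi]
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

x = np.linspace(0, 2 * np.pi, 100)
fixed = np.sin(x)
moved = fixed * 2.0 + 1.0      # intensity-rescaled but aligned copy: LCC ~ 1
shifted = np.roll(fixed, 25)   # quarter-period misalignment: LCC drops
print(round(local_correlation(fixed, moved, 50, 10), 3))  # 1.0
```

A demons-style update would evaluate such a similarity (and its gradient) around every voxel, in both directions for the bi-directional variant, to build the displacement field.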
378

Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images

Youmaran, Richard 02 February 2011
Biometric systems allow the identification of individuals based on physiological or behavioral characteristics, such as voice, handprint, iris or facial features. The use of face and iris recognition as a way to authenticate users' identities has been a topic of research for years. Present iris recognition systems require that subjects stand close (<2 m) to the imaging camera and look at it for a period of about three seconds until the data are captured. This cooperative behavior is required in order to capture quality images for accurate recognition, but it restricts the range of practical applications of iris recognition, especially in uncontrolled environments where subjects, such as criminals and terrorists, cannot be expected to cooperate. For this reason, this thesis develops a collection of methods to deal with low quality face and iris images that can be applied to face and iris recognition in a non-cooperative environment. This thesis makes the following main contributions: I. For eye and face tracking in low quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection and eye tracking. This is accomplished using traditional image-based passive techniques, such as shape information of the eye, and active methods which exploit the spectral properties of the pupil under IR illumination. The developed method is also tested on underexposed images where the subject shows large head movements. II. For iris recognition, a new technique is developed for accurate iris segmentation in low quality images where a major portion of the iris is occluded. Most existing methods perform generally quite well but tend to overestimate the occluded regions, and thus lose iris information that could be used for identification. This information loss is potentially important in the covert surveillance applications we consider in this thesis.
Once the iris region is properly segmented using the developed method, the biometric feature information is calculated for the iris region using the relative entropy technique. Iris biometric feature information is calculated using two different feature decomposition algorithms based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA). III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality resulting from image degradations. A definition of biometric feature information is introduced and an algorithm to measure it is proposed, based on a set of population and individual biometric features, as measured by a biometric algorithm under test. Examples of its application are shown for two different face recognition algorithms based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
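The relative-entropy measure of biometric feature information mentioned above has a closed form when the population and individual feature distributions are modelled as Gaussians. The dimensions and parameters in this sketch are invented for illustration:

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Relative entropy D(p || q) in nats between two Gaussians: here p
    is an individual's feature distribution and q the population's, so
    the value measures how distinctive the individual's features are."""
    k = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    d = mu_q - mu_p
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + d @ cov_q_inv @ d
                  - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

# An individual identical to the population carries zero information...
mu = np.zeros(2)
cov = np.eye(2)
print(gaussian_kl(mu, cov, mu, cov))  # 0.0
# ...while a tight individual distribution far from the population mean
# carries a lot (divide by ln 2 to express the value in bits).
print(gaussian_kl(np.array([3.0, 0.0]), 0.1 * np.eye(2), mu, cov))
```

Under this view, image degradations that blur an individual's features toward the population distribution directly shrink the measured biometric information, which is the quality effect contribution III quantifies.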
380

Dense Depth Map Estimation For Object Segmentation In Multi-view Video

Cigla, Cevahir 01 August 2007
In this thesis, novel approaches for dense depth field estimation and object segmentation from mono, stereo and multiple views are presented. In the first stage, a novel graph-theoretic color segmentation algorithm is proposed, in which the popular Normalized Cuts [6] segmentation algorithm is improved with some modifications to its graph structure. Segmentation is obtained by recursive partitioning of the weighted graph. Simulation results comparing the proposed segmentation scheme with some well-known segmentation methods, such as Recursive Shortest Spanning Tree [3], Mean-Shift [4] and the conventional Normalized Cuts, show clear improvements over these traditional methods. The proposed region-based approach is also utilized during the dense depth map estimation step, based on a novel modified plane- and angle-sweeping strategy. In the proposed dense depth estimation technique, the whole scene is assumed to be region-wise planar, and 3D models of these plane patches are estimated by a greedy-search algorithm that also considers the visibility constraint. In order to refine the depth maps and relax the planarity assumption, two refinement techniques, based on region splitting and on pixel-based optimization via Belief Propagation [32], are applied at the final step. Finally, the image segmentation algorithm is extended to object segmentation in multi-view video using the additional depth and optical flow information. Optical flow is estimated via two different methods, the KLT tracker and region-based block matching, and these methods are compared. The experimental results indicate an improvement in segmentation performance from the use of depth and motion information.
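The normalized-cut criterion that the improved segmentation builds on can be evaluated directly for a candidate bipartition of a small weighted graph; partitions that sever only weak edges between well-connected groups score low. The toy graph and weights below are invented for the sketch:

```python
import numpy as np

def ncut_value(W, mask):
    """Normalized-cut cost of splitting a weighted graph into the vertex
    set `mask` and its complement:
    Ncut = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    A, B = mask, ~mask
    cut = W[np.ix_(A, B)].sum()
    assoc_a = W[A, :].sum()
    assoc_b = W[B, :].sum()
    return cut / assoc_a + cut / assoc_b

# Two tight 3-node clusters joined by one weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1   # weak bridge between the clusters

good = np.array([True, True, True, False, False, False])
bad = np.array([True, False, False, False, False, False])
print(ncut_value(W, good) < ncut_value(W, bad))   # True
```

Recursive partitioning, as in the abstract, repeatedly finds a low-Ncut split (in practice via the eigenvectors of the normalized graph Laplacian) and recurses on each side until no good split remains.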
