131

A new algorithm for minutiae extraction and matching in fingerprint

Noor, Azad January 2012
A novel algorithm for fingerprint template formation and matching in automatic fingerprint recognition has been developed. At present, the fingerprint is considered the dominant biometric trait among all biometrics due to its wide range of applications in security and access control. Most commercially established systems use a singularity point (SP), or ‘core’ point, for fingerprint indexing and template formation. The efficiency of these systems relies heavily on the detection of the core and the quality of the image itself. The presence of multiple SPs or the absence of a ‘core’ in the image can cause anomalies in the formation of the template and may result in a high False Acceptance Rate (FAR) or False Rejection Rate (FRR). The loss of actual minutiae or the appearance of new, spurious minutiae in the scanned image can also contribute to errors in the matching process. A more sophisticated algorithm is therefore necessary for the formation and matching of templates in order to achieve low FAR and FRR and to make identification more accurate. The novel algorithm presented here does not rely on any ‘core’ or SP, which makes the structure invariant with respect to global rotation and translation. Moreover, it does not need the orientation of the minutiae points, on which most established algorithms are based. The matching methodology is based on the local features of each minutia, such as the distances to its nearest neighbours and their internal angle. Using a publicly available fingerprint database, the algorithm has been evaluated and compared with other benchmark algorithms. The algorithm performed better than the others and achieved an equal error rate of 3.5%.
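The local-feature idea described above (characterising each minutia by the distances to its nearest neighbours and the angle they form, rather than by a global core point) can be sketched in a few lines. The snippet below is a minimal illustration, not the thesis's actual implementation; the use of exactly two neighbours, the descriptor layout, and the matching tolerances are assumptions.

```python
import numpy as np

def local_descriptor(minutiae):
    """For each minutia, build a rotation- and translation-invariant descriptor:
    the distances to its two nearest neighbours and the internal angle between
    them. `minutiae` is an (N, 2) array of (x, y) coordinates."""
    descriptors = []
    for i, p in enumerate(minutiae):
        d = np.linalg.norm(minutiae - p, axis=1)
        d[i] = np.inf                        # exclude the point itself
        n1, n2 = np.argsort(d)[:2]           # indices of the two nearest neighbours
        v1, v2 = minutiae[n1] - p, minutiae[n2] - p
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
        descriptors.append([d[n1], d[n2], angle])
    return np.array(descriptors)

def match_score(desc_a, desc_b, tol=(5.0, 5.0, 0.15)):
    """Fraction of minutiae in template A whose local descriptor has a close
    counterpart in template B (tolerances in pixels, pixels, radians)."""
    tol = np.asarray(tol)
    hits = sum(np.any(np.all(np.abs(desc_b - da) < tol, axis=1)) for da in desc_a)
    return hits / max(len(desc_a), 1)
```

Because the descriptor uses only relative distances and an internal angle, it is unaffected by global rotation and translation, which is the property the abstract emphasises.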
132

Detection of facial expressions based on time dependent morphological features

Bozed, Kenz Amhmed January 2011
Facial expression detection by a machine is a valuable topic for Human Computer Interaction and has been a study issue in behavioural science for some time. Recently, significant progress has been achieved in machine analysis of facial expressions, but there is still interest in studying the area in order to extend its applications. This work investigates the theoretical concepts behind facial expressions and leads to the proposal of new algorithms for face detection and facial feature localisation, and to the design and construction of a prototype system to test these algorithms. The overall goal and motivation of this work is to introduce vision-based techniques able to detect and recognise facial expressions. In this context, a facial expression prototype system is developed that accomplishes facial segmentation (i.e. face detection and facial feature localisation), facial feature extraction and feature classification. To detect a face, a new simplified algorithm is developed to detect and locate its presence against the background by exploiting skin colour properties, which are then used to distinguish between face and non-face regions. This allows facial parts to be extracted from a face using elliptical and box regions whose geometrical relationships are then utilised to determine the positions of the eyes and mouth through morphological operations. The means and standard deviations of the segmented facial parts are then computed and used as features for the face. For images belonging to the same expression class, these features are fed to the K-means algorithm to compute the centroid of each class; this is repeated for each expression class. The Euclidean distance is computed between each feature point and its cluster centre in the same expression class. This determines how close a facial expression is to a particular class, and the distances can be used as observation vectors for a Hidden Markov Model (HMM) classifier. Thus, an HMM is built to evaluate an expression of a subject as belonging to one of the six expression classes (Joy, Anger, Surprise, Sadness, Fear and Disgust) using the distance features. To evaluate the proposed classifier, experiments are conducted on new subjects using 100 video clips that contain a mixture of expressions. An average successful detection rate of 95.6% is measured over a total of 9142 frames contained in the video clips. The proposed prototype system processes facial feature parts and presents improved facial expression detection results compared with using whole facial features, as proposed by previous authors. This work has resulted in four contributions: the Ellipse Box Face Detection Algorithm (EBFDA), the Facial Features Distance Algorithm (FFDA), the facial features extraction process, and the facial features classification. These were tested and verified using the prototype system.
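The feature side of the pipeline above (per-part mean/standard-deviation features, per-class K-means centroids, Euclidean distances used as HMM observations) can be illustrated as follows. This is a hedged sketch only: the class count, feature layout and library choice are assumptions, and the HMM stage itself is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def part_features(parts):
    """Mean and standard deviation of each segmented facial part
    (e.g. eye and mouth regions), concatenated into one feature vector."""
    return np.concatenate([[p.mean(), p.std()] for p in parts])

def class_centroids(training_sets, n_clusters=1):
    """Fit K-means per expression class and keep the cluster centre(s).
    `training_sets` maps class labels (e.g. "Joy") to (n_samples, dim) arrays."""
    return {label: KMeans(n_clusters=n_clusters, n_init=10).fit(X).cluster_centers_
            for label, X in training_sets.items()}

def distance_observations(frame_features, centroids):
    """Euclidean distance of each frame's feature vector to each class centre;
    these distance vectors serve as the observation sequence fed to the HMM."""
    obs = []
    for f in frame_features:
        obs.append([np.min(np.linalg.norm(c - f, axis=1)) for c in centroids.values()])
    return np.array(obs)
```

The HMM classifier would then score each candidate class on the resulting observation sequence and pick the most likely of the six expressions.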
133

Myelin imaging in MRI using ultra-short echo time sequences

Soustelle, Lucas 16 May 2018
Non-invasive evaluation of white matter myelin in the central nervous system is essential for the monitoring of pathologies such as multiple sclerosis. Myelin is essentially composed of lipids and proteins: because of the numerous interactions between these macromolecules, the transverse relaxation times are very short (T2 < 1 ms), and their signals are undetectable using conventional sequences. Standard MRI methods for the characterization of myelin rely on modeling the interactions of aqueous protons with myelinated structures. Nonetheless, the selectivity and robustness of such indirect methods are questionable. Ultrashort echo time sequences (UTE, TE < 1 ms) may allow direct detection of the signals arising from the semi-solid spin pool of myelin. The main objective of this thesis is to develop such methods in order to generate a positive and selective contrast of myelin on a preclinical imaging system. Validation of each method was carried out on an ex vivo murine model by comparing healthy and demyelinated animals. Results show a significant selectivity of the UTE methods to demyelination, suggesting that the technique is promising for white matter myelin monitoring.
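Why conventional echo times miss the myelin macromolecular signal, while ultrashort ones may not, follows directly from exponential transverse decay, S(TE) = S0·exp(-TE/T2). The short numerical sketch below uses assumed, order-of-magnitude relaxation times purely for illustration.

```python
import numpy as np

T2_myelin = 0.3    # ms, semi-solid myelin pool (assumed order of magnitude)
T2_water = 80.0    # ms, typical white-matter water (assumed)

for TE in (0.05, 1.0, 10.0):   # ultrashort, short, conventional echo times in ms
    s_myelin = np.exp(-TE / T2_myelin)
    s_water = np.exp(-TE / T2_water)
    print(f"TE={TE:5.2f} ms: myelin signal {s_myelin:.3f}, water signal {s_water:.3f}")

# At TE = 10 ms essentially no myelin signal remains (exp(-33) is ~0),
# while at TE = 0.05 ms roughly 85% of it is still measurable.
```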
134

On guided model-based analysis for ear biometrics

Arbab-Zavar, Banafshe January 2009
Ears are a new biometric with a major advantage in that they appear to maintain their structure with increasing age. Current approaches have exploited 2D and 3D images of the ear in human identification. Contending that the ear is mainly a planar shape, we use 2D images, which are consistent with deployment in surveillance and other planar-image scenarios. So far, ear biometric approaches have mostly used general properties and the overall appearance of ear images in recognition, while the structure of the ear has not been discussed. In this thesis, we propose a new model-based approach to ear biometrics. Our model is a part-wise description of the ear structure. Using embryological evidence of ear development, we shall show that the ear is indeed a composite structure of individual components. Our model parts are derived by a stochastic clustering method applied to a set of scale-invariant features on a training set. We shall review different accounts of ear formation and consider research into congenital ear anomalies which discusses apportioning various components to the ear's complex structure. We demonstrate that our model description is in accordance with these accounts. We extend our model description by proposing a new wavelet-based analysis with the specific aim of capturing information in the ear's outer structures. We shall show that this section of the ear is not sufficiently explored by the model, while, given that it exhibits large variations in shape, intuitively it is significant to the recognition process. In this new analysis, log-Gabor filters exploit the frequency content of the ear's outer structures. In recognition, ears are automatically enrolled via our new enrolment algorithm, which is based on the elliptical shape of ears in head profile images. These samples are then recognized via the parts selected by the model. The incorporation of the wavelet-based analysis of the outer ear structures forms an extended or hybrid method. The performance is evaluated on test sets selected from the XM2VTS database. The results, both in modelling and recognition, show that our new model-based approach does indeed appear to be a promising new approach to ear biometrics. The recognition performance has improved notably with the incorporation of our new wavelet-based analysis. The main obstacle hindering the deployment of ear biometrics is the potential occlusion by hair. A model-based approach has a further attraction, since it has an advantage in handling noise and occlusion. Also, by localization, a wavelet can offer performance advantages when handling occluded data. A robust matching technique is also added to restrict the influence of corrupted wavelet projections. Furthermore, our automatic enrolment is tolerant of occlusion in ear samples. We shall present a thorough evaluation of performance under occlusion, using PCA and a robust PCA for comparison purposes. Our hybrid method obtains promising results recognizing occluded ears. Our results have confirmed the validity of this approach both in modelling and recognition. Our new hybrid method, guiding a model-based analysis via anatomical knowledge, does indeed appear to be a promising new approach to ear biometrics.
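The log-Gabor analysis mentioned for the ear's outer structures can be illustrated with a frequency-domain construction of the filter. The sketch below assumes a single scale, an isotropic (orientation-free) filter and standard parameter values; it illustrates the filter family, not the thesis's exact implementation.

```python
import numpy as np

def log_gabor_response(image, wavelength=8.0, sigma_on_f=0.55):
    """Filter a grayscale image with a single-scale, isotropic log-Gabor filter
    built in the frequency domain (log-Gabor filters have no DC component)."""
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    f0 = 1.0 / wavelength                   # centre frequency of the filter
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) /
                       (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0                   # force zero DC response
    response = np.fft.ifft2(np.fft.fft2(image) * log_gabor)
    return np.abs(response)                 # magnitude of the complex response
```

A bank of such filters at several wavelengths would capture the frequency content of the outer helix region that the extended, hybrid method exploits.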
135

Online 3D action recognition by analyzing the trajectories of the human skeleton

Boulahia, Said Yacine 11 July 2018
The objective of this thesis is to design an original, transparent approach able to detect in real time the occurrence of an action in an unsegmented stream, and ideally as early as possible. This work is part of a collaboration between two IRISA-Inria teams in Rennes, namely Intuidoc and MimeTIC. By taking advantage of the complementary expertise of the two research teams, we propose to reconsider the needs and difficulties encountered in modeling, recognizing and detecting a 3D action, proposing new solutions in the light of the advances made in 2D handwriting modeling. The contributions of this thesis are grouped into three main parts. In the first part, we propose a new approach to model and recognize a pre-segmented action. Indeed, it is first necessary to develop a representation able to characterize a given action as finely as possible in order to facilitate recognition. In the second part, we introduce an approach to recognize an action in an unsegmented stream. Finally, in the third part, we extend this last approach for the early characterization of an action with very little information. For each of these three problems, we have explicitly identified the difficulties to be considered in order to give a complete description of them and to design targeted solutions for each. The experimental results obtained on different action benchmarks attest to the validity of our approach. In addition, through collaborations that took place during the thesis, the developed approaches were deployed in three applications, including applications in animation and in dynamic hand gesture recognition.
136

Continuous regression: a functional regression approach to facial landmark tracking

Sánchez Lozano, Enrique January 2017
Facial Landmark Tracking (Face Tracking) is a key step for many Face Analysis systems, such as Face Recognition, Facial Expression Recognition, or Age and Gender Recognition, among others. The goal of Facial Landmark Tracking is to locate a sparse set of points defining a facial shape in a video sequence. These typically include the mouth, the eyes, the contour, or the nose tip. The state-of-the-art method for Face Tracking builds on Cascaded Regression, in which a set of linear regressors is used in a cascaded fashion, each receiving as input the output of the previous one, subsequently reducing the error with respect to the target locations. Despite its impressive results, Cascaded Regression suffers from several drawbacks, which are basically caused by the theoretical and practical implications of using Linear Regression. In the context of Face Alignment, Linear Regression is used to predict shape displacements from image features through a linear mapping. This linear mapping is learnt through the typical least-squares problem, in which a set of random perturbations is given. This means that, each time a new regressor is to be trained, Cascaded Regression needs to generate perturbations and apply the sampling again. Moreover, existing solutions are not capable of incorporating incremental learning in real time. It is well known that person-specific models perform better than generic ones, and thus the possibility of personalising generic models while tracking is ongoing is a desired property, yet to be addressed. This thesis proposes Continuous Regression, a Functional Regression solution to the least-squares problem, resulting in the first real-time incremental face tracker. Briefly speaking, Continuous Regression approximates the samples by an estimation based on a first-order Taylor expansion, yielding a closed-form solution for the infinite set of shape displacements. This way, it is possible to model the space of shape displacements as a continuum, without the need to use complex bases. Further, this thesis introduces a novel measure that allows Continuous Regression to be extended to spaces of correlated variables. This novel solution is incorporated into the Cascaded Regression framework, and its computational benefits for training under different configurations are shown. Then, it presents an approach for incremental learning within Cascaded Regression, and shows that its complexity allows for a real-time implementation. To the best of my knowledge, this is the first incremental face tracker shown to operate in real time. The tracker is tested on an extensive benchmark, attaining state-of-the-art results, thanks to its incremental learning capabilities.
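The sampling-based least-squares step that the abstract describes as the weak point of Cascaded Regression can be sketched as follows. This shows the conventional baseline (random perturbations plus a ridge-regularised linear regressor), not Continuous Regression itself; the feature extractor, perturbation model and regularisation value are placeholders.

```python
import numpy as np

def train_cascade_level(images, true_shapes, extract_features,
                        n_perturbations=10, noise=5.0, reg=1e-3):
    """One level of cascaded regression: sample random shape perturbations,
    then solve a ridge-regularised least-squares problem that maps features
    extracted at the perturbed shape to the displacement back to the truth."""
    X, Y = [], []
    for img, shape in zip(images, true_shapes):
        for _ in range(n_perturbations):
            perturbed = shape + np.random.randn(*shape.shape) * noise
            X.append(extract_features(img, perturbed))   # placeholder feature function
            Y.append((shape - perturbed).ravel())         # target displacement
    X, Y = np.asarray(X), np.asarray(Y)
    # Closed-form ridge solution: R = (X^T X + reg*I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

def apply_level(img, shape, R, extract_features):
    """Update the current shape estimate with the learnt linear regressor."""
    delta = extract_features(img, shape) @ R
    return shape + delta.reshape(shape.shape)
```

Continuous Regression replaces the explicit sampling loop with a closed-form integral over the (infinite) space of displacements, which is what removes the need to re-sample each time a regressor is retrained.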
137

Car make and model recognition under limited lighting conditions at night

Boonsim, Noppakun January 2016
Car make and model recognition (CMMR) has become an important part of intelligent transport systems. Information provided by CMMR can be utilized when licence plate numbers cannot be identified or fake number plates are used. CMMR can also be used when automatic identification of a certain model of vehicle by camera is required. The majority of existing CMMR methods are designed to be used only in the daytime, when most car features can be easily seen. Few methods have been developed to cope with the limited lighting conditions at night, where many vehicle features cannot be detected. This work identifies car make and model at night by using the available rear-view features. A binary classifier ensemble is presented, designed to identify a particular car model of interest among other models. A combination of salient geographical and shape features of the taillights and licence plate from the rear view is extracted and used in the recognition process. The majority vote of the individual classifiers (support vector machine, decision tree, and k-nearest neighbours) is applied to verify a target model in the classification process. Experiments on 100 car makes and models captured under limited lighting conditions at night, against about 400 other car models, show a high average classification accuracy of about 93%. This accuracy is slightly lower than that of a daytime technique reported at 98% on 21 car makes and models (Zhang, 2013). However, given the limited visibility of car appearance at night, the classification accuracy achieved by the technique used in this study is satisfactory.
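The binary one-vs-rest ensemble described above (SVM, decision tree and k-NN combined by majority vote) can be sketched with scikit-learn's VotingClassifier. The feature vectors (taillight and licence-plate geometry) are assumed to be precomputed; hyperparameters are illustrative rather than those of the thesis.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# X: rear-view geometric/shape features of taillights and licence plate (precomputed).
# y: 1 for the target car make/model, 0 for all other models (binary verification).
def build_cmmr_ensemble():
    """Hard-voting ensemble of the three classifiers named in the abstract."""
    return VotingClassifier(
        estimators=[
            ("svm", SVC()),
            ("tree", DecisionTreeClassifier()),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="hard",  # majority vote over the individual class predictions
    )

# Usage sketch:
# clf = build_cmmr_ensemble().fit(X_train, y_train)
# is_target_model = clf.predict(X_test)
```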
138

Video Analysis for Micro-Expression Spotting and Recognition

Lu, Hua 05 April 2018
In recent years, there has been an increasing interest in computer vision in automatic facial micro-expression algorithms. This has been driven by applications in high-stakes contexts such as criminal investigations, airport and mass transit checkpoints, counter-terrorism, and so on. Micro-expression approaches in the computer vision area consist of detecting and classifying micro-expressions from videos. Compared to a macro-expression, a micro-expression involves a rapid change which lasts less than half a second, and moreover, its subtle appearance in part of the face makes detection and recognition difficult to achieve. Effective facial features play a crucial role in micro-expression analysis. This thesis focuses on the feature extraction part, developing various feature extraction methods for micro-expression detection and recognition tasks. The detection of micro-expressions is the first step of their analysis; this thesis aims to spot micro-expressions in videos. Existing detection methods based on features such as local binary patterns, the histogram of oriented gradients, or optical flow suffer from high computational cost, which hinders real-time implementation. Thus, this thesis proposes a spotting method based on integral projection to address this problem. However, all the above features are extracted from cropped faces, which usually causes residual mis-registration between images. In order to deal with this issue, another detection method based on geometrical features is proposed. It involves the geometrical distances between facial key-points without the need to crop the face. This captures subtle geometric displacements along sequences and is shown to be suitable for different facial analysis tasks that require high computational speed. For micro-expression recognition, motion features based on optical flow have advantages in characterizing subtle movements on the face among the existing recognition features. It is nevertheless difficult for optical flow to determine the accurate location of each facial feature mapping between different images, even when the face images have been aligned. Such an issue may give rise to wrong orientation and magnitude estimates of the optical flow field. In order to address this problem, motion boundary histograms are considered. They can remove unexpected motions caused by residual mis-registration between images cropped from different frames, while the relative motion is still captured. Based on the motion boundary, a new descriptor, the Fusion Motion Boundary Histograms (FMBH), is introduced. This feature is generated by combining both the horizontal and the vertical components of the differential of the optical flow, as inspired by the motion boundary histograms. The main contributions of this thesis lie in the study of features for micro-expression spotting and recognition. Experiments on micro-expression databases, compared with state-of-the-art approaches, show the effectiveness of the presented contributions.
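The motion-boundary idea behind FMBH builds histograms from spatial derivatives of the optical flow rather than the flow itself, which suppresses near-constant motion caused by residual mis-registration. The sketch below, using OpenCV's Farnebäck flow and an assumed histogram binning, illustrates that principle; it is not the thesis's exact formulation.

```python
import cv2
import numpy as np

def motion_boundary_histogram(prev_gray, next_gray, n_bins=8):
    """Simple motion-boundary histogram between two grayscale (uint8) frames:
    dense optical flow -> spatial derivatives of each flow component ->
    orientation histograms weighted by derivative magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    hists = []
    for c in range(2):                       # horizontal and vertical flow components
        gx = cv2.Sobel(flow[..., c], cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(flow[..., c], cv2.CV_32F, 0, 1, ksize=3)
        mag = np.sqrt(gx**2 + gy**2)
        ang = np.arctan2(gy, gx) % (2 * np.pi)
        h, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
        hists.append(h / (h.sum() + 1e-12))  # per-component normalisation
    # "Fusion": combine the histograms of the two differential components.
    return np.concatenate(hists)
```

Constant displacement between the two crops differentiates to zero, so only genuine relative facial motion contributes to the histogram, which is the property the abstract relies on.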
139

Kernel-based learning on hierarchical image representations: applications to remote sensing data classification

Cui, Yanwei 04 July 2017
Hierarchical image representations have been widely used in the image classification context. Such representations are capable of modeling the content of an image through a tree structure. In this thesis, we investigate kernel-based strategies that make it possible to take input data in a structured form and to capture the topological patterns inside each structure by designing structured kernels. We develop a structured kernel dedicated to unordered tree and path (sequence of nodes) structures equipped with numerical features, called the Bag of Subpaths Kernel (BoSK). It is formed by summing up kernels computed on subpaths (a bag of all paths and single nodes) between two bags. The direct computation of BoSK yields a quadratic complexity w.r.t. both structure size (number of nodes) and amount of data (training size). We also propose a scalable version of BoSK (SBoSK for short), using the Random Fourier Features technique to map the structured data into a randomized finite-dimensional Euclidean space, where the inner product of the transformed feature vectors approximates BoSK. It brings down the complexity from quadratic to linear w.r.t. structure size and amount of data, making the kernel compliant with the large-scale machine-learning context. Thanks to (S)BoSK, we are able to learn from cross-scale patterns in hierarchical image representations. (S)BoSK operates on paths, thus allowing modeling the context of a pixel (leaf of the hierarchical representation) through its ancestor regions at multiple scales. Such a model is used within pixel-based image classification. (S)BoSK also works on trees, making the kernel able to capture the composition of an object (top of the hierarchical representation) and the topological relationships among its subparts. This strategy allows tile/sub-image classification. Further relying on (S)BoSK, we introduce a novel multi-source classification approach that performs classification directly from a hierarchical image representation built from two images of the same scene taken at different resolutions, possibly with different modalities. Evaluations on several publicly available remote sensing datasets illustrate the superiority of (S)BoSK compared to state-of-the-art methods in terms of classification accuracy, and experiments on an urban classification task show the effectiveness of the proposed multi-source classification approach.
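The two computational regimes described above, exact BoSK (sum of base kernels over all pairs of subpaths) and its Random Fourier Features approximation, can be sketched as follows. The snippet assumes fixed-length subpath feature vectors and a Gaussian base kernel; it illustrates the principle rather than reproducing the authors' code.

```python
import numpy as np

def bosk(bag_a, bag_b, gamma=0.5):
    """Exact Bag of Subpaths Kernel: sum of Gaussian kernels over all pairs of
    subpath feature vectors (rows of bag_a and bag_b) -- quadratic cost."""
    d2 = ((bag_a[:, None, :] - bag_b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2).sum()

def rff_map(bag, W, b):
    """Random Fourier Features embedding of one bag: map each subpath vector,
    then sum, so that a plain dot product between embeddings approximates bosk."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(bag @ W + b).sum(axis=0)

# Usage sketch: draw the random projection once, reuse it for every bag.
rng = np.random.default_rng(0)
dim, D, gamma = 6, 512, 0.5                  # subpath feature dim., number of random features
W = rng.normal(scale=np.sqrt(2 * gamma), size=(dim, D))
b = rng.uniform(0, 2 * np.pi, size=D)
# approx = rff_map(bag_a, W, b) @ rff_map(bag_b, W, b)   # approximates bosk(bag_a, bag_b)
```

Because each bag is embedded once and compared with a simple dot product, the cost becomes linear in the number of subpaths and in the number of training structures, which is the scalability argument made for SBoSK.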
140

Detection of compact forms in imaging: development of cumulative methods based on the study of gradients; applications to the agri-food industry

Denimal, Emmanuel 28 March 2018
Counting chambers (Malassez, Thoma, ...) are designed to allow the enumeration of cells under a microscope and the determination of their concentration thanks to the calibrated volume of the grid appearing in the microscopic image. Manual counting has major disadvantages: subjectivity, non-repeatability, and so on. Commercial automatic counting solutions exist, but their drawback is that they require a well-controlled environment that cannot be obtained in certain studies (e.g. glycerol greatly affects the quality of the images). The objective of the project is therefore twofold: a cell count that is both automated and robust enough to be feasible regardless of the acquisition conditions. In a first step, a method based on the Fourier transform was developed to detect, characterize and erase the grid of the counting chamber. The characteristics of the grid extracted by this method serve to determine a region of interest, and its erasure makes it easier to detect the cells to be counted. To perform the count, the main problem is to obtain a cell detection method robust enough to adapt to variable acquisition conditions. Methods based on gradient accumulation have been improved by the addition of structures allowing a finer detection of accumulation peaks. The proposed method allows accurate detection of cells and limits the appearance of false positives. The results obtained show that the combination of these two methods makes it possible to obtain a repeatable count that is representative of a consensus of manual counts made by operators.
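The gradient-accumulation idea used for cell detection (each strong edge pixel votes along its gradient direction, and roughly circular cells show up as accumulation peaks, much like a circular Hough transform) can be sketched as follows. The radius range, thresholds and peak-selection rule are assumptions, and the sketch omits the refined accumulation structures described in the abstract.

```python
import numpy as np
from scipy import ndimage

def gradient_accumulation(image, radii=(8, 20), mag_threshold=10.0):
    """Accumulate votes along gradient directions: each strong edge pixel casts
    votes at candidate centre positions r pixels away along its gradient.
    Peaks in the accumulator correspond to centres of roughly circular cells."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    ys, xs = np.nonzero(mag > mag_threshold)
    ux, uy = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]
    for r in range(radii[0], radii[1] + 1):
        cy = np.clip((ys + r * uy).astype(int), 0, acc.shape[0] - 1)
        cx = np.clip((xs + r * ux).astype(int), 0, acc.shape[1] - 1)
        np.add.at(acc, (cy, cx), mag[ys, xs])     # vote weighted by edge strength
    # Local maxima of the smoothed accumulator give candidate cell centres.
    acc = ndimage.gaussian_filter(acc, sigma=2)
    peaks = (acc == ndimage.maximum_filter(acc, size=15)) & \
            (acc > acc.mean() + 3 * acc.std())
    return np.argwhere(peaks)
```

Running this on an image from which the calibrated grid has already been erased (the first step described above) avoids the grid lines contaminating the accumulator.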
