51 |
Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile / Three-dimensional image analysis of high resolution confocal microscopy data of the Drosophila melanogaster brain. Murtin, Chloé Isabelle, 20 September 2016.
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the possible depth of imaging is limited not only by the working distance of the microscope objective but also by image degradation caused by attenuation and diffraction of the light passing through the sample, affecting both the excitation laser beam and the light emitted from the fluorescence-labelled objects. Several workaround techniques have been employed to overcome this problem, such as recording images from both sides of the sample, or progressively cutting off the top of the sample as acquisition proceeds. The different views must then be combined into a single volume. However, a straightforward concatenation is often not possible, because small movements of the sample during the acquisition procedure cause misalignments, not only in translation along the x, y and z axes but also in rotation around those axes, making the fusion of the multiple images difficult.
To address this problem we implemented a new algorithm called 2D-SIFT-in-3D-Space, which uses SIFT (Scale Invariant Feature Transform) to achieve a robust three-dimensional registration of two large image stacks. Our method registers the images by separately correcting the translations and rotations about the three axes, through the extraction and matching of stable features in their two-dimensional cross-sections. To evaluate the registration quality, we also developed a laser scanning microscopy image simulator that generates a mock pair of 3D stacks, in which one stack is derived from the other by rotation through known angles and by filtering with noise of a known level. For a precise and natural-looking concatenation of the two images, we further developed a module that progressively compensates brightness and contrast as a function of the distance to the sample surface. These tools were successfully used to generate three-dimensional high-resolution images of the brain of the fly Drosophila melanogaster, in particular of its octopaminergic and dopaminergic neurons and their synapses. These monoamine neurons are particularly important for brain function, and a precise and systematic study of their network and connectivity is necessary to understand their interactions. Although an evolution of their connectivity over time could not be demonstrated through the analysis of pre-synaptic site distributions, the study suggests that the inactivation of one of these neuron types triggers drastic changes in the neural network.
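As a hedged illustration of the slice-wise idea (Python with OpenCV ≥ 4.4; not the thesis implementation), the sketch below estimates the in-plane rotation and translation between two corresponding 2D cross-sections by SIFT matching; applying such estimates to cross-sections taken along each axis in turn is the kind of per-axis correction the method performs.

```python
# Sketch: estimate the in-plane motion between two 8-bit grayscale
# cross-sections via SIFT matching and a RANSAC transform fit.
import cv2
import numpy as np

def register_slices(slice_a, slice_b):
    """Estimate a similarity transform mapping slice_b onto slice_a."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(slice_a, None)
    kp_b, des_b = sift.detectAndCompute(slice_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_b[m.queryIdx].pt for m in good])
    dst = np.float32([kp_a[m.trainIdx].pt for m in good])
    # Rotation + translation (+ uniform scale), with RANSAC outlier rejection.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # in-plane rotation
    return M, angle
```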
|
52 |
Adaptive Vision Based Scene Registration for Outdoor Augmented Reality. Catchpole, Jason James, January 2008.
Augmented Reality (AR) involves adding virtual content into real scenes, viewed using a head-mounted display or another display type. In order to place content into the user's view of a scene, the user's position and orientation relative to the scene, commonly referred to as their pose, must be determined accurately. This allows the objects to be placed in the correct positions and to remain there when the user moves or the scene changes. It is achieved by tracking the user in relation to their environment using a variety of technologies. One technology which has proven to provide accurate results is computer vision. Computer vision involves a computer analysing images and achieving an understanding of them. This may mean locating objects such as faces in the images or, in the case of AR, determining the pose of the user. One of the ultimate goals of AR systems is to be capable of operating under any condition. For example, a computer vision system must be robust across a range of different scene types and under unpredictable environmental conditions due to variable illumination and weather. The majority of the existing literature tests algorithms under the assumption of ideal or 'normal' imaging conditions. To ensure robustness under as many circumstances as possible, it is also important to evaluate the systems under adverse conditions. This thesis analyses the effects that variable illumination has on computer vision algorithms. To enable this analysis, test data is required that isolates weather and illumination effects from other factors, such as changes in viewpoint, that would bias the results. A new dataset is presented which also allows controlled viewpoint differences in the presence of weather and illumination changes. This is achieved by capturing video from a camera undergoing a repeatable motion sequence. Ground truth data is stored per frame, allowing images from the same position under differing environmental conditions to be easily extracted from the videos. An in-depth analysis of six detection algorithms and five matching techniques demonstrates the impact that non-uniform illumination changes can have on vision algorithms. Specifically, shadows can degrade performance, reduce confidence in the system, decrease reliability, or even completely prevent successful operation. An investigation into approaches to improve performance yields techniques that can help reduce the impact of shadows. A novel algorithm is presented that merges reference data captured at different times, resulting in reference data with minimal shadow effects. This can significantly improve performance and reliability when operating on images containing shadow effects. These advances improve the robustness of computer vision systems and extend the range of conditions in which they can operate, increasing the usefulness of the algorithms and of the AR systems that employ them.
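The reference-merging step invites a compact sketch: given several already-registered reference images of the same view captured at different times, a per-pixel median suppresses shadows, since a shadow rarely darkens the same pixel in most captures. This is one plausible merging rule written for illustration, not the algorithm from the thesis.

```python
# Hedged sketch: merge aligned reference images taken at different times
# of day so that transient shadow effects are suppressed.
import numpy as np

def merge_references(registered_views):
    """registered_views: list of HxW(x3) uint8 arrays, already aligned."""
    stack = np.stack(registered_views).astype(np.float32)
    # Shadows fall on different pixels at different capture times, so the
    # per-pixel median keeps the unshadowed appearance at each location.
    return np.median(stack, axis=0).astype(np.uint8)
```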
|
53 |
Exploration visuelle d'environnement intérieur par détection et modélisation d'objets saillants / Visual exploration of indoor environments through detection and modelling of salient objects. Cottret, Maxime, 26 October 2007.
A companion robot must understand its owner's living environment in order to carry out a request such as "Go fetch a glass from the kitchen" with a high level of autonomy. To do so, the robot must acquire a set of representations suited to the different tasks it has to perform. In this thesis, we propose to learn online an appearance model of local structures that the user can later name. This then makes it possible to characterize a topological place (e.g. the kitchen) by a set of local structures or objects found there (refrigerator, coffee maker, sink, ...). To discover these local structures, we propose a cognitive approach exploiting pre-attentive and attentive visual processes, implemented on a multi-focal sensory system. The role of the pre-attentive process is to detect regions of interest that are assumed to contain discriminative visual information: based on the Itti and Koch saliency model, it detects these regions in a saliency map built from images acquired with a wide-field camera; a detected region is then tracked over a few images in order to roughly estimate the size and 3D position of the corresponding local structure in the environment. The attentive process focuses on the region of interest: the goal is to characterize each local structure by an appearance model in the form of view-patch-aspect associative memories. Interest points are extracted from each image and characterized by a local appearance descriptor. After this exploration phase, the user can annotate the model by segmenting the local structures into objects, naming these objects and grouping them into areas (kitchen, ...). This appearance model is then exploited for the recognition and coarse localization of the objects and places perceived by the robot.
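As a rough sketch of a pre-attentive stage, the snippet below uses the spectral-residual saliency detector from opencv-contrib-python; this is a simpler model than Itti and Koch's and stands in here only to illustrate the detect-regions-then-attend pipeline. The threshold and region filtering are illustrative choices.

```python
# Sketch: compute a saliency map and return candidate regions of
# interest as bounding boxes for a subsequent attentive stage.
import cv2

def detect_salient_regions(frame, thresh=0.6):
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(frame)  # float map in [0, 1]
    mask = (sal_map > thresh).astype("uint8") * 255
    # Connected components give the candidate regions to fixate on.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # stats[i] holds (x, y, width, height, area); index 0 is background.
    return [tuple(stats[i][:4]) for i in range(1, n)]
```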
|
54 |
Simultaneous Localization And Mapping in a Marine Environment using Radar Images. Svensson, Henrik, January 2009.
Simultaneous Localization And Mapping (SLAM) is the process of mapping an unknown environment while simultaneously keeping track of the position within this map. In this thesis, SLAM is performed in a marine environment using radar images only. A SLAM solution is presented. It uses SIFT to compare pairs of radar images. From these comparisons, measurements of the boat's movements are obtained. A type of Kalman filter (the Exactly Sparse Delayed-state Filter, ESDF) uses these measurements to estimate the trajectory of the boat. Once the trajectory is estimated, the radar images are joined together in order to create a map. The presented solution is tested and the estimated trajectory is compared to GPS data. Results show that the method performs well, at least for shorter periods of time.
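The overall flow can be caricatured in a few lines of Python: pairwise image registration yields boat-frame motion increments, which a filter fuses into a trajectory. The sketch below chains the increments by plain dead reckoning; the ESDF used in the thesis additionally maintains a sparse information-form estimate over all past poses, which is omitted here.

```python
# Hedged sketch: integrate SIFT-derived motion measurements into a
# trajectory by dead reckoning (a stand-in for the ESDF update).
import numpy as np

def integrate(motions):
    """motions: list of (dx, dy, dtheta) boat-frame increments."""
    poses = [(0.0, 0.0, 0.0)]  # (x, y, heading), starting at the origin
    for dx, dy, dth in motions:
        x, y, th = poses[-1]
        # Rotate each increment into the world frame before accumulating.
        wx = x + dx * np.cos(th) - dy * np.sin(th)
        wy = y + dx * np.sin(th) + dy * np.cos(th)
        poses.append((wx, wy, th + dth))
    return poses
```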
|
55 |
Automated Crowd Behavior Analysis For Video Surveillance Applications. Guler, Puren, 01 September 2012.
Automated analysis of crowd behavior using surveillance videos is an important issue for public security, as it allows detection of dangerous crowds and of where they are headed. Computer vision based crowd analysis algorithms can be divided into three groups: people counting, people tracking and crowd behavior analysis. This thesis deals with the last of these. In the literature, there are two types of approaches to the behavior understanding problem: analyzing the behaviors of individuals in a crowd and using this knowledge to make deductions about the crowd's behavior (object based), and analyzing the crowd as a whole (holistic). In this work, a holistic approach is used to develop real-time abnormality detection in crowds, using scale invariant feature transform (SIFT) based features and unsupervised machine learning techniques.
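To make the holistic idea concrete, here is a hedged sketch: each frame is summarized by one global descriptor, footage of normal behaviour is clustered without labels, and frames far from every cluster are flagged. The frame descriptor (a mean of SIFT descriptors) and the threshold are illustrative placeholders, not the thesis's features.

```python
# Sketch: unsupervised "normal" model via k-means over holistic frame
# descriptors; a frame far from all clusters is declared abnormal.
import cv2
import numpy as np

def frame_descriptor(gray):
    """Crude holistic descriptor: the mean SIFT descriptor of a frame."""
    _, des = cv2.SIFT_create().detectAndCompute(gray, None)
    return des.mean(axis=0) if des is not None else np.zeros(128, np.float32)

def fit_normal_model(normal_frames, k=5):
    X = np.float32([frame_descriptor(f) for f in normal_frames])
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, _, centers = cv2.kmeans(X, k, None, crit, 5, cv2.KMEANS_PP_CENTERS)
    return centers

def is_abnormal(gray, centers, thresh=150.0):
    d = np.linalg.norm(centers - frame_descriptor(gray), axis=1).min()
    return d > thresh  # far from every "normal" cluster -> raise an alarm
```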
|
57 |
High-speed View Matching using Region Descriptors / Vymatchning i realtid med region-deskriptorer. Lind, Anders, January 2010.
This thesis treats topics within the area of object recognition. A real-time view matching method has been developed to compute the transformation between two different images of the same scene. The method uses a color-based region detector called MSCR, and affine transformations of these regions, to create affine-invariant patches that are used as input to the SIFT algorithm. A parallel method to compute the SIFT descriptor has been created with relaxed constraints, so that the descriptor size and the number of histogram bins can be adjusted. Additionally, a matching step to deduce correspondences and a parallel RANSAC method have been created to estimate the transformation underlying these correspondences. To achieve real-time performance, the implementation targets the parallel nature of the GPU, with CUDA as the programming language. Focus has been placed on the architecture of the GPU to find the best way to parallelize the different processing steps. CUDA has also been combined with OpenGL to make use of the hardware-accelerated anisotropic sampling method for affine transformations of regions. Parts of the implementation can also be used individually, either from Matlab or by using the provided C++ library directly. The method was evaluated in terms of accuracy and speed. Our algorithm shows similar or better accuracy at finding correspondences than standard SIFT when the 3D geometry changes are large, but slightly worse results on images of flat surfaces.
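A condensed CPU sketch of the region-to-patch step (Python/OpenCV): MSER stands in for MSCR, which OpenCV does not ship, and each region's fitted ellipse is warped onto a canonical upright patch so that a descriptor computed on it is invariant to the affine deformation. Patch size and filtering thresholds are illustrative.

```python
# Sketch: normalize detected regions to fixed-size patches, then compute
# one SIFT descriptor per patch (stand-in for the relaxed GPU descriptor).
import cv2
import numpy as np

def affine_patches(gray, size=41, min_pts=20):
    regions, _ = cv2.MSER_create().detectRegions(gray)
    patches = []
    for pts in regions:
        if len(pts) < min_pts:
            continue
        (cx, cy), (w, h), ang = cv2.fitEllipse(pts.astype(np.float32))
        t = np.deg2rad(ang)
        # Rotate the ellipse axes onto x/y, then scale them to a circle.
        R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
        A = np.diag([size / w, size / h]) @ R
        b = np.array([size / 2, size / 2]) - A @ np.array([cx, cy])
        M = np.hstack([A, b[:, None]]).astype(np.float32)
        patches.append(cv2.warpAffine(gray, M, (size, size)))
    return patches

def patch_descriptor(patch):
    # One keypoint at the patch centre, sized to cover most of the patch.
    kp = [cv2.KeyPoint(patch.shape[1] / 2.0, patch.shape[0] / 2.0,
                       patch.shape[0] * 0.8)]
    _, des = cv2.SIFT_create().compute(patch, kp)
    return des
```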
|
58 |
Efficient Detection And Tracking Of Salient Regions For Visual Processing On Mobile Platforms. Serhat, Gulhan, 01 October 2009.
Visual attention is an interesting concept whose application areas in image processing and computer vision are constantly widening. The main idea of visual attention is to find the locations in an image that are visually attractive. In this thesis, such visually attractive regions are extracted and tracked in video sequences coming from the vision systems of mobile platforms. First, the salient regions are extracted in each frame and a feature vector is constructed for each one. Then the Scale Invariant Feature Transform (SIFT) is applied only to the salient regions, to extract more stable features. Tracking is achieved by matching the salient regions of consecutive frames through comparison of their feature vectors. The SIFT points of matched salient regions are then matched in turn, to calculate the shift values for the matched pairs. Limiting the SIFT computation to only the salient regions results in significantly reduced computational cost. Moreover, the salient region detection procedure is itself limited to predetermined regions throughout the video sequence in order to increase efficiency, and the visual attention channels are limited to the most dominant features of the regions. Experimental results comparing the algorithm outputs with ground-truth data reveal that the proposed algorithm offers good tracking performance at acceptable computational cost. Promising results are obtained even with blurred video sequences, typical of ground vehicles and robots, and in uncontrolled environments.
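The cost-saving trick of running SIFT only where attention points can be sketched directly with OpenCV's mask argument; the saliency model below (spectral residual, from opencv-contrib-python) is a stand-in for the attention channels used in the thesis.

```python
# Sketch: build a binary mask from a saliency map and restrict SIFT
# detection to it, so descriptors are computed only in salient regions.
import cv2
import numpy as np

def sift_on_salient(gray, thresh=0.5):
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = sal.computeSaliency(gray)
    mask = (sal_map > thresh).astype(np.uint8) * 255
    # The mask confines detection to salient pixels, cutting compute cost.
    return cv2.SIFT_create().detectAndCompute(gray, mask)
```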
|
59 |
Image Annotation With Semi-supervised Clustering. Sayar, Ahmet, 01 December 2009.
Image annotation is defined as generating a set of textual words for a given image, learning from the available training data consisting of visual image content and annotation words.
Methods developed for image annotation usually make use of region clustering algorithms to quantize the visual information. Visual codebooks are generated from the region clusters of low level visual features. These codebooks are then matched with the words of the text document related to the image in various ways.
In this thesis, we propose a new image annotation technique which improves the representation and quantization of the visual information by employing available but unused information, called side information, which is hidden in the system. This side information is used to semi-supervise the clustering process that creates the visterms. The selection of side information depends on the visual image content, the annotation words and the relationship between them. Although there may be many different ways of defining and selecting side information, in this thesis three types are proposed. The first is the hidden topic probability information obtained automatically from the text document associated with the image. The second is the orientation, and the third is the color information around interest points that correspond to critical locations in the image. The side information provides a set of constraints in a semi-supervised K-means region clustering algorithm. Consequently, in the generation of the visual terms (visterms) from the regions, not only are the low level features clustered, but the side information is also used to complement the visual information. This complementary information is expected to close the semantic gap between the low level features extracted from each region and the high level textual information. Therefore, a better match between the visual codebook and the annotation words is obtained. Moreover, a speedup is obtained in the modified K-means algorithm because of the constraints brought by the side information. The proposed algorithm is implemented in a high performance parallel computation environment.
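A minimal sketch of constraint-guided clustering in the COP-KMeans style, assuming the side information has already been turned into cannot-link pairs (regions whose side information says they should not share a visterm). Must-link handling and the thesis's exact constraint construction are omitted for brevity.

```python
# Sketch: k-means assignment that skips clusters which would violate a
# cannot-link constraint; centres are then re-estimated as usual.
import numpy as np

def violates(i, c, labels, cannot_link):
    """True if placing point i in cluster c breaks a cannot-link pair."""
    return any(labels[j] == c
               for a, b in cannot_link
               for j in ([b] if a == i else [a] if b == i else []))

def cop_kmeans(X, k, cannot_link, iters=20, seed=0):
    """X: (n, d) float array of region features."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(iters):
        for i, x in enumerate(X):
            # Try clusters from nearest to farthest; take the first that
            # satisfies the constraints given the labels placed so far.
            for c in np.argsort(np.linalg.norm(centers - x, axis=1)):
                if not violates(i, c, labels, cannot_link):
                    labels[i] = c
                    break
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```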
|
60 |
3D Face Recognition With Local Shape Descriptors. Inan, Tolga, 01 September 2011.
This thesis presents two approaches to three-dimensional face recognition. In the first approach, a generic face model is fitted to the human face. Local shape descriptors are located at the nodes of the generic model mesh. The discriminative local shape descriptors among them are selected and fed as input into the face recognition system. In the second approach, local shape descriptors uniformly distributed across the face are calculated. Among the calculated shape descriptors, those that are discriminative for the recognition process are selected and used for three-dimensional face recognition.
Both approaches are tested with the widely accepted FRGC v2.0 database and experimental protocol. The reported results are better than those of state-of-the-art systems. Recognition performances for neutral and non-neutral faces are also reported.
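The descriptor-selection step can be sketched as plain feature ranking: score each descriptor location on a labelled gallery by between-class over within-class variance and keep the top-scoring ones. This Fisher-style criterion is an assumption made for illustration; the thesis's actual selection rule may differ.

```python
# Sketch: Fisher-score ranking of descriptor locations; higher scores
# mean the location separates subjects better.
import numpy as np

def fisher_scores(D, labels):
    """D: (n_samples, n_locations) descriptor values; labels: subject ids."""
    mu = D.mean(axis=0)
    between = np.zeros(D.shape[1])
    within = np.zeros(D.shape[1])
    for c in np.unique(labels):
        Dc = D[labels == c]
        between += len(Dc) * (Dc.mean(axis=0) - mu) ** 2
        within += ((Dc - Dc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# Example: keep = np.argsort(fisher_scores(D, labels))[::-1][:200]
```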
|