11

Adaptive registration using 2D and 3D features for indoor scene reconstruction. / Registro adaptativo usando características 2D e 3D para reconstrução de cenas em ambientes internos.

Juan Carlos Perafán Villota 27 October 2016
Pairwise alignment between point clouds is an important task in building 3D maps of indoor environments from partial information. The combination of 2D local features with depth information provided by RGB-D cameras is often used to improve such alignment. However, under varying lighting or low visual texture, indoor pairwise frame registration with sparse 2D local features is not particularly robust: features are hard to detect, leading to misalignment between consecutive pairs of frames. The use of 3D local features can be a solution, as such features come from the 3D points themselves and are resistant to variations in visual texture and illumination. Because varying conditions in real indoor scenes are unavoidable, we propose a new framework that improves pairwise frame alignment using an adaptive combination of sparse 2D and 3D features, based on the levels of geometric structure and visual texture contained in each scene. Experiments with datasets including unrestricted RGB-D camera motion and natural changes in illumination show that the proposed framework convincingly outperforms methods using 2D or 3D features separately, as reflected in a higher level of alignment accuracy.
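As an illustration of the kind of pipeline this abstract describes, here is a minimal sketch of 2D-feature-driven RGB-D pair alignment with a texture-based gate for switching to 3D features. It is not the author's implementation: the texture score, the threshold, and the fallback point are illustrative assumptions.

```python
import cv2
import numpy as np

def texture_score(gray):
    # Illustrative proxy for the "visual texture level": gradient energy.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return float(np.mean(gx ** 2 + gy ** 2))

def rigid_from_correspondences(P, Q):
    # Kabsch: least-squares R, t such that Q ~ R @ P + t.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def align_rgbd_pair(gray1, depth1, gray2, depth2, K, tex_thresh=25.0):
    # tex_thresh is an assumed tuning constant, not a value from the thesis.
    if min(texture_score(gray1), texture_score(gray2)) < tex_thresh:
        raise NotImplementedError("low texture: switch to 3D features (e.g. FPFH + ICP)")
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(gray1, None)
    k2, d2 = orb.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]  # 3x3 intrinsics
    P, Q = [], []
    for m in matches:
        (u1, v1), (u2, v2) = k1[m.queryIdx].pt, k2[m.trainIdx].pt
        z1, z2 = depth1[int(v1), int(u1)], depth2[int(v2), int(u2)]
        if z1 > 0 and z2 > 0:  # keep matches with valid depth at both ends
            P.append([(u1 - cx) * z1 / fx, (v1 - cy) * z1 / fy, z1])
            Q.append([(u2 - cx) * z2 / fx, (v2 - cy) * z2 / fy, z2])
    return rigid_from_correspondences(np.asarray(P), np.asarray(Q))
```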
12

Anatomy of the SIFT method / L'Anatomie de la méthode SIFT

Rey Otero, Ives 26 September 2015
This dissertation is an in-depth analysis of the SIFT method, the most popular image comparison method and the first efficient one. SIFT was also the first method to propose a practical sampling of the Gaussian scale-space and to put the theoretical scale invariance of scale-space into practice. It associates with each image a list of scale-invariant (also rotation- and translation-invariant) features which can be used for comparison with other images.
Because SIFT-like feature detectors have since been used in countless image processing applications, and because of the intimidating number of variants, studying an algorithm published more than a decade ago may be surprising. It seems, however, that not much has been done to really understand this central algorithm and to establish rigorously how much it can be improved for high-precision, reliable image matching. Our analysis of the SIFT algorithm is organized in four parts. We focus first on the exact computation of the Gaussian scale-space, which is at the heart of SIFT as well as most of its competitors. The second part is a meticulous dissection of the complex chain of transformations that forms the SIFT method, documenting every design parameter and its influence, from the extraction of invariant keypoints to the computation of feature vectors; it is accompanied by an online publication of the algorithm, with C code and a demonstration platform letting the reader explore the influence of each parameter. In the third part, using this documented implementation, which permits varying all parameters, we define a rigorous experimental framework to verify whether the extrema of the continuous scale-space are reliably and stably detected by SIFT from the discrete grid, and which sampling parameters influence the stability of extracted keypoints. This yields practical conclusions on the proper sampling of the Gaussian scale-space and on strategies for filtering unstable points. The same framework is used to analyze the influence of image perturbations such as aliasing, noise, and errors in the amount of blur. This analysis demonstrates that, despite the many methods claiming to outperform SIFT, there is in fact limited room for improvement for SIFT and for any variant that extracts keypoints from a scale-space; it also shows that oversampling the scale-space improves the extraction of extrema and that restricting to higher scales improves robustness to image perturbations. The last part concerns the performance evaluation of keypoint detectors, which has mainly been based on the repeatability criterion. We show that this popular criterion is biased toward methods producing redundant (overlapping) detections. We therefore propose an amended metric that accounts for the spatial distribution of detections and use it to revisit a classic benchmark. Under the amended repeatability criterion, once detection redundancy is taken into account, SIFT outperforms most of its more recent competitors. This corroborates the unabating interest in SIFT and the necessity of a thorough scrutiny of this method.
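For readers unfamiliar with the machinery this thesis dissects, the following is a minimal, unoptimized sketch of the Gaussian scale-space and difference-of-Gaussians (DoG) extrema detection at the core of SIFT. The constants mirror SIFT's usual defaults and the input is assumed normalized to [0, 1]; this is not the thesis's reference code.

```python
import cv2
import numpy as np

def dog_extrema(gray, n_scales=3, sigma0=1.6):
    # One octave of the Gaussian scale-space; sigma doubles across the octave.
    k = 2.0 ** (1.0 / n_scales)
    gauss = [cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma0 * k ** i)
             for i in range(n_scales + 3)]
    # Adjacent differences approximate the normalized Laplacian.
    dogs = np.stack([g2 - g1 for g1, g2 in zip(gauss, gauss[1:])])
    keypoints = []
    # A sample is a candidate keypoint if it is an extremum of its 3x3x3 neighbourhood
    # and passes SIFT's usual contrast threshold (0.03 assumes gray in [0, 1]).
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, gray.shape[0] - 1):
            for x in range(1, gray.shape[1] - 1):
                patch = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = dogs[s, y, x]
                if abs(v) > 0.03 and (v == patch.max() or v == patch.min()):
                    keypoints.append((x, y, sigma0 * k ** s))
    return keypoints
```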
13

Skládání snímků panoramatického pohledu / Panoramic View Reconstruction

Kuzdas, Oldřich January 2008
This paper deals step by step with the process of stitching images, taken by a perspective camera rotated about its optical center, into a panoramic image. It describes keypoint detection algorithms, approaches to computing the homography matrix, and methods for eliminating unwanted seams between the source images in the final panorama. Part of this work is also a standalone application implementing some of the algorithms described.
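The standard stitching chain this abstract outlines (keypoint matching, homography estimation, warping) can be sketched with OpenCV as follows; seam elimination is omitted here and replaced by a naive overwrite.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    # Detect and match keypoints (SIFT descriptors, Lowe's ratio test).
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    raw = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [p[0] for p in raw if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Robust homography estimation with RANSAC (needs at least 4 matches).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warp img2 into img1's frame; real stitchers blend the seam instead.
    pano = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
    pano[:img1.shape[0], :img1.shape[1]] = img1
    return pano
```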
14

Podobnost obrazů na základě bodů zájmu / Image similarity measurement using points of interest

Jelínek, Ondřej January 2015
This paper presents a new object detection method based on the analysis of keypoints and their parameters. The computed parameters are used to build a decision model with machine learning methods. Given input data, the model can detect an object in a picture and compare its similarity to a chosen example. The new method is described in detail, its accuracy is evaluated, and this accuracy is compared to other existing detectors; its detection ability is more than 40% better than that of detectors such as SURF. To explain object detection as a whole, the paper describes the process step by step, including popular algorithms designed for specific roles in each step.
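The abstract does not name the keypoint parameters or the learning algorithm used, so the following sketch only illustrates the general idea: turn keypoint statistics into a per-image feature vector and fit an off-the-shelf classifier. Every feature choice here is an assumption.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def keypoint_features(gray):
    # Illustrative feature vector built from keypoint parameters
    # (count, size and response statistics); not the paper's actual features.
    kps = cv2.ORB_create(500).detect(gray, None)
    if not kps:
        return np.zeros(5, np.float32)
    sizes = np.array([k.size for k in kps])
    resp = np.array([k.response for k in kps])
    return np.array([len(kps), sizes.mean(), sizes.std(),
                     resp.mean(), resp.std()], np.float32)

# Usage sketch: `images` and `labels` are assumed to be supplied by the caller.
# X = np.stack([keypoint_features(img) for img in images])
# clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```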
15

Alignement de données 2D, 3D et applications en réalité augmentée. / 2D, 3D data alignment and application in augmented reality

El Rhabi, Youssef 12 June 2017
This thesis belongs within the context of augmented reality (AR). The main issue is computing a camera pose in real time, following three main criteria: precision, robustness, and computational efficiency. In this thesis we introduce methods that make better use of image primitives; in our case, the primitives are keypoints detected and then described in an image based on its texture. We first set up an architecture enabling faster pose estimation without loss of precision or robustness, exploiting an offline phase in which the scene is reconstructed in 3D. The information obtained during this offline phase is used to build a neighbourhood graph linking the images in the database; this graph lets us select the most relevant database images and thus compute the camera pose more efficiently. Since the description and matching steps were not fast enough, we optimised the bottleneck parts of the pipeline, which led us to propose our own descriptor. To this end, we built a generic framework, grounded in information theory, that encompasses a good share of binary descriptors, including a recent state-of-the-art one named BOLD [BTM15]. We pursue a goal similar to BOLD's, namely increasing the stability of the produced descriptors with respect to rotations. To achieve it, we designed a novel offline selection scheme better adapted to the online matching procedure introduced in BOLD, and integrated these improvements into our descriptor. This improves the descriptor's performance, notably its speed, compared with state-of-the-art descriptors. This thesis details the various methods we developed to optimise camera pose estimation. The work has resulted in two publications (one national and one international) and a patent application.
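A binary descriptor of the family discussed here reduces a patch to intensity-comparison bits matched by Hamming distance. The sketch below uses random test pairs; BOLD-style methods, and the selection scheme this thesis proposes, instead select stable pairs offline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Offline step: choose intensity-comparison test pairs inside a 32x32 patch.
# Random here for illustration; BOLD-style selection would pick stable pairs.
PAIRS = rng.integers(0, 32, size=(256, 4))

def binary_descriptor(patch):
    # One bit per test: compare the intensities of two pixels of the patch.
    y1, x1, y2, x2 = PAIRS.T
    return patch[y1, x1] < patch[y2, x2]

def hamming(d1, d2):
    # Online matching cost: number of differing bits.
    return int(np.count_nonzero(d1 != d2))
```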
16

Deep Image Processing with Spatial Adaptation and Boosted Efficiency & Supervision for Accurate Human Keypoint Detection and Movement Dynamics Tracking

Chao Yang Dai (14709547) 31 May 2023
This thesis aims to design and develop a spatial adaptation approach, through spatial transformers, to improve the accuracy of human keypoint recognition models. We studied different model types and design choices to gain an accuracy increase over models without spatial transformers, and analyzed how spatial transformers increase the accuracy of predictions. A neural network called Widenet was leveraged as a specialized network for providing the parameters of the spatial transformer. Further, we evaluated methods to reduce the model parameters, as well as strategies to enhance the learning supervision, to further improve the performance of the model. Our experiments show that the proposed deep learning framework can effectively detect human keypoints compared with the baseline methods. We also reduced the model size without significantly impacting performance, and the enhanced supervision improved performance further. This study is expected to advance the deep learning of human keypoints and movement dynamics.
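A minimal PyTorch sketch of a spatial transformer front-end of the kind described, with a small stand-in localization network (not the thesis's Widenet) predicting an affine warp:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Sketch of a spatial transformer module; the localization network here
    is a placeholder, not the Widenet architecture from the thesis."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, 6),  # predicts a 2x3 affine matrix
        )
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1.0, 0, 0, 0, 1.0, 0]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        # Resample the input; a downstream keypoint head sees the warped image.
        return F.grid_sample(x, grid, align_corners=False)
```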
17

Continuous Balance Evaluation by Image Analysis of Live Video : Fall Prevention Through Pose Estimation / Kontinuerlig Balansutvärdering Genom Bildanalys av Video i Realtid : Fallprevention Genom Kroppshållningsestimation

Runeskog, Henrik January 2021
The deep learning technique Human Pose Estimation (or Human Keypoint Detection) is a promising approach to tracking a person and identifying their posture. As posture and balance are two closely related concepts, human pose estimation can be applied to fall prevention. By deriving the location of a person's Center of Mass, and from it the Center of Pressure, one can evaluate a person's balance using cameras alone, without force plates or sensors. In this study, a human pose estimation model together with a predefined human weight distribution model was used to extract the location of a person's Center of Pressure in real time. The proposed method utilized two different ways of acquiring depth information from the frames: stereoscopy through two RGB cameras, and a single RGB-depth camera. The estimated location of the Center of Pressure was compared to the same parameter extracted with the Wii Balance Board force plate. As the proposed method was to operate in real time without GPU acceleration, the choice of human pose estimation model aimed to maximize software input/output speed. Three models were therefore used: a smaller and faster model called Lightweight Pose Network, a larger and more accurate model called High-Resolution Network, and a model placing itself somewhere in between, namely Pose Residual Network. The proposed method showed promising results for real-time acquisition of balance parameters, although the largest source of error was the acquisition of depth information from the cameras. The results also showed that the smaller and faster human pose estimation model proved sufficient relative to the larger, more accurate models for real-time use without GPU acceleration.
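The CoM-to-CoP derivation can be sketched as follows, assuming a segment-based weight distribution model. The mass fractions and the static ground-projection approximation are illustrative, not the study's exact model.

```python
import numpy as np

# Illustrative segment mass fractions (rough anthropometric values);
# the study's actual weight distribution model is not specified here.
SEGMENT_MASS = {"head": 0.08, "trunk": 0.50, "arms": 0.10, "legs": 0.32}

def center_of_mass(segments3d):
    """segments3d: dict mapping segment name -> 3D midpoint of that segment
    (a numpy array), e.g. derived from pose-estimation keypoints."""
    total = sum(SEGMENT_MASS.values())
    return sum(SEGMENT_MASS[s] * segments3d[s] for s in SEGMENT_MASS) / total

def center_of_pressure(com):
    # Static approximation: CoP is the vertical projection of the CoM
    # onto the ground plane (assumed here to be y = 0).
    return np.array([com[0], 0.0, com[2]])
```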
18

Rozpoznávání obrazů pro ovládání robotické ruky / Image recognition for robotic hand

Labudová, Kristýna January 2017
This thesis deals with the processing and classification of images of embedded terminals. It analyzes moiré noise reduction through filtering in the frequency domain, and image normalization for further processing. Keypoint detectors and descriptors are used for image classification: the FAST and Harris corner detectors and the SURF, BRIEF, and BRISK descriptors are emphasized and evaluated in terms of their potential contribution to this work.
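Moiré reduction by filtering in the frequency domain, as mentioned above, can be sketched with a notch filter that zeroes isolated spectral peaks; the thresholds below are assumptions, not values from the thesis.

```python
import numpy as np

def remove_moire(gray, radius=4, min_freq=20):
    """Suppress periodic moiré noise by zeroing isolated peaks in the
    2D spectrum. radius/min_freq and the peak test are illustrative."""
    F = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.abs(F)
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - cy, xx - cx)
    # Candidate moiré peaks: strong components away from the DC region.
    peaks = (mag > 10 * np.median(mag)) & (dist > min_freq)
    # Zero a small notch around every detected peak.
    for y, x in zip(*np.nonzero(peaks)):
        F[np.hypot(yy - y, xx - x) <= radius] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```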
19

Terrain Mapping for Autonomous Vehicles / Terrängkartläggning för autonoma fordon

Pedreira Carabel, Carlos Javier January 2015
Autonomous vehicles are at the forefront of the automotive industry today, in the pursuit of safer and more efficient transportation systems. One of the main issues for every autonomous vehicle is being aware of its position and of obstacles along its path. The current project addresses the pose and terrain mapping problem by integrating a visual odometry method and a mapping technique. An RGB-D camera, the Microsoft Kinect v2, was chosen as the sensor for capturing information from the environment. It was connected to an Intel mini-PC for real-time processing, and both pieces of hardware were mounted on board a four-wheeled research concept vehicle (RCV) to test the feasibility of the solution outdoors. The Robot Operating System (ROS) was used as the development environment, with C++ as the programming language. The visual odometry strategy consisted of a frame registration algorithm called Adaptive Iterative Closest Keypoint (AICK), based on Iterative Closest Point (ICP) and using Oriented FAST and Rotated BRIEF (ORB) as the image keypoint extractor. A grid-based local costmap of the rolling-window type was implemented to obtain a two-dimensional representation of the obstacles close to the vehicle within a predefined area, enabling further path planning applications. Experiments were performed both offline and in real time to test the system in indoor and outdoor scenarios. The results confirmed the viability of the designed framework for tracking the camera pose and detecting objects in indoor environments. Outdoor environments, however, exposed the limitations of the RGB-D sensor, making the current system configuration unfeasible for outdoor purposes.
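The ICP core underlying AICK can be sketched as follows. AICK additionally seeds and refines the alignment with ORB keypoint correspondences; this minimal point-to-point version omits that and uses nearest-neighbour correspondences only.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30, tol=1e-6):
    """Minimal point-to-point ICP between two Nx3 point clouds,
    returning R, t such that dst ~ src @ R.T + t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)          # nearest-neighbour correspondences
        P, Q = moved, dst[idx]
        cp, cq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        dR = Vt.T @ np.diag([1, 1, d]) @ U.T   # best incremental rotation (Kabsch)
        R, t = dR @ R, dR @ t + cq - dR @ cp   # compose with current estimate
        err = dist.mean()
        if abs(prev_err - err) < tol:          # stop when alignment stabilizes
            break
        prev_err = err
    return R, t
```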
20

Unsupervised Domain Adaptation for Regressive Annotation : Using Domain-Adversarial Training on Eye Image Data for Pupil Detection / Oövervakad domänadaptering för regressionsannotering : Användning av domänmotstående träning på ögonbilder för pupilldetektion

Zetterström, Erik January 2023
Machine learning has seen rapid progress over the last couple of decades, with more and more powerful neural network models continuously being presented. These neural networks require large amounts of data to train. Labelled data is in especially great demand, but because data labelling is time-consuming and costly, labelled data is scarce while unlabelled data is usually abundant. In some cases, data from a certain distribution, or domain, is labelled, whereas the data we actually want to optimise our model on is unlabelled and from another domain. This falls under the umbrella of domain adaptation, and the purpose of this thesis is to train a network using domain-adversarial training on eye image datasets consisting of a labelled source domain and an unlabelled target domain, with the goal of performing well on target data, i.e., overcoming the domain gap. This was done on two different datasets: a proprietary dataset from Tobii with real images, and the public U2Eyes dataset with synthetic data. When comparing domain-adversarial training to a baseline model trained conventionally on source data and an oracle model trained conventionally on target data, the proposed DAT-ResNet model outperformed the baseline on both datasets. For the Tobii dataset, DAT-ResNet improved the Huber loss by 22.9% and the Intersection over Union (IoU) by 7.6%; for the U2Eyes dataset, it improved the Huber loss by 67.4% and the IoU by 37.6%. Furthermore, the IoU measures were extended to include the portion of predicted ellipses with no intersection with the corresponding ground-truth ellipse, referred to as zero-IoUs. By this metric, the proposed model improves the percentage of zero-IoUs by 34.9% on the Tobii dataset and by 90.7% on the U2Eyes dataset.
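Domain-adversarial training of the kind used here hinges on a gradient reversal layer between the feature encoder and the domain classifier (the core trick of Ganin et al.'s DANN). A minimal PyTorch sketch; the encoder and heads referenced in the usage note are assumed to be defined by the caller.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips and scales the gradient on the
    backward pass, so the encoder is trained to confuse the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: features from a shared encoder feed both a task head
# (labelled source data only) and a domain classifier through grad_reverse,
# pushing the encoder toward domain-invariant features:
# domain_logits = domain_head(grad_reverse(encoder(x), lam))
```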
