41

基於點群排序關係的動態設定特徵描述子建構及優化 / Construction and optimization of feature descriptor based on dynamic local intensity order relations of pixel group

游佳霖, Yu, Carolyn Unknown Date (has links)
隨著智慧型手機的普及,在移動裝置上直接處理圖像的需求也大幅增加,故對於影像特徵描述子的要求,除了要表現出區域特徵的穩健性,同時也要維持良好的特徵比對效率與合理的儲存空間。過去所提出的區域影像特徵描述子建構方法之中,LIOP方法具有相當不錯的表現力,但其特徵描述子維度會隨著點群取樣數量的提高而以倍數增加,因此本研究提出Dynamic Local Intensity Order Relations (DLIOR)特徵描述子建構方法,利用LIOR方法探討點群中點與點之間的關係,減緩其維度增長幅度;透過動態設定像素差距門檻值,處理影像間像素差距分佈不均的問題,並使用線性轉換、點對歐幾里德距離等方式,重新定義描述子欄位的權重設定。經過實驗證實,DLIOR方法能夠使用比LIOP方法更少的維度空間,描述更多點群數的特徵資訊,並且具有更高的特徵比對能力。 / With the popularity of smartphones, the amount of images captured and processed on mobile devices has grown significantly in recent years. Image feature descriptors, which play a crucial role in recognition tasks, are expected to exhibit robust matching performance while maintaining reasonable storage requirements. Among previously proposed local feature descriptors, the local intensity order pattern (LIOP) has demonstrated superior performance in many benchmark studies. However, because LIOP encodes the ranking relation of a point set with N elements, its feature dimension increases drastically (as N!) with the number of neighboring sampling points around a pixel. To alleviate this dimensionality issue, this thesis presents a local feature descriptor that considers pairwise intensity relations in a pixel group, thereby reducing the feature dimension to the order of C(N,2). In the proposed method, the threshold for assigning an order relation is set dynamically according to the local intensity distribution. Different weighting schemes, including linear transformation and pairwise Euclidean distance, have also been investigated to adjust the contribution of each pairing relation. Ultimately, the dynamic local intensity order relations (DLIOR) descriptor is devised to effectively encode the intensity order relations of each pixel group. Experimental results on benchmark datasets indicate that DLIOR consumes less storage space than LIOP while achieving better feature matching performance.
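To make the dimensionality argument concrete, the sketch below encodes the C(N,2) pairwise relations of N sampled neighbours with a dynamically set threshold. It is only an illustration of the idea in the abstract: the three-way quantisation, the function names and the 0.1 scale factor are assumptions, not the thesis's exact scheme.

```python
import numpy as np
from itertools import combinations

def pairwise_order_code(samples, dynamic_scale=0.1):
    """Encode pairwise intensity relations of N sampled neighbours.

    Instead of the N! ranking permutations encoded by LIOP, only the
    C(N,2) pairwise comparisons are kept.  The comparison threshold is
    derived dynamically from the local intensity spread, as the thesis
    proposes (the 0.1 scale factor is an assumed value).
    """
    samples = np.asarray(samples, dtype=float)
    thr = dynamic_scale * (samples.max() - samples.min())  # dynamic threshold
    code = []
    for i, j in combinations(range(len(samples)), 2):
        d = samples[i] - samples[j]
        # three-way relation: clearly greater / roughly equal / clearly smaller
        code.append(0 if d > thr else (1 if d >= -thr else 2))
    return np.array(code)

# N = 6 neighbours -> C(6,2) = 15 pairwise relations instead of 6! = 720 patterns
print(pairwise_order_code([12, 40, 38, 90, 15, 60]))
```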
42

Anatomy of the SIFT method / L'Anatomie de la méthode SIFT

Rey Otero, Ives 26 September 2015 (has links)
Cette thèse est une analyse approfondie de la méthode SIFT, la méthode de comparaison d'images la plus populaire. En proposant un échantillonnage du scale-space Gaussien, elle est aussi la première méthode à mettre en pratique la théorie scale-space et faire usage de ses propriétés d'invariance aux changements d'échelles. SIFT associe à une image un ensemble de descripteurs invariants aux changements d'échelle, invariants à la rotation et à la translation. Les descripteurs de différentes images peuvent être comparés afin de mettre en correspondance les images. Compte tenu de ses nombreuses applications et ses innombrables variantes, étudier un algorithme publié il y a une décennie pourrait surprendre. Il apparaît néanmoins que peu a été fait pour réellement comprendre cet algorithme majeur et établir de façon rigoureuse dans quelle mesure il peut être amélioré pour des applications de haute précision. Cette étude se découpe en quatre parties. Le calcul exact du scale-space Gaussien, qui est au cœur de la méthode SIFT et de la plupart de ses compétiteurs, est l'objet de la première partie. La deuxième partie est une dissection méticuleuse de la longue chaîne de transformations qui constitue la méthode SIFT. Chaque paramètre y est documenté et son influence analysée. Cette dissection est aussi associée à une publication en ligne de l'algorithme. La description détaillée s'accompagne d'un code en C ainsi que d'une plateforme de démonstration permettant l'analyse par le lecteur de l'influence de chaque paramètre. Dans la troisième partie, nous définissons un cadre d'analyse expérimental exact dans le but de vérifier que la méthode SIFT détecte de façon fiable et stable les extrema du scale-space continu à partir de la grille discrète. En découlent des conclusions pratiques sur le bon échantillonnage du scale-space Gaussien ainsi que sur les stratégies de filtrage de points instables. Ce même cadre expérimental est utilisé dans l'analyse de l'influence de perturbations dans l'image (aliasing, bruit, flou). Cette analyse démontre que la marge d'amélioration est réduite pour la méthode SIFT ainsi que pour toutes ses variantes s'appuyant sur le scale-space pour extraire des points d'intérêt. L'analyse démontre qu'un suréchantillonnage du scale-space permet d'améliorer l'extraction d'extrema et que se restreindre aux échelles élevées améliore la robustesse aux perturbations de l'image. La dernière partie porte sur l'évaluation des performances de détecteurs de points. La métrique de performance la plus généralement utilisée est la répétabilité. Nous démontrons que cette métrique souffre pourtant d'un biais et qu'elle favorise les méthodes générant des détections redondantes. Afin d'éliminer ce biais, nous proposons une variante qui prend en considération la répartition spatiale des détections. À l'aide de cette correction nous réévaluons l'état de l'art et montrons que, une fois la redondance des détections prise en compte, la méthode SIFT est meilleure que nombre de ses variantes les plus modernes. / This dissertation contributes to an in-depth analysis of the SIFT method. SIFT is the most popular and the first efficient image comparison model. SIFT is also the first method to propose a practical scale-space sampling and to put the theoretical scale invariance in scale space into practice. It associates with each image a list of scale-invariant (also rotation- and translation-invariant) features which can be used for comparison with other images.
Since SIFT feature detectors have been used in countless image processing applications, and given the intimidating number of variants, studying an algorithm that was published more than a decade ago may be surprising. It seems, however, that not much has been done to really understand this central algorithm and to find out exactly what improvements we can hope for in the matter of reliable image matching methods. Our analysis of the SIFT algorithm is organized as follows. We focus first on the exact computation of the Gaussian scale-space, which is at the heart of SIFT as well as of most of its competitors. We provide a meticulous dissection of the complex chain of transformations that form the SIFT method and a presentation of every design parameter, from the extraction of invariant keypoints to the computation of feature vectors. Using this documented implementation, which permits varying all of the method's parameters, we define a rigorous simulation framework to find out whether the scale-space features are indeed correctly detected by SIFT, and which sampling parameters influence the stability of extracted keypoints. This analysis is extended to the influence of other crucial perturbations, such as errors in the amount of blur, aliasing and noise. The analysis demonstrates that, despite the fact that numerous methods claim to outperform the SIFT method, there is in fact limited room for improvement in methods that extract keypoints from a scale-space. The comparison of the many detectors proposed in SIFT's competitors is the subject of the last part of this thesis. The performance analysis of local feature detectors has mainly been based on the repeatability criterion. We show that this popular criterion is biased toward methods producing redundant (overlapping) detections. We therefore propose an amended evaluation metric and use it to revisit a classic benchmark. Under the amended repeatability criterion, SIFT is shown to outperform most of its more recent competitors. This last fact corroborates the unabating interest in SIFT and the necessity of a thorough scrutiny of this method.
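As background for the first part, here is a minimal sketch of the sampled Gaussian scale-space at the heart of SIFT. It uses Lowe's standard settings (base sigma 1.6, three scales per octave) and blurs each slice directly from the octave base for simplicity; the exact sampling the thesis investigates (including oversampling) may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(img, n_octaves=4, scales_per_octave=3, sigma0=1.6):
    """Sampled Gaussian scale-space (simplified SIFT-style construction).

    Each slice is blurred directly from the octave base rather than
    incrementally, which is a simplification of the canonical scheme.
    """
    pyramid = []
    base = img.astype(float)
    for _ in range(n_octaves):
        octave = []
        for s in range(scales_per_octave + 3):  # extra slices for DoG extrema
            sigma = sigma0 * 2 ** (s / scales_per_octave)
            octave.append(gaussian_filter(base, sigma))
        pyramid.append(octave)
        base = octave[scales_per_octave][::2, ::2]  # sigma doubled: downsample
    return pyramid

scale_space = gaussian_scale_space(np.random.rand(128, 128))
print(len(scale_space), len(scale_space[0]))  # 4 octaves, 6 slices each
```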
43

Product Matching Using Image Similarity

Forssell, Melker, Janér, Gustav January 2020 (has links)
PriceRunner is an online shopping comparison company. To maintain up-to-date prices, PriceRunner has to process large amounts of data every day. The processing of the data includes matching unknown products, referred to as offers, to known products. Offer data includes information about the product such as title, description, price and often one image of the product. PriceRunner has previously implemented a text-based machine learning (ML) model, but is also looking for new approaches to complement the current product matching system. The objective of this master’s thesis is to investigate the potential of using an image-based ML model for product matching. Our method uses a similarity learning approach where the network learns to recognise the similarity between images. To achieve this, a siamese neural network was trained with the triplet loss function. The network is trained to map similar images closer together and dissimilar images further apart in a vector space. This approach is often used for face recognition, where there is a large number of classes, a limited number of images per class, and new classes are frequently added. This is also the case for the image data used in this thesis project. A general model was trained on images from the Clothing and Accessories hierarchy, one of the 16 top-level hierarchies at PriceRunner, consisting of 17 product categories. The results varied between product categories. Some categories proved to be less suitable for image-based classification while others excelled. The model handles new classes relatively well without any retraining, or with only brief retraining. It was concluded that there is potential in using images to complement the current product matching system at PriceRunner.
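The triplet loss mentioned above has a compact form; the sketch below shows it on L2-normalised embeddings. The margin value is an assumption for illustration, not one reported in the abstract.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embedding vectors.

    Pulls the anchor towards an image of the same product (positive) and
    pushes it away from a different product (negative) by at least
    `margin` (0.2 is an assumed value, not taken from the thesis).
    """
    d_pos = np.linalg.norm(anchor - positive)  # same product
    d_neg = np.linalg.norm(anchor - negative)  # different product
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=128) for _ in range(3))
a, p, n = (v / np.linalg.norm(v) for v in (a, p, n))  # L2-normalise
print(triplet_loss(a, p, n))
```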
44

Adaptive Losses for Camera Pose Supervision

Dahlqvist, Marcus January 2021 (has links)
This master's thesis studies the learning of dense feature descriptors where camera poses are the only supervisory signal. The use of camera poses as a supervisory signal has been published only once before, and this thesis expands on that work with two techniques meant to increase the robustness of the method, which is particularly important when no ground-truth correspondences are available. Firstly, an adaptive robust loss is used to better differentiate inliers from outliers. Secondly, statistical properties of the training process are both enforced and adapted to, in an attempt to alleviate the uncertainty introduced by the lack of true correspondences. These additions are shown to slightly increase performance, and they also highlight some key ideas related to prediction certainty and robustness when working with camera poses as a supervisory signal. Finally, possible directions for future work are discussed.
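The abstract does not name the adaptive robust loss; one plausible reading is a formulation in the spirit of Barron's general robust loss, sketched below purely as an illustration of how a shape parameter can down-weight outlier residuals (in practice the shape parameter would be learned during training).

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """Barron-style general robust loss (an assumed stand-in for the
    "adaptive robust loss" of the abstract, not the thesis's code).

    alpha = 2 recovers an L2 loss; smaller alpha grows sub-quadratically,
    so large (outlier) residuals contribute less to the gradient.
    Limit cases alpha = 0 and alpha -> -inf are omitted for brevity.
    """
    x = np.asarray(x, dtype=float)
    b = abs(alpha - 2.0) + 1e-8
    return (b / alpha) * ((np.square(x / c) / b + 1.0) ** (alpha / 2.0) - 1.0)

residuals = np.array([0.1, 0.5, 3.0])             # last one is an outlier
print(general_robust_loss(residuals, alpha=2.0))  # quadratic behaviour
print(general_robust_loss(residuals, alpha=0.5))  # outlier down-weighted
```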
45

Image Stitching and Matching Tool in the Automated Iterative Reverse Engineer (AIRE) Integrated Circuit Analysis Suite

Bowman, David C. 24 August 2018 (has links)
No description available.
46

Apprentissage de descripteurs locaux pour l’amélioration des systèmes de SLAM visuel / Learning local descriptors for the improvement of visual SLAM systems

Luttun, Johan 12 1900 (has links)
This thesis covers the topic of image matching in a visual SLAM or SfM context. These problems are generally based on a vector representation of the keypoints of one image, called a descriptor, which we seek to map to the keypoints of another, using a similarity measure to compare the descriptors. However, it remains difficult to perform this matching successfully, especially for challenging scenes containing illumination changes, occlusions, motion, textureless regions and similar-looking features, all of which lead to mismatched points. In this thesis, we develop a self-supervised contrastive deep learning framework for computing descriptors that are robust, particularly in these challenging situations. We use the TartanAir dataset, built explicitly for this task, in which these difficult scene cases are present. Our results show that descriptor learning works, improves scores, and that our method is competitive with traditional methods such as ORB. In particular, the invariance built implicitly by forming pairs of positive examples from a trajectory through a sequence of images, as well as the controlled introduction of ambiguous negative examples during training, has a real, observable effect on the scores obtained. / Le présent mémoire traite du sujet de mise en correspondance entre deux images dans un contexte de SLAM visuel ou de SfM. Ces problèmes reposent généralement sur une représentation vectorielle de points saillants d’une image, appelée descripteur, et qu’on cherche à mettre en correspondance avec les points saillants d’une autre, en utilisant une mesure de similarité pour comparer les descripteurs. Cependant, il reste difficile de réaliser cette mise en correspondance avec succès, en particulier pour les scènes difficiles où des changements d’illumination, des occultations, des mouvements, des éléments sans texture, et des éléments similaires sont présents, conduisant à des mises en correspondance incorrectes. Nous développons dans ce mémoire une méthode d’apprentissage profond contrastif auto-supervisé pour calculer des descripteurs robustes, particulièrement à ces situations difficiles. Nous utilisons le jeu de données TartanAir construit explicitement pour cette tâche, et dans lequel ces cas de scènes difficiles sont présents. Nos résultats montrent que l’apprentissage de descripteurs fonctionne, améliore les scores, et que notre méthode est compétitive avec les méthodes traditionnelles telles que ORB. En particulier, l’invariance bâtie implicitement en formant des paires d’exemples positifs grâce à la construction d’une trajectoire depuis une séquence d’images, ainsi que l’introduction contrôlée d’exemples négatifs ambigus pendant l’entraînement a un réel effet observable sur les scores obtenus.
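As an illustration of the self-supervised contrastive setup, the sketch below computes a standard InfoNCE-style loss over descriptors of keypoints matched through the trajectory; the temperature and the use of in-batch negatives are assumptions, since the abstract does not specify the exact loss.

```python
import numpy as np

def info_nce(desc_a, desc_b, temperature=0.07):
    """InfoNCE-style contrastive loss over matched keypoint descriptors.

    desc_a[i] and desc_b[i] describe the same 3D point seen in two frames
    of a trajectory (positive pair); the other rows of desc_b act as
    negatives.  Assumed formulation, for illustration only.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

rng = np.random.default_rng(1)
d1 = rng.normal(size=(32, 128))
d2 = d1 + 0.05 * rng.normal(size=(32, 128))  # same points, mild perturbation
print(info_nce(d1, d2))
```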
47

Contributions aux problèmes de l'étalonnage extrinsèque d'affichages semi-transparents pour la réalité augmentée et de la mise en correspondance dense d'images / Contributions to the problems of extrinsic calibration of semi-transparent displays for augmented reality and of dense image matching

Braux-Zin, Jim 26 September 2014 (has links)
La réalité augmentée consiste en l'insertion d'éléments virtuels dans une scène réelle, observée à travers un écran. Les systèmes de réalité augmentée peuvent prendre des formes différentes pour obtenir l'équilibre désiré entre trois critères : précision, latence et robustesse. On identifie trois composants principaux : localisation, reconstruction et affichage. Nous nous concentrons sur l'affichage et la reconstruction. Pour certaines applications, l'utilisateur ne peut être isolé de la réalité. Nous proposons un système sous forme de "tablette augmentée" avec un écran semi transparent, au prix d'un étalonnage adapté. Pour assurer l'alignement entre augmentations et réalité, il faut connaître les poses relatives de l'utilisateur et de la scène observée par rapport à l'écran. Deux dispositifs de localisation sont nécessaires et l'étalonnage consiste à calculer la pose de ces dispositifs par rapport à l'écran. Le protocole d'étalonnage est le suivant : l'utilisateur renseigne les projections apparentes dans l'écran de points de référence d'un objet 3D connu ; les poses recherchées minimisent la distance 2D entre ces projections et celles calculées par le système. Ce problème est non convexe et difficile à optimiser. Pour obtenir une estimation initiale, nous développons une méthode directe par l'étalonnage intrinsèque et extrinsèque de caméras virtuelles. Ces dernières sont définies par leurs centres optiques, confondus avec les positions de l'utilisateur, ainsi que leur plan focal, constitué par l'écran. Les projections saisies par l'utilisateur constituent alors les observations 2D des points de référence dans ces caméras virtuelles. Un raisonnement symétrique permet de considérer des caméras virtuelles centrées sur les points de référence de l'objet, "observant" les positions de l'utilisateur. Ces estimations initiales sont ensuite raffinées par ajustement de faisceaux. La reconstruction 3D est basée sur la triangulation de correspondances entre images. Ces correspondances peuvent être éparses lorsqu'elles sont établies par détection, description et association de primitives géométriques ou denses lorsqu'elles sont établies par minimisation d'une fonction de coût sur toute l'image. Un champ dense de correspondance est préférable car il permet une reconstruction de surface, utile notamment pour une gestion réaliste des occultations en réalité augmentée. Les méthodes d'estimation d'un tel champ sont basées sur une optimisation variationnelle, précise mais sensible aux minimums locaux et limitée à des images peu différentes. A l'opposé, l'emploi de descripteurs discriminants peut rendre les correspondances éparses très robustes. Nous proposons de combiner les avantages des deux approches par l'intégration d'un coût basé sur des correspondances éparses de primitives à une méthode d'estimation variationnelle dense. Cela permet d'empêcher l'optimisation de tomber dans un minimum local sans dégrader la précision. Notre terme basé correspondances éparses est adapté aux primitives à coordonnées non entières, et peut exploiter des correspondances de points ou de segments tout en filtrant implicitement les correspondances erronées. Nous proposons aussi une détection et gestion complète des occultations pour pouvoir mettre en correspondance des images éloignées. Nous avons adapté et généralisé une méthode locale de détection des auto-occultations. 
Notre méthode produit des résultats compétitifs avec l'état de l'art, tout en étant plus simple et plus rapide, pour les applications de flot optique 2D et de stéréo à large parallaxe. Nos contributions permettent d'appliquer les méthodes variationnelles à de nouvelles applications sans dégrader leur performance. Le faible couplage des modules permet une grande flexibilité et généricité. Cela nous permet de transposer notre méthode pour le recalage de surfaces déformables avec des résultats surpassant l'état de l'art, ouvrant de nouvelles perspectives. / Augmented reality is the process of inserting virtual elements into a real scene, observed through a screen. Augmented reality systems can take different forms to reach the desired balance between three criteria: accuracy, latency and robustness. Three main components can be identified: localization, reconstruction and display. The contributions of this thesis focus on display and reconstruction. Most augmented reality systems use non-transparent screens, as they are widely available. However, for critical applications such as surgery or driving assistance, the user can never be isolated from reality. We answer this problem by proposing a new “augmented tablet” system with a semi-transparent screen. Such a system needs a suitable calibration scheme: to correctly align the displayed augmentations and reality, one needs to know at every moment the poses of the user and of the observed scene with regard to the screen. Two tracking devices (user and scene) are thus necessary, and the system calibration aims to compute the poses of those devices with regard to the screen. The calibration process set up in this thesis is as follows: the user indicates the apparent on-screen projections of reference points from a known 3D object; the poses to be estimated are then those that minimize the 2D on-screen distance between those projections and the ones computed by the system. This is a non-convex problem that is difficult to solve without a sound initialization. We develop a direct estimation method by computing the extrinsic parameters of virtual cameras. Those are defined by their optical centers, which coincide with the user positions, and their common focal plane, consisting of the screen plane. The user-entered projections are then the 2D observations of the reference points in those virtual cameras. A symmetrical argument allows one to define virtual cameras centered on the reference points and “looking at” the user positions. Those initial estimates can then be refined with a bundle adjustment. Meanwhile, 3D reconstruction is based on the triangulation of matches between images. Those matches can be sparse, when computed by detection and description of image features, or dense, when computed through the minimization of a cost function over the whole image. A dense correspondence field is preferable because it makes it possible to reconstruct a 3D surface, useful especially for realistic handling of occlusions in augmented reality. However, such a field is usually estimated with variational methods that minimize a non-convex cost function using local information. Those methods are accurate but subject to local minima, and thus limited to small deformations. In contrast, sparse matches can be made very robust by using adequately discriminative descriptors. We propose to combine the advantages of those two approaches by adding a feature-based term to a dense variational method. It helps prevent the optimization from falling into local minima without degrading the final accuracy.
Our feature-based term is suited to features with non-integer coordinates and can handle point or line-segment matches while implicitly filtering false matches. We also introduce comprehensive handling of occlusions so as to support large deformations. In particular, we have adapted and generalized a local method for detecting self-occlusions. Results on 2D optical flow and wide-baseline stereo disparity estimation are competitive with the state of the art, with a simpler and usually faster method. This proves that our contributions enable new applications of variational methods without degrading their accuracy. Moreover, the weak coupling between the components allows great flexibility and genericity. This is why we were also able to transpose the proposed method to the problem of non-rigid surface registration, where it outperforms state-of-the-art methods.
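Schematically, the combination of a dense variational energy with a sparse feature term can be written as below; the notation is illustrative (in the spirit of feature-augmented optical flow energies), not the thesis's exact formulation.

```latex
\begin{align*}
E(\mathbf{w}) ={}& \int_{\Omega} \Psi\!\big(|I_2(\mathbf{x}+\mathbf{w}(\mathbf{x})) - I_1(\mathbf{x})|^2\big)\,d\mathbf{x}
  && \text{(dense data term)} \\
 &+ \alpha \int_{\Omega} \Psi\!\big(\lVert\nabla \mathbf{w}(\mathbf{x})\rVert^2\big)\,d\mathbf{x}
  && \text{(smoothness term)} \\
 &+ \beta \sum_{k} \rho\!\big(\lVert\mathbf{w}(\mathbf{x}_k) - \mathbf{w}_k\rVert\big)
  && \text{(sparse feature term)}
\end{align*}
```

The robust penalty ρ in the last term lets erroneous sparse matches w_k be down-weighted, which is the implicit filtering described above, while still steering the optimization away from local minima.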
48

Matching Sticky Notes Using Latent Representations / Matchning av klisterlappar med hjälp av latent representation

García San Vicent, Javier January 2022 (has links)
This project addresses the issue of accurately identifying repeated images of sticky notes. Due to environmental conditions and the 3D location of the camera, different pictures taken of a sticky note may look different enough that it is hard to determine whether they belong to the same note. More specifically, this thesis aims to create latent representations of these pictures of sticky notes that encode their content, so that all the pictures of the same note have a similar representation that allows them to be identified. Those representations must therefore be invariant to light conditions, blur and camera position. To that end, a siamese neural architecture was trained based on data augmentation methods. The method consists of learning to embed two augmented versions of the same image into similar representations. This architecture was trained with unsupervised learning and fine-tuned with supervised learning to detect whether or not two representations belong to the same note. The performance of ResNet, EfficientNet and Vision Transformers in encoding the images into their representations has been compared across different configurations. The results show that, while the most complex models overfit small amounts of data, the simplest encoders are capable of properly identifying more than 95% of the sticky notes in greyscale. Those models can create invariant representations that are close to each other in the latent space for pictures of the same sticky note. Gathering more data could improve the performance of the model and open up the possibility of applying it to other fields such as handwritten documents. / Detta projekt tar upp frågan om att identifiera upprepade bilder av klisterlappar. På grund av miljöförhållanden och kamerans 3D-placering kan olika bilder som tagits till klisterlappar se tillräckligt distinkta ut för att det ska vara svårt att avgöra om de faktiskt tillhör samma klisterlappar. Mer specifikt är syftet med denna avhandling att skapa latenta representationer av bilder av klisterlappar som kodar deras innehåll, så att alla bilder av en klisterlapp har en liknande representation som gör det möjligt att identifiera dem. Sålunda måste representationerna vara oföränderliga för ljusförhållanden, oskärpa och kameraposition. För det ändamålet kommer en enkel siamesisk neural arkitektur att tränas baserad på dataförstärkningsmetoder. Metoden går ut på att lära sig att göra representationerna av två förstärkta versioner av en bild så lika som möjligt. Genom att tillämpa vissa förbättringar av arkitekturen kan oövervakat lärande användas för att träna nätverket. Prestandan hos ResNet, EfficientNet och Vision Transformers när det gäller att koda bilderna till deras representationer har jämförts med olika konfigurationer. Resultaten visar att även om de mest komplexa modellerna överpassar små mängder data, kan de enklaste kodarna korrekt identifiera mer än 95% av klisterlapparna. Dessa modeller kan skapa oföränderliga representationer som är nära i det latenta utrymmet för bilder av samma klisterlapp. Att samla in mer data kan resultera i en förbättring av modellens prestanda och möjligheten att tillämpa den på andra områden som till exempel handskrivna dokument.
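Once such embeddings exist, identification reduces to a nearest-neighbour test in the latent space. The sketch below shows this matching step; the cosine-similarity threshold is an assumed operating point, not a value from the thesis.

```python
import numpy as np

def match_note(query_emb, gallery_embs, threshold=0.8):
    """Match a query picture against embeddings of known sticky notes.

    Returns the index of the closest gallery note if its cosine
    similarity exceeds `threshold` (assumed value), else None.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to every known note
    best = int(np.argmax(sims))
    return (best if sims[best] >= threshold else None), float(sims[best])

rng = np.random.default_rng(2)
gallery = rng.normal(size=(100, 256))             # embeddings of known notes
query = gallery[17] + 0.1 * rng.normal(size=256)  # new photo of note 17
print(match_note(query, gallery))
```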
49

Geometrische und stochastische Modelle zur Verarbeitung von 3D-Kameradaten am Beispiel menschlicher Bewegungsanalysen / Geometric and stochastic models for the processing of 3D camera data within the context of human motion analyses

Westfeld, Patrick 15 June 2012 (has links) (PDF)
Die dreidimensionale Erfassung der Form und Lage eines beliebigen Objekts durch die flexiblen Methoden und Verfahren der Photogrammetrie spielt für ein breites Spektrum technisch-industrieller und naturwissenschaftlicher Einsatzgebiete eine große Rolle. Die Anwendungsmöglichkeiten reichen von Messaufgaben im Automobil-, Maschinen- und Schiffbau über die Erstellung komplexer 3D-Modelle in Architektur, Archäologie und Denkmalpflege bis hin zu Bewegungsanalysen in Bereichen der Strömungsmesstechnik, Ballistik oder Medizin. In der Nahbereichsphotogrammetrie werden dabei verschiedene optische 3D-Messsysteme verwendet. Neben flächenhaften Halbleiterkameras im Einzel- oder Mehrbildverband kommen aktive Triangulationsverfahren zur Oberflächenmessung mit z.B. strukturiertem Licht oder Laserscanner-Systeme zum Einsatz. 3D-Kameras auf der Basis von Photomischdetektoren oder vergleichbaren Prinzipien erzeugen durch die Anwendung von Modulationstechniken zusätzlich zu einem Grauwertbild simultan ein Entfernungsbild. Als Einzelbildsensoren liefern sie ohne die Notwendigkeit einer stereoskopischen Zuordnung räumlich aufgelöste Oberflächendaten in Videorate. In der 3D-Bewegungsanalyse ergeben sich bezüglich der Komplexität und des Rechenaufwands erhebliche Erleichterungen. 3D-Kameras verbinden die Handlichkeit einer Digitalkamera mit dem Potential der dreidimensionalen Datenakquisition etablierter Oberflächenmesssysteme. Sie stellen trotz der noch vergleichsweise geringen räumlichen Auflösung als monosensorielles System zur Echtzeit-Tiefenbildakquisition eine interessante Alternative für Aufgabenstellungen der 3D-Bewegungsanalyse dar. Der Einsatz einer 3D-Kamera als Messinstrument verlangt die Modellierung von Abweichungen zum idealen Abbildungsmodell; die Verarbeitung der erzeugten 3D-Kameradaten bedingt die zielgerichtete Adaption, Weiter- und Neuentwicklung von Verfahren der Computer Vision und Photogrammetrie. Am Beispiel der Untersuchung des zwischenmenschlichen Bewegungsverhaltens sind folglich die Entwicklung von Verfahren zur Sensorkalibrierung und zur 3D-Bewegungsanalyse die Schwerpunkte der Dissertation. Eine 3D-Kamera stellt aufgrund ihres inhärenten Designs und Messprinzips gleichzeitig Amplituden- und Entfernungsinformationen zur Verfügung, welche aus einem Messsignal rekonstruiert werden. Die simultane Einbeziehung aller 3D-Kamerainformationen in jeweils einen integrierten Ansatz ist eine logische Konsequenz und steht im Vordergrund der Verfahrensentwicklungen. Zum einen stützen sich die komplementären Eigenschaften der Beobachtungen durch die Herstellung des funktionalen Zusammenhangs der Messkanäle gegenseitig, wodurch Genauigkeits- und Zuverlässigkeitssteigerungen zu erwarten sind. Zum anderen gewährleistet das um eine Varianzkomponentenschätzung erweiterte stochastische Modell eine vollständige Ausnutzung des heterogenen Informationshaushalts. Die entwickelte integrierte Bündelblockausgleichung ermöglicht die Bestimmung der exakten 3D-Kamerageometrie sowie die Schätzung der distanzmessspezifischen Korrekturparameter zur Modellierung linearer, zyklischer und signalwegeffektbedingter Fehleranteile einer 3D-Kamerastreckenmessung. Die integrierte Kalibrierroutine gleicht in beiden Informationskanälen gemessene Größen gemeinsam, unter der automatischen Schätzung optimaler Beobachtungsgewichte, aus. 
Die Methode basiert auf dem flexiblen Prinzip einer Selbstkalibrierung und benötigt keine Objektrauminformation, wodurch insbesondere die aufwendige Ermittlung von Referenzstrecken übergeordneter Genauigkeit entfällt. Die durchgeführten Genauigkeitsuntersuchungen bestätigen die Richtigkeit der aufgestellten funktionalen Zusammenhänge, zeigen aber auch Schwächen aufgrund noch nicht parametrisierter distanzmessspezifischer Fehler. Die Adaptivität und die modulare Implementierung des entwickelten mathematischen Modells gewährleisten aber eine zukünftige Erweiterung. Die Qualität der 3D-Neupunktkoordinaten kann nach einer Kalibrierung mit 5 mm angegeben werden. Für die durch eine Vielzahl von meist simultan auftretenden Rauschquellen beeinflusste Tiefenbildtechnologie ist diese Genauigkeitsangabe sehr vielversprechend, vor allem im Hinblick auf die Entwicklung von auf korrigierten 3D-Kameradaten aufbauenden Auswertealgorithmen. 2,5D Least Squares Tracking (LST) ist eine im Rahmen der Dissertation entwickelte integrierte spatiale und temporale Zuordnungsmethode zur Auswertung von 3D-Kamerabildsequenzen. Der Algorithmus basiert auf der in der Photogrammetrie bekannten Bildzuordnung nach der Methode der kleinsten Quadrate und bildet kleine Oberflächensegmente konsekutiver 3D-Kameradatensätze aufeinander ab. Die Abbildungsvorschrift wurde, aufbauend auf einer 2D-Affintransformation, an die Datenstruktur einer 3D-Kamera angepasst. Die geschlossen formulierte Parametrisierung verknüpft sowohl Grau- als auch Entfernungswerte in einem integrierten Modell. Neben den affinen Parametern zur Erfassung von Translations- und Rotationseffekten, modellieren die Maßstabs- sowie Neigungsparameter perspektivbedingte Größenänderungen des Bildausschnitts, verursacht durch Distanzänderungen in Aufnahmerichtung. Die Eingabedaten sind in einem Vorverarbeitungsschritt mit Hilfe der entwickelten Kalibrierroutine um ihre opto- und distanzmessspezifischen Fehler korrigiert sowie die gemessenen Schrägstrecken auf Horizontaldistanzen reduziert worden. 2,5D-LST liefert als integrierter Ansatz vollständige 3D-Verschiebungsvektoren. Weiterhin können die aus der Fehlerrechnung resultierenden Genauigkeits- und Zuverlässigkeitsangaben als Entscheidungskriterien für die Integration in einer anwendungsspezifischen Verarbeitungskette Verwendung finden. Die Validierung des Verfahrens zeigte, dass die Einführung komplementärer Informationen eine genauere und zuverlässigere Lösung des Korrespondenzproblems bringt, vor allem bei schwierigen Kontrastverhältnissen in einem Kanal. Die Genauigkeit der direkt mit den Distanzkorrekturtermen verknüpften Maßstabs- und Neigungsparameter verbesserte sich deutlich. Darüber hinaus brachte die Erweiterung des geometrischen Modells insbesondere bei der Zuordnung natürlicher, nicht gänzlich ebener Oberflächensegmente signifikante Vorteile. Die entwickelte flächenbasierte Methode zur Objektzuordnung und Objektverfolgung arbeitet auf der Grundlage berührungslos aufgenommener 3D-Kameradaten. Sie ist somit besonders für Aufgabenstellungen der 3D-Bewegungsanalyse geeignet, die den Mehraufwand einer multiokularen Experimentalanordnung und die Notwendigkeit einer Objektsignalisierung mit Zielmarken vermeiden möchten. Das Potential des 3D-Kamerazuordnungsansatzes wurde an zwei Anwendungsszenarien der menschlichen Verhaltensforschung demonstriert. 
2,5D-LST kam zur Bestimmung der interpersonalen Distanz und Körperorientierung im erziehungswissenschaftlichen Untersuchungsgebiet der Konfliktregulation befreundeter Kindespaare ebenso zum Einsatz wie zur Markierung und anschließenden Klassifizierung von Bewegungseinheiten sprachbegleitender Handgesten. Die Implementierung von 2,5D-LST in die vorgeschlagenen Verfahren ermöglichte eine automatische, effektive, objektive sowie zeitlich und räumlich hochaufgelöste Erhebung und Auswertung verhaltensrelevanter Daten. Die vorliegende Dissertation schlägt die Verwendung einer neuartigen 3D-Tiefenbildkamera zur Erhebung menschlicher Verhaltensdaten vor. Sie präsentiert sowohl ein zur Datenaufbereitung entwickeltes Kalibrierwerkzeug als auch eine Methode zur berührungslosen Bestimmung dichter 3D-Bewegungsvektorfelder. Die Arbeit zeigt, dass die Methoden der Photogrammetrie auch für bewegungsanalytische Aufgabenstellungen auf dem bisher noch wenig erschlossenen Gebiet der Verhaltensforschung wertvolle Ergebnisse liefern können. Damit leistet sie einen Beitrag für die derzeitigen Bestrebungen in der automatisierten videographischen Erhebung von Körperbewegungen in dyadischen Interaktionen. / The three-dimensional documentation of the form and location of any type of object using flexible photogrammetric methods and procedures plays a key role in a wide range of technical-industrial and scientific areas of application. Potential applications include measurement tasks in the automotive, machine building and ship building sectors, the compilation of complex 3D models in the fields of architecture, archaeology and monumental preservation and motion analyses in the fields of flow measurement technology, ballistics and medicine. In the case of close-range photogrammetry a variety of optical 3D measurement systems are used. Area sensor cameras arranged in single or multi-image configurations are used besides active triangulation procedures for surface measurement (e.g. using structured light or laser scanner systems). The use of modulation techniques enables 3D cameras based on photomix detectors or similar principles to simultaneously produce both a grey value image and a range image. Functioning as single image sensors, they deliver spatially resolved surface data at video rate without the need for stereoscopic image matching. In the case of 3D motion analyses in particular, this leads to considerable reductions in complexity and computing time. 3D cameras combine the practicality of a digital camera with the 3D data acquisition potential of conventional surface measurement systems. Despite the relatively low spatial resolution currently achievable, as a monosensory real-time depth image acquisition system they represent an interesting alternative in the field of 3D motion analysis. The use of 3D cameras as measuring instruments requires the modelling of deviations from the ideal projection model, and indeed the processing of the 3D camera data generated requires the targeted adaptation, development and further development of procedures in the fields of computer graphics and photogrammetry. This Ph.D. thesis therefore focuses on the development of methods of sensor calibration and 3D motion analysis in the context of investigations into inter-human motion behaviour. As a result of its intrinsic design and measurement principle, a 3D camera simultaneously provides amplitude and range data reconstructed from a measurement signal. 
The simultaneous integration of all data obtained using a 3D camera into an integrated approach is a logical consequence and represents the focus of current procedural development. On the one hand, the complementary characteristics of the observations made support each other due to the creation of a functional context for the measurement channels, which can be expected to lead to increases in accuracy and reliability. On the other, the expansion of the stochastic model to include variance component estimation ensures that the heterogeneous information pool is fully exploited. The integrated bundle adjustment developed facilitates the definition of precise 3D camera geometry and the estimation of the range-measurement-specific correction parameters required for modelling the linear, cyclic and signal-path-related error components of a distance measurement made using a 3D camera. The integrated calibration routine jointly adjusts the quantities measured in both information channels, and also automatically estimates optimum observation weights. The method is based on the same flexible principle used in self-calibration, does not require spatial object data and therefore foregoes the time-consuming determination of reference distances of superior accuracy. The accuracy analyses carried out confirm the correctness of the proposed functional contexts, but also exhibit weaknesses in the form of not-yet-parameterized range-measurement-specific errors. This notwithstanding, the future expansion of the mathematical model developed is guaranteed by its adaptivity and modular implementation. The accuracy of a new 3D point coordinate can be stated as 5 mm after calibration. For depth imaging technology, which is influenced by a range of usually simultaneously occurring noise sources, this level of accuracy is very promising, especially in terms of the development of evaluation algorithms based on corrected 3D camera data. 2.5D Least Squares Tracking (LST) is an integrated spatial and temporal matching method developed within the framework of this Ph.D. thesis for the purpose of evaluating 3D camera image sequences. The algorithm is based on the least squares image matching method already established in photogrammetry, and maps small surface segments of consecutive 3D camera data sets onto one another. The mapping rule has been adapted to the data structure of a 3D camera on the basis of a 2D affine transformation. The closed parameterization combines both grey values and range values in an integrated model. In addition to the affine parameters used to capture translation and rotation effects, the scale and inclination parameters model perspective-related deviations caused by distance changes in the line of sight. In a pre-processing phase, the calibration routine developed is used to correct optical and distance-measurement-specific errors in the input data, and the measured slope distances are reduced to horizontal distances. 2.5D LST is an integrated approach, and therefore delivers fully three-dimensional displacement vectors. In addition, the accuracy and reliability measures resulting from error propagation can be used as decision criteria for integration into an application-specific processing chain. Process validation showed that the integration of complementary data leads to a more accurate, reliable solution of the correspondence problem, especially in the case of difficult contrast ratios within a channel.
The accuracy of scale and inclination parameters directly linked to distance correction terms improved dramatically. In addition, the expansion of the geometric model led to significant benefits, and in particular for the matching of natural, not entirely planar surface segments. The area-based object matching and object tracking method developed functions on the basis of 3D camera data gathered without object contact. It is therefore particularly suited to 3D motion analysis tasks in which the extra effort involved in multi-ocular experimental settings and the necessity of object signalling using target marks are to be avoided. The potential of the 3D camera matching approach has been demonstrated in two application scenarios in the field of research into human behaviour. As in the case of the use of 2.5D LST to mark and then classify hand gestures accompanying verbal communication, the implementation of 2.5D LST in the proposed procedures for the determination of interpersonal distance and body orientation within the framework of pedagogical research into conflict regulation between pairs of child-age friends facilitates the automatic, effective, objective and high-resolution (from both a temporal and spatial perspective) acquisition and evaluation of data with relevance to behaviour. This Ph.D. thesis proposes the use of a novel 3D range imaging camera to gather data on human behaviour, and presents both a calibration tool developed for data processing purposes and a method for the contact-free determination of dense 3D motion vector fields. It therefore makes a contribution to current efforts in the field of the automated videographic documentation of bodily motion within the framework of dyadic interaction, and shows that photogrammetric methods can also deliver valuable results within the framework of motion evaluation tasks in the as-yet relatively untapped field of behavioural research.
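Schematically, the integrated 2.5D LST model couples the grey-value and range channels through one shared geometric mapping, which is roughly the structure described above; the notation below is illustrative, not the thesis's exact parameterization.

```latex
\begin{align*}
x' &= a_0 + a_1 x + a_2 y, \qquad y' = b_0 + b_1 x + b_2 y
   && \text{(shared 2D affine mapping)} \\
g_{t+1}(x', y') &= r_0 + r_1\, g_t(x, y) + e_g
   && \text{(grey-value observations)} \\
d_{t+1}(x', y') &= s_0 + s_1 x + s_2 y + d_t(x, y) + e_d
   && \text{(range observations: offset and inclination)}
\end{align*}
```

Both residual sets e_g and e_d enter one joint least-squares adjustment, with variance component estimation providing the relative weighting of the two channels.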
50

Ανάπτυξη αποδοτικών παραμετρικών τεχνικών αντιστοίχισης εικόνων με εφαρμογή στην υπολογιστική όραση / Development of efficient parametric image matching techniques with application to computer vision

Ευαγγελίδης, Γεώργιος 12 January 2009 (has links)
Μια από τις συνεχώς εξελισσόμενες περιοχές της επιστήμης των υπολογιστών είναι η Υπολογιστική Όραση, σκοπός της οποίας είναι η δημιουργία έξυπνων συστημάτων για την ανάκτηση πληροφοριών από πραγματικές εικόνες. Πολλές σύγχρονες εφαρμογές της υπολογιστικής όρασης βασίζονται στην αντιστοίχιση εικόνων. Την πλειοψηφία των αλγορίθμων αντιστοίχισης συνθέτουν παραμετρικές τεχνικές, σύμφωνα με τις οποίες υιοθετείται ένα παραμετρικό μοντέλο, το οποίο εφαρμοζόμενο στη μια εικόνα δύναται να παρέχει μια προσέγγιση της άλλης. Στο πλαίσιο της διατριβής μελετάται εκτενώς το πρόβλημα της Στερεοσκοπικής Αντιστοίχισης και το γενικό πρόβλημα της Ευθυγράμμισης Εικόνων. Για την αντιμετώπιση του πρώτου προβλήματος προτείνεται ένας τοπικός αλγόριθμος διαφορικής αντιστοίχισης που κάνει χρήση μιας νέας συνάρτησης κόστους, του Τροποποιημένου Συντελεστή Συσχέτισης (ECC), η οποία ενσωματώνει το παραμετρικό μοντέλο μετατόπισης στον κλασικό συντελεστή συσχέτισης. Η ενσωμάτωση αυτή καθιστά τη νέα συνάρτηση κατάλληλη για εκτιμήσεις ανομοιότητας με ακρίβεια μικρότερη από αυτήν του εικονοστοιχείου. Αν και η συνάρτηση αυτή είναι μη γραμμική ως προς την παράμετρο μετατόπισης, το πρόβλημα μεγιστοποίησης έχει κλειστού τύπου λύση με αποτέλεσμα τη μειωμένη πολυπλοκότητα της διαδικασίας της αντιστοίχισης με ακρίβεια υπο-εικονοστοιχείου. Ο προτεινόμενος αλγόριθμος παρέχει ακριβή αποτελέσματα ακόμα και κάτω από μη γραμμικές φωτομετρικές παραμορφώσεις, ενώ η απόδοσή του υπερτερεί έναντι γνωστών στη διεθνή βιβλιογραφία τεχνικών αντιστοίχισης ενώ φαίνεται να είναι απαλλαγμένος από το φαινόμενο pixel locking. Στην περίπτωση του προβλήματος της ευθυγράμμισης εικόνων, η προτεινόμενη συνάρτηση γενικεύεται με αποτέλεσμα τη δυνατότητα χρήσης οποιουδήποτε δισδιάστατου μετασχηματισμού. Η μεγιστοποίησή της, η οποία αποτελεί ένα μη γραμμικό πρόβλημα, επιτυγχάνεται μέσω της επίλυσης μιας ακολουθίας υπο-προβλημάτων βελτιστοποίησης. Σε κάθε επανάληψη επιβάλλεται η μεγιστοποίηση μιας μη γραμμικής συνάρτησης του διανύσματος διορθώσεων των παραμέτρων, η οποία αποδεικνύεται ότι καταλήγει στη λύση ενός γραμμικού συστήματος. Δύο εκδόσεις του σχήματος αυτού προτείνονται: ο αλγόριθμος Forwards Additive ECC (FA-ECC) και o αποδοτικός υπολογιστικά αλγόριθμος Inverse Compositional ECC (IC-ECC). Τα προτεινόμενα σχήματα συγκρίνονται με τα αντίστοιχα (FA-LK και SIC) του αλγόριθμου Lucas-Kanade, ο οποίος αποτελεί σημείο αναφοράς στη σχετική βιβλιογραφία, μέσα από μια σειρά πειραμάτων. Ο αλγόριθμος FA-ECC παρουσιάζει όμοια πολυπλοκότητα με τον ευρέως χρησιμοποιούμενο αλγόριθμο FA-LΚ και παρέχει πιο ακριβή αποτελέσματα ενώ συγκλίνει με αισθητά μεγαλύτερη πιθανότητα και ταχύτητα. Παράλληλα, παρουσιάζεται πιο εύρωστος σε περιπτώσεις παρουσίας προσθετικού θορύβου, φωτομετρικών παραμορφώσεων και υπερ-μοντελοποίησης της γεωμετρικής παραμόρφωσης των εικόνων. Ο αλγόριθμος IC-ECC κάνει χρήση της αντίστροφης λογικής, η οποία στηρίζεται στην αλλαγή των ρόλων των εικόνων αντιστοίχισης και συνδυάζει τον κανόνα ενημέρωσης των παραμέτρων μέσω της σύνθεσης των μετασχηματισμών. Τα δύο αυτά χαρακτηριστικά έχουν ως αποτέλεσμα τη δραστική μείωση του υπολογιστικού κόστους, ακόμα και σε σχέση με τον SIC αλγόριθμο, με τον οποίο βέβαια παρουσιάζει παρόμοια συμπεριφορά. Αν και ο αλγόριθμος FA-ECC γενικά υπερτερεί έναντι των τριών άλλων αλγορίθμων, η επιλογή μεταξύ των δύο προτεινόμενων σχημάτων εξαρτάται από το λόγο μεταξύ ακρίβειας αντιστοίχισης και υπολογιστικού κόστους. / Computer vision has recently been one of the most active research areas in computer science.
Many modern computer vision applications require the solution of the well-known image registration problem, which consists in finding correspondences between projections of the same scene. The majority of registration algorithms adopt a specific parametric transformation model which, applied to one image, provides an approximation of the other one. Towards the solution of the stereo correspondence problem, where the goal is the construction of the disparity map, a local differential algorithm is proposed which involves a new similarity criterion, the Enhanced Correlation Coefficient (ECC). This criterion is invariant to linear photometric distortions and results from the incorporation of a single-parameter model into the classical correlation coefficient, thus defining a continuous objective function. Although the objective function is non-linear in the translation parameter, its maximization results in a closed-form solution, thus saving much computational burden. The proposed algorithm provides accurate results even under non-linear photometric distortions, and its performance is superior to well-known conventional stereo correspondence techniques. In addition, the proposed technique does not seem to suffer from the pixel-locking effect and even outperforms stereo techniques dedicated to the cancellation of this effect. For the image alignment problem, the maximization of a generalized version of the ECC function that incorporates any 2D warp transformation is proposed. Although this function is a highly non-linear function of the warp parameters, an efficient iterative scheme for its maximization is developed. In each iteration of the new scheme, an efficient approximation of the nonlinear objective function is used, leading to a closed-form solution of low computational complexity. Two different iterative schemes are proposed: the Forwards Additive ECC (FA-ECC) and the Inverse Compositional ECC (IC-ECC) algorithms. The proposed iterative schemes are compared with the corresponding schemes (FA-LK and SIC) of the leading Lucas-Kanade algorithm through a series of experiments. The FA-ECC algorithm makes use of the well-known additive parameter update rule, and its computational cost is similar to that of the widely used FA-LK algorithm. The proposed iterative scheme exhibits increased learning ability, since it converges faster with higher probability. This superiority is retained even in the presence of additive noise and photometric distortion, as well as in cases of over-modelling the geometric distortion of the images. On the other hand, the IC-ECC algorithm makes use of inverse logic by swapping the roles of the images, and adopts the transformation composition update rule. As a consequence of these two choices, the complexity per iteration is drastically reduced, making it the most computationally efficient of the four algorithms mentioned above. However, empirical learning curves and probability-of-convergence scores indicate that the proposed algorithm performs similarly to SIC. Though FA-ECC seems to be clearly the most robust of the above alignment algorithms under realistic conditions, the choice between the two proposed schemes comes down to a trade-off between accuracy and speed.
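For reference, the criterion maximised here takes the following standard form in the related ECC literature, with $\bar{\mathbf{i}}_r$ and $\bar{\mathbf{i}}_w(\mathbf{p})$ the zero-mean vectors of reference and warped image intensities (notation illustrative):

```latex
\[
\rho(\mathbf{p}) \;=\;
\frac{\bar{\mathbf{i}}_r^{\top}\, \bar{\mathbf{i}}_w(\mathbf{p})}
     {\lVert \bar{\mathbf{i}}_r \rVert \, \lVert \bar{\mathbf{i}}_w(\mathbf{p}) \rVert}
\]
```

Mean subtraction and norm division make ρ invariant to bias and gain, i.e. to linear photometric distortions; at each iteration the nonlinear dependence on the parameter correction is approximated so that the update reduces to a closed-form linear solve.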
