  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Facial Expression Recognition by Using Class Mean Gabor Responses with Kernel Principal Component Analysis

Chung, Koon Yin C. 16 April 2010 (has links)
No description available.
2

Ambient-vibration-based Long-term SHM of Bridges Using Two-stage Output-only System Identification / 二段階出力のみのシステム同定による常時振動に基づく橋梁の長期モニタリング

Jiang, Wenjie 25 September 2023 (has links)
Kyoto University / New-system doctoral course / Doctor of Engineering / Degree No. Kou 24895 / Eng. Doc. No. 5175 / 新制||工||1988 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor KIM Chul-Woo, Professor Kunitomo Sugiura, Professor Tomomi Yagi / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
3

Stereo vision and LIDAR based Dynamic Occupancy Grid mapping : Application to scenes analysis for Intelligent Vehicles / Cartographie dynamique occupation grille basée sur la vision stéréo et LIDAR : Application à l'analyse de scènes pour les véhicules intelligents

Li, You 03 December 2013 (has links)
Intelligent vehicles require high-performance perception systems, which usually combine data from multiple sensors such as cameras, 2D/3D lidars and radars. The work presented in this Ph.D. thesis addresses several topics in camera- and lidar-based perception for understanding dynamic scenes in urban environments, and is divided into four parts. In the first part, a stereo-vision-based visual odometry method is proposed; several image feature detectors and feature-point association approaches are compared, and a suitable detector/association pair is selected on the basis of this evaluation to improve the performance of the stereo visual odometry. In the second part, independently moving objects are detected and segmented using the visual odometry results and the U-disparity image. Spatial features are then extracted with a kernel-PCA (KPCA) method, and classifiers are trained on these features to recognize common moving objects such as pedestrians, cyclists and vehicles. The third part is devoted to the extrinsic calibration of a 2D lidar and a stereoscopic system: a common planar chessboard is placed in front of both sensors, and the calibration exploits the geometric relationship between the two cameras of the stereoscopic system; for robustness, the method also integrates a sensor noise model and a Mahalanobis-distance-based optimization. The last part presents dynamic occupancy grid mapping from a 3D reconstruction of the environment, obtained from stereovision and lidar data first separately and then jointly. Accuracy is improved by estimating the pitch angle between the ground plane and the stereoscopic sensor, and the moving-object detection and recognition results from the first and second parts are incorporated into the occupancy grid to attach semantic information to it. All the methods presented in this thesis are tested and evaluated in simulation and on real data acquired with the "intelligent vehicle SetCar" experimental platform of the IRTES-SET laboratory.
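For illustration, the kind of occupancy-grid update described above can be sketched in a few lines of Python. This is a generic log-odds grid fed with 3D obstacle points (as a stereo or lidar reconstruction would provide), not the method developed in the thesis; the grid extent, resolution and log-odds increment are assumed values, and free-space updates by ray casting are omitted.

import numpy as np

# Generic log-odds occupancy grid sketch: project 3D obstacle points
# (e.g. from a stereo or lidar reconstruction) onto a 2D ground-plane grid.
# Grid extent, resolution and the log-odds increment are illustrative
# assumptions; free-space updates (ray casting) are omitted for brevity.

RES = 0.2                      # cell size in metres (assumed)
X_RANGE = (0.0, 40.0)          # forward extent covered by the grid
Y_RANGE = (-20.0, 20.0)        # lateral extent covered by the grid
L_OCC = 0.85                   # log-odds increment per obstacle hit (assumed)

NX = int((X_RANGE[1] - X_RANGE[0]) / RES)
NY = int((Y_RANGE[1] - Y_RANGE[0]) / RES)

def update_grid(log_odds, points_xy):
    """Raise the log-odds of every cell hit by an obstacle point."""
    ix = ((points_xy[:, 0] - X_RANGE[0]) / RES).astype(int)
    iy = ((points_xy[:, 1] - Y_RANGE[0]) / RES).astype(int)
    keep = (ix >= 0) & (ix < NX) & (iy >= 0) & (iy < NY)
    np.add.at(log_odds, (ix[keep], iy[keep]), L_OCC)
    return log_odds

def occupancy_prob(log_odds):
    """Convert accumulated log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

grid = np.zeros((NX, NY))
obstacles = np.random.uniform([5.0, -5.0, 0.0], [15.0, 5.0, 2.0], size=(1000, 3))  # fake obstacle points
grid = update_grid(grid, obstacles[:, :2])
prob_map = occupancy_prob(grid)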
4

A Multilinear (Tensor) Algebraic Framework for Computer Graphics, Computer Vision and Machine Learning

Vasilescu, M. Alex O. 09 June 2014 (has links)
This thesis introduces a multilinear algebraic framework for computer graphics, computer vision, and machine learning, particularly for the fundamental purposes of image synthesis, analysis, and recognition. Natural images result from the multifactor interaction between the imaging process, the scene illumination, and the scene geometry. We assert that a principled mathematical approach to disentangling and explicitly representing these causal factors, which are essential to image formation, is through numerical multilinear algebra, the algebra of higher-order tensors. Our new image modeling framework is based on (i) a multilinear generalization of principal components analysis (PCA), (ii) a novel multilinear generalization of independent components analysis (ICA), and (iii) a multilinear projection for use in recognition that maps images to the multiple causal factor spaces associated with their formation. Multilinear PCA employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the M-mode SVD, while our multilinear ICA method involves an analogous M-mode ICA algorithm. As applications of our tensor framework, we tackle important problems in computer graphics, computer vision, and pattern recognition; in particular, (i) image-based rendering, specifically introducing the multilinear synthesis of images of textured surfaces under varying view and illumination conditions, a new technique that we call "TensorTextures", as well as (ii) the multilinear analysis and recognition of facial images under variable face shape, view, and illumination conditions, a new technique that we call "TensorFaces". In developing these applications, we introduce a multilinear image-based rendering algorithm and a multilinear appearance-based recognition algorithm. As a final, non-image-based application of our framework, we consider the analysis, synthesis and recognition of human motion data using multilinear methods, introducing a new technique that we call "Human Motion Signatures".
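As a rough illustration of the multilinear PCA idea summarized above, the following sketch computes an M-mode SVD (HOSVD-style decomposition) of a data tensor with NumPy: each mode is unfolded into a matrix, the leading left singular vectors of that unfolding become the mode's factor matrix, and the core tensor is obtained by projecting the data onto those factors. The tensor shape (pixels x people x views x illuminations) and the retained ranks are assumptions made for the example in the spirit of TensorFaces, not data from the thesis.

import numpy as np

# Sketch of an M-mode SVD (HOSVD-style multilinear PCA): one factor
# matrix per mode, obtained from the SVD of the mode-n unfolding, plus
# the core tensor. Shapes and ranks below are illustrative assumptions.

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes the rows of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def m_mode_svd(tensor, ranks):
    """Return the core tensor and one truncated factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # multiply the current core by U^T along this mode
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

# toy data tensor: 64 pixels x 10 people x 5 views x 4 illuminations
D = np.random.rand(64, 10, 5, 4)
core, factors = m_mode_svd(D, ranks=(16, 10, 5, 4))
print(core.shape)                     # (16, 10, 5, 4)
print([U.shape for U in factors])     # per-mode factor matrices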