1

Active Illumination for the Real World

Achar, Supreeth 01 July 2017 (has links)
Active illumination systems use a controllable light source and a light sensor to measure properties of a scene. For such a system to work reliably across a wide range of environments, it must be able to handle the effects of global light transport, bright ambient light, interference from other active illumination devices, defocus, and scene motion. The goal of this thesis is to develop computational techniques and hardware arrangements to make active illumination devices based on commodity-grade components that work under real-world conditions. We aim to combine the robustness of a scanning laser rangefinder with the speed, measurement density, compactness, and economy of a consumer depth camera. Towards this end, we have made four contributions. The first is a computational technique for compensating for the effects of motion while separating the direct and global components of illumination. The second is a method that combines triangulation with depth cues from illumination defocus to increase the working range of a projector-camera system. The third is a new active illumination device that can efficiently image the epipolar component of light transport between a source and a sensor. The device can measure depth using active stereo or structured light and is robust to many global light transport effects; most importantly, it works outdoors in bright sunlight despite using a low-power source. Finally, we extend the proposed epipolar-only imaging technique to time-of-flight sensing and build a low-power sensor that is robust to sunlight, global illumination, multi-device interference, and camera shake. We believe that the algorithms and sensors proposed and developed in this thesis could find applications in a diverse set of fields including mobile robotics, medical imaging, gesture recognition, and agriculture.
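The direct–global separation underlying the first contribution can be illustrated with the classic static method of Nayar et al., which this thesis extends with motion compensation. Below is a minimal sketch, assuming a stack of radiometrically linear images captured under shifted high-frequency patterns with roughly half of the source pixels lit; `camera.capture` and `shifted_patterns` are hypothetical placeholders, and the motion handling that is the thesis's actual contribution is not shown.

```python
import numpy as np

def separate_direct_global(images):
    """Split a stack of images taken under shifted high-frequency
    illumination patterns (~half the source pixels lit per pattern)
    into direct and global components (static scene assumed).

    images: float array of shape (num_patterns, H, W), linear radiance.
    """
    i_max = images.max(axis=0)   # scene point lit: direct + ~half the global light
    i_min = images.min(axis=0)   # scene point unlit: ~half the global light only
    direct = i_max - i_min
    global_component = 2.0 * i_min
    return direct, global_component

# Hypothetical usage:
# stack = np.stack([camera.capture(p) for p in shifted_patterns])
# direct, global_component = separate_direct_global(stack)
```

The max/min arithmetic works because, under such patterns, a directly lit point receives its direct light plus about half the global light, while an unlit point still receives about half the global light.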
2

Investigation of the effect of relative humidity on additive manufactured polymers by depth sensing indentation

Altaf, Kazim January 2011 (has links)
Additive manufacturing methods have developed from rapid prototyping techniques and are now being considered as alternatives to conventional manufacturing techniques. Stereolithography is one of the main additive methods and is considered highly accurate and consistent. Polymers are used as stereolithography materials and exhibit features such as a high strength-to-weight ratio, corrosion resistance, ease of manufacturing, and good thermal and electrical resistance properties. However, they are sensitive to environmental factors such as temperature, moisture and UV light, with moisture identified as one of the most important factors affecting their properties. Moisture generally has an adverse effect on the mechanical properties of polymers. The effects of moisture on polymers can be investigated using a number of experimental techniques; the benefits of the depth sensing indentation method over bulk tests include its ability to characterise various mechanical properties in a single test from only a small volume of material, and to reveal spatial variation in mechanical properties near the surface. The aim of this research was to investigate the effects of varying relative humidity on the indentation behaviour of stereolithography polymers and to develop a modelling methodology that can predict this behaviour under various humidities. This was achieved by a combination of experimental and numerical methods. Depth sensing indentation experiments were carried out at 33.5 %, 53.8 %, 75.3 % and 84.5 % RH (relative humidity) at 22.5 °C to investigate the effects of varying humidity on the micron-scale properties of the stereolithography resin Accura 60. To minimise the effects of creep on the calculated properties, appropriate loading and unloading rates with a suitable dwell period were selected, and the indentation data were analysed using the Oliver and Pharr (1992) method. A humidity control unit fitted to the machine was used to condition the samples and regulate humidity during testing. Samples were also preconditioned at 33.5 %, 53.8 %, 75.3 % and 84.5 % RH using saturated salt solutions and were tested at 33.5 % RH using the humidity control unit. Indentation depth increased, and contact hardness and contact modulus decreased, with increasing RH. The samples conditioned and tested using the humidity control unit at high RH showed a greater effect of moisture than the preconditioned samples tested at 33.5 % RH. This was because the samples preconditioned at high RH exhibited surface desorption of moisture when tested at ambient RH, resulting in some recovery of the mechanical properties. To investigate this further, tests were performed periodically on saturated samples after drying. Ten days of drying of samples conditioned for five days at 84.5 % RH provided significant, though not complete, recovery of the mechanical properties. These tests confirmed that Accura 60 is highly hygroscopic, that its mechanical properties are a function of RH, and that removal of moisture leads to a significant recovery of the original mechanical properties.
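For reference, the Oliver and Pharr (1992) analysis mentioned above reduces to a few standard relations. The sketch below is a simplified version assuming an ideal Berkovich tip (area function A = 24.56·h_c², ε = 0.75, β = 1.034) and a linear fit to the top of the unloading curve in place of the method's power-law fit; the sample Poisson's ratio and the diamond-indenter constants are assumed values, not taken from this thesis.

```python
import numpy as np

def oliver_pharr(h_unload, P_unload, nu=0.35,
                 E_i=1141e9, nu_i=0.07, top_fraction=0.2):
    """Estimate contact hardness H and Young's modulus E from the
    unloading segment of a depth-sensing indentation test
    (Oliver & Pharr, 1992), assuming an ideal Berkovich tip.

    h_unload, P_unload: depth (m) and load (N) during unloading.
    nu: assumed Poisson's ratio of the sample.
    """
    P_max = P_unload.max()
    h_max = h_unload[np.argmax(P_unload)]

    # Contact stiffness S = dP/dh from a linear fit to the top of the
    # unloading curve (the full method fits a power law instead).
    sel = P_unload >= (1.0 - top_fraction) * P_max
    S = np.polyfit(h_unload[sel], P_unload[sel], 1)[0]

    h_c = h_max - 0.75 * P_max / S           # contact depth, eps = 0.75
    A = 24.56 * h_c**2                       # ideal Berkovich area function
    H = P_max / A                            # contact hardness
    E_r = np.sqrt(np.pi) / (2 * 1.034) * S / np.sqrt(A)  # reduced modulus
    # Remove the diamond indenter's compliance to recover the sample modulus.
    E = (1 - nu**2) / (1 / E_r - (1 - nu_i**2) / E_i)
    return H, E
```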
3

Recognizing human activity using RGBD data

Xia, Lu, active 21st century 03 July 2014 (has links)
Traditional computer vision algorithms try to understand the world using visible-light cameras. However, this type of data source has inherent limitations. First, visible-light images are sensitive to illumination changes and background clutter. Second, the 3D structural information of the scene is lost when the 3D world is projected onto 2D images, and recovering 3D information from 2D images is a challenging problem. Range sensors, which capture the 3D characteristics of a scene, have existed for over thirty years; however, earlier range sensors were either too expensive, difficult to use in human environments, slow at acquiring data, or provided poor estimates of distance. Recently, easy access to RGBD data at real-time frame rates has been leading to a revolution in perception and has inspired much new research using RGBD data. I propose algorithms to detect persons and understand activities using RGBD data. I demonstrate that solutions to many computer vision problems may be improved with the added depth channel. The 3D structural information may give rise to algorithms with real-time and view-invariant properties in a faster and easier fashion. When both data sources are available, features extracted from the depth channel may be combined with traditional features computed from the RGB channels to build more robust systems with enhanced recognition abilities that can deal with more challenging scenarios. As a starting point, the first problem is to find persons of various poses in the scene, whether moving or static. Localizing humans from RGB images is limited by lighting conditions and background clutter; depth images give alternative ways to find humans in the scene. In the past, detection of humans from range data was usually achieved by tracking, which does not work well for indoor person detection. In this thesis, I propose a model-based approach to detect persons using the structural information embedded in the depth image. I propose a 2D head contour model and a 3D head surface model to look for the head-shoulder part of a person. Then, a segmentation scheme is proposed to segment the full human body from the background and extract the contour. I also give a tracking algorithm based on the detection result. I then turn to recognizing human actions and activities, for which I propose two features. The first is drawn from the skeletal joint locations estimated from a depth image: a compact, view-invariant representation of human posture called histograms of 3D joint locations (HOJ3D), and the whole algorithm runs in real time. This feature may benefit many applications that need a fast estimate of the posture and action of a human subject. The second is a spatio-temporal feature for depth video called the Depth Cuboid Similarity Feature (DCSF). Interest points are extracted using an algorithm that effectively suppresses noise and finds salient human motions; a DCSF is extracted centered on each interest point, and together they describe the video contents. This descriptor can recognize activities with no dependence on skeleton information or on pre-processing steps such as motion segmentation, tracking, or even image de-noising or hole-filling, which makes it flexible and widely applicable.
Finally, all the features developed herein are combined to solve a novel problem: first-person human activity recognition using RGBD data. Traditional activity recognition algorithms focus on recognizing activities from a third-person perspective; I propose to recognize activities from a first-person perspective with RGBD data. This task is novel and extremely challenging due to the large amount of camera motion, arising either from self-exploration or from the response to an interaction. I extract 3D optical flow features as motion descriptors, 3D skeletal joint features as posture descriptors, and spatio-temporal features as local appearance descriptors to describe the first-person videos. To address the ego-motion of the camera, I propose an attention mask that guides the recognition procedure and separates features in the ego-motion region from those in the independent-motion region. The 3D features are very useful for summarizing the discriminative information of the activities. In addition, combining the 3D features with existing 2D features yields more robust recognition results and makes the algorithm capable of dealing with more challenging cases.
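As an illustration of the HOJ3D idea described above, here is a minimal sketch: joints are expressed in spherical coordinates about a reference joint and binned on the sphere. The original descriptor uses a body-aligned coordinate frame and soft Gaussian voting into its bins; this simplified version uses the camera frame, hard binning, and arbitrarily chosen bin counts.

```python
import numpy as np

def hoj3d(joints, center, n_azimuth=12, n_inclination=7):
    """Simplified histogram of 3D joint locations (HOJ3D-style).

    joints: (N, 3) joint positions from a depth-camera skeleton tracker.
    center: (3,) reference joint (e.g., hip center) taken as the origin.
    """
    v = joints - center
    r = np.linalg.norm(v, axis=1) + 1e-9
    azimuth = np.arctan2(v[:, 1], v[:, 0])                 # [-pi, pi]
    inclination = np.arccos(np.clip(v[:, 2] / r, -1, 1))   # [0, pi]
    # Bin the joint directions on the sphere; radial distance is
    # discarded, which keeps the descriptor compact.
    hist, _, _ = np.histogram2d(
        azimuth, inclination,
        bins=[n_azimuth, n_inclination],
        range=[[-np.pi, np.pi], [0, np.pi]])
    return (hist / hist.sum()).ravel()  # normalized posture descriptor
```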
4

Studies In Depth Sensing Indentation

Bobji, M S 12 1900 (has links) (PDF)
No description available.
5

Mechanical characterization of film-substrate systems by instrumented indentation (nanoindentation) in sphere-plane geometry

Oumarou, Noura 06 January 2009 (has links)
Depth sensing indentation (nanoindentation) is an experimental technique increasingly used to assess the mechanical properties of materials (hardness H, Young's modulus E) for which conventional homogeneous mechanical tests are extremely difficult or impossible to perform. The mechanical parameters are obtained from the indentation curve (the plot of load versus penetration depth during both loading and unloading), analysed with one of several methods reported in the literature (Oliver and Pharr; Field and Swain; Doerner and Nix; Loubet et al.), all of which treat the unloading as purely elastic. We performed numerous experiments on homogeneous bulk materials (stainless steels AISI304, AISI316, AISI430; high-speed steel HSS652; silica glass SiO2) as well as on film-substrate systems (TiN/AISI430, TiN/HSS652, TiO2/HSS652). 
Applying the Oliver and Pharr methodology, E and H vary with the applied load, the tip radius, and the percentage of the unloading curve retained for the analysis, as reported in the literature. Moreover, for a film-substrate system the technique yields only composite parameters, which must then be deconvolved to reach the in-situ properties of the film or substrate. To establish a simple strategy for determining the elastic modulus of a hard coating for mechanical applications, we carried out numerical simulations using a code based on the boundary element method. Our numerical investigations of spherical indentation yielded several results useful for analysing experimental data. First, the well-known elastic relation δ = a²/R between the relative approach δ, the projected contact radius a, and the punch radius R remains valid in the plastic range, both for a homogeneous elastoplastic bulk material and for a hard film on an elastoplastic substrate. This allows spherical indentation data to be represented as a curve of mean pressure F/πa² versus indentation strain a/R. At the start of loading, the slope of this curve is proportional to the elastic modulus of the film, while the initial slope of the unloading curve is proportional to the elastic modulus of the substrate. After establishing a relation between the indenter displacement and δ, we derived an indentation analysis procedure and validated it, numerically and experimentally, on data from the indentation of several film-substrate combinations (TiN/AISI430, TiN/HSS652, TiO2/HSS652).
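The analysis strategy described above can be made concrete with a short sketch: using δ = a²/R to recover the contact radius, the raw load-displacement data are recast as mean pressure F/πa² versus indentation strain a/R, whose initial loading slope tracks the film modulus and whose initial unloading slope tracks the substrate modulus. The proportionality constants and the number of points used for the slope fits are not given in the abstract and are assumptions here.

```python
import numpy as np

def pressure_strain_curve(F, delta, R):
    """Recast spherical-indentation data (load F in N, relative approach
    delta in m) as mean pressure vs indentation strain, using the
    relation delta = a**2 / R, reported to hold into the plastic range.
    """
    a = np.sqrt(delta * R)        # projected contact radius
    p_mean = F / (np.pi * a**2)   # mean contact pressure F / (pi a^2)
    strain = a / R                # indentation strain
    return strain, p_mean

def initial_slope(strain, p_mean, n=10):
    """Slope of the first n points of the curve. Per the thesis, this is
    proportional to the film modulus on loading and to the substrate
    modulus on unloading (the constants are not stated in the abstract).
    """
    return np.polyfit(strain[:n], p_mean[:n], 1)[0]
```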
6

Structural Characterization and Thermoelectric Performance of ZrNiSn Half-Heusler Compound Synthesized by Mechanical Alloying

Germond, Jeffrey 14 May 2010 (has links)
Thermoelectric (TE) ZrNiSn samples with a half-Heusler atomic structure were synthesized by mechanical alloying (MA) and consolidated by either spark plasma sintering (SPS) or hot pressing (HP). X-ray diffraction patterns of as-milled powders and consolidated samples were compared and analyzed for phase purity. Thermal conductivity, electrical conductivity, and Seebeck coefficient were measured as functions of temperature in the range 300 K to 800 K and compared with measurements reported for high-temperature solid-state reaction synthesis of this compound. HP samples, compared to SPS samples, exhibit increased grain growth due to longer heating times. The reduced grain size achieved by MA and SPS increases phonon scattering through the larger number of grain boundaries, which lowers the thermal conductivity without doping the base system with additional phonon scattering centers. Mechanical characterization of the samples by microindentation and depth sensing indentation for hardness and elastic modulus is also discussed.
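The three transport quantities measured above combine into the standard dimensionless thermoelectric figure of merit ZT = S²σT/κ, which is why lowering the thermal conductivity through grain-boundary scattering improves performance. A minimal sketch follows; the numbers in the usage comment are illustrative assumptions, not measurements from this thesis.

```python
def figure_of_merit(seebeck, sigma, kappa, T):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

    seebeck: Seebeck coefficient S (V/K)
    sigma:   electrical conductivity (S/m)
    kappa:   total thermal conductivity (W/(m*K))
    T:       absolute temperature (K)
    """
    return seebeck**2 * sigma * T / kappa

# Illustrative (assumed) values for a half-Heusler compound at 700 K:
# figure_of_merit(200e-6, 1.5e5, 4.0, 700) -> approximately 1.05
```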
7

ShapeUD: A Real-time, Modifiable, Tangible Interactive Tabletop System for Collaborative Urban Design

Hui Tang (6861467) 02 August 2019 (has links)
This research developed a real-time, modifiable, tangible interactive tabletop system for participatory urban design. The target user group was stakeholders in urban design charrettes. Previous system solutions overlooked the importance of a modifiable tangible medium for reaching spatial-temporal consensus; these design issues impeded communication between stakeholders and professionals, and users of those systems had difficulty expressing ideas to professionals during the collaborative design process. Literature on evolving technology in the smart-city context, collaborative urban design, embodied interaction, and depth sensing was reviewed to guide the system design. Based on this review, the research identified the pivotal role of a shapeable, tangible medium in the system. The prototype system unified a modifiable, realistic physical model with its digital equivalent for urban analytics in real time. By integrating tangible interaction, depth sensing, and a large touch-screen tabletop, an intuitive, immersive decision-making interface for non-professional stakeholders could be created. During system implementation, system elements centered on ‘tangible interoperability’ were documented along the system pipeline. A heuristic evaluation, a method of usability inspection, was conducted to assess the system and to guide future system design. The results were promising and inspiring. Finally, challenges and directions for system design were discussed. The contributions of this research include discovering a direction, centering tangibility, implementing a prototype, and documenting the elements at each stage of the pipeline for designing a modifiable tangible interactive tabletop system for the urban design charrette.
8

A Novel Approach for Spherical Stereo Vision

Findeisen, Michel 27 April 2015 (has links) (PDF)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the “structured light” principle can measure even homogeneous surfaces and have found their way into the consumer electronics market in the form of Microsoft’s Kinect®. Furthermore, high-resolution optical sensors enable powerful passive stereo vision systems for indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments. However, a constrained field of view remains an essential limitation of all these technologies. For instance, to measure a volume the size of a living room, two to three 3D sensors currently have to be deployed, because the commonly used perspective projection principle constrains the visible area to a field of view of approximately 120°. Novel fish-eye lenses, by contrast, allow the realization of omnidirectional projection models, enlarging the visible field of view to more than 180°. Combined with a 3D measurement approach, the number of sensors required to cover an entire room can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on combining the established stereo vision principle with omnidirectional projection methods; the complete 3D measurement of a living space by means of one single sensor is the major objective. As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application and states the specific purpose of the present work. Chapter 2 then reviews the necessary mathematical foundations of computer vision: based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modeling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 addresses methods for converting between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load; the errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application demonstrates to what extent the use of “virtual views” can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, Chapter 5 conducts an extensive survey of omnidirectional stereo vision techniques. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map; to this end, three cameras have to be combined into a trinocular stereo vision system.
As a basis for further research, a known trinocular stereo vision method is selected. It is hypothesized that the performance can be increased considerably by applying a modified geometric constellation of cameras, more precisely an equilateral triangle, and by using an alternative method to determine the depth map. A novel method is presented that requires fewer operations to calculate the distance information and avoids the computationally costly depth-map fusion step necessary in the comparative method. To evaluate the presented approach as well as the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method; simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to an error estimate. A demonstrator for generating real measurement data is introduced in Chapter 7, together with the methods applied to calibrate the system intrinsically and extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently well. Initial measurements produce a hemispherical depth map and thus confirm the viability of the concept, but also reveal the drawbacks of the calibration used; the current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained over the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations showed a saving of up to 30% in stereo correspondence operations compared with the reference trinocular method. Furthermore, the concept introduced avoids a weighted-averaging step for depth-map fusion based on costly-to-compute precision values, while the achievable accuracy remains comparable for both trinocular approaches. In summary, within the scope of this thesis a measurement system has been developed that has great potential for future application fields in industry, security in public spaces, and home environments.
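For the binocular baseline treated in Chapter 3, the depth and quantization-error relations referred to above follow from simple triangulation in a rectified perspective pair: z = f·b/d, so a disparity step of Δd produces a depth error of roughly z²·Δd/(f·b). A minimal sketch of this perspective case follows (the omnidirectional geometry treated in the thesis modifies these relations); the example numbers are illustrative assumptions.

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth from binocular disparity for a rectified pair: z = f*b/d."""
    return f_px * baseline_m / disparity_px

def depth_quantization_error(f_px, baseline_m, z_m, delta_d=1.0):
    """First-order depth uncertainty for a disparity quantized to
    delta_d pixels: dz ~ z^2 / (f*b) * delta_d, i.e. the error grows
    quadratically with distance."""
    return z_m**2 / (f_px * baseline_m) * delta_d

# Example with assumed parameters f = 800 px, b = 0.1 m: at z = 3 m a
# one-pixel disparity step corresponds to roughly 9 / 80 = 0.11 m of
# depth uncertainty.
```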
9

A Novel Approach for Spherical Stereo Vision

Findeisen, Michel 23 April 2015 (has links)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the “structured light” principle can measure even homogeneous surfaces and have found their way into the consumer electronics market in the form of Microsoft’s Kinect®. Furthermore, high-resolution optical sensors enable powerful passive stereo vision systems for indoor surveillance, opening up new application domains such as security and assistance systems for domestic environments. However, a constrained field of view remains an essential limitation of all these technologies. For instance, to measure a volume the size of a living room, two to three 3D sensors currently have to be deployed, because the commonly used perspective projection principle constrains the visible area to a field of view of approximately 120°. Novel fish-eye lenses, by contrast, allow the realization of omnidirectional projection models, enlarging the visible field of view to more than 180°. Combined with a 3D measurement approach, the number of sensors required to cover an entire room can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on combining the established stereo vision principle with omnidirectional projection methods; the complete 3D measurement of a living space by means of one single sensor is the major objective. As a starting point, Chapter 1 discusses the underlying requirements with reference to various relevant fields of application and states the specific purpose of the present work. Chapter 2 then reviews the necessary mathematical foundations of computer vision: based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modeling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 addresses methods for converting between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load; the errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application demonstrates to what extent the use of “virtual views” can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, Chapter 5 conducts an extensive survey of omnidirectional stereo vision techniques. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map; to this end, three cameras have to be combined into a trinocular stereo vision system.
As a basis for further research, a known trinocular stereo vision method is selected. It is hypothesized that the performance can be increased considerably by applying a modified geometric constellation of cameras, more precisely an equilateral triangle, and by using an alternative method to determine the depth map. A novel method is presented that requires fewer operations to calculate the distance information and avoids the computationally costly depth-map fusion step necessary in the comparative method. To evaluate the presented approach as well as the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method; simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to an error estimate. A demonstrator for generating real measurement data is introduced in Chapter 7, together with the methods applied to calibrate the system intrinsically and extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently well. Initial measurements produce a hemispherical depth map and thus confirm the viability of the concept, but also reveal the drawbacks of the calibration used; the current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarizes the results obtained over the course of the studies and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations showed a saving of up to 30% in stereo correspondence operations compared with the reference trinocular method. Furthermore, the concept introduced avoids a weighted-averaging step for depth-map fusion based on costly-to-compute precision values, while the achievable accuracy remains comparable for both trinocular approaches. In summary, within the scope of this thesis a measurement system has been developed that has great potential for future application fields in industry, security in public spaces, and home environments.
10

Duplex coatings on AISI H13 and AISI D2 tool steels using plasma nitriding and TiN-PVD

Franco Júnior, Adonias Ribeiro 05 August 2003 (has links)
In this work, the influence of the microstructure and load-bearing capacity of nitrided layers, formed on AISI D2 and AISI H13 tool steels, on the adhesion and micro-abrasive wear resistance of PVD-TiN coatings was studied. Nitrided layers of different structures and thicknesses were produced in each steel, the threshold nitriding-potential curves for the onset of white-layer formation at 520 °C were determined experimentally, and the pre-treatment conditions that maximize the adhesion and wear resistance of the PVD-TiN coatings were identified. For the AISI H13 tool steel, longer nitriding pre-treatments (about 11 h) with a lower nitrogen content in the gas mixture (about 5 vol.% N2) were needed to deepen the nitrided layer and thereby increase the load-bearing capacity of the coated system, avoiding the pile-up edges that cause chipping and flaking of the TiN layers. This type of failure persists when the hardened zone is shallow, because the transition in mechanical properties from the TiN coating to the un-nitrided core remains abrupt and the load-bearing capacity of the nitrided layer is still low. Shorter nitriding times (about 42 min) with a lower nitrogen content, on the other hand, were sufficient to improve the adhesion of TiN coatings on the AISI D2 tool steel, since the un-nitrided core of this steel already possesses a reasonable load-bearing capacity. A black layer at the TiN/nitrided-layer interface was observed in all coatings deposited over nitrided layers produced above the threshold nitriding-potential curves; this layer degrades both the micro-abrasive wear resistance and the adhesion of the coatings. When higher loads are applied to the coated surface, “egg-shell”-type failures readily occur.
