  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Three dimensional sensing via coloured spots

Davies, Colin J. January 1996
No description available.
2

Stereoskopisk 3D i spel / Stereoscopic 3D in games

Lindström, David, Bennet, Henning January 2015
In this report we investigate stereoscopic 3D. We examine how a game should be adapted to produce as clear a stereoscopic 3D effect as possible, so that the viewer perceives distinct depth without experiencing discomfort caused by the effect. The report looks at the technical aspects that must be considered when developing games in stereoscopic 3D, as well as the performance constraints that should be taken into account. We describe the process of developing the prototype game Kodo with anaglyph stereoscopic 3D and OpenGL; the prototype was then used to test and analyse the resulting stereoscopic 3D effect.
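The Kodo prototype uses anaglyph stereoscopic 3D, in which the two eye views are fused into a single frame by channel mixing so that red-cyan glasses route one view to each eye. A minimal sketch of that channel mixing (the pixel format and function name are illustrative, not from the thesis):

```python
def make_anaglyph(left, right):
    """Compose a red-cyan anaglyph frame from a left/right stereo pair.

    left, right: rows of (r, g, b) tuples rendered from each eye's
    camera. The red channel comes from the left-eye view, green and
    blue from the right-eye view.
    """
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# One-pixel stereo pair: the left eye supplies red, the right eye cyan.
frame = make_anaglyph([[(200, 0, 0)]], [[(0, 150, 100)]])
```

In a real renderer the same mixing is typically done on the GPU after rendering the scene twice from horizontally offset cameras.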
3

Extraction de paramètres bio-geo-physiques de surfaces 3D reconstruites par multi-stéréo-restitution d'images prises sans contraintes / Bio-geo-physics parameters extraction from 3D surface reconstructed from multi-stereoscopic of images acquired without constraint

Petitpas, Benoit 15 December 2011
Extracting measurements from surfaces is a problem in many research fields. The archaism of some systems and the cost of sophisticated devices prevent fast, robust extraction of these parameters, yet they are essential in many domains, such as the roughness parameters involved in many physical phenomena or the dendrometric values used in the study of biodiversity. In parallel, the use and production of 3D content has grown dramatically in recent years across very diverse domains. The purpose of this thesis is to apply these innovations in 3D reconstruction to the measurement of surface parameters. To that end, a complete 3D reconstruction chain is built, using only images taken without constraints so as to be accessible to as many people as possible. The chain uses robust stereo-vision algorithms to produce a point cloud for each pair of images. After these clouds are brought into a common reference frame, the 3D points are filtered and redundancies removed; a smoothing step then yields the final cloud. To support the results obtained, a validation step verifies and studies the robustness of the processing chain. Finally, the roughness and dendrometric parameters are extracted; in both cases we study how to extract this information and how it is used.
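The chain described above filters the 3D points and suppresses redundancy once the per-pair clouds sit in a common frame. The abstract does not name the exact method; a toy sketch of one common choice, voxel-grid deduplication, which keeps a single representative point per cell:

```python
def merge_point_clouds(clouds, voxel=0.01):
    """Merge point clouds already expressed in a common frame,
    keeping one representative point per voxel to drop redundancy.

    clouds: iterable of point lists, each point an (x, y, z) tuple.
    voxel:  cell size in the clouds' units (e.g. metres).
    """
    seen = {}
    for cloud in clouds:
        for p in cloud:
            key = tuple(int(c // voxel) for c in p)  # integer voxel index
            seen.setdefault(key, p)                  # first point per voxel wins
    return list(seen.values())

# Two overlapping clouds: the near-duplicate pair collapses to one point.
cloud_a = [(0.001, 0.0, 0.0), (0.002, 0.0, 0.0)]
cloud_b = [(0.5, 0.0, 0.0)]
merged = merge_point_clouds([cloud_a, cloud_b], voxel=0.01)
```

Averaging the points that fall into each voxel, rather than keeping the first, would double as a simple smoothing step.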
4

Conception et réalisation de caméras plénoptiques pour l'apport d'une vision 3D à un imageur infrarouge mono plan focal / Design and implementation of cooled infrared cameras with single focal plane array depth estimation capability

Cossu, Kevin 23 November 2018
For several years, infrared imaging systems have been following the same miniaturization trend as visible-light cameras. Today this miniaturization is approaching a physical limit, leading the community towards a different approach, functionalization: bringing advanced imaging capabilities, such as 3D imaging, to the system. In the infrared, 3D vision is highly sought after: it could give a soldier a passive telemetry tool that works by night as well as by day, or allow systems such as UAVs to navigate complex environments autonomously. However, high-performance infrared cameras are expensive, so multiplying the number of cameras is not an acceptable solution. This work therefore focuses on bringing 3D vision to cooled infrared cameras with a single focal plane array. During this PhD I identified plenoptic imaging as the 3D technology best suited to this need. I showed that integrating a microlens array inside the dewar yields an infrared detection unit with a 3D imaging function. The sealed environment of the detection unit led me to develop a rigorous design model, which I applied to design and build a cooled infrared plenoptic camera. I then devised an original characterization method and fed its measurements into a series of image-processing algorithms to generate refocused images and recover the distance of the observed objects.
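A plenoptic camera recovers depth from the disparity between the sub-aperture views formed behind neighbouring microlenses. A minimal sketch of the underlying pinhole triangulation relation z = f·B/d (parameter names and values are illustrative, not the thesis' design model):

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole triangulation: z = f * B / d.

    baseline_m:   effective spacing B between two sub-aperture views
                  (for a plenoptic camera, set by the microlens layout).
    focal_px:     focal length f expressed in pixels.
    disparity_px: shift d of a feature between the two views, in pixels.
    Returns the depth z in metres.
    """
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity_px

# A 2 px disparity with a 1 mm baseline and a 1000 px focal length
# places the point 0.5 m from the camera.
z = depth_from_disparity(2.0, 0.001, 1000.0)
```

The inverse relation also shows why depth resolution degrades quadratically with distance: a fixed disparity-measurement error maps to an ever larger depth error as z grows.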
5

Prosodic Writing with 2D- and 3D-fonts: An approach to integrate pronunciation in writing systems

Rude, Markus 28 October 2013
No description available.
6

A real time 3D surface measurement system using projected line patterns.

Shen, Anqi January 2010
This thesis is based on a research project to evaluate a quality control system for car component stamping lines. The quality control system measures the abrasion of the stamping tools by measuring the surface of the products, and a 3D vision system is developed for real-time online measurement of the product surface. The thesis has three main research themes. The first is to produce an industrial application: all components of the vision system are selected from industrial products, user application software is developed, and a rich human-machine interface for interacting with the vision system is built, along with a link between the vision system and a control unit for interaction with a production line. The second theme is to enhance the robustness of the 3D measurement. As an industrial product, the system will be deployed in different factories and should be robust against environmental uncertainties. For this purpose, a high signal-to-noise ratio is required, with the light pattern produced by a laser projector. Additionally, multiple height-calculation methods and a spatial Kalman filter are proposed for optimal height estimation. The final theme is to achieve real-time 3D measurement: the vision system is expected to be installed on production lines for online quality inspection, and a new 3D measurement method is developed that combines the spatial binary-coded method with phase-shift methods so that only a single image needs to be captured. / SHRIS (Shanghai Ro-Intelligent System Co., Ltd.)
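The measurement method above builds on phase-shift structured light, whose core step is recovering a wrapped phase per pixel from a sequence of shifted fringe images. The thesis' single-image variant is more involved; a minimal sketch of the textbook four-step case, where the background and modulation terms cancel out:

```python
import math

def wrapped_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from a four-step phase-shift sequence.

    With fringe intensities I_n = A + B*cos(phi + n*pi/2), n = 0..3,
    the background A and modulation B cancel, leaving
    phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    """
    return math.atan2(i3 - i1, i0 - i2)

# Synthesise four shifted samples for a known phase and recover it.
A, B, phi = 0.5, 0.4, 0.8
samples = [A + B * math.cos(phi + n * math.pi / 2) for n in range(4)]
recovered = wrapped_phase(*samples)
```

The recovered phase is only known modulo 2π; combining it with a coarse code, such as the spatial binary coding mentioned in the abstract, is one standard way to unwrap it into an absolute height.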
7

COMPUTER VISION SYSTEMS FOR PRACTICAL APPLICATIONS IN PRECISION LIVESTOCK FARMING

Prajwal Rao (19194526) 23 July 2024
The use of advanced imaging technology and algorithms for managing and monitoring livestock improves various aspects of livestock farming, such as health monitoring, behavioural analysis, early disease detection, feed management, and overall efficiency. Leveraging computer vision techniques such as keypoint detection and depth estimation for these problems helps automate repeatable tasks, which in turn improves farming efficiency. In this thesis, we delve into two main aspects, early disease detection and feed management:
• Phenotyping Ducks using Keypoint Detection: a platform to measure duck phenotypes such as wingspan, back length, and hip width, packaged in an online user interface for ease of use.
• Real-Time Cattle Intake Monitoring Using Computer Vision: a complete end-to-end real-time monitoring system to measure cattle feed intake using stereo cameras.
Furthermore, considering the above implementations and their drawbacks, we propose a cost-effective simulation environment for feed estimation to conduct extensive experiments prior to real-world implementation. This approach allows us to test and refine the computer vision systems under controlled conditions, identify potential issues, and optimize performance without the high costs and risks associated with direct deployment on farms. By simulating various scenarios and conditions, we can gather valuable data, improve algorithm accuracy, and ensure the system's robustness. Ultimately, this preparatory step will facilitate a smoother transition to real-world applications, enhancing the reliability and effectiveness of computer vision in precision livestock farming.
8

Local visual feature based localisation and mapping by mobile robots

Andreasson, Henrik January 2008
This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot’s sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation. The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information. Considering the omni-directional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to the changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach. 
The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.
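The registration methods above pair local visual features across views and use their 3D positions; given such correspondences, the rigid transform between two clouds has a closed-form least-squares solution. A minimal sketch of the standard SVD-based (Kabsch) solution, not the thesis' specific vision-aided 6D method, which additionally weights by the scanner's variance estimates:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    from matched 3D points (Kabsch algorithm via SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts = np.random.default_rng(0).normal(size=(10, 3))
moved = pts @ R_true.T + t_true
R, t = rigid_registration(pts, moved)
```

Because the correspondences come from discriminative local image features, such a closed-form fit needs no initial pose estimate, which matches the property highlighted in the abstract.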
9

Completing unknown portions of 3D scenes by 3D visual propagation

Breckon, Toby P. January 2006
As the requirement for more realistic 3D environments is pushed forward by the computer {graphics | movie | simulation | games} industry, attention turns away from the creation of purely synthetic, artist derived environments towards the use of real world captures from the 3D world in which we live. However, common 3D acquisition techniques, such as laser scanning and stereo capture, are realistically only 2.5D in nature - such that the backs and occluded portions of objects cannot be realised from a single uni-directional viewpoint. Although multi-directional capture has existed for some time, this incurs additional temporal and computational cost with no existing guarantee that the resulting acquisition will be free of minor holes, missing surfaces and the like. Drawing inspiration from the study of human abilities in 3D visual completion, we consider the automated completion of these hidden or missing portions in 3D scenes originally acquired from 2.5D (or 3D) capture. We propose an approach based on the visual propagation of available scene knowledge from the known (visible) scene areas to these unknown (invisible) 3D regions (i.e. the completion of unknown volumes via visual propagation - the concept of volume completion). Our proposed approach uses a combination of global surface fitting, to derive an initial underlying geometric surface completion, together with a 3D extension of nonparametric texture synthesis in order to provide the propagation of localised structural 3D surface detail (i.e. surface relief). We further extend our technique both to the combined completion of 3D surface relief and colour and additionally to hierarchical surface completion that offers both improved structural results and computational efficiency gains over our initial non-hierarchical technique. 
To validate the success of these approaches we present the completion and extension of numerous 2.5D (and 3D) surface examples with relief ranging in natural, man-made, stochastic, regular and irregular forms. These results are evaluated both subjectively within our definition of plausible completion and quantitatively by statistical analysis in the geometric and colour domains.
10

Contribution à la perception visuelle multi-résolution de l’environnement 3D : application à la robotique autonome / Contribution to the visual perception multi-resolution of the 3D environment : application to autonomous robotics

Fraihat, Hossam 19 December 2017
The research work carried out within the framework of this thesis concerns the development of a system for saliency perception in a 3D environment, taking advantage of a pseudo-3D representation. Our contribution and the resulting concept stem from the hypothesis that the depth of an object with respect to the robot is an important factor in saliency detection. On this basis, a salient-vision system for the 3D environment was proposed, designed, and validated on a platform comprising a robot equipped with a pseudo-3D sensor. The concept and its design were first validated on the pseudo-3D KINECT vision system; in a second step, the concept and algorithms were extended to the aforementioned robotic platform. The main contributions of this thesis can be summarized as follows: (A) a state of the art on the various sensors for acquiring depth information, as well as the different methods for detecting 2D and pseudo-3D saliency; (B) the study of a pseudo-3D visual saliency system, realized through the development of a robust algorithm for detecting salient objects in the 3D environment; (C) the implementation of a depth-estimation system in centimetres for the Pepper robot; (D) the implementation of the proposed concepts and methods on that platform. The studies and experimental validations carried out confirmed that the proposed approaches increase the autonomy of robots in a real 3D environment.
