About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Digital image based surface modelling

Eberhardt, Joerg January 1998 (has links)
No description available.
2

View-Based Strategies for 3D Object Recognition

Sinha, Pawan, Poggio, Tomaso 21 April 1995 (has links)
A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object-centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity to construct and use object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.
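The view-based proposal is often operationalized as matching or interpolation among stored views (Poggio and Edelman used radial basis functions over training views). The Python sketch below shows only the simplest instance of the idea, recognition as nearest-neighbor matching against the stored views; all names are illustrative placeholders, not code from the paper.

```python
import numpy as np

def view_features(image):
    """Placeholder feature extractor: flattens a 2D view into a vector.
    A real system would use edge or landmark features instead."""
    return np.asarray(image, dtype=float).ravel()

class ViewBasedModel:
    """The 'object model' is nothing but the collection of stored training views."""

    def __init__(self):
        self.views, self.labels = [], []

    def learn(self, image, label):
        self.views.append(view_features(image))
        self.labels.append(label)

    def recognize(self, image):
        # Nearest stored view wins. Error grows as the test viewpoint departs
        # from the training views, which is the behavioral signature of a
        # view-based (rather than object-centered) strategy.
        probe = view_features(image)
        dists = [np.linalg.norm(probe - v) for v in self.views]
        return self.labels[int(np.argmin(dists))]
```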
3

3D Object Detection for Advanced Driver Assistance Systems

Demilew, Selameab 29 June 2021 (has links)
Robust and timely perception of the environment is an essential requirement of all autonomous and semi-autonomous systems. This necessity has been the main factor behind the rapid growth and adoption of LiDAR sensors within the advanced driver assistance system (ADAS) sensor suite. In this thesis, we develop a fast and accurate 3D object detector that converts raw point clouds collected by LiDARs into sparse occupancy cuboids to detect cars and other road users using deep convolutional neural networks. The proposed pipeline reduces the runtime of PointPillars by 43% and performs on par with other state-of-the-art models. We do not gain improvements in speed by compromising the network's complexity and learning capacity but rather through the use of an efficient input encoding procedure. In addition to rigorous profiling on three different platforms, we conduct a comprehensive error analysis and identify the principal sources of error among the predicted attributes. Even though point clouds adequately capture the 3D structure of the physical world, they lack the rich texture information present in color images. In light of this, we explore the possibility of fusing the two modalities with the intent of improving detection accuracy. We present a late fusion strategy that merges the classification head of our LiDAR-based object detector with semantic segmentation maps inferred from images. Extensive experiments on the KITTI 3D object detection benchmark demonstrate the validity of the proposed fusion scheme.
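As a rough sketch of such an input encoding (illustrative only; the ranges and voxel sizes below are common KITTI-style values, not necessarily the thesis's configuration), a raw LiDAR point cloud can be quantized into a sparse binary occupancy grid in a few lines:

```python
import numpy as np

def occupancy_grid(points, lo=(0.0, -40.0, -3.0), hi=(70.4, 40.0, 1.0),
                   voxel=(0.16, 0.16, 0.5)):
    """Quantize an (N, 3) LiDAR point cloud into a binary occupancy grid.

    The appeal of this kind of encoding is its cheapness: no learned
    per-point features, just occupied/empty cells fed to the CNN backbone.
    """
    lo, hi, voxel = map(np.asarray, (lo, hi, voxel))
    keep = np.all((points >= lo) & (points < hi), axis=1)  # crop to the ROI
    idx = ((points[keep] - lo) / voxel).astype(np.int64)   # cell index per point
    grid = np.zeros(np.ceil((hi - lo) / voxel).astype(int), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1              # mark occupied cells
    return grid
```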
4

3D Object Representation and Recognition Based on Biologically Inspired Combined Use of Visual and Tactile Data

Rouhafzay, Ghazal 13 May 2021 (has links)
Recent research makes use of biologically inspired computation and artificial intelligence as efficient means to solve real-world problems. Humans show significant performance in extracting and interpreting visual information. Where visual data is not available, or fails to provide comprehensive information due to occlusions, tactile exploration assists in interpreting and better understanding the environment. This cooperation between the human senses can serve as an inspiration for embedding a higher level of intelligence in computational models. In the first step of this research, computational models of visual attention are explored to determine salient regions on the surface of objects, and two different approaches are proposed. The first approach takes advantage of a series of features that guide human visual attention, namely color, contrast, curvature, edge, entropy, intensity, orientation, and symmetry, which are efficiently integrated to identify salient features on the surface of 3D objects. This model of visual attention also learns to adaptively weight each feature based on ground-truth data to ensure better compatibility with human visual exploration capabilities. The second approach uses a deep Convolutional Neural Network (CNN) for feature extraction from images collected from 3D objects and formulates saliency as a fusion map of the regions the CNN attends to while classifying objects based on their geometrical and semantic characteristics. The main difference between the outcomes of the two algorithms is that the first approach produces saliency spread over the surface of the objects, while the second highlights one or two regions of concentrated saliency. The first approach is therefore an appropriate simulation of the visual exploration of objects, while the second successfully simulates eye fixation locations on objects.
In the second step, the first computational model of visual attention is used to determine scattered salient points on the surface of objects, from which simplified versions of 3D object models are constructed that preserve the objects' important visual characteristics. Subsequently, the thesis focuses on tactile object recognition, leveraging the proposed model of visual attention. Beyond the sensor technologies that are instrumental in ensuring data quality, biological models can also guide the placement of sensors and support selective data sampling strategies that allow an object's surface to be explored faster. The possibility of guiding the acquisition of tactile data based on the identified visually salient features is therefore tested and validated in this research, and different object exploration and data processing approaches were compared to identify the most promising solution.
Our experiments confirm the effectiveness of computational models of visual attention as a guide for data selection, both for simplifying the 3D representation of objects and for enhancing tactile object recognition. In particular, the current research demonstrates that: (1) simplified representations of objects that preserve visually salient characteristics show better compatibility with human visual capabilities than uniformly simplified models, and (2) tactile data acquired based on salient visual features are more informative about the objects' characteristics and can be employed in tactile object manipulation and recognition scenarios.
In the last section, the thesis addresses the transfer of learning from vision to touch. Inspired by biological studies that attest to similarities between the processing of visual and tactile stimuli in the human brain, the thesis studies the possibility of transferring learning from vision to touch using deep learning architectures and proposes a hybrid CNN that handles both visual and tactile object recognition.
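As an illustration of the weighted feature-integration idea behind the first approach, the sketch below fuses normalized per-point conspicuity maps (color, contrast, curvature, and so on) under learned scalar weights. The function name, data layout, and normalization scheme are assumptions made for the sketch, not the thesis's implementation.

```python
import numpy as np

def fused_saliency(feature_maps, weights):
    """Weighted fusion of per-point conspicuity maps into one saliency map.

    feature_maps: dict of name -> (N,) array over N surface points
    weights:      dict of name -> scalar weight (learned from ground truth)
    """
    names = list(feature_maps)
    total = np.zeros_like(feature_maps[names[0]], dtype=float)
    for name in names:
        f = feature_maps[name]
        f = (f - f.min()) / (np.ptp(f) + 1e-9)  # rescale each channel to [0, 1]
        total += weights[name] * f               # learned per-feature weight
    return total / (sum(weights[n] for n in names) + 1e-9)
```

The highest-valued entries of the returned map would then serve as the scattered salient points used for model simplification and for guiding tactile sampling.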
5

CCD based active triangulation for automatic close range monitoring of rock movement

Singh, Rajendra January 1998 (has links)
No description available.
6

Contributions to 3D data processing and social robotics

Escalona, Félix 30 September 2021 (has links)
In this thesis, a study of artificial intelligence applied to 3D data and social robotics is carried out. The first part of the document is dedicated to 3D object recognition, the automatic detection and categorisation of the objects that appear in a scene. This capability is an important need for social robots, as it allows them to understand and interact with their environment. Image-based methods have been widely studied with great results, but they rely on visual features alone and can confuse different objects with similar appearances (e.g., a picture of an object versus the object itself), so 3D data can help improve these systems with topological features. For this part, we present several novel techniques that use pure 3D data. The second part of the thesis concerns mapping of the environment, that is, constructing a map that a robot can use to locate itself. This capability enables a more elaborate navigation strategy, which a social robot can exploit to interact with the different rooms of a house and their objects. In this section, we explore 2D and 3D maps and their refinement with object recognition. Finally, the third part of this work is about social robotics, which focuses on serving people in a caring interaction rather than performing a mechanical task. The previous sections relate to two main capabilities of a social robot; this final section contains a survey of this kind of robot and of other projects that explore further aspects of them.
7

Scene-Dependent Human Intention Recognition for an Assistive Robotic System

Duncan, Kester 17 January 2014 (has links)
In order for assistive robots to collaborate effectively with humans in completing everyday tasks, they must be endowed with the ability to effectively perceive scenes and, more importantly, recognize human intentions. As a result, we present in this dissertation a novel scene-dependent human-robot collaborative system capable of recognizing and learning human intentions based on scene objects, the actions that can be performed on them, and human interaction history. The aim of this system is to reduce the amount of human interaction necessary for communicating tasks to a robot. Accordingly, the system is partitioned into scene understanding and intention recognition modules. For scene understanding, the system is responsible for segmenting objects from captured RGB-D data, determining their positions and orientations in space, and acquiring their category labels. This information is fed into our intention recognition component, where the most likely object and action pair that the user desires is determined. Our contributions to the state of the art are manifold. We propose an intention recognition framework that is appropriate for persons with limited physical capabilities, whereby we do not observe human physical actions for inferring intentions as is commonplace, but rather we only observe the scene. At the core of this framework is our novel probabilistic graphical model formulation entitled Object-Action Intention Networks. These networks are undirected graphical models whose nodes represent object, action, and object feature variables, and whose links indicate some form of direct probabilistic interaction. This setup, in tandem with a recursive Bayesian learning paradigm, enables our system to adapt to a user's preferences. We also propose an algorithm for the rapid estimation of position and orientation values of scene objects from single-view 3D point cloud data using a multi-scale superquadric fitting approach. Additionally, we leverage recent advances in computer vision for an RGB-D object categorization procedure that balances discrimination and generalization, as well as a depth segmentation procedure that acquires candidate objects from tabletops. We demonstrate the feasibility of the collaborative system presented herein by conducting evaluations on multiple scenes composed of objects from 11 categories, along with 7 possible actions and 36 possible intentions. We achieve an approximately 81% reduction in interactions overall after learning, despite changes to scene structure.
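The recursive Bayesian flavor of the preference adaptation can be shown with a toy model in which the posterior over (object, action) pairs after each interaction becomes the prior for the next. This is a deliberate simplification for illustration, not the Object-Action Intention Networks themselves.

```python
class IntentionBelief:
    """Toy recursive-Bayesian preference model over (object, action) pairs."""

    def __init__(self, pairs):
        self.belief = {p: 1.0 / len(pairs) for p in pairs}  # uniform prior

    def update(self, chosen_pair, boost=2.0):
        # Likelihood favors the (object, action) pair the user selected.
        for p in self.belief:
            self.belief[p] *= boost if p == chosen_pair else 1.0
        z = sum(self.belief.values())
        for p in self.belief:
            self.belief[p] /= z  # normalized posterior, reused as next prior

    def most_likely(self):
        return max(self.belief, key=self.belief.get)


# Example: after a few "pick up the cup" interactions, that intention dominates.
b = IntentionBelief([("cup", "pick up"), ("cup", "push"), ("book", "open")])
for _ in range(3):
    b.update(("cup", "pick up"))
print(b.most_likely())  # -> ('cup', 'pick up')
```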
8

3D model of the selected object

Mrůzek, Tomáš January 2021 (has links)
This diploma thesis describes the creation of a 3D model of two objects using laser scanning, and evaluates the accuracy of several data interpretations. The first two interpretations are outputs from the FARO SCENE program; the others are outputs from the TRIMBLE REAL WORKS program. To assess accuracy and veracity, a precise test field of points previously established in the AdMas complex was used. The result of the project is a georeferenced 3D model of the two objects with the surrounding environment.
9

Automated freeform assembly of threaded fasteners

Dharmaraj, Karthick January 2015 (has links)
Over the past two decades, a major part of the manufacturing and assembly market has been driven by customer requirements. Increasing customer demand for personalised products creates demand for smaller batch sizes, shorter production times, lower costs, and the flexibility to produce families of products - or different parts - with the same sets of equipment. Consequently, manufacturing companies have deployed various automation systems and production strategies to improve their resource efficiency and move towards right-first-time production. However, many of these automated systems, which are involved with robot-based, repeatable assembly automation, require component-specific fixtures for accurate positioning and extensive robot programming to achieve flexibility in their production. Threaded fastening operations are widely used in assembly. In high-volume production, fastening processes are commonly automated using jigs, fixtures, and semi-automated tools. This form of automation delivers reliable assembly results at the expense of flexibility and requires component variability to be adequately controlled. In low-volume, high-value manufacturing, on the other hand, fastening processes are typically carried out manually by skilled workers. This research addresses the aforementioned issues by developing a freeform automated threaded-fastener assembly system that uses 3D visual guidance. The proof-of-concept system focuses on picking up fasteners from clutter, identifying a hole feature in an imprecisely positioned target component, and carrying out torque-controlled fastening. This approach achieves flexibility and adaptability without the use of dedicated fixtures or robot programming. This research also investigates and evaluates different 3D imaging technologies to identify the technology suitable for fastener assembly in a non-structured industrial environment. The proposed solution utilises commercially available technologies to enhance the precision and speed of component identification for assembly processes, thereby improving and validating the possibility of reliably implementing this solution in industrial applications. As part of this research, a number of novel algorithms are developed to robustly identify assembly components located in a random environment by enhancing existing methods and technologies within the domain of fastening processes. A bolt identification algorithm was developed to identify bolts located in random clutter by enhancing an existing surface-based matching algorithm. A novel hole feature identification algorithm was developed to detect threaded holes and identify their size and location in 3D. The developed bolt and feature identification algorithms are robust and have the sub-millimetre accuracy required to perform successful fastener assembly under industrial conditions. In addition, the processing time required for these identification algorithms - to identify and localise bolts and hole features - is less than a second, thereby increasing the speed of fastener assembly.
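One plausible geometric building block of such a hole-identification pipeline (an assumption for illustration; the thesis's own algorithms are not reproduced here) is a least-squares circle fit to the boundary points of a candidate hole after they have been projected onto the fitted surface plane. The sketch below shows the classic Kåsa algebraic fit; the recovered radius could then be mapped to a nominal thread size.

```python
import numpy as np

def fit_circle_2d(pts):
    """Kåsa least-squares circle fit to 2D hole-boundary points.

    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
    where r^2 = c + cx^2 + cy^2. Returns (cx, cy, r).
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy, c = sol
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```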
10

Feature extraction from 3D point clouds

Przewodowski Filho, Carlos André Braile 13 March 2018 (has links)
Computer vision is a research field in which images are the main object of study. One of its categories of problems is shape description, and object classification is an important example of an application that uses shape descriptors. Classically, these processes were performed on 2D images. With the large-scale development of new technologies and the affordable price of equipment that generates 3D images, computer vision has adapted to this new scenario, expanding the classic 2D methods to 3D. It is important to highlight, however, that 2D methods depend mostly on variations of illumination and color, while 3D sensors provide depth, 3D structure/shape, and topological information beyond color. Thus, different shape description and robust feature extraction methods were studied, and from these, new feature extraction methods based on 3D data are proposed and described. The results obtained on well-known public datasets demonstrate their efficiency and show that they compete with other state-of-the-art methods in this area: the RPHSD (a method proposed in this dissertation) achieved 85.4% accuracy on the University of Washington RGB-D dataset, the second-best accuracy on this dataset; the COMSD (another proposed method) achieved 82.3% accuracy, ranking seventh; and the CNSD (a third proposed method) ranked ninth. The RPHSD and COMSD methods also have relatively low processing complexity, so they achieve high accuracy with low computing time.
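For context, a classic global 3D shape descriptor of the kind this line of work builds on is the D2 shape distribution, a histogram of distances between random point pairs. The sketch below is purely illustrative and is not one of the proposed RPHSD/COMSD/CNSD descriptors; the resulting vector could be fed to any standard classifier (kNN, SVM, etc.).

```python
import numpy as np

def d2_descriptor(points, bins=32, samples=2000, rng=None):
    """D2 shape distribution of an (N, 3) point cloud.

    Histograms the Euclidean distances between randomly sampled point
    pairs, then normalizes so clouds of different sizes are comparable.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, len(points), samples)
    j = rng.integers(0, len(points), samples)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0, d.max() + 1e-9))
    return hist / hist.sum()  # unit-sum descriptor vector
```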
