  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Privačių ženklų keliamų grėsmių gamintojo ženklams mažinimas / Mitigating the threats that private labels pose to manufacturer brands

Miknevičiūtė, Dovilė 25 November 2009 (has links)
The main aim of this work is to analyse the threats that private labels pose to manufacturer brands and, based on the findings of a study, to describe methods for mitigating these threats. The work consists of three parts: theoretical analysis, situational analysis, and project solutions. Private label products are products sold only in the stores of the retail networks that own these labels. They are controlled by retailers, which hold exclusive rights to these brands. For a long time, private labels were associated with low price and low quality, but the study carried out in this work shows that private label products and their quality are now evaluated quite positively. The consequence of this phenomenon is a new and far more complex situation that poses growing threats to manufacturer brands. The third part presents methods for mitigating the threats that private labels pose.
122

A high-speed Iterative Closest Point tracker on an FPGA platform

Belshaw, Michael Sweeney 16 July 2008 (has links)
The Iterative Closest Point (ICP) algorithm is one of the most commonly used range image processing methods. However, slow operational speeds and high input bandwidths limit the use of ICP in high-speed real-time applications. This thesis presents and examines a novel hardware implementation of a high-speed ICP object tracking system that uses stereo vision disparities as input. Although software ICP trackers already exist, this innovative hardware tracker exploits the efficiencies of custom hardware processing, thus enabling faster high-speed real-time tracking. A custom hardware design has been implemented in an FPGA to handle the inherent bottlenecks that result from the large input and processing bandwidths of the range data. The hardware ICP design consists of four stages: Pre-filter, Transform, Nearest Neighbor, and Transform Recovery. This custom hardware has been implemented and tested on various objects, using both software simulation and hardware tests. Results indicate that the tracker is able to successfully track free-form objects at over 200 frames per second along arbitrary paths. Tracking errors are low, in spite of substantially noisy stereo input. The tracker is able to track stationary paths within 0.42 mm and 1.42 degrees, linear paths within 1.57 mm and 2.80 degrees, and rotational paths within 0.39 degrees of axis error. With data further degraded by occlusion, the tracker can handle 60% occlusion before a slow decline in performance. The high-speed hardware implementation (which uses 16 parallel nearest neighbor circuits) is more than five times faster than the software k-d tree implementation. This tracker has been designed as the hardware component of ‘FastTrack’, a high frame rate stereo vision tracking system that will provide a known object’s pose in real time at 200 frames per second.
This hardware ICP tracker is compact, lightweight, has low power requirements, and can be integrated with the stereo sensor and stereo extraction components of the ‘FastTrack’ system on a single FPGA platform. High-speed object tracking is useful for many innovative applications, including advanced space-based robotics. Because of this project’s success, the ‘FastTrack’ system will be able to aid in performing in-orbit, automated, remote satellite recovery for maintenance. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008-07-15 22:50:30.369
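The four-stage pipeline above is a hardware design, but the core point-to-point ICP loop it accelerates can be sketched in software. The following NumPy sketch is not the thesis implementation: it alternates brute-force nearest-neighbour association with closed-form SVD transform recovery, standing in for the Nearest Neighbor and Transform Recovery stages (the thesis parallelizes the former with 16 circuits).

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Recover the rigid transform (R, t) that maps src onto dst in a
    least-squares sense, via the SVD of the cross-covariance (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour association
    (brute force here) with closed-form transform recovery, and
    accumulate the per-iteration transforms into a total pose."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest neighbour in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

As in any ICP variant, convergence depends on a reasonable initial alignment; the high frame rate of the hardware tracker is precisely what keeps successive poses close enough for this local search to succeed.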
123

MONOCULAR POSE ESTIMATION AND SHAPE RECONSTRUCTION OF QUASI-ARTICULATED OBJECTS WITH CONSUMER DEPTH CAMERA

Ye, Mao 01 January 2014 (has links)
Quasi-articulated objects, such as human beings, are among the most commonly seen objects in our daily lives. Extensive research has been dedicated to 3D shape reconstruction and motion analysis for this type of object for decades. A major motivation is their wide range of applications, such as in entertainment, surveillance and health care. Most existing studies have relied on one or more regular video cameras. In recent years, commodity depth sensors have become more and more widely available. The geometric measurements delivered by these depth sensors provide significantly valuable information for these tasks. In this dissertation, we propose three algorithms for monocular pose estimation and shape reconstruction of quasi-articulated objects using a single commodity depth sensor. These three algorithms achieve shape reconstruction with increasing levels of granularity and personalization. We then further develop a method for highly detailed shape reconstruction based on our pose estimation techniques. Our first algorithm takes advantage of a motion database acquired with an active marker-based motion capture system. This method combines pose detection through nearest neighbor search with pose refinement via non-rigid point cloud registration. It is capable of accommodating different body sizes and achieves more than twice the accuracy of a previous state of the art on a publicly available dataset. The above algorithm performs frame-by-frame estimation and is therefore less prone to tracking failure. Nonetheless, it does not guarantee temporal consistency of either the skeletal structure or the shape, which could be problematic for some applications. To address this problem, we develop a real-time model-based approach for quasi-articulated pose and 3D shape estimation based on the Iterative Closest Point (ICP) principle, with several novel constraints that are critical for the monocular scenario.
In this algorithm, we further propose a novel method for automatic body size estimation that enables it to accommodate different subjects. Due to its local search nature, the ICP-based method can become trapped in local minima in the case of some complex and fast motions. To address this issue, we explore the potential of using a statistical model for soft point-correspondence association. Towards this end, we propose a unified framework based on a Gaussian Mixture Model for joint pose and shape estimation of quasi-articulated objects. This method achieves state-of-the-art performance on various publicly available datasets. Based on our pose estimation techniques, we then develop a novel framework that achieves highly detailed shape reconstruction by only requiring the user to move naturally in front of a single depth sensor. Our experiments demonstrate reconstructed shapes with rich geometric details for various subjects with different apparel. Last but not least, we explore the applicability of our method in two real-world applications. First, we combine our ICP-based method with cloth simulation techniques for virtual try-on. Our system delivers the first promising 3D-based virtual clothing system. Secondly, we explore the possibility of extending our pose estimation algorithms to assist physical therapists in identifying their patients’ movement dysfunctions related to injuries. Our preliminary experiments have demonstrated promising results by comparison with a gold-standard active marker-based commercial system. Throughout the dissertation, we develop various state-of-the-art algorithms for pose estimation and shape reconstruction of quasi-articulated objects by leveraging the geometric information from depth sensors. We also demonstrate their great potential for different real-world applications.
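The move from hard ICP correspondences to Gaussian soft assignments can be illustrated with a small sketch. This is a generic responsibility computation under invented notation, not the dissertation's framework: every target point receives a Gaussian-weighted responsibility over the model points, so the registration no longer commits to a single, possibly wrong, nearest neighbour.

```python
import numpy as np

def soft_correspondences(src, dst, sigma=0.1):
    """Gaussian soft assignment: each row gives, for one src point,
    a normalized responsibility over all dst points. Hard nearest
    neighbour is the sigma -> 0 limit of this weighting."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / w.sum(axis=1, keepdims=True)   # rows sum to 1
```

In a GMM-based registration these responsibilities replace the hard matches inside each update step, which is what smooths the objective and helps the optimizer escape the local minima that trap plain ICP.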
124

IMU-baserad skattning av verktygets position och orientering hos industrirobot / IMU-based Robot Tool Pose Estimation

Norén, Johan January 2014 (has links)
Industrial robots are a well-established part of modern automation and production. Their uses are many and include repetitive tasks as well as tasks hazardous to humans, such as painting, spot welding and material handling. One problem in robotics is to estimate the position and orientation of the robot's tool (end effector) sufficiently well. This thesis aims to present estimation methods based on measurements from an Inertial Measurement Unit (IMU) mounted on the end effector of the robot. An IMU is a combination unit typically containing accelerometers and gyroscopes; it measures acceleration and rotational speed based on the inertia of bodies. The thesis presents three methods for position and orientation estimation:
one based exclusively on IMU data, dead reckoning, and two filters that combine IMU data with the robot kinematics and measured motor angles, an extended Kalman filter (EKF) and a complementary filter (CF). Results for the estimation methods are shown for experimental data from a high-performance IMU together with an industrial robot with six degrees of freedom.
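Of the three methods, the complementary filter is the simplest to illustrate. The sketch below is a generic one-axis version, not the thesis implementation (which fuses the full robot kinematics): the gyro is integrated for short-term accuracy, while the accelerometer's gravity direction anchors the estimate against the drift that pure dead reckoning accumulates. The blend factor `alpha` is illustrative.

```python
import numpy as np

def complementary_filter(gyro, acc, dt, alpha=0.98):
    """1-axis complementary filter: high-pass the integrated gyro rate,
    low-pass the accelerometer tilt, and blend the two with alpha."""
    theta = 0.0
    out = []
    for w, (ax, az) in zip(gyro, acc):
        theta_acc = np.arctan2(ax, az)      # tilt angle from gravity
        theta = alpha * (theta + w * dt) + (1 - alpha) * theta_acc
        out.append(theta)
    return np.array(out)
```

With a biased or silent gyro, the accelerometer term slowly pulls the estimate toward the true tilt, which is exactly the drift-correction role the EKF plays in a more principled way.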
125

Single View Human Pose Tracking

Li, Zhenning January 2013 (has links)
Recovery of human pose from videos has become a highly active research area in the last decade because of many attractive potential applications, such as surveillance, non-intrusive motion analysis and natural human machine interaction. Video based full body pose estimation is a very challenging task, because of the high degree of articulation of the human body, the large variety of possible human motions, and the diversity of human appearances. Methods for tackling this problem can be roughly categorized as either discriminative or generative. Discriminative methods can work on single images, and are able to recover the human poses efficiently. However, the accuracy and generality largely depend on the training data. Generative approaches usually formulate the problem as a tracking problem and adopt an explicit human model. Although arbitrary motions can be tracked, such systems usually have difficulties in adapting to different subjects and in dealing with tracking failures. In this thesis, an accurate, efficient and robust human pose tracking system from a single view camera is developed, mainly following a generative approach. A novel discriminative feature is also proposed and integrated into the tracking framework to improve the tracking performance. The human pose tracking system is proposed within a particle filtering framework. A reconfigurable skeleton model is constructed based on the Acclaim Skeleton File convention. A basic particle filter is first implemented for upper body tracking, which fuses time efficient cues from monocular sequences and achieves real-time tracking for constrained motions. Next, a 3D surface model is added to the skeleton model, and a full body tracking system is developed for more general and complex motions, assuming a stereo camera input. Partitioned sampling is adopted to deal with the high dimensionality problem, and the system is capable of running in near real-time. 
Multiple visual cues are investigated and compared, including a newly developed explicit depth cue. Based on the comparative analysis of cues, which reveals the importance of depth and good bottom-up features, a novel algorithm for detecting and identifying endpoint body parts from depth images is proposed. Inspired by the shape context concept, this thesis proposes a novel Local Shape Context (LSC) descriptor specifically for describing the shape features of body parts in depth images. This descriptor describes the local shape of different body parts with respect to a given reference point on a human silhouette, and is shown to be effective at detecting and classifying endpoint body parts. A new type of interest point is defined based on the LSC descriptor, and a hierarchical interest point selection algorithm is designed to further conserve computational resources. The detected endpoint body parts are then classified according to learned models based on the LSC feature. The algorithm is tested using a public dataset and achieves good accuracy with a 100Hz processing speed on a standard PC. Finally, the LSC descriptor is improved to be more generalized. Both the endpoint body parts and the limbs are detected simultaneously. The generalized algorithm is integrated into the tracking framework, which provides a very strong cue and enables tracking failure recovery. The skeleton model is also simplified to further increase the system efficiency. To evaluate the system on arbitrary motions quantitatively, a new dataset is designed and collected using a synchronized Kinect sensor and a marker based motion capture system, including 22 different motions from 5 human subjects. The system is capable of tracking full body motions accurately using a simple skeleton-only model in near real-time on a laptop PC before optimization.
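The tracking framework above rests on the bootstrap particle filter, whose predict-weight-resample cycle a minimal one-dimensional sketch can convey. All parameters here are illustrative and not taken from the thesis; the real system fuses multiple image cues and uses partitioned sampling over the high-dimensional skeleton state.

```python
import numpy as np

def particle_filter(observations, n_particles=1000,
                    proc_std=0.1, obs_std=0.2, seed=0):
    """Bootstrap particle filter for a 1D state: propagate particles
    with a random-walk motion model, weight them by the Gaussian
    observation likelihood, estimate, then resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # diffuse prior
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # weight
        w /= w.sum()
        estimates.append(np.sum(w * particles))               # mean
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
    return np.array(estimates)
```

For full-body tracking the state is a whole pose vector rather than a scalar, which is why the thesis needs partitioned sampling: resampling each limb's sub-state separately keeps the particle count manageable.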
126

Ρωμαλέες-χαμηλής πολυπλοκότητας τεχνικές εκτίμησης στάσης κάμερας / Robust, low-complexity camera pose estimation techniques

Σέχου, Αουρέλα 31 August 2012 (has links)
The problem of estimating the position and orientation of a camera from the known 3D coordinates of n scene points and their 2D projections onto the image plane is known in the literature as the Perspective-n-Point (PnP) problem. It arises in many important scientific fields, such as computer vision, robotics, automated mapping and augmented reality, and can be considered a special case of the camera calibration problem. The need for robust and, at the same time, low-complexity real-time solutions to the PnP problem has been highlighted by many researchers in recent years. This thesis provides an in-depth study of the most significant and state-of-the-art techniques proposed in the literature for the camera pose estimation problem.
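One classical baseline among the PnP solutions surveyed here is Direct Linear Transformation (DLT) resectioning, which estimates the full 3x4 projection matrix from n >= 6 correspondences as the null vector of a homogeneous system. The sketch below follows the standard textbook construction and is not drawn from the thesis itself; the point coordinates in any use are illustrative.

```python
import numpy as np

def dlt_camera(X3d, x2d):
    """DLT resectioning: each 3D-2D correspondence (X, x) contributes
    two rows to A p = 0, where p stacks the rows of the 3x4 projection
    matrix P. The solution is the right singular vector of A with the
    smallest singular value, recovering P up to scale."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        Ph = [X, Y, Z, 1.0]
        rows.append([0.0] * 4 + [-c for c in Ph] + [v * c for c in Ph])
        rows.append(Ph + [0.0] * 4 + [-u * c for c in Ph])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)
```

DLT is simple but noise-sensitive and ignores the known intrinsics, which is exactly why the robust, low-complexity methods studied in this thesis exist; it remains useful as a reference solution and initializer.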
127

Learning to Predict Dense Correspondences for 6D Pose Estimation

Brachmann, Eric 06 June 2018 (has links) (PDF)
Object pose estimation is an important problem in computer vision with applications in robotics, augmented reality and many other areas. An established strategy for object pose estimation consists of, firstly, finding correspondences between the image and the object’s reference frame, and, secondly, estimating the pose from outlier-free correspondences using Random Sample Consensus (RANSAC). The first step, namely finding correspondences, is difficult because object appearance varies depending on perspective, lighting and many other factors. Traditionally, correspondences have been established using handcrafted methods like sparse feature pipelines. In this thesis, we introduce a dense correspondence representation for objects, called object coordinates, which can be learned. By learning object coordinates, our pose estimation pipeline adapts to various aspects of the task at hand. It works well for diverse object types, from small objects to entire rooms, varying object attributes, like textured or texture-less objects, and different input modalities, like RGB-D or RGB images. The concept of object coordinates allows us to easily model and exploit uncertainty as part of the pipeline such that even repeating structures or areas with little texture can contribute to a good solution. Although we can train object coordinate predictors independent of the full pipeline and achieve good results, training the pipeline in an end-to-end fashion is desirable. It enables the object coordinate predictor to adapt its output to the specificities of following steps in the pose estimation pipeline. Unfortunately, the RANSAC component of the pipeline is non-differentiable which prohibits end-to-end training. Adopting techniques from reinforcement learning, we introduce Differentiable Sample Consensus (DSAC), a formulation of RANSAC which allows us to train the pose estimation pipeline in an end-to-end fashion by minimizing the expectation of the final pose error.
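The RANSAC scheme that DSAC makes differentiable follows the familiar hypothesize-and-verify loop. A minimal illustration on 2D line fitting is given below, as a stand-in for pose hypotheses from object-coordinate correspondences; it is not the thesis pipeline, and all thresholds are illustrative.

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.05, seed=0):
    """Vanilla RANSAC: repeatedly fit a line to a minimal sample of
    two points, count inliers within `thresh`, keep the best model.
    The argmax over hypotheses is the step DSAC replaces with a
    differentiable soft selection."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        i, j = rng.choice(len(points), 2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                       # skip degenerate sample
        a = (y2 - y1) / (x2 - x1)          # slope
        b = y1 - a * x1                    # intercept
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers
```

The hard `argmax` over hypotheses has zero gradient almost everywhere, which is the obstacle to end-to-end training that DSAC's expectation-based formulation removes.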
128

3D pose estimation of flying animals in multi-view video datasets

Breslav, Mikhail 04 December 2016 (has links)
Flying animals such as bats, birds, and moths are actively studied by researchers wanting to better understand these animals’ behavior and flight characteristics. Towards this goal, multi-view videos of flying animals have been recorded both in laboratory conditions and in natural habitats. The analysis of these videos has shifted over time from manual inspection by scientists to more automated and quantitative approaches based on computer vision algorithms. This thesis describes a study on the largely unexplored problem of 3D pose estimation of flying animals in multi-view video data. This problem has received little attention in the computer vision community, where few flying animal datasets exist. Additionally, published solutions from researchers in the natural sciences have not taken full advantage of advancements in computer vision research. This thesis addresses this gap by proposing three different approaches for 3D pose estimation of flying animals in multi-view video datasets, which evolve from successful pose estimation paradigms used in computer vision. The first approach models the appearance of a flying animal with a synthetic 3D graphics model and then uses a Markov Random Field to model 3D pose estimation over time as a single optimization problem. The second approach builds on the success of Pictorial Structures models and further improves them for the case where only a sparse set of landmarks is annotated in training data. The proposed approach first discovers parts from regions of the training images that are not annotated. The discovered parts are then used to generate more accurate appearance likelihood terms, which in turn produce more accurate landmark localizations. The third approach takes advantage of the success of deep learning models and adapts existing deep architectures to perform landmark localization.
Both the second and third approaches perform 3D pose estimation by first obtaining accurate localization of key landmarks in individual views, and then using calibrated cameras and camera geometry to reconstruct the 3D position of key landmarks. This thesis shows that the proposed algorithms generate first-of-a-kind and leading results on real-world datasets of bats and moths, respectively. Furthermore, a variety of resources are made freely available to the public to further strengthen the connection between research communities.
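The reconstruction step shared by the second and third approaches, recovering a 3D landmark from its localizations in calibrated views, is typically done with linear (DLT) triangulation. A minimal two-view sketch under assumed camera matrices follows; it is a generic illustration, not the thesis code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each view contributes two rows to a
    homogeneous system A X = 0, built from the cross product of the
    image point with the projection; the 3D point is the null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]    # dehomogenize
```

With more than two cameras, which the multi-view animal recordings provide, the system simply gains two rows per extra view and the least-squares null vector averages out localization noise.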
129

Pose Estimation in an Outdoors Augmented Reality Mobile Application

Nordlander, Rickard January 2018 (has links)
This thesis proposes a solution to the pose estimation problem for mobile devices in an outdoors environment. The proposed solution is intended for usage within an augmented reality application to visualize large objects such as buildings. As such, the system needs to provide both accurate and stable pose estimations with real-time requirements. The proposed solution combines inertial navigation for orientation estimation with a vision-based support component to reduce noise from the inertial orientation estimation. A GNSS-based component provides the system with an absolute reference of position. The orientation and position estimation were tested in two separate experiments. The orientation estimate was tested with the camera in a static position and orientation and was able to attain an estimate that is accurate and stable down to a few fractions of a degree. The position estimation was able to achieve centimeter-level stability during optimal conditions. Once the position had converged to a location, it was stable down to a couple of centimeters, which is sufficient for outdoors augmented reality applications.
130

Estimativa da pose da cabeça em imagens monoculares usando um modelo no espaço 3D / Estimation of the head pose based on monocular images

Ramos, Yessenia Deysi Yari January 2013 (has links)
This dissertation presents a new method to accurately compute the head pose in monocular images. The head pose is estimated in the camera coordinate system by comparing the positions of specific facial features with the positions of these features in multiple instances of a prior 3D face model. Given an image containing a face, our method initially locates facial features such as the nose, eyes, and mouth; these features are detected and located using an Active Shape Model for faces, trained on a data set with a variety of head poses. For each face, we obtain a collection of feature locations (i.e. points) in the 2D image space.
These 2D feature locations are then used as references in the comparison with the respective feature locations of multiple instances of our 3D face model, projected onto the same 2D image space. To obtain the depth of every feature point, we use the 3D spatial constraints imposed by our face model (e.g. the eyes are at a certain depth with respect to the nose, and so on). The head pose is estimated by minimizing the comparison error between the 3D feature locations of the face in the image and a given instance of the face model (i.e. a geometric transformation of the face model in the 3D camera space). Our preliminary experimental results are encouraging and indicate that our approach can provide more accurate results than comparable methods available in the literature.
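The idea of comparing projections of multiple model instances can be sketched as a one-parameter search: project a toy 3D feature model at candidate yaw angles and keep the angle whose projection best matches the detected 2D features. The model points, focal length, and depth below are invented for illustration only and are not the dissertation's face model.

```python
import numpy as np

# toy 3D face feature model (model frame): hypothetical coordinates
MODEL = np.array([
    [ 0.0,  0.0,  0.00],    # nose tip
    [-0.3,  0.3, -0.30],    # left eye
    [ 0.3,  0.3, -0.30],    # right eye
    [-0.2, -0.3, -0.25],    # left mouth corner
    [ 0.2, -0.3, -0.25],    # right mouth corner
])

def project(points, yaw, f=500.0, depth=10.0):
    """Rotate the model by `yaw` about the vertical axis, place it
    `depth` units in front of the camera, and apply a pinhole model."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    p = points @ R.T + [0.0, 0.0, depth]
    return f * p[:, :2] / p[:, 2:3]

def estimate_yaw(observed_2d, candidates=np.linspace(-1.0, 1.0, 201)):
    """Pick the model instance (here: a yaw angle) whose projection
    minimizes the squared 2D comparison error against the features."""
    errs = [np.sum((project(MODEL, y) - observed_2d) ** 2)
            for y in candidates]
    return candidates[int(np.argmin(errs))]
```

The dissertation's method optimizes over a full geometric transformation rather than a single angle, but the objective has the same shape: a 2D comparison error between projected model features and detected image features.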
