41 |
Towards Man-Machine Interfaces: Combining Top-down Constraints with Bottom-up Learning in Facial Analysis. Kumar, Vinay P. 01 September 2002 (has links)
This thesis proposes a methodology for the design of man-machine interfaces by combining top-down and bottom-up processes in vision. From a computational perspective, we propose that the scientific-cognitive question of combining top-down and bottom-up knowledge is similar to the engineering question of labeling a training set in a supervised learning problem. We investigate these questions in the realm of facial analysis. We propose the use of a linear morphable model (LMM) for representing top-down structure and use it to model facial variations such as mouth shape and expression, the pose of faces, and visual speech (visemes). We apply a supervised learning method based on support vector machine (SVM) regression to estimate the parameters of LMMs directly from pixel-based representations of faces. We combine these methods to design new, more self-contained systems for recognizing facial expressions, estimating facial pose, and recognizing visemes.
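The regression idea in this abstract can be sketched in a few lines: learn a direct mapping from pixels to morphable-model coefficients, so no explicit model fitting is needed at test time. Everything below (the synthetic basis, the dimensions, the use of scikit-learn's SVR) is a hypothetical illustration, not the thesis's actual implementation:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical linear morphable model: each face image (flattened to a
# pixel vector) is a mean face plus a linear combination of basis faces.
n_pixels, n_basis, n_faces = 64, 3, 200
mean_face = rng.normal(size=n_pixels)
basis = rng.normal(size=(n_basis, n_pixels))
coeffs = rng.uniform(-1, 1, size=(n_faces, n_basis))   # ground-truth LMM parameters
images = mean_face + coeffs @ basis + 0.01 * rng.normal(size=(n_faces, n_pixels))

# One SVR per model parameter, regressing the coefficient directly
# from the pixel representation of the face.
models = [SVR(kernel="linear", C=10.0).fit(images, coeffs[:, k])
          for k in range(n_basis)]

test_img = mean_face + np.array([0.5, -0.3, 0.8]) @ basis
estimate = np.array([m.predict(test_img[None])[0] for m in models])
print(np.round(estimate, 2))   # close to the true coefficients [0.5, -0.3, 0.8]
```

Because the synthetic generative model here is linear, a linear-kernel SVR recovers the coefficients to within its epsilon tube; the thesis works with real face images, where the mapping is learned the same way but is far less clean.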
|
42 |
Cooperative self-localization in a multi-robot-no-landmark scenario using fuzzy logic. Sinha, Dhirendra Kumar 17 February 2005 (has links)
In this thesis, we develop a method that uses fuzzy logic for cooperative localization. In a group of robots, at a given instant, each robot produces crisp pose estimates for all the other robots. These crisp pose values are converted to fuzzy membership functions based on physical factors such as the robot's acceleration and the separation distance between the two robots. For a given robot, all of these fuzzy estimates are fused using fuzzy fusion techniques to calculate a possibility distribution over pose values. Finally, these possibility distributions are defuzzified to obtain a crisp pose value for each robot. MATLAB code was written to simulate the fuzzy logic algorithm. A Kalman filter approach was also implemented, and the results are compared qualitatively and quantitatively.
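The crisp-to-fuzzy-to-crisp pipeline described above can be sketched on a single pose axis. The triangular membership functions, the max (union) fusion operator and the centroid defuzzifier below are common textbook choices, assumed here for illustration; the thesis's own membership shapes and fusion rules may differ:

```python
import numpy as np

def tri_membership(x, center, width):
    """Triangular fuzzy membership centered on a crisp pose estimate;
    the width models uncertainty (e.g. growing with separation distance)."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, None)

# Hypothetical 1-D pose axis; three robots report crisp estimates of
# one robot's position, each with a different confidence width.
x = np.linspace(0.0, 10.0, 1001)
estimates = [(4.8, 1.0), (5.2, 2.0), (5.0, 0.5)]      # (center, width)
memberships = [tri_membership(x, c, w) for c, w in estimates]

# Fuse with a max (union) operator into one possibility distribution,
fused = np.maximum.reduce(memberships)
# then defuzzify by centroid to recover a single crisp pose value.
crisp = np.sum(x * fused) / np.sum(fused)
print(round(crisp, 2))   # near 5.0, between the three crisp estimates
```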
|
43 |
Local Features for Range and Vision-Based Robotic Automation. Viksten, Fredrik January 2010 (has links)
Robotic automation has been a part of state-of-the-art manufacturing for many decades. Robotic manipulators are used for tasks such as welding, painting and pick-and-place operations. Robotic manipulators are quite flexible and adaptable to new tasks, but a typical robot-based production cell requires extensive specification of the robot motion and construction of tools and fixtures for material handling. This incurs a large cost in both time and money. The task of a vision system in this setting is to simplify the control and guidance of the robot and to reduce the need for supporting material handling machinery. This dissertation examines the performance and properties of current state-of-the-art local features within the setting of object pose estimation. This is done through an extensive set of experiments replicating various potential problems to which a vision system in a robotic cell could be subjected. The dissertation presents new local features which are shown to increase the performance of object pose estimation. A new local descriptor details how to use log-polar sampled image patches for truly rotation-invariant matching. This representation is also extended with a scale-space interest point detector, which in turn makes it very competitive in our experiments. A number of variations of already available descriptors are constructed, resulting in new and competitive features, among them a scale-space based Patch-duplet. In this dissertation a successful vision-based object pose estimation system is extended with multi-cue integration, yielding increased robustness and accuracy. Robustness is increased through algorithmic multi-cue integration, combining the individual strengths of multiple local features. Increased accuracy is achieved by utilizing manipulator movement and applying temporal multi-cue integration. This is implemented using a real flexible robotic manipulator arm.
Besides the work on local features for ordinary image data, a number of local features for range data have also been developed. This dissertation describes the theory behind the scene tensor and its application to the problem of object pose estimation. The scene tensor is a fourth-order tensor representation using projective geometry. It is shown how to use the scene tensor as a detector as well as how to apply it to the task of object pose estimation. The object pose estimation system is extended to work with 3D data. A novel way of handling the sampling of range data when constructing a detector is discussed. A volume rasterization method is presented, and the classic Harris detector is adapted to it. Finally, a novel region detector, called Maximally Robust Range Regions, is presented. All developed detectors are compared in a detector repeatability test.
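The log-polar idea mentioned in this abstract has a neat property worth spelling out: an in-plane rotation of the patch becomes a circular shift along the angle axis of the log-polar resampled grid, so a shift-invariant summary (such as the FFT magnitude over that axis) is rotation-invariant. The sampler below is a hypothetical nearest-neighbour sketch, not the dissertation's descriptor:

```python
import numpy as np

def logpolar_sample(img, cx, cy, n_r=8, n_theta=16, r_max=10.0):
    """Sample an image patch on a log-polar grid around (cx, cy).
    An in-plane rotation of the patch becomes a circular shift
    along the theta axis of the returned (n_r, n_theta) array."""
    rs = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    ts = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    out = np.empty((n_r, n_theta))
    for i, r in enumerate(rs):
        for j, t in enumerate(ts):
            x = int(round(cx + r * np.cos(t)))
            y = int(round(cy + r * np.sin(t)))
            out[i, j] = img[y % img.shape[0], x % img.shape[1]]
    return out

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
patch = logpolar_sample(img, 32, 32)

# Rotation invariance via the FFT magnitude over theta: a circular
# shift changes only the phase of the spectrum, not its magnitude.
descriptor = np.abs(np.fft.fft(patch, axis=1))
shifted = np.roll(patch, 3, axis=1)   # simulate a rotation by 3 angular bins
print(np.allclose(np.abs(np.fft.fft(shifted, axis=1)), descriptor))  # True
```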
|
44 |
Automated Pose Correction for Face Recognition. Godzich, Elliot J. 01 January 2012 (has links)
This paper describes my participation in a MITRE Corporation-sponsored computer science clinic project at Harvey Mudd College as my senior project. The goal of the project was to implement a landmark-based pose correction system as a component in a larger, existing face recognition system. My main contribution to the project was the implementation of the Active Shape Models (ASM) algorithm; the inner workings of ASM are explained, as well as how the pose correction system makes use of it. Included is the most recent draft (as of this writing) of the final report that my teammates and I produced, highlighting the year's accomplishments. Even though there are few quantitative results to show because the clinic program is ongoing, our qualitative results are quite promising.
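The step at the heart of ASM, as usually presented, is the shape constraint: a candidate set of landmarks is projected into a PCA shape subspace and its coefficients are clamped to plausible limits, so the fitted shape can only deform in ways seen during training. A minimal sketch of that one step, with invented dimensions and eigenvalues (not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(2)
n_landmarks, n_modes = 10, 3
mean_shape = rng.normal(size=2 * n_landmarks)            # (x, y) pairs, stacked
P = np.linalg.qr(rng.normal(size=(2 * n_landmarks, n_modes)))[0]  # orthonormal modes
eigvals = np.array([4.0, 1.0, 0.25])                     # variance of each mode

def constrain(shape):
    """Clamp a candidate shape to the span of plausible model shapes."""
    b = P.T @ (shape - mean_shape)                       # project into model space
    limit = 3.0 * np.sqrt(eigvals)                       # +/- 3 std devs per mode
    b = np.clip(b, -limit, limit)
    return mean_shape + P @ b                            # reconstruct a legal shape

candidate = mean_shape + P @ np.array([1.0, 0.5, 9.0])   # last mode implausibly large
fitted = constrain(candidate)
b_fitted = P.T @ (fitted - mean_shape)
print(np.round(b_fitted, 2))   # third coefficient clamped to 3*sqrt(0.25) = 1.5
```

In the full algorithm this constraint alternates with a local search that moves each landmark toward a strong image edge; only the geometric half is shown here.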
|
45 |
Pose estimation of a VTOL UAV using IMU, Camera and GPS / Position- och orienteringsskattning av en VTOL UAV med IMU, Kamera och GPS. Bodesund, Fredrik January 2010 (has links)
When an autonomous vehicle has a mission to perform, it is of high importance that the robot has good knowledge of its position. Without it, the robot will not be able to navigate properly, and the data that it gathers, which could be of importance for the mission, might not be usable. A helicopter could, for example, be used to collect laser data from the terrain beneath it, which could be used to produce a 3D map of the terrain. If the knowledge of the position and orientation of the helicopter is poor, then the collected laser data will be useless, since it is unknown what the laser actually measures. A successful solution to position and orientation (pose) estimation of an autonomous helicopter, using an inertial measurement unit (IMU), a camera and a GPS, is proposed in this thesis. The problem is to estimate the unknown pose using sensors that measure different physical quantities and give readings containing noise. An extended Kalman filter (EKF) solution to the simultaneous localisation and mapping (SLAM) problem is used to fuse data from the different sensors and estimate the pose of the robot. The scale invariant feature transform (SIFT) is used for feature extraction, and the unified inverse depth parametrisation (UIDP) model is used to parametrise the landmarks. The orientation of the robot is described by quaternions. To evaluate the performance of the filter, an ABB industrial robot was used as reference. The pose of the robot's end tool is known with high accuracy and provides good ground truth against which the estimates can be evaluated. The results show that the algorithm performs well and that the pose is estimated with good accuracy.
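The predict-with-IMU, correct-with-GPS structure of such a filter can be shown in its simplest form. The sketch below is a 1-D linear Kalman filter over [position, velocity], with invented noise levels; the thesis's filter is a full EKF with quaternion orientation, camera landmarks and a far richer state:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
B = np.array([0.5 * dt**2, dt])            # how IMU acceleration enters the state
H = np.array([[1.0, 0.0]])                 # GPS observes position only
Q = 1e-3 * np.eye(2)                       # process noise covariance
R = np.array([[0.5]])                      # GPS measurement noise covariance

x = np.zeros(2)                            # state estimate [pos, vel]
P = np.eye(2)                              # state covariance
rng = np.random.default_rng(3)
true_pos, true_vel, accel = 0.0, 0.0, 0.2  # hypothetical constant acceleration

for _ in range(200):
    true_vel += accel * dt                 # simulated true dynamics
    true_pos += true_vel * dt
    x = F @ x + B * accel                  # predict with the IMU reading
    P = F @ P @ F.T + Q
    z = true_pos + rng.normal(scale=0.5)   # noisy GPS fix
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)    # correct with the GPS fix
    P = (np.eye(2) - K @ H) @ P

print(round(abs(x[0] - true_pos), 2))      # small residual position error
```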
|
47 |
Indoor 3D Mapping using Kinect / Kartering av inomhusmiljöer med Kinect. Bengtsson, Morgan January 2014 (links)
In recent years several depth cameras have emerged on the consumer market, creating many interesting possibilities for both professional and recreational use. One example of such a camera is the Microsoft Kinect sensor, originally used with the Microsoft Xbox 360 game console. In this master's thesis a system is presented that utilizes this device to create an as accurate as possible 3D reconstruction of an indoor environment. The major novelty of the presented system is the data structure, based on signed distance fields and voxel octrees, used to represent the observed environment.
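The signed-distance-field fusion underlying this kind of mapping system is easy to sketch in one dimension: each depth measurement updates the voxels along a camera ray with a truncated signed distance, averaged over observations by weight, and the surface is recovered where the fused field crosses zero. All sizes and noise values below are invented, and the octree structure is omitted for clarity:

```python
import numpy as np

n_voxels, voxel_size, trunc = 100, 0.01, 0.03
tsdf = np.zeros(n_voxels)                   # one ray of voxels, for simplicity
weight = np.zeros(n_voxels)

def integrate(depth_measurement):
    centers = (np.arange(n_voxels) + 0.5) * voxel_size
    sdf = depth_measurement - centers       # + in front of the surface, - behind
    d = np.clip(sdf / trunc, -1.0, 1.0)     # truncate to [-1, 1]
    mask = sdf > -trunc                     # ignore voxels far behind the surface
    w_new = weight[mask] + 1.0              # running weighted average per voxel
    tsdf[mask] = (tsdf[mask] * weight[mask] + d[mask]) / w_new
    weight[mask] = w_new

for z in (0.50, 0.51, 0.49):                # three noisy depth readings of a wall
    integrate(z)

# the reconstructed surface lies where the fused TSDF crosses zero
crossing = np.argmax(tsdf <= 0.0) * voxel_size
print(round(crossing, 2))                   # ~0.5, the true wall depth
```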
|
48 |
Belief driven autonomous manipulator pose selection for less controlled environments. Webb, Stephen Scott, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2008 (links)
This thesis presents a new approach for selecting a manipulator arm configuration (a pose) in an environment where the positions of the work items cannot be fully controlled. The approach utilizes a belief formed from a priori knowledge, observations and predictive models to select manipulator poses and motions. Standard methods for manipulator control provide a fully specified Cartesian pose as the input to a robot controller, which is assumed to act as an ideal Cartesian motion device. While this approach simplifies the controller and makes it more portable, it is not well suited to less-controlled environments, where the work item's position or orientation may not be completely observable and where a measure of the accuracy of the available observations is required. The proposed approach selects a manipulator configuration using two types of rating function. When uncertainty is high, configurations are rated by combining a belief, represented by a probability density function, with a value function in a decision-theoretic manner, enabling selection of the sensor's motion based on its probabilistic contribution to information gain. When uncertainty is low, the mean or mode of the environment-state probability density function is used in task-specific linear or angular distance constraints to map a configuration to a cost. The contribution of this thesis is in providing two formulations that allow joint configurations to be found using non-linear optimization algorithms. The first formulation shows how task-specific linear and angular distance constraints are combined in a cost function to enable a satisfactory pose to be selected. The second formulation is based on the probabilistic belief of the predicted environment state.
This belief is formed by utilizing a Bayesian estimation framework to combine the a priori knowledge with the output of sensor data processing, a likelihood function over the state space, thereby handling the uncertainty associated with sensing in a less controlled environment. Forward models are used to transform the belief to a predicted state which is utilized in motion selection to provide the benefits of a feedforward control strategy. Extensive numerical analysis of the proposed approach shows that using the fed-forward belief improves tracking performance by up to 19%. It is also shown that motion selection based on the dynamically maintained belief reduces time to target detection by up to 50% compared to two other control approaches. These and other results show how the proposed approach is effectively able to utilize an uncertain environment state belief to select manipulator arm configurations.
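The decision-theoretic sensing idea described above can be sketched with a discrete belief: maintain a probability distribution over where the work item might be, and rate each candidate sensing pose by the expected entropy of the posterior it would produce. The idealized binary detect/no-detect sensor and the cell layout below are invented for illustration:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

belief = np.full(10, 0.1)                  # prior over 10 cells
belief[3:6] = 0.2                          # some prior knowledge of likely cells
belief /= belief.sum()

# Each candidate pose lets the sensor check one window of cells with a
# binary detect / no-detect outcome (a hypothetical, idealized sensor).
def expected_entropy(belief, window):
    in_view = np.zeros_like(belief, dtype=bool)
    in_view[window] = True
    p_detect = belief[in_view].sum()
    post_d = np.where(in_view, belief, 0.0)   # posterior if detected
    post_n = np.where(in_view, 0.0, belief)   # posterior if not detected
    h = 0.0
    if p_detect > 0:
        h += p_detect * entropy(post_d / p_detect)
    if p_detect < 1:
        h += (1 - p_detect) * entropy(post_n / (1 - p_detect))
    return h

windows = [slice(0, 3), slice(3, 6), slice(6, 10)]
scores = [expected_entropy(belief, w) for w in windows]
best = int(np.argmin(scores))
print(best)   # 1: looking where the belief mass is cuts entropy the most
```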
|
49 |
3D-Kopfposenschätzung in monochromatischen Videosequenzen geringer Auflösung [3D head-pose estimation in low-resolution monochromatic video sequences]. Gründig, Martin. Unknown Date (links) (PDF)
Techn. Universität, Diss., 2005, Berlin.
|
50 |
Embedded eye-gaze tracking on mobile devices. Ackland, Stephen Marc January 2017 (links)
The eyes are one of the most expressive non-verbal tools a person has, and they can communicate a great deal to the outside world about that person's intentions. Being able to decipher these communications through robust and non-intrusive gaze-tracking techniques is increasingly important as we look toward improving Human-Computer Interaction (HCI). Traditionally, devices which are able to determine a user's gaze are large, expensive and often restrictive. This work investigates the prospect of using common mobile devices such as tablets and phones as an alternative means of obtaining a user's gaze. Mobile devices now often contain high-resolution cameras, and their ever-increasing computational power allows increasingly complex algorithms to be performed in real time. A mobile solution allows us to turn the device into a dedicated portable gaze-tracking device for use in a wide variety of situations. This work looks specifically at where the challenges lie in transitioning current state-of-the-art gaze methodologies to mobile devices and suggests novel solutions to the specific challenges of the medium. In particular, when the mobile device is held in the hands, fast changes in the position and orientation of the user can occur. In addition, since these devices lack technologies ubiquitous in gaze estimation, such as infra-red lighting, novel alternatives are required that work under common everyday conditions. A person's gaze can be determined from both the head pose and the orientation of the eye relative to the head. To meet the challenges outlined, a geometric approach is taken in which a new model for each is introduced; by design the two are completely synchronised through a common origin. First, a novel 3D head-pose estimation model called the 2.5D Constrained Local Model (2.5D CLM) is introduced that directly and reliably obtains the head pose from a monocular camera.
Then, a new model for gaze estimation is introduced: the Constrained Geometric Binocular Model (CGBM), in which the visual ray representing the gaze from each eye is jointly optimised to intersect a known monitor plane in 3D space. The potential of both is that the burden of calibration is placed on the camera and monitor setup, which on mobile devices is fixed and can be determined during factory construction. In turn, the user requires either no calibration or, optionally, a one-time estimation of the visual offset angle. This work details the new models and specifically investigates their applicability and suitability in terms of their potential to be used on mobile platforms.
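The geometric core assumed by such a model, a gaze ray from an eye centre intersected with a known monitor plane, is a standard ray-plane intersection. All coordinate values below are hypothetical; the actual CGBM jointly optimises both eyes' rays rather than intersecting one ray in isolation:

```python
import numpy as np

def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Return the 3-D point where a ray meets a plane (None if parallel)."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction

eye_centre = np.array([0.0, 0.03, 0.30])       # 30 cm in front of the screen
gaze_dir = np.array([0.1, -0.1, -1.0])         # visual ray toward the monitor
gaze_dir /= np.linalg.norm(gaze_dir)
monitor_point = np.zeros(3)                    # monitor plane at z = 0
monitor_normal = np.array([0.0, 0.0, 1.0])

point_of_regard = ray_plane_intersect(eye_centre, gaze_dir,
                                      monitor_point, monitor_normal)
print(np.round(point_of_regard, 3))            # [0.03 0.   0.  ]
```

With a fixed, factory-calibrated camera-monitor geometry, the plane parameters are known constants, which is exactly why the approach can shift the calibration burden away from the user.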
|