1

A novel photogrammetric technique using DLT to measure golf shaft dynamics

Jowett, Simon January 2002 (has links)
No description available.
2

Optical Navigation by recognition of reference labels using 3D calibration of camera.

Anwar, Qaiser January 2013 (has links)
In this thesis a machine vision based indoor navigation system is presented. It relies on rotationally independent, optimized color reference labels and a geometrical camera calibration model that determines a set of camera parameters. Each reference label carries one byte of information (0 to 255), which can be encoded with different values. A Matlab algorithm has been developed so that the machine vision system can recognize an arbitrary number of symbols at different orientations. The camera calibration model describes the mapping between 3-D world coordinates and 2-D image coordinates; the reconstruction system applies the direct linear transform (DLT) method to a set of control reference labels to perform this calibration. A least-squares adjustment method has been developed to calculate the parameters of the machine vision system, and the experiments demonstrate that the camera pose can be calculated with relatively high precision using this least-squares estimation.
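
The abstract does not include the thesis's Matlab source; as an illustrative sketch of the DLT calibration step it describes, a camera projection matrix can be estimated from 3D-2D correspondences of the control reference labels roughly as follows (function name, data layout and the numpy implementation are assumptions, not code from the thesis):

```python
import numpy as np

def dlt_camera_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 correspondences between
    3D world points (X, Y, Z) and 2D image points (u, v), using the classic
    DLT: each correspondence contributes two rows to a homogeneous system
    A p = 0, solved in the least-squares sense via SVD."""
    assert len(world_pts) >= 6 and len(world_pts) == len(image_pts)
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    # Right singular vector of the smallest singular value, reshaped to 3x4.
    return vt[-1].reshape(3, 4)
```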
3

Measurement of range of motion of human finger joints, using a computer vision system

Ben-Naser, Abdusalam January 2011 (has links)
Assessment of finger range of motion (ROM) is often required for monitoring the effectiveness of rehabilitative treatments and for evaluating patients' functional impairment. Several devices are used to measure this motion, such as wire tracing, tracing onto paper, and mechanical and electronic goniometry. With the exception of electronic goniometry these devices are quite cheap; however, their drawbacks are a lack of accuracy and the time-consuming nature of the measurement process. The work described in this thesis considers the design, implementation and validation of a new medical measurement system for evaluating the range of motion of the human finger joints, intended to replace the current measurement tools. The proposed system is a non-contact measurement device based on computer vision technology and has many advantages over the existing devices: it achieves better accuracy, it can be operated by a semi-skilled person, and it saves time for the evaluator. The computer vision system in this study consists of CCD cameras to capture the images, a frame-grabber to convert the analogue signals from the cameras into digital signals that can be manipulated by a computer, ultraviolet (UV) light to illuminate the measurement space, software to process the images and perform the required computation, and a darkened enclosure to accommodate the cameras and UV light and to shield the working area from any undesirable ambient light. Two techniques were used to calibrate the cameras: Direct Linear Transformation and Tsai's method. A calibration piece suited to this application was designed and manufactured, and a steel hand model was used to measure the finger joint angles. The average error in measuring the finger angles with this system was around 1 degree, compared with 5 degrees for the existing techniques.
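
The abstract does not give implementation details; once the calibrated cameras have reconstructed the 3D positions of markers on the hand model, a joint angle can be computed as the angle between the two bone-segment vectors. The sketch below illustrates that step only (function and variable names are hypothetical, not from the thesis):

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Flexion angle at a finger joint, given reconstructed 3D positions of a
    proximal marker, the joint centre, and a distal marker: the angle between
    the two segment vectors meeting at the joint, in degrees."""
    u = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```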
4

Design and Analysis of a Flapping Wing Mechanism for Optimization

George, Ryan Brandon 15 July 2011 (has links) (PDF)
Furthering our understanding of the physics of flapping flight has the potential to benefit the field of micro air vehicles. Advancements in micro air vehicles can benefit applications such as surveillance, reconnaissance, and search and rescue. In this research, the flapping kinematics of a ladybug were explored using a direct linear transformation. A flapping mechanism design is presented that was capable of executing ladybug or other species-specific kinematics. The mechanism was based on a differential gear design, had two wings, and could flap in harsh environments; it served as a test bed for force analysis and optimization studies. The first study used a Box-Behnken screening design to explore the wing-kinematic parameter design space and to search manually in the direction of flapping kinematics that maximized combined lift and thrust. The second study used a Box-Behnken screening design to build a response surface, which was then optimized for maximum combined lift and thrust using gradient-based techniques. The Box-Behnken design coupled with response surface methodology proved an efficient way of exploring the mechanism's force response, and both optimization methods successfully improved the lift and thrust outputs. The results of these studies will aid the design of more efficient micro air vehicles, with the ultimate goal of a better understanding of flapping-wing aerodynamics and the development of aerodynamic models.
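
As a small illustration of the DLT step mentioned above (reconstructing the 3D position of a wing marker from two calibrated camera views), a standard two-view DLT triangulation might look like the following; the projection matrices and pixel coordinates are assumed inputs, and this is not code from the thesis:

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Reconstruct a 3D marker position from its pixel coordinates in two
    calibrated views. P1, P2 are 3x4 projection matrices; uv1, uv2 are the
    (u, v) pixel coordinates of the same marker in each view."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenize to a 3D point
```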
5

Camera Pose Estimation from Lines using Direct Linear Transformation

Přibyl, Bronislav Unknown Date (has links)
This doctoral thesis deals with camera pose estimation from correspondences between 3D and 2D lines, i.e. the Perspective-n-Line (PnL) problem. It focuses on cases with a large number of lines, which can be solved efficiently by methods exploiting a linear formulation of PnL. Until now, only methods working with correspondences between 3D points and 2D lines were known. Based on this observation, two new methods built on the Direct Linear Transformation (DLT) algorithm were proposed: DLT-Plücker-Lines, which works with 3D-2D line correspondences, and DLT-Combined-Lines, which works with both 3D point to 2D line and 3D line to 2D line correspondences. In the latter case, the redundant 3D information is exploited to reduce the minimum number of required line correspondences to 5 and to improve the accuracy of the method. The proposed methods were thoroughly evaluated under various conditions, including simulated and real data, and compared with the best existing PnL methods. DLT-Combined-Lines achieves results better than or comparable to the state of the art while remaining very fast. The thesis also introduces a unified framework for describing DLT-based camera pose estimation methods; both proposed methods are defined within this framework.
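
As a brief illustration of the line representation used by the DLT-Plücker-Lines formulation, the Plücker coordinates of a 3D line can be constructed from two points lying on it; the sketch below is illustrative only and is not taken from the thesis:

```python
import numpy as np

def plucker_line(A, B):
    """Plücker coordinates (direction, moment) of the 3D line through points
    A and B, returned as a 6-vector defined up to scale."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = B - A              # line direction
    m = np.cross(A, B)     # line moment (equivalently A x d)
    return np.hstack([d, m])
```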
6

Vision-based navigation and mapping for flight in GPS-denied environments

Wu, Allen David 15 November 2010 (has links)
Traditionally, the task of determining aircraft position and attitude for automatic control has been handled by the combination of an inertial measurement unit (IMU) with a Global Positioning System (GPS) receiver. In this configuration, accelerations and angular rates from the IMU can be integrated forward in time, and position updates from the GPS can be used to bound the errors that result from this integration. However, reliance on the reception of GPS signals places artificial constraints on aircraft such as small unmanned aerial vehicles (UAVs) that are otherwise physically capable of operation in indoor, cluttered, or adversarial environments. Therefore, this work investigates methods for incorporating a monocular vision sensor into a standard avionics suite. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Once landmarks in an unknown environment have been mapped, subsequent observations by the vision sensor can in turn be used to resolve aircraft position and orientation while new features continue to be mapped. An extended Kalman filter framework for performing the tasks of vision-based mapping and navigation is presented. Feature points are detected in each image using a Harris corner detector, and these feature measurements are corresponded from frame to frame using a statistical Z-test. When GPS is available, sequential observations of a single landmark point allow the point's location in inertial space to be estimated. When GPS is not available, landmarks that have been sufficiently triangulated can be used for estimating vehicle position and attitude. Simulation and real-time flight test results for vision-based mapping and navigation are presented to demonstrate feasibility in real-time applications. These methods are then integrated into a practical framework for flight in GPS-denied environments and verified through the autonomous flight of a UAV during a loss-of-GPS scenario. The methodology is also extended to the application of vehicles equipped with stereo vision systems. This framework enables aircraft capable of hovering in place to maintain a bounded pose estimate indefinitely without drift during a GPS outage.
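
The abstract only names the feature-detection step; as an illustrative sketch (not code from the thesis), Harris corners can be extracted from a grayscale frame with OpenCV along these lines, with the file name and thresholds as placeholders:

```python
import cv2
import numpy as np

# Detect Harris corners in a grayscale frame, as candidate feature points for
# a vision-based mapping filter. File name and thresholds are illustrative.
frame = cv2.imread("frame0001.png", cv2.IMREAD_GRAYSCALE)
response = cv2.cornerHarris(np.float32(frame), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) of strong corners
```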
