21

Exploring gesture based interaction and visualizations for supporting collaboration

Simonsson Huck, Andreas January 2011 (has links)
This thesis introduces the concept of collaboratively using freehand gestures to interact with visualizations. Working with data and visualizations together with others can be problematic in the traditional desktop setting because of the limited screen size and the single user input device. This thesis therefore suggests a solution that integrates computer vision and gestures with interactive visualizations. The integration resulted in a prototype in which multiple users can interact with the same visualizations simultaneously. The prototype was evaluated and tested with ten potential users. The results from the tests show that using gestures has potential to support collaboration while working with interactive visualizations. The thesis also shows which components are needed to enable gestural interaction with visualizations.
22

A single-chip real-time range finder

Chen, Sicheng 30 September 2004 (has links)
Range finders are widely used in various industrial applications, such as machine vision, collision avoidance, and robotics. Presently most range finders rely either on active transmitters or on sophisticated mechanical controllers and powerful processors to extract range information, which makes them costly, bulky, or slow, and limits their applications. This dissertation is a detailed description of a real-time vision-based range sensing technique and its single-chip CMOS implementation. To the best of our knowledge, this system is the first single-chip vision-based range finder that does not need any mechanical position adjustment, memory, or digital processor. The entire signal processing on the chip is purely analog and occurs in parallel. The chip captures the image of an object and extracts the depth and range information from just a single picture. On-chip, continuous-time, logarithmic photoreceptor circuits are used to couple spatial image signals into the range-extracting processing network. The photoreceptor pixels can adjust their operating regions, simultaneously achieving high sensitivity and wide dynamic range. The image sharpness processor and Winner-Take-All circuits are characterized and analyzed carefully for their temporal bandwidth and detection performance. The mathematical and optical models of the system are built and carefully verified. A prototype based on this technique has been fabricated and tested. The experimental results show that the range finder can achieve acceptable range sensing precision with low cost and excellent speed performance in short-to-medium range coverage. Therefore, it is particularly useful for collision avoidance.
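The work above is an analog CMOS chip, so there is no software to reproduce, but the sharpness-plus-Winner-Take-All idea can be illustrated with a loose software analogue. The sketch below is only an illustration under assumed details (a Laplacian energy measure standing in for the sharpness processor, an argmax standing in for the Winner-Take-All circuit, and a placeholder file name); it omits the optical mapping from the winning position to an actual range value.

```python
# Hypothetical software analogue of the chip's sharpness + Winner-Take-All stages.
# The sharpness measure and the argmax selection are assumptions, not the chip's circuits.
import cv2
import numpy as np

def sharpest_column(gray: np.ndarray) -> int:
    """Return the column index whose local contrast (sharpness) is highest."""
    lap = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)  # second-derivative response
    sharpness_per_column = np.sum(lap ** 2, axis=0)           # sharpness energy of each column
    return int(np.argmax(sharpness_per_column))               # Winner-Take-All analogue

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)           # placeholder image path
col = sharpest_column(img)
# On the chip, the winner position is mapped to a range estimate by the optics;
# here we only illustrate the selection step, not the optical range mapping.
print("winning column:", col)
```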
23

Uncalibrated Vision-Based Control and Motion Planning of Robotic Arms in Unstructured Environments

Shademan, Azad Unknown Date
No description available.
24

Vision-based Strategies for Landing of Fixed Wing Unmanned Aerial Vehicles

Marianandam, Peter Arun January 2015 (has links) (PDF)
Vision-based conventional landing of a fixed-wing UAV is addressed in this thesis. The work includes mathematical modeling, an interface to software for rendering the outside scenery, image processing techniques, control law development, and outdoor experimentation. This research focuses on detecting the lines or edges that flank the landing site, using them as visual cues to extract geometrical parameters such as the line co-ordinates and the line slopes, which are mapped to the control law to align and conventionally land the fixed-wing UAV. Pre-processing and image processing techniques such as Canny edge detection and Hough transforms have been used to detect the runway lines or the edges of a landing strip. A Vision-in-the-Loop Simulation (VILS) setup has been developed on a personal computer or laptop, without any external camera/equipment or networking cables, that enables visual servoing to perform vision-based studies and simulation. UAV mass, inertia, engine, and aero data from the literature have been used along with UAV 6-DOF equations to represent the UAV mathematical model. The UAV model is interfaced to rendering software using UDP data packets via ports, so that the outside scenery is rendered in accordance with the UAV's translation and orientation. Snapshots of the outside scenery, passed through an internet URL using the 'http' protocol, are image-processed to detect the lines and the line parameters for the control. The VILS setup has been used to simulate UAV alignment to the runway and landing. Vision-based alignment is achieved by rolling the UAV such that a landing strip that is off center is brought to the center of the image plane. A two-stage proportional aileron control input using the line co-ordinates has been demonstrated through simulation: first the midpoints of the top ends of the runway lines are brought to the center of the image, and then the midpoints of the bottom ends. A vision-based control for landing has been developed that consists of an elevator command commensurate with the acceptable range of glide slope, followed by a flare command until touchdown, which is a function of the flare height and the height estimated from the 3rd-order polynomial of the runway slope obtained by characterization. The feasibility of using the algorithms for a semi-prepared or unprepared landing strip with no visible runway lines has also been demonstrated. Landing on an empty tract of land and in poor visibility conditions, by synthetically drawing the runway lines based on a single 3rd-order slope vs. height polynomial solution, is also presented. A fixed-area and a dynamic-area search for the Hough peaks in the Hough accumulator array for the correct detection of lines are addressed. A novel technique for crosswind landing, quite different from conventional techniques, has been introduced, using only the aileron control input to correct the drift. Three different strategies using the line co-ordinates and the line slopes, with varying levels of accuracy, have been presented and compared. Data from about 125 landings of a manned, instrumented prototype aircraft have been analysed to corroborate the findings of this research. Outdoor experiments are also conducted to verify the feasibility of using the line detection algorithm in a realistic scenario and to generate experimental evidence for the findings of this research.
Computation time estimates are presented to establish the feasibility of using vision for the problem of conventional landing. The thesis concludes with the findings and directions for future work.
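As a rough illustration of the line-detection front end described above (not the author's implementation), the following Python/OpenCV sketch runs Canny edge detection and a probabilistic Hough transform on a runway image and reports line endpoints and slopes. The file name and all thresholds are illustrative assumptions.

```python
# Minimal sketch: detect candidate runway lines and extract the geometric parameters
# (endpoints and slopes) that a landing control law could be mapped to.
import cv2
import numpy as np

frame = cv2.imread("runway_view.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path
blurred = cv2.GaussianBlur(frame, (5, 5), 0)                  # simple pre-processing
edges = cv2.Canny(blurred, 50, 150)                           # Canny edge detection
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=60, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")
        # The line co-ordinates and slopes are the visual cues referred to in the abstract.
        print((x1, y1), (x2, y2), "slope:", slope)
```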
25

Implementace algoritmu pro vizuální segmentaci www stránek / Implementation of Algorithm for Visual Web Page Segmentation

Popela, Tomáš January 2012 (has links)
Segmentation of WWW pages, or the division of a page into different semantic blocks, is one of the disciplines of information extraction. This master's thesis deals with the Vision-based Page Segmentation (VIPS) method, which divides a page based on the visual properties of its elements. The method is presented in the context of other prominent segmentation procedures. In this work, the key steps that the method consists of are shown and described with examples. The VIPS method needs to cooperate with a WWW page rendering engine in order to obtain the Document Object Model of a page. The thesis presents and describes the four most important engines for the Java programming language. The output of this work is an implementation of the VIPS algorithm in Java using the CSSBox core. The original algorithm implementation from Microsoft's labs is also presented. The different development stages of the library implementing the VIPS method and my approach to its solution are described. At the end of this work, the outcome is demonstrated on the segmentation of several pages.
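The thesis implements VIPS in Java on top of CSSBox; the sketch below deliberately does not use the real CSSBox API. It only illustrates the recursive visual-block extraction idea with a hypothetical Node class whose fields (children, visually_separated, coherent) stand in for the visual properties a rendering engine would supply.

```python
# Illustrative sketch of the VIPS block-extraction idea, not the thesis's Java/CSSBox code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    tag: str
    children: List["Node"] = field(default_factory=list)
    visually_separated: bool = False   # e.g. distinct background, border, or large gap
    coherent: bool = True              # True if the subtree reads as one semantic block

def extract_blocks(node: Node, blocks: List[Node]) -> None:
    """Recursively split the DOM: keep coherent subtrees as blocks, descend otherwise."""
    if node.coherent and not any(c.visually_separated for c in node.children):
        blocks.append(node)            # the whole subtree becomes one visual block
    else:
        for child in node.children:    # otherwise recurse into the children
            extract_blocks(child, blocks)

page = Node("body", children=[Node("div"), Node("div", visually_separated=True)])
found: List[Node] = []
extract_blocks(page, found)
print(len(found), "visual blocks")
```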
26

A Vision-Based Bee Counting Algorithm for Electronic Monitoring of Langsthroth Beehives

Reka, Sai Kiran 01 May 2016 (has links)
An algorithm is presented to count the number of bees in images of Langsthroth hive entrances. The algorithm computes approximate bee counts by adjusting the brightness of the image, cropping a white or green area in the image, removing the background and noise from the cropped area, finding the total number of bee pixels, and dividing that number by the average number of pixels in a single bee. On 1005 images with green landing pads, the algorithm achieved an accuracy of 80% compared with human bee counts. On 776 images with white landing pads, the algorithm achieved an accuracy of 85% compared with human bee counts.
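The counting pipeline described above is simple enough to sketch. The Python/OpenCV code below is an illustration under assumed parameters: the crop rectangle, the threshold, and AVG_BEE_PIXELS are placeholders, not values from the thesis.

```python
# Hedged sketch of the pipeline: brightness adjustment, landing-pad crop,
# background/noise removal, then pixel count divided by average bee size.
import cv2
import numpy as np

AVG_BEE_PIXELS = 450          # assumed average number of pixels per bee

def approximate_bee_count(path: str, pad_rect=(100, 50, 400, 200)) -> float:
    img = cv2.imread(path)
    img = cv2.convertScaleAbs(img, alpha=1.2, beta=10)    # brightness/contrast adjustment
    x, y, w, h = pad_rect
    pad = img[y:y + h, x:x + w]                           # crop the landing-pad region
    gray = cv2.cvtColor(pad, cv2.COLOR_BGR2GRAY)
    # Dark bee bodies against a white/green pad: threshold, then open to remove noise.
    _, mask = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    bee_pixels = int(cv2.countNonZero(mask))
    return bee_pixels / AVG_BEE_PIXELS                    # approximate bee count

print(approximate_bee_count("hive_entrance.png"))         # placeholder image path
```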
27

Policy Hyperparameter Exploration for Behavioral Learning of Smartphone Robots / スマートフォンロボットの行動学習のための方策ハイパーパラメータ探索法

Wang, Jiexin 23 March 2017 (has links)
Kyoto University / 0048 / New system, doctoral program / Doctor of Informatics / Kou No. 20519 / Joho-haku No. 647 / Shinsei||Jo||112 (University Library) / Kyoto University, Graduate School of Informatics, Department of Systems Science / (Chief examiner) Professor Shin Ishii, Professor Toshiharu Sugie, Professor Toshiyuki Ohtsuka, Kenji Doya / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
28

e-DTS 2.0: A Next-Generation of a Distributed Tracking System

Rybarczyk, Ryan Thomas 20 March 2012 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A key component in tracking is identifying relevant data and combining the data in an effort to provide an accurate estimate of both the location and the orientation of an object marker as it moves through an environment. This thesis proposes an enhancement to an existing tracking system, the enhanced distributed tracking system (e-DTS), in the form of e-DTS 2.0, and provides an empirical analysis of these enhancements. The thesis also provides suggestions on future enhancements and improvements. When a camera identifies an object within its field of view, it communicates with a JINI-based service in an effort to expose this information to any client who wishes to consume it. This communication uses the JINI Multicast Lookup Protocol to provide the means for dynamic discovery of sensors as they are added to or removed from the environment during the tracking process. The client can then retrieve this information from the service and perform a fusion technique in an effort to provide an estimate of the marker's current location with respect to a given coordinate system. The coordinate system handoff and transformation is a key component of the e-DTS 2.0 tracking process, as it improves the agility of the system.
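The coordinate-system handoff mentioned above amounts to composing rigid-body transforms so that a marker pose observed in one camera's frame can be re-expressed in a shared world frame. The numpy sketch below illustrates that composition with made-up transform values; it is not e-DTS 2.0 code.

```python
# Illustration of the handoff idea: world <- camera composed with camera <- marker.
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed example values: camera extrinsics and a marker observation in that camera's frame.
T_world_cam = make_transform(np.eye(3), np.array([2.0, 0.5, 3.0]))
T_cam_marker = make_transform(np.eye(3), np.array([0.1, -0.2, 1.5]))

# Handoff: the marker's pose in the world frame is the composition of the two transforms.
T_world_marker = T_world_cam @ T_cam_marker
print("marker position in world frame:", T_world_marker[:3, 3])
```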
29

Vision-Based Precision Landings of a Tailsitter UAV

Millet, Paul Travis 24 November 2009 (has links) (PDF)
We present a method of performing precision landings of a vertical take-off and landing (VTOL) unmanned air vehicle (UAV) with the use of an onboard vision sensor and information about the aircraft's orientation and altitude above ground level (AGL). A method for calculating the 3-dimensional location of the UAV relative to a ground target of interest is presented, as well as a navigational controller to position the UAV above the target. A method is also presented to prevent the UAV from moving in a way that would cause the ground target of interest to go out of view of the UAV's onboard camera. These methods are tested in simulation and in hardware, and the resulting data are shown. Hardware flight testing yielded an average position estimation error of 22 centimeters. The method presented is capable of performing precision landings of VTOL UAVs with sub-meter accuracy.
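The relative-position calculation described above can be illustrated with standard camera geometry: back-project the target pixel through assumed intrinsics, rotate the ray into the navigation frame using the aircraft attitude, and scale it by the AGL altitude under a flat-ground assumption. The sketch below is illustrative only and is not the thesis's implementation; the intrinsics and the nadir-pointing camera are assumptions.

```python
# Minimal flat-ground target localization sketch from a pixel, intrinsics, attitude, and AGL.
import numpy as np

def target_offset_ned(pixel, K, R_cam_to_ned, agl):
    """Return the north/east offset of the target relative to the UAV (flat ground)."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected ray in camera frame
    ray_ned = R_cam_to_ned @ ray_cam                     # rotate ray into the NED frame
    scale = agl / ray_ned[2]                             # intersect the ray with the ground plane
    offset = scale * ray_ned
    return offset[0], offset[1]                          # (north, east) offsets in meters

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                          # assumed pinhole intrinsics
R = np.eye(3)                                            # assumed nadir-pointing camera, level flight
print(target_offset_ned((352, 260), K, R, agl=10.0))
```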
30

Advanced Force Sensing and Novel Microrobotic Mechanisms for Biomedical Applications

Georges Adam (13237722) 12 August 2022 (has links)
Over the years, research and development of micro-force sensing techniques has gained a lot of traction, especially for microrobotic applications such as micromanipulation and biomedical material characterization studies. Moreover, in recent years, new microfabrication techniques have been developed, such as two-photon polymerization (TPP), which enables fast prototyping, high-resolution features, and the utilization of a wide range of materials. In general, the main goals of this work are to improve the resolution and range of novel vision-based force sensors, create microrobotic and micromanipulation systems capable of tackling a multitude of applications, and ensure these systems are flexible and provide a solid foundation for the advancement of the field as a whole.

The current work can be divided into three main parts: (i) a wireless magnetic microrobot with 2D vision-based force sensing, (ii) a 3D vision-based force sensing probe for tethered micromanipulators, and (iii) a micromanipulation system capable of accurately controlling and performing advanced tasks. The vision-based force sensors developed here have resolutions ranging from the mN range down to the sub-µN range, depending on the material used, the geometry, and the overall footprint.

In part (i), the microrobot has been developed mainly for biomedical applications in vitro, with the ability to perform mechanical characterization and microassembly tasks on different rigid and biomedical materials. In part (ii), a similar sensor mechanic is used, but now adapted to a micromanipulation probe, which is able to detect forces in three dimensions and work in dry environments. In conjunction with the micromanipulation system described in part (iii), the system is capable of performing advanced assembly applications, including accurate assembly and 3D mounting of microparts.

With the introduction of TPP technologies to these works, the next generation of adaptable microrobotics and micromanipulation systems for advanced biomedical applications is starting to take shape, ever more versatile, smaller, more accurate, and with more advanced capabilities. This work shows the progression of these overall systems and gives a glimpse of what is possible with TPP and the technologies to come.
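The common thread of the vision-based force sensors described above is that force is inferred from the observed deflection of a calibrated compliant structure. A minimal sketch of that idea follows, with a hypothetical stiffness value and pixel scale rather than numbers from the dissertation.

```python
# Illustration only: vision-based force sensing as Hooke's law applied to a tracked deflection.
# STIFFNESS_N_PER_M and MICRONS_PER_PIXEL are assumed calibration values, not from the thesis.
import numpy as np

STIFFNESS_N_PER_M = 0.85        # assumed calibrated spring constant of the flexure
MICRONS_PER_PIXEL = 0.4         # assumed image scale from camera calibration

def force_from_deflection(tip_px_rest: np.ndarray, tip_px_loaded: np.ndarray) -> float:
    """Estimate the applied force (in newtons) from the tracked tip displacement."""
    deflection_px = np.linalg.norm(tip_px_loaded - tip_px_rest)     # displacement in pixels
    deflection_m = deflection_px * MICRONS_PER_PIXEL * 1e-6         # convert to meters
    return STIFFNESS_N_PER_M * deflection_m                         # F = k * delta

f = force_from_deflection(np.array([210.0, 148.0]), np.array([214.5, 148.0]))
print(f"estimated force: {f * 1e6:.2f} uN")
```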
