291

Q-Learning for Robot Control

Gaskett, Chris, cgaskett@it.jcu.edu.au January 2002 (has links)
Q-Learning is a method for solving reinforcement learning problems. Reinforcement learning problems require improvement of behaviour based on received rewards. Q-Learning has the potential to reduce robot programming effort and increase the range of robot abilities. However, most current Q-learning systems are not suitable for robotics problems: they treat continuous variables, for example speeds or positions, as discretised values. Discretisation does not allow smooth control and does not fully exploit sensed information. A practical algorithm must also cope with real-time constraints, sensing and actuation delays, and incorrect sensor data. This research describes an algorithm that deals with continuous state and action variables without discretising. The algorithm is evaluated with vision-based mobile robot and active head gaze control tasks. As well as learning the basic control tasks, the algorithm learns to compensate for delays in sensing and actuation by predicting the behaviour of its environment. Although the learned dynamic model is implicit in the controller, it is possible to extract some aspects of the model. The extracted models are compared to theoretically derived models of environment behaviour. The difficulty of working with robots motivates development of methods that reduce experimentation time. This research exploits Q-learning's ability to learn by passively observing the robot's actions—rather than necessarily controlling the robot. This is a valuable tool for shortening the duration of learning experiments.
292

Towards an estimation framework for some problems in computer vision.

Gawley, Darren J. January 2004 (has links)
This thesis is concerned with fundamental algorithms for estimating parameters of geometric models that are particularly relevant to computer vision. A general framework is considered which accommodates several important problems involving estimation in a maximum likelihood setting. By considering a special form of a commonly used cost function, a new, iterative, estimation method is evolved. This method is subsequently expanded to enable incorporation of a so-called ancillary constraint. An important feature of these methods is that they can serve as a basis for conducting theoretical comparison of various estimation approaches. Two specific applications are considered: conic fitting, and estimation of the fundamental matrix (a matrix arising in stereo vision). In the case of conic fitting, unconstrained methods are first treated. The problem of producing ellipse-specific estimates is subsequently tackled. For the problem of estimating the fundamental matrix, the new constrained method is applied to generate an estimate which satisfies the necessary rank-two constraint. Other constrained and unconstrained methods are compared within this context. For both of these example problems, the unconstrained and constrained methods are shown to perform with high accuracy and efficiency. The value of incorporating covariance information characterising the uncertainty of measured image point locations within the estimation process is also explored. Covariance matrices associated with data points are modelled, then an empirical study is made of the conditions under which covariance information enables generation of improved parameter estimates. Under the assumption that covariance information is, in itself, subject to estimation error, tests are undertaken to determine the effect of imprecise information upon the quality of parameter estimates. 
Finally, these results are carried over to experiments to assess the value of covariance information in estimating the fundamental matrix from real images. The use of such information is shown to be of potential benefit when the measurement process of image features is considered. / Thesis (Ph.D.)--School of Computer Science, 2004.
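The conic-fitting problem treated above can be illustrated with the simplest algebraic (non-ML) estimator: minimise the algebraic residual over conic coefficients subject to a unit-norm constraint. This is a hedged sketch of that baseline only — the thesis develops iterative maximum-likelihood and constrained refinements of such estimates; the test ellipse and noise level are invented for illustration:

```python
import numpy as np

def fit_conic(pts):
    """Direct algebraic conic fit: minimise ||D theta|| s.t. ||theta|| = 1,
    where each row of D is (x^2, xy, y^2, x, y, 1)."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The smallest right singular vector minimises the algebraic residual.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Noisy points on the ellipse x^2/4 + y^2 = 1, i.e. x^2 + 4y^2 - 4 = 0.
t = np.linspace(0, 2 * np.pi, 100)
pts = np.column_stack([2 * np.cos(t), np.sin(t)])
pts += 0.01 * np.random.default_rng(1).standard_normal(pts.shape)

theta = fit_conic(pts)
theta /= theta[0]  # normalise so the x^2 coefficient is 1
print(np.round(theta, 2))  # roughly [1, 0, 4, 0, 0, -4]
```

Because this estimator ignores the covariance of the measured points, it is statistically biased — which is precisely the gap the maximum-likelihood framework in the thesis, with its covariance modelling and ancillary constraints (e.g. ellipse-specificity, the rank-two fundamental-matrix constraint), addresses.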
293

Correlation between visual deviation and side forces in a car

Haag, Lena January 2002 (has links)
No description available.
294

The Curvature Primal Sketch

Asada, Haruo, Brady, Michael 01 February 1984 (has links)
In this paper we introduce a novel representation of the significant changes in curvature along the bounding contour of a planar shape. We call the representation the curvature primal sketch. We describe an implemented algorithm that computes the curvature primal sketch and illustrate its performance on a set of tool shapes. The curvature primal sketch derives its name from the close analogy to the primal sketch representation advocated by Marr for describing significant intensity changes. We define a set of primitive parameterized curvature discontinuities, and derive expressions for their convolutions with the first and second derivatives of a Gaussian. The convolved primitives, sorted according to the scale at which they are detected, provide us with a multi-scaled interpretation of the contour of a shape.
295

Why Do We See Three-dimensional Objects?

Marill, Thomas 01 June 1992 (has links)
When we look at certain line-drawings, we see three-dimensional objects. The question is why; why not just see two-dimensional images? We theorize that we see objects rather than images because the objects we see are, in a certain mathematical sense, less complex than the images; and that furthermore the particular objects we see will be the least complex of the available alternatives. Experimental data supporting the theory is reported. The work is based on ideas of Solomonoff, Kolmogorov, and the "minimum description length" concepts of Rissanen.
296

The role of darkness in students' conceptions about light propagation and vision

Wells, Mary Anne. January 2007 (has links)
Thesis (M.Ed.)--University of Delaware, 2006. / Principal faculty advisor: Eric Eslinger, School of Education. Includes bibliographical references.
297

Modeling motion with the selective tuning model /

Zhou, Kunhao. January 2004 (has links)
Thesis (M.Sc.)--York University, 2004. Graduate Programme in Computer Science. / Typescript. Includes bibliographical references (leaves 130-144). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL:http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft%5Fval%5Ffmt=info:ofi/fmt:kev:mtx:dissertation&rft%5Fdat=xri:pqdiss:MQ99409
298

A representation for visual information /

Crowley, James L. January 1900 (has links)
Thesis (Ph. D.)--Carnegie-Mellon University, 1982. / "CMU-RI-TR-82-7." Includes bibliographical references (p. 221-226).
299

Exploiting structure in man-made environments

Aydemir, Alper January 2012 (has links)
Robots are envisioned to take on jobs that are dirty, dangerous and dull, the three D's of robotics. With this mission, robotic technology today is ubiquitous on the factory floor. However, the same level of success has not occurred when it comes to robots that operate in everyday living spaces, such as homes and offices. A big part of this is attributed to domestic environments being complex and unstructured as opposed to factory settings which can be set up and precisely known in advance. In this thesis we challenge the point of view which regards man-made environments as unstructured and that robots should operate without prior assumptions about the world. Instead, we argue that robots should make use of the inherent structure of everyday living spaces across various scales and applications, in the form of contextual and prior information, and that doing so can improve the performance of robotic tasks. To investigate this premise, we start by attempting to solve a hard and realistic problem, active visual search. The particular scenario considered is that of a mobile robot tasked with finding an object on an entire unexplored building floor. We show that a search strategy which exploits the structure of indoor environments offers significant improvements over the state of the art and is comparable to humans in terms of search performance. Based on the work on active visual search, we present two specific ways of making use of the structure of space. First, we propose to use the local 3D geometry as a strong indicator of objects in indoor scenes. By learning a 3D context model for various object categories, we demonstrate a method that can reliably predict the location of objects. Second, we turn our attention to predicting what lies in the unexplored part of the environment at the scale of rooms and building floors. By analyzing a large dataset, we propose that indoor environments can be thought of as being composed of frequently occurring functional subparts.
Utilizing these, we present a method that can make informed predictions about the unknown part of a given indoor environment. The ideas presented in this thesis explore various sides of the same idea: modeling and exploiting the structure inherent in indoor environments for the sake of improving a robot's performance on various applications. We believe that in addition to contributing some answers, the work presented in this thesis will generate additional, fruitful questions. / QC 20121105 / CogX
300

On the design and implementation of decision-theoretic, interactive, and vision-driven mobile robots

Elinas, Pantelis 05 1900 (has links)
We present a framework for the design and implementation of visually-guided, interactive, mobile robots. Essential to the framework's robust performance is our behavior-based robot control architecture enhanced with a state-of-the-art decision-theoretic planner that takes into account the temporal characteristics of robot actions and allows us to achieve principled coordination of complex subtasks implemented as robot behaviors/skills. We study two different models of the decision-theoretic layer: Multiply Sectioned Markov Decision Processes (MSMDPs) under the assumption that the world state is fully observable by the agent, and Partially Observable Markov Decision Processes (POMDPs) that remove the latter assumption and allow us to model the uncertainty in sensor measurements. The MSMDP model utilizes a divide-and-conquer approach for solving problems with millions of states using concurrent actions. For solving large POMDPs, we present heuristics that improve the computational efficiency of the point-based value iteration algorithm while tackling the problem of multi-step actions using Dynamic Bayesian Networks. In addition, we describe a state-of-the-art simultaneous localization and mapping algorithm for robots equipped with stereo vision. We first present the Monte-Carlo algorithm sigmaMCL for robot localization in 3D using natural landmarks identified by their appearance in images. Secondly, we extend sigmaMCL and develop the sigmaSLAM algorithm for solving the simultaneous localization and mapping problem for visually-guided, mobile robots. We demonstrate our real-time algorithm mapping large, indoor environments in the presence of large changes in illumination, image blurring and dynamic objects. Finally, we demonstrate empirically the applicability of our framework for developing interactive, mobile robots capable of completing complex tasks with the aid of a human companion.
We present an award winning robot waiter for serving hors d'oeuvres at receptions and a robot for delivering verbal messages among inhabitants of an office-like environment.
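The fully observable case in the abstract above rests on standard MDP value iteration. As a minimal illustration of that building block (the 3-state chain, transition probabilities, and rewards here are invented; the thesis works with far larger MSMDP/POMDP models):

```python
import numpy as np

# P[a, s, s'] = transition probability; R[s, a] = immediate reward.
P = np.array([
    [[0.9, 0.1, 0.0],   # action 0: mostly stay put
     [0.1, 0.8, 0.1],
     [0.0, 0.1, 0.9]],
    [[0.1, 0.9, 0.0],   # action 1: mostly advance toward state 2
     [0.0, 0.1, 0.9],
     [0.0, 0.0, 1.0]],
])
R = np.array([[0.0, -0.1],
              [0.0, -0.1],
              [1.0,  1.0]])  # state 2 is the rewarding state
gamma = 0.95

V = np.zeros(3)
for _ in range(500):
    # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)
print(policy, np.round(V, 2))
```

POMDPs replace the state `s` with a belief distribution over states, which is why exact backups become intractable and point-based approximations, such as the heuristics the thesis contributes, are needed.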
