371

Learning Three-Dimensional Shape Models for Sketch Recognition

Kaelbling, Leslie P., Lozano-Pérez, Tomás 01 1900 (has links)
Artifacts made by humans, such as items of furniture and houses, exhibit an enormous amount of variability in shape. In this paper, we concentrate on models of the shapes of objects that are made up of fixed collections of sub-parts whose dimensions and spatial arrangement exhibit variation. Our goals are: to learn these models from data and to use them for recognition. Our emphasis is on learning and recognition from three-dimensional data, to test the basic shape-modeling methodology. In this paper we also demonstrate how to use models learned in three dimensions for recognition of two-dimensional sketches of objects. / Singapore-MIT Alliance (SMA)
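
A rough sketch of how such a part-based shape model might be represented and scored, assuming each sub-part's dimensions are Gaussian-distributed; the class, part names and numbers below are illustrative and not taken from the thesis:

    import numpy as np

    class PartBasedShapeModel:
        """Toy part-based shape model: each named sub-part is summarized by the
        mean and covariance of its dimension vector (e.g. width, depth, height)."""

        def fit(self, examples):
            # examples: list of dicts mapping part name -> dimension vector
            self.parts = {}
            for name in examples[0].keys():
                X = np.array([ex[name] for ex in examples], dtype=float)
                mean = X.mean(axis=0)
                cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # ridge keeps cov invertible
                self.parts[name] = (mean, cov)
            return self

        def log_likelihood(self, instance):
            # Sum of Gaussian log-densities over parts; higher = better match.
            total = 0.0
            for name, (mean, cov) in self.parts.items():
                d = np.asarray(instance[name], dtype=float) - mean
                _, logdet = np.linalg.slogdet(cov)
                k = len(mean)
                total += -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + k * np.log(2 * np.pi))
            return total

    # Example: chairs described by seat and back dimensions (metres).
    chairs = [{"seat": [0.45, 0.45, 0.05], "back": [0.45, 0.05, 0.50]},
              {"seat": [0.50, 0.48, 0.04], "back": [0.50, 0.04, 0.55]},
              {"seat": [0.42, 0.44, 0.06], "back": [0.42, 0.06, 0.48]}]
    model = PartBasedShapeModel().fit(chairs)
    print(model.log_likelihood({"seat": [0.46, 0.46, 0.05], "back": [0.46, 0.05, 0.52]}))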
372

Improvement of the camera calibration through the use of machine learning techniques

Nichols, Scott A., January 2001 (has links) (PDF)
Thesis (M.S.)--University of Florida, 2001. / Title from first page of PDF file. Document formatted into pages; contains vii, 45 p.; also contains graphics. Vita. Includes bibliographical references (p. 43-44).
373

A methodology for resolving multiple vehicle occlusion in visual traffic surveillance

Pang, Chun-cheong. January 2005 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
374

Image Analysis using the Physics of Light Scattering

Nillius, Peter January 2004 (has links)
Any generic computer vision algorithm must be able to cope with the variations in appearance of objects due to different illumination conditions. While these variations in the shading of a surface may seem a nuisance, they in fact contain information about the world. This thesis tries to provide an understanding of what information can be extracted from the shading in a single image and how to achieve this. One of the challenges lies in finding accurate models for the wide variety of conditions that can occur. Frequency space representations are powerful tools for analyzing shading theoretically. Surfaces act as low-pass filters on the illumination, making the reflected light band-limited. Hence, it can be represented by a finite number of components in the Fourier domain, despite having arbitrary illumination. This thesis derives a basis for shading by representing the illumination in spherical harmonics and the BRDF in a basis for isotropic reflectance. By analyzing the contributing variance of this basis it is shown how to create finite-dimensional representations for any surface with isotropic reflectance. The finite representation is used to analytically derive a principal component analysis (PCA) basis of the set of images due to the variations in the illumination and BRDF. The PCA is performed model-based so that the variations in the images are described by the variations in the illumination and the BRDF. This has a number of advantages. The PCA can be performed over a wide variety of conditions, more than would be practically possible if the images were captured or rendered. Also, there is an explicit mapping between the principal components and the illumination and BRDF, so that the PCA basis can be used as a physical model. By combining a database of captured illumination and a database of captured BRDFs, a general basis for shading is created. This basis is used to investigate material classification from a single image with known geometry but arbitrary unknown illumination. An image is classified by estimating the coefficients in this basis and comparing them to a database. Experiments on synthetic data show that material classification from reflectance properties is hard. There are mis-classifications and the materials seem to cluster into groups. The materials are grouped using a greedy algorithm. Experiments on real images show promising results. Keywords: computer vision, shading, illumination, reflectance, image irradiance, frequency space representations, spherical harmonics, analytic PCA, model-based PCA, material classification, illumination estimation
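
The low-pass, finite-basis idea above can be illustrated with the well-known nine-term spherical-harmonic approximation for Lambertian shading; this is a simplification and does not implement the thesis's general isotropic-BRDF basis or its model-based PCA:

    import numpy as np

    def sh_basis_order2(normals):
        """Real spherical harmonic basis (l <= 2) evaluated at unit normals (N, 3)."""
        x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
        return np.stack([
            0.282095 * np.ones_like(x),                        # Y_0^0
            0.488603 * y, 0.488603 * z, 0.488603 * x,          # l = 1
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3 * z**2 - 1),
            1.092548 * x * z, 0.546274 * (x**2 - y**2),        # l = 2
        ], axis=1)

    def lambertian_shading(normals, light_sh):
        """Shading of a Lambertian surface under illumination given by 9 SH
        coefficients: the clamped-cosine kernel attenuates each band by A_l,
        which is why the reflected light is effectively band-limited."""
        A = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)
        return sh_basis_order2(normals) @ (A * light_sh)

    # Example: random unit normals under a random SH illumination.
    rng = np.random.default_rng(0)
    n = rng.normal(size=(1000, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    shading = lambertian_shading(n, rng.normal(size=9))
    # A PCA over many such shadings, sampled over illuminations (and, in the thesis,
    # BRDFs), yields a low-dimensional image basis with a physical interpretation.
    print(shading.shape)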
375

Local spatio-temporal image features for motion interpretation

Laptev, Ivan January 2004 (has links)
Visual motion carries information about the dynamics of a scene. Automatic interpretation of this information is important when designing computer systems for visual navigation, surveillance, human-computer interaction, browsing of video databases and other growing applications. In this thesis, we address the issue of motion representation for the purpose of detecting and recognizing motion patterns in video sequences. We localize the motion in space and time and propose to use local spatio-temporal image features as primitives when representing and recognizing motions. To detect such features, we propose to maximize a measure of local variation of the image function over space and time and show that such a method detects meaningful events in image sequences. Due to its local nature, the proposed method avoids the influence of global variations in the scene and overcomes the need for spatial segmentation and tracking prior to motion recognition. These properties are shown to be highly useful when recognizing human actions in complex scenes. Variations in scale and in relative motions of the camera may strongly influence the structure of image sequences and therefore the performance of recognition schemes. To address this problem, we develop a theory of local spatio-temporal adaptation and show that this approach provides invariance when analyzing image sequences under scaling and velocity transformations. To obtain discriminative representations of motion patterns, we also develop several types of motion descriptors and use them for classifying and matching local features in image sequences. An extensive evaluation of this approach is performed and results in the context of the problem of human action recognition are presented. In summary, this thesis provides the following contributions: (i) it introduces the notion of local features in space-time and demonstrates the successful application of such features for motion interpretation; (ii) it presents a theory and an evaluation of methods for local adaptation with respect to scale and velocity transformations in image sequences and (iii) it presents and evaluates a set of local motion descriptors, which in combination with methods for feature detection and feature adaptation allow for robust recognition of human actions in complex scenes with cluttered and non-stationary backgrounds as well as camera motion.
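
A minimal sketch of a Harris-style space-time interest operator in the spirit of the detection criterion described above (maximizing a measure of local spatio-temporal variation); the parameter values and the synthetic test clip are illustrative:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def space_time_interest_points(video, sigma=2.0, tau=1.5, k=0.005, n_points=50):
        """Score each voxel of a (T, H, W) volume by det(mu) - k * trace(mu)**3,
        where mu is the smoothed second-moment matrix of the spatio-temporal
        gradients (Lx, Ly, Lt); return the strongest responses."""
        L = gaussian_filter(video.astype(float), sigma=(tau, sigma, sigma))
        Lt, Ly, Lx = np.gradient(L)
        grads = {'x': Lx, 'y': Ly, 't': Lt}
        mu = {a + b: gaussian_filter(grads[a] * grads[b], sigma=(2 * tau, 2 * sigma, 2 * sigma))
              for a in grads for b in grads}
        det = (mu['xx'] * (mu['yy'] * mu['tt'] - mu['yt'] * mu['ty'])
               - mu['xy'] * (mu['yx'] * mu['tt'] - mu['yt'] * mu['tx'])
               + mu['xt'] * (mu['yx'] * mu['ty'] - mu['yy'] * mu['tx']))
        trace = mu['xx'] + mu['yy'] + mu['tt']
        H = det - k * trace**3
        idx = np.argsort(H, axis=None)[-n_points:]
        return np.column_stack(np.unravel_index(idx, H.shape))  # (t, y, x) locations

    # Example: a small bright patch translating across a dark background.
    video = np.zeros((20, 64, 64))
    for t in range(20):
        video[t, 30:34, 10 + 2 * t:14 + 2 * t] = 1.0
    print(space_time_interest_points(video, n_points=10))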
376

Två olika tårsubstituts påverkan av synkvaliteten

Tigerström, Kristoffer January 2010 (has links)
Tear substitutes are widely used by contact lens wearers and people with dry eyes. Nowadays it is common that working in office environments and at computers causes dry-eye problems, computer vision syndrome (CVS), and that these people then use tear substitutes. The package insert for tear substitutes often states that they can cause blurred vision for a while after application. Previous studies have shown that the aberrations of the eye increase when a tear substitute is applied, which may be the reason the blurred vision occurs. Purpose: The purpose of the study is to determine how much visual quality is affected by two different tear substitutes and for how long it is affected. Method: Near visual acuity and aberrations were measured in 30 patients (60 eyes), first without any tear substitute. The first tear substitute (Systane) was then applied to the right eye and a further measurement of near visual acuity and aberrations was performed. After that, five additional measurements of the aberrations were made, one every four minutes. The same procedure was then carried out on the left eye, but with Lacryvisc instead of Systane. Results: The results showed that with Systane, visual acuity deteriorated in 11 patients. The aberrations increased on application of the tear substitute. With Lacryvisc, visual acuity deteriorated in 29 of the patients. The aberrations also increased on application there.
377

Two Case Studies on Vision-based Moving Objects Measurement

Zhang, Ji 2011 August 1900 (has links)
In this thesis, we present two case studies on vision-based moving objects measurement. In the first case, we used a monocular camera to perform ego-motion estimation for a robot in an urban area. We developed the algorithm based on vertical line features, such as the vertical edges of buildings and poles, because vertical lines are easy to extract, insensitive to lighting conditions and shadows, and sensitive to camera/robot movements on the ground plane. We derived an incremental estimation algorithm based on vertical line pairs. We analyzed how errors are introduced and propagated in the continuous estimation process by deriving a closed-form representation of the covariance matrix. We then formulated the minimum-variance ego-motion estimation problem as a convex optimization problem and solved it with the interior-point method. The algorithm was extensively tested in physical experiments and compared with two popular methods; our estimation results consistently outperformed the two counterparts in robustness, speed, and accuracy. In the second case, we used a camera-mirror system to measure the swimming motion of a live fish, and the extracted motion data was used to drive animation of fish behavior. The camera-mirror system captured three orthogonal views of the fish. We also built a virtual fish model to assist the measurement of the real fish. The fish model has a four-link spinal cord and meshes attached to the spinal cord. We projected the fish model into three orthogonal views and matched the projected views with the real views captured by the camera. We then maximized the overlapping area of the fish between the projected views and the real views. The maximization result gave the position, orientation, and body bending angle of the fish model, which were used for the fish movement measurement. Part of this algorithm is still under construction and will be updated in the future.
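
The minimum-variance fusion at the core of such an ego-motion estimate can be sketched as an unconstrained weighted least-squares problem with a closed-form, information-weighted solution; the thesis solves a more general constrained convex program with an interior-point method, so the following is only a simplified illustration with made-up numbers:

    import numpy as np

    def fuse_minimum_variance(estimates, covariances):
        """Minimum-variance fusion of several noisy planar ego-motion estimates
        (dx, dy, dtheta), each with its own 3x3 covariance: the information-form
        weighted average, i.e. the closed-form solution of the convex problem
        min_x  sum_i (x - z_i)^T C_i^{-1} (x - z_i)."""
        info = np.zeros((3, 3))
        vec = np.zeros(3)
        for z, C in zip(estimates, covariances):
            Ci = np.linalg.inv(C)
            info += Ci
            vec += Ci @ np.asarray(z, dtype=float)
        fused_cov = np.linalg.inv(info)
        return fused_cov @ vec, fused_cov

    # Example: three vertical-line-pair measurements of the same small motion,
    # with different uncertainties (illustrative values only).
    z = [[0.10, 0.02, 0.010], [0.12, 0.00, 0.012], [0.09, 0.03, 0.008]]
    C = [np.diag([1e-3, 1e-3, 1e-5]),
         np.diag([4e-3, 4e-3, 4e-5]),
         np.diag([2e-3, 2e-3, 2e-5])]
    motion, cov = fuse_minimum_variance(z, C)
    print(motion)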
378

Package inspection with a machine vision system

Song, Zhao-ming 30 January 1991 (has links)
Machine vision has been extensively applied in industry. This thesis project, which originated with a local food processor, applies a vision system to the inspection of packages for cosmetic errors. The basic elements and theory of the machine vision system are introduced, and some image processing techniques, such as histogram analysis, thresholding, and the SRI algorithm, are utilized in this thesis. Computer programs written in C and Pascal are described. The hardware setup and computer interfaces, such as the RS-232 serial interface, parallel digital I/O interface, conveyor control, and incremental shaft encoder, are described. Test results are presented and discussed. / Graduation date: 1991
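
A small illustration of the histogram-analysis and thresholding steps mentioned above, using Otsu's method and a crude foreground-area check; the reference area and tolerance are hypothetical, not taken from the thesis:

    import numpy as np

    def otsu_threshold(gray):
        """Pick the global threshold that maximizes between-class variance of
        the intensity histogram (Otsu's method)."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        omega = np.cumsum(p)                   # class probabilities
        mu = np.cumsum(p * np.arange(256))     # unnormalized class means
        mu_t = mu[-1]
        with np.errstate(divide='ignore', invalid='ignore'):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
        return int(np.nanargmax(sigma_b))

    def inspect_package(gray, reference_area, tolerance=0.05):
        """Flag a package whose segmented foreground area deviates from the
        reference by more than the given fraction -- a crude cosmetic check."""
        t = otsu_threshold(gray)
        area = int((gray > t).sum())
        return abs(area - reference_area) / reference_area <= tolerance, area

    # Example: a bright 40x40 label on a dark background.
    img = np.full((100, 100), 30, dtype=np.uint8)
    img[30:70, 30:70] = 200
    ok, area = inspect_package(img, reference_area=1600)
    print(ok, area)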
379

Multi-Modal Scene Understanding for Robotic Grasping

Bohg, Jeannette January 2011 (has links)
Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially, household robots are far away from being deployable as general-purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we are analyzing which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. The configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps in both a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios. / QC 20111125 / GRASP
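
The "explore where you are most uncertain" idea can be illustrated with a toy occupancy grid whose per-cell Bernoulli entropy picks the next region to observe; the thesis combines several modalities and richer scene representations, so this is only a schematic sketch with made-up values:

    import numpy as np

    def next_exploration_target(occupancy_prob):
        """Given a grid of occupancy probabilities (0 = certainly free,
        1 = certainly occupied), return the cell with maximum Bernoulli entropy,
        i.e. the place the robot is most uncertain about and should observe next."""
        p = np.clip(occupancy_prob, 1e-6, 1 - 1e-6)
        entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
        return np.unravel_index(np.argmax(entropy), p.shape), entropy

    # Example: a mostly observed tabletop with an occluded strip behind an object.
    grid = np.full((20, 20), 0.05)     # observed free space
    grid[8:12, 10:14] = 0.95           # observed object
    grid[8:12, 14:18] = 0.5            # occluded region: maximally uncertain
    target, H = next_exploration_target(grid)
    print(target)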
380

Astrometry.net: Automatic Recognition and Calibration of Astronomical Images

Lang, Dustin 03 March 2010 (has links)
We present Astrometry.net, a system for automatically recognizing and astrometrically calibrating astronomical images, using the information in the image pixels alone. The system is based on the geometric hashing approach in computer vision: we use the geometric relationships between low-level features (stars and galaxies), which are relatively indistinctive, to create geometric features that are distinctive enough that we can recognize images that cover less than one-millionth of the area of the sky. The geometric features are used to rapidly generate hypotheses about the location---the pointing, scale, and rotation---of an image on the sky. Each hypothesis is then evaluated in a Bayesian decision theory framework in order to ensure that most correct hypotheses are accepted while false hypotheses are almost never accepted. The feature-matching process is accelerated by using a new fast and space-efficient kd-tree implementation. The Astrometry.net system is available via a web interface, and the software is released under an open-source license. It is being used by hundreds of individual astronomers and several large-scale projects, so we have at least partially achieved our goal of helping "to organize, annotate and make searchable all the world's astronomical information."
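
A simplified version of the quad-based geometric feature described above: map the most widely separated pair of stars to fixed coordinates and record the remaining stars in that frame, giving a code invariant to translation, rotation and scale (a real index would also break the remaining symmetries and quantize the codes for lookup):

    import numpy as np
    from itertools import combinations

    def quad_hash(stars):
        """Similarity-invariant code for 4 stars (x, y): take the most widely
        separated pair (A, B), map A -> (0, 0) and B -> (1, 1), and record the
        other two stars' coordinates in that frame."""
        pts = np.asarray(stars, dtype=float)
        i, j = max(combinations(range(4), 2),
                   key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
        A, B = pts[i], pts[j]
        others = [pts[k] for k in range(4) if k not in (i, j)]
        zA, zB = complex(*A), complex(*B)
        scale = (1 + 1j) / (zB - zA)       # sends B - A to (1, 1)
        code = []
        for P in others:
            w = (complex(*P) - zA) * scale
            code.extend([w.real, w.imag])
        return tuple(code)

    # The code is unchanged under a similarity transform of the whole quad.
    quad = [(0.0, 0.0), (10.0, 8.0), (3.0, 5.0), (7.0, 2.0)]
    theta, s, t = 0.7, 2.5, np.array([100.0, -40.0])
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    moved = [tuple(s * R @ np.array(p) + t) for p in quad]
    print(np.allclose(quad_hash(quad), quad_hash(moved)))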
