
Två olika tårsubstituts påverkan av synkvaliteten (The effect of two different tear substitutes on visual quality)

Tigerström, Kristoffer January 2010 (has links)
Tear substitutes are widely used by contact lens wearers and people with dry eyes. Nowadays, office and computer work commonly causes dry-eye problems, known as computer vision syndrome (CVS), and affected individuals often use tear substitutes. The package leaflets of tear substitutes often state that they can cause blurred vision for a while after application. Previous studies have shown that ocular aberrations increase when tear substitutes are applied, which may be the reason the blurred vision occurs. Purpose: The aim of this study is to determine how much visual quality is affected by two different tear substitutes, and for how long. Method: Near visual acuity and aberrations were first measured in 30 patients (60 eyes) without any tear substitute. The first tear substitute (Systane) was then applied to the right eye, and near visual acuity and aberrations were measured again. Five further aberration measurements followed, one every four minutes. The same procedure was then carried out on the left eye, with Lacryvisc instead of Systane. Results: With Systane, visual acuity deteriorated in 11 patients, and aberrations increased on application of the tear substitute. With Lacryvisc, visual acuity deteriorated in 29 of the patients; aberrations likewise increased on application.

Real Time Human Tracking in Unconstrained Environments

Gao, Hongzhi January 2011 (has links)
The tabu search particle filter is proposed in this research based on the integration of the modified tabu search metaheuristic optimization and the genetic particle filter. Experiments with this algorithm in real time human tracking applications in unconstrained environments show that it is more robust, accurate and faster than a number of other existing metaheuristic filters, including the evolution particle filter, particle swarm filter, simulated annealing filter, path relink filter and scatter search filter. Quantitative evaluation illustrates that even with only ten particles in the system, the proposed tabu search particle filter has a success rate of 93.85%, whereas the success rates of the other metaheuristic filters ranged from 17.69% to 68.46% under the same conditions. The accuracy of the proposed algorithm (with ten particles in the tracking system) is 2.69 pixels on average, which is over 3.85 times better than the second-best metaheuristic filter and 18.13 times better than the average accuracy of all other filters. The proposed algorithm is also the fastest among all metaheuristic filters that have been tested. It achieves approximately 50 frames per second, which is 1.5 times faster than the second fastest algorithm and nineteen times faster than the average speed of all other metaheuristic filters. Furthermore, a unique colour sequence model is developed in this research based on a degenerated form of the hidden Markov model. Quantitative evaluations based on rigid object matching experiments illustrate that its successful matching rate is 5.73 times better than that of the widely used colour histogram. In terms of speed, the proposed algorithm achieves twice the successful matching rate in about three quarters of the processing time consumed by the colour histogram model.
Overall, these results suggest that the two proposed algorithms would be useful in many applications due to their efficiency, accuracy and ability to robustly track people and coloured objects.
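The tabu search variant described above is not reproduced here, but the generic bootstrap particle filter it builds on can be sketched as follows. This is an illustrative sketch only: the 1D random-walk motion model, noise levels, and particle count are assumptions, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=200,
                    process_noise=0.5, obs_noise=1.0):
    """Minimal bootstrap particle filter tracking a 1D state."""
    particles = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = particles + rng.normal(0.0, process_noise, n_particles)
        # Update: weight particles by the observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / obs_noise) ** 2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        estimates.append(float(np.dot(weights, particles)))
        # Resample: draw particles with probability proportional to weight.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return estimates

# Track a target drifting at a constant rate under noisy observations.
truth = np.arange(0.0, 20.0, 0.5)
obs = truth + rng.normal(0.0, 1.0, truth.size)
est = particle_filter(obs)
```

Metaheuristic variants such as the one proposed in the thesis replace or augment the plain resampling step with a search procedure that moves particles toward high-likelihood regions, which is why they can work with as few as ten particles.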

An embedded augmented reality system

Groufsky, Michael Edward January 2011 (has links)
This report describes an embedded system designed to support the development of embedded augmented reality applications. It includes an integrated camera and built-in graphics acceleration hardware. An example augmented reality application serves as a demonstration of how these features are accessed, as well as providing an indication of the performance of the device. The embedded augmented reality development platform consists of the Gumstix Overo computer-on-module paired with the custom-built Overocam camera board. This device offers an ARM Cortex-A8 CPU running at 600 MHz and 256 MB of RAM, along with the ability to capture VGA video at 30 frames per second. The device runs an operating system based on version 2.6.33 of the Linux kernel. The main feature of the device is the OMAP3530 multimedia applications processor from Texas Instruments. In addition to the ARM CPU, it provides an on-board 2D/3D graphics accelerator and a digital signal processor. It also includes a built-in camera peripheral interface, reducing the complexity of the camera board design. A working example of an augmented reality application is included as a demonstration of the device's capabilities. The application was designed to represent a basic augmented reality task: tracking a single marker and rendering a simple virtual object. It runs at around 8 frames per second when a marker is visible and 13 frames per second otherwise. The result of the project is a self-contained computing platform for vision-based augmented reality. It may either be used as-is or customised with additional hardware peripherals, depending on the requirements of the developer.

Vision based autonomous road following

Gibbs, Francis William John January 1996 (has links)
No description available.

Linear methods for camera motion recovery

Lawn, Jonathan Marcus January 1995 (has links)
No description available.

Artificial intelligence techniques and concepts for integrating a robot vision system with a solid modeller

Tabandeh, Amir S. January 1988 (has links)
No description available.

Robust Upper Body Pose Recognition in Unconstrained Environments Using Haar-Disparity

Chu, Cheng-Tse January 2008 (has links)
In this research, an approach is proposed for the robust tracking of upper body movement in unconstrained environments by using a Haar- Disparity algorithm together with a novel 2D silhouette projection algorithm. A cascade of boosted Haar classifiers is used to identify human faces in video images, where a disparity map is then used to establish the 3D locations of detected faces. Based on this information, anthropometric constraints are used to define a semi-spherical interaction space for upper body poses. This constrained region serves the purpose of pruning the search space as well as validating user poses. Haar-Disparity improves on the traditional skin manifold tracking by relaxing constraints on clothing, background and illumination. The 2D silhouette projection algorithm provides three orthogonal views of the 3D objects. This allows tracking of upper limbs to be performed in the 2D space as opposed to manipulating 3D noisy data directly. This thesis also proposes a complete optimal set of interactions for very large interactive displays. Experimental evaluation includes the performance of alternative camera positions and orientations, accuracy of pointing, direct manipulative gestures, flag semaphore emulation, and principal axes. As a minor part of this research interest, the usability of interacting using only arm gestures is also evaluated based on ISO 9241-9 standard. The results suggest that the proposed algorithm and optimal set of interactions are useful for interacting with large displays.
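The anthropometric validation step described above — accepting a pose only if the hand lies within a semi-spherical interaction space anchored at the detected face — can be sketched as a simple geometric test. This is an illustrative sketch, not the thesis's code; the arm-reach radius and coordinate convention (camera looking along +z, metres) are assumptions.

```python
import math

ARM_REACH_M = 0.8  # assumed average arm reach, in metres

def in_interaction_space(face, hand, reach=ARM_REACH_M):
    """Return True if a 3D hand position lies inside a semi-sphere
    of radius `reach` in front of the detected face.
    face, hand: (x, y, z) tuples in metres; smaller z is closer
    to the camera."""
    dx, dy, dz = (hand[i] - face[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Inside the sphere of arm reach, and in front of the face
    # (i.e. between the face and the camera/display).
    return dist <= reach and dz < 0

print(in_interaction_space((0, 0, 2.0), (0.3, -0.2, 1.6)))  # True
print(in_interaction_space((0, 0, 2.0), (0.0, 0.0, 3.0)))   # False
```

Pruning candidate limb positions with a test like this both shrinks the search space and rejects spurious detections behind or beside the user.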

Matching Slides to Presentation Videos

Fan, Quanfu January 2008 (has links)
Video streaming is becoming a major channel for distance learning (or e-learning). A tremendous number of videos for educational purposes are captured and archived in various e-learning systems today throughout schools, corporations and over the Internet. However, making information searchable and browsable, and presenting results optimally for a wide range of users and systems, remains a challenge. In this work two core algorithms have been developed to support effective browsing and searching of educational videos. The first is a fully automatic approach that recognizes slides in the video with high accuracy. Built upon SIFT (scale-invariant feature transform) keypoint matching using RANSAC (random sample consensus), the approach is independent of capture systems and can handle a variety of videos with different styles and plentiful ambiguities. In particular, we propose a multi-phase matching pipeline that incrementally identifies slides from the easy ones to the difficult ones. We achieve further robustness by using the matching confidence as part of a dynamic hidden Markov model (HMM) that integrates temporal information, taking camera operations into account as well. The second algorithm locates slides in the video. We develop a non-linear optimization method (bundle adjustment) to accurately estimate the projective transformations (homographies) between slides and video frames. Different from estimating a homography from a single image, our method solves a set of homographies jointly in a frame sequence that is related to a single slide. These two algorithms open up a series of possibilities for making the video content more searchable, browsable and understandable, thus greatly enriching the user's learning experience. Their usefulness has been demonstrated in the SLIC (Semantically Linking Instructional Content) system, which aims to turn simple video content into a fully interactive learning experience for students and scholars.
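The single-image building block of the homography estimation described above — computing the slide-to-frame projective transform from point correspondences — can be sketched with the direct linear transform (DLT). This is an illustrative sketch under assumed coordinates, not the thesis's joint bundle-adjustment method, which refines many such estimates together across a frame sequence.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via DLT.
    src, dst: (N, 2) arrays of matched points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest
    # singular value of the constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Slide corners mapped to an assumed quadrilateral in the video frame.
slide = np.array([[0, 0], [1024, 0], [1024, 768], [0, 768]], float)
frame = np.array([[100, 80], [620, 110], [600, 470], [90, 440]], float)
H = estimate_homography(slide, frame)
```

In a matching pipeline like the one described, RANSAC would repeatedly run this estimator on random subsets of SIFT correspondences and keep the homography with the most inliers.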

Automated interpretation of complex line figures

Canham, Richard O. January 2001 (has links)
No description available.

On the object detecting artificial retina

Wilson, James George January 2001 (has links)
No description available.
