  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Smoothing the silhouettes of polyhedral meshes by boundary curve interpolation /

Wu, Sing-on. January 1999 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2000. / Includes bibliographical references (leaf 56).
12

A Fast Localization Method Based on Distance Measurement in a Modeled Environment.

Deo, Ashwin P. January 2009 (has links)
Thesis (M.S.)--Case Western Reserve University, 2009. / Title from PDF (viewed on 19 August 2009). / Department of Electrical Engineering and Computer Science. / Includes abstract. / Includes bibliographical references. / Available online via the OhioLINK ETD Center.
13

What can your computer recognize? Chemical and facial pattern recognition through the use of the Eigen Analysis Method /

Giordano, Anthony J. January 2007 (has links) (PDF)
Senior Honors thesis--Regis University, Denver, Colo., 2007. / Title from PDF title page (viewed on June 26, 2007). Includes bibliographical references.
14

The use of formal language theory in computer vision

Van Niekerk, Graeme Neill 20 November 2014 (has links)
M.Sc. (Computer Science) / In this dissertation, a study is made of the field of computer vision and of various related fields. Organic vision is investigated, covering the organic focusing apparatus and the visual cortex in humans, from a physiological as well as a psychological point of view. Various network models that emulate the neuronal networks and component networks of the human visual cortex are investigated, and recent work in the area of neural networks and computer vision is surveyed. The mathematical theory and techniques used in image formation and image processing are studied. The field of artificial intelligence and its relation to the computer vision problem is examined, together with numerous application systems that have been developed for this purpose, including existing industrial applications of computer vision. The use of parallel architectures and multiresolution systems for computer vision applications is investigated. Finally, formal language theory and automata are discussed in terms of their relevance to computer vision; the discussion centers on the recognition of two- and three-dimensional structures by various automata operating in two dimensions. From this study, a formal model for the recognition of three-dimensional digital structures is proposed and informally defined. Fully developing and implementing this model is the aim of further study.
15

Distributed bit-parallel architecture and algorithms for early vision

Bolotski, Michael January 1990 (has links)
A new form of parallelism, distributed bit-parallelism, is introduced. A distributed bit-parallel organization distributes each bit of a data item to a different processor. Bit-parallelism allows computation that is sub-linear in word size for such operations as integer addition, arithmetic shifts, and data moves. The implications of bit-parallelism for system architecture are analyzed, and an implementation of a bit-parallel architecture based on a mesh with a bypass network is presented. The performance of bit-parallel algorithms on this architecture is analyzed and found to be several times faster than that of bit-serial algorithms. The application of the architecture to low-level vision algorithms is discussed. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
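The idea of distributing each bit of a word to a different processor can be illustrated with a small sketch. This is a toy simulation only; the function names and the fixed 8-bit word are assumptions for illustration, not the thesis's actual mesh-with-bypass design.

```python
# Toy model of distributed bit-parallelism: each bit of a word lives on a
# different simulated processor, so an arithmetic shift becomes a single
# neighbour-to-neighbour data move rather than a multi-cycle serial operation.

WORD_SIZE = 8

def distribute(value, word_size=WORD_SIZE):
    """Split a word into bits, one per simulated processor (LSB first)."""
    return [(value >> i) & 1 for i in range(word_size)]

def collect(bits):
    """Reassemble the word from the per-processor bits."""
    return sum(b << i for i, b in enumerate(bits))

def shift_left(bits):
    """A left shift moves every bit to its neighbouring processor at once."""
    return [0] + bits[:-1]

bits = distribute(13)                     # 0b00001101 spread over 8 processors
assert collect(bits) == 13
assert collect(shift_left(bits)) == 26    # one parallel move doubles the word
```

The data move happens for all bits simultaneously in the simulated layout, which is the property that makes such operations cheap on the real hardware.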
16

Coaxial stereo and scale-based matching

Katz, Itzhak January 1985 (has links)
The past decade has seen a growing interest in computer stereo vision: the recovery of the depth map of a scene from two-dimensional images. The central difficulty is establishing correspondence between features or regions in two or more images; this is referred to as the correspondence problem. One way to reduce this difficulty is to constrain the camera model. Conventional stereo systems use two or more cameras positioned in space at a uniform distance from the scene, and use epipolar geometry in their camera model to restrict the search space to one dimension, along epipolar lines. Following Jain's approach, this thesis exploits a non-conventional camera model: the cameras are positioned one behind the other, such that their optical axes are collinear (hence the name coaxial stereo), with a known distance between them. This arrangement yields a particularly simple case of epipolar geometry that further reduces the magnitude of the correspondence problem: the projection of a stationary point is displaced along a radial line, by an amount that depends only on the point's spatial depth and the distance between the cameras. Thus, to significantly simplify the recovery of depth from disparity, complex logarithmic mapping is applied. Because the logarithmic part of the transformation introduces great distortion in the image's resolution, the mapping is applied to the features used in the matching process rather than to the original images. The search for matching features is conducted along radial lines. Following Mokhtarian and Mackworth's approach, a scale-space image is constructed for each radial line by smoothing its intensity profile with a Gaussian filter and finding zero-crossings of the second derivative at varying scale levels. Scale-space images of corresponding radial lines are then matched using a modified uniform-cost algorithm.
The matching algorithm is written with generality in mind and can therefore be easily adapted to other stereoscopic systems. Some new results on the structure of scale-space images of one-dimensional functions are presented. / Science, Faculty of / Computer Science, Department of / Graduate
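The key property of the complex logarithmic mapping described above can be shown in a few lines: radial displacement of a feature becomes a pure shift along the log-radius axis, turning the correspondence search into a one-dimensional problem. This is a hedged sketch under assumed toy coordinates, not the thesis's implementation.

```python
# Complex logarithmic mapping: a point (x, y), taken relative to the optical
# axis, maps to (log r, theta). Under coaxial camera displacement a feature
# moves radially, which in this space is a shift along the log-r axis only.
import cmath
import math

def log_polar(x, y):
    """Return (log radius, angle) for an image point (x, y)."""
    w = cmath.log(complex(x, y))
    return w.real, w.imag

# A point and its radially scaled image differ only in log-r, not in angle.
r1, a1 = log_polar(3.0, 4.0)   # radius 5
r2, a2 = log_polar(6.0, 8.0)   # radius 10, same direction from the axis
assert abs(a1 - a2) < 1e-12
assert abs((r2 - r1) - math.log(2.0)) < 1e-12
```

Because the angle coordinate is unchanged, a matcher only has to search along lines of constant theta, which is exactly the radial-line search the abstract describes.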
17

An adaptable recognition system for biological and other irregular objects /

Bernier, Thomas. January 2001 (has links)
No description available.
18

AUTOMATED SORTING OF PEGS USING COMPUTER VISION

Taylor W. Hubbard (5930666) 17 January 2019 (has links)
This thesis covers the creation and testing of a low-cost, modular system for sorting pegs used in products by Lafayette Instruments. The system checks peg dimensions using computer vision, sorting out nonconforming parts and counting those that conform. Conforming parts are separated into bins of predetermined quantities so that they do not need manual counting. The developed system will save engineers and technicians at Lafayette Instruments many man-hours otherwise spent manually sorting and counting the roughly 160,000 pegs handled each year. The system sorts and counts at a speed comparable to a human operator while achieving an overall average accuracy of 95% or higher.
19

Efficiently mapping high-performance early vision algorithms onto multicore embedded platforms

Apewokin, Senyo 09 January 2009 (has links)
The combination of low-cost imaging chips and high-performance, multicore, embedded processors heralds a new era in portable vision systems. Early vision algorithms have the potential for highly data-parallel, integer execution. However, an implementation must operate within the constraints of embedded systems, including a low clock rate, low-power operation, and limited memory. This dissertation explores new approaches to adapting novel pixel-based vision algorithms for tomorrow's multicore embedded processors. It presents:
- An adaptive, multimodal background modeling technique called Multimodal Mean that achieves high accuracy and frame-rate performance with limited memory on a slow-clock, energy-efficient, integer processing core.
- A new workload partitioning technique to optimize the execution of early vision algorithms on multicore systems.
- A novel data transfer technique called cat-tail DMA that provides globally ordered, non-blocking data transfers on a multicore system.
By using efficient data representations, Multimodal Mean provides accuracy comparable to the widely used Mixture of Gaussians (MoG) multimodal method, while achieving a 6.2x performance improvement and using 18% less storage than MoG on a representative embedded platform. When this algorithm is adapted to a multicore execution environment, the new workload partitioning technique improves execution times by 25% with only a 125 ms system reaction time, and reduces the overall number of data transfers by 50%. Finally, the cat-tail buffering technique reduces data-transfer latency between execution cores and main memory by 32.8% over the baseline technique when executing Multimodal Mean. This technique overlaps data transfers with code execution on individual cores while maintaining global ordering through low-overhead scheduling to prevent collisions.
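A per-pixel multimodal running-mean background model of the kind described above can be sketched briefly. This is an illustrative sketch only: the cell count, match threshold, support cutoff, and eviction rule are assumptions in the spirit of an integer multimodal mean, not the dissertation's actual Multimodal Mean parameters.

```python
# Hedged sketch of a multimodal running-mean background model: each pixel
# keeps a few integer mean "cells"; an incoming value near an existing cell
# updates that cell's running sum, otherwise it replaces the least-supported
# cell. Integer sums and counts keep the update integer-only, as suited to
# an embedded integer core.

K = 3          # cells per pixel (assumed)
THRESH = 10    # match tolerance (assumed)

class PixelModel:
    def __init__(self):
        self.cells = []  # list of [running_sum, count] pairs

    def update(self, value):
        """Return True if `value` matches the background, else False."""
        for cell in self.cells:
            mean = cell[0] // cell[1]          # integer running mean
            if abs(value - mean) <= THRESH:
                cell[0] += value
                cell[1] += 1
                return cell[1] > 2             # enough support: background
        if len(self.cells) < K:
            self.cells.append([value, 1])
        else:
            # evict the cell with the least support to make room
            self.cells.remove(min(self.cells, key=lambda c: c[1]))
            self.cells.append([value, 1])
        return False                           # unmatched: foreground

p = PixelModel()
for v in [100, 101, 99, 100]:
    last = p.update(v)
assert last is True            # a stable value converges to background
assert p.update(200) is False  # an outlier is flagged as foreground
```

Keeping only small per-cell sums and counts is what makes this family of models far cheaper in storage than a Mixture of Gaussians, which must maintain floating-point means, variances, and weights per mode.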
20

A knowledge-based machine vision system for automated industrial web inspection /

Cho, Tai-Hoon, January 1991 (has links)
Thesis (Ph. D.)--Virginia Polytechnic Institute and State University, 1991. / Vita. Abstract. Includes bibliographical references (leaves 184-191). Also available via the Internet.
