About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

Markov random fields in visual reconstruction : a transputer-based multicomputer implementation

Siksik, Ola January 1990
Markov Random Fields (MRFs) are used in computer vision as an effective method for reconstructing a function from noisy or sparse data, or for integrating early vision processes to label physical discontinuities. The MRF formalism is attractive because the assumptions used can be stated explicitly in the energy function. The drawbacks of such models have been the computational complexity of the implementation and the difficulty of estimating the model parameters. In this thesis, the deterministic approximation to MRF models derived by Girosi and Geiger [10] is investigated, and following that approach, a MIMD-based algorithm is developed and implemented on a network of T800 transputers under the Trollius operating system. A serial version of the algorithm has also been implemented on a SUN 4 under Unix. The network of transputers is configured as a 2-dimensional mesh of processors (currently 16, configured as a 4 x 4 mesh), and the input partitioning method is used to distribute the original image across the network. The implementation of the algorithm is described, and the suitability of the transputer for image-processing tasks is discussed. The algorithm was applied to a number of images for edge detection and produced good results in a small number of iterations. / Science, Faculty of / Computer Science, Department of / Graduate
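The input partitioning scheme mentioned above can be sketched as follows. This is an illustrative reconstruction, not the thesis code: each processor in a P x Q mesh receives a contiguous tile of the image, with tile sizes differing by at most one pixel when the image does not divide evenly.

```python
# Sketch (not the thesis implementation): input partitioning of an
# H x W image across a P x Q processor mesh, as for the 4 x 4
# transputer network described above.

def partition_image(height, width, mesh_rows, mesh_cols):
    """Map each mesh coordinate (r, c) to its tile
    (row_start, row_end, col_start, col_end), end-exclusive."""
    def split(n, parts):
        base, extra = divmod(n, parts)
        bounds, start = [], 0
        for i in range(parts):
            size = base + (1 if i < extra else 0)
            bounds.append((start, start + size))
            start += size
        return bounds

    row_bounds = split(height, mesh_rows)
    col_bounds = split(width, mesh_cols)
    return {(r, c): (rb[0], rb[1], cb[0], cb[1])
            for r, rb in enumerate(row_bounds)
            for c, cb in enumerate(col_bounds)}

tiles = partition_image(512, 512, 4, 4)
print(tiles[(0, 0)])  # (0, 128, 0, 128)
```

In a real mesh implementation each tile would also carry a halo of boundary pixels so neighbouring processors can exchange edge values between iterations; that exchange is omitted here.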
522

Computer vision based method for electrode slip measurement in a submerged arc-furnace

Jordan, Dominic Timothy 04 June 2012
M. Ing. / The purpose of this study is to investigate the use of computer vision techniques to measure electrode slip. The study investigates a potential location for camera placement in the furnace housing, as well as computer vision algorithms that could be used to solve the problem. A slip measurement algorithm is then designed, implemented and tested. The implemented algorithm is based on the manual slip measurement technique, measuring the relative displacement between the electrode and the slip arm. The algorithm uses SURF invariant features to extract electrode and slip-arm features in one frame and match them to the SURF features of the next frame. Scene calibration is then used to relate the pixel slip measurement to a metric distance. The experimental results showed that there is scope for applying computer vision techniques to the slip measurement problem using a single HD camera. However, there is room for improvement, and recommendations and future work are also discussed.
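The relative-displacement idea behind the measurement can be sketched in a few lines. This is an illustration only, not the thesis code: the keypoint coordinates stand in for matched SURF features, and `mm_per_pixel` stands in for the scene calibration.

```python
# Illustrative sketch: slip measured as the electrode's displacement
# minus the slip arm's displacement between two frames, converted
# from pixels to millimetres via an assumed calibration factor.

def median(values):
    s = sorted(values)
    n = len(s)
    return (s[n // 2] + s[(n - 1) // 2]) / 2

def slip_mm(electrode_prev, electrode_next, arm_prev, arm_next, mm_per_pixel):
    """Median vertical displacement of electrode features, relative
    to the slip-arm features, in millimetres. Points are (x, y)."""
    d_electrode = median([b[1] - a[1] for a, b in zip(electrode_prev, electrode_next)])
    d_arm = median([b[1] - a[1] for a, b in zip(arm_prev, arm_next)])
    return (d_electrode - d_arm) * mm_per_pixel

# Electrode moved 12 px down, arm moved 2 px down, 0.5 mm per pixel:
e0 = [(100, 200), (120, 240), (140, 260)]
e1 = [(x, y + 12) for x, y in e0]
a0 = [(300, 180), (320, 210)]
a1 = [(x, y + 2) for x, y in a0]
print(slip_mm(e0, e1, a0, a1, 0.5))  # 5.0
```

Taking the median over matched features gives some robustness to the occasional bad match, which is why it is used here instead of the mean.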
523

irRotate - Automatic Screen Rotation Based on Face Orientation using Infrared Cameras

January 2020
abstract: This work addresses the problem of incorrect screen rotations on handheld devices. Two new methods which improve upon previous works are explored. The first method uses an infrared camera to capture and detect the user's face position and orient the display accordingly. The second method uses gyroscope and accelerometer data as input to a machine learning model to classify correct and incorrect rotations. Experiments show that these methods achieve an overall success rate of 67% for the first and 92% for the second, which reaches a new high for this performance category. The paper also discusses logistical and legal reasons for implementing this feature in an end-user product from a business perspective. Lastly, the monetary incentive behind a feature like irRotate in a consumer device is discussed, and related patents are explored. / Dissertation/Thesis / Masters Thesis Computer Science 2020
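For contrast with the learned approaches above, the naive baseline they improve upon can be stated in a few lines: classify orientation purely from the accelerometer's gravity vector. This is a generic sketch, not the thesis model; the learned methods exist precisely because this rule fails when, for example, the user is lying down.

```python
import math

# Naive gravity-vector baseline for screen rotation (illustrative).
# ax, ay are the gravity components in the device's screen plane.

def orientation(ax, ay):
    angle = math.degrees(math.atan2(ax, ay)) % 360
    if angle < 45 or angle >= 315:
        return "portrait"
    if angle < 135:
        return "landscape-left"
    if angle < 225:
        return "portrait-upside-down"
    return "landscape-right"

print(orientation(0.0, 9.81))   # portrait
print(orientation(9.81, 0.0))   # landscape-left
```

A face-orientation or ML-based method replaces this hard angle threshold with evidence about where the user actually is, which is what lifts the success rate reported above.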
524

The common self-polar triangle of conics and its applications to computer vision

Huang, Haifei 08 August 2017
In projective geometry, the common self-polar triangle has often been used to discuss the location relationship of two planar conics. However, there has been little research on the properties of the common self-polar triangle, especially when the two planar conics are special conics. In this thesis, the properties of the common self-polar triangle of special conics are studied and their applications to computer vision are presented. Specifically, the applications focus on two topics in computer vision: camera calibration and homography estimation. This thesis first studies the common self-polar triangle of two sphere images, and that of two concentric circles, and exploits its properties to solve the problem of camera calibration. For sphere images, by recovering the constraints on the imaged absolute conic from the vertices of the common self-polar triangles, a novel method for estimating the intrinsic parameters of a camera from an image of three spheres has been developed. For concentric circles, it is shown in this thesis that the imaged circle center and the vanishing line of the support plane can be recovered simultaneously. Furthermore, many orthogonal vanishing points can be obtained from the common self-polar triangles. Consequently, two novel calibration methods have been developed. Based on our method, one of the state-of-the-art calibration methods has been well interpreted. This thesis then studies the common self-polar triangle of two separate ellipses and applies it to planar homography estimation. For two images of the separate ellipses, by inducing four corresponding lines from the common self-polar triangle, a homography estimation method has been developed without ambiguity. Based on these results, a special case of planar rectification with two identical circles is also studied. It is shown that, given one image of the two identical circles, the vanishing line of the support plane can be recovered from the common self-polar triangle, and the imaged circle points can be obtained by intersecting the vanishing line with the image of the circle. Accordingly, a novel method for estimating the rectification homography has been developed, and experimental results show the feasibility of our method.
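The common self-polar triangle of two conics in general position has a standard computation (this sketch is the textbook construction, not necessarily the thesis's own): its vertices are the generalized eigenvectors of the conic pencil, i.e. the solutions of C1 v = λ C2 v, where C1 and C2 are the 3x3 symmetric matrices of the conics. Each pair of vertices is then conjugate with respect to both conics.

```python
import numpy as np

# Vertices of the common self-polar triangle of two conics C1, C2
# (3x3 symmetric matrices, general position): the generalized
# eigenvectors of the pencil, C1 v = lambda * C2 v.

def common_self_polar_triangle(C1, C2):
    _, V = np.linalg.eig(np.linalg.inv(C2) @ C1)
    return [V[:, i].real for i in range(3)]  # homogeneous vertices

# Ellipse x^2 + 2y^2 = 1 and the unit circle centred at (3, 0)
# (two separate conics, so all three eigenvalues are real):
C1 = np.array([[1.0, 0, 0], [0, 2.0, 0], [0, 0, -1.0]])
C2 = np.array([[1.0, 0, -3.0], [0, 1.0, 0], [-3.0, 0, 8.0]])
v = common_self_polar_triangle(C1, C2)

# Self-polarity check: distinct vertices are conjugate w.r.t. both conics.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(float(v[i] @ C1 @ v[j])) < 1e-9
        assert abs(float(v[i] @ C2 @ v[j])) < 1e-9
```

The conjugacy follows directly from the eigenvalue relation: if C1 vi = λi C2 vi with distinct eigenvalues, then (λi − λj) viᵀ C2 vj = 0, forcing viᵀ C2 vj = 0 and hence viᵀ C1 vj = 0 as well.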
525

Dependency modeling for information fusion with applications in visual recognition

Ma, Jinhua 01 January 2013
No description available.
526

Aplikace stereovize a počítačového vidění / Computer vision and stereo vision

Bubák, Martin January 2014
This thesis describes the use of the Computer Vision System Toolbox to create computer vision applications. The work begins with background research on image acquisition and its representation using colour models, followed by a description of epipolar geometry and of the Computer Vision System Toolbox itself. The next section deals with the setup of the Basler cameras used and with processing of the captured images. A description follows of how to create applications for object detection, and finally applications for creating depth maps are presented.
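Behind any stereo depth map is the standard rectified-pair relation, which can be stated in a few lines (a generic sketch with illustrative numbers, not the thesis's camera setup): for focal length f in pixels and baseline B in metres, a disparity of d pixels corresponds to depth Z = f·B/d.

```python
# Standard stereo disparity-to-depth relation (illustrative values).

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # no parallax: point at infinity (or a mismatch)
    return focal_px * baseline_m / disparity_px

# f = 800 px, B = 0.12 m: a 24-px disparity puts the point 4 m away.
print(depth_from_disparity(24, 800, 0.12))  # 4.0
```

The relation also shows why depth resolution degrades with distance: depth varies with 1/d, so a one-pixel disparity error matters far more for distant points than for near ones.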
527

Integrated inspection system in manufacturing: vision systems

Smith, Barry S. 27 April 2010 (has links)
Master of Science
528

Shape-Tailored Invariant Descriptors for Segmentation

Khan, Naeemullah 11 1900
Segmentation is one of the first steps in the human visual system, helping us see the world around us. Humans pre-attentively segment scenes into regions of unique texture in around 10-20 ms. In this thesis, we address the problem of segmentation by grouping dense pixel-wise descriptors. Our work is based on the fact that human vision has a feed-forward and a feed-back loop: low-level features are used to refine high-level features in the forward pass, and higher-level feature information is used to refine the low-level features in the backward pass. Most vision algorithms are based only on a feed-forward loop, where low-level features are used to construct and refine high-level features, without the feedback loop. We introduce "Shape-Tailored Local Descriptors", where high-level feature information (the region approximation) is used to update the low-level features, i.e. the descriptors, and the low-level descriptor information is used to update the segmentation regions. Shape-Tailored Local Descriptors are dense local descriptors tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e. the regions, is not known a priori, we propose a joint problem for Shape-Tailored Local Descriptors and segmentation (regions). Furthermore, since natural scenes consist of multiple objects, which may exhibit different visual textures at different scales, we propose a multi-scale approach to segmentation. We experimented with both a set of discrete scales and a continuum of scales; both resulted in state-of-the-art performance. Lastly, we examine the nature of the features selected: we tried handcrafted color and gradient channels, and we also introduce an algorithm for learning optimal descriptors within segmentation approaches. 
In the final part of this thesis we introduce techniques for unsupervised learning of descriptors for segmentation. This avoids a problem of deep learning methods, which need huge amounts of training data to train their networks: the optimal descriptors are learned, without any training data, on the fly during segmentation.
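The core "shape-tailored" principle, aggregating descriptor data only within the region of interest, can be illustrated minimally (a toy sketch, not the thesis's PDE-based descriptors): a plain unrestricted average mixes data from both sides of a region boundary, whereas a region-restricted average does not.

```python
import numpy as np

# Toy illustration of region-restricted aggregation: data outside the
# region mask never contaminates the region's descriptor statistic.

def region_mean(channel, mask):
    """Mean of `channel` over pixels where `mask` is True."""
    return channel[mask].mean()

# Two textures split down the middle of an 8x8 image:
img = np.zeros((8, 8))
img[:, 4:] = 1.0
left = np.zeros((8, 8), bool)
left[:, :4] = True

print(region_mean(img, left))    # 0.0  (right half never leaks in)
print(region_mean(img, ~left))   # 1.0
print(img.mean())                # 0.5  (unrestricted average mixes both)
```

In the thesis the aggregation is done per pixel and per channel over an arbitrarily shaped, evolving region rather than as one global mean, but the leakage-free property shown here is the same motivation.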
529

/Maybe/Probably/Certainly

Häggström, Frida January 2020
This project is an experimentation with and examination of contemporary computer vision and machine learning, with an emphasis on machine-generated imagery and text, as well as object identification. In other words, this is a study of how computers and machines are learning to see and recognize the world. Computer vision is a kind of visual communication that we rarely think of as being designed. With an emphasis on written and visual research, this project aims to comprehend what exactly goes into the creation of machine-generated imagery and artificial vision systems. I have spent the last couple of months looking through the lens of cameras, object identification apps and generative neural networks in order to try to understand how AI perceives reality. This resulted in a mixed-media story about images and vision, told through the perspective of a fictional AI character. Visit www.maybe-probably.com to view the project.
530

Product-Matching mithilfe künstlicher neuronaler Netze basierend auf Match-R-CNN / Product matching using artificial neural networks based on Match-R-CNN

Schmidt-Dichte, Stefan 15 June 2022
In this thesis, Match-R-CNN is analyzed and implemented from the perspective of product matching. Match-R-CNN is a framework that can be used to analyze clothing images; it was introduced by Ge et al. [GZW+19]. Product matching is the task of identifying two identical products. Methods of image processing and machine learning are explained, and the current state of research in related fields is discussed. It was possible to analyze the structure of Match-R-CNN, drawing on Ge et al. [GZW+19] and on discussions in the accompanying GitHub repository [git19]. Further work is required to conclusively evaluate the implementation. Contents: 1 Introduction; 2 Fundamentals (2.1 Image processing: 2.1.1 Edge detection, 2.1.2 Image convolution, 2.1.3 Implementation issues; 2.2 Convolutional neural networks: 2.2.1 Problems with conventional artificial neural networks, 2.2.2 Particularities of CNNs, 2.2.3 Architecture and hyperparameters, 2.2.4 Training of CNNs, 2.2.5 Recent findings; 2.3 Image similarity); 3 Related work (3.1 Clothing retrieval and detection, 3.2 Product matching, 3.3 Deep similarity); 4 Methodology and implementation (4.1 Dataset, 4.2 Data preparation, 4.3 Network architecture: 4.3.1 Feature network, 4.3.2 Matching network; 4.4 Training-pair generation strategy, 4.5 Matching-network training, 4.6 Experiments and intermediate results, 4.7 Results); 5 Conclusion; 6 Outlook; Bibliography; List of figures
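The matching decision at the heart of product matching can be reduced to a minimal sketch (this is not Match-R-CNN's matching network, only the underlying idea): embed each product image as a feature vector and decide "same product" by thresholding a similarity score. The embeddings and threshold below are made-up stand-ins for feature-network outputs.

```python
import math

# Toy product-matching decision: cosine similarity between assumed
# feature embeddings, thresholded to a same-product verdict.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def is_match(u, v, threshold=0.9):
    return cosine(u, v) >= threshold

shirt_a = [0.8, 0.1, 0.6]
shirt_b = [0.82, 0.12, 0.58]   # near-duplicate image of the same shirt
dress   = [0.1, 0.9, 0.2]

print(is_match(shirt_a, shirt_b))  # True
print(is_match(shirt_a, dress))    # False
```

Match-R-CNN itself learns this decision with a trained matching network rather than a fixed threshold, which is exactly what the training-pair generation strategy in chapter 4 is for.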
