271

Self location of vision guided autonomous mobile robots.

January 2000 (has links)
Lau Ah Wai, Calvin.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 108-111).
Abstracts in English and Chinese.

Contents:
  1 Introduction (p.1)
  1.1 An Overview (p.4)
  1.1.1 Robot Self Location (p.4)
  1.1.2 Robot Navigation (p.10)
  1.2 Scope of Thesis (p.12)
  2 Theory (p.13)
  2.1 Coordinate Systems Transformations (p.13)
  2.2 Problem Specification (p.21)
  2.3 The Process of Stereo Vision (p.22)
  2.3.1 Disparity and Depth (p.22)
  2.3.2 Vertical Edge Detection and Extraction (p.25)
  2.3.3 Line Matching Using Dynamic Programming (p.27)
  3 Mobile Robot Self Location (p.29)
  3.1 Physical Points by Stereo Reconstruction (p.29)
  3.1.1 Physical Points Refinement (p.32)
  3.2 Motion Uncertainties Modeling (p.33)
  3.3 Improved Physical Point Estimations by EKF (p.36)
  3.4 Matching Physical Points to Model by Geometric Hashing (p.40)
  3.4.1 Similarity Invariant (p.44)
  3.5 Initial Pose Estimation (p.47)
  3.5.1 Initial Pose Refinement (p.50)
  3.6 Self Location Using Other Camera Combinations (p.50)
  4 Improvements to Self Location Using Bayesian Inference (p.55)
  4.1 Statistical Characteristics of Edges (p.57)
  4.2 Evidence at One Pixel (p.60)
  4.3 Evidence Over All Pixels (p.62)
  4.4 A Simplification of Geometric Hashing (p.62)
  4.4.1 Simplification of the Similarity Invariant (p.63)
  4.4.2 Translation Invariant (p.63)
  4.4.3 Simplification to the Hashing Table (p.65)
  5 Robot Navigation (p.67)
  5.1 Propagation of Motion Uncertainties to Estimated Pose (p.68)
  5.2 Expectation Map Derived from the CAD Model (p.70)
  6 Experimental Results (p.74)
  6.1 Results Using Simulated Environment (p.74)
  6.1.1 Results and Error Analysis (p.75)
  6.2 Results Using Real Environment (p.85)
  6.2.1 Camera Calibration Using Tsai's Algorithm (p.85)
  6.2.2 Pose Estimation by Geometric Hashing (p.88)
  6.2.3 Pose Estimation by Bayesian Inference and Geometric Hashing (p.90)
  6.2.4 Comparison of Self Location Approaches (p.92)
  6.2.5 Motion Tracking (p.93)
  7 Conclusion and Discussion (p.95)
  7.1 Conclusion and Discussion (p.95)
  7.2 Contributions (p.97)
  7.3 Subjects for Future Research (p.98)
  A Appendix (p.100)
  A.1 Extended Kalman Filter (p.100)
  A.2 Visualizing Uncertainty for 2D Points (p.105)
272

Accurate and fast stereo vision

Kordelas, Georgios January 2015 (has links)
Stereo vision from short-baseline image pairs is one of the most active research fields in computer vision. The estimation of dense disparity maps from stereo image pairs remains a challenging task, and there is still room to improve accuracy, minimize computational cost, and handle outliers, low-textured areas, repeated textures, disparity discontinuities and lighting variations more effectively. This PhD thesis presents two novel methodologies for stereo vision from short-baseline image pairs:

I. The first methodology combines three different cost metrics, defined using colour, the CENSUS transform and SIFT (Scale Invariant Feature Transform) coefficients. The selected cost metrics are aggregated with an adaptive-weights approach to calculate their corresponding cost volumes. The resulting cost volumes are merged into a combined one, following a novel two-phase strategy, which is further refined by semi-global optimization. A mean-shift segmentation-driven approach deals with outliers in the disparity maps. Additionally, low-textured areas are handled using disparity histogram analysis, which allows reliable disparity-plane fitting on these areas.

II. The second methodology relies on content-based guided image filtering and weighted semi-global optimization. Initially, the approach uses a pixel-based cost term that combines gradient, Gabor-feature and colour information. The pixel-based matching costs are filtered by guided image filtering using support windows of two different sizes, so that two filtered costs are estimated for each pixel; which of the two is finally assigned to a pixel depends on the local image content around it. The filtered cost volume is further refined by weighted semi-global optimization, which improves disparity accuracy. The handling of occluded areas is enhanced by a straightforward and time-efficient scheme.

The evaluation results show that both methodologies are very accurate, handling low-textured and occluded areas and disparity discontinuities efficiently; in addition, the second approach has very low computational complexity. Beyond these two short-baseline methodologies, the thesis also presents a novel methodology for generating 3D point clouds of good accuracy from wide-baseline stereo pairs.
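The CENSUS-based cost metric of methodology I can be illustrated with a minimal sketch (illustrative only, not the thesis code): the census transform encodes each pixel's local intensity ordering as a bit string, and the matching cost for a disparity hypothesis is the Hamming distance between the left code and the disparity-shifted right code. Borders wrap here for brevity; a real implementation would mask them and then aggregate costs with adaptive weights as the thesis describes.

```python
import numpy as np

def census_transform(img, win=3):
    # Encode each pixel as a bit string: one bit per window neighbour,
    # set when the neighbour is darker than the centre pixel.
    r = win // 2
    codes = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def census_cost(left, right, disparity):
    # Matching cost for one disparity hypothesis: Hamming distance between
    # the left census codes and the right codes shifted by `disparity`
    # (borders wrap here, which a real implementation would mask out).
    xor = census_transform(left) ^ np.roll(census_transform(right), disparity, axis=1)
    return np.vectorize(lambda v: bin(int(v)).count("1"))(xor)
```

For a right image that is an exact 2-pixel shift of the left, the cost at disparity 2 is zero everywhere, which is the behaviour a cost metric should have on perfectly matching content.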
273

Turing-completeness as medium : art, computers and intentionality

Davis, Paul B. January 2018 (has links)
This PhD is a practice-based study of how the computer functions in art practice, taking on the notion of a fine-art computing "medium". Current research, while sometimes referencing the computer as a potential art medium, mostly defines it non-explicitly as a type of "hybrid" media device or some sort of "multimedia" machine. These terms leave the existence of a specific computing medium in art practice undefined, and have historically led analyses of artworks that employ computers to rely on critical frameworks that were either developed for earlier physical media or have no structural similarities to computers. Such approaches can fail to examine the unique ontological issues that arise, especially at a structural level, when using a computer to produce art. To achieve a formal description of a hitherto loosely defined (or undefined) art medium, the research employs a range of critical and theoretical material from fields outside art practice, chief among them Alan Turing's definition of an "a(utomatic)-machine" (nowadays called a "Turing machine") from his 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Turing described a machine which can "simulate" any other computing machine, including all modern computers. His machine is here used to propose a 'Turing-complete medium' of art, of which every computer is a computationally equivalent member. Using this perspective, the research investigated the 'Turing-complete medium' by developing creative practice in the form of individual works that explored specific aspects of computing systems. The research then engaged in a written analysis of the practice, again employing the concept of a 'Turing-complete medium', working towards a medium-specific critique of any art made with any computer. In foregrounding the nature and functions of computing machines, the research explores how these elements can be made intrinsic to our interpretations of computer-based art, while remaining aware of the limitations of medium-specific critique as exposed within the modernist tradition.
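Turing's a-machine, on which the proposed 'Turing-complete medium' rests, can be sketched in a few lines (a minimal illustration of the concept, not anything from the thesis): a finite table of rules drives a head that reads, writes and moves over an unbounded tape, halting when no rule applies.

```python
def run_a_machine(rules, tape, state="start", max_steps=10_000):
    # Minimal sketch of Turing's a-machine. `rules` maps
    # (state, symbol) -> (symbol_to_write, head_move, next_state).
    cells = dict(enumerate(tape))   # sparse tape; missing cells read as blank (None)
    pos = 0
    for _ in range(max_steps):
        key = (state, cells.get(pos))
        if key not in rules:        # no applicable rule: the machine halts
            break
        write, move, state = rules[key]
        cells[pos] = write
        pos += move
    return [cells[i] for i in sorted(cells)]

# A one-state machine that inverts a binary input, halting at the first blank.
invert = {
    ("start", 0): (1, +1, "start"),
    ("start", 1): (0, +1, "start"),
}
```

Running `run_a_machine(invert, [1, 0, 1, 1])` yields `[0, 1, 0, 0]`; a universal machine differs only in having a rule table rich enough to interpret any other machine's rule table written on its tape.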
274

Modelling visual objects regardless of depictive style

Wu, Qi January 2015 (has links)
Visual object classification and detection are major problems in contemporary computer vision. State-of-the-art algorithms allow thousands of visual objects to be learned and recognized under a wide range of variations, including lighting changes, occlusion and viewpoint. However, only a small fraction of the literature addresses the problem of variation in depictive style (photographs, drawings, paintings etc.). This is a challenging gap, but the ability to process images of all depictive styles, and not just photographs, has potential value across many applications. This thesis aims to narrow this gap. Our studies begin with primitive shapes. We provide experimental evidence that primitive shapes such as 'triangle', 'square', or 'circle' can be found and used to fit regions in segmentations. These shapes correspond to those used by artists as they draw. We then assume that an object class can be characterised by the qualitative shape of object parts and their structural arrangement. Hence, a novel hierarchical graph representation labelled with primitive shapes is proposed. The model is learnable and is able to classify over a broad range of depictive styles. However, as more depictive styles are added, how to capture the wide variation in visual appearance exhibited by visual objects across them remains an open question. We believe that the use of a graph with multiple labels to represent visual words that exist in possibly discontinuous regions of a feature space can be helpful.
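The idea of labelling segmented regions with primitive shapes can be sketched crudely (the fill-ratio heuristic below is an illustrative assumption, not the thesis's actual fitting method): compare a region's area to the area each primitive would cover when inscribed in the region's axis-aligned bounding box, and pick the closest.

```python
import math

# Expected fill ratio (region area / bounding-box area) for each primitive
# inscribed in its bounding box: square fills it, a circle fills pi/4 of it,
# a triangle on the box's base fills half of it.
IDEAL_FILL = {"square": 1.0, "circle": math.pi / 4, "triangle": 0.5}

def primitive_label(region_area, box_w, box_h):
    # Pick the primitive whose ideal fill ratio is nearest the region's.
    fill = region_area / (box_w * box_h)
    return min(IDEAL_FILL, key=lambda name: abs(IDEAL_FILL[name] - fill))
```

A 10x10-box region of area 78.5 (close to pi/4 of 100) would be labelled 'circle', one of area 96 'square', and one of area 52 'triangle'; a serious fit, as in the thesis, would of course use the region's actual geometry rather than area alone.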
275

Trick of the Light: A Game Engine for Exploring Novel Fog of War Mechanics

Mason, Zackery 29 April 2018 (has links)
Trick of the Light is an experiment in strategic game design based on imperfect information in a unique fog-of-war setting. A hybrid of the real-time-strategy, role-playing-game and roguelike genres, the game challenges players to maintain an expansive base system without being able to see anything beyond their own limited vision radius. All units, allied or enemy, maintain private memories of what they have seen, and must directly exchange information to stay up to date. The player acts as commander, making decisions and giving orders while dealing with adversaries, sabotage and misinformation. Testing was done to see whether the new concepts could be understood in-game and garner interest for further development; results were positive in both cases, despite complaints about having less direct control over allies.
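The private-memory mechanic described above amounts to each unit carrying its own map of sightings, merged only on direct contact; a minimal sketch (hypothetical structure, not the game's actual code):

```python
def exchange_memories(a, b):
    # Merge two units' private sighting memories in place, keeping the
    # most recent report (highest tick) for each map position. Units that
    # never meet never learn what the other has seen.
    merged = {}
    for pos in a.keys() | b.keys():
        merged[pos] = max(a.get(pos, -1), b.get(pos, -1))
    a.clear(); a.update(merged)
    b.clear(); b.update(merged)

scout = {(3, 4): 5}              # saw something at (3, 4) on tick 5
base = {(3, 4): 2, (0, 0): 1}    # has an older report of the same spot
exchange_memories(scout, base)   # both now hold {(3, 4): 5, (0, 0): 1}
```

Keeping the newest tick per position is one simple resolution rule; it is also what makes stale or deliberately falsified reports (sabotage, misinformation) possible to model, since a unit has no way to verify another's memory.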
276

The components of colour vision

Rogers, Marie Rosanna January 2018 (has links)
No description available.
277

A learning-by-example method for reducing BDCT compression artifacts in high-contrast images.

January 2004 (has links)
Wang, Guangyu.
Thesis submitted in: December 2003.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 70-75).
Abstracts in English and Chinese.

Contents:
  1 Introduction (p.1)
  1.1 BDCT Compression Artifacts (p.1)
  1.2 Previous Artifact Removal Methods (p.3)
  1.3 Our Method (p.4)
  1.4 Structure of the Thesis (p.4)
  2 Related Work (p.6)
  2.1 Image Compression (p.6)
  2.2 A Typical BDCT Compression: Baseline JPEG (p.7)
  2.3 Existing Artifact Removal Methods (p.10)
  2.3.1 Post-Filtering (p.10)
  2.3.2 Projection onto Convex Sets (p.12)
  2.3.3 Learning by Examples (p.13)
  2.4 Other Related Work (p.14)
  3 Contamination as Markov Random Field (p.17)
  3.1 Markov Random Field (p.17)
  3.2 Contamination as MRF (p.18)
  4 Training Set Preparation (p.22)
  4.1 Training Images Selection (p.22)
  4.2 Bit Rate (p.23)
  5 Artifact Vectors (p.26)
  5.1 Formation of Artifact Vectors (p.26)
  5.2 Luminance Remapping (p.29)
  5.3 Dominant Implication (p.29)
  6 Tree-Structured Vector Quantization (p.32)
  6.1 Background (p.32)
  6.1.1 Vector Quantization (p.32)
  6.1.2 Tree-Structured Vector Quantization (p.33)
  6.1.3 K-Means Clustering (p.34)
  6.2 TSVQ in Artifact Removal (p.35)
  7 Synthesis (p.39)
  7.1 Color Processing (p.39)
  7.2 Artifact Removal (p.40)
  7.3 Selective Rejection of Synthesized Values (p.42)
  8 Experimental Results (p.48)
  8.1 Image Quality Assessments (p.48)
  8.1.1 Peak Signal-Noise Ratio (p.48)
  8.1.2 Mean Structural SIMilarity (p.49)
  8.2 Performance (p.50)
  8.3 How Size of Training Set Affects the Performance (p.52)
  8.4 How Bit Rates Affect the Performance (p.54)
  8.5 Comparisons (p.56)
  9 Conclusion (p.61)
  A Color Transformation (p.63)
  B Image Quality (p.64)
  B.1 Image Quality vs. Quantization Table (p.64)
  B.2 Image Quality vs. Bit Rate (p.66)
  C Arti User's Manual (p.68)
  Bibliography (p.70)
278

Correspondence-free stereo vision.

January 2004 (has links)
by Yuan, Ding.
Thesis submitted in: December 2003.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 69-71).
Abstracts in English and Chinese.

Contents:
  Abstract (p.i); 摘要 (p.iii); Acknowledgements (p.v); Table of Contents (p.vi); List of Figures (p.viii); List of Tables (p.xii)
  1 Introduction (p.1)
  2 Previous Work (p.5)
  2.1 Traditional Stereo Vision (p.5)
  2.1.1 Epipolar Constraint (p.7)
  2.1.2 Some Constraints Based on Properties of Scene Objects (p.9)
  2.1.3 Two Classes of Algorithms for Correspondence Establishment (p.10)
  2.2 Correspondenceless Stereo Vision Algorithm for Single Planar Surface Recovery under Parallel-axis Stereo Geometry (p.13)
  3 Correspondence-Free Stereo Vision under General Stereo Setup (p.19)
  3.1 Correspondence-free Stereo Vision Algorithm for Single Planar Surface Recovery under General Stereo Geometry (p.20)
  3.1.1 Algorithm in Its Basic Form (p.21)
  3.1.2 Algorithm Combined with Epipolar Constraint (p.25)
  3.1.3 Algorithm Combined with SVD and Robust Estimation (p.36)
  3.2 Correspondence-free Stereo Vision Algorithm for Multiple Planar Surface Recovery (p.45)
  3.2.1 Plane Hypothesis (p.46)
  3.2.2 Plane Confirmation and 3D Reconstruction (p.48)
  3.2.3 Experimental Results (p.50)
  3.3 Experimental Results on Correspondence-free vs. Correspondence-based Methods (p.60)
  4 Conclusion and Future Work (p.65)
  Appendix (p.67)
  Bibliography (p.69)
279

Color simulation: the activation of perceptual color representation in language comprehension. / CUHK electronic theses & dissertations collection

January 2009 (has links)
In Study III, two event-related-potential (ERP) experiments show a clear modulation from the preceding object noun on the early ERP components of the following object picture that are known to be associated with perceptual processes, providing by far the strongest evidence that semantic processing cannot fully account for the congruence effects supposed to indicate color representation.

In summary, color representation is found to be present not only for color information implied by the global phrase context but also for color information irrelevant to it, and not only for words with direct and concrete associations with color but also for words where such associations are indirect and less concrete. ERP results also provide strong support that color simulation does occur at the perceptual level, as argued by embodied cognition theorists, and cannot be attributed entirely to semantic processing. Briefly, the present research provides a rich dataset and valuable insights deepening the understanding of perceptual color simulation in phrases and words.

Results from all three experiments in the first study robustly demonstrated the activation of perceptual representation of color information, i.e. the presence of color simulation, in phrase processing. Results from prime-target stimulus-onset-asynchrony (SOA) manipulation provided time-course information on the relative activation of the two types of colors.

The present research was conducted to give a systematic treatment of color simulation in language processing and so enrich understanding of perceptual simulation. Two main questions are addressed: what is the time course of color activation in language units such as noun phrases and abstract words, and do linguistic simulation and perceptual simulation (especially the unconscious part) of color co-exist in language understanding?

The second study, involving three experiments, extended the findings of Study I by demonstrating color simulation in an even smaller and more abstract linguistic unit: single words. Results from the SOA manipulation indicate the most rapid activation of color information for words psychologically related to color, followed by activation of color for object nouns, with the slowest color activation for verbs.

Lu, Aitao.
Adviser: Wai Chan.
Source: Dissertation Abstracts International, Volume: 72-11, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 89-99).
Electronic reproduction. Hong Kong: Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI]: ProQuest Information and Learning, [201-]. System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese; some appendices include Chinese characters.
280

A computer stereo vision system: using horizontal intensity line segments bounded by edges.

January 1996 (has links)
by Chor-Tung Yau.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1996.
Includes bibliographical references (leaves 106-110).

Contents:
  1 Introduction (p.1)
  1.1 Objectives (p.1)
  1.2 Factors of Depth Perception in Human Visual System (p.2)
  1.2.1 Oculomotor Cues (p.2)
  1.2.2 Pictorial Cues (p.3)
  1.2.3 Movement-Produced Cues (p.4)
  1.2.4 Binocular Disparity (p.5)
  1.3 What Cues to Use in Computer Vision? (p.6)
  1.4 The Process of Stereo Vision (p.8)
  1.4.1 Depth and Disparity (p.8)
  1.4.2 The Stereo Correspondence Problem (p.10)
  1.4.3 Parallel and Nonparallel Axis Stereo Geometry (p.11)
  1.4.4 Feature-based and Area-based Stereo Matching (p.12)
  1.4.5 Constraints (p.13)
  1.5 Organization of this Thesis (p.16)
  2 Related Work (p.18)
  2.1 Marr and Poggio's Computational Theory (p.18)
  2.2 Cooperative Methods (p.19)
  2.3 Dynamic Programming (p.21)
  2.4 Feature-based Methods (p.24)
  2.5 Area-based Methods (p.26)
  3 Overview of the Method (p.30)
  3.1 Considerations (p.31)
  3.2 Brief Description of the Method (p.33)
  4 Preprocessing of Images (p.35)
  4.1 Edge Detection (p.35)
  4.1.1 The Laplacian of Gaussian (∇²G) Operator (p.37)
  4.1.2 The Canny Edge Detector (p.40)
  4.2 Extraction of Horizontal Line Segments for Matching (p.42)
  5 The Matching Process (p.45)
  5.1 Reducing the Search Space (p.45)
  5.2 Similarity Measure (p.47)
  5.3 Treating Inclined Surfaces (p.49)
  5.4 Ambiguity Caused by Occlusion (p.51)
  5.5 Matching Segments of Different Length (p.53)
  5.5.1 Cases without Partial Occlusion (p.53)
  5.5.2 Cases with Partial Occlusion (p.55)
  5.5.3 Matching Scheme to Handle All the Cases (p.56)
  5.5.4 Matching Scheme for Segments of Same Length (p.57)
  5.6 Assigning Disparity Values (p.58)
  5.7 Another Case of Partial Occlusion Not Handled (p.60)
  5.8 Matching in Two Passes (p.61)
  5.8.1 Problems Encountered in the First Pass (p.61)
  5.8.2 Second Pass of Matching (p.63)
  5.9 Refinement of Disparity Map (p.64)
  6 Coarse-to-fine Matching (p.67)
  6.1 The Wavelet Representation (p.67)
  6.2 Coarse-to-fine Matching (p.71)
  7 Experimental Results and Analysis (p.74)
  7.1 Experimental Results (p.74)
  7.1.1 Image Pair 1 - The Pentagon Images (p.74)
  7.1.2 Image Pair 2 - Random Dot Stereograms (p.79)
  7.1.3 Image Pair 3 - The Rubik Block Images (p.81)
  7.1.4 Image Pair 4 - The Stack of Books Images (p.85)
  7.1.5 Image Pair 5 - The Staple Box Images (p.87)
  7.1.6 Image Pair 6 - Circuit Board Image (p.91)
  8 Conclusion (p.94)
  A The Wavelet Transform (p.96)
  A.1 Fourier Transform and Wavelet Transform (p.96)
  A.2 Continuous Wavelet Transform (p.97)
  A.3 Discrete Time Wavelet Transform (p.99)
  B Acknowledgements to Testing Images (p.100)
  B.1 The Circuit Board Image (p.100)
  B.2 The Stack of Books Image (p.101)
  B.3 The Rubik Block Images (p.104)
  Bibliography (p.106)
