  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
671

Active haptic exploration for 3D shape reconstruction.

January 1996 (has links)
by Fung Wai Keung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 146-151). / Acknowledgements --- p.viii / Abstract --- p.1 / Chapter 1 --- Overview --- p.3 / Chapter 1.1 --- Tactile Sensing in Human and Robot --- p.4 / Chapter 1.1.1 --- Human Hands and Robotic Hands --- p.4 / Chapter 1.1.2 --- Mechanoreceptors in skin and Tactile Sensor Arrays --- p.7 / Chapter 1.2 --- Motivation --- p.12 / Chapter 1.3 --- Objectives --- p.13 / Chapter 1.4 --- Related Work --- p.14 / Chapter 1.4.1 --- Using Vision Alone --- p.15 / Chapter 1.4.2 --- Integration of Vision and Touch --- p.15 / Chapter 1.4.3 --- Using Touch Sensing Alone --- p.17 / Chapter 1.4.3.1 --- Ronald S. Fearing's Work --- p.18 / Chapter 1.4.3.2 --- Peter K. Allen's Work --- p.22 / Chapter 1.5 --- Outline --- p.26 / Chapter 2 --- Geometric Models --- p.27 / Chapter 2.1 --- Introduction --- p.27 / Chapter 2.2 --- Superquadrics --- p.27 / Chapter 2.2.1 --- 2D Superquadrics --- p.27 / Chapter 2.2.2 --- 3D Superquadrics --- p.29 / Chapter 2.3 --- Model Recovery of Superquadric Models --- p.31 / Chapter 2.3.1 --- Problem Formulation --- p.31 / Chapter 2.3.2 --- Least Squares Optimization --- p.33 / Chapter 2.4 --- Free-Form Deformations --- p.34 / Chapter 2.4.1 --- Bernstein Basis --- p.36 / Chapter 2.4.2 --- B-Spline Basis --- p.38 / Chapter 2.5 --- Other Geometric Models --- p.41 / Chapter 2.5.1 --- Generalized Cylinders --- p.41 / Chapter 2.5.2 --- Hyperquadrics --- p.42 / Chapter 2.5.3 --- Polyhedral Models --- p.44 / Chapter 2.5.4 --- Function Representation --- p.45 / Chapter 3 --- Sensing Strategy --- p.54 / Chapter 3.1 --- Introduction --- p.54 / Chapter 3.2 --- Sensing Algorithm --- p.55 / Chapter 3.2.1 --- Assumption of objects --- p.55 / Chapter 3.2.2 --- Haptic Exploration Procedures --- p.56 / Chapter 3.3 --- Contour Tracing --- p.58 / Chapter 3.4 --- Tactile Sensor Data Preprocessing --- p.59 / Chapter 3.4.1 --- Data Transformation and Sensor 
Calibration --- p.60 / Chapter 3.4.2 --- Noise Filtering --- p.61 / Chapter 3.5 --- Curvature Determination --- p.64 / Chapter 3.6 --- Step Size Determination --- p.73 / Chapter 4 --- 3D Shape Reconstruction --- p.80 / Chapter 4.1 --- Introduction --- p.80 / Chapter 4.2 --- Correspondence Problem --- p.81 / Chapter 4.2.1 --- Affine Invariance Property of B-splines --- p.84 / Chapter 4.2.2 --- Point Inversion Problem --- p.87 / Chapter 4.3 --- Parameter Triple Interpolation --- p.91 / Chapter 4.4 --- 3D Object Shape Reconstruction --- p.94 / Chapter 4.4.1 --- Heuristic Approach --- p.94 / Chapter 4.4.2 --- Closed Contour Recovery --- p.97 / Chapter 4.4.3 --- Control Lattice Recovery --- p.102 / Chapter 5 --- Implementation --- p.105 / Chapter 5.1 --- Introduction --- p.105 / Chapter 5.2 --- Implementation Tool - MATLAB --- p.105 / Chapter 5.2.1 --- Optimization Toolbox --- p.107 / Chapter 5.2.2 --- Splines Toolbox --- p.108 / Chapter 5.3 --- Geometric Model Implementation --- p.109 / Chapter 5.3.1 --- FFD Examples --- p.111 / Chapter 5.4 --- Shape Reconstruction Implementation --- p.112 / Chapter 5.5 --- 3D Model Reconstruction Examples --- p.120 / Chapter 5.5.1 --- Example 1 --- p.120 / Chapter 5.5.2 --- Example 2 --- p.121 / Chapter 6 --- Conclusion --- p.128 / Chapter 6.1 --- Future Work --- p.129 / Appendix --- p.133 / Bibliography --- p.146
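The superquadric recovery described in this record (Chapters 2.2-2.3) reduces shape modeling to least-squares fitting of an inside-outside function. As a concrete handle on that idea, here is a minimal Python sketch of the 2D superellipse case — this is not the thesis's MATLAB code, and the parameter names are illustrative:

```python
import math

def superellipse_point(a, b, eps, theta):
    """Sample a point on a 2D superquadric (superellipse) at angle theta.
    a, b are the axis lengths; eps controls squareness (eps = 1 gives an ellipse)."""
    c, s = math.cos(theta), math.sin(theta)
    x = a * math.copysign(abs(c) ** eps, c)
    y = b * math.copysign(abs(s) ** eps, s)
    return x, y

def inside_outside(a, b, eps, x, y):
    """Implicit inside-outside function: equals 1 on the surface,
    < 1 inside, > 1 outside. Model recovery minimizes the squared
    deviation of this value from 1 over all contact points."""
    return abs(x / a) ** (2.0 / eps) + abs(y / b) ** (2.0 / eps)
```

Least-squares recovery then searches over (a, b, eps) to make `inside_outside` as close to 1 as possible at every measured contact point.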
672

Mosaicking video with parallax.

January 2001 (has links)
Cheung Man-Tai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 81-84). / Abstracts in English and Chinese. / List of Figures --- p.vi / List of Tables --- p.viii / Chapter Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Background --- p.1 / Chapter 1.1.1. --- Parallax --- p.2 / Chapter 1.2. --- Literature Review --- p.3 / Chapter 1.3. --- Research Objective --- p.6 / Chapter 1.4. --- Organization of Thesis --- p.6 / Chapter Chapter 2. --- The 3-Image Algorithm --- p.1 / Chapter 2.1. --- Projective Reconstruction --- p.10 / Chapter 2.2. --- Epipolar Geometry and Fundamental Matrix --- p.11 / Chapter 2.3. --- Determine the Projective Mapping --- p.12 / Chapter 2.3.1. --- Conditions for Initial Matches --- p.13 / Chapter 2.3.2. --- Obtaining the Feature Correspondence --- p.17 / Chapter 2.4. --- Registering Pixel Element --- p.21 / Chapter 2.4.1. --- Single Homography Approach --- p.22 / Chapter 2.4.2. --- Multiple Homography Approach --- p.23 / Chapter 2.4.3. --- Triangular Patches Clustering --- p.24 / Chapter 2.4.3.1. --- Delaunay Triangulation --- p.25 / Chapter 2.5. --- Mosaic Construction --- p.29 / Chapter Chapter 3. --- The n-Image Algorithm --- p.31 / Chapter Chapter 4. --- The Uneven-Sampling-Rate n-Image Algorithm --- p.34 / Chapter 4.1. --- Varying the Reference-Target Images Separation --- p.35 / Chapter 4.2. --- Varying the Target-Intermediate Images Separation --- p.38 / Chapter Chapter 5. --- Experiments --- p.43 / Chapter 5.1. --- Experimental Setup --- p.43 / Chapter 5.1.1. --- Measuring the Performance --- p.43 / Chapter 5.2. --- Experiments on the 3-Image Algorithm --- p.44 / Chapter 5.2.1. --- Planar Scene --- p.44 / Chapter 5.2.2. --- Comparison between a Global Parametric Transformation and the 3-Image Algorithm --- p.46 / Chapter 5.2.3. --- Generic Scene --- p.49 / Chapter 5.2.4. --- The Triangular Patches Clustering against the Multiple Homography Approach --- p.52 / Chapter 5.3. 
--- Experiments on the n-Image Algorithm --- p.56 / Chapter 5.3.1. --- Initial Experiment on the n-Image Algorithm --- p.56 / Chapter 5.3.2. --- Another Experiment on the n-Image Algorithm --- p.58 / Chapter 5.3.3. --- the n-Image Algorithm over a Longer Image Stream --- p.61 / Chapter 5.4. --- Experiments on the Uneven-Sampling-Rate n-Image Algorithm --- p.65 / Chapter 5.4.1. --- Varying Reference-Target Images Separation --- p.65 / Chapter 5.4.2. --- Varying Target-Intermediate Images Separation --- p.69 / Chapter 5.4.3. --- Comparing the Uneven-Sampling-Rate n-Image Algorithm and Global Transformation Method --- p.73 / Chapter Chapter 6. --- Conclusion and Discussion --- p.76 / Bibliography --- p.81
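Both the single-homography and multiple-homography registration steps discussed above come down to mapping pixels through a 3x3 projective transform (one global matrix in the first case, one per triangular patch in the second). A small self-contained sketch, with hypothetical matrix values, of that core operation:

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography H (row-major nested lists).
    The homogeneous coordinate w normalizes the projective result."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v
```

In the multiple-homography approach, the mosaic builder would select which H to apply based on the Delaunay triangle containing the pixel.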
673

Visual modeling and analysis of articulated motions = 關節運動的視覺模型製作及分析 / Guan jie yun dong de shi jue mo xing zhi zuo ji fen xi

January 2001 (has links)
Lee Kwok Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 144-148). / Text in English; abstracts in English and Chinese. / Lee Kwok Wai. / Abstract --- p.i / 摘要 --- p.ii / Acknowledgements --- p.iii / Table of Content --- p.iv / List of Figures & Tables --- p.viii / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Motion Symmetry and its Application in Classification of Articulated Motions --- p.5 / Chapter 2.1 --- Introduction --- p.7 / Chapter 2.1.1 --- Motivation & Related Works --- p.7 / Chapter 2.1.2 --- Transformation Matrix for a Rigid Motion --- p.8 / Chapter 2.2 --- Review of Motion Estimation --- p.9 / Chapter 2.2.1 --- Motion Estimation & Motion Fields --- p.9 / Chapter 2.2.2 --- Motion Field Construction from Optical Flow --- p.10 / Chapter 2.2.3 --- Motion Field Construction from Image Matching --- p.12 / Chapter 2.3 --- Motion Symmetry --- p.13 / Chapter 2.3.1 --- Problem Definition --- p.13 / Chapter 2.3.2 --- Definitions of Transformation Symmetry & Anti-symmetry --- p.14 / Chapter 2.3.2.1 --- Translation Symmetry --- p.15 / Chapter 2.3.2.2 --- Translation Anti-symmetry --- p.16 / Chapter 2.3.2.3 --- Rotation Symmetry --- p.17 / Chapter 2.3.2.4 --- Rotation Anti-symmetry --- p.18 / Chapter 2.3.2.5 --- Scaling Symmetry --- p.18 / Chapter 2.3.2.6 --- Scaling Anti-symmetry --- p.19 / Chapter 2.3.3 --- Transformation Quasi-symmetry & Quasi-anti-symmetry --- p.19 / Chapter 2.3.4 --- Symmetric Transform of a Transformation --- p.19 / Chapter 2.3.5 --- Symmetric Motions & Periodic Symmetric Motions --- p.20 / Chapter 2.3.6 --- Transformation Vector Fields of Symmetric Motions --- p.20 / Chapter 2.4 --- Detection of Motion Symmetry --- p.23 / Chapter 2.4.1 --- Model-based Motion Parameter Analysis --- p.24 / Chapter 2.4.2 --- Transformation Matrices Analysis --- p.25 / Chapter 2.4.3 --- Simultaneous Resultant Transformation Matrix Analysis --- p.31 / Chapter 2.4.4 --- Motion Symmetry as a Continuous Feature --- p.38 / Chapter 2.5 --- Illustrations & Results --- p.39 / Chapter 2.5.1 --- Experiment 1: Randomly Generated Data --- p.39 / Chapter 2.5.2 --- Experiment 2: Symmetry Axis for a 3D object --- p.41 / Chapter 2.6 --- Summary & Discussion --- p.44 / Chapter 2.7 --- Appendices --- p.47 / Chapter 2.7.1 --- Appendix 1: Reflection of a Point about a Line --- p.47 / Chapter 2.7.2 --- Appendix 2: Symmetric Transform of a Transformation --- p.49 / Chapter Chapter 3 --- Motion Representation by Feedforward Neural Networks --- p.53 / Chapter 3.1 --- Introduction --- p.54 / Chapter 3.2 --- Motion Modeling in Animation --- p.57 / Chapter 3.2.1 --- Parameterized Motion Representation --- p.58 / Chapter 3.2.2 --- Problems of Motion Analysis --- p.62 / Chapter 3.3 --- Multi-value Regression by Feedforward Neural Networks --- p.66 / Chapter 3.3.1 --- Review of Multi-value Regression Methods --- p.66 / Chapter 3.3.2 --- Problem Definition --- p.68 / Chapter 3.3.3 --- Proposed Methods --- p.69 / Chapter 3.3.3.1 --- Modular Networks with Verification Module --- p.69 / Chapter (a) --- Validation by Decoding --- p.70 / Chapter (b) --- Validation by inverse mapping --- p.71 / Chapter 3.3.3.2 --- Partition Algorithm --- p.73 / Chapter 3.4 --- Illustration & Results --- p.76 / Chapter 3.4.1 --- Cylindrical Spiral Function --- p.76 / Chapter 3.4.2 --- Elongated Cylindrical Spiral Function --- p.79 / Chapter 3.4.3 --- Cylindrical Spiral Surface --- p.83 / Chapter 3.4.4 --- S-curve Data --- p.87 / Chapter 3.4.5 --- Inverse Sine function --- p.89 / Chapter 3.5 --- Motion Analysis --- p.91 / Chapter 3.6 --- Summary & Discussion --- p.94 / Chapter Chapter 4 --- Motion Representation by Recurrent Neural Networks --- p.98 / Chapter 4.1 --- Introduction --- p.99 / Chapter 4.1.1 --- Recurrent Neural Networks (RNNs) --- p.99 / Chapter 4.1.2 --- Fully & Partially Recurrent Neural Networks --- p.101 / Chapter 4.1.3 --- Back-propagation Training Algorithm --- p.105 / Chapter 4.2 --- Sequence Encoding by Recurrent Neural Networks --- p.106 / Chapter 4.2.1 --- Random Binary Sequence --- p.107 / Chapter 4.2.2 --- Angular Positions of Clock Needles --- p.108 / Chapter 4.2.3 --- Absolute Positions of Clock Needles' Tips --- p.109 / Chapter 4.2.4 --- Henon Time Series --- p.111 / Chapter 4.2.5 --- Ikeda Time Series --- p.112 / Chapter 4.2.6 --- Single-Input-Single-Output (SISO) Non-linear System --- p.114 / Chapter 4.2.7 --- Circular Trajectory --- p.115 / Chapter 4.2.8 --- Number Trajectories --- p.118 / Chapter 4.3 --- Animation Generation by Recurrent Neural Networks --- p.123 / Chapter 4.3.1 --- Storage & Generation of Animations --- p.127 / Chapter 4.3.2 --- Interpolation between two Motion Segments --- p.127 / Chapter 4.4 --- Motion Analysis by Recurrent Neural Networks --- p.129 / Chapter 4.5 --- Experimental Results --- p.130 / Chapter 4.5.1 --- Network Training --- p.130 / Chapter 4.5.2 --- Motion Interpolation --- p.134 / Chapter 4.5.3 --- Motion Recognition --- p.135 / Chapter 4.6 --- Summary & Discussion --- p.138 / Chapter Chapter 5 --- Conclusion --- p.141 / References --- p.144
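The periodic symmetric motions this thesis defines (Chapter 2.3.5) can be illustrated with rigid 2D rotations: a motion is n-periodic when composing its transformation n times yields the identity. A toy sketch of that check — the thesis works with general 3D transformation matrices, so this is only the simplest instance:

```python
import math

def rot2(theta):
    """2x2 rotation matrix as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul2(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_periodic(theta, n, tol=1e-9):
    """A rotation by theta is an n-periodic symmetric motion if applying
    it n times returns the identity transformation."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        M = matmul2(M, rot2(theta))
    return all(abs(M[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(2) for j in range(2))
```

For example, a 60-degree rotation is 6-periodic, while a rotation by 1 radian is not 5-periodic.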
674

Qualifying 4D deforming surfaces by registered differential features

Lukins, Timothy Campbell January 2009 (has links)
Recent advances in 4D data acquisition systems in the field of Computer Vision have opened up many exciting new possibilities for the interpretation of complex moving surfaces. However, a fundamental problem is that this has also led to a huge increase in the volume of data to be handled. Making sense of this wealth of information is therefore a core issue to be addressed if such data are to be applied to more complex tasks. Similar problems have historically been encountered in the analysis of 3D static surfaces, leading to the extraction of higher-level features based on analysis of the differential geometry. Our central hypothesis is that there exists a compact set of similarly useful descriptors for the analysis of dynamic 4D surfaces. The primary advantage of considering localised changes is that they provide a naturally useful set of invariant characteristics. We seek a constrained set of terms - a vocabulary - for describing all types of deformation. Using this vocabulary, we show how to describe what the surface is doing more effectively, and thereby enable better characterisation and, consequently, more effective visualisation and comparison. This thesis investigates this claim. We adopt a bottom-up approach to the problem, in which we acquire raw data from a newly constructed commercial 4D data capture system developed by our industrial partners. A crucial first step resolves the temporal non-linear registration between instances of the captured surface. We employ a combined optical/range flow to guide a conformation over a sequence. By using aligned colour information alongside the depth data we improve this estimation in the case of local surface motion ambiguities. By employing a KLT/thin-plate-spline method we also seek to preserve global deformation for regions with no estimate. We then extend aspects of differential geometry theory for existing static surface analysis to the temporal domain.
Our initial formulation considers the possible intrinsic transitions among the set of shapes defined by variations in the magnitudes of the principal curvatures. This gives rise to a total of 15 basic types of deformation, and the change in the combined magnitudes also indicates the extent of change. We then extend this to surface characteristics associated with expansion, rotation and shear, deriving a full set of differential features. Our experimental results include qualitative assessment of deformations for short episodic registered sequences of both synthetic and real data. The higher-level distinctions extracted are furthermore a useful first step towards parsimonious feature extraction, which we proceed to demonstrate can serve as a basis for further analysis. We ultimately evaluate this approach by considering shape transition features occurring within the human face, and its applicability to identification and expression analysis tasks.
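The shape classes whose transitions the thesis counts are derived from the signs and magnitudes of the principal curvatures. A sketch of the standard static classification this builds on — the taxonomy below is the usual Gaussian/mean-curvature one, not the author's exact 15-type deformation set, which arises from transitions between such labels over time:

```python
def shape_type(k1, k2, tol=1e-6):
    """Classify a local surface patch from its principal curvatures k1, k2
    using Gaussian curvature K and mean curvature H (with the convention
    that negative mean curvature corresponds to a convex peak)."""
    K = k1 * k2            # Gaussian curvature
    H = 0.5 * (k1 + k2)    # mean curvature
    if abs(K) < tol and abs(H) < tol:
        return "planar"
    if abs(K) < tol:
        return "ridge" if H < 0 else "valley"   # parabolic (cylindrical)
    if K > 0:
        return "peak" if H < 0 else "pit"       # elliptic
    return "saddle"                             # hyperbolic
```

Tracking how a patch's label changes between registered frames gives one discrete deformation event per transition.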
675

Video-based face alignment using efficient sparse and low-rank approach.

January 2011 (has links)
Wu, King Keung. / "August 2011." / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 119-126). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview of Face Alignment Algorithms --- p.1 / Chapter 1.1.1 --- Objectives --- p.1 / Chapter 1.1.2 --- Motivation: Photo-realistic Talking Head --- p.2 / Chapter 1.1.3 --- Existing methods --- p.5 / Chapter 1.2 --- Contributions --- p.8 / Chapter 1.3 --- Outline of the Thesis --- p.11 / Chapter 2 --- Sparse Signal Representation --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- Problem Formulation --- p.15 / Chapter 2.2.1 --- l0-norm minimization --- p.15 / Chapter 2.2.2 --- Uniqueness --- p.16 / Chapter 2.3 --- Basis Pursuit --- p.18 / Chapter 2.3.1 --- From l0-norm to l1-norm --- p.19 / Chapter 2.3.2 --- l0-l1 Equivalence --- p.20 / Chapter 2.4 --- l1-Regularized Least Squares --- p.21 / Chapter 2.4.1 --- Noisy case --- p.22 / Chapter 2.4.2 --- Over-determined systems of linear equations --- p.22 / Chapter 2.5 --- Summary --- p.24 / Chapter 3 --- Sparse Corruptions and Principal Component Pursuit --- p.25 / Chapter 3.1 --- Introduction --- p.25 / Chapter 3.2 --- Sparse Corruptions --- p.26 / Chapter 3.2.1 --- Sparse Corruptions and l1-Error --- p.26 / Chapter 3.2.2 --- l1-Error and Least Absolute Deviations --- p.28 / Chapter 3.2.3 --- l1-Regularized l1-Error --- p.29 / Chapter 3.3 --- Robust Principal Component Analysis (RPCA) and Principal Component Pursuit --- p.31 / Chapter 3.3.1 --- Principal Component Analysis (PCA) and RPCA --- p.31 / Chapter 3.3.2 --- Principal Component Pursuit --- p.33 / Chapter 3.4 --- Experiments of Sparse and Low-rank Approach on Surveillance Video --- p.34 / Chapter 3.4.1 --- Least Squares --- p.35 / Chapter 3.4.2 --- l1-Regularized Least Squares --- p.35 / Chapter 3.4.3 --- l1-Error --- p.36 / Chapter 3.4.4 --- l1-Regularized l1-Error --- p.36 / Chapter 3.5 --- Summary --- p.37 / Chapter 4 --- Split Bregman Algorithm for l1-Problem --- p.45 / Chapter 4.1 --- Introduction --- p.45 / Chapter 4.2 --- Bregman Distance --- p.46 / Chapter 4.3 --- Bregman Iteration for Constrained Optimization --- p.47 / Chapter 4.4 --- Split Bregman Iteration for l1-Regularized Problem --- p.50 / Chapter 4.4.1 --- Formulation --- p.51 / Chapter 4.4.2 --- Advantages of Split Bregman Iteration --- p.52 / Chapter 4.5 --- Fast l1 Algorithms --- p.54 / Chapter 4.5.1 --- l1-Regularized Least Squares --- p.54 / Chapter 4.5.2 --- l1-Error --- p.55 / Chapter 4.5.3 --- l1-Regularized l1-Error --- p.57 / Chapter 4.6 --- Summary --- p.58 / Chapter 5 --- Face Alignment Using Sparse and Low-rank Decomposition --- p.61 / Chapter 5.1 --- Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images (RASL) --- p.61 / Chapter 5.2 --- Problem Formulation --- p.62 / Chapter 5.2.1 --- Theory --- p.62 / Chapter 5.2.2 --- Algorithm --- p.64 / Chapter 5.3 --- Direct Extension of RASL: Multi-RASL --- p.66 / Chapter 5.3.1 --- Formulation --- p.66 / Chapter 5.3.2 --- Algorithm --- p.67 / Chapter 5.4 --- Matlab Implementation Details --- p.68 / Chapter 5.4.1 --- Preprocessing --- p.70 / Chapter 5.4.2 --- Transformation --- p.73 / Chapter 5.4.3 --- Jacobian Ji --- p.74 / Chapter 5.5 --- Experiments --- p.75 / Chapter 5.5.1 --- Qualitative Evaluations Using Small Dataset --- p.76 / Chapter 5.5.2 --- Large Dataset Test --- p.81 / Chapter 5.5.3 --- Conclusion --- p.85 / Chapter 5.6 --- Sensitivity analysis on selection of references --- p.87 / Chapter 5.6.1 --- References from consecutive frames --- p.88 / Chapter 5.6.2 --- References from RASL-aligned images --- p.91 / Chapter 5.7 --- Summary --- p.92 / Chapter 6 --- Extension of RASL for video: One-by-One Approach --- p.96 / Chapter 6.1 --- One-by-One Approach --- p.96 / Chapter 6.1.1 --- Motivation --- p.97 / Chapter 6.1.2 --- Algorithm --- p.97 / Chapter 6.2 --- Choices of
Optimization --- p.101 / Chapter 6.2.1 --- l1-Regularized Least Squares --- p.101 / Chapter 6.2.2 --- l1-Error --- p.102 / Chapter 6.2.3 --- l1-Regularized l1-Error --- p.103 / Chapter 6.3 --- Experiments --- p.104 / Chapter 6.3.1 --- Evaluation for Different l1 Algorithms --- p.104 / Chapter 6.3.2 --- Conclusion --- p.108 / Chapter 6.4 --- Exploiting Property of Video --- p.109 / Chapter 6.5 --- Summary --- p.110 / Chapter 7 --- Conclusion and Future Work --- p.112 / Chapter A --- Appendix --- p.117 / Bibliography --- p.119
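At the core of the split Bregman iterations surveyed in this thesis (Chapter 4) is the soft-thresholding (shrinkage) operator, which solves the scalar l1-regularized subproblem in closed form at each step. A minimal sketch of that well-known operator:

```python
import math

def shrink(x, lam):
    """Soft-thresholding (shrinkage): the closed-form minimizer of
    lam*|d| + 0.5*(d - x)^2 over d, applied element-wise inside each
    split Bregman iteration."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def shrink_vec(v, lam):
    """Element-wise shrinkage of a vector (a plain Python list)."""
    return [shrink(x, lam) for x in v]
```

Values with magnitude below the threshold are set exactly to zero, which is what makes the operator produce sparse solutions.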
676

Systematic generation of datasets and benchmarks for modern computer vision

Malireddi, Sri Raghu 03 April 2019 (has links)
Deep Learning is dominant in the field of computer vision, thanks to its high performance, which is driven by large annotated datasets and proper evaluation benchmarks. However, two important areas of computer vision, depth-based hand segmentation and local features, respectively lack a large, well-annotated dataset and a benchmark protocol that properly demonstrates practical performance. Therefore, in this thesis, we focus on these two problems. For hand segmentation, we create a novel, systematic way to easily produce automatic semantic segmentation annotations for large datasets. We achieve this with the help of traditional computer vision techniques and a minimal hardware setup of one RGB-D camera and two distinctly colored skin-tight gloves. Our method allows easy creation of large-scale datasets with high annotation quality. For local features, we create a new, modern benchmark that evaluates their different aspects, specifically wide-baseline stereo matching and Multi-View Stereo (MVS), in a more practical setup, namely Structure-from-Motion (SfM). We believe that through our new benchmark we will be able to spur research on learned local features in a more practical direction. In this respect, the benchmark developed for the thesis will be used to host a challenge on local features. / Graduate
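The two-glove idea described above reduces segmentation labeling to per-pixel color classification. A toy nearest-color sketch of that step — the thesis's actual pipeline uses calibrated color models and depth from an RGB-D camera, so the representation and threshold here are assumptions:

```python
def segment_hands(pixels, glove_colors, max_dist=60.0):
    """Label each RGB pixel with the index of the nearest reference glove
    color, or -1 for background pixels farther than max_dist from every
    glove color (Euclidean distance in RGB; a stand-in for a calibrated
    color model)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    labels = []
    for p in pixels:
        best = min(range(len(glove_colors)), key=lambda i: d2(p, glove_colors[i]))
        labels.append(best if d2(p, glove_colors[best]) <= max_dist ** 2 else -1)
    return labels
```

With two distinctly colored gloves, the two label classes directly become per-hand segmentation masks.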
677

Shadow Patching: Exemplar-Based Shadow Removal

Hintze, Ryan Sears 01 December 2017 (has links)
Shadow removal is an important problem for both artists and algorithms. Previous methods handle some shadows well but, because they rely on the shadowed data, perform poorly in cases with severe degradation. Image-completion algorithms can completely replace severely degraded shadowed regions, and perform well with smaller-scale textures, but often fail to reproduce larger-scale macrostructure that may still be visible in the shadowed region. This paper provides a general framework that leverages degraded (e.g., shadowed) data to guide the image completion process by extending the objective function commonly used in current state-of-the-art image completion energy-minimization methods. This approach achieves realistic shadow removal even in cases of severe degradation and could be extended to other types of localized degradation.
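The extended objective described above adds a guidance term that scores candidate patches against the degraded shadowed data. A schematic sketch of such a per-patch energy, using flattened intensity lists and a hypothetical weighting — this is an illustration of the general idea, not the paper's exact formulation:

```python
def patch_cost(candidate, context, shadow_guide, lam=0.5):
    """Energy for one candidate patch: agreement with the surrounding
    known pixels (context) plus a guidance term keeping the completion
    close to the macrostructure still visible in the shadowed region.
    Patches are equal-length flat lists; lam weights the guidance term."""
    def ssd(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return ssd(candidate, context) + lam * ssd(candidate, shadow_guide)

def best_patch(candidates, context, shadow_guide, lam=0.5):
    """Pick the exemplar patch minimizing the combined energy."""
    return min(candidates, key=lambda c: patch_cost(c, context, shadow_guide, lam))
```

With lam = 0 this degenerates to ordinary exemplar-based completion; larger lam forces the result to respect the shadowed structure.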
678

Capturer la géométrie dynamique vivante dans les cages / Capturing life-like dynamic geometry into cages

Savoye, Yann 19 December 2012 (has links)
Reconstructing, synthesizing, analyzing and re-using dynamic shapes captured from the moving world is a recent challenge that remains open. In this thesis, we address the problem of extracting, acquiring and re-using a non-rigid parametrization for video-based animation. The main objective is to preserve the global and local properties of the captured surface without an articulated skeleton, using a limited number of controllable, flexible and reusable parameters. To solve this problem, we rely on a skin-detached dimension reduction based on the cage-representation paradigm. As a result, we demonstrate the strength of a geometric cage subspace for encoding highly non-rigid surfaces. / Reconstructing, synthesizing, analyzing and re-using dynamic shapes that are captured from the real world in motion is a recent and outstanding challenge. Nowadays, highly-detailed animations of live-actor performances are increasingly easier to acquire and 3D Video has received considerable attention in visual media production. In this thesis, we address the problem of extracting or acquiring and then reusing non-rigid parametrization for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving global and local captured properties of the surface with a limited number of controllable, flexible and reusable parameters. To solve this challenge, we directly rely on a skin-detached dimension reduction thanks to the well-known cage-based paradigm. Indeed, to the best of our knowledge, this dissertation opens the field of cage-based performance capture. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm onto surfaces. To do this, we introduce a cage inversion process with user-specified screen-space constraints. Secondly, we convert non-rigid animated surfaces into a sequence of estimated optimal cage parameters via a process of Cage-based Animation Conversion. Building on this reskinning procedure, we also develop a well-formed Animation Cartoonization algorithm for multi-view data in terms of cage-based surface exaggeration and video-based appearance stylization. Thirdly, motivated by the relaxation of prior knowledge on the data, we propose a promising unsupervised approach to perform Iterative Cage-based Geometric Registration. This novel registration scheme deals with reconstructed target point clouds obtained from multi-view video recording, in conjunction with a static and wrinkled template mesh. Above all, we demonstrate the strength of cage-based subspaces in order to reparametrize highly non-rigid dynamic surfaces, without the need of secondary deformations. In addition, we state and discuss conclusions and several limitations of our cage-based strategies applied to life-like dynamic surfaces, captured for vision-oriented applications. Finally, a variety of potential directions and open suggestions for further work are outlined.
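Cage-based parametrization, as used throughout this dissertation, binds each surface point to a small set of cage vertices through fixed coordinates, so deforming the coarse cage deforms the dense surface. A minimal 2D sketch with barycentric weights over a triangular cage — real cage systems use generalized coordinates such as mean value or harmonic coordinates over closed cages, so this is only the simplest instance:

```python
def cage_deform(weights, cage):
    """Reconstruct a bound point as a fixed affine combination of cage
    vertices: the weights never change, so moving the few cage vertices
    moves every bound surface point accordingly."""
    x = sum(w * cx for w, (cx, _) in zip(weights, cage))
    y = sum(w * cy for w, (_, cy) in zip(weights, cage))
    return x, y
```

Inverse cage-based modeling, as in the thesis, runs this the other way: given desired point positions, it solves for the cage vertex positions.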
679

Deep Probabilistic Models for Camera Geo-Calibration

Zhai, Menghua 01 January 2018 (has links)
The ultimate goal of image understanding is to translate visual images into numerical or symbolic descriptions of the scene that are helpful for decision making. By determining when, where, and in which direction a picture was taken, geo-calibration makes it possible to use imagery to understand the world and how it changes over time. Current models for geo-calibration are mostly deterministic, and in many cases fail to model the inherent uncertainty when the image content is ambiguous. Furthermore, without proper modeling of the uncertainty, subsequent processing can yield overly confident predictions. To address these limitations, we propose a probabilistic model for camera geo-calibration using deep neural networks. While our primary contribution is geo-calibration, we also show that learning to geo-calibrate a camera allows us to implicitly learn to understand the content of the scene.
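One common way to make a calibration output probabilistic, in the spirit of the dissertation's argument, is to predict a distribution over discretized headings and read its entropy as an uncertainty measure. A toy stand-in — the binning and the use of a plain softmax are assumptions here, not the dissertation's architecture:

```python
import math

def heading_distribution(logits):
    """Numerically stable softmax over discretized compass-heading bins,
    turning raw network scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(p):
    """Shannon entropy in nats; higher values signal an ambiguous image
    where a deterministic model would be overly confident."""
    return -sum(q * math.log(q) for q in p if q > 0)
```

An ambiguous scene yields a flat distribution (high entropy); a scene with strong directional cues yields a peaked one (low entropy).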
680

Real time object detection on a Raspberry Pi / Objektdetektering i realtid på en Raspberry Pi

Gunnarsson, Adam January 2019 (has links)
With the recent advancement of deep learning, the performance of object detection techniques has greatly increased in both speed and accuracy. This has made it possible to run highly accurate object detection in real time on modern desktop computer systems. Recently, there has been a growing interest in developing smaller and faster deep neural network architectures suited for embedded devices. This thesis explores the suitability of running object detection on the Raspberry Pi 3, a popular embedded computer board. Two controlled experiments are conducted in which two state-of-the-art object detection models, SSD and YOLO, are evaluated for accuracy and speed. The results show that the SSD model slightly outperforms YOLO in both speed and accuracy, but with the low processing power that the current generation of Raspberry Pi has to offer, neither performs well enough to be viable in applications where high speed is necessary.
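Comparisons such as the SSD-versus-YOLO experiments above typically score detections by intersection-over-union (IoU) against ground-truth boxes before computing accuracy metrics. A minimal sketch of that standard measure:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner tuples; 1.0 means a perfect match and
    0.0 means no overlap."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a fixed threshold such as 0.5.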
