1 |
Interaction of Different Modules in Depth Perception: Stereo and Shading. Bulthoff, Heinrich H., Mallot, Hanspeter A. 01 May 1987 (has links)
A method has been developed to measure the perceived depth of computer-generated images of simple solid objects. Computer graphic techniques allow independent control of different depth cues (stereo, shading, and texture) and thereby enable the investigator to study psychophysically the interaction of modules for depth perception. Accumulation of information from shading and stereo, and vetoing of depth-from-shading by edge information, have been found. Cooperativity and other types of interaction are discussed. If intensity edges are missing, as in a smoothly shaded surface, the image intensities themselves could be used for stereo matching. The results are compared with computer vision algorithms, both for single modules and for their integration in 3D vision.
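The accumulation-and-veto interaction described in this abstract can be sketched as a toy fusion rule. The weights, arrays, and the hard veto below are illustrative assumptions, not the abstract's actual psychophysical model:

```python
import numpy as np

def fuse_depth(d_stereo, d_shading, edge_mask, w_stereo=0.6, w_shading=0.4):
    """Accumulate depth estimates from stereo and shading; where strong
    intensity edges are present, let the edge-based (stereo) depth veto
    the shading estimate entirely."""
    fused = w_stereo * d_stereo + w_shading * d_shading  # accumulation
    fused[edge_mask] = d_stereo[edge_mask]               # veto near edges
    return fused

# Tiny example: two 2x2 depth maps and an edge mask
d_stereo = np.array([[1.0, 2.0], [3.0, 4.0]])
d_shading = np.array([[1.5, 2.5], [2.0, 5.0]])
edges = np.array([[True, False], [False, True]])
fused = fuse_depth(d_stereo, d_shading, edges)
```

At edge pixels the result equals the stereo depth; elsewhere it is the weighted accumulation of both cues.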
2 |
Shape from Shading, Occlusion and Texture. Yuille, A.L. 01 May 1987 (has links)
Shape from Shading, Occlusion and Texture are three important sources of depth information. We review and summarize work done on these modules.
3 |
Shape from Shading and Photometric Stereo: Algorithmic Modification and Experiments. Prasad, Parikshit 31 March 2004 (has links)
No description available.
4 |
The Shape of Shading. Weinshall, Daphna 01 October 1990 (has links)
This paper discusses the relationship between the shape of the shading (the surface whose depth at each point equals the brightness in the image) and the shape of the original surface. I suggest the shading as an initial local approximation to shape, and discuss the scope of this approximation and what it may be good for. In particular, qualitative surface features, such as the sign of the Gaussian curvature, can in some cases be computed directly from the shading. Finally, a method is shown for computing the direction of the illuminant (assuming a single point light source) from shading on occluding contours.
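The claim that the sign of the Gaussian curvature can sometimes be read directly off the shading can be illustrated by treating the brightness image itself as a height field. This is only a sketch of the idea: `np.gradient` stands in for proper derivative filters, and the sign test uses the fact that for a graph surface the Gaussian curvature shares the sign of the Hessian determinant:

```python
import numpy as np

def gaussian_curvature_sign(intensity):
    """Treat the image intensity as a depth surface z = I(x, y) and return
    the sign of its Gaussian curvature. For a graph surface,
    K = (I_xx * I_yy - I_xy^2) / (1 + I_x^2 + I_y^2)^2,
    so the sign is that of the Hessian determinant."""
    Iy, Ix = np.gradient(intensity)     # np.gradient: (axis 0 = y, axis 1 = x)
    Ixy, Ixx = np.gradient(Ix)
    Iyy, Iyx = np.gradient(Iy)
    return np.sign(Ixx * Iyy - Ixy ** 2)

# A dome is elliptic (positive K), a saddle hyperbolic (negative K)
x = np.arange(7.0)
X, Y = np.meshgrid(x, x)
dome_sign = gaussian_curvature_sign(X ** 2 + Y ** 2)
saddle_sign = gaussian_curvature_sign(X ** 2 - Y ** 2)
```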
5 |
Robust Photo-topography by Fusing Shape-from-Shading and Stereo. Thompson, Clay Matthew 01 February 1993 (has links)
Methods for fusing two computer vision modules are discussed, and several example algorithms are presented to illustrate the variational approach to fusion. The example algorithms seek to determine planet topography given two images taken from two different locations under two different lighting conditions. The algorithms each employ a single cost function that combines the computer vision methods of shape-from-shading and stereo in different ways. The algorithms are closely coupled and take into account all the constraints of the photo-topography problem. They are run on four synthetic test image sets of varying difficulty.
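The "single cost function" idea can be sketched in one dimension. The Lambertian-slope and inverse-depth terms below are made-up stand-ins for the thesis's actual shape-from-shading and stereo constraints, chosen only so that consistent synthetic data makes the combined cost vanish at the true surface:

```python
import numpy as np

def sfs_term(z, img, light):
    # Brightness predicted from the surface slope (1-D Lambertian toy)
    p = np.gradient(z)
    predicted = light / np.sqrt(1.0 + p ** 2)
    return np.sum((predicted - img) ** 2)

def stereo_term(z, disparity):
    # Disparity taken as inversely proportional to depth (unit-baseline toy)
    return np.sum((1.0 / z - disparity) ** 2)

def combined_cost(z, img, light, disparity, lam=0.5):
    # One cost couples both modules, as in the photo-topography fusion
    return sfs_term(z, img, light) + lam * stereo_term(z, disparity)

# Consistent synthetic data: the combined cost is zero at the true surface
z_true = 2.0 + 0.1 * np.sin(np.linspace(0.0, np.pi, 32))
img = 1.0 / np.sqrt(1.0 + np.gradient(z_true) ** 2)
c0 = combined_cost(z_true, img, 1.0, 1.0 / z_true)
```

A fusion algorithm would then minimize `combined_cost` over `z`, so that both image brightness and disparity constrain the recovered topography simultaneously.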
6 |
A Modern Differential Geometric Approach to Shape from Shading. Saxberg, Bror V. H. 01 June 1989 (has links)
How the visual system extracts shape information from a single grey-level image can be approached by examining how the information about shape is contained in the image. This technical report considers the characteristic equations derived by Horn as a dynamical system. Certain image critical points generate dynamical system critical points. The stable and unstable manifolds of these critical points correspond to convex and concave solution surfaces, giving more general existence and uniqueness results. A new kind of highly parallel, robust shape from shading algorithm is suggested on neighborhoods of these critical points. The information at bounding contours in the image is also analyzed.
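The characteristic equations the report treats as a dynamical system are Horn's characteristic strip equations for the image irradiance equation E(x, y) = R(p, q); in their standard form (quoted here from the shape-from-shading literature, not from the report itself) they read:

```latex
\frac{dx}{ds} = R_p, \qquad
\frac{dy}{ds} = R_q, \qquad
\frac{dz}{ds} = p\,R_p + q\,R_q, \qquad
\frac{dp}{ds} = E_x, \qquad
\frac{dq}{ds} = E_y .
```

Viewed as a flow in (x, y, z, p, q), image critical points (where E_x = E_y = 0) become critical points of this system, which is the reading the report develops via their stable and unstable manifolds.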
7 |
Detection of interesting areas in images by using convexity and rotational symmetries. Karlsson, Linda January 2002 (has links)
There are several methods available to find areas of interest, but most fail at detecting such areas in cluttered scenes. In this paper two methods are presented and tested from a qualitative perspective. The first is the darg operator, which is used to detect three-dimensional convex or concave objects by calculating the derivative of the argument of the gradient in one direction for each of four rotated versions of the image. The four versions are thereafter added together in their original orientation. A multi-scale version is recommended to avoid the problem that the standard deviation of the Gaussians, combined with the derivatives, controls the scale of the object that is detected.

The second feature detected in this paper is rotational symmetries, found with the help of approximative polynomial expansion. This approach is used to minimize the number and sizes of the filters used for correlating a representation of the orientation with filters matching rotational symmetries of order 0, 1 and 2. With this method a particular type of rotational symmetry can be extracted by using both the order and the orientation of the result. To improve the method's selectivity, a normalized inhibition is applied to the result, so that when one of the resulting pixel values is high the other two are strongly suppressed.

Neither method is sufficient by itself to give a definite answer to whether the image contains an area of interest, since several other things share these types of features. They can, on the other hand, indicate where in the image the feature is found.
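The core quantity of the darg operator, the derivative of the argument of the gradient in one direction, can be sketched as follows. This is only a sketch of the idea, not the thesis's exact multi-scale, four-orientation operator; the identity d(arg g)/dx = (g_x * dg_y/dx - g_y * dg_x/dx) / |g|^2 is used to avoid angle wrap-around:

```python
import numpy as np

def darg_x(image, eps=1e-9):
    """Derivative along x of the argument (angle) of the image gradient."""
    gy, gx = np.gradient(image)          # gradient field (axis 0 = y, axis 1 = x)
    gy_x = np.gradient(gy, axis=1)       # x-derivative of gradient components
    gx_x = np.gradient(gx, axis=1)
    # Quotient-rule form of d(arctan2(gy, gx))/dx, robust to angle wrap-around
    return (gx * gy_x - gy * gx_x) / (gx ** 2 + gy ** 2 + eps)

# On a paraboloid the gradient is radial; analytically
# d(arg)/dx = -y / (x^2 + y^2), e.g. -4/25 = -0.16 at (x, y) = (3, 4)
x = np.arange(-10.0, 11.0)
X, Y = np.meshgrid(x, x)
d = darg_x(X ** 2 + Y ** 2)
```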
8 |
Multigrid Relaxation Methods and the Analysis of Lightness, Shading and Flow. Terzopoulos, Demetri 01 October 1984 (has links)
Image analysis problems, posed mathematically as variational principles or as partial differential equations, are amenable to numerical solution by relaxation algorithms that are local, iterative, and often parallel. Although they are well suited structurally for implementation on massively parallel, locally interconnected computational architectures, such distributed algorithms are seriously handicapped by an inherent inefficiency at propagating constraints between widely separated processing elements. Hence, they converge extremely slowly when confronted by the large representations necessary for low-level vision. Application of multigrid methods can overcome this drawback, as we established in previous work on 3-D surface reconstruction. In this paper, we develop efficient multiresolution iterative algorithms for computing lightness, shape-from-shading, and optical flow, and we evaluate the performance of these algorithms on synthetic images. The multigrid methodology that we describe is broadly applicable in low-level vision. Notably, it is an appealing strategy to use in conjunction with regularization analysis for the efficient solution of a wide range of ill-posed visual reconstruction problems.
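The multigrid idea, smoothing on the fine grid and eliminating the slowly propagating smooth error on a coarse grid, can be sketched with a standard two-grid cycle for the 1-D model problem -u'' = f. This is a textbook sketch, not the paper's multiresolution vision algorithms:

```python
import numpy as np

def smooth(u, f, h, iters=3, w=2.0 / 3.0):
    # Weighted-Jacobi relaxation for -u'' = f with zero Dirichlet boundaries
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def coarse_solve(rc, hc):
    # Exact tridiagonal solve of the coarse-grid problem (interior unknowns)
    m = len(rc) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / hc ** 2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid_cycle(u, f, h):
    """One cycle: pre-smooth, restrict the residual, solve the coarse
    problem, prolongate the correction, post-smooth. Grid size 2^k + 1."""
    u = smooth(u.copy(), f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
    rc = np.zeros(len(u) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    ec = coarse_solve(rc, 2 * h)
    u = u + np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # prolongate
    return smooth(u, f, h)

# -u'' = pi^2 sin(pi x) on [0, 1] has solution u = sin(pi x)
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(8):
    u = two_grid_cycle(u, f, h)
```

A few cycles reduce the error to the discretization level, where plain Jacobi alone would need thousands of sweeps, which is exactly the inefficiency at propagating constraints that the paper targets.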
9 |
Constructing a Depth Map from Images. Ikeuchi, Katsushi 01 August 1983 (has links)
This paper describes two methods for constructing a depth map from images. Each method has two stages. First, one or more needle maps are determined from a pair of images; this process employs either Marr-Poggio-Grimson stereo combined with shape-from-shading or, instead, photometric stereo. Second, a depth map is constructed from the needle map or maps computed in the first stage. Both methods use an iterative relaxation method to obtain the final depth map.
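The second stage, recovering depth from a needle map by iterative relaxation, can be sketched as Jacobi relaxation on the Poisson equation laplacian(z) = p_x + q_y. This toy assumes unit grid spacing and known boundary depths, assumptions the paper's actual methods do not necessarily share:

```python
import numpy as np

def depth_from_needle_map(p, q, z_init, iters=3000):
    """Relax laplacian(z) = p_x + q_y toward the depth map implied by the
    surface gradients (p, q); z_init supplies the (assumed known) boundary."""
    p_y, p_x = np.gradient(p)            # np.gradient: (axis 0 = y, axis 1 = x)
    q_y, q_x = np.gradient(q)
    f = p_x + q_y                        # divergence of the needle map
    z = z_init.copy()
    for _ in range(iters):               # Jacobi sweep on interior points
        z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1]
                                + z[1:-1, :-2] + z[1:-1, 2:]
                                - f[1:-1, 1:-1])
    return z

# Synthetic test: z = x^2 + y^2 with its exact needle map (p, q) = (2x, 2y)
n = 16
y, x = np.mgrid[0:n, 0:n].astype(float)
z_true = x ** 2 + y ** 2
p, q = 2 * x, 2 * y
z0 = np.zeros_like(z_true)
z0[0, :], z0[-1, :] = z_true[0, :], z_true[-1, :]
z0[:, 0], z0[:, -1] = z_true[:, 0], z_true[:, -1]
z = depth_from_needle_map(p, q, z0)
```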
10 |
Model-based 3D hand pose estimation from monocular video / Suivi automatique de la main à partir de séquences vidéo monoculaires. La Gorce, Martin de 14 December 2009 (has links)
In this thesis we propose two methods to automatically recover a full three-dimensional description of the motion of a hand from a monocular video sequence of that hand. Using the information provided by the video, our aim is to determine the full set of kinematic parameters required to describe the pose of the hand skeleton. This set of parameters is composed of the angles associated with each joint together with the global position and orientation of the wrist. The problem is extremely challenging: the hand has many degrees of freedom, and self-occlusions are ubiquitous, which makes it difficult to estimate the configuration of occluded or partially occluded hand parts. We introduce two novel methods of increasing complexity that improve to some extent the state of the art for the monocular hand-tracking problem. Both are model-based methods in which the spatial configuration of a hand model is adjusted so that its projection into the image best matches the observed hand image. This process is guided by a cost function that defines a quantitative measure of how well the projected model aligns with the observed image, and the fitting is carried out by an iterative quasi-Newton gradient-descent refinement that aims to minimize this cost.
The two methods differ mainly in the choice of hand model and cost function. The first relies on a hand model made of ellipsoids and a simple discrepancy measure based on the global color distributions of the hand and the background. The second uses a triangulated surface model of the hand with texture and shading, and exploits a robust pixel-wise distance between the synthetic image (obtained by projecting the hand model into the image) and the observed image as the discrepancy measure. When computing the gradient of the discrepancy measure, particular attention is given to the terms arising from changes of visibility of the surface near self-occlusion boundaries, terms that are neglected in existing formulations. Unfortunately, neither method runs in real time, which for now rules out their use in human-computer interaction; increases in computing power combined with improvements to these methods might eventually make real-time operation attainable.
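The analysis-by-synthesis loop underlying both methods, render the model, measure the pixel-wise discrepancy, descend its gradient, can be sketched with a hypothetical 1-D "renderer" (a Gaussian blob standing in for the projected hand model) and a finite-difference gradient; the thesis itself derives the gradient analytically, including the visibility terms:

```python
import numpy as np

def render(params, width=64):
    # Toy 1-D renderer: a blob whose center and spread play the role
    # of the projected model's pose parameters
    x = np.arange(width, dtype=float)
    center, sigma = params
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def cost(params, observed):
    # Pixel-wise discrepancy between the synthetic and observed images
    return np.sum((render(params, len(observed)) - observed) ** 2)

def fit(observed, params, lr=0.2, steps=5000, h=1e-5):
    # Gradient descent with a central finite-difference gradient
    params = np.asarray(params, dtype=float)
    for _ in range(steps):
        grad = np.array([
            (cost(params + h * e, observed) - cost(params - h * e, observed)) / (2 * h)
            for e in np.eye(len(params))])
        params = params - lr * grad
    return params

# Recover the pose parameters that generated the observed image
observed = render(np.array([40.0, 6.0]))
recovered = fit(observed, [30.0, 4.0])
```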