41

Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction

Marroquin, Jose L. 01 April 1985 (has links)
A very fruitful approach to the solution of image segmentation and surface reconstruction tasks is their formulation as estimation problems via the use of Markov random field models and Bayes theory. However, the Maximum a Posteriori (MAP) estimate, which is the one most frequently used, is suboptimal in these cases. We show that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the thresholded posterior mean has the best possible performance. We present efficient distributed algorithms for approximating these estimates in the general case. Based on these results, we develop a maximum likelihood procedure that leads to a parameter-free distributed algorithm for restoring piecewise constant images. To illustrate these ideas, the reconstruction of binary patterns is discussed in detail.
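The suboptimality of MAP for segmentation can be seen on a toy posterior. The following minimal sketch (all probabilities invented for illustration) constructs a case where the MAP configuration and the maximizer of the posterior marginals disagree, and the latter incurs fewer expected mislabeled pixels:

```python
import numpy as np

# Joint posterior over two binary pixels, with masses invented so that
# the two estimators disagree.
configs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
post = np.array([0.00, 0.40, 0.30, 0.30])

map_est = configs[np.argmax(post)]                  # most probable whole image
marginals = np.array([post[configs[:, k] == 1].sum() for k in range(2)])
mpm_est = (marginals > 0.5).astype(int)             # per-pixel marginal maximizer

# Expected number of mislabeled pixels under the posterior:
expected_errors = lambda est: sum(p * (c != est).sum() for p, c in zip(post, configs))
print(map_est, expected_errors(map_est))            # [0 1] 0.9
print(mpm_est, expected_errors(mpm_est))            # [1 1] 0.7  -- MPM wins
```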
42

Parallel and Deterministic Algorithms for MRFs: Surface Reconstruction and Integration

Geiger, Davi, Girosi, Federico 01 May 1989 (has links)
In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision. The computational complexity of the implementation has been a drawback of MRFs. In this paper we derive deterministic approximations to MRF models. All the theoretical results are obtained in the framework of the mean field theory of statistical mechanics. Because we use MRF models, the mean field equations lead to parallel and iterative algorithms. One of the models considered for image reconstruction is shown to give, in a natural way, the graduated non-convexity algorithm proposed by Blake and Zisserman.
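A minimal sketch of the mean field idea on a binary (±1) denoising posterior with an Ising prior; the coupling, noise level and iteration count are assumptions, not values from the paper. Sampling the stochastic MRF is replaced by deterministic fixed-point updates on pixel means, and the updates are naturally parallel:

```python
import numpy as np

def mean_field_denoise(y, beta=1.5, sigma=0.7, n_iter=100):
    """Deterministic mean-field approximation for denoising a binary
    (+/-1) image y observed in Gaussian noise under an Ising prior:
    iterate the fixed point m_i = tanh(beta * sum_j m_j + y_i/sigma^2)
    over the four-neighbor means m in [-1, 1]."""
    m = np.tanh(y / sigma**2)          # initial guess from the data term alone
    for _ in range(n_iter):
        nb = np.zeros_like(m)          # sum of neighboring means (parallel update)
        nb[1:, :] += m[:-1, :]
        nb[:-1, :] += m[1:, :]
        nb[:, 1:] += m[:, :-1]
        nb[:, :-1] += m[:, 1:]
        m = np.tanh(beta * nb + y / sigma**2)
    return np.where(m > 0, 1, -1)      # final hard labels
```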
43

Probabilistic Solution of Inverse Problems

Marroquin, Jose Luis 01 September 1985 (has links)
In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal to noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
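A minimal sketch of a Monte Carlo approximation in the spirit of contribution (1): a Gibbs sampler on a binary denoising posterior whose samples are aggregated into the per-pixel majority (the maximizer of posterior marginals). The prior, noise model and parameter values are assumptions for illustration, not the thesis's exact scheme:

```python
import numpy as np

def gibbs_mpm(y, beta=1.5, sigma=0.7, n_sweeps=200, burn_in=50, seed=0):
    """Monte Carlo approximation of the MPM estimate for a binary (+/-1)
    image observed in Gaussian noise, under an Ising prior: run a Gibbs
    sampler on the posterior and take the per-pixel majority over samples."""
    rng = np.random.default_rng(seed)
    h, w = y.shape
    x = np.where(y > 0, 1, -1)         # initialize at the thresholded data
    votes = np.zeros((h, w))
    for sweep in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                nb = sum(x[i + di, j + dj]
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < h and 0 <= j + dj < w)
                # log-odds of x_ij = +1 given neighbors and observation
                log_odds = 2 * beta * nb + 2 * y[i, j] / sigma**2
                x[i, j] = 1 if rng.random() < 1 / (1 + np.exp(-log_odds)) else -1
        if sweep >= burn_in:
            votes += (x == 1)
    return np.where(votes > (n_sweeps - burn_in) / 2, 1, -1)
```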
44

Automated Building Detection From Satellite Images By Using Shadow Information As An Object Invariant

Baris, Yuksel 01 October 2012 (has links) (PDF)
Apart from classical pattern recognition techniques applied to automated building detection in satellite images, a robust building detection methodology is proposed, in which self-supervision data can be automatically extracted from the image by using shadow and its direction as an invariant for the building object. In this methodology, first the vegetation, water and shadow regions are detected from a given satellite image, and local directional fuzzy landscapes representing the likely presence of a building are generated from the shadow regions using the direction of illumination obtained from the image metadata. For each landscape, foreground (building) and background pixels are automatically determined and a bipartitioning is obtained using a graph-based algorithm, GrabCut. Finally, the local results are merged to obtain the final building detection result. Considering the performance evaluation results, this approach can be seen as a proof of concept that shadow is an invariant for a building object, and that promising detection results can be obtained even when a single invariant for an object is used.
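A hedged sketch of the per-landscape bipartitioning step using OpenCV's GrabCut; the boolean seed masks fg_seed and bg_seed are assumed to come from the fuzzy-landscape computation, which is not shown:

```python
import cv2
import numpy as np

def building_from_shadow_seeds(img, fg_seed, bg_seed, n_iter=5):
    """One landscape's bipartitioning: img is an 8-bit BGR satellite
    image patch; fg_seed/bg_seed are boolean masks of pixels assumed to
    be building/background from the shadow landscape (hypothetical
    inputs). Returns the GrabCut foreground (building) mask."""
    mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)  # default: probably background
    mask[fg_seed] = cv2.GC_FGD                              # sure building pixels
    mask[bg_seed] = cv2.GC_BGD                              # sure background pixels
    bgd = np.zeros((1, 65), np.float64)                     # GrabCut's internal GMM state
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, n_iter, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```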
45

Multiresolution image segmentation based on compound random fields: Application to image coding

Marqués Acosta, Fernando 22 November 1992 (has links)
Image segmentation is a technique whose aim is to divide an image into a set of regions, assigning one or more regions to each object in the scene. To obtain a correct segmentation, each region must satisfy a homogeneity criterion imposed a priori. Fixing a homogeneity criterion implicitly amounts to assuming a mathematical model that characterizes the regions. This thesis introduces a new type of model, called a hierarchical model, which has two distinct levels superimposed on one another. The lower (or underlying) level models the position that each region occupies within the image, while the upper (or observable) level is composed of a set of independent submodels (one submodel per region) that characterize the behavior of the interior of the regions. For the former, a second-order Markov random field is used to model the region contours, while a Gaussian model is used for the upper level. The thesis studies the best potentials to assign to the clique configurations that define the contours. With all this in place, segmentation is performed by searching for the most probable partition (MAP criterion) given a particular realization (the observed image). Searching for the optimal partition of images of the usual size would be practically infeasible in terms of computation time. To make it feasible, one must start from a sufficiently good initial estimate and apply a fast improvement algorithm such as a local search. To this end, a pyramidal (multiresolution) segmentation technique is introduced. The pyramid is generated by Gaussian filtering and decimation. At the highest level of the pyramid, which contains few pixels, the optimal partition can indeed be found.
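A minimal sketch of the pyramid construction described in the abstract (Gaussian filtering followed by decimation); sigma and the number of levels are illustrative. The partition found at the small top level would then be projected down as the initial estimate for local search at each finer level:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, n_levels=4, sigma=1.0):
    """Level 0 is the full image; each subsequent level is the previous
    one smoothed with a Gaussian and decimated by 2 in each axis. The top
    level is small enough that a near-optimal MAP partition is tractable."""
    levels = [img.astype(float)]
    for _ in range(n_levels - 1):
        smoothed = gaussian_filter(levels[-1], sigma)
        levels.append(smoothed[::2, ::2])   # decimation
    return levels
```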
46

Example Based Processing For Image And Video Synthesis

Haro, Antonio 25 November 2003 (has links)
The example based processing problem can be expressed as: "Given an example of an image or video before and after processing, apply a similar processing to a new image or video". Our thesis is that there are some problems where a single general algorithm can be used to create a variety of outputs, solely by presenting examples of what is desired to the algorithm. This is valuable if the algorithm to produce the output is non-obvious, e.g. an algorithm to emulate an example painting's style. We limit our investigations to example based processing of images, video, and 3D models, as these data types are easy to acquire and experiment with. We represent this problem first as a texture synthesis influenced sampling problem, where the idea is to form feature vectors representative of the data and then sample them coherently to synthesize a plausible output for the new image or video. Grounding the problem in this manner is useful, as both problems involve learning the structure of training data under some assumptions to sample it properly. We then reduce the problem to a labeling problem to perform example based processing in a more generalized and principled manner than earlier techniques. This allows us to perform a different estimation of what the output should be by approximating the optimal (and possibly not known) solution through a different approach.
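This is not the thesis's algorithm, but a minimal sketch of the texture-synthesis-flavored nearest-neighbor sampling it builds on: patch feature vectors from an example pair (A, Ap) are matched against patches of the new image B; the patch size and the brute-force search are assumptions:

```python
import numpy as np

def analogy_filter(A, Ap, B, patch=5):
    """Given a grayscale example pair (A -> Ap) and a new image B, copy
    for each B patch the Ap pixel whose A patch matches best. Brute-force
    and memory-hungry; meant only for small images. A and Ap must have
    the same shape."""
    r = patch // 2

    def patch_features(img):
        h, w = img.shape
        cols = [img[i:i + h - 2*r, j:j + w - 2*r]
                for i in range(patch) for j in range(patch)]
        return np.stack(cols, axis=-1).reshape(-1, patch * patch)

    feats_A = patch_features(A)                      # one row per interior pixel of A
    feats_B = patch_features(B)
    dists = ((feats_B[:, None, :] - feats_A[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)                       # nearest example patch per B patch
    hB, wB = B.shape
    return Ap[r:-r, r:-r].reshape(-1)[idx].reshape(hB - 2*r, wB - 2*r)
```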
47

Investigation on Gauss-Markov Image Modeling

You, Jhih-siang 30 August 2006 (has links)
Image modeling is a foundation for many image processing applications. The compound Gauss-Markov (CGM) image model has been proven useful in the restoration of natural images. In contrast, other Markov Random Fields (MRFs), such as Gaussian MRF models, specialize in texture image segmentation. A CGM image is restored in two steps applied iteratively: restoring the line field from the assumed image field, and restoring the image field from the just-computed line field. The line fields are the most important element of successful CGM modeling. A convincing line field should treat both horizontal and vertical lines fairly. The update order and the occasions on which updates are applied have great effects on the resulting line fields in iterative computation procedures. These two techniques are the basis of our search for the best CGM model. In addition, we impose an extra condition for a line to exist, to compensate for the bias of the line fields; this condition requires a brightness contrast across the line. Our best model is verified by the visual quality and numerical measures of image restoration on natural images. Furthermore, an artificial image generated by CGM is tested to confirm that our best model is correct.
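A hedged sketch of the two-step CGM iteration described above, with the line field folded into the neighbor test; the coupling strength and contrast threshold are invented for illustration:

```python
import numpy as np

def cgm_restore(y, lam=4.0, thresh=0.5, n_iter=20):
    """Two-step iteration: a line is declared between two neighbors when
    their brightness contrast exceeds `thresh` (cutting the smoothness
    coupling), and each pixel is then re-estimated from its observation
    plus its unbroken neighbors. Parameter values are illustrative."""
    x = y.astype(float).copy()
    h, w = x.shape
    for _ in range(n_iter):
        x_old = x.copy()
        for i in range(h):
            for j in range(w):
                num, den = y[i, j], 1.0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    # line field: coupling is cut where contrast is high
                    if 0 <= ni < h and 0 <= nj < w and abs(x_old[ni, nj] - x_old[i, j]) < thresh:
                        num += lam * x_old[ni, nj]
                        den += lam
                x[i, j] = num / den
    return x
```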
48

Segmentation of Human Facial Muscles on CT and MRI Data Using Level Set and Bayesian Methods

Kale, Hikmet Emre 01 July 2011 (has links) (PDF)
Medical image segmentation is a challenging problem and is studied widely. In this thesis, the main goal is to develop automatic segmentation techniques for human mimic muscles and to compare them with ground truth data in order to determine the method that provides the best segmentation results. The segmentation methods are based on Bayesian Markov Random Field (MRF) and Level Set (Active Contour) models. The proposed segmentation methods are multi-step processes comprising a preprocessing step, the main muscle segmentation step, and a postprocessing step, and are applied to three types of data: Magnetic Resonance Imaging (MRI) data, Computerized Tomography (CT) data, and unified data, in which information coming from both modalities is utilized. The methods are applied to both three-dimensional (3D) and two-dimensional (2D) data. Simulated data and two patient datasets are utilized for tests. The patient data results are compared statistically with ground truth data labeled by an expert radiologist.
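An illustrative skeleton of the multi-step structure (preprocess, main segmentation, postprocess); the main step here is a simple region-growing stand-in, not the thesis's Bayesian-MRF or level-set machinery:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation, binary_opening

def segment_muscle(volume, seed_mask, tol=0.15, n_iter=50):
    """volume: 2D/3D intensity array; seed_mask: nonempty boolean array
    marking voxels assumed to lie inside the muscle. Denoise, grow the
    region while intensities stay near the region mean, then clean up."""
    vol = gaussian_filter(volume.astype(float), sigma=1.0)   # preprocessing
    seg = seed_mask.astype(bool).copy()
    for _ in range(n_iter):                                  # main segmentation step
        mu = vol[seg].mean()                                 # current region statistic
        frontier = binary_dilation(seg) & ~seg
        grow = frontier & (np.abs(vol - mu) < tol * max(abs(mu), 1e-9))
        if not grow.any():
            break
        seg |= grow
    return binary_opening(seg, iterations=2)                 # postprocessing
```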
49

High-dimensional statistics: model specification and elementary estimators

Yang, Eunho 16 January 2015 (has links)
Modern statistics typically deals with complex data, in particular where the ambient dimension of the problem p may be of the same order as, or even substantially larger than, the sample size n. It has now become well understood that even in this type of high-dimensional scaling, statistically consistent estimators can be achieved provided one imposes structural constraints on the statistical models. In spite of great success over the last few decades, we are still experiencing bottlenecks of two distinct kinds: (I) in multivariate modeling, data modeling assumptions are typically limited to instances such as Gaussian or Ising models, and hence handling varied types of random variables is still restricted, and (II) in terms of computation, the learning or estimation process is not efficient, especially when p is extremely large, since in the current paradigm for high-dimensional statistics, regularization terms induce non-differentiable optimization problems, which do not have closed-form solutions in general. The thesis addresses these two distinct but highly complementary problems: (I) statistical model specification beyond the standard Gaussian or Ising models for data of varied types, and (II) computationally efficient elementary estimators for high-dimensional statistical models.
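A hedged sketch of the elementary-estimator idea for sparse linear regression, contrasting with iterative regularized solvers: soft-threshold a simple closed-form backward map of the data. The choice of a ridge solution as the proxy map and the value of lam are assumptions for illustration:

```python
import numpy as np

def soft_threshold(v, lam):
    """Closed-form soft-thresholding operator S_lam(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def elementary_linear(X, y, lam):
    """Form a simple closed-form initial estimate (here ridge-regularized
    least squares as a stand-in proxy), then soft-threshold it: a sparse
    estimate with no iterative optimization at all."""
    n, p = X.shape
    theta_init = np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)
    return soft_threshold(theta_init, lam)
```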
50

Greedy structure learning of Markov Random Fields

Johnson, Christopher Carroll 04 November 2011 (has links)
Probabilistic graphical models are used in a variety of domains to capture and represent general dependencies in joint probability distributions. In this document we examine the problem of learning the structure of an undirected graphical model, also called a Markov Random Field (MRF), given a set of independent and identically distributed (i.i.d.) samples. Specifically, we introduce an adaptive forward-backward greedy algorithm for learning the structure of a discrete, pairwise MRF given a high dimensional set of i.i.d. samples. The algorithm works by greedily estimating the neighborhood of each node independently through a series of forward and backward steps. By imposing a restricted strong convexity condition on the structure of the learned graph we show that the structure can be fully learned with high probability given $n=\Omega(d\log(p))$ samples where $d$ is the dimension of the graph and $p$ is the number of nodes. This is a significant improvement over existing convex-optimization based algorithms that require a sample complexity of $n=\Omega(d^2\log(p))$ and a stronger irrepresentability condition. We further support these claims with an empirical comparison of the greedy algorithm to node-wise $\ell_1$-regularized logistic regression as well as provide a real data analysis of the greedy algorithm using the Audioscrobbler music listener dataset. The results of this document provide an additional representation of work submitted by A. Jalali, C. Johnson, and P. Ravikumar to NIPS 2011.
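A hedged sketch of the adaptive forward-backward greedy idea in a least-squares setting; the document's algorithm operates on node-conditional likelihoods of a discrete MRF, and the stopping threshold eps and backward factor nu here are invented:

```python
import numpy as np

def foba(X, y, eps=1e-3, nu=0.5, max_steps=50):
    """Forward steps add the feature that most reduces the squared loss;
    a backward step removes a feature whenever doing so costs less than
    nu times the most recent forward gain."""
    n, p = X.shape
    S = []

    def refit(support):
        theta = np.zeros(p)
        if support:
            theta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
        return theta, np.mean((y - X @ theta) ** 2)

    theta, loss = refit(S)
    for _ in range(max_steps):
        gains = np.full(p, -np.inf)
        for j in range(p):
            if j not in S:
                gains[j] = loss - refit(S + [j])[1]
        j_best = int(gains.argmax())
        if gains[j_best] < eps:                # forward gain too small: stop
            break
        S.append(j_best)
        theta, loss = refit(S)
        fwd_gain = gains[j_best]
        while len(S) > 1:                      # adaptive backward steps
            drop_loss, j_drop = min((refit([k for k in S if k != j])[1], j) for j in S)
            if drop_loss - loss > nu * fwd_gain:
                break
            S.remove(j_drop)
            theta, loss = refit(S)
    return theta, sorted(S)
```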
