81

Multiresolution image segmentation based on compound random fields: Application to image coding

Marqués Acosta, Fernando 22 November 1992 (has links)
Image segmentation is a technique whose goal is to divide an image into a set of regions, assigning one or several regions to each object in the scene. To obtain a correct segmentation, each region must satisfy a homogeneity criterion imposed a priori. Fixing a homogeneity criterion implicitly amounts to assuming a mathematical model that characterizes the regions. This thesis introduces a new type of model, called a hierarchical model because it has two different levels superimposed on one another. The lower (or underlying) level models the position each region occupies within the image, while the upper (or observable) level consists of a set of independent submodels (one per region) that characterize the behavior of the region interiors. A second-order Markov random field is used for the former, modeling the region contours, while a Gaussian model is used for the upper level. The work studies the best potentials to assign to the types of groupings (cliques) that define the contours. With all this, segmentation is performed by searching for the most probable partition (MAP criterion) given a particular realization (the observed image). Searching for the optimal partition of an image of typical size would be practically infeasible in terms of computation time; to make it feasible, one must start from a sufficiently good initial estimate and apply a fast refinement algorithm such as a local search. To this end, a pyramidal (multiresolution) segmentation technique is introduced. The pyramid is generated by Gaussian filtering and decimation; at the highest level of the pyramid, which contains few pixels, the optimal partition can indeed be found.
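As a rough sketch of the coarse-to-fine strategy the abstract describes, the following Python fragment builds a pyramid by Gaussian filtering and decimation; the filter width, number of levels, and decimation factor are illustrative choices, not the thesis's parameters. The MAP search would then start at the coarsest level and propagate downward.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Build a multiresolution pyramid by Gaussian filtering and decimation.

    Assumes a 2D grayscale array; sigma and levels are illustrative.
    """
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)  # low-pass filter
        pyramid.append(smoothed[::2, ::2])              # decimate by 2
    # pyramid[-1] is the coarsest level, where the MAP search over
    # partitions becomes computationally tractable
    return pyramid
```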
82

Example Based Processing For Image And Video Synthesis

Haro, Antonio 25 November 2003 (has links)
The example-based processing problem can be expressed as: "Given an example of an image or video before and after processing, apply a similar processing to a new image or video". Our thesis is that there are some problems where a single general algorithm can be used to create varieties of outputs, solely by presenting examples of what is desired to the algorithm. This is valuable if the algorithm to produce the output is non-obvious, e.g., an algorithm to emulate an example painting's style. We limit our investigations to example-based processing of images, video, and 3D models as these data types are easy to acquire and experiment with. We represent this problem first as a texture-synthesis-influenced sampling problem, where the idea is to form feature vectors representative of the data and then sample them coherently to synthesize a plausible output for the new image or video. Grounding the problem in this manner is useful as both problems involve learning the structure of training data under some assumptions to sample it properly. We then reduce the problem to a labeling problem to perform example-based processing in a more generalized and principled manner than earlier techniques. This allows us to estimate what the output should be by approximating the optimal (and possibly unknown) solution from a different direction.
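A minimal sketch of the sampling view of this problem: match feature vectors from the new input against the "before" example and copy the corresponding "after" values. The plain nearest-neighbor match and the array shapes below are assumptions for illustration; the thesis additionally enforces coherence between neighboring samples.

```python
import numpy as np

def example_based_transfer(train_in, train_out, new_in):
    """Nearest-neighbor sampling sketch of example-based processing.

    train_in:  (n, d) feature vectors from the 'before' example
    train_out: (n, k) corresponding 'after' values
    new_in:    (m, d) feature vectors from the new image or video
    """
    # For each new feature, pick the closest training feature (L2 distance)
    d2 = ((new_in[:, None, :] - train_in[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    return train_out[nearest]  # sampled 'after' values for the new input
```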
83

Investigation on Gauss-Markov Image Modeling

You, Jhih-siang 30 August 2006 (has links)
Image modeling is a foundation for many image processing applications. The compound Gauss-Markov (CGM) image model has proven useful for restoring natural images. In contrast, other Markov Random Fields (MRF), such as Gaussian MRF models, are specialized for segmenting texture images. A CGM image is restored in two steps applied iteratively: restoring the line field from the assumed image field, and restoring the image field from the just-computed line field. The line fields are the most important element of successful CGM modeling; a convincing line field should treat horizontal and vertical lines fairly. The processing order and update schedule strongly affect the resulting line fields in iterative computation procedures. These two techniques are the basis of our search for the best CGM modeling. In addition, we impose an extra condition for a line to exist, to compensate for the bias of the line fields; this condition requires a brightness contrast across the line site. Our best modeling is verified by the quality of image restoration on natural images, both visually and numerically. Furthermore, an artificial image generated by CGM is tested to confirm that our best modeling is correct.
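A minimal sketch of the two-step alternation, assuming simple thresholded line detection and a diffusion-style image update; the contrast threshold and smoothing weight are illustrative, not the thesis's values.

```python
import numpy as np

def cgm_restore(observed, iters=10, contrast=0.1, smooth=0.25):
    """Sketch of the two-step compound Gauss-Markov iteration.

    Alternates: (1) detect line fields from the current image estimate,
    (2) smooth the image except across detected lines.
    """
    img = np.asarray(observed, dtype=float).copy()
    for _ in range(iters):
        # Step 1: a line exists where the brightness contrast is high
        v_line = np.abs(np.diff(img, axis=1)) > contrast  # vertical lines
        h_line = np.abs(np.diff(img, axis=0)) > contrast  # horizontal lines
        # Step 2: pull each pixel toward a neighbor unless a line separates them
        new = img.copy()
        new[:, :-1] += smooth * np.where(v_line, 0.0, img[:, 1:] - img[:, :-1])
        new[1:, :] += smooth * np.where(h_line, 0.0, img[:-1, :] - img[1:, :])
        img = new
    return img
```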
84

Segmentation Of Human Facial Muscles On Ct And Mri Data Using Level Set And Bayesian Methods

Kale, Hikmet Emre 01 July 2011 (has links) (PDF)
Medical image segmentation is a challenging and widely studied problem. The main goal of this thesis is to develop automatic segmentation techniques for human mimic muscles and to compare them against ground truth data in order to determine which method provides the best segmentation results. The segmentation methods are based on Bayesian models with Markov Random Fields (MRF) and on Level Set (Active Contour) models. The proposed segmentation methods are multi-step processes comprising preprocessing, a main muscle segmentation step, and postprocessing, and they are applied to three types of data: Magnetic Resonance Imaging (MRI) data, Computerized Tomography (CT) data, and unified data, in which information from both modalities is utilized. The methods are applied to both three-dimensional (3D) and two-dimensional (2D) data. One simulated data set and two patient data sets are used for tests. The patient data results are compared statistically with ground truth data labeled by an expert radiologist.
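To illustrate the level-set side of such a pipeline, the toy update below implements the data term of a Chan-Vese-style two-region active contour; this standard model is chosen here for illustration only, and the curvature regularization and the thesis's Bayesian coupling are omitted.

```python
import numpy as np

def chan_vese_step(img, phi, dt=0.5):
    """One illustrative level-set update, Chan-Vese style.

    phi is the level-set function; its zero crossing is the contour.
    The two region means drive the contour toward a two-region partition.
    """
    inside, outside = phi > 0, phi <= 0
    c1 = img[inside].mean() if inside.any() else 0.0    # mean inside
    c2 = img[outside].mean() if outside.any() else 0.0  # mean outside
    # Raise phi where the pixel resembles the inside region, lower it otherwise
    force = (img - c2) ** 2 - (img - c1) ** 2
    return phi + dt * force
```

Iterating `phi = chan_vese_step(img, phi)` from a simple initialization (e.g., a centered disk) drives the contour toward a two-region partition of the image.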
85

About the Influence of Randomness of Hydraulic Conductivity on Solute Transport in Saturated Soil: Numerical Experiments

Noack, Klaus, Prigarin, S. M. 31 March 2010 (has links) (PDF)
Up-to-date methods of numerical modelling of random fields were applied to investigate some features of solute transport in saturated porous media with stochastic hydraulic conductivity. The paper describes the numerical experiments performed and presents the first results.
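A common way to set up such experiments is to draw a spatially correlated lognormal conductivity field. The sketch below uses filtered white noise as one minimal construction; the correlation length and log-conductivity statistics are illustrative assumptions, not necessarily the authors' generator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_conductivity_field(shape=(128, 128), corr=8.0,
                              mean_logk=-5.0, sigma_logk=1.0, seed=0):
    """Generate a correlated lognormal hydraulic-conductivity field K."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, corr)    # impose spatial correlation
    field *= sigma_logk / field.std()       # rescale to target log-variance
    return np.exp(mean_logk + field)        # lognormal conductivity values
```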
86

High-dimensional statistics : model specification and elementary estimators

Yang, Eunho 16 January 2015 (has links)
Modern statistics typically deals with complex data, in particular where the ambient dimension of the problem p may be of the same order as, or even substantially larger than, the sample size n. It has now become well understood that even in this type of high-dimensional scaling, statistically consistent estimators can be achieved provided one imposes structural constraints on the statistical models. In spite of great success over the last few decades, we are still experiencing bottlenecks of two distinct kinds: (I) in multivariate modeling, modeling assumptions are typically limited to instances such as Gaussian or Ising models, so handling varied types of random variables remains restricted, and (II) in terms of computation, the learning or estimation process is inefficient when p is extremely large, since in the current paradigm for high-dimensional statistics, regularization terms induce non-differentiable optimization problems, which in general have no closed-form solutions. The thesis addresses these two distinct but highly complementary problems: (I) statistical model specification beyond the standard Gaussian or Ising models for data of varied types, and (II) computationally efficient elementary estimators for high-dimensional statistical models.
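To make the second point concrete, an "elementary estimator" replaces a non-differentiable regularized program with a closed-form computation. The sketch below pairs a ridge pilot estimate with soft thresholding to produce a sparse estimate in one shot; the particular pilot and thresholding rule are illustrative simplifications, not the thesis's exact estimator.

```python
import numpy as np

def elementary_estimator(X, y, lam, ridge=1e-2):
    """Closed-form sparse estimate in the spirit of elementary estimators.

    Avoids iterative l1-regularized optimization: compute a simple
    pilot estimate, then soft-threshold it at level lam.
    """
    n, p = X.shape
    pilot = np.linalg.solve(X.T @ X / n + ridge * np.eye(p), X.T @ y / n)
    return np.sign(pilot) * np.maximum(np.abs(pilot) - lam, 0.0)
```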
87

Greedy structure learning of Markov Random Fields

Johnson, Christopher Carroll 04 November 2011 (has links)
Probabilistic graphical models are used in a variety of domains to capture and represent general dependencies in joint probability distributions. In this document we examine the problem of learning the structure of an undirected graphical model, also called a Markov Random Field (MRF), given a set of independent and identically distributed (i.i.d.) samples. Specifically, we introduce an adaptive forward-backward greedy algorithm for learning the structure of a discrete, pairwise MRF given a high dimensional set of i.i.d. samples. The algorithm works by greedily estimating the neighborhood of each node independently through a series of forward and backward steps. By imposing a restricted strong convexity condition on the structure of the learned graph we show that the structure can be fully learned with high probability given $n=\Omega(d\log (p))$ samples where $d$ is the dimension of the graph and $p$ is the number of nodes. This is a significant improvement over existing convex-optimization-based algorithms that require a sample complexity of $n=\Omega(d^2\log(p))$ and a stronger irrepresentability condition. We further support these claims with an empirical comparison of the greedy algorithm to node-wise $\ell_1$-regularized logistic regression as well as provide a real data analysis of the greedy algorithm using the Audioscrobbler music listener dataset. The results of this document provide an additional representation of work submitted by A. Jalali, C. Johnson, and P. Ravikumar to NIPS 2011.
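A minimal sketch of the forward-backward idea for a single node's neighborhood: squared-error scoring stands in for the pairwise-MRF log-likelihood, and the stopping threshold and iteration cap are illustrative.

```python
import numpy as np

def _mse(X, y, support):
    """Mean squared residual of regressing y on the columns in support."""
    if not support:
        return ((y - y.mean()) ** 2).mean()
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return ((y - X[:, support] @ coef) ** 2).mean()

def greedy_neighborhood(X, node, eps=1e-2):
    """Forward-backward greedy estimate of one node's neighborhood (sketch)."""
    n, p = X.shape
    y = X[:, node].astype(float)
    support = []
    for _ in range(2 * p):  # cap iterations for safety in this sketch
        base = _mse(X, y, support)
        # Forward step: add the candidate with the largest error reduction
        gains = {j: base - _mse(X, y, support + [j])
                 for j in range(p) if j != node and j not in support}
        if not gains:
            break
        j_best = max(gains, key=gains.get)
        if gains[j_best] < eps:
            break
        support.append(j_best)
        # Backward step: drop variables whose contribution became negligible
        for j in list(support):
            trimmed = [k for k in support if k != j]
            if _mse(X, y, trimmed) - _mse(X, y, support) < gains[j_best] / 2:
                support.remove(j)
    return support
```

Running this once per node and symmetrizing the resulting neighborhoods yields the full graph estimate.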
88

Understanding, Modeling and Detecting Brain Tumors : Graphical Models and Concurrent Segmentation/Registration methods

Parisot, Sarah 18 November 2013 (has links) (PDF)
The main objective of this thesis is the automatic modeling, understanding and segmentation of diffusively infiltrative tumors known as Diffuse Low-Grade Gliomas. Two approaches exploiting anatomical and spatial prior knowledge are proposed. We first present the construction of a tumor-specific probabilistic atlas describing the tumors' preferential locations in the brain. The proposed atlas constitutes an excellent tool for studying the mechanisms behind the genesis of the tumors and provides strong spatial cues on where they are expected to appear. The latter characteristic is exploited in a Markov Random Field based segmentation method in which the atlas guides the segmentation process and characterizes the tumor's preferential location. Second, we introduce a method for concurrent tumor segmentation and registration with missing correspondences. The anatomical knowledge introduced by the registration process increases segmentation quality, while progressively acknowledging the presence of the tumor ensures that the registration is not corrupted by the missing correspondences and that no bias is introduced. The method is designed as a hierarchical grid-based Markov Random Field model in which the segmentation and registration parameters are estimated simultaneously at the grid's control points. The last contribution of this thesis is an uncertainty-driven adaptive sampling approach for such grid-based models, ensuring precision and accuracy while maintaining robustness and computational efficiency. The potential of both methods has been demonstrated on a large data set of heterogeneous Diffuse Low-Grade Gliomas. The proposed methods go beyond the presented clinical context thanks to their strong modularity and could easily be adapted to other clinical or computer vision problems.
89

Yield Curve Modelling Via Two Parameter Processes

Pekerten, Uygar 01 February 2005 (has links) (PDF)
Random field models have provided a flexible environment in which the properties of the term structure of interest rates are captured almost as observed. In this study we provide an overview of forward-rate random field models and propose an extension in which the forward rates fluctuate along with a two-parameter process represented by a random field. We then provide a mathematical expression for the yield curve under this model and sketch the prospective utilities and applications of this model for interest rate management.
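As a toy illustration of a two-parameter driving process, the sketch below simulates a forward-rate surface f(t, T) shocked by Brownian-sheet-like increments, so that shocks are imperfectly correlated across maturities. Drift terms are omitted and all parameters, including the initial curve, are illustrative assumptions rather than the thesis's specification.

```python
import numpy as np

def forward_rate_surface(n_t=50, n_T=50, dt=1 / 252, sigma=0.01, seed=0):
    """Simulate forward rates f(t, T) driven by a two-parameter field."""
    rng = np.random.default_rng(seed)
    f = np.zeros((n_t, n_T))
    f[0] = 0.03 + 0.01 * np.linspace(0, 1, n_T)  # initial upward-sloping curve
    # Brownian-sheet-like increments: cumulative white noise along maturity,
    # independent across calendar-time steps
    dW = rng.standard_normal((n_t - 1, n_T)).cumsum(axis=1) * np.sqrt(dt / n_T)
    for i in range(1, n_t):
        f[i] = f[i - 1] + sigma * dW[i - 1]  # shock the whole curve at step i
    return f
```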
90

Robust and efficient intrusion detection systems

Gupta, Kapil Kumar January 2009 (has links)
Intrusion Detection systems are now an essential component in the overall network and data security arsenal. With the rapid advancement in network technologies, including higher bandwidths and the ease of connectivity of wireless and mobile devices, the focus of intrusion detection has shifted from simple signature matching approaches to detecting attacks based on analyzing contextual information that may be specific to individual networks and applications. As a result, anomaly and hybrid intrusion detection approaches have gained significance. However, present anomaly and hybrid detection approaches suffer from three major setbacks: limited attack detection coverage, a large number of false alarms, and inefficiency in operation.

In this thesis, we address these three issues by introducing efficient intrusion detection frameworks and models which are effective in detecting a wide variety of attacks and which result in very few false alarms. Additionally, using our approach, attacks can not only be accurately detected but can also be identified, which helps to initiate effective intrusion response mechanisms in real time. Experimental results performed on the benchmark KDD 1999 data set and two additional data sets collected locally confirm that layered conditional random fields are particularly well suited to detect attacks at the network level, and that user session modeling using conditional random fields can effectively detect attacks at the application level.

We first introduce the layered framework with conditional random fields as the core intrusion detector. Layered conditional random fields can be used to build scalable and efficient network intrusion detection systems which are highly accurate in attack detection. We show that our systems can operate either at the network level or at the application level and perform better than other well-known approaches for intrusion detection. Experimental results further demonstrate that our system is robust to noise in training data and handles noise better than other systems such as decision trees and naive Bayes. We then introduce our unified logging framework for audit data collection and perform user session modeling using conditional random fields to build real-time application intrusion detection systems. We demonstrate that our system can effectively detect attacks even when they are disguised within normal events in a single user session. Our user session modeling approach based on conditional random fields also results in early attack detection. This is desirable since intrusion response mechanisms can then be initiated in real time, minimizing the impact of an attack.
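A sketch of per-event session labeling with a CRF, using the third-party sklearn_crfsuite package as one possible implementation; the feature names and event fields below are assumptions for illustration, not the thesis's feature set, and the thesis's layered framework stacks several such detectors.

```python
import sklearn_crfsuite

def event_features(session, i):
    """Features for the i-th event in a user session (hypothetical fields)."""
    ev = session[i]
    feats = {"event": ev["type"], "status": ev["status"]}
    if i > 0:
        feats["prev_event"] = session[i - 1]["type"]  # sequential context
    return feats

def train_session_crf(sessions, labels):
    """Train a CRF to tag each session event as normal or attack.

    sessions: list of sessions, each a list of event dicts
    labels:   matching list of per-event label sequences
    """
    X = [[event_features(s, i) for i in range(len(s))] for s in sessions]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                               max_iterations=100)
    crf.fit(X, labels)
    return crf  # crf.predict(X_new) labels each event in unseen sessions
```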
