1. A Markov Random Field Based Approach to 3D Mosaicing and Registration Applied to Ultrasound Simulation. Kutarnia, Jason Francis, 27 August 2014.
"
A novel Markov Random Field (MRF) based method for the mosaicing of 3D ultrasound volumes is presented in this dissertation. The motivation for this work is the production of training volumes for an affordable ultrasound simulator, which offers a low-cost, portable training solution for new users of diagnostic ultrasound by providing the scanning experience essential for developing the necessary psycho-motor skills. It also has the potential to introduce ultrasound instruction into medical education curricula. The interest in ultrasound training stems in part from the medical community's widespread adoption of point-of-care scanners, i.e. low-cost, portable ultrasound scanning systems.
This work develops a novel approach for producing 3D composite image volumes and validates the approach using clinically acquired fetal images from the obstetrics department at the University of Massachusetts Medical School (UMMS). Results using the Visible Human Female dataset as well as an abdominal trauma phantom are also presented. The process is broken down into five distinct steps, which include individual 3D volume acquisition, rigid registration, calculation of a mosaicing function, group-wise non-rigid registration, and finally blending. Each of these steps, common in medical image processing, has been investigated in the context of ultrasound mosaicing and has resulted in improved algorithms. Rigid and non-rigid registration methods are analyzed in a probabilistic framework and their sensitivity to ultrasound shadowing artifacts is studied.
The group-wise non-rigid registration problem is initially formulated as a maximum likelihood estimation, where the joint probability density function is composed of the partially overlapping ultrasound image volumes. This expression is simplified using a block-matching methodology, and the resulting discrete registration energy is shown to be equivalent to a Markov Random Field. Graph-based methods common in computer vision are then used for optimization, resulting in a set of transformations that bring the overlapping volumes into alignment. This optimization is parallelized using a fusion approach, where the registration problem is divided into 8 independent sub-problems whose solutions are fused together at the end of each iteration. This method provided a speedup factor of 3.91 over the single-threaded approach with no noticeable reduction in accuracy during our simulations. Furthermore, the registration problem is simplified by introducing a mosaicing function, which partitions the composite volume into regions filled with data from unique partially overlapping source volumes. This mosaicing function attempts to minimize intensity and gradient differences between adjacent sources in the composite volume.
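To make the block-matching formulation above concrete, the following Python sketch (an illustration under simplifying assumptions, not the dissertation's implementation) assigns each block of a moving volume a discrete displacement label by minimizing an MRF energy with a block-wise SSD data term and an L1 smoothness term between neighbouring blocks; it uses a few ICM sweeps in place of the graph-cut/fusion optimizer described above.

```python
import numpy as np
from itertools import product

def block_mrf_register(fixed, moving, block=8, max_shift=2, smooth=0.1, iters=5):
    # Candidate displacement labels: all integer shifts within +/- max_shift voxels.
    labels = [np.array(s) for s in product(range(-max_shift, max_shift + 1), repeat=3)]
    nb = tuple(np.array(fixed.shape) // block)            # number of blocks per axis
    data = np.zeros(nb + (len(labels),))                  # unary (data) costs

    for idx in product(*[range(n) for n in nb]):
        lo = np.array(idx) * block
        ref = fixed[lo[0]:lo[0]+block, lo[1]:lo[1]+block, lo[2]:lo[2]+block]
        for k, d in enumerate(labels):
            src = np.clip(lo + d, 0, np.array(moving.shape) - block)
            mov = moving[src[0]:src[0]+block, src[1]:src[1]+block, src[2]:src[2]+block]
            data[idx + (k,)] = np.sum((ref.astype(float) - mov) ** 2)   # SSD data term

    assign = np.argmin(data, axis=-1)                     # initialise with best unary label
    for _ in range(iters):                                # ICM sweeps over the block grid
        for idx in product(*[range(n) for n in nb]):
            cost = data[idx].copy()
            for ax in range(3):                           # 6-connected block neighbourhood
                for step in (-1, 1):
                    n_idx = list(idx); n_idx[ax] += step
                    if 0 <= n_idx[ax] < nb[ax]:
                        n_lab = labels[assign[tuple(n_idx)]]
                        cost += smooth * np.array([np.abs(l - n_lab).sum() for l in labels])
            assign[idx] = np.argmin(cost)
    # Per-block integer displacements that approximately minimise the MRF energy.
    return np.array([labels[k] for k in assign.ravel()]).reshape(nb + (3,))
```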
Experimental results to demonstrate the performance of the group-wise registration algorithm are also presented. This algorithm is initially tested on deformed abdominal image volumes generated using a finite element model of the Visible Human Female to show the accuracy of its calculated displacement fields. In addition, the algorithm is evaluated using real ultrasound data from an abdominal phantom. Finally, composite obstetrics image volumes are constructed using clinical scans of pregnant subjects, where fetal movement makes registration/mosaicing especially difficult.
Our solution to blending, which is the final step of the mosaicing process, is also discussed. The trainee will have a better experience if the volume boundaries are visually seamless, and this usually requires some blending prior to stitching. Also, regions of the volume where no data was collected during scanning should have an ultrasound-like appearance before being displayed in the simulator. This ensures the trainee's visual experience is not degraded by unrealistic images. A discrete Poisson approach has been adapted to accomplish these tasks. Following this, we will describe how a 4D fetal heart image volume can be constructed from swept 2D ultrasound. A 4D probe, such as the Philips X6-1 xMATRIX Array, would make this task simpler, as it can acquire 3D ultrasound volumes of the fetal heart in real time; however, such probes are not yet widespread.
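As a rough illustration of the discrete Poisson idea mentioned above (a 2D sketch with a Jacobi solver; the dissertation's 3D formulation and solver are not reproduced here), the following Python snippet fills a masked region so that it follows the gradients of a source patch while matching the surrounding target at the mask boundary.

```python
import numpy as np

def poisson_blend(target, source, mask, iters=2000):
    """Blend `source` into `target` inside `mask` by matching source gradients."""
    out = target.astype(float).copy()
    src = source.astype(float)
    # Guidance field: discrete Laplacian of the source patch.
    lap = np.zeros_like(src)
    lap[1:-1, 1:-1] = (src[:-2, 1:-1] + src[2:, 1:-1] +
                       src[1:-1, :-2] + src[1:-1, 2:] - 4.0 * src[1:-1, 1:-1])
    inside = mask.astype(bool).copy()
    inside[0, :] = inside[-1, :] = inside[:, 0] = inside[:, -1] = False
    for _ in range(iters):                                # Jacobi relaxation
        nb_sum = (out[:-2, 1:-1] + out[2:, 1:-1] +
                  out[1:-1, :-2] + out[1:-1, 2:])
        upd = (nb_sum - lap[1:-1, 1:-1]) / 4.0
        out[1:-1, 1:-1] = np.where(inside[1:-1, 1:-1], upd, out[1:-1, 1:-1])
    return out
```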
Once the theory has been introduced, we will describe the clinical component of this dissertation. For the purpose of acquiring actual clinical ultrasound data, from which training datasets were produced, 11 pregnant subjects were scanned by experienced sonographers at the UMMS following an approved IRB protocol. First, we will discuss the software/hardware configuration that was used to conduct these scans, which included some custom mechanical design. With the data collected using this arrangement we generated seamless 3D fetal mosaics, that is, the training datasets, loaded them into our ultrasound training simulator, and had them evaluated for accuracy by the sonographers at the UMMS. These mosaics were constructed from the raw scan data using the techniques previously introduced. Specific training objectives were established based on the input from our collaborators in the obstetrics sonography group. Important fetal measurements are reviewed, which form the basis for training in obstetrics ultrasound. Finally, clinical images demonstrating the sonographer making fetal measurements in practice, which were acquired directly by the Philips iU22 ultrasound machine from one of our 11 subjects, are compared with screenshots of corresponding images produced by our simulator.
2. Vector Space Embedding of Graphs via Statistics of Labelling Information. Gibert Domingo, Jaume, 14 September 2012.
Pattern recognition is the task that aims at distinguishing objects among different classes. When such a task is to be solved automatically, a crucial step is how to formally represent the patterns to the computer. Based on the different representational formalisms, we may distinguish between statistical and structural pattern recognition. The former describes objects as a set of measurements arranged in the form of what is called a feature vector. The latter assumes that relations between parts of the underlying objects need to be explicitly represented and thus uses relational structures such as graphs to encode their inherent information. Vector spaces are a very flexible mathematical structure that has allowed the development of several efficient ways to analyse patterns in the form of feature vectors. Nevertheless, such a representation cannot explicitly cope with binary relations between parts of the objects and is restricted to measuring exactly the same number of features for each pattern under study, regardless of their complexity. Graph-based representations present the contrary situation. They can easily adapt to the inherent complexity of the patterns but introduce a problem of high computational complexity, hindering the design of efficient tools to process and analyze patterns.
Solving this paradox is the main goal of this thesis. The ideal situation for solving pattern recognition problems would be to represent the patterns using relational structures such as graphs, and at the same time to be able to use the rich repository of data-processing tools from the statistical pattern recognition domain. An elegant solution to this problem is to transform the graph domain into a vector domain where any processing algorithm can be applied. In other words, by mapping each graph to a point in a vector space we automatically get access to the rich set of algorithms from the statistical domain to be applied in the graph domain. Such a methodology is called graph embedding.
In this thesis we propose to associate feature vectors to graphs in a simple and very efficient way by focusing on the labelling information that graphs store. In particular, we count frequencies of node labels and of edges between labels. Despite their locality, these features are able to robustly represent structurally global properties of graphs when considered together in the form of a vector. We initially deal with the case of discretely attributed graphs, where the features are easy to compute. The continuous case is tackled as a natural generalization of the discrete one, where rather than counting node and edge labelling instances, we count statistics of some representatives of them. We find that the proposed vectorial representations of graphs suffer from high dimensionality and correlation among components, and we address these problems with feature selection algorithms. We also explore how the diversity of different embedding representations can be exploited in order to boost the performance of base classifiers in a multiple classifier system framework. An extensive experimental evaluation finally shows how the methodology we propose can be efficiently computed and competes with other graph matching and embedding methodologies.
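To illustrate the discrete case described above, here is a minimal Python sketch (an assumed interface, not the thesis code) that embeds a graph with discrete node labels into a vector of node-label frequencies followed by frequencies of edges between each unordered pair of labels.

```python
import itertools
import networkx as nx

def label_frequency_embedding(G, labels, attr="label"):
    # Node part: how many nodes carry each label.
    node_part = [sum(1 for _, d in G.nodes(data=True) if d[attr] == a) for a in labels]
    # Edge part: how many edges join each unordered pair of labels.
    edge_part = []
    for a, b in itertools.combinations_with_replacement(labels, 2):
        n = sum(1 for u, v in G.edges()
                if {G.nodes[u][attr], G.nodes[v][attr]} == {a, b})
        edge_part.append(n)
    return node_part + edge_part

# Example: a triangle with two 'A' nodes and one 'B' node.
G = nx.Graph()
G.add_nodes_from([(1, {"label": "A"}), (2, {"label": "A"}), (3, {"label": "B"})])
G.add_edges_from([(1, 2), (2, 3), (1, 3)])
print(label_frequency_embedding(G, ["A", "B"]))   # -> [2, 1, 1, 2, 0]
```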
3. Graph-Based Sparse Learning: Models, Algorithms, and Applications. January 2014.
Sparse learning is a powerful tool to generate models of high-dimensional data with high interpretability, and it has many important applications in areas such as bioinformatics, medical image processing, and computer vision. Recently, a priori structural information has been shown to be powerful for improving the performance of sparse learning models. A graph is a fundamental way to represent structural information about features. This dissertation focuses on graph-based sparse learning. The first part of this dissertation aims to integrate a graph into sparse learning to improve performance. Specifically, the problem of feature grouping and selection over a given undirected graph is considered. Three models are proposed along with efficient solvers to achieve simultaneous feature grouping and selection, enhancing estimation accuracy. One major challenge is that solving large-scale graph-based sparse learning problems remains computationally demanding. An efficient, scalable, and parallel algorithm is therefore proposed for one widely used graph-based sparse learning approach, anisotropic total variation regularization, by explicitly exploiting the structure of the graph. The second part of this dissertation focuses on uncovering the graph structure from the data. Two issues in graphical modeling are considered. One is the joint estimation of multiple graphical models using a fused lasso penalty and the other is the estimation of hierarchical graphical models. The key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented, which shrinks the optimization problem and dramatically reduces the computational cost. / Doctoral Dissertation, Computer Science, 2014.
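As a toy illustration of the graph-guided penalties mentioned above (a hypothetical sketch, not one of the dissertation's proposed models or solvers), the following Python snippet minimizes a least-squares loss with an L1 penalty on the coefficients and an L1 penalty on differences across graph edges, using plain subgradient descent.

```python
import numpy as np

def graph_fused_lasso(X, y, edges, lam1=0.1, lam2=0.1, steps=3000, lr=0.05):
    """Subgradient descent on 0.5/n*||y - X b||^2 + lam1*||b||_1 + lam2*sum_|b_i - b_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    for t in range(1, steps + 1):
        grad = X.T @ (X @ beta - y) / n            # smooth least-squares part
        grad += lam1 * np.sign(beta)                # subgradient of ||beta||_1
        for i, j in edges:                          # subgradient of the edge (TV) term
            s = np.sign(beta[i] - beta[j])
            grad[i] += lam2 * s
            grad[j] -= lam2 * s
        beta -= (lr / np.sqrt(t)) * grad            # diminishing step size
    return beta

# Tiny example: features 0 and 1 are linked, encouraging similar coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 1.0, 0.0]) + 0.05 * rng.normal(size=50)
print(graph_fused_lasso(X, y, edges=[(0, 1)]).round(2))
```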
4. Abstractions of Graph Models. Johnson, Charles Addison, 04 June 2020.
Building models, whether to explain or to predict observed data, is an exercise of describing how the values of observed variables depend on those of others. Black box models only describe relationships between observed variables, and they are evaluated by their ability to accurately describe the values of observed variables in new situations not previously available to the model, such as the output response to a new set of inputs. Black box models describe the observed behavior of the underlying system, but they may not correctly describe the way in which the system computes this behavior. White box models, on the other hand, describe the observed behavior and also incorporate hidden, intermediate variables that are used to describe the specific computation the underlying system uses to generate its observed behavior. In this sense, we say the white box model captures the structure of the system, in addition to its observed behavior. Since a given white box model may be accurately described by an infinite variety of black box models, all computing the same observed behavior but using different structures to do so, we say that any of these black box models is an abstraction of the white box model. This thesis constructs foundational pieces of a unifying theory of linear mathematical abstractions that are central to scientific modeling. It offers a precise description of the spectrums of grey box models linking any white and black box representation. There are various motivations for having this rich variety of representations of a given system. One key motivation is that of consilience, that is, to deepen our understanding of the modeling process by connecting various well developed theories under the umbrella of a broader theory. This work offers a precise relationship between Mason's signal flow graphs [34] and Willems' behavioral systems theory [66], in addition to linking the classical transfer function theory used by Nyquist [44], Bode [3], and Wiener [65] to the state space theory preferred by Kalman [27]. Another motivation comes from the application of identification, or learning a model of the system from data. Learning problems trade off the number of a priori assumptions that one must make about a system, as well as the richness of available data, against the complexity of a model that one is able to confidently learn from measured observations. This work offers insight into these tradeoffs by characterizing them precisely over entire spectrums of grey box models of increasing complexity. A third motivation comes from the application of vulnerability analysis, which is the study of sensitivities of system behavior to structural perturbations in a grey-box model describing the attack surface, or representation of the system as visible to a potential attacker. The main results of this work, and its specific contributions, are as follows: 1) we define new graph-theoretic constructs and use them to create a unified framework for structural abstractions; 2) we demonstrate that there will always exist a complete, structure-preserving, acyclic abstraction for every single-input, fully-connected system; 3) we define structural controllability of an abstraction of a system and argue why our definition is good; and 4) we show how complete abstractions preserve structural controllability. These results were accepted for publication in two papers at the 2020 International Federation of Automatic Control World Congress, each submitted to a different special invited session. These papers comprise Chapters 2 and 3, respectively, of the thesis presented here, and they are expected to appear in print in July 2020.
5. Botnet Detection Using Graph Based Feature Clustering. Akula, Ravi Kiran, 04 May 2018.
Detecting botnets in a network is crucial because bot activities impact numerous areas such as security, finance, health care, and law enforcement. Most existing rule- and flow-based detection methods may not be capable of detecting bot activities in an efficient manner. Hence, designing a robust botnet-detection method is of high significance. In this study, we propose a botnet-detection methodology based on graph-based features. A Self-Organizing Map is applied to establish clusters of nodes in the network based on these features. Our method is capable of isolating bots in small clusters while containing most normal nodes in the large clusters. A filtering procedure is also developed to further enhance the algorithm's efficiency by removing inactive nodes from bot detection. The methodology is verified using the real-world CTU-13 and ISCX botnet datasets and benchmarked against classification-based detection methods. The results show that our proposed method can efficiently detect the bots despite their varying behaviors.
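A rough sketch of this two-stage idea follows (hypothetical feature choices and a toy self-organizing map, not the study's actual features, datasets, or SOM configuration): per-node graph features are extracted with networkx and clustered on a small SOM grid, so that atypical nodes tend to fall into sparsely populated cells.

```python
import numpy as np
import networkx as nx

def node_features(G):
    # Simple per-node graph features; z-scored so no feature dominates.
    deg = dict(G.degree())
    clust = nx.clustering(G)
    btw = nx.betweenness_centrality(G)
    nodes = list(G.nodes())
    X = np.array([[deg[n], clust[n], btw[n]] for n in nodes], dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    return nodes, X

def som_cluster(X, grid=(3, 3), epochs=50, lr=0.5, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0] * grid[1], X.shape[1]))    # codebook vectors
    coords = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])])
    for e in range(epochs):
        a = lr * (1 - e / epochs)                           # decaying learning rate
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))              # neighbourhood function
            W += a * h[:, None] * (x - W)
    # Assign each node to its winning map cell (cluster id).
    return np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
```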
6. From Pixels to People: Graph Based Methods for Grouping Problems in Computer Vision. Ding, Lei, 14 December 2010.
No description available.
7. Graph-based Inference with Constraints for Object Detection and Segmentation. Ma, Tianyang, January 2013.
For many fundamental problems of computer vision, adopting a graph-based framework can be straightforward and very effective. In this thesis, I propose several graph-based inference methods tailored for different computer vision applications. The work starts from contour-based object detection. In particular, we propose a novel framework for contour-based object detection that replaces the Hough-voting framework with dense-subgraph inference. Compared to previous work, we propose a novel shape matching scheme suitable for partial matching of edge fragments. The shape descriptor has the same geometric units as shape context, but our shape representation is not histogram based. The key contribution is that the grouping of partial matching hypotheses into object detection hypotheses is formulated as maximum clique inference on a weighted graph. Consequently, each detection result not only identifies the location of the target object in the image but also provides a precise location of its contours, since we transform a complete model contour to the image. Very competitive results on the ETHZ dataset, obtained in a pure shape-based framework, demonstrate that our method achieves not only accurate object detection but also precise contour localization on cluttered backgrounds. Similar to the task of grouping partial matches in the contour-based method, in many computer vision problems we would like to discover a certain pattern among a large amount of data. For instance, in unsupervised video object segmentation we need to automatically identify the primary object and segment it out in every frame. We propose a novel formulation that selects object region candidates simultaneously in all frames by finding a maximum weight clique in a weighted region graph. The selected regions are expected to have high objectness scores (unary potential) as well as share similar appearance (binary potential). Since both unary and binary potentials are unreliable, we introduce two types of mutex (mutual exclusion) constraints on regions in the same clique: intra-frame and inter-frame constraints. Both types of constraints are expressed in a single quadratic form. An efficient algorithm is applied to compute the maximal weight cliques that satisfy the constraints. We apply our method to challenging benchmark videos and obtain very competitive results that outperform state-of-the-art methods. We also show that the same maximum weight subgraph with mutex constraints formulation can be used to solve various computer vision problems, such as point matching, solving image jigsaw puzzles, and detecting objects using 3D contours. / Computer and Information Science
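To illustrate the selection step in the formulation above (a brute-force sketch with made-up weights, not the thesis's solver or its quadratic mutex formulation), the following Python snippet builds a region graph whose edges connect only mutually compatible candidates, enumerates maximal cliques, and keeps the one with the largest total weight.

```python
import networkx as nx

def max_weight_clique_selection(weights, compatible_pairs):
    # Nodes carry an objectness-style weight; mutex (conflicting) pairs simply get no edge.
    G = nx.Graph()
    for node, w in weights.items():
        G.add_node(node, weight=w)
    G.add_edges_from(compatible_pairs)
    best, best_w = None, float("-inf")
    for clique in nx.find_cliques(G):               # enumerate all maximal cliques
        w = sum(weights[n] for n in clique)
        if w > best_w:
            best, best_w = clique, w
    return best, best_w

# Example: regions a and b conflict (no edge), so the best clique avoids pairing them.
weights = {"a": 0.9, "b": 0.8, "c": 0.6, "d": 0.5}
compatible = [("a", "c"), ("a", "d"), ("c", "d"), ("b", "c")]
print(max_weight_clique_selection(weights, compatible))  # best clique {a, c, d}, weight 2.0
```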
8. Graph-based approaches for semi-supervised and cross-domain sentiment analysis. Ponomareva, Natalia, January 2014.
The rapid development of Internet technologies has resulted in a sharp increase in the number of Internet users who create content online. User-generated content often represents people's opinions, thoughts, speculations and sentiments and is a valuable source of information for companies, organisations and individual users. This has led to the emergence of the field of sentiment analysis, which deals with the automatic extraction and classification of sentiments expressed in texts. Sentiment analysis has been intensively researched over the last ten years, but there are still many issues to be addressed. One of the main problems is the lack of labelled data necessary to carry out precise supervised sentiment classification. In response, research has moved towards developing semi-supervised and cross-domain techniques. Semi-supervised approaches still need some labelled data and their effectiveness is largely determined by the amount of these data, whereas cross-domain approaches usually perform poorly if training data are very different from test data. The majority of research on sentiment classification deals with the binary classification problem, although for many practical applications this rather coarse sentiment scale is not sufficient. Therefore, it is crucial to design methods which are able to perform accurate multiclass sentiment classification. The aims of this thesis are to address the problem of limited availability of data in sentiment analysis and to advance research in semi-supervised and cross-domain approaches for sentiment classification, considering both binary and multiclass sentiment scales. We adopt graph-based learning as our main method and explore the most popular and widely used graph-based algorithm, label propagation. We investigate various ways of designing sentiment graphs and propose a new similarity measure which is unsupervised, easy to compute, does not require deep linguistic analysis and, most importantly, provides a good estimate for sentiment similarity as proved by intrinsic and extrinsic evaluations. The main contribution of this thesis is the development and evaluation of a graph-based sentiment analysis system that a) can cope with the challenges of limited data availability by using semi-supervised and cross-domain approaches, b) is able to perform multiclass classification, and c) achieves highly accurate results which are superior to those of most state-of-the-art semi-supervised and cross-domain systems. We systematically analyse and compare semi-supervised and cross-domain approaches in the graph-based framework and propose recommendations for selecting the most pertinent learning approach given the data available. Our recommendations are based on two domain characteristics, domain similarity and domain complexity, which were shown to have a significant impact on semi-supervised and cross-domain performance.
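As a compact illustration of the label-propagation algorithm discussed above (a generic sketch with a toy similarity matrix, not the thesis's similarity measure or system), the following Python snippet diffuses seed sentiment labels over a weighted document-similarity graph while keeping the labelled nodes clamped.

```python
import numpy as np

def label_propagation(W, labelled_idx, labels, n_classes, iters=100):
    """W: symmetric document-similarity matrix; labelled nodes stay clamped."""
    n = W.shape[0]
    D_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
    F = np.zeros((n, n_classes))
    F[labelled_idx, labels] = 1.0                    # one-hot seed labels
    for _ in range(iters):
        F = D_inv[:, None] * (W @ F)                 # propagate along weighted edges
        F[labelled_idx] = 0.0
        F[labelled_idx, labels] = 1.0                # clamp the labelled nodes
    return F.argmax(axis=1)

# Toy example: two tight clusters of three documents, one seed label in each.
W = np.array([[0, 1, 1, 0,   0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0,   1, 0, 1],
              [0, 0, 0,   1, 1, 0]], dtype=float)
print(label_propagation(W, labelled_idx=[0, 5], labels=[0, 1], n_classes=2))
# expected: first three documents -> class 0, last three -> class 1
```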
9. GRAPH-BASED ANALYSIS FOR E-COMMERCE RECOMMENDATION. Huang, Zan, January 2005.
Recommender systems automate the process of recommending products and services to customers based on various types of data including customer demographics, product features, and, most importantly, previous interactions between customers and products (e.g., purchasing, rating, and catalog browsing). Despite significant research progress and growing acceptance in real-world applications, two major challenges remain to be addressed to implement effective e-commerce recommendation applications. The first challenge is concerned with making recommendations based on sparse transaction data. The second challenge is the lack of a unified framework to integrate multiple types of input data and recommendation approaches. This dissertation investigates graph-based algorithms to address these two problems. The proposed approach is centered on consumer-product graphs that represent sales transactions as links connecting consumer and product nodes. In order to address the sparsity problem, I investigate network spreading activation algorithms and a newly proposed link analysis algorithm motivated by ideas from Web graph analysis techniques. Experimental results with several e-commerce datasets indicated that both classes of algorithms outperform a wide range of existing collaborative filtering algorithms, especially under sparse data. Two graph-based models that enhance the simple consumer-product graph were proposed to provide unified recommendation frameworks. The first model, a two-layer graph model, enhances the consumer-product graph by incorporating consumer/product attribute information as consumer and product similarity links. The second model is based on probabilistic relational models (PRMs) developed in the relational learning literature. It is demonstrated with e-commerce datasets that the proposed frameworks not only conceptually unify many of the existing recommendation approaches but also allow the exploitation of a wider range of data patterns in an integrated manner, leading to improved recommendation performance. In addition to the recommendation algorithm design research, this dissertation also employs random graph theory to study the topological characteristics of consumer-product graphs and the fundamental mechanisms that generate the sales transaction data. This research represents an early step towards a meta-level analysis framework for validating the fundamental assumptions made by different recommendation algorithms regarding the consumer-product interaction generation process, and thus supporting systematic recommendation model/algorithm selection and evaluation.
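To make the spreading-activation idea above concrete (a toy sketch over a tiny purchase matrix; the dissertation's algorithms, weighting schemes, and datasets are not reproduced), the following Python snippet alternates consumer-to-product hops starting from a target consumer's purchases and ranks unpurchased products by the activation they accumulate.

```python
import numpy as np

def spreading_activation_recommend(A, consumer, hops=2, decay=0.5, top_k=3):
    # A[i, j] = 1 if consumer i purchased product j (binary purchase matrix).
    act_p = A[consumer].astype(float)                # seed: the consumer's own purchases
    total = np.zeros_like(act_p)
    for _ in range(hops):
        act_c = decay * (A @ act_p)                  # spread products -> consumers
        act_p = decay * (A.T @ act_c)                # spread consumers -> products
        total += act_p
    total[A[consumer] > 0] = -np.inf                 # exclude already-purchased products
    return np.argsort(total)[::-1][:top_k]           # highest-activation products first

# Toy example: consumer 0 shares purchases with consumer 1, who also bought item 3.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1]])
print(spreading_activation_recommend(A, consumer=0, top_k=2))   # -> [3 2]
```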
10. Semi-Automatic assessment of students' graph-based diagrams. Batmaz, Firat, January 2011.
Diagrams are increasingly used in many design methods, and are being taught in a variety of contexts in higher education, such as database conceptual design or software design in computer science. They are an important part of many assessments. Currently, computer-aided assessments are widely used for multiple-choice questions, but they lack the ability to assess a student's knowledge in a more comprehensive way, which is required for diagram-type student work. The aim of this research is to develop a semi-automatic assessment framework, which enables the use of a computer to support the assessment process of diagrammatic solutions, with the focus of ensuring the consistency of grades and feedback on solutions. A novel trace model, which captures design traces of student solutions, was developed as part of the framework and was used to provide the matching criteria for grouping the solutions. A new marking style, partial marking, was developed to mark these solution groups manually. The Case-Based Reasoning method is utilised in the framework to mark some of the groups automatically. A guideline for scenario writing was proposed to increase the efficiency of automatic marking. A prototype diagram editor, a marking tool and a scenario writing environment were implemented for the proposed framework in order to demonstrate proof of concept. The results of experiments show that the framework is feasible to use in formative assessment and that it provides consistent marking and personalised feedback to the students. The framework also has the potential to significantly reduce the time and effort required by the examiner to mark student diagrams. Although the constructed framework was specifically used for the assessment of database diagrams, it is generic enough to be used for other types of graph-based diagrams.