1161 |
Simultaneous Graph Representation ProblemsJampani, Krishnam Raju January 2011 (has links)
Many graphs arising in practice can be represented in a concise and intuitive way that conveys their structure. For example, a planar graph can be represented in the plane with points for vertices and non-crossing curves for edges, and an interval graph can be represented on the real line with intervals for vertices and intersections of intervals representing edges. The concept of "simultaneity" applies to several types of graphs: the idea is to find representations for two graphs that share some common vertices and edges, and to ensure that the common vertices and edges are represented the same way. Simultaneous representation problems arise in any situation where two related graphs should be represented consistently. A main instance is temporal relationships, where an old graph and a new graph share some common parts. Pairs of related graphs arise in many other situations as well: for example, two social networks that share some members; two schedules that share some events; overlap graphs of DNA fragments of two similar organisms; or circuit graphs of two adjacent layers on a computer chip. In this thesis, we study the simultaneous representation problem for several graph classes.
For planar graphs the problem is defined as follows. Let G1 and G2 be two graphs sharing some vertices and edges. The simultaneous planar embedding problem asks whether there exist planar embeddings (or drawings) of G1 and G2 such that every vertex shared by the two graphs is mapped to the same point and every shared edge is mapped to the same curve in both embeddings. Over the last few years there has been considerable work on simultaneous planar embeddings, which have been called 'simultaneous embeddings with fixed edges'. A major open question is whether simultaneous planarity of two graphs can be tested in polynomial time. We give a linear-time algorithm for testing the simultaneous planarity of any two graphs that share a 2-connected subgraph. Our algorithm also extends to the case of k planar graphs, where each vertex [edge] is either common to all graphs or belongs to exactly one of them.
Next we introduce a new notion of simultaneity for intersection graph classes (interval graphs, chordal graphs, etc.) and for comparability graphs. For interval graphs, the problem is defined as follows. Let G1 and G2 be two interval graphs sharing some vertices I and the edges induced by I. G1 and G2 are said to be 'simultaneous interval graphs' if there exist interval representations of G1 and G2 such that every vertex of I is assigned the same interval in both representations. The 'simultaneous representation problem' for interval graphs asks whether G1 and G2 are simultaneous interval graphs. The problem is defined analogously for other intersection graph classes.
For comparability graphs and any intersection graph class, we show that the simultaneous representation problem for the graph class is equivalent to a graph augmentation problem: given graphs G1 and G2, sharing vertices I and the corresponding induced edges, do there exist edges E' between G1-I and G2-I such that the graph G1 ∪ G2 ∪ E' belongs to the graph class? This equivalence implies that the simultaneous representation problem is closely related to other well-studied problems in the literature, namely, sandwich graphs and probe graphs.
We give efficient algorithms for solving the simultaneous representation problem for interval graphs, chordal graphs, comparability graphs and permutation graphs. Further, our algorithms for comparability and permutation graphs solve a more general version of the problem when there are multiple graphs, any two of which share the same common graph. This version of the problem also generalizes probe graphs.
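As an illustrative sketch (not taken from the thesis), the definition for interval graphs can be checked directly once candidate interval assignments are in hand: a single interval map serves both graphs, so shared vertices automatically receive the same interval, and one only needs to verify that each graph's edge set matches the interval overlaps. The helper names below are hypothetical.

```python
def is_interval_representation(intervals, edges, vertices):
    """Check that pairwise interval overlaps reproduce exactly the edge set."""
    vs = sorted(vertices)
    for i, u in enumerate(vs):
        for v in vs[i + 1:]:
            (a1, b1), (a2, b2) = intervals[u], intervals[v]
            overlaps = a1 <= b2 and a2 <= b1  # closed intervals intersect
            if overlaps != ((u, v) in edges or (v, u) in edges):
                return False
    return True

def simultaneous_interval(intervals, g1, g2):
    """Shared vertices of g1 = (V1, E1) and g2 = (V2, E2) automatically get
    the same interval because one interval map serves both graphs."""
    return all(is_interval_representation(intervals, E, V) for V, E in (g1, g2))

g1 = ({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')})
g2 = ({'b', 'c', 'd'}, {('b', 'c'), ('c', 'd')})
iv = {'a': (0, 2), 'b': (1, 4), 'c': (3, 6), 'd': (5, 7)}
print(simultaneous_interval(iv, g1, g2))  # True
```

The hard part, of course, is finding such an interval map; the check above only verifies a candidate.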
|
1162 |
Surface Topological Analysis for Image SynthesisZhang, Eugene 09 July 2004 (has links)
Topology-related issues are becoming increasingly important in Computer Graphics. This research examines the use of topological analysis for solving two important problems in 3D graphics: surface parameterization and vector field design on surfaces. Many applications, such as high-quality and interactive image synthesis, benefit from the solutions to these problems.
Surface parameterization refers to segmenting a 3D surface into a number of patches and unfolding them onto a plane. A surface parameterization allows surface properties to be sampled and stored in a texture map for high-quality and interactive display. One of the most important quality measures for surface parameterization is stretch, which causes an uneven sampling rate across the surface and needs to be avoided whenever possible. In this thesis, I present an automatic parameterization technique that segments the surface according to the handles and large protrusions in the surface. This results in a small number of large patches that can be unfolded with relatively little stretch. To locate the handles and large protrusions, I make use of topological analysis of a distance-based function on the surface.
Vector field design refers to creating continuous vector fields on 3D surfaces with control over vector field topology, such as the number and location of the singularities. Many graphics applications make use of an input vector field. The singularities in the input vector field often cause visual artifacts for these applications, such as texture synthesis and non-photorealistic rendering. In this thesis, I describe a vector field design system for both planar domains and 3D mesh surfaces. The system provides topological editing operations that allow the user to control the number and location of the singularities in the vector field. For the system to work on 3D mesh surfaces, I present a novel piecewise interpolating scheme that produces a continuous vector field based on the vector values defined at the vertices of the mesh. I demonstrate the effectiveness of the system through several graphics applications: painterly rendering of still images, pencil-sketches of surfaces, and texture synthesis.
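One concrete topological quantity behind such editing operations is the Poincaré index of a singularity. The following sketch (illustrative only, not the interpolation scheme of the thesis) estimates the index numerically by measuring how much the vector direction turns along a small circle around the singularity:

```python
import math

def poincare_index(field, cx, cy, r=0.5, n=720):
    """Poincaré index of an isolated singularity: total turning of the
    vector direction along a small circle around (cx, cy), in units of 2*pi."""
    total = 0.0
    prev = None
    for k in range(n + 1):
        t = 2 * math.pi * k / n
        vx, vy = field(cx + r * math.cos(t), cy + r * math.sin(t))
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            # unwrap to the principal branch so small steps accumulate correctly
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

print(poincare_index(lambda x, y: (x, y), 0, 0))   # 1  (source)
print(poincare_index(lambda x, y: (x, -y), 0, 0))  # -1 (saddle)
```

Sources, sinks and centers have index +1, saddles have index -1; a design system that removes or merges singularities must conserve the total index dictated by the surface's Euler characteristic.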
|
1163 |
3D face recognition with wireless transportationZou, Le 15 May 2009 (has links)
In this dissertation, we focus on two related parts of a 3D face recognition system with wireless transportation. In the first part, the core components of the system, namely the feature extraction and classification components, are introduced. In the feature extraction component, range images are taken as inputs and processed in order to extract features. The classification component uses the extracted features as inputs and makes classification decisions based on trained classifiers. In the second part, we consider the wireless transportation problem for range images, which are captured by scattered sensor nodes from target objects and are forwarded to the core components (i.e., the feature extraction and classification components) of the face recognition system. In contrast to the conventional definition of a sensor as a transducer, a sensor node here can be a person, a vehicle, etc. The wireless transportation component not only brings flexibility to the system but also makes "proactive" face recognition possible.
For the feature extraction component, we first introduce the 3D Morphable Model. Then a 3D feature extraction algorithm based on the 3D Morphable Model is presented. The algorithm is insensitive to facial expression. Experimental results show that it can accurately extract features. Following that, we discuss the generic face warping algorithm that can quickly extract features with high accuracy. The proposed algorithm is robust to holes, facial expressions and hair. Furthermore, our experimental results show that the generated features can highly differentiate facial images.
For the classification component, a classifier based on the Mahalanobis distance is introduced. Based on this classifier, the recognition performance of the extracted features is given. The classification results demonstrate the advantage of the features from the generic face warping algorithm.
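A minimal sketch of a Mahalanobis-distance classifier of the kind described (illustrative, with hypothetical class names; the 2-D case is shown so the covariance can be inverted by hand):

```python
def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance for 2-D features; the 2x2 covariance
    is inverted by hand: inv([[a,b],[c,d]]) = [[d,-b],[-c,a]] / det."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    return (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det

def classify(x, classes):
    """Assign x to the class whose mean is nearest in Mahalanobis distance."""
    return min(classes, key=lambda name: mahalanobis2(x, *classes[name]))

# hypothetical per-subject feature statistics learned from training scans
classes = {
    'subject_A': ((0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))),
    'subject_B': ((4.0, 0.0), ((1.0, 0.0), (0.0, 1.0))),
}
print(classify((1.0, 0.5), classes))  # subject_A
```

Unlike plain Euclidean distance, the Mahalanobis distance discounts directions in feature space where a class naturally varies a lot, which is why it is a common choice for face features.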
For the wireless transportation of the captured images, we consider location-based wireless sensor networks (WSNs). In order to achieve efficient routing performance, a set of distributed stateless routing protocols (PAGER) is proposed for wireless sensor networks. The loop-freedom and guaranteed-delivery properties of the static version (PAGER-S) are proved. The performance of the PAGER protocols is then compared with other well-known routing schemes using network simulator 2 (NS2). Simulation results demonstrate the advantages of PAGER.
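PAGER's actual cost-assignment mechanism is not reproduced here, but the stateless geographic forwarding idea underlying such protocols can be sketched: each node hands the packet to the neighbor geographically closest to the destination, and a recovery mode must take over when greedy forwarding reaches a local minimum (a "hole"):

```python
import math

def greedy_forward(pos, nbrs, node, dest):
    """Stateless geographic step: forward to the neighbor closest to dest;
    returns None at a local minimum, where greedy forwarding fails."""
    d = lambda a, b: math.dist(pos[a], pos[b])
    best = min(nbrs[node], key=lambda n: d(n, dest), default=None)
    if best is None or d(best, dest) >= d(node, dest):
        return None
    return best

def route(pos, nbrs, src, dest, max_hops=100):
    path = [src]
    while path[-1] != dest:
        if len(path) > max_hops:
            return None
        nxt = greedy_forward(pos, nbrs, path[-1], dest)
        if nxt is None:
            return None  # stuck in a hole: a recovery mechanism would take over here
        path.append(nxt)
    return path

pos = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(route(pos, nbrs, 0, 2))  # [0, 1, 2]
```

Stateless protocols of this kind need only each node's own position, its neighbors' positions, and the destination, which is what makes them attractive for large, resource-constrained sensor networks.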
|
1164 |
Applications of graph-based codes in networks: analysis of capacity and design of improved algorithmsVellambi, Badri Narayanan 25 August 2008 (has links)
The conception of turbo codes by Berrou et al. has created renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have fortified the role these codes will play as potential solutions for present and future communication problems.
This work focuses on both practical and theoretical aspects of graph-based codes. The thesis can be broadly categorized into three parts. The first part of the thesis focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rates is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed.
The second part of the thesis aims at establishing these codes as a good tool for developing reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding is seen to offer an elegant means of achieving the goals of the chosen communication protocols. The ratelessness and the randomness of the encoding process make this scheme particularly suited to such network applications.
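The appeal of ratelessness can be made concrete with a toy fountain code: each encoded packet XORs a random subset of source blocks, and a peeling decoder recovers the blocks from any sufficiently large collection of packets. This sketch is illustrative only (a toy degree distribution, not the soliton distribution of practical rateless codes):

```python
import random

def lt_encode(blocks, n_packets, seed=0):
    """Rateless encoding sketch: each output packet XORs a random
    subset of the source blocks."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        deg = min(rng.choice([1, 2, 2, 3, 4]), len(blocks))
        idx = rng.sample(range(len(blocks)), deg)
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((frozenset(idx), val))
    return packets

def lt_decode(packets, n_blocks):
    """Peeling decoder: resolve degree-1 packets, substitute back, repeat."""
    pending = [[set(i), v] for i, v in packets]
    known = {}
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for p in pending:
            for i in [j for j in p[0] if j in known]:
                p[0].discard(i)   # substitute already-decoded blocks
                p[1] ^= known[i]
            if len(p[0]) == 1:    # a degree-1 packet reveals one block
                i = p[0].pop()
                known.setdefault(i, p[1])
                progress = True
    return [known[i] for i in range(n_blocks)] if len(known) == n_blocks else None

print(lt_decode([({0}, 5), ({0, 1}, 6), ({1, 2}, 7)], 3))  # [5, 3, 4]
```

Because any sufficiently large random collection of packets decodes, a receiver simply listens until it has enough, with no feedback about which specific packets were lost; this is the property that suits broadcast and delay-tolerant settings.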
The final part of the thesis investigates an application of a specific class of codes, called network codes, to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks. The proposed method extends existing results into an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on the various links of the network.
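For a single link feeding a finite buffer, the Markov-chain viewpoint can be illustrated with the classical M/M/1/K birth-death chain, whose stationary distribution immediately yields the packet-drop probability and throughput (a single-queue sketch, not the iterative multi-link technique of the thesis):

```python
def mm1k_stationary(lam, mu, K):
    """Stationary distribution of the M/M/1/K birth-death chain:
    pi[n] is proportional to (lam/mu)**n for n = 0..K buffered packets."""
    rho = lam / mu
    w = [rho ** n for n in range(K + 1)]
    Z = sum(w)
    return [x / Z for x in w]

lam, mu, K = 0.8, 1.0, 5       # arrival rate, service rate, buffer size
pi = mm1k_stationary(lam, mu, K)
drop = pi[K]                   # an arrival is lost when the buffer is full
throughput = lam * (1 - drop)  # accepted traffic actually carried by the link
```

Chaining such per-link occupancy distributions through a network, and iterating until the inter-link flows are consistent, is the spirit of the iterative technique described above.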
|
1165 |
Computational Protein Structure Analysis : Kernel And Spectral MethodsBhattacharya, Sourangshu 08 1900 (has links)
The focus of this thesis is to develop computational techniques for the analysis of protein structures. We model protein structures as sets of points in 3-dimensional space, which in turn are modeled as weighted graphs. The problem of protein structure comparison is posed as a weighted graph matching problem, and an algorithm motivated by spectral graph matching techniques is developed. The thesis also proposes novel similarity measures by deriving kernel functions. These kernel functions allow the data to be mapped to a suitably defined Reproducing Kernel Hilbert Space (RKHS), paving the way for efficient algorithms for protein structure classification.
Protein structure comparison (structure alignment) is a classical method of determining the overall similarity between two protein structures. This problem can be posed as the approximate weighted subgraph matching problem, which is a well-known NP-hard problem. Spectral graph matching techniques provide efficient heuristic solutions for the weighted graph matching problem using eigenvectors of the adjacency matrices of the graphs. We propose a novel and efficient algorithm for protein structure comparison using the notion of neighborhood preserving projections (NPP), motivated by spectral graph matching. Empirically, we demonstrate that comparing the NPPs of two protein structures gives the correct equivalences when the sizes of the proteins being compared are roughly similar. Moreover, the resulting algorithm is 3-20 times faster than the existing state-of-the-art techniques. This algorithm was used for retrieval of protein structures from standard databases with accuracies comparable to the state of the art.
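A stripped-down version of the spectral idea (illustrative only, not the NPP algorithm itself) pairs vertices of two graphs by sorting them on their principal-eigenvector entries:

```python
def leading_eigvec(A, iters=300):
    """Power iteration for the principal eigenvector of a symmetric
    nonnegative matrix; iterating with (A + I) shifts the spectrum so
    bipartite graphs (with eigenvalues in +/- pairs) still converge."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [v[i] + sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def spectral_match(A, B):
    """Greedy correspondence: sort vertices of each graph by their
    principal-eigenvector entry and pair them up in order."""
    u, v = leading_eigvec(A), leading_eigvec(B)
    order_a = sorted(range(len(u)), key=lambda i: u[i])
    order_b = sorted(range(len(v)), key=lambda i: v[i])
    return dict(zip(order_a, order_b))

# a path a0-a1-a2 and the same path relabelled so its centre is vertex 2
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
B = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]
print(spectral_match(A, B)[1])  # 2: centre maps to centre
```

Real structure alignment must additionally respect sequence order and tolerate indels, which is where the NPP construction and the neighborhood-growing refinement below come in.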
A limitation of the above method is that it gives wrong results when the number of unmatched residues, also called insertions and deletions (indels), is very high. This problem was tackled by matching neighborhoods rather than entire structures. For each pair of neighborhoods, we grow the neighborhood alignments to obtain alignments for the entire structures. This results in a robust method that has outperformed the existing state-of-the-art methods on standard benchmark datasets. This method was also implemented using MPI on a cluster for database search.
Another important problem in computational biology is the classification of protein structures into classes exhibiting high structural similarity. Many manual and semi-automatic structural classification databases exist. Kernel methods, along with support vector machines (SVMs), have proved to be a robust and principled tool for classification. We propose novel positive semidefinite kernel functions on protein structures based on spatial neighborhoods. The kernels are derived using a general technique called the convolution kernel, and are shown to be related to the spectral alignment score in a limiting case. These kernels have outperformed the existing tools when validated on a well-known manual classification scheme called SCOP. The kernels were designed with the general problem of capturing structural similarity in mind, and have been successfully applied to problems in other domains, e.g. computer vision.
|
1166 |
Seismogram Classification TechniquesΠίκουλης, Βασίλης 01 October 2008 (has links)
Seismic events that occur in a confined region, meaning that the distance separating the sources is very small compared to the distance between the sources and the recording station, are known in the literature as similar seismic events and have been under study for the past two decades.
The re-estimation of the hypocenter parameters, or relocation, of similar events gives an estimation error that is between one and two orders of magnitude lower than that produced by conventional location procedures. As a result, this approach creates a much more detailed image of the seismicity of the region under study, from which an exact mapping of the region's active faults can be derived.
The relocation procedure is in fact a complex procedure, consisting of three basic steps:
1. Identification of groups of similar seismic events.
2. Estimation of the arrival time differences between events of the same group.
3. Solution of the inverse problem.
The first of the above steps, namely the identification of the seismic families present in the given catalog, plays an important role in the overall success of the procedure: only a correct solution of this problem ensures that the requirements for applying the relocation procedure are met, and hence that the geological analysis based on its outcome is meaningful. The problem is also encountered in other geological applications, such as the automatic identification of the fault that generated an unknown event by comparison with available representative families.
The problem of the identification of the seismic families is a classification problem and as such, requires the solution of two subproblems, namely the matching problem and the clustering problem. The object of the first one is the comparison of all the possible event pairs of the catalog with the purpose of locating all the existing similar pairs, while the second one is concerned with the grouping of the similar pairs into seismic families.
In this work, taking into consideration the particularities introduced into the above classification problem by the special nature of the seismograms and by the specific requirements of the application, we propose a matching method based on a generalized form of the correlation coefficient and a graph-based clustering technique, aiming at an effective and efficient solution of the problem at hand.
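The two subproblems can be sketched in a few lines (with the plain correlation coefficient standing in for the generalized form proposed here, and connected components as the simplest graph-based clustering):

```python
import math
from itertools import combinations

def corr(x, y):
    """Plain correlation coefficient between two equal-length traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return 0.0 if sx == 0 or sy == 0 else sxy / (sx * sy)

def families(traces, thresh=0.9):
    """Matching + clustering: similar pairs form a graph whose connected
    components are the seismic families."""
    n = len(traces)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):        # matching step
        if corr(traces[i], traces[j]) >= thresh:
            adj[i].add(j)
            adj[j].add(i)
    seen, groups = set(), []                      # clustering step
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u] - seen)
        groups.append(sorted(comp))
    return groups

print(families([[0, 1, 0, -1], [0, 2, 0, -2], [1, 0, -1, 0]]))  # [[0, 1], [2]]
```

Real seismograms additionally require alignment in time before comparison, which is one of the particularities the generalized correlation coefficient is meant to address.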
|
1167 |
Data driven analysis of brain activity and functional connectivity in fMRI / Explorative Datenanalyse und Identifikation funktioneller Konnektivität aus fMRT-DatenDodel, Silke 20 December 2002 (has links)
No description available.
|
1168 |
Statistical analysis of networks and biophysical systems of complex architectureValba, Olga 15 October 2013 (has links) (PDF)
Complex organization is found in many biological systems. For example, biopolymers can possess highly hierarchical structures, which underlie their functional peculiarities. Understanding such complex organization allows one to describe biological phenomena and to predict molecular functions. Moreover, we can characterize a specific phenomenon by probabilistic quantities (variances, means, etc.), assuming the primary biopolymer structure to be randomly formed according to some statistical distribution. Such a formulation is oriented toward evolutionary problems. Artificially constructed biological networks are another common object of statistical physics with rich functional properties. The behavior of a cell is a consequence of complex interactions between its numerous components, such as DNA, RNA, proteins and small molecules. Cells use signaling pathways and regulatory mechanisms to coordinate multiple processes, allowing them to respond and adapt to a changing environment. Recent theoretical advances allow us to describe cellular network structure using graph concepts, revealing principal organizational features shared with numerous non-biological networks. The aim of this thesis is to develop a set of methods for studying statistical and dynamic objects of complex architecture and, in particular, scale-free structures, which have no characteristic spatial and/or time scale. For such systems, the use of standard mathematical methods, relying on the average behavior of the whole system, is often incorrect or useless, while a detailed many-body description is almost hopeless because of the combinatorial complexity of the problem. Here we focus on two problems. The first part addresses the statistical analysis of random biopolymers. Apart from the evolutionary context, our studies cover more general problems of planar topology that appear in the description of various systems, ranging from gauge theory to biophysics. We investigate analytically and numerically a phase transition of a generic planar matching problem, from the regime where almost all the vertices are paired to the situation where a finite fraction of them remains unmatched. The second part of this work focuses on statistical properties of networks. We demonstrate the possibility of defining co-expression gene clusters within a network context from their specific motif distribution signatures. We also show how a method based on the shortest path function (SPF) can be applied to gene-interaction sub-networks of co-expression gene clusters to efficiently predict novel regulatory transcription factors (TFs). The biological significance of this method is demonstrated by applying it to groups of genes with a shared regulatory locus, found by genetical genomics. Finally, we discuss the formation of stable patterns of motifs in networks under selective evolution, in the context of the creation of islands of "superfamilies".
|
1169 |
Interference-Optimal Frequency Allocation in Femtocellular NetworksOuda, Mahmoud 02 April 2012 (has links)
The evolution of the mobile Internet has led to the growth of bandwidth-demanding applications such as video streaming and social networking. The data rates projected for such applications cannot be sustained by current cellular networks. New network architectures such as Long Term Evolution (LTE) and LTE-Advanced have been carefully engineered and introduced to support such high data rates.
The recent introduction of femtocells has enabled high data rates and better indoor coverage, without the need for site establishment or upgrades to the network infrastructure. Femtocells, however, will potentially suffer from major interference problems due to their expected dense and ad hoc deployment. The main contribution of this thesis is the introduction of a new and very promising direction for deriving capable and efficient interference mitigation schemes, and a comparison of this direction with current techniques in the literature. Several works have studied the effect of interference on networks employing femtocells. In this thesis, we also survey such works and provide an overview of the elements considered in mitigating interference.
We introduce a scheme known for its optimality and use it for frequency assignment in downlink femtocell networks. The algorithm is based on optimization search rather than greedy or heuristic methods. Experimental simulations are presented to evaluate the proposed scheme against other schemes from the literature. / Thesis (Master, Computing) -- Queen's University, 2012-03-31 02:14:28.549
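For comparison, the greedy baseline that such an optimization-based scheme is typically evaluated against can be sketched as graph coloring on the interference graph (illustrative only; the thesis's optimal scheme is not reproduced here):

```python
def assign_frequencies(interference, n_channels):
    """Greedy channel assignment on an interference graph: each femtocell
    takes the lowest channel unused by its already-assigned interferers.
    Returns None if n_channels does not suffice for this ordering."""
    assign = {}
    # most-constrained cells first, a common coloring heuristic
    for cell in sorted(interference, key=lambda c: -len(interference[c])):
        used = {assign[n] for n in interference[cell] if n in assign}
        free = [ch for ch in range(n_channels) if ch not in used]
        if not free:
            return None
        assign[cell] = free[0]
    return assign

# three mutually interfering femtocells need three distinct channels
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(sorted(assign_frequencies(tri, 3).values()))  # [0, 1, 2]
```

Greedy coloring can fail or waste channels on dense deployments, which is exactly the gap an optimization-search assignment aims to close.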
|
1170 |
Graph Theory and Dynamic Programming Framework for Automated Segmentation of Ophthalmic Imaging BiomarkersChiu, Stephanie Ja-Yi January 2014 (has links)
Accurate quantification of anatomical and pathological structures in the eye is crucial for the study and diagnosis of potentially blinding diseases. Earlier and faster detection of ophthalmic imaging biomarkers also leads to optimal treatment and improved vision recovery. While modern optical imaging technologies such as optical coherence tomography (OCT) and adaptive optics (AO) have facilitated in vivo visualization of the eye at the cellular scale, the massive influx of data generated by these systems is often too large to be fully analyzed by ophthalmic experts without extensive time or resources. Furthermore, manual evaluation of images is inherently subjective and prone to human error.
This dissertation describes the development and validation of a framework called graph theory and dynamic programming (GTDP) to automatically detect and quantify ophthalmic imaging biomarkers. The GTDP framework was validated as an accurate technique for segmenting retinal layers on OCT images. The framework was then extended through the development of the quasi-polar transform to segment closed-contour structures, including photoreceptors on AO scanning laser ophthalmoscopy images and retinal pigment epithelial cells on confocal microscopy images.
The GTDP framework was next applied in a clinical setting with pathologic images that are often lower in quality. Algorithms were developed to delineate morphological structures on OCT indicative of diseases such as age-related macular degeneration (AMD) and diabetic macular edema (DME). The AMD algorithm was shown to be robust to poor image quality and was capable of segmenting both drusen and geographic atrophy. To account for the complex manifestations of DME, a novel kernel regression-based classification framework was developed to identify retinal layers and fluid-filled regions as a guide for GTDP segmentation.
The development of fast and accurate segmentation algorithms based on the GTDP framework has significantly reduced the time and resources necessary to conduct large-scale, multi-center clinical trials. This is one step closer towards the long-term goal of improving vision outcomes for ocular disease patients through personalized therapy. / Dissertation
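The core graph-theory-and-dynamic-programming step can be illustrated in miniature: treat each pixel as a node, let a low cost mark likely boundary pixels, and take the minimum-cost left-to-right path (a toy sketch, not the validated GTDP implementation):

```python
def segment_layer(cost):
    """Minimal GTDP-style cut: the layer boundary is the minimum-cost
    left-to-right path through the cost image, moving at most one row
    per column (a dynamic-programming shortest path)."""
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            prev = min(range(max(0, r - 1), min(rows, r + 2)),
                       key=lambda p: acc[p][c - 1])
            acc[r][c] = cost[r][c] + acc[prev][c - 1]
            back[r][c] = prev
    # trace back from the cheapest endpoint in the last column
    r = min(range(rows), key=lambda p: acc[p][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]  # row index of the boundary in each column

# a toy cost image: the middle row (a hypothetical dark retinal layer) is cheap
dark = [[9, 9, 9, 9],
        [1, 2, 1, 1],
        [9, 9, 9, 9]]
print(segment_layer(dark))  # [1, 1, 1, 1]
```

In the full framework the costs come from image gradients, the allowed moves from anatomical constraints, and the same machinery is reused across layers and, via the quasi-polar transform, closed contours.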
|