1331. Optimizing Extremal Eigenvalues of Weighted Graph Laplacians and Associated Graph Realizations
Reiß, Susanna, 17 July 2012
This thesis deals with optimizing extremal eigenvalues of weighted graph Laplacian matrices. The Laplacian matrix of a (weighted) graph is of particular importance in spectral graph theory and combinatorial optimization (e.g., graph partitioning problems such as max-cut and graph bipartition). In particular, the pioneering work of M. Fiedler investigates extremal eigenvalues of weighted graph Laplacians and provides close connections to the node- and edge-connectivity of a graph. Motivated by Fiedler, Göring et al. investigated further connections between structural properties of the graph and the eigenspace of the second smallest eigenvalue of the weighted graph Laplacian using a semidefinite optimization approach.
By redistributing the edge weights of a graph, the following three optimization problems are studied in this thesis: maximizing the second smallest eigenvalue (based on the mentioned work of Göring et al.), minimizing the maximum eigenvalue, and minimizing the difference between the maximum and the second smallest eigenvalue of the weighted Laplacian. In all three problems, a semidefinite optimization formulation allows one to interpret the corresponding semidefinite dual as a graph realization problem: each node of the graph is assigned a vector in Euclidean space, subject to constraints that depend on the problem under consideration.
Optimal realizations are investigated and connections to the eigenspaces of corresponding optimized eigenvalues are established.
Furthermore, optimal realizations are closely linked to the separator structure of the graph. Depending on this structure, folding properties of optimal realizations are characterized on the one hand, and the existence of optimal realizations of bounded dimension is proven on the other; the general bounds depend on the tree-width of the graph. In the case of minimizing the maximum eigenvalue, bipartite graphs form an important family, as an optimal one-dimensional realization can be constructed for them.
When the symmetry of the graph is taken into account, a particular optimal edge weighting can be shown to exist.
For the coupled problem, i.e., minimizing the difference between the maximum and the second smallest eigenvalue, and the single problems, i.e., minimizing the maximum eigenvalue and maximizing the second smallest eigenvalue, connections between the feasible (optimal) sets are established.
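For reference, the quantities optimized above can be computed directly for any fixed edge weighting. Below is a minimal Python sketch (assuming numpy and networkx are available; the graph and the uniform weights are arbitrary illustrations, not taken from the thesis) that builds the weighted Laplacian and reports its second smallest and largest eigenvalues.

```python
import networkx as nx
import numpy as np

# Example graph with an arbitrary nonnegative edge weighting (illustrative only).
G = nx.cycle_graph(6)
weights = {e: 1.0 for e in G.edges()}          # redistributing these is the optimization
nx.set_edge_attributes(G, weights, "weight")

L = nx.laplacian_matrix(G, weight="weight").toarray()
eigvals = np.linalg.eigvalsh(L)                # sorted ascending, eigvals[0] is ~0

lambda_2 = eigvals[1]                          # algebraic connectivity (maximized)
lambda_max = eigvals[-1]                       # largest eigenvalue (minimized)
print(f"lambda_2 = {lambda_2:.4f}, lambda_max = {lambda_max:.4f}, "
      f"gap = {lambda_max - lambda_2:.4f}")
```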
1332. Community-Based Intrusion Detection
Weigert, Stefan, 11 April 2016
Today, virtually every company worldwide is connected to the Internet. This widespread connectivity has given rise to sophisticated, targeted, Internet-based attacks. For example, between 2012 and 2013 security researchers counted an average of about 74 targeted attacks per day. These attacks are motivated by economic, financial, or political interests and are commonly referred to as “Advanced Persistent Threat” (APT) attacks. Unfortunately, many of these attacks are successful, and the adversaries manage to steal important data or disrupt vital services. The preferred victims are companies from vital industries, such as banks, defense contractors, or power plants. Given that these industries are well protected, often employing a team of security specialists, the question is: how can these attacks be so successful?
Researchers have identified several properties of APT attacks which make them so effective. First, they are adaptable: they can change the way they attack and the tools they use for this purpose at any given moment in time. Second, they conceal their actions and communication, for example by using encryption. This renders many defense systems useless, as such systems assume complete access to the actual communication content. Third, their actions are stealthy, either by keeping communication to the bare minimum or by mimicking legitimate users. This lets them “fly below the radar” of defense systems that check for anomalous communication. And finally, with the goal of increasing their impact or monetisation prospects, their attacks are targeted against several companies from the same industry. Since months can pass between the first attack, its detection, and a comprehensive analysis, it is often too late to deploy appropriate counter-measures at the victim's business peers. Instead, it is much more likely that they have already been attacked successfully.
This thesis addresses the question of whether the last property (industry-wide attacks) can be used to detect such attacks. It presents the design, implementation, and evaluation of a community-based intrusion detection system capable of protecting businesses at industry scale. The contributions of this thesis are as follows. First, it presents a novel algorithm for community detection which can detect an industry (e.g., the energy, financial, or defense industry) in Internet communication. Second, it demonstrates the design, implementation, and evaluation of a distributed graph mining engine that is able to scale with the throughput of the input data while maintaining an end-to-end latency for updates in the range of a few milliseconds. Third, it illustrates the usage of this engine to detect APT attacks against industries by analyzing IP flow information from an Internet service provider.
Finally, it introduces an algorithm- and input-agnostic intrusion detection engine which supports not only intrusion detection on IP flows but also any other intrusion detection algorithm and data source.
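To illustrate the community-detection step in spirit only, here is a hedged Python sketch using a generic modularity-based method from networkx in place of the thesis's own algorithm; the toy communication graph and its node names (bank_a, util_a, ...) are hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy communication graph: nodes are company networks, an edge means the two
# endpoints exchanged traffic (hypothetical example, not the thesis data set).
edges = [
    ("bank_a", "bank_b"), ("bank_a", "bank_c"), ("bank_b", "bank_c"),
    ("util_a", "util_b"), ("util_b", "util_c"), ("util_a", "util_c"),
    ("bank_c", "util_a"),                      # sparse cross-industry traffic
]
G = nx.Graph(edges)

# Generic modularity-based community detection as a stand-in for the thesis's
# algorithm: densely connected groups are candidate "industries".
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```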
1333. Algorithmic and Graph-Theoretic Approaches for Optimal Sensor Selection in Large-Scale Systems
Lintao Ye, 15 December 2020
Using sensor measurements to estimate the states and parameters of a system is a fundamental task in understanding the behavior of the system. Moreover, as modern systems grow rapidly in scale and complexity, it is not always possible to deploy sensors to measure all of the states and parameters of the system, due to cost and physical constraints. Therefore, selecting an optimal subset of all the candidate sensors to deploy and gather measurements of the system is an important and challenging problem. In addition, the systems may be targeted by external attackers who attempt to remove or destroy the deployed sensors. This further motivates the formulation of resilient sensor selection strategies. In this thesis, we address the sensor selection problem under different settings as follows.

First, we consider the optimal sensor selection problem for linear dynamical systems with stochastic inputs, where the Kalman filter is applied based on the sensor measurements to give an estimate of the system states. The goal is to select a subset of sensors under certain budget constraints such that the trace of the steady-state error covariance of the Kalman filter with the selected sensors is minimized. We characterize the complexity of this problem by showing that the Kalman filtering sensor selection problem is NP-hard and cannot be approximated within any constant factor in polynomial time for general systems. We then consider the optimal sensor attack problem for Kalman filtering. The Kalman filtering sensor attack problem is to attack a subset of selected sensors under certain budget constraints in order to maximize the trace of the steady-state error covariance of the Kalman filter with sensors after the attack. We show that the same results as the Kalman filtering sensor selection problem also hold for the Kalman filtering sensor attack problem. Having shown that the general sensor selection and sensor attack problems for Kalman filtering are hard to solve, our next step is to consider special classes of the general problems. Specifically, we consider the underlying directed network corresponding to a linear dynamical system and investigate the case when there is a single node of the network that is affected by a stochastic input. In this setting, we show that the corresponding sensor selection and sensor attack problems for Kalman filtering can be solved in polynomial time. We further study the resilient sensor selection problem for Kalman filtering, where the problem is to find a sensor selection strategy under sensor selection budget constraints such that the trace of the steady-state error covariance of the Kalman filter is minimized after an adversary removes some of the deployed sensors. We show that the resilient sensor selection problem for Kalman filtering is NP-hard, and provide a pseudo-polynomial-time algorithm to solve it optimally.

Next, we consider the sensor selection problem for binary hypothesis testing. The problem is to select a subset of sensors under certain budget constraints such that a certain metric of the Neyman-Pearson (resp., Bayesian) detector corresponding to the selected sensors is optimized. We show that this problem is NP-hard if the objective is to minimize the miss probability (resp., error probability) of the Neyman-Pearson (resp., Bayesian) detector. We then consider three optimization objectives based on the Kullback-Leibler distance, J-Divergence and Bhattacharyya distance, respectively, in the hypothesis testing sensor selection problem, and provide performance bounds on greedy algorithms when applied to the sensor selection problem associated with these optimization objectives.

Moving beyond the binary hypothesis setting, we also consider the setting where the true state of the world comes from a set that can have cardinality greater than two. A Bayesian approach is then used to learn the true state of the world based on the data streams provided by the data sources. We formulate the Bayesian learning data source selection problem under this setting, where the goal is to minimize the cost spent on the data sources such that the learning error is within a certain range. We show that the Bayesian learning data source selection problem is also NP-hard, and provide greedy algorithms with performance guarantees.

Finally, in light of the COVID-19 pandemic, we study the parameter estimation measurement selection problem for epidemics spreading in networks. Here, the measurements (with certain costs) are collected by conducting virus and antibody tests on the individuals in the epidemic spread network. The goal of the problem is then to optimally estimate the parameters (i.e., the infection rate and the recovery rate of the virus) in the epidemic spread network, while satisfying the budget constraint on collecting the measurements. Again, we show that the measurement selection problem is NP-hard, and provide approximation algorithms with performance guarantees.
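As a rough illustration of the Kalman filtering sensor selection objective (not of the thesis's algorithms, which address hardness, special network structures, and resilience), the following Python sketch greedily picks sensors that minimize the trace of the steady-state error covariance obtained from the discrete algebraic Riccati equation; the system matrices are random placeholders and scipy is assumed.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_error_trace(A, C_sel, Q, R_sel):
    """Trace of the steady-state (prediction) error covariance of the Kalman
    filter using the selected rows of C; +inf if the Riccati solve fails."""
    if C_sel.shape[0] == 0:
        return np.inf
    try:
        P = solve_discrete_are(A.T, C_sel.T, Q, R_sel)
        return np.trace(P)
    except (np.linalg.LinAlgError, ValueError):
        return np.inf

# Small illustrative system (placeholder, not from the thesis).
rng = np.random.default_rng(0)
n, m, budget = 4, 6, 2                      # states, candidate sensors, budget
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
C = rng.standard_normal((m, n))             # one row per candidate sensor
Q = np.eye(n)                               # process noise covariance
R = np.eye(m)                               # sensor noise covariance

selected = []
for _ in range(budget):                     # plain greedy selection
    best, best_cost = None, np.inf
    for s in set(range(m)) - set(selected):
        idx = selected + [s]
        cost = steady_state_error_trace(A, C[idx, :], Q, R[np.ix_(idx, idx)])
        if cost < best_cost:
            best, best_cost = s, cost
    selected.append(best)
print("greedy selection:", selected, "trace:", round(best_cost, 3))
```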
1334. Unraveling the Structure and Assessing the Quality of Protein Interaction Networks with Power Graph Analysis
Royer, Loic, 11 October 2010
Molecular biology has entered an era of systematic and automated experimentation. High-throughput techniques have moved biology from small-scale experiments focused on specific genes and proteins to genome- and proteome-wide screens. One result of this endeavor is the compilation of complex networks of interacting proteins. Molecular biologists hope to understand life's complex molecular machines by studying these networks. This thesis addresses three open problems centered upon their analysis and quality assessment.
First, we introduce power graph analysis as a novel approach to the representation and visualization of biological networks. Power graph analysis is a graph-theoretic approach to the lossless and compact representation of complex networks: it groups edges into cliques and bicliques, and nodes into a neighborhood hierarchy. We demonstrate power graph analysis on five examples and show its advantages over traditional network representations. Moreover, we evaluate the algorithm's performance on a benchmark, test its robustness to noise, and measure its empirical time complexity at O(e^1.71), i.e., sub-quadratic in the number of edges e.
Second, we tackle the difficult and controversial problem of data quality in protein interaction networks. We propose a novel measure for the accuracy and completeness of genome-wide protein interaction networks based on network compressibility. We validate this new measure by i) verifying the detrimental effect of false positives and false negatives, ii) showing that gold-standard networks are highly compressible, iii) showing that the authors' choice of confidence thresholds is consistent with high network compressibility, iv) presenting evidence that compressibility is correlated with co-expression, co-localization and shared function, and v) showing that complete and accurate networks of complex systems in other domains exhibit levels of compressibility similar to those of current high-quality interactomes.
Third, we apply power graph analysis to networks derived from text-mining as well as to gene expression microarray data. In particular, we present i) the network-based analysis of genome-wide expression profiles of the neuroectodermal conversion of mesenchymal stem cells, ii) the analysis of regulatory modules in a rare mitochondrial cytopathy, Mitochondrial Encephalomyopathy, Lactic Acidosis, and Stroke-like episodes (MELAS), and iii) an investigation of the biochemical causes behind the enhanced biocompatibility of tantalum compared with titanium.
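A minimal Python illustration of the compression idea behind power graphs (not the thesis's algorithm): a biclique with many individual edges can be represented by a single power edge between two power nodes, and the resulting edge reduction is one intuition behind the compressibility measure; the graph below is a toy example.

```python
import networkx as nx

# Toy illustration: a biclique K_{3,4} has 3*4 = 12 individual edges, but it
# can be drawn as a single "power edge" between two power nodes (the sides).
G = nx.complete_bipartite_graph(3, 4)
side_a = {n for n, d in G.nodes(data=True) if d["bipartite"] == 0}
side_b = set(G) - side_a

plain_edges = G.number_of_edges()      # 12 edges in the ordinary representation
power_edges = 1                        # one power edge between two power nodes
reduction = 1 - power_edges / plain_edges
print(f"power nodes: {sorted(side_a)} and {sorted(side_b)}")
print(f"edges: {plain_edges} -> power edges: {power_edges} "
      f"({reduction:.0%} edge reduction)")
```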
1335. Data Science Approaches on Brain Connectivity: Communication Dynamics and Fingerprint Gradients
Uttara Vinay Tipnis, 07 May 2021
The innovations in Magnetic Resonance Imaging (MRI) in recent decades have given rise to large open-source datasets. MRI affords researchers the ability to look at both structure and function of the human brain. This dissertation makes use of one of these large open-source datasets, the Human Connectome Project (HCP), to study structural and functional connectivity in the brain.

Communication processes within the human brain at different cognitive states are neither well understood nor completely characterized. We assess communication processes in the human connectome using an ant-colony-inspired cooperative learning algorithm, starting from a source with no a priori information about the network topology and cooperatively searching for the target through a pheromone-inspired model. This framework relies on two parameters, namely pheromone and edge perception, to define the cognizance and subsequent behaviour of the ants on the network and the communication processes happening between source and target. Simulations with different configurations allow the identification of path-ensembles that are involved in the communication between node pairs. In order to assess the different communication regimes displayed in the simulations and their associations with functional connectivity, we introduce two network measurements, effective path-length and arrival rate. These measurements are tested as individual and combined descriptors of functional connectivity during different tasks. Finally, different communication regimes are found in different specialized functional networks. This framework may be used as a test-bed for different communication regimes on top of an underlying topology.

The assessment of brain fingerprints has emerged in recent years as an important tool to study individual differences. Studies so far have mainly focused on connectivity fingerprints between different brain scans of the same individual. We extend the concept of brain connectivity fingerprints beyond test/retest and assess fingerprint gradients in young adults by developing an extension of the differential identifiability framework. To do so, we look at the similarity not only between the multiple scans of an individual (subject fingerprint), but also between the scans of monozygotic and dizygotic twins (twin fingerprint). We have carried out this analysis on the 8 fMRI conditions present in the Human Connectome Project Young Adult (HCP-YA) dataset, which we processed into functional connectomes (FCs) and time series parcellated according to the Schaefer atlas scheme, which has multiple levels of resolution. Our differential identifiability results show that the fingerprint gradients based on genetic and environmental similarities are indeed present when comparing FCs for all parcellations and fMRI conditions. Importantly, only when assessing optimally reconstructed FCs do we fully uncover fingerprints present in higher-resolution atlases. We also study the effect of scanning length on the subject fingerprint of resting-state FCs to analyze the effect of scanning length and parcellation. In the pursuit of open science, we have also made the processed and parcellated FCs and time series for all conditions for ~1200 subjects of the HCP-YA dataset available to the scientific community.

Lastly, we have estimated the effect of genetics and environment on the original and optimally reconstructed FCs with an ACE model.
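As a small illustration of the differential identifiability idea that the fingerprint-gradient framework extends, the following Python sketch computes Idiff from test/retest functional connectomes; the input arrays here are random stand-ins, not HCP data.

```python
import numpy as np

def differential_identifiability(fc_test, fc_retest):
    """Idiff: mean self-similarity (same subject across sessions) minus mean
    similarity across different subjects, computed from the identifiability
    matrix of correlations between test and retest connectomes.
    fc_test, fc_retest: arrays of shape (n_subjects, n_edges)."""
    n = fc_test.shape[0]
    ident = np.corrcoef(fc_test, fc_retest)[:n, n:]   # identifiability matrix
    i_self = np.mean(np.diag(ident))                  # same subject, both sessions
    i_others = np.mean(ident[~np.eye(n, dtype=bool)]) # different subjects
    return 100.0 * (i_self - i_others)

# Hypothetical stand-in data: 20 subjects, 100 FC edges, with a subject-specific
# component so test and retest scans of the same subject resemble each other.
rng = np.random.default_rng(1)
subject_part = rng.standard_normal((20, 100))
fc_test = subject_part + 0.5 * rng.standard_normal((20, 100))
fc_retest = subject_part + 0.5 * rng.standard_normal((20, 100))
print(f"Idiff = {differential_identifiability(fc_test, fc_retest):.2f}")
```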
1336. Design of robust networks: application to the design of wind farm cabling networks
Ridremont, Thomas, 09 April 2019
Nowadays, network design has become a crucial problem that appears in many fields such as transport or energy. In particular, it has become necessary to optimize the way in which networks used to produce, collect, or transport energy are designed. This thesis focuses on electricity produced by wind farms; wind power appears more than ever to be a good alternative to electricity production by thermal or nuclear power plants. We study the design of the cabling network that collects and routes the energy from the wind turbines to a sub-station linking the wind farm to the electrical grid. In this problem, we know the location of each wind turbine of the farm and that of the sub-station, as well as the locations of possible interconnection nodes at which different cables can be joined. Each wind turbine produces a known quantity of energy, and each cable has an associated cost and capacity (the maximum amount of energy that can be routed through it). The optimization problem we consider is to select a set of cables of minimum cost such that the energy produced by the wind turbines can be routed to the sub-station in the network induced by this set of cables, without exceeding the capacity of any cable. We focus in particular on cabling networks that are resilient to a certain number of breakdowns.
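As a simplified illustration of the routing side of the problem, the following Python sketch (networkx assumed) solves a small min-cost flow instance from turbines to a sub-station; it deliberately ignores the fixed cable-installation costs and the resilience requirement that make the thesis's problem hard, and all node names and numbers are hypothetical.

```python
import networkx as nx

# Simplified cabling model as a min-cost flow (a sketch only).
G = nx.DiGraph()
turbines = ["t1", "t2", "t3"]
for t in turbines:
    G.add_node(t, demand=-1)                 # each turbine supplies 1 unit
G.add_node("hub", demand=0)                  # interconnection node
G.add_node("substation", demand=len(turbines))

# add_edge(u, v, capacity=..., weight=cost per unit of routed energy)
G.add_edge("t1", "hub", capacity=2, weight=3)
G.add_edge("t2", "hub", capacity=2, weight=2)
G.add_edge("t3", "substation", capacity=1, weight=6)
G.add_edge("hub", "substation", capacity=3, weight=4)

flow = nx.min_cost_flow(G)                   # dict of dicts: flow on each cable
cost = nx.cost_of_flow(G, flow)
print("routing:", flow)
print("total routing cost:", cost)
```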
1337. Manage and analyze data graphs of Named Entities
Bernard, Jocelyn, 06 June 2019
In this thesis we study graph problems, both theoretical problems of pattern search and applied problems of information diffusion. We propose two theoretical studies on the detection and enumeration of dense subgraphs, namely cliques and quasi-cliques, and an applied study on the propagation of information in a graph of named entities. First, we study the search for cliques in compressed graphs. Maximal clique enumeration (MCE) and the maximum clique problem (MCP) arise in the analysis of data graphs. These problems are difficult to solve (NP-hard for MCE and NP-complete for MCP), and adapted solutions must be designed for large graphs. We propose to solve them by working on a compressed version of the initial graph, and we show the good results obtained by our method for the enumeration of maximal cliques on compressed graphs. Second, we study the enumeration of maximal quasi-cliques. We propose a distributed algorithm that enumerates the set of maximal quasi-cliques of the graph, as well as a heuristic that lists a set of quasi-cliques more quickly. We show the interest of enumerating these quasi-cliques by evaluating relations through the co-occurrence of nodes in the set of enumerated quasi-cliques. Finally, we work on event diffusion in a graph of named entities. Many models exist to simulate the diffusion of rumors or diseases in social networks, or the propagation of bankruptcies in banking networks.
We address the diffusion of significant events in heterogeneous networks representing a global economic environment. We propose a diffusion problem, called the infection classification problem, which consists in determining which entities are affected by an event. To solve it, we propose two models inspired by the linear threshold model, to which we add different features. Finally, we test and validate our models on a set of events.
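For concreteness, here is a hedged Python sketch of the classical linear threshold model that the proposed models build on (the extended features of the thesis's models are not reproduced); the small named-entity graph and its weights are hypothetical.

```python
import networkx as nx

def linear_threshold_spread(G, seeds, thresholds):
    """Plain linear threshold diffusion: a node becomes active once the total
    weight of its active in-neighbors reaches its threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in G.nodes():
            if v in active:
                continue
            influence = sum(G[u][v].get("weight", 1.0)
                            for u in G.predecessors(v) if u in active)
            if influence >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# Hypothetical named-entity graph: an event hits "company_a" and propagates.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("company_a", "supplier_x", 0.6), ("company_a", "bank_y", 0.3),
    ("supplier_x", "company_b", 0.7), ("bank_y", "company_b", 0.4),
])
thresholds = {v: 0.5 for v in G.nodes()}
print(sorted(linear_threshold_spread(G, {"company_a"}, thresholds)))
```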
1338. Characterization of structures explored in molecular dynamics simulations
Bougueroua, Sana, 13 December 2017
This PhD is part of a transdisciplinary effort combining graph theory and computational chemistry. In molecular dynamics simulations, a molecular system can adopt different conformations over time, and one or several conformations can be explored along a trajectory, depending on the simulation time and the energy within the system. To obtain a good exploration of the molecular conformations, one must generate and analyse several trajectories (this can amount to thousands of trajectories). Our objective is to propose an automatic method that provides rapid and efficient analysis of the conformational dynamics explored over these trajectories. The trajectories of interest consist of the Cartesian coordinates of the atoms of the molecular system, recorded at regular time intervals (time steps); each set of positions recorded at one time step is called a snapshot. At each snapshot, our algorithm uses geometric rules (distances, angles, etc.) to compute the bonds (covalent bonds, hydrogen bonds, and any other kind of intermolecular criterion) formed between atoms, and thus obtains the mixed graph modelling one given conformation. Within our current definitions, a conformational change is characterized by a change either in the hydrogen bonds or in the covalent bonds; which choice is appropriate depends on the underlying physics and chemistry of interest. The algorithm provides all conformations explored along one or several trajectories, the period of time during which each conformation exists, and the graph of transitions that records all conformational changes observed during the trajectories. A user-friendly interface has been developed that can be distributed freely. The algorithm has been tested on three kinds of gas-phase molecular systems (peptides, ionic clusters) of increasing size and complexity, and can easily be adapted to other molecular systems as well as to condensed-matter systems. Although the theoretical complexity of the algorithm is exponential (isomorphism tests), the results show that it is fast in practice (a few seconds).
We have also worked on computationally low-cost graph methods to pre-characterize specific points on a potential energy surface, which describes the energy of a system in terms of the positions of its atoms. These points are the minima of the surface, representing the most stable conformations of a molecular system, and the maxima, representing transition states between two conformers.
Our methods aim at finding these points without the prerequisite calculation of the potential energy surface by quantum chemistry methods (or even by classical representations), which saves computational time and effort. We propose an alternative method using ad hoc measures based on properties of the graphs introduced in the first part of the thesis, without any knowledge of the energy and without molecular calculations. These measures yield the possible conformations with a realistic energy ranking, as well as the transition states, at very low computational cost. The possible conformations and the allowed transitions form, respectively, the vertices and the arcs of what we call the "graph of possibles". The model assumes that only hydrogen bonds change between conformations and that the number of hydrogen bonds present in the system determines its energetic cost. This prediction model has been tested on isolated gas-phase peptides.
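A minimal Python sketch of the graph-construction idea, under a simplifying assumption: bonds are detected from a pure distance cutoff, whereas the actual algorithm also uses angle criteria and distinguishes covalent bonds, hydrogen bonds and electrostatic interactions in a mixed graph. The coordinates and cutoffs below are hypothetical.

```python
import numpy as np
import networkx as nx

def conformation_graph(symbols, coords, cutoffs):
    """Build a bond graph for one snapshot from a distance-based rule only.
    symbols: element of each atom; coords: (n_atoms, 3) Cartesian positions;
    cutoffs: {frozenset({el1, el2}): max distance in angstroms}."""
    G = nx.Graph()
    G.add_nodes_from(range(len(symbols)))
    for i in range(len(symbols)):
        for j in range(i + 1, len(symbols)):
            cutoff = cutoffs.get(frozenset({symbols[i], symbols[j]}))
            if cutoff and np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                G.add_edge(i, j)
    return G

# Two hypothetical snapshots of a tiny O-H...O fragment (positions in angstroms).
symbols = ["O", "H", "O"]
snap_a = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [2.7, 0.0, 0.0]])
snap_b = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [4.5, 0.0, 0.0]])
cutoffs = {frozenset({"O", "H"}): 2.2}       # crude covalent/H-bond criterion

g_a = conformation_graph(symbols, snap_a, cutoffs)
g_b = conformation_graph(symbols, snap_b, cutoffs)
# A conformational change is flagged when the bond sets differ between snapshots.
print("conformational change:", set(g_a.edges()) != set(g_b.edges()))
```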
1339. Brain Connectivity Networks for the Study of Nonlinear Dynamics and Phase Synchrony in Epilepsy
Rajaei, Hoda, 09 October 2018
Assessing complex brain activity as a function of the type of epilepsy and in the context of the 3D source of seizure onset remains a critical and challenging endeavor. In this dissertation, we extract attributes of the epileptic brain by looking at the modular interactions in scalp electroencephalography (EEG). A classification algorithm is proposed for the connectivity-based separation of interictal epileptic EEG from normal EEG. Connectivity patterns of interictal epileptic discharges were investigated in different types of epilepsy, and the relation between these patterns and the epileptogenic zone was also explored in focal epilepsy.
A nonlinear recurrence-based method is applied to scalp EEG recordings to obtain connectivity maps using phase-synchronization attributes. The pairwise connectivity measure is obtained from time-domain data without any conversion to the frequency domain. The phase-coupling value, which indicates the broadband interdependence of the input data, is used for a graph-theoretic assessment of local and global connectivity.
The method is applied to a population of pediatric individuals to delineate the epileptic cases from normal controls. A probabilistic approach demonstrated a significant difference between the two groups by successfully separating the individuals with an accuracy of 92.8%. The investigation of connectivity patterns of interictal epileptic discharges (IEDs) originating from focal and generalized seizures revealed a significant difference in the connectivity matrices. It was observed that the functional connectivity maps of focal IEDs showed local activity, while generalized cases showed globally activated areas. The investigation of connectivity maps from individuals with temporal lobe epilepsy showed the temporal and frontal areas to be the most affected regions.
In general, functional connectivity measures are higher-order attributes that helped the delineation of epileptic individuals in the classification process. The functional connectivity patterns of interictal activities can hence serve as indicators of the seizure type and can also identify the irritated regions in focal epilepsy. These findings can enhance the diagnosis process with respect to the type of epilepsy and the effects of the relative location of the 3D source of seizure onset on other brain areas.
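To make the idea of a phase-synchrony connectivity matrix concrete, here is a hedged Python sketch using the common Hilbert-transform phase-locking value as a stand-in for the dissertation's recurrence-based broadband coupling measure; the three-channel signals are synthetic, not patient EEG.

```python
import numpy as np
from scipy.signal import hilbert

def plv_matrix(eeg):
    """Pairwise phase-locking values between channels; eeg has shape
    (n_channels, n_samples). A generic phase-synchrony measure used here
    only as an illustration of building a connectivity matrix."""
    phases = np.angle(hilbert(eeg, axis=1))          # instantaneous phases
    n = eeg.shape[0]
    plv = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            diff = phases[i] - phases[j]
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * diff)))
    return plv

# Hypothetical toy EEG: three channels, two of them phase-coupled at 10 Hz.
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
ch2 = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.standard_normal(t.size)
ch3 = rng.standard_normal(t.size)                    # unrelated channel
connectivity = plv_matrix(np.vstack([ch1, ch2, ch3]))
print(np.round(connectivity, 2))                     # thresholding would give a graph
```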
1340. On Graph Embeddings and a new Minor Monotone Graph Parameter associated with the Algebraic Connectivity of a Graph
Wappler, Markus, 30 May 2013
We consider the problem of maximizing the second smallest eigenvalue of the weighted Laplacian of a (simple) graph over all nonnegative edge weightings with bounded total weight.
We generalize this problem by introducing node significances and edge lengths.
We give a formulation of this generalized problem as a semidefinite program.
The dual program can be equivalently written as an embedding problem: find an embedding of the n nodes of the graph in n-space so that their barycenter is at the origin, the distance between adjacent nodes is bounded by the respective edge length, and the embedded nodes are spread as much as possible (the sum of the squared norms is maximized).
We prove the following necessary condition for optimal embeddings. For any separator of the graph, at least one of the components fulfills the following property: each straight-line segment between the origin and an embedded node of the component intersects the convex hull of the embedded nodes of the separator.
There always exists an optimal embedding of the graph whose dimension is bounded by the tree-width of the graph plus one.
We define the rotational dimension of a graph. This is the minimal dimension k such that for all choices of the node significances and edge lengths an optimal embedding of the graph can be found in k-space.
The rotational dimension of a graph is a minor monotone graph parameter.
We characterize the graphs with rotational dimension up to two.
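For the basic setting (unit node significances and edge lengths), the semidefinite program can be sketched as follows in Python with cvxpy (assumed available together with an SDP-capable solver such as SCS); the example graph is arbitrary. The PSD constraint enforces that the second smallest eigenvalue of the weighted Laplacian is at least t on the subspace orthogonal to the all-ones vector.

```python
import numpy as np
import networkx as nx
import cvxpy as cp

# Sketch of the SDP: maximize the algebraic connectivity of the weighted
# Laplacian over nonnegative edge weights with total weight 1.
G = nx.petersen_graph()                      # arbitrary example graph
n, m = G.number_of_nodes(), G.number_of_edges()
B = nx.incidence_matrix(G, oriented=True).toarray()   # n x m incidence matrix

w = cp.Variable(m, nonneg=True)              # edge weights
t = cp.Variable()                            # lower bound on lambda_2
L = B @ cp.diag(w) @ B.T                     # weighted Laplacian L(w)
P = np.eye(n) - np.ones((n, n)) / n          # projector orthogonal to all-ones

prob = cp.Problem(cp.Maximize(t), [cp.sum(w) == 1, L - t * P >> 0])
prob.solve()
print("optimal lambda_2:", round(prob.value, 4))
print("optimal weights:", np.round(w.value, 3))
```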
Contents:
1 Introduction
1.1 Notations and Preliminaries
1.2 The Algebraic Connectivity
1.3 Two applications
1.4 Outline
2 The Embedding Problem
2.1 Semidefinite formulation
2.2 The dual as geometric embedding problem
2.3 Physical interpretation and examples
2.4 Formulation without fixed barycenter
3 Geometrical Operations
3.1 Congruent transformations
3.2 Folding a flat halfspace
3.3 Folding and Collapsing
4 Structural properties of optimal embeddings
4.1 Separator-Shadow
4.2 Separators containing the origin
4.3 The tree-width bound
4.4 Application to trees
5 The Rotational Dimension of a graph
5.1 Definition and basic properties
5.2 Characterization of graphs with small rotational dimension
5.3 The Colin de Verdière graph parameter
List of Figures
Bibliography
Theses