31

Exploring the Viability of PageRank for Attack Graph Analysis and Defence Prioritization

Dypbukt Källman, Marcus January 2023
In today's digital world, cybersecurity is becoming increasingly critical. Essential services that we depend on every day, such as finance, transportation, and healthcare, all rely on complex networks and computer systems. As these systems and networks become larger and more complex, it becomes increasingly challenging to identify and protect against potential attacks. This thesis addresses the problem of efficiently analysing large attack graphs and prioritizing defences in the field of cybersecurity. The research question guiding the study is whether PageRank, originally designed for ranking the importance of web pages, can be extended with additional parameters to effectively analyse large vulnerability-based attack graphs. To address this question, a modified version of the PageRank algorithm is proposed that takes into account additional parameters present in attack graphs, such as Time-To-Compromise values. The proposed algorithm is evaluated on various attack graphs to assess its accuracy, efficiency, and scalability. The evaluation shows that the algorithm exhibits relatively short running times even for larger attack graphs, demonstrating its efficiency and scalability, and that it achieves a reasonably high level of accuracy when compared with an optimal defence selection, showing its ability to identify vulnerable nodes within attack graphs. In conclusion, the study demonstrates that PageRank is a viable option for the security analysis of attack graphs: the proposed algorithm shows promise in efficiently and accurately analysing large-scale attack graphs, providing valuable insight for threat identification and defence prioritization.
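To make the idea concrete, here is a minimal sketch of one way Time-To-Compromise values could bias a PageRank computation, by turning inverse TTC into a personalization vector. The toy graph, the TTC values, the inverse-TTC weighting, and the use of networkx are illustrative assumptions, not the thesis's exact algorithm.

```python
import networkx as nx

def ttc_weighted_pagerank(attack_graph, ttc, alpha=0.85):
    """Rank attack-graph nodes with PageRank biased by Time-To-Compromise.

    Nodes that are faster to compromise (low TTC) receive more restart mass,
    so the random walk concentrates on easily reachable attack steps.
    """
    inv = {n: 1.0 / ttc[n] for n in attack_graph.nodes}  # lower TTC -> higher weight
    total = sum(inv.values())
    personalization = {n: w / total for n, w in inv.items()}
    return nx.pagerank(attack_graph, alpha=alpha, personalization=personalization)

# Hypothetical attack graph: edges point from a foothold to the steps it enables.
g = nx.DiGraph([("phish", "workstation"), ("workstation", "db"), ("vpn", "db")])
ttc = {"phish": 2.0, "workstation": 5.0, "db": 30.0, "vpn": 10.0}
ranks = ttc_weighted_pagerank(g, ttc)
print(sorted(ranks, key=ranks.get, reverse=True))  # candidate defence priorities
```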
32

Graph-based semi-supervised learning methods and quick detection of central nodes

Sokol, Marina 29 April 2014
Semi-supervised learning methods constitute a category of machine learning methods which use labelled points together with unlabelled data to tune the classifier. The main idea of these methods is that the classification function should change smoothly over a similarity graph. In the first part of the thesis, we propose a generalized optimization approach for graph-based semi-supervised learning which yields, as particular cases, the Standard Laplacian, Normalized Laplacian, and PageRank based methods. Using random walk theory, we provide insights into the differences among the graph-based semi-supervised learning methods and give recommendations for the choice of the kernel parameters and of the labelled points. We illustrate all theoretical results with synthetic and real data; as one example of real data we consider the classification of content and users in P2P systems. This application demonstrates that the proposed family of methods scales very well with the volume of data. The second part of the thesis is devoted to the quick detection of network central nodes. The algorithms developed there can be applied to the selection of quality labelled data, but also have other applications in information retrieval. Specifically, we propose random walk based algorithms for the quick detection of large-degree nodes and of nodes with large values of Personalized PageRank. Finally, we suggest a new centrality measure which generalizes both the current flow betweenness centrality and PageRank. This new measure is particularly well suited for the detection of network vulnerability.
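The PageRank-based member of this family can be sketched as follows: run one personalized PageRank per class, restarted at that class's labelled points, and give each node the label of the walk that scores it highest. This is a minimal sketch of the general approach; the similarity graph, the seed nodes, and the uniform restart weights are assumptions, and the kernel parameters of the thesis's general formalism are omitted.

```python
import networkx as nx

def pagerank_ssl(graph, seeds, alpha=0.85):
    """Graph-based semi-supervised labelling via per-class personalized PageRank.

    `seeds` maps class -> list of labelled nodes. Each node receives the class
    whose restarted random walk assigns it the highest score.
    """
    scores = {}
    for cls, nodes in seeds.items():
        restart = {n: 1.0 / len(nodes) for n in nodes}
        scores[cls] = nx.pagerank(graph, alpha=alpha, personalization=restart)
    return {v: max(scores, key=lambda c: scores[c][v]) for v in graph}

g = nx.karate_club_graph()  # stand-in for a similarity graph
labels = pagerank_ssl(g, {"A": [0], "B": [33]})
```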
33

Complex systems and health systems, computational challenges

Liu, Zifan 11 February 2015
The eigenvalue equation intervenes in models of infectious disease propagation and could be used as an ally of vaccination campaigns in the actions carried out by health care organizations. Epidemiological modeling techniques can be considered, by analogy, as computer viral propagation, which depends on the status of the underlying graph at a given time. We point to PageRank as a method to study epidemic spread and consider its calculation in the context of the small-world phenomenon. A parallel implementation of the multiple implicitly restarted Arnoldi method (MIRAM) is proposed for calculating the dominant eigenpair of stochastic matrices derived from very large real networks. The high damping factor in this problem makes many existing algorithms less efficient, while MIRAM is promising. We also propose in this thesis a parallel graph generator that can be used to generate distributed synthesized networks displaying scale-free and small-world structures. This generator could also serve as a testbed for other graph-related algorithms. MIRAM is implemented within the Trilinos framework, targeting big data and sparse matrices representing scale-free networks, also known as power-law networks. A hypergraph partitioning approach is employed to minimize the communication overhead. The algorithm is tested on the French nationwide Grid'5000 cluster of clusters. Experiments on very large networks such as Twitter and Yahoo, with over 1 billion nodes, are conducted. With our parallel implementation, a speedup of 27x is achieved compared to the sequential solver.
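For reference, the computation MIRAM accelerates is the dominant-eigenvector problem for the Google matrix G = alpha*S + (1 - alpha)/n * J, where S is column-stochastic and J is the all-ones matrix. The simple power-iteration baseline below shows why a damping factor close to 1 is hard (the convergence rate degrades as alpha approaches 1), which is what motivates Krylov methods such as MIRAM. The 3x3 matrix is a toy example, not from the thesis.

```python
import numpy as np

def google_power_iteration(S, alpha=0.99, tol=1e-10, max_iter=100_000):
    """Power iteration for the dominant eigenvector of G = alpha*S + (1-alpha)/n.

    S is column-stochastic (dangling columns already patched). Convergence
    slows as alpha -> 1, the regime where Arnoldi-based methods pay off.
    """
    n = S.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * (S @ x) + (1.0 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

S = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
pi = google_power_iteration(S)  # stationary distribution / PageRank vector
```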
34

Coordinate descent methods

Santos, Luiz Gustavo de Moura dos 22 November 2017
Real-world problems in areas such as machine learning are known for their huge number of decision variables (> 10^6) and data volume. For problems at this scale, obtaining and working with second-order information is prohibitive. Such problems have properties that benefit the application of coordinate descent/minimization methods. This class of methods is characterized by changing only one, or a few, decision variables at each iteration. The variant commonly described in the literature is the cyclic minimization over variables. Recent results, however, suggest that randomized variants of the method have better convergence guarantees. In the randomized variant, the variable to be changed at each iteration is drawn according to a fixed, but not necessarily uniform, probability distribution. In this work we study several variations of the coordinate descent method. We present theoretical aspects of these methods, but focus on practical implementation aspects and on an experimental comparison of coordinate descent variants applied to different problems with real applications.
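A hedged sketch of the randomized variant described above, on a least-squares objective: at each iteration one coordinate is drawn from a fixed, non-uniform distribution (here proportional to the coordinate-wise Lipschitz constants, a common choice) and updated with step size 1/L_i. This is a generic textbook instance, not the specific variants compared in the dissertation.

```python
import numpy as np

def random_coordinate_descent(grad_i, lipschitz, x0, iters=10_000, seed=0):
    """Randomized coordinate descent with non-uniform sampling.

    grad_i(x, i) returns the i-th partial derivative; lipschitz[i] is the
    coordinate-wise Lipschitz constant, used both as the step size 1/L_i and
    to build the (non-uniform) sampling distribution.
    """
    x = x0.copy()
    probs = lipschitz / lipschitz.sum()
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        i = rng.choice(x.size, p=probs)      # sample one coordinate
        x[i] -= grad_i(x, i) / lipschitz[i]  # update only that coordinate
    return x

# Example: minimize 0.5*||Ax - b||^2, whose i-th partial is A[:, i] @ (A @ x - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = (A ** 2).sum(axis=0)  # coordinate-wise Lipschitz constants ||A[:, i]||^2
x_star = random_coordinate_descent(lambda x, i: A[:, i] @ (A @ x - b), L, np.zeros(2))
```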
35

A scientometric analysis of computer science subfields

Braga, Adriano Honorato 15 October 2013
Scientometric studies of the bibliographic production in specific areas of science have become common, especially in the last decade. Such investigations usually make use of bibliometric indices to evaluate the relevance of the actors that take part in the scientific production process: authors, institutions, venues, and subfields of the scientific area under consideration. Many studies have investigated scientific production in computer science from different viewpoints. This work presents an analysis of the production of scientific articles in computer science, together with an analysis of the citations among subfields derived from the article citation network. The work is novel not only because it covers many commonly recognized subfields of computer science, but also because it tracks citation-related measures chronologically. The following bibliometric measures were used: the number of publications in each subfield, the number of citations received by a subfield, impact factor, PageRank, and a measure of the diversity of the subfields that cite a given subfield. Most of these metrics were originally proposed to study articles, web pages, or scientific journals, and had to be adapted for subfield analysis. The work derives much information of interest to the computer science community. It presents the historical evolution of the computer science subfields, showing how the interest in publishing in each subfield and the citations among subfields have changed over the years. Some trends are revealed, some patterns are recognized as stable over time, and some subfields are shown to be becoming less attractive than others.
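Of the measures listed, the diversity index is the least standard; one plausible reading is a normalized entropy over the subfields that cite a given subfield, sketched below. The entropy formulation and the toy citation pairs are assumptions for illustration, not the dissertation's exact index.

```python
import math
from collections import defaultdict

def citation_diversity(citations):
    """Normalized entropy of who cites each subfield.

    `citations` is a list of (citing_subfield, cited_subfield) pairs. A value
    near 1 means a subfield is cited evenly by many subfields; near 0 means
    its citations come from a narrow group.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in citations:
        counts[dst][src] += 1
    diversity = {}
    for dst, srcs in counts.items():
        total = sum(srcs.values())
        probs = [c / total for c in srcs.values()]
        h = -sum(p * math.log(p) for p in probs)          # Shannon entropy
        diversity[dst] = h / math.log(len(srcs)) if len(srcs) > 1 else 0.0
    return diversity

pairs = [("AI", "DB"), ("OS", "DB"), ("NET", "DB"), ("AI", "AI"), ("AI", "HCI")]
print(citation_diversity(pairs))  # DB cited by three subfields -> high diversity
```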
37

Automatic extraction of malicious features and method for detecting malware in a real environment

Angoustures, Mark 14 December 2018
To cope with the large volume of malware, security researchers have developed automatic dynamic analysis tools such as the Cuckoo sandbox. This kind of analysis is only partially automatic, because it requires a human security expert to detect and extract suspicious behaviours. To avoid this tedious work, we propose a methodology to extract dangerous behaviours automatically. First, we generate activity reports for malware samples from the Cuckoo sandbox. Then, we group malware belonging to the same family using the AVclass algorithm, which aggregates the malware labels given by VirusTotal. We then weight the most distinctive behaviours of each malware family with the TF-IDF method, and finally aggregate malware families with similar behaviours using the LSA method. In addition, we detail a method to detect malware from the same type of behaviours found previously. Since this detection is performed in a real environment, we have developed probes capable of continuously generating traces of the behaviour of running programs. From these traces we build a graph that represents the tree of running programs together with their behaviours. This graph is updated incrementally as new traces are generated. To measure the dangerousness of programs, we run the topic-sensitive (personalized) PageRank algorithm on this graph each time it is updated. The algorithm ranks processes by dangerousness according to their suspicious behaviours. These scores are then plotted as a time series to visualize how the dangerousness score of each program evolves. Finally, we have developed several alert indicators for dangerous programs running on the system.
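The scoring step can be sketched with personalized PageRank restarted at nodes exhibiting suspicious behaviours, so processes tied to suspicious activity rank higher. The process/behaviour graph, the node names, and the restart weighting below are illustrative assumptions, not the thesis's exact construction.

```python
import networkx as nx

def dangerousness_scores(process_graph, suspicious, alpha=0.85):
    """Score running processes with personalized PageRank.

    The random walk restarts at nodes flagged with suspicious behaviours, so
    mass concentrates around processes linked to suspicious activity.
    """
    restart = {n: 1.0 for n in suspicious}  # networkx normalizes the weights
    return nx.pagerank(process_graph, alpha=alpha, personalization=restart)

# Toy process tree with behaviour edges; rebuilt incrementally in practice.
g = nx.DiGraph()
g.add_edges_from([
    ("explorer.exe", "winword.exe"),       # parent -> child process
    ("winword.exe", "powershell.exe"),
    ("powershell.exe", "registry_write"),  # process -> observed behaviour
])
scores = dangerousness_scores(g, suspicious=["registry_write"])
```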
38

Collection and analysis of data from open sources to develop a recommendation system in the field of tourism (master's thesis)

Крайнов, А. И., Krainov, A. I. January 2023
This thesis aimed to develop an effective recommendation system for tourist attractions based on graphs and machine learning algorithms. The main challenge was to create a system that can analyze a large set of tourist-attraction data extracted from Wikipedia. Using Wikipedia dumps containing information on millions of articles, a review of existing recommender systems and of machine learning methods used to provide recommendations in the field of tourism was performed. Specific categories of tourist attractions were then selected and used to build recommendation models. To process and analyze the data from Wikipedia, a modern technical stack was used, including Python, the networkx and pandas libraries for working with graphs and data, and the scikit-learn library for applying machine learning algorithms. In addition, the Streamlit framework was used to develop an interactive web interface. The work included the collection and preprocessing of data from Wikipedia, covering information about attractions, the connections between them, and their characteristics. Selected machine learning algorithms were applied to create a data graph from the downloaded and processed data. The PageRank algorithm was used to determine the importance of each attraction in the graph and to generate personalized recommendations. The demo user interface, developed with the Streamlit framework, allows users to interact with the system, enter queries about places, and receive personalized recommendations: a drop-down list selects the attraction for which recommendations are wanted, and a slider adjusts the number of recommendations returned.
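Since the abstract names networkx and PageRank explicitly, the core recommendation step can be sketched as personalized PageRank over the article-link graph, restarted at the attraction the user selects. The toy graph and the personalized-restart detail are assumptions; the thesis's exact pipeline may differ.

```python
import networkx as nx

def recommend(link_graph, attraction, k=5, alpha=0.85):
    """Recommend attractions related to `attraction` via personalized PageRank.

    `link_graph` is a directed graph of Wikipedia article links between
    attractions. Restarting the walk at the chosen attraction surfaces the
    pages most strongly connected to it.
    """
    scores = nx.pagerank(link_graph, alpha=alpha, personalization={attraction: 1.0})
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [a for a in ranked if a != attraction][:k]

g = nx.DiGraph([
    ("Hermitage", "Winter Palace"), ("Winter Palace", "Palace Square"),
    ("Palace Square", "Hermitage"), ("Hermitage", "Peterhof"),
])
print(recommend(g, "Hermitage", k=3))
```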
39

Google matrix analysis of Wikipedia networks

El zant, Samer 06 July 2018
This thesis focuses on the analysis of the directed network extracted from the hyperlink structure of Wikipedia. Our goal is to measure the interactions linking a subset of pages of the Wikipedia network. To that end, we propose to take advantage of a new matrix representation called the reduced Google matrix (GR). The reduced Google matrix is defined for a given subset of pages (i.e. a reduced network). As with the standard Google matrix, one component of GR captures the probability that two nodes of the reduced network are directly connected in the full network. A particularity of GR is the existence of another component that accounts for the probability that two nodes are indirectly connected through all possible paths of the entire network. The results of our case studies show that GR offers a reliable representation of direct and indirect (hidden) links. We show that the analysis of GR is complementary to PageRank analysis and can be exploited to study the influence of a link variation on the rest of the network structure. The case studies are based on Wikipedia networks from several language editions. The interactions within several groups of interest were studied in detail: painters, countries, and terrorist groups. For each study, a reduced network was built, and the direct and indirect interactions were analysed and confronted with historical, geopolitical, or scientific facts. A sensitivity analysis was performed in order to understand the influence of the links in each group on other nodes (e.g. the countries in our case). Our analysis shows that valuable interactions can be extracted between painters, countries, and terrorist groups. For example, in the painter network derived from GR, artists are clustered by the major movements in the history of painting. The well-known interactions between the major EU countries, and worldwide, are also highlighted in our results. Likewise, the network of terrorist groups exhibits relevant links in line with their ideologies or their historical and geopolitical relationships. We conclude by showing that reduced Google matrix analysis is a powerful new method for analysing large directed networks, and we argue that the approach can equally be applied to data represented as dynamic graphs. It offers new possibilities for efficiently analysing the interactions of a group of nodes embedded in a large directed network.
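For reference, the reduced Google matrix has a standard closed form in the literature (notation assumed here; the thesis may present it differently). Writing the full Google matrix G in block form over the selected subset of pages r and the remaining nodes s:

```latex
G =
\begin{pmatrix}
  G_{rr} & G_{rs} \\
  G_{sr} & G_{ss}
\end{pmatrix},
\qquad
G_{\mathrm{R}} = G_{rr} + G_{rs}\,(\mathbb{1} - G_{ss})^{-1}\,G_{sr}
```

The first term collects the direct links inside the subset; the second sums the contributions of all indirect paths that leave the subset, travel through the rest of the network, and return, which is exactly the "hidden links" component described above.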
40

A framework for measuring organizational information security vulnerability

Zhang, Changli 30 October 2019 (has links)
In spite of ever-improving information security technology, organizations remain vulnerable to security attacks because of mistakes made by their employees. To evaluate organizational security vulnerability and keep organizations alert to their security situation, this dissertation develops a framework for measuring the security vulnerability of organizations based on an analysis of their employees' online behaviour. In this framework, behavioural data on employees' online privacy are taken as input, and personal vulnerability profiles are generated for them and represented as confusion matrices. Then, by incorporating the personal vulnerability data into the local social network of interpersonal security influence in the workplace, the overall security vulnerability of each organization is evaluated and rated as a percentile value representing its position relative to all other organizations. Through evaluation with real-world data and through simulation, the framework is verified to be both effective and efficient in estimating the actual security vulnerability status of organizations. In addition, a demo application is developed to illustrate the feasibility of the framework in the practice of improving information security for organizations.
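One way such an aggregation could work is sketched below: each employee's vulnerability rate (read off their confusion matrix) is weighted by their centrality in the workplace influence network, and the organization's total is converted to a percentile among previously scored organizations. Every modelling choice here (eigenvector centrality, the unsafe-accepted rate, the percentile step) is an assumption for illustration, not the dissertation's exact model.

```python
import numpy as np
import networkx as nx

def org_vulnerability(influence_graph, confusion, other_org_scores):
    """Illustrative roll-up of per-employee vulnerability into an org percentile.

    `confusion` maps employee -> 2x2 confusion matrix of their privacy
    behaviour (rows: actual safe/unsafe, cols: judged safe/unsafe). The
    per-person vulnerability is taken as the unsafe-judged-safe rate,
    weighted by eigenvector centrality in the influence graph.
    """
    centrality = nx.eigenvector_centrality(influence_graph)
    score = 0.0
    for person, m in confusion.items():
        unsafe_accept_rate = m[1][0] / max(m[1][0] + m[1][1], 1)
        score += centrality.get(person, 0.0) * unsafe_accept_rate
    # Percentile position of this organization among previously scored ones.
    return 100.0 * np.mean([score >= s for s in other_org_scores])

g = nx.Graph([("alice", "bob"), ("bob", "carol")])
cm = {"alice": [[8, 2], [1, 9]], "bob": [[7, 3], [4, 6]], "carol": [[9, 1], [2, 8]]}
print(org_vulnerability(g, cm, other_org_scores=[0.1, 0.2, 0.3]))
```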
