  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Estudo avaliativo da informação mútua generalizada e de métricas clássicas como medidas de similaridade para corregistro em imagens fractais e cerebrais / Evaluative study of the generalized mutual information and classical metrics as similarity measures for coregistration of brain images and fractals.

Nali, Ivan Christensen 16 April 2012 (has links)
The integration of different modalities of medical images enables a more detailed analysis of their content, aiming at a more accurate diagnosis of the pathology present. This process, known as coregistration, seeks to align the images through rigid (or non-rigid) transformations, using mathematical algorithms for distortion, translation, rotation and scaling. The amplitude of each transformation is determined by a similarity measure between the images: the lower the similarity, the greater the transformation applied. In this sense, the similarity metric is a key element of the coregistration process. In this work, new definitions are first proposed for calculating alignment errors in translation, rotation and scale transformations, with the objective of evaluating coregistration performance. Five experiments are then performed. In the first, the Generalized Mutual Information is evaluated as a similarity measure for the coregistration of brain and fractal images. The results suggest that this metric is feasible, since it generally leads to very small alignment errors, although with no apparent advantage over the Shannon formulation. In the second experiment, a comparative study between Mutual Information and the classical metrics (Correlation Coefficient, Mean Squares, Gradient Difference and Cardinality) is performed. For the binary images analyzed, the metrics with the lowest alignment errors for translation and rotation coregistration were Mutual Information and Gradient Difference; for scale coregistration, all metrics led to alignment errors close to zero. In the third experiment, the alignment process is investigated in terms of the number of iterations of the coregistration algorithm. Considering both alignment error and number of iterations, it is concluded that the use of Generalized Mutual Information with q = 1.0 is appropriate for coregistration. In the fourth experiment, the influence of the fractal dimension on the coregistration of binary fractal images is studied; for some metrics, the general trend is a decrease in alignment error as the fractal dimension increases. Finally, in the fifth experiment, a linear correlation is found between the alignment errors of grayscale images of the cerebral cortex and of Julia set fractals.
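As a rough illustration of the similarity measures this abstract compares, the following sketch computes mutual information between two grayscale images from their joint histogram, with a Tsallis-style generalization controlled by a parameter q (q = 1.0 recovers the Shannon form). This is an assumption-laden sketch of the general technique, not the thesis's implementation; the histogram bin count and the q-entropy formulation used here are illustrative choices.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32, q=1.0):
    """Similarity between two equally sized grayscale images.

    q = 1.0 gives the Shannon mutual information; other q values give a
    Tsallis-style generalized mutual information.  Illustrative sketch
    only -- not the formulation from the thesis itself.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B
    if q == 1.0:
        nz = pxy > 0                   # avoid log(0)
        return float(np.sum(pxy[nz] *
                            np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
    # Tsallis q-entropy: S_q(p) = (1 - sum p^q) / (q - 1)
    def s_q(p):
        p = p[p > 0]
        return (1.0 - np.sum(p ** q)) / (q - 1.0)
    # Generalized MI as an entropy sum: S_q(A) + S_q(B) - S_q(A, B)
    return s_q(px) + s_q(py) - s_q(pxy.ravel())
```

In a registration loop, the optimizer would adjust translation, rotation and scale parameters to maximize this value between the fixed and the transformed moving image.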
2

The Universal Similarity Metric, Applied to Contact Maps Comparison in A Two-Dimensional Space

Rahmati, Sara 27 September 2008 (has links)
Comparing protein structures based on their contact maps is an important problem in structural proteomics. Building a system for reconstructing protein tertiary structures from their contact maps is one motivation for devising novel contact map comparison algorithms. Several methods that address the contact map comparison problem have been designed and are briefly discussed in this thesis; however, their scoring schemes do not satisfy the two characteristics of "metricity" and "universality". In this research we investigate the applicability of the Universal Similarity Metric (USM) to the contact map comparison problem. The USM is an information-theoretic measure based on the concept of Kolmogorov complexity. The ultimate goal of this research is to use the USM in a case-based reasoning system to predict protein structures from their predicted contact maps. Because the contact maps used in such a system are predicted from protein sequences and are not noise-free, the noise-sensitivity of the USM must be investigated; this is the first attempt to study the noise-tolerance of the USM. In our first implementation of the USM we converted the two-dimensional data structures (contact maps) into one-dimensional data structures (strings). The results of this implementation motivated us to avoid this dimension reduction in our second implementation. The method suggested in this thesis has the advantage of yielding a measure that is noise-tolerant. We assess the effectiveness of this noise tolerance by testing different USM implementation schemes against noise-contaminated versions of well-known data sets. / Thesis (Master, Computing) -- Queen's University, 2008-09-27
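Since Kolmogorov complexity is uncomputable, the USM is applied in practice through the Normalized Compression Distance, which substitutes the length of a real compressor's output for K(x). A minimal sketch, using zlib as the compressor (the thesis's compressor choice and encoding of contact maps into strings may differ):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, a computable stand-in for the
    Universal Similarity Metric: Kolmogorov complexity K(.) is
    approximated by the compressed length C(.).  Values near 0 mean
    very similar inputs; values near 1 mean unrelated inputs."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))   # concatenation compresses well when x, y share structure
    return (cxy - min(cx, cy)) / max(cx, cy)
```

For the first implementation described above, each contact map would be flattened into a string (e.g. row-major 0/1 characters) before being handed to such a distance.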
4

An Application of Dimension Reduction for Intention Groups in Reddit

Sun, Xuebo, Wang, Yudan January 2016 (has links)
Reddit (www.reddit.com) is a social news platform for information sharing and exchange. The amount of data, in terms of both observations and dimensions, is enormous, because a large number of users publish comments expressing all aspects of knowledge in their own lives. While it is easy for a human being to understand Reddit comments on an individual basis, extracting insights from them with a computer is a tremendous challenge. In this thesis, we seek an algorithm-driven approach to analyzing both the unique Reddit data structure and the relations among the authors of comments with similar features. We explore the various types of communication between two people with common characteristics and build a communication model that characterizes the potential relationship between two users via their messages. We then seek a dimensionality reduction methodology that can merge users with similar behavior into the same groups. Along the way, we develop a computer program to collect the data, define attributes based on the communication model, and apply a rule-based group merging algorithm. We then evaluate the results to show the effectiveness of this methodology. Our results show reasonable success in producing user groups that have recognizable group characteristics and share similar intentions.
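A rule-based group merging step of the kind mentioned above might, in its simplest form, assign each user to the first existing group whose representative shares enough attributes, and otherwise start a new group. The following is a hypothetical sketch; the thesis's actual rules, attributes and thresholds come from its Reddit communication model and are not reproduced here.

```python
def merge_into_groups(users, threshold=0.5):
    """Greedy rule-based merging of users into groups.

    users: iterable of (user_id, attribute_set) pairs.
    A user joins the first group whose representative attribute set
    overlaps (Jaccard) by at least `threshold`; otherwise a new group
    is created.  Simplified, hypothetical sketch only.
    """
    groups = []  # each entry: (representative_attrs, [member_ids])
    for user_id, attrs in users:
        for rep, members in groups:
            overlap = len(attrs & rep) / max(len(attrs | rep), 1)
            if overlap >= threshold:
                members.append(user_id)
                break
        else:
            groups.append((set(attrs), [user_id]))
    return [members for _, members in groups]
```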
5

Similarity metric for crowd trajectory evaluation on a per-agent basis : An approach based on the sum of absolute differences / Likhetsmetrik för folkmassautvärdering ur ett per-agent perspektiv : En metod baserad på summan av absoluta skillnader

Brunnberg, Karl January 2023 (has links)
Simulation models that replicate realistic crowd behaviours and dynamics are of great societal use in a variety of fields of research and entertainment. In order to evaluate the accuracy of such models, there is a demand for metrics and evaluation solutions that measure how well they simulate the dynamics of real crowds. A crowd similarity metric is a performance indicator that quantifies the similarity of crowd trajectories. Similarity metrics may be used to evaluate the validity of simulation models by comparing the content they produce to real-world crowd trajectory data. This thesis presents and evaluates a similarity metric that employs an approach based on the Sum of Absolute Differences to compare two-dimensional crowd trajectories. The metric encapsulates the similarity of crowd trajectories by iteratively summing time-wise positional differences on a per-agent basis. The resulting metric is simple, highly reproducible and simulator-independent. Its accuracy in quantifying similarity is evaluated by means of a user study investigating the correlation between metric values and human perception of similarity for real and simulated crowd scenarios of varying density, trajectory, speed, and presence of environmental obstacles. The user study explores different aspects of crowd perception by dividing similarity ratings on a five-point Likert scale into four categories: overall, and in terms of trajectories, speeds, and positions. Scenarios and rating categories that indicate high and low degrees of correlation between metric values and perceived similarity are identified and discussed, and the findings are compared to previous research on crowd trajectory similarity metrics. The results indicate that the metric shows promising potential for accurate similarity measurement in simple and sparse scenarios across all rated categories. Moreover, the metric is strongly correlated with the trajectory ratings of crowd motion similarity. However, it does not appear to correlate well with the perception of overall similarity for large and dense crowds.
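The core of a per-agent Sum of Absolute Differences metric as described above can be sketched in a few lines. This assumes agents in the two trajectory sets are matched by index and sampled at the same timesteps; the thesis's exact normalization and aggregation may differ.

```python
import numpy as np

def crowd_sad(traj_a, traj_b):
    """Per-agent Sum of Absolute Differences between two crowd recordings.

    traj_a, traj_b: arrays of shape (timesteps, agents, 2) holding 2-D
    agent positions.  For each agent, absolute positional differences
    are summed over time; the per-agent scores are then averaged.
    Lower values mean more similar crowd motion.  Sketch only."""
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    per_agent = np.abs(traj_a - traj_b).sum(axis=(0, 2))  # one score per agent
    return per_agent.mean()
```

Keeping the per-agent scores before averaging also makes it possible to report which individual agents deviate most between the real and simulated crowds.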
6

Automatically Identifying Configuration Files

Huang, Zhen 19 January 2010 (has links)
Systems can become misconfigured for a variety of reasons, such as operator errors or buggy patches. When a misconfiguration is discovered, usually the first order of business is to restore availability, often by undoing the misconfiguration. To simplify this task, we propose Ocasta, which automatically determines which files contain configuration state. Ocasta uses a novel similarity metric to measure how similar a file's versions are to each other, and a set of filters to eliminate non-persistent files from consideration. These two mechanisms enable Ocasta to identify all 72 configuration files out of 2363 versioned files from 6 common applications in two user traces, while mistaking only 33 non-configuration files for configuration files. Ocasta allows a versioning file system to eliminate roughly 66% of non-configuration file versions from its logs, thus reducing the number of file versions that a user must manually examine to recover from a misconfiguration.
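The intuition behind such a version-similarity metric is that configuration files change by small edits between versions, while logs and caches change wholesale. A minimal sketch of that idea using difflib's edit-based ratio over consecutive versions; Ocasta's actual metric definition may differ from this:

```python
import difflib

def version_similarity(versions):
    """Average similarity (0..1) between consecutive versions of a file.

    Configuration files tend to evolve by small edits, so consecutive
    versions stay highly similar; frequently rewritten files (logs,
    caches) score low.  Hypothetical sketch of the idea, not Ocasta's
    published metric."""
    if len(versions) < 2:
        return 1.0
    ratios = [difflib.SequenceMatcher(None, a, b).ratio()
              for a, b in zip(versions, versions[1:])]
    return sum(ratios) / len(ratios)
```

Files whose score stays above a chosen threshold across their version history would then be candidate configuration files, subject to the additional persistence filters.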
8

Εξόρυξη θεματικών αλυσίδων από ιστοσελίδες για την δημιουργία ενός θεματολογικά προσανατολισμένου προσκομιστή / Lexical chain extraction for the creation of a topical focused crawler

Κοκόσης, Παύλος 16 May 2007 (has links)
Topical focused crawlers are applications that aim to collect web pages on a specific topic from the Web; building them has been an open research field in recent years. In this master's thesis we develop a topical focused crawler using lexical chains. Lexical chains are an important lexical and computational tool for representing the meaning of a text; they have been used successfully in automatic text summarization and in classifying texts into thematic categories. We present the processes of hyperlink and web page scoring, as well as the computation of semantic similarity between documents using lexical chains. We combine these methods and embed them in a topical focused crawler, whose experimental results are very promising.
9

Digital Twin Knowledge Graphs for IoT Platforms : Towards a Virtual Model for Real-Time Knowledge Representation in IoT Platforms / Digital Twin Kunskapsgrafer för IoT-Plattformar : Mot en Virtuell Modell för Kunskapsrepresentation i Realtid i IoT-Plattformar

Jarabo Peñas, Alejandro January 2023 (has links)
This thesis presents the design and prototype implementation of a digital twin based on a knowledge graph for Internet of Things (IoT) platforms. The digital twin is a virtual representation of a physical object or system that must continually integrate and update knowledge in rapidly changing environments. The proposed knowledge graph is designed to store and efficiently query a large number of IoT devices in a complex logical structure, use rule-based reasoning to infer new facts, and integrate unanticipated devices into the existing logical structure in order to adapt to changing environments. The digital twin is implemented using the open-source TypeDB knowledge graph and tested in a simplified automobile production line environment. The main focus of the work is the integration of unanticipated devices, for which a similarity metric is implemented to identify similar existing devices and determine the appropriate integration into the knowledge graph. The proposed digital twin knowledge graph is a promising solution for managing and integrating knowledge in rapidly changing IoT environments, providing valuable insights and support for decision-making.
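A device-matching step like the one described above could, at its simplest, score attribute overlap between an unanticipated device and each device already in the graph. The following is a hypothetical stand-in: the thesis's real metric operates over the TypeDB schema, and the attribute names here are invented for illustration.

```python
def device_similarity(dev_a: dict, dev_b: dict) -> float:
    """Fraction of attribute keys on which two device descriptions agree.

    A Jaccard-style score over the union of attribute keys: 1.0 means
    identical descriptions, 0.0 means nothing in common.  Hypothetical
    sketch; attribute names are illustrative."""
    keys = set(dev_a) | set(dev_b)
    if not keys:
        return 0.0
    shared = sum(1 for k in set(dev_a) & set(dev_b) if dev_a[k] == dev_b[k])
    return shared / len(keys)
```

The unanticipated device would then be attached to the knowledge graph near its highest-scoring neighbour, inheriting that device's place in the logical structure.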
10

Big Graph Processing : Partitioning and Aggregated Querying / Traitement des graphes massifs : partitionnement et requêtage agrégatif

Echbarthi, Ghizlane 23 October 2017 (has links)
Avec l'avènement du « big data », de nombreuses répercussions ont eu lieu dans tous les domaines de la technologie de l'information, préconisant des solutions innovantes remportant le meilleur compromis entre coûts et précision. En théorie des graphes, où les graphes constituent un support de modélisation puissant qui permet de formaliser des problèmes allant des plus simples aux plus complexes, la recherche pour des problèmes NP-complet ou NP-difficils se tourne plutôt vers des solutions approchées, mettant ainsi en avant les algorithmes d'approximations et les heuristiques alors que les solutions exactes deviennent extrêmement coûteuses et impossible d'utilisation.Nous abordons dans cette thèse deux problématiques principales: dans un premier temps, le problème du partitionnement des graphes est abordé d'une perspective « big data », où les graphes massifs sont partitionnés en streaming. Nous étudions et proposons plusieurs modèles de partitionnement en streaming et nous évaluons leurs performances autant sur le plan théorique qu'empirique. Dans un second temps, nous nous intéressons au requêtage des graphes distribués/partitionnés. Dans ce cadre, nous étudions la problématique de la « recherche agrégative dans les graphes » qui a pour but de répondre à des requêtes interrogeant plusieurs fragments de graphes et qui se charge de la reconstruction de la réponse finale tel que l'on obtient un « matching approché » avec la requête initiale / With the advent of the "big data", many repercussions have taken place in all fields of information technology, advocating innovative solutions with the best compromise between cost and accuracy. 
In graph theory, where graphs provide a powerful modeling support for formalizing problems ranging from the simplest to the most complex, the search for NP-complete or NP-difficult problems is rather directed towards approximate solutions, thus Forward approximation algorithms and heuristics while exact solutions become extremely expensive and impossible to use. In this thesis we discuss two main problems: first, the problem of partitioning graphs is approached from a perspective big data, where massive graphs are partitioned in streaming. We study and propose several models of streaming partitioning and we evaluate their performances both theoretically and empirically. In a second step, we are interested in querying distributed / partitioned graphs. In this context, we study the problem of aggregative search in graphs, which aims to answer queries that interrogate several fragments of graphs and which is responsible for reconstructing the final response such that a Matching approached with the initial query
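Streaming graph partitioning of the kind studied here assigns each arriving node to a partition using only the placements made so far. One classic heuristic of this family is Linear Deterministic Greedy, sketched below; it is given as a representative example of the approach, not as one of the thesis's own models.

```python
def ldg_partition(edges, num_nodes, k):
    """Linear Deterministic Greedy streaming partitioner.

    Nodes 0..num_nodes-1 arrive in order; each is placed on the
    partition holding most of its already-placed neighbours, damped by
    how full that partition is (ties broken toward the least-loaded
    partition).  Returns a dict node -> partition index."""
    capacity = num_nodes / k
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    parts = [set() for _ in range(k)]
    assignment = {}
    for node in range(num_nodes):
        neighbours = adj.get(node, set())
        def score(i):
            # neighbours already on partition i, weighted by remaining capacity
            return len(parts[i] & neighbours) * (1.0 - len(parts[i]) / capacity)
        best = min(range(k), key=lambda i: (-score(i), len(parts[i])))
        parts[best].add(node)
        assignment[node] = best
    return assignment
```

On two disjoint triangles with k = 2, each triangle ends up on its own partition, cutting no edges; on real graphs the quality depends heavily on the node arrival order, which is one of the trade-offs such streaming models must evaluate.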
