41

SNIF TOOL - Sniffing for Patterns in Continuous Streams

MUKHERJI, ABHISHEK 11 February 2008 (has links)
Recent technological advances in sensor networks and mobile devices give rise to new challenges in the processing of live streams. In particular, time-series sequence matching, namely the similarity matching of live streams against a set of predefined pattern sequence queries, is an important technology for a broad range of domains, including monitoring the spread of hazardous waste and administering network traffic. In this thesis, I use the time-critical application of monitoring fire growth in an intelligent building as my motivating example. Various measures and algorithms have been established in the literature for the similarity of static time-series data. Matching continuous data poses the following new challenges: 1) fluctuations in stream characteristics, 2) real-time requirements of the application, 3) limited system resources, and 4) noisy data. Thus, the matching techniques proposed for static time series are mostly not applicable to live stream matching. In this thesis, I propose a new generic framework, henceforth referred to as the n-Snippet Indices Framework (SNIF for short), for discovering the similarity between a live stream and pattern sequences. The framework is composed of two key phases: (1) an off-line preprocessing phase, in which the pattern sequences are processed and stored in an approximate two-level index structure; and (2) an on-line matching phase, in which the streaming time-series (the live stream) is matched on the fly against the indexed pattern sequences. I introduce the concept of n-snippets for numeric data as the unit of matching. The insight is to match small snippets of the live stream against prefixes of the patterns and to maintain these matches in succession: the longer the pattern prefixes found to be similar to the live stream, the stronger the confirmation of the match. Live stream matching is therefore performed at two levels: bag matching for matching snippets and order checking for maintaining the lengths of the match. I propose four variations of the matching algorithm that let the user choose between the two conflicting goals of result accuracy and response time. The effectiveness of SNIF in detecting patterns has been thoroughly tested through extensive experimental evaluations using the continuous query engine CAPE as the platform. The evaluations used real datasets from multiple domains, including fire monitoring, chlorine monitoring, and sensor networks. Moreover, SNIF is shown to be tolerant of noisy datasets.
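As a rough illustration of the two-level matching idea described in this abstract (bag matching of small snippets, then order checking of how far the matched prefix extends), here is a minimal Python sketch. It is not SNIF's actual index or API; the snippet size, tolerance, and helper names are illustrative assumptions.

```python
def snippets(series, n):
    """Split a numeric sequence into consecutive, non-overlapping n-snippets."""
    return [tuple(series[i:i + n]) for i in range(0, len(series) - n + 1, n)]

def bag_match(snippet, pattern_snippets, tol=0.5):
    """Level 1 (bag matching): which pattern snippets does this live snippet approximately match?"""
    def close(a, b):
        return all(abs(x - y) <= tol for x, y in zip(a, b))
    return [idx for idx, p in enumerate(pattern_snippets) if close(snippet, p)]

def order_check(match_history, new_matches):
    """Level 2 (order checking): keep only matches that extend a previously matched prefix."""
    return {idx for idx in new_matches if idx == 0 or (idx - 1) in match_history}

# toy usage: match a live window against one pattern, one snippet at a time
pattern = [0.0, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4]   # e.g. a temperature ramp of a fire pattern
live    = [0.0, 0.1, 0.3, 0.5, 0.8, 1.5, 3.3, 6.0]
n = 2
pat_snips = snippets(pattern, n)
history = set()
for snip in snippets(live, n):
    history = order_check(history, bag_match(snip, pat_snips))
confidence = (max(history) + 1) / len(pat_snips) if history else 0.0
print(f"matched prefix fraction: {confidence:.2f}")   # longer matched prefix => stronger match
```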
42

Un algorithme de fouille de données générique et parallèle pour architecture multi-coeurs / A generic and parallel pattern mining algorithm for multi-core architectures.

Negrevergne, Benjamin 29 November 2011 (has links)
In the pattern mining field, there exists a large number of algorithms that solve a large variety of distinct but similar pattern mining problems. This variety hinders the broad adoption of pattern mining for data analysis. In this thesis, we propose a formal framework that is able to capture a broad range of pattern mining problems. We illustrate the generality of our framework by formalizing three different pattern mining problems: closed frequent itemset mining, closed relational graph mining, and closed gradual itemset mining. Building on this framework, we have designed ParaMiner, a generic and parallel algorithm for pattern mining. ParaMiner is able to solve any pattern mining problem that can be formalized within our framework. In order to achieve practical efficiency, we have generalized important optimizations from state-of-the-art algorithms and made ParaMiner able to exploit parallel computing platforms. Thorough experiments demonstrate that, despite being a generic algorithm, ParaMiner can compete with the fastest ad hoc algorithms, even though those algorithms benefit from numerous optimizations specific to the particular pattern mining sub-problem they solve.
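To make the notion of a generic pattern mining framework concrete, the sketch below enumerates closed patterns under a pluggable selection predicate (here, minimum support). It is a toy, sequential stand-in, not ParaMiner itself: the thesis's formalism, dataset reduction, and parallel enumeration are omitted, and the function names are illustrative.

```python
def closure(pattern, dataset):
    """Closure operator: intersection of all transactions that contain the pattern."""
    supporting = [t for t in dataset if pattern <= t]
    if not supporting:
        return pattern
    closed = set(supporting[0])
    for t in supporting[1:]:
        closed &= t
    return frozenset(closed)

def mine_closed(dataset, is_interesting):
    """Enumerate closed patterns satisfying a user-supplied selection predicate.
    The predicate plays the role of the problem-specific criterion in a generic framework."""
    items = sorted(set().union(*dataset))
    results = set()
    def expand(pattern):
        for item in items:
            if item in pattern:
                continue
            candidate = closure(pattern | {item}, dataset)
            if is_interesting(candidate, dataset) and candidate not in results:
                results.add(candidate)
                expand(candidate)
    expand(frozenset())
    return results

# usage: closed frequent itemsets with minimum support 2
data = [frozenset(t) for t in (["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "b", "c"])]
freq = lambda p, d: p and sum(p <= t for t in d) >= 2
for p in sorted(mine_closed(data, freq), key=sorted):
    print(sorted(p))
```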
43

Une approche de fouille de données pour le débogage temporel des applications embarquées de streaming / Data Mining Approach to Temporal Debugging of Embedded Streaming Applications

Iegorov, Oleg 08 April 2016 (has links)
Debugging streaming applications running on the multimedia embedded systems found in modern consumer electronics (e.g. set-top boxes and smartphones) is one of the most challenging areas of embedded software development. With each generation of hardware, more powerful and complex Systems-on-Chip (SoC) are released, and developers constantly strive to adapt their applications to these new platforms. Embedded software must not only return correct results but also deliver these results on time in order to respect the Quality-of-Service (QoS) properties of the entire system. Violations of QoS properties lead to temporal bugs, which manifest themselves in multimedia embedded systems as, for example, glitches in the video or cracks in the sound. Temporal debugging proves to be tricky because temporal bugs are not related to the functional correctness of the code, making traditional GDB-like debuggers essentially useless. Violations of QoS properties can stem from complex interactions between a particular application and the system or other applications; the complete execution context must therefore be taken into account in order to perform temporal debugging. Recent advances in tracing technology allow software developers to capture a trace of the system's execution and to analyze it afterwards to understand which particular system activity is responsible for the violations of QoS properties. However, such traces have a large volume, and understanding them requires data analysis skills that developers often do not have. In this thesis, we propose SATM (Streaming Application Trace Miner), a novel temporal debugging approach for embedded streaming applications. SATM is based on the premise that such applications are designed under the dataflow model of computation, i.e. as a directed graph where data flows between computational units called actors. In such a setting, actors must be scheduled periodically in order to meet QoS properties expressed as real-time constraints, e.g. displaying 30 video frames per second. We show that an actor which repeatedly fails to respect its period at runtime causes the violation of the application's real-time constraints. In practice, SATM is a data analysis workflow combining statistical measures and data mining algorithms. It provides an automatic solution to the problem of temporal debugging of streaming applications. Given an execution trace of a streaming application exhibiting low QoS, as well as a list of its actors, SATM first identifies the exact actor invocations found in the trace. It then discovers the actors' periods, as well as the parts of the trace in which the periods are not respected.
Those parts are further analyzed to extract patterns of system activity that differentiate them from other parts of the trace. Such patterns can give strong hints on the origin of the problem and are returned to the developer. More specifically, we represent those patterns as minimal contrast sequences and investigate various solutions to mine such sequences from execution trace data.Finally, we demonstrate SATM’s ability to detect both an artificial perturbation injected in an open source multimedia framework, as well as temporal bugs from two industrial use cases coming from STMicroelectronics. We also provide an extensive analysis of sequential pattern mining algorithms applied on execution trace data and explain why state-of-the-art algorithms fail to efficiently mine sequential patterns from real-world traces.
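A minimal sketch of the period-discovery step described above: estimate each actor's nominal period from its invocation timestamps and flag the trace intervals where that period is not respected. This is illustrative only and is not SATM's implementation; the contrast-sequence mining stage is not shown, and the tolerance value is an assumption.

```python
import statistics

def actor_periods(trace, actors):
    """Group trace events by actor and estimate each actor's nominal period
    as the median inter-invocation gap (robust to occasional outliers)."""
    periods = {}
    for actor in actors:
        times = sorted(ts for ts, name in trace if name == actor)
        gaps = [b - a for a, b in zip(times, times[1:])]
        periods[actor] = statistics.median(gaps) if gaps else None
    return periods

def period_violations(trace, actor, period, tolerance=0.25):
    """Return time intervals where the actor's inter-invocation gap exceeds
    its nominal period by more than `tolerance` (relative)."""
    times = sorted(ts for ts, name in trace if name == actor)
    return [(a, b) for a, b in zip(times, times[1:])
            if b - a > period * (1 + tolerance)]

# toy trace: (timestamp_ms, actor); the video actor misses its ~33 ms period once
trace = [(t, "video_decoder") for t in (0, 33, 66, 99, 190, 223)] + \
        [(t, "audio_decoder") for t in (0, 21, 42, 63, 84, 105)]
periods = actor_periods(trace, ["video_decoder", "audio_decoder"])
print(periods)                                                               # ~33 ms and ~21 ms
print(period_violations(trace, "video_decoder", periods["video_decoder"]))   # [(99, 190)]
```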
44

Process pattern mining: identifying sources of assignable error using event logs

Shetty, Bhupesh 01 December 2018 (has links)
This thesis examines the problem of identifying patterns in process event logs that are correlated with binary outcomes that remain undetected until the end of the process. Specifically, we consider the task of identifying patterns in a machine-shop manufacturing process that are correlated with product defects. We introduce a pattern mining algorithm based on Apriori to identify frequent patterns, and use binary correlation measures to identify patterns associated with an elevated defect rate. We design a simulation model to generate synthetic datasets for testing our algorithm, and we compare the effectiveness of different correlation measures, target pattern complexities, and sample sizes, both with and without knowledge of the underlying process. We show that knowledge of the underlying process helps in identifying the pattern associated with defects. We also develop a decision support tool based on p-value simulation to help managers identify sources of error in real-life settings. Finally, we apply our method to real-world data and extract useful information that helps plant managers make decisions related to investments and workforce planning. The thesis also explores the problem of predicting the defect probability given an ordered list of events and its defect status. We develop a supervised learning model using the frequencies of patterns extracted from the event log as the feature set, discuss the challenges of this approach, and conclude that the random forest algorithm performs better than the other methods considered. We apply this approach to a real-world case study and discuss its applications in the machine shop. Finally, the thesis explores the order-bidding process in the machine-shop industry and proposes an optimization-based model to maximize the profit of the machine shop. Through a case study, we show the advantages of using the predicted defect probability in the proposed optimization model to determine the machine-worker schedule for executing job orders.
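The sketch below illustrates the general approach described above: an Apriori-style level-wise search for frequent event sets in job logs, followed by a binary correlation measure (here the phi coefficient) to flag patterns associated with defects. The data, measure choice, and thresholds are illustrative assumptions, not the thesis's experimental setup.

```python
def frequent_itemsets(event_logs, min_support):
    """Apriori-style level-wise search for frequent event sets in process logs."""
    logs = [frozenset(log) for log in event_logs]
    items = {frozenset([e]) for log in logs for e in log}
    frequent = {}
    level = {i for i in items if sum(i <= l for l in logs) >= min_support}
    while level:
        frequent.update({p: sum(p <= l for l in logs) for p in level})
        # candidate generation: join k-sets sharing k-1 items, then keep the frequent ones
        level = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        level = {c for c in level if sum(c <= l for l in logs) >= min_support}
    return frequent

def phi_correlation(pattern, logs, defects):
    """Phi coefficient between 'pattern occurs in the job' and 'job is defective'."""
    n = len(logs)
    a = sum(1 for l, d in zip(logs, defects) if pattern <= set(l) and d)
    b = sum(1 for l, d in zip(logs, defects) if pattern <= set(l) and not d)
    c = sum(1 for l, d in zip(logs, defects) if not pattern <= set(l) and d)
    d_ = n - a - b - c
    denom = ((a + b) * (c + d_) * (a + c) * (b + d_)) ** 0.5
    return (a * d_ - b * c) / denom if denom else 0.0

# toy job logs from a machine shop; jobs routed through worn machine M3 tend to be defective
logs = [["M1", "M3", "M5"], ["M1", "M2"], ["M2", "M3", "M5"], ["M1", "M2", "M4"]]
defects = [True, False, True, False]
for pattern, support in frequent_itemsets(logs, min_support=2).items():
    print(sorted(pattern), support, round(phi_correlation(pattern, logs, defects), 2))
```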
45

A New Wap-tree Based Sequential Pattern Mining Algorithm For Faster Pattern Extraction

Onal, Kezban Dilek 01 September 2012 (has links) (PDF)
Sequential pattern mining is the basis for solving problems in various domains such as bioinformatics and web usage mining, and research in this field continues to seek faster algorithms. WAP-Tree-based algorithms that emerged from the web usage mining literature have shown remarkable performance on single-item sequence databases. In this study, we investigated the application of WAP-Tree-based mining to multi-item sequential pattern mining and designed an extension of the WAP-Tree data structure for multi-item sequence databases, the MULTI-WAP-Tree. In addition, we propose a new mining strategy on the WAP-Tree that combines a hybrid traversal strategy over the search space of possible sequences with a new early pruning idea called the Sibling Principle on the Pattern Tree. Two algorithms, FOF-PT and MULTI-FOF-PT, applying this strategy to the WAP-Tree and the MULTI-WAP-Tree respectively, are developed. Experiments showed that FOF-PT outperforms both other WAP-Tree-based algorithms and PrefixSpan in terms of execution time. Moreover, experimental results revealed that MULTI-FOF-PT finds patterns faster than PrefixSpan on dense multi-item sequence databases with small alphabets.
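For readers unfamiliar with WAP-Tree-style structures, the sketch below builds a simple shared prefix tree with per-node counts from single-item sequences. It is only a rough illustration: a real WAP-Tree also maintains header-node linkage per event, and the MULTI-WAP-Tree extension to itemsets per position is not shown.

```python
class Node:
    """A node of a WAP-Tree-like prefix tree: one event label plus a visit count."""
    def __init__(self, label=None):
        self.label, self.count, self.children = label, 0, {}

def build_tree(sequences):
    """Insert each single-item sequence into a shared prefix tree, accumulating counts."""
    root = Node()
    for seq in sequences:
        node = root
        for event in seq:
            node = node.children.setdefault(event, Node(event))
            node.count += 1
    return root

def dump(node, depth=0):
    """Print the tree with indentation showing depth."""
    for child in node.children.values():
        print("  " * depth + f"{child.label}:{child.count}")
        dump(child, depth + 1)

# toy web-access sequences (one event per position)
sessions = [["a", "b", "a", "c"], ["a", "b", "c"], ["b", "a", "c"]]
dump(build_tree(sessions))
```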
46

Efficient Temporal Synopsis of Social Media Streams

Abouelnagah, Younes January 2013 (has links)
Search and summarization of streaming social media, such as Twitter, requires the ongoing analysis of large volumes of data with dynamically changing characteristics. Tweets are short and repetitious -- lacking context and structure -- making it difficult to generate a coherent synopsis of events within a given time period. Although some established algorithms for frequent itemset analysis might provide an efficient foundation for synopsis generation, the unmodified application of standard methods produces a complex mass of rules, dominated by common language constructs and many trivial variations on topically related results. Moreover, these results are not necessarily specific to events within the time period of interest. To address these problems, we build upon the Linear time Closed itemset Mining (LCM) algorithm, which is particularly suited to the large and sparse vocabulary of tweets. LCM generates only closed itemsets, providing an immediate reduction in the number of trivial results. To reduce the impact of function words and common language constructs, we apply a filtering step that preserves these terms only when they may form part of a relevant collocation. To further reduce trivial results, we propose a novel strengthening of the closure condition of LCM that retains only those results exceeding a threshold of distinctiveness. Finally, we perform temporal ranking, based on information gain, to identify results that are particularly relevant to the time period of interest. We evaluate our work over a collection of tweets gathered in late 2012, exploring the efficiency and filtering characteristics of each processing step, both individually and collectively. Based on our experience, the resulting synopses from various time periods provide understandable and meaningful pictures of events within those periods, with potential application to tasks such as temporal summarization and query expansion for search.
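As a hedged illustration of the temporal-ranking step, the sketch below scores candidate itemsets by the information gain they provide about membership in the time window of interest, assuming the closed itemsets have already been mined (e.g. with LCM). The names and toy data are illustrative, not the thesis's pipeline.

```python
from math import log2

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def temporal_info_gain(tweets, in_window, itemset):
    """Information gain of the target-window indicator given 'itemset occurs in the tweet'.
    High gain => the itemset is distinctive of the period of interest."""
    n = len(tweets)
    p_window = sum(in_window) / n
    with_p  = [w for t, w in zip(tweets, in_window) if itemset <= t]
    without = [w for t, w in zip(tweets, in_window) if not itemset <= t]
    def h(group):
        return entropy(sum(group) / len(group)) if group else 0.0
    conditional = (len(with_p) * h(with_p) + len(without) * h(without)) / n
    return entropy(p_window) - conditional

# toy data: tweets as term sets, flag = posted inside the time window of interest
tweets = [{"storm", "power", "outage"}, {"storm", "flood"}, {"football", "goal"},
          {"storm", "outage"}, {"football", "win"}, {"coffee", "monday"}]
in_window = [True, True, False, True, False, False]
for itemset in ({"storm"}, {"football"}, {"coffee"}):
    print(sorted(itemset), round(temporal_info_gain(tweets, in_window, frozenset(itemset)), 3))
```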
47

Social game retrieval from unstructured videos

Wang, Ping 29 June 2010 (has links)
Parent-child social games, such as peek-a-boo and patty-cake, are a key element of an infant's earliest social interactions. The analysis of children's behaviors in social games based on video recordings provides a means for psychologists to study their social and cognitive development. However, the current practice in the use of video for behavioral research is extremely labor-intensive, involving many hours spent extracting and coding relevant video clips from a large corpus. From the standpoint of computer vision, such real-world video collections pose significant challenges for the automatic analysis of behavior, such as cluttered backgrounds and variations in camera angle, clothing, subject appearance, and lighting. These observations motivate my thesis work: automatic retrieval of social games from unstructured videos. The goal of this work is both to help accelerate research progress in behavioral science and to take initial steps towards the analysis of natural human interactions in natural settings. Social games are characterized by repetitions of turn-taking interactions between parent and child, with variations that are recognizable by both of them. I developed a computational model for social games that exploits their temporal structure over a long time-scale window as quasi-periodic patterns in a time series, and an unsupervised algorithm that mines these quasi-periodic patterns from videos. The algorithm consists of two functional modules: converting image sequences into discrete symbolic sequences, and mining quasi-periodic patterns from the symbolic sequences. When this technique is applied to video of social games, the extracted quasi-periodic patterns often correspond to meaningful stages of the games, and the retrieval performance on unstructured, lab-recorded videos and real-world family movies is promising. Building on this work, I developed a new feature extraction algorithm for social game categorization. Given a quasi-periodic pattern representation, my method automatically selects the most relevant space-time interest points to construct the feature representation. Our experiments demonstrate very promising classification performance on social games collected from YouTube. In addition, the method can also be used to categorize TV videos of sports rallies, demonstrating the generality of the approach. In order to support and encourage more research on human behavior analysis in realistic contexts, a video database of realistic child play in natural settings has been collected and published on our project website (http://www.cc.gatech.edu/cpl/projects/socialgames), along with annotations. The unsupervised quasi-periodic pattern mining method represents a substantial generalization of conventional periodic motion analysis. Its generality is evaluated by retrieving motions with a range of quasi-periodicity from unstructured videos, and its performance is compared with that of a periodic motion detection method based on motion self-similarity. Our method demonstrates superior retrieval performance, with 100% precision at recall up to 92.04%, while using far fewer parameters than the competing method.
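A rough sketch of the second module described above, mining quasi-periodic patterns from a discrete symbolic sequence: it keeps n-grams whose successive occurrences recur with roughly constant spacing. This is a toy stand-in for the thesis's algorithm; the n-gram length, jitter bound, and minimum repeat count are assumptions.

```python
from collections import defaultdict

def quasi_periodic_ngrams(symbols, n=2, min_repeats=3, jitter=0.5):
    """Find n-grams whose successive occurrences recur with roughly constant spacing
    (every gap within `jitter` of the median gap) -- a crude notion of quasi-periodicity."""
    positions = defaultdict(list)
    for i in range(len(symbols) - n + 1):
        positions[tuple(symbols[i:i + n])].append(i)
    results = {}
    for gram, pos in positions.items():
        if len(pos) < min_repeats:
            continue
        gaps = [b - a for a, b in zip(pos, pos[1:])]
        median_gap = sorted(gaps)[len(gaps) // 2]
        if median_gap and all(abs(g - median_gap) <= jitter * median_gap for g in gaps):
            results[gram] = (median_gap, pos)
    return results

# toy symbolic sequence, e.g. discretized frame features of a peek-a-boo game:
# "HC" (hide, cover) recurs roughly every 3 symbols, with small variations in between
seq = list("HCXHCYHCXZHCY")
for gram, (period, pos) in quasi_periodic_ngrams(seq).items():
    print("".join(gram), "period ~", period, "at positions", pos)
```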
49

A data mining approach to ontology learning for automatic content-related question-answering in MOOCs

Shatnawi, Safwan January 2016 (has links)
Massive Open Online Courses (MOOCs) allow huge numbers of registrants to enrol. This research aims to offer MOOC registrants automatic content-related feedback that fulfils their cognitive needs. A framework is proposed consisting of three modules: a subject ontology learning module, a short-text classification module, and a question-answering module. Unlike previous research, a regular expression parser is used to identify relevant concepts for ontology learning, and the relevant concepts are extracted from unstructured documents. To build the concept hierarchy, a frequent pattern mining approach is used, guided by a heuristic function that ensures sibling concepts are placed at the same level in the hierarchy. As this process does not require specific lexical or syntactic information, it can be applied to any subject. To validate the approach, the resulting ontology is used in a question-answering system that analyses students' content-related questions and generates answers for them. Textbook end-of-chapter questions and answers are used to validate the question-answering system. The resulting ontology is compared against Text2Onto for the question-answering task and achieves favourable results. Finally, different indexing approaches based on a subject's ontology are investigated for classifying short texts in MOOC forum discussion data: unigram-based, concept-based, and hierarchical concept indexing. The experimental results show that the ontology-based feature indexing approaches outperform the unigram-based indexing approach. Experiments are conducted in both binary and multi-label classification settings, and the results consistently show that hierarchical concept indexing outperforms both concept-based and unigram-based indexing. The bagging and random forest classifiers achieved the best results among the tested classifiers.
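As an illustration of the hierarchical concept indexing idea described above, the sketch below matches ontology concepts in a forum post with a simple regular-expression pass and expands each match with its ancestors. The toy ontology is assumed to be already learned; the thesis's regex-parser extraction and frequent-pattern hierarchy construction are not reproduced here.

```python
import re

# toy subject ontology: child concept -> parent concept (assumed already learned)
ontology = {
    "binary search tree": "tree",
    "tree": "data structure",
    "hash table": "data structure",
    "quicksort": "sorting algorithm",
}

def extract_concepts(text, ontology):
    """Match ontology concepts in a forum post with a simple regular-expression pass."""
    found = set()
    for concept in ontology:
        if re.search(r"\b" + re.escape(concept) + r"\b", text, flags=re.IGNORECASE):
            found.add(concept)
    return found

def hierarchical_index(text, ontology):
    """Hierarchical concept indexing: represent a post by its concepts plus all their ancestors."""
    features = set()
    for concept in extract_concepts(text, ontology):
        while concept is not None:
            features.add(concept)
            concept = ontology.get(concept)
    return features

post = "Why does lookup in a binary search tree beat a hash table here?"
print(sorted(hierarchical_index(post, ontology)))
# ['binary search tree', 'data structure', 'hash table', 'tree']
```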
50

Debugging Embedded Multimedia Application Execution Traces through Periodic Pattern Mining / Débogage des traces d’exécution des applications multimédia embarquées en utilisant la recherche de motifs périodiques

Lopez Cueva, Patricia 08 July 2013 (has links)
Increasing complexity in both the software and the underlying hardware, and ever tighter time-to-market pressures, are some of the key challenges faced when designing multimedia embedded systems. Optimizing the software debugging and validation phases can help to reduce development time significantly. A powerful tool used extensively when debugging embedded systems is the analysis of execution traces. However, the evolution of embedded system tracing techniques leads to execution traces containing a huge amount of information, making manual trace analysis unmanageable. In such situations, pattern mining techniques can help by automatically discovering interesting patterns in large amounts of data.
Concretely, in this thesis, we are interested in discovering periodic behaviors in multimedia applications. The contributions of this thesis therefore focus on periodic pattern mining techniques for the analysis of multimedia application execution traces. Regarding periodic pattern mining, we propose a definition of periodic pattern adapted to the characteristics of concurrent software. We then propose a condensed representation of the set of frequent periodic patterns, called core periodic concepts (CPC), by adopting an approach originating in triadic concept analysis. Moreover, we define certain connectivity properties of these patterns that allow us to implement an efficient CPC mining algorithm, called PerMiner. A thorough analysis shows the efficiency and scalability of PerMiner: it is at least two orders of magnitude faster than the state of the art. Moreover, we evaluate the efficiency of PerMiner over a real multimedia application trace and present the speedup achieved by a parallel version of the algorithm. Regarding embedded systems, we propose a first step towards a methodology giving initial guidelines on how to use our approach in the analysis of multimedia application execution traces. In addition, we propose several ways of preprocessing execution traces, as well as a competitor-finder tool for postprocessing the mining results. We also present a CPC visualization tool, called CPCViewer, that facilitates the analysis of a set of CPCs. Finally, we show that our approach can help in debugging multimedia applications through the study of two use cases over real multimedia application execution traces.
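As a rough illustration of the idea of grouping trace events that recur with a common period, the sketch below estimates a (period, phase) pair per event type and clusters events that share it. This is a toy stand-in, not PerMiner or the CPC formalism; the tolerance and the grouping criterion are assumptions.

```python
from collections import defaultdict

def periodic_events(trace, tolerance=0.1):
    """For each event type, estimate a (period, phase); events whose occurrences deviate
    too much from a regular schedule are discarded."""
    by_event = defaultdict(list)
    for ts, ev in trace:
        by_event[ev].append(ts)
    regular = {}
    for ev, times in by_event.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if not gaps:
            continue
        period = sum(gaps) / len(gaps)
        if all(abs(g - period) <= tolerance * period for g in gaps):
            regular[ev] = (round(period, 1), times[0] % round(period, 1))
    return regular

def group_into_concepts(regular):
    """Group events sharing the same (period, phase) -- a crude stand-in for a periodic concept."""
    concepts = defaultdict(set)
    for ev, key in regular.items():
        concepts[key].add(ev)
    return dict(concepts)

# toy trace: decode/display run together every 33 ms, a logger flushes every 100 ms
trace  = [(t, "decode") for t in (0, 33, 66, 99, 132)]
trace += [(t, "display") for t in (0, 33, 66, 99, 132)]
trace += [(t, "log_flush") for t in (5, 105, 205)]
print(group_into_concepts(periodic_events(trace)))
# {(33.0, 0.0): {'decode', 'display'}, (100.0, 5.0): {'log_flush'}}
```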
