1

Multivariant control flow analysis: application to behavior detection in programs

Laouadi, Rabah, 14 December 2016
Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases, the answer is yes, provided a certain approximation is accepted. There is a class of algorithms, little known outside of academia, that analyze and simulate a program to compute conservatively all the information that can be conveyed by an expression.
In this thesis, we present these algorithms, called CFAs (Control Flow Analysis), and more specifically the multivariant k-l-CFA algorithm. We combine the k-l-CFA algorithm with taint analysis, which consists in following tainted sensitive data through the control flow to determine whether it reaches a sink (an outgoing flow of the program). This combination, together with abstract interpretation for values, aims to identify as exhaustively as possible all behaviors performed by an application. One problem with this approach is the high number of false positives, which requires human post-processing. It is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously combinatorial, its complexity being exponential in the value of k. The first contribution of this thesis is to design a model and an implementation that is as efficient as possible, carefully separating the static and dynamic parts of the analysis, to allow scalability. The second contribution is a new CFA variant based on k-l-CFA, called *-CFA, which makes the parameter k a property of each variant, so that it is increased only in the contexts that justify it. To evaluate the effectiveness of our implementation of k-l-CFA, we compare it with the Wala framework. We then validate the taint analysis and behavior detection on the DroidBench benchmark. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
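To make the call-string idea behind a k-limited CFA concrete, the following minimal Python sketch (our own illustration, not the thesis implementation) threads a k-bounded calling context through a toy taint propagation. The variable names, the value of K, and the source/sink labels are all illustrative assumptions.

```python
from collections import defaultdict

K = 2  # context depth: larger k is more precise but exponentially more costly

def push_context(ctx, call_site, k=K):
    """Append a call site to the call string, keeping only the last k sites."""
    return (ctx + (call_site,))[-k:]

# Abstract store: (variable, context) -> frozenset of taint labels (join = set union).
facts = defaultdict(frozenset)

def assign(var, taint, ctx):
    facts[(var, ctx)] = facts[(var, ctx)] | taint

def call(param, arg, caller_ctx, call_site):
    """Propagate the argument's taint into the callee under a refined context."""
    callee_ctx = push_context(caller_ctx, call_site)
    assign(param, facts[(arg, caller_ctx)], callee_ctx)
    return callee_ctx

# Example: x = source(); f(x) at call site cs1; sink(p) inside f.
entry = ()                                    # empty context at program entry
assign("x", frozenset({"SOURCE"}), entry)
ctx_f = call("p", "x", entry, "cs1")
if "SOURCE" in facts[("p", ctx_f)]:
    print("potential leak: tainted value reaches a sink in context", ctx_f)
```

Increasing K lets the analysis distinguish more call chains, which is exactly where both the precision and the combinatorial cost of k-l-CFA come from.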
2

A Mobile and Online Outlier Detection over Multiple Data Streams: A Complex Event Processing Approach for Driving Behavior Detection

Igor Oliveira Vasconcelos, 24 July 2017
Driving is a daily task that allows individuals to travel faster and more comfortably; however, more than half of fatal crashes are related to reckless driving. Reckless maneuvers can be detected with good accuracy by analyzing data about driver-vehicle interactions, for instance abrupt turns, acceleration, and deceleration. Although there are algorithms for online anomaly detection, they are usually designed to run on computers with high computational power. In addition, they typically scale through parallel computing, grid computing, or cloud computing. This thesis presents an online anomaly detection approach based on complex event processing to enable driving behavior classification. In addition, we investigate whether mobile devices with limited computational power, such as smartphones, can be used for online detection of driving behavior. To do so, we first model and evaluate three online anomaly detection algorithms in the data stream processing paradigm, which take data from the smartphone and the in-vehicle embedded sensors as input.
The advantages of stream processing are that it reduces the amount of data transmitted from the mobile device to servers or the cloud, reduces the energy and battery usage caused by transmitting sensor data, and allows operation even when the mobile device is disconnected. To classify drivers, a statistical mechanism used in document mining to evaluate the importance of a word in a collection of documents, called inverse document frequency, was adapted to identify the importance of an anomaly in a data stream and to quantitatively evaluate how cautious or reckless a driver's maneuvers are. Finally, the approach (using the algorithm that achieved the best result in the first step) was evaluated through a case study of the driving behavior of 25 drivers in a real-world scenario. The results show a classification accuracy of 84 percent and an average processing time of 100 milliseconds.
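As a rough illustration of the two ingredients described above, the sketch below (an assumed simplification, not the thesis's complex event processing implementation) pairs a sliding-window z-score detector, one simple way to flag outliers online, with an IDF-style weight that rates an anomaly type as more telling when few drivers exhibit it. The window size, threshold, stream values, and driver counts are illustrative assumptions.

```python
import math
from collections import deque

class OnlineZScoreDetector:
    """Flags readings that deviate strongly from a sliding-window mean."""
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        self.window.append(value)
        if len(self.window) < 10:          # wait for a minimal warm-up
            return False
        mean = sum(self.window) / len(self.window)
        var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
        std = math.sqrt(var) or 1e-9
        return abs(value - mean) / std > self.threshold

def idf_weight(num_drivers, drivers_with_anomaly):
    """Rarer anomaly types (seen in fewer drivers) get a higher weight."""
    return math.log(num_drivers / (1 + drivers_with_anomaly))

# Example: score one driver's accelerometer stream with an abrupt maneuver at the end.
detector = OnlineZScoreDetector()
stream = [0.1, 0.2, 0.1, 0.15] * 20 + [4.5]
anomalies = sum(detector.update(v) for v in stream)
score = anomalies * idf_weight(num_drivers=25, drivers_with_anomaly=3)
print(f"anomalies={anomalies}, weighted recklessness score={score:.2f}")
```

Running everything on the device in this style is what keeps raw sensor data local: only anomaly counts and scores would need to leave the smartphone.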
3

Real-time Assessment, Prediction, and Scaffolding of Middle School Students’ Data Collection Skills within Physical Science Simulations

Sao Pedro, Michael A., 25 April 2013
Despite widespread recognition by science educators, researchers and K-12 frameworks that scientific inquiry should be an essential part of science education, typical classrooms and assessments still emphasize rote vocabulary, facts, and formulas. One of several reasons for this is that the rigorous assessment of complex inquiry skills is still in its infancy. Though progress has been made, there are still many challenges that hinder inquiry from being assessed in a meaningful, scalable, reliable and timely manner. To address some of these challenges and to realize the possibility of formative assessment of inquiry, we describe a novel approach for evaluating, tracking, and scaffolding inquiry process skills. These skills are demonstrated as students experiment with computer-based simulations. In this work, we focus on two skills related to data collection: designing controlled experiments and testing stated hypotheses. Central to this approach is the use and extension of techniques developed in the Intelligent Tutoring Systems and Educational Data Mining communities to handle the variety of ways in which students can demonstrate skills. To evaluate students' skills, we iteratively developed data-mined models (detectors) that can discern when students test their articulated hypotheses and design controlled experiments. To aggregate and track students' developing latent skill across activities, we use and extend the Bayesian Knowledge-Tracing framework (Corbett & Anderson, 1995). As part of this work, we directly address the scalability and reliability of these models' predictions by testing how well they predict for student data not used to build them. We found that these models demonstrate the potential to scale because they can correctly evaluate and track students' inquiry skills. The ability to evaluate students' inquiry also enables the system to provide automated, individualized feedback to students as they experiment. We also describe an approach for providing such scaffolding to students, and we tested the efficacy of these scaffolds in a study of how scaffolding impacts acquisition and transfer of skill across science topics. We found that students who received scaffolding were better able than those who did not to acquire skills in the topic in which they practiced, and to transfer those skills to a second topic when scaffolding was removed. Our overall findings suggest that computer-based simulations augmented with real-time feedback can be used to reliably measure the inquiry skills of interest and can help students learn how to demonstrate these skills. As such, our assessment approach and system as a whole show promise as a way to formatively assess students' inquiry.
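The knowledge-tracing step referenced above is well documented in the literature; the short sketch below shows the standard Bayesian Knowledge Tracing update (Corbett & Anderson, 1995) that such a system can use to aggregate a latent skill estimate across activities. The guess, slip, and learn parameters here are illustrative assumptions, not the detectors or fitted values from this work.

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """Update P(skill known) after observing one (in)correct demonstration."""
    if correct:
        evidence = p_know * (1 - p_slip)               # known and did not slip
        p_obs = evidence + (1 - p_know) * p_guess      # or guessed correctly
    else:
        evidence = p_know * p_slip                     # known but slipped
        p_obs = evidence + (1 - p_know) * (1 - p_guess)
    p_know_given_obs = evidence / p_obs                # Bayes step
    return p_know_given_obs + (1 - p_know_given_obs) * p_learn  # learning step

# Example: a student designs two controlled experiments, then an uncontrolled one.
p = 0.3  # prior P(L0) that the data-collection skill is already known
for outcome in [True, True, False]:
    p = bkt_update(p, outcome)
    print(f"P(skill known) = {p:.3f}")
```

Feeding each detector judgment (controlled experiment designed, hypothesis tested) into an update of this kind is what lets the estimate of a student's skill evolve across activities and topics.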