About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Heavyweight Pattern Mining in Attributed Flow Graphs

Simoes Gomes, Carolina Unknown Date
No description available.
32

Expression data flow graph: precise flow-sensitive pointer analysis for C programs

Thiessen, Rei Unknown Date
No description available.
33

Facilitating comprehension of Swift programs

Chernenko, Andrii January 2018 (has links)
Program comprehension is the process of gaining knowledge about a software system by extracting it from its source code or observing its behavior at runtime. When documentation is unavailable or missing, this is often the only reliable source of knowledge about the system, and the fact that up to 50% of total maintenance effort is spent understanding the system makes it even more important. The source code of large software systems contains thousands, sometimes millions, of lines of code, motivating the need for automation, which can be achieved with the help of program comprehension tools. This makes comprehension tools an essential factor in the adoption of new programming languages. This work proposes a way to fill this gap in the ecosystem of Swift, a new, innovative programming language aiming to cover a wide range of applications while being safe, expressive, and performant. The proposed solution is to bridge the gap between Swift and VizzAnalyzer, a program analysis framework featuring a range of analyses and visualizations, as well as a modular architecture that makes adding new analyses and visualizations easier. The idea is to define a formal model for representing Swift programs and to map it to the common program model used by VizzAnalyzer as the basis for analyses and visualizations. In addition, this work discusses the differences between Swift and the programming languages already supported by VizzAnalyzer, as well as practical aspects of extracting the models of Swift programs from their source code.
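As a minimal illustration of the model-mapping idea, the Python sketch below folds a simplified Swift declaration into a generic, graph-like program model. All class, field, and function names are hypothetical stand-ins, not VizzAnalyzer's actual API.

```python
# Hypothetical sketch: mapping a language-specific declaration onto a
# common graph-based program model, in the spirit of the approach above.
from dataclasses import dataclass, field

@dataclass
class CommonNode:
    kind: str                 # e.g. "class", "method"
    name: str
    children: list = field(default_factory=list)

def map_swift_decl(decl: dict) -> CommonNode:
    """Map a (simplified) Swift declaration to a common-model node."""
    # Swift-specific constructs must be folded into the common vocabulary;
    # here structs and extensions are treated as classes.
    kind_map = {"class": "class", "struct": "class",
                "extension": "class", "func": "method"}
    node = CommonNode(kind=kind_map.get(decl["kind"], "unknown"),
                      name=decl["name"])
    node.children = [map_swift_decl(c) for c in decl.get("members", [])]
    return node

# A struct with one method maps to a class node with a method child.
model = map_swift_decl({"kind": "struct", "name": "Point",
                        "members": [{"kind": "func", "name": "distance"}]})
assert model.kind == "class" and model.children[0].kind == "method"
```

The design point this illustrates is the one the abstract raises: constructs that have no direct counterpart in the already-supported languages (structs, extensions) must be mapped onto the common model's existing vocabulary.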
34

Next Generation Black-Box Web Application Vulnerability Analysis Framework

January 2017 (has links)
Web applications are an incredibly important aspect of our modern lives. Organizations and developers use automated vulnerability analysis tools, also known as scanners, to automatically find vulnerabilities in their web applications during development. Scanners have traditionally fallen into two types of approaches: black-box and white-box. In the black-box approaches, the scanner does not have access to the source code of the web application, whereas a white-box approach has access to the source code. Today’s state-of-the-art black-box vulnerability scanners employ various methods to fuzz and detect vulnerabilities in a web application. However, these scanners simply fuzz the web application with a number of known payloads in an attempt to trigger a vulnerability. This technique is simple but does not understand the web application that it is testing. This thesis presents a new approach to vulnerability analysis. The vulnerability analysis module presented uses a novel approach, Inductive Reverse Engineering (IRE), to understand and model the web application. IRE first attempts to understand the behavior of the web application by giving a certain number of input/output pairs to the web application. Then, the IRE module hypothesizes a set of programs (in a limited language specific to web applications, called AWL) that satisfy the input/output pairs. These hypotheses take the form of a directed acyclic graph (DAG). The AWL vulnerability analysis module can then attempt to detect vulnerabilities in this DAG. Further, it generates a payload based on the DAG, so the payload precisely targets the potential vulnerability (based on our understanding of the program). It then tests this potential vulnerability using the generated payload on the actual web application, and runs a verification procedure on the web application’s response to determine whether the potential vulnerability is actually exploitable. / Dissertation/Thesis / Masters Thesis Computer Science 2017
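The following self-contained Python toy illustrates the IRE loop under heavy simplification: observe input/output pairs, hypothesize a program explaining them, derive a payload from the hypothesis, and confirm it against the live application. The single-hypothesis "language" here stands in for the thesis's much richer AWL programs and DAGs; `web_app` is a made-up endpoint.

```python
# Toy IRE loop: observe, hypothesize, generate payload, verify.

def web_app(param: str) -> str:
    # Stand-in for a real endpoint that reflects user input unescaped.
    return "<html>Hello, " + param + "</html>"

def hypothesize(io_pairs):
    # If every output contains its input verbatim, hypothesize reflection.
    if all(i in o for i, o in io_pairs):
        return "reflects-input"
    return "constant-output"

probes = ["alice", "bob42"]
hypothesis = hypothesize([(p, web_app(p)) for p in probes])

if hypothesis == "reflects-input":
    # Unescaped reflection suggests XSS; build a payload for that hypothesis.
    payload = "<script>alert(1)</script>"
    response = web_app(payload)
    # Verification step: the payload survives into the response unescaped.
    print("vulnerable:", payload in response)   # -> vulnerable: True
```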
35

Static WCET Analysis Based on Abstract Interpretation and Counting of Elements

Bygde, Stefan January 2010 (has links)
In a real-time system, it is crucial to ensure that all tasks of the system hold their deadlines. A missed deadline in a real-time system means that the system has not been able to function correctly. If the system is safety critical, this can lead to disaster. To ensure that all tasks keep their deadlines, the Worst-Case Execution Time (WCET) of these tasks has to be known. This can be done by measuring the execution times of a task; however, this is inflexible, time consuming and in general not safe (i.e., the worst case might not be found). Unless the task is measured with all possible input combinations and configurations, which is in most cases out of the question, there is no way to guarantee that the longest measured time actually corresponds to the real worst case.

Static analysis analyses a safe model of the hardware together with the source or object code of a program to derive an estimate of the WCET. This estimate is guaranteed to be equal to or greater than the real WCET. This is done by making calculations which in all steps make sure that the time is exactly or conservatively estimated. In many cases, however, the execution time of a task or a program is highly dependent on the given input. Thus, the estimated worst case may correspond to some input or configuration which is rarely (or never) used in practice. For such systems, where execution time is highly input dependent, a more accurate timing analysis which takes input into consideration is desired.

In this thesis we present a framework based on abstract interpretation and counting of possible semantic states of a program. This is a general method of WCET analysis, which is language independent and platform independent. The two main applications of this framework are a loop bound analysis and a parametric analysis. The loop bound analysis can be used to quickly find upper bounds for loops in a program while the parametric framework provides an input-dependent estimation of the WCET. The input-dependent estimation can give much more accurate estimates if the input is known at run-time. / PROGRESS
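A minimal Python sketch of the loop-bound idea, assuming a toy counted loop of the form `i = init; while i < limit: i += step`: abstractly interpret the counter in the interval domain, then bound the iterations by counting the counter's possible states. Real WCET analysis operates on object code with richer abstract domains; this is only the counting principle.

```python
import math

def loop_bound(init_interval, limit, step):
    """Upper bound on iterations of `i = init; while i < limit: i += step`,
    where init_interval is the abstract (interval) value of init."""
    assert step > 0
    lo = init_interval[0]          # smallest start -> most iterations
    if lo >= limit:
        return 0
    # Count the possible counter states {lo, lo+step, ...} below `limit`.
    return math.ceil((limit - lo) / step)

# The counter starts somewhere in [0, 3]; the loop runs while i < 100, i += 2.
print(loop_bound((0, 3), 100, 2))   # -> at most 50 iterations
```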
36

Static Program Analysis

SHRESTHA, JAYESH January 2013 (has links)
No description available.
37

Enforcing secure information flow in client-side Web applications

Fragoso Femenin dos Santos, José 08 December 2014 (has links)
In this thesis, we address the issue of enforcing confidentiality and integrity policies in the context of client-side Web applications. Since most Web applications are developed in the JavaScript programming language, we study static, dynamic, and hybrid enforcement mechanisms for securing information flow in Core JavaScript, a fragment of JavaScript that retains its defining features. Specifically, we propose: a monitored semantics for dynamically enforcing secure information flow in Core JavaScript as well as a source-to-source transformation that inlines the proposed monitor, a type system that statically checks whether or not a program abides by a given information flow policy, and a hybrid type system that combines static and dynamic analyses in order to accept more secure programs than its fully static counterpart. Most JavaScript programs are designed to be executed in a browser in the context of a Web page. These programs often interact with the Web page in which they are included via a large number of external APIs provided by the browser. The execution of these APIs usually takes place outside the perimeter of the language. Hence, any realistic analysis of client-side JavaScript must take into account possible interactions with external APIs. To this end, we present a general methodology for extending security monitors to take into account the possible invocation of arbitrary APIs and we apply this methodology to a representative fragment of the DOM Core Level 1 API that captures DOM-specific information flows.
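The Python toy below illustrates the labeling idea behind such a monitor: values carry security labels, explicit flows join the labels of their operands, and a check at public sinks blocks secret data. It deliberately omits the hard part of the thesis's monitored semantics, implicit flows through control dependencies (the pc label), and is of course not Core JavaScript.

```python
# Toy dynamic information-flow monitor: explicit flows only.
from dataclasses import dataclass

HIGH, LOW = "H", "L"   # H = secret, L = public

@dataclass
class Labeled:
    value: object
    label: str

def join(a: str, b: str) -> str:
    # Least upper bound of two labels: H if either operand is secret.
    return HIGH if HIGH in (a, b) else LOW

def add(x: Labeled, y: Labeled) -> Labeled:
    # Explicit flow: the result is as secret as its most secret operand.
    return Labeled(x.value + y.value, join(x.label, y.label))

def public_sink(x: Labeled):
    # Monitor check at an output channel: secret data may not leave.
    if x.label == HIGH:
        raise RuntimeError("information flow violation")
    print(x.value)

secret = Labeled(42, HIGH)
public = Labeled(1, LOW)
public_sink(add(public, public))     # fine: L joined with L is L
# public_sink(add(secret, public))   # would raise: H taints the result
```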
38

Data Lineage Analysis of Frameworks with Complex Interaction Patterns

Hýbl, Oskar January 2020 (has links)
Manta Flow is a tool for analyzing data flow in enterprise environments. It features a Java scanner, a module that uses static analysis to determine the flows through Java applications. To analyze an application built on some framework, the scanner requires a dedicated plugin. Although the Java scanner provides plugins for several frameworks, to be usable for real applications it needs to support as many frameworks as possible, which requires the implementation of new plugins. Applications using Apache Spark, a framework for cluster computing, are increasingly popular. Therefore, we designed and implemented a Java scanner plugin that allows the scanner to analyze Spark applications. As Spark focuses on data processing, this presented several challenges that were not encountered in other frameworks. In particular, it was necessary to resolve the data schema in various scenarios and track the schema changes throughout any operations invoked on the data. Of the multiple APIs Spark provides for data processing, we focused on the Spark SQL module, notably on Dataset, omitting the legacy RDD API. We also implemented support for data access, covering JDBC and chosen file formats. The implementation has been thoroughly tested and is proven to work correctly as a part of Manta Flow, which features the plugin in...
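A simplified Python sketch of the schema-tracking challenge described above: statically propagate which source columns feed each output column through a chain of Dataset-like operations. The operation names are illustrative stand-ins; the actual plugin analyzes Java code rather than symbolic schemas.

```python
# Toy column-lineage propagation through Dataset-like operations.

def select(schema, columns):
    # Projection keeps only the chosen columns and their lineage.
    return {c: schema[c] for c in columns}

def with_column(schema, name, source_columns):
    # A derived column's lineage is the union of its sources' lineages.
    lineage = set().union(*(schema[c] for c in source_columns))
    return {**schema, name: lineage}

# Initial schema: each column traces to itself in the source table.
schema = {c: {f"orders.{c}"} for c in ("price", "qty", "ts")}
schema = with_column(schema, "total", ["price", "qty"])
schema = select(schema, ["total", "ts"])
print(schema)
# {'total': {'orders.price', 'orders.qty'}, 'ts': {'orders.ts'}}
```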
39

Exploring the use of call stack depth limits to reduce regression testing costs

Bogren, Patrik, Kristola, Isak January 2021 (has links)
Regression testing is performed after existing source code has been modified to verify that no new faults have been introduced by the changes. Test case selection can be used to reduce the effort of regression testing by selecting a smaller subset of the test suite for later execution. Several criteria and objectives can be used as constraints that should be satisfied by the selection process. One common criterion is function coverage, which can be represented by a coverage matrix that maps test cases to methods under test. The process of generating and evaluating these matrices can be very time-consuming for large matrices, since their complexity increases exponentially with the number of tests included. To the best of our knowledge, no techniques for reducing execution matrix size have been proposed. This thesis develops a matrix-reduction technique based on analysis of call stack data. It studies the effects of limiting the call stack depth in terms of coverage accuracy, matrix size, and generation costs. Further, it uses a tool that can instrument Java projects using Java’s instrumentation API to collect coverage information on open-source Java projects for varying depth limits of the call stack. Our results show that the stack depth limit can be significantly reduced while retaining high coverage and that matrix size can be decreased by up to 50%. The metric we used to indicate the difficulty of splitting up the matrix closely resembled the curve for coverage. However, we did not see any significant differences in execution time for lower depth limits.
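A small Python sketch of the depth-limiting idea: build the test-to-method coverage matrix from recorded call stacks, keeping only frames up to a configurable depth. The stacks here are toy data; the thesis collects real ones from Java projects via the instrumentation API.

```python
# Build a coverage matrix from call stacks truncated at a depth limit.

def coverage_matrix(stacks_per_test, depth_limit):
    """Map each test to the set of methods seen within depth_limit frames."""
    matrix = {}
    for test, stacks in stacks_per_test.items():
        covered = set()
        for stack in stacks:                 # stack[0] is the entry point
            covered.update(stack[:depth_limit])
        matrix[test] = covered
    return matrix

stacks = {
    "testCheckout": [["main", "checkout", "priceOf", "round"]],
    "testSearch":   [["main", "search", "rank", "round"]],
}
full    = coverage_matrix(stacks, depth_limit=10)
limited = coverage_matrix(stacks, depth_limit=2)
print(len(full["testCheckout"]), len(limited["testCheckout"]))   # -> 4 2
# Shallower limits shrink the matrix; the thesis measures how much
# coverage accuracy is retained as the limit decreases.
```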
40

Safe Concurrent Programming and Execution

Pyla, Hari Krishna 05 March 2013 (has links)
The increasing prevalence of multi- and many-core processors has brought the issues of concurrency and parallelism to the forefront of everyday computing. Even for applications amenable to traditional parallelization techniques, the subtleties of concurrent programming are known to introduce concurrency bugs. Because of the potential for concurrency bugs, programmers find it hard to write correct concurrent code. To take full advantage of parallel shared-memory platforms, application programmers need safe and efficient mechanisms that can support a wide range of parallel applications. In addition, a large body of applications is inherently hard to parallelize; their data and control dependencies impose execution-order constraints that preclude the use of traditional parallelization techniques. Sensitive to their input data, a substantial number of applications fail to scale well, leaving cores idle. To improve the performance of such applications, application programmers need effective mechanisms that can fully leverage multi- and many-core architectures. These challenges stand in the way of realizing the true potential of emerging many-core platforms. The techniques described in this dissertation address these challenges. Specifically, this dissertation contributes techniques to transparently detect and eliminate several concurrency bugs, including deadlocks, asymmetric write-write data races, priority inversion, livelocks, order violations, and bugs that stem from the presence of asynchronous signaling and locks. A second major contribution of this dissertation is a programming framework that exploits coarse-grain speculative parallelism to improve the performance of otherwise hard-to-parallelize applications. / Ph. D.
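As one concrete example of the kind of detection involved, the Python toy below flags potential deadlocks as cycles in a lock-acquisition graph (an edge A -> B means some thread held A while requesting B). This is a standard textbook technique shown purely for illustration; the dissertation's runtime detects such bugs transparently during execution.

```python
# Toy deadlock detection: cycles in a lock-acquisition order graph.

def build_lock_graph(traces):
    """traces: per-thread lists giving each thread's lock-acquisition order."""
    edges = set()
    for trace in traces:
        for held, requested in zip(trace, trace[1:]):
            edges.add((held, requested))
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def reachable(src, dst, seen=frozenset()):
        return src == dst or any(
            n not in seen and reachable(n, dst, seen | {n})
            for n in graph.get(src, ()))
    # An edge (a, b) closes a cycle if b can reach back to a.
    return any(reachable(b, a) for a, b in edges)

# Thread 1 takes L1 then L2; thread 2 takes L2 then L1: classic deadlock.
print(has_cycle(build_lock_graph([["L1", "L2"], ["L2", "L1"]])))  # -> True
```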
