31

Expression data flow graph: precise flow-sensitive pointer analysis for C programs

Thiessen, Rei Unknown Date
No description available.
32

Facilitating comprehension of Swift programs

Chernenko, Andrii January 2018 (has links)
Program comprehension is the process of gaining knowledge about a software system by extracting it from the system's source code or by observing its behavior at runtime. When documentation is unavailable, this is often the only reliable source of knowledge about the system, and the fact that up to 50% of total maintenance effort is spent understanding the system makes it even more important. The source code of large software systems contains thousands, sometimes millions, of lines of code, motivating the need for automation, which can be achieved with the help of program comprehension tools. This makes comprehension tools an essential factor in the adoption of new programming languages. This work proposes a way to fill this tooling gap in the ecosystem of Swift, a new, innovative programming language aiming to cover a wide range of applications while being safe, expressive, and performant. The proposed solution is to bridge the gap between Swift and VizzAnalyzer, a program analysis framework featuring a range of analyses and visualizations, as well as a modular architecture that makes adding new analyses and visualizations easier. The idea is to define a formal model for representing Swift programs and to map it to the common program model used by VizzAnalyzer as the basis for analyses and visualizations. In addition, this work discusses the differences between Swift and the programming languages already supported by VizzAnalyzer, as well as the practical aspects of extracting the models of Swift programs from their source code.
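To make the mapping idea above concrete, here is a minimal, hypothetical Java sketch of translating a simplified Swift-specific model element into a language-neutral program-model node. The class names (SwiftFunction, CommonMethodNode) are illustrative assumptions and do not reflect VizzAnalyzer's actual API or the model defined in the thesis.

```java
// Illustrative sketch of the bridging idea: mapping a (greatly simplified)
// Swift-specific model onto a language-neutral common program model.
// The class names are hypothetical, not VizzAnalyzer's actual API.
import java.util.List;
import java.util.stream.Collectors;

public class SwiftToCommonModel {

    /** Minimal Swift-specific model element. */
    record SwiftFunction(String name, List<String> parameterLabels, String returnType) {}

    /** Minimal language-neutral element used by downstream analyses. */
    record CommonMethodNode(String qualifiedName, int arity, String returnType) {}

    static CommonMethodNode map(String moduleName, SwiftFunction f) {
        // A Swift function like greet(person:from:) becomes one common-model method;
        // parameter labels are folded into the qualified name to keep overloads apart.
        String labels = f.parameterLabels().stream()
                .map(l -> l + ":")
                .collect(Collectors.joining());
        String qualified = moduleName + "." + f.name() + "(" + labels + ")";
        return new CommonMethodNode(qualified, f.parameterLabels().size(), f.returnType());
    }

    public static void main(String[] args) {
        SwiftFunction greet = new SwiftFunction("greet", List.of("person", "from"), "String");
        System.out.println(map("Greeter", greet));
        // CommonMethodNode[qualifiedName=Greeter.greet(person:from:), arity=2, returnType=String]
    }
}
```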
33

Next Generation Black-Box Web Application Vulnerability Analysis Framework

January 2017 (has links)
abstract: Web applications are an incredibly important aspect of our modern lives. Organizations and developers use automated vulnerability analysis tools, also known as scanners, to automatically find vulnerabilities in their web applications during development. Scanners have traditionally fallen into two types of approaches: black-box and white-box. In the black-box approach, the scanner does not have access to the source code of the web application, whereas a white-box approach has access to the source code. Today's state-of-the-art black-box vulnerability scanners employ various methods to fuzz and detect vulnerabilities in a web application. However, these scanners fuzz the web application with a number of known payloads, trying to trigger a vulnerability. This technique is simple but builds no understanding of the web application being tested. This thesis presents a new approach to vulnerability analysis. The vulnerability analysis module presented uses a novel approach, Inductive Reverse Engineering (IRE), to understand and model the web application. IRE first attempts to understand the behavior of the web application by collecting a certain number of input/output pairs from it. Then, the IRE module hypothesizes a set of programs (in a limited language specific to web applications, called AWL) that satisfy the input/output pairs. These hypotheses take the form of a directed acyclic graph (DAG). The AWL vulnerability analysis module can then attempt to detect vulnerabilities in this DAG. Further, it generates a payload based on the DAG, so the payload precisely targets the potential vulnerability (based on this understanding of the program). It then tests the potential vulnerability by sending the generated payload to the actual web application and uses a verification procedure to determine, from the web application's response, whether the potential vulnerability is real. / Dissertation/Thesis / Masters Thesis Computer Science 2017
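As a heavily simplified illustration of the inductive step described above, the following Java sketch keeps only the candidate transformations, from a tiny hand-picked hypothesis space, that are consistent with observed input/output pairs. AWL, the DAG representation, and payload generation are not modelled; the hypothesis space and observed behaviours are invented for the example.

```java
// Toy sketch of the inductive step only: keep the hypotheses from a tiny,
// hand-picked space that agree with the observed input/output pairs.
// AWL, the DAG representation and payload generation are not modelled here.
import java.util.*;
import java.util.function.UnaryOperator;

public class InductiveStep {

    public static void main(String[] args) {
        // Observed behaviour of the web application for some parameter.
        Map<String, String> observed = Map.of(
                "abc", "ABC",
                "x<y", "X<Y");

        // Tiny hypothesis space of candidate server-side transformations.
        Map<String, UnaryOperator<String>> hypotheses = Map.of(
                "identity",     s -> s,
                "uppercase",    s -> s.toUpperCase(),
                "html-escape",  s -> s.replace("<", "&lt;").replace(">", "&gt;"),
                "upper+escape", s -> s.toUpperCase().replace("<", "&lt;"));

        // Keep the hypotheses consistent with every observation.
        List<String> consistent = new ArrayList<>();
        for (var h : hypotheses.entrySet()) {
            boolean ok = observed.entrySet().stream()
                    .allMatch(o -> h.getValue().apply(o.getKey()).equals(o.getValue()));
            if (ok) consistent.add(h.getKey());
        }

        // "uppercase" survives; the escaping hypotheses are ruled out because '<'
        // came back unescaped, which is exactly the kind of clue a scanner can use.
        System.out.println(consistent);
    }
}
```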
34

Static WCET Analysis Based on Abstract Interpretation and Counting of Elements

Bygde, Stefan January 2010 (has links)
In a real-time system, it is crucial to ensure that all tasks of the system hold their deadlines. A missed deadline in a real-time system means that the system has not been able to function correctly. If the system is safety critical, this can lead to disaster. To ensure that all tasks keep their deadlines, the Worst-Case Execution Time (WCET) of these tasks has to be known. This can be done by measuring the execution times of a task; however, this is inflexible, time consuming and in general not safe (i.e., the worst case might not be found). Unless the task is measured with all possible input combinations and configurations, which is in most cases out of the question, there is no way to guarantee that the longest measured time actually corresponds to the real worst case. Static analysis analyses a safe model of the hardware together with the source or object code of a program to derive an estimate of the WCET. This estimate is guaranteed to be equal to or greater than the real WCET. This is done by making calculations which in all steps make sure that the time is exactly or conservatively estimated. In many cases, however, the execution time of a task or a program is highly dependent on the given input. Thus, the estimated worst case may correspond to some input or configuration which is rarely (or never) used in practice. For such systems, where execution time is highly input dependent, a more accurate timing analysis which takes input into consideration is desired. In this thesis we present a framework based on abstract interpretation and counting of possible semantic states of a program. This is a general method of WCET analysis, which is language independent and platform independent. The two main applications of this framework are a loop bound analysis and a parametric analysis. The loop bound analysis can be used to quickly find upper bounds for loops in a program, while the parametric framework provides an input-dependent estimation of the WCET. The input-dependent estimation can give much more accurate estimates if the input is known at run-time. / PROGRESS
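As a small illustration of the loop bound analysis idea, the following Java sketch bounds the iteration count of a counted loop using a simple interval abstract domain. It is an illustrative example under assumed loop semantics, not the framework developed in the thesis.

```java
// Illustrative sketch (not the thesis framework): bounding loop iterations
// with a simple interval abstract domain, in the spirit of loop bound analysis.
public class IntervalLoopBound {

    /** Closed integer interval [lo, hi]. */
    record Interval(long lo, long hi) {}

    /**
     * Upper bound on iterations of "for (i = init; i < limit; i += step)"
     * when init, limit and step are only known as intervals.
     * Returns -1 if no finite bound can be guaranteed.
     */
    static long loopBound(Interval init, Interval limit, Interval step) {
        if (step.lo() <= 0) {
            return -1; // the step might not make progress: no safe bound
        }
        long maxDistance = limit.hi() - init.lo(); // widest possible range to cover
        if (maxDistance <= 0) {
            return 0; // the loop body can never execute
        }
        // Worst case: smallest guaranteed progress per iteration (ceiling division).
        return (maxDistance + step.lo() - 1) / step.lo();
    }

    public static void main(String[] args) {
        // i starts somewhere in [0, 3], loops while i < [90, 100], i += [2, 5]
        Interval init = new Interval(0, 3);
        Interval limit = new Interval(90, 100);
        Interval step = new Interval(2, 5);
        System.out.println("Safe upper bound on iterations: "
                + loopBound(init, limit, step)); // prints 50
    }
}
```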
35

Static Program Analysis

SHRESTHA, JAYESH January 2013 (has links)
No description available.
36

Vers l’établissement du flux d’information sûr dans les applications Web côté client / Enforcing secure information flow in client-side Web applications

Fragoso Femenin dos Santos, José 08 December 2014 (has links)
Nous nous intéressons à la mise en œuvre des politiques de confidentialité et d'intégrité des données dans le contexte des applications Web côté client. Étant donné que la plupart des applications Web est développée en JavaScript, on propose des mécanismes statiques, dynamiques et hybrides pour sécuriser le flux d'information en Core JavaScript - un fragment de JavaScript qui retient ses caractéristiques fondamentales. Nous étudions en particulier: une sémantique à dispositif de contrôle afin de garantir dynamiquement le respect des politiques de sécurité en Core JavaScript aussi bien qu'un compilateur qui instrumente un programme avec le dispositif de contrôle proposé, un système de types qui vérifie statiquement si un programme respecte une politique de sécurité donnée, un système de types hybride qui combine des techniques d'analyse statique à des techniques d'analyse dynamique afin d'accepter des programmes surs que sa version purement statique est obligée de rejeter. La plupart des programmes JavaScript s'exécute dans un navigateur Web dans le contexte d'une page Web. Ces programmes interagissent avec la page dans laquelle ils sont inclus parmi des APIs externes fournies par le navigateur. Souvent, l'exécution d'une API externe dépasse le périmètre de l'interprète du langage. Ainsi, une analyse réaliste des programmes JavaScript côté client doit considérer l'invocation potentielle des APIs externes. Pour cela, on présente une méthodologie générale qui permet d'étendre des dispositifs de contrôle de sécurité afin qu'ils prennent en compte l'invocation potentielle des APIs externes et on applique cette méthodologie à un fragment important de l'API DOM Core Level 1. / In this thesis, we address the issue of enforcing confidentiality and integrity policies in the context of client-side Web applications. Since most Web applications are developed in the JavaScript programming language, we study static, dynamic, and hybrid enforcement mechanisms for securing information flow in Core JavaScript --- a fragment of JavaScript that retains its defining features. Specifically, we propose: a monitored semantics for dynamically enforcing secure information flow in Core JavaScript as well as a source-to-source transformation that inlines the proposed monitor, a type system that statically checks whether or not a program abides by a given information flow policy, and a hybrid type system that combines static and dynamic analyses in order to accept more secure programs than its fully static counterpart. Most JavaScript programs are designed to be executed in a browser in the context of a Web page. These programs often interact with the Web page in which they are included via a large number of external APIs provided by the browser. The execution of these APIs usually takes place outside the perimeter of the language. Hence, any realistic analysis of client-side JavaScript must take into account possible interactions with external APIs. To this end, we present a general methodology for extending security monitors to take into account the possible invocation of arbitrary APIs and we apply this methodology to a representative fragment of the DOM Core Level 1 API that captures DOM-specific information flows.
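The following Java sketch illustrates the general shape of a dynamic information-flow monitor of the kind described above: variables carry security labels, the control context carries a pc label, and assignments that would leak secret information are stopped. The specific rule shown (a no-sensitive-upgrade-style check) and all names are assumptions chosen for illustration, not the monitored semantics for Core JavaScript defined in the thesis.

```java
// Illustrative sketch of a dynamic information-flow monitor: labels on
// variables plus a pc label for the control context. Not the thesis artefact;
// the NSU-style rule and the names are assumptions.
import java.util.HashMap;
import java.util.Map;

public class FlowMonitor {
    enum Label { LOW, HIGH }

    static Label lub(Label a, Label b) { // least upper bound of two labels
        return (a == Label.HIGH || b == Label.HIGH) ? Label.HIGH : Label.LOW;
    }

    private final Map<String, Label> varLabels = new HashMap<>();
    private Label pc = Label.LOW; // label of the current control context

    /** x := e, where exprLabel is the join of the labels of e's reads. */
    void assign(String x, Label exprLabel) {
        Label current = varLabels.getOrDefault(x, Label.LOW);
        // No-sensitive-upgrade: refuse to raise a low variable under high control.
        if (pc == Label.HIGH && current == Label.LOW) {
            throw new SecurityException("implicit flow into low variable " + x);
        }
        varLabels.put(x, lub(exprLabel, pc));
    }

    /** Entering a branch whose guard has the given label; returns the saved pc. */
    Label enterBranch(Label guardLabel) {
        Label saved = pc;
        pc = lub(pc, guardLabel);
        return saved;
    }

    void leaveBranch(Label saved) { pc = saved; }

    public static void main(String[] args) {
        FlowMonitor m = new FlowMonitor();
        m.assign("publicOut", Label.LOW);        // fine: low := low under low pc
        Label saved = m.enterBranch(Label.HIGH); // if (secret) { ... }
        try {
            m.assign("publicOut", Label.LOW);    // blocked: implicit flow under high pc
        } catch (SecurityException e) {
            System.out.println("monitor stopped execution: " + e.getMessage());
        }
        m.leaveBranch(saved);
    }
}
```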
37

Analysis Techniques for Concurrent Programming Languages

Tamarit Muñoz, Salvador 02 September 2013 (has links)
Los lenguajes concurrentes están cada día más presentes en nuestra sociedad, tanto en las nuevas tecnologías como en los sistemas utilizados de manera cotidiana. Más aún, dada la actual distribución de los sistemas y su arquitectura interna, cabe esperar que este hecho siga siendo una realidad en los próximos años. En este contexto, el desarrollo de herramientas de apoyo al desarrollo de programas concurrentes se vuelve esencial. Además, el comportamiento de los sistemas concurrentes es especialmente difícil de analizar, por lo que cualquier herramienta que ayude en esta tarea, aun cuando sea limitada, será de gran utilidad. Por ejemplo, podemos encontrar herramientas para la depuración, análisis, comprobación, optimización, o simplificación de programas. Muchas de ellas son ampliamente utilizadas por los programadores hoy en día. El propósito de esta tesis es introducir, a través de diferentes lenguajes de programación concurrentes, técnicas de análisis que puedan ayudar a mejorar la experiencia del desarrollo y publicación de software para modelos concurrentes. En esta tesis se introducen tanto análisis estáticos (aproximando todas las posibles ejecuciones) como dinámicos (considerando una ejecución en concreto). Los trabajos aquí propuestos difieren lo suficiente entre sí para constituir ideas totalmente independientes, pero manteniendo un nexo común: el hecho de ser un análisis para un lenguaje concurrente. Todos los análisis presentados han sido definidos formalmente y se ha probado su corrección, asegurando que los resultados obtenidos tendrán el grado de fiabilidad necesario en sistemas que lo requieran, como por ejemplo, en sistemas críticos. Además, se incluye la descripción de las herramientas software que implementan las diferentes ideas propuestas. Esto le da al trabajo una utilidad más allá del marco teórico, permitiendo poner en práctica y probar con ejemplos reales los diferentes análisis. Todas las ideas aquí presentadas constituyen, por sí mismas, propuestas aplicables en multitud de contextos y problemas actuales. Además, individualmente sirven de punto de partida para otros análisis derivados, así como para la adaptación a otros lenguajes de la misma familia. Esto le da un valor añadido a este trabajo, como bien atestiguan algunos trabajos posteriores que ya se están beneficiando de los resultados obtenidos en esta tesis. / Concurrent languages are increasingly present in our society, both in new technologies and in the systems used on a daily basis. Moreover, given the current distribution of systems and their internal architecture, one can expect that this remains so in the coming years. In this context, the development of tools to support the implementation of concurrent programs becomes essential. Furthermore, the behavior of concurrent systems is particularly difficult to analyse, so any tool that helps in this task, even if in a limited way, will be very useful. For example, one can find tools for debugging, analysis, testing, optimisation, or simplification of programs, which are widely used by programmers nowadays. The purpose of this thesis is to introduce, through various concurrent programming languages, some analysis techniques that can help to improve the experience of software development and release for concurrent models. This thesis introduces both static (approximating all possible executions) and dynamic (considering a specific execution) analyses.
The topics considered here differ enough from each other to be fully independent. Nevertheless, they have a common link: they can be used to analyse properties of a concurrent programming language. All the analyses presented here have been formally defined and their correctness has been proved, ensuring that the results will have the degree of reliability needed in some systems (for instance, in critical systems). It also includes a description of the software tools that implement the different ideas proposed. This gives the work a usefulness well beyond the theoretical aspect, allowing us to put it into practice and to test the different analyses with real-world examples. All the ideas presented here are, by themselves, approaches that can be applied to many current contexts and problems. Moreover, individually they serve as a starting point for other derived analyses, as well as for adaptation to other languages of the same family. This gives an added value to this work, a fact confirmed by some later works that are already benefiting from the results obtained in this thesis. / Tamarit Muñoz, S. (2013). Analysis Techniques for Concurrent Programming Languages [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31651 / TESIS
38

User-Intention Based Program Analysis for Android Security

Elish, Karim Omar Mahmoud 29 July 2015 (has links)
The number of mobile applications (i.e., apps) is rapidly growing as mobile computing becomes an integral part of the modern user experience. Malicious apps have infiltrated open marketplaces for mobile platforms. These malicious apps can exfiltrate users' private data, abuse system resources, or disrupt regular services. Despite recent advances in mobile security, the problem of detecting vulnerable and malicious mobile apps with high detection accuracy remains open. In this thesis, we address the problem of Android security by presenting a new quantitative program analysis framework for security vetting of Android apps. We first introduce a highly accurate proactive detection solution for detecting individual malicious apps. Our approach enforces a benign property as opposed to chasing malware signatures, and uses one complex feature rather than the multiple features used in existing malware detection methods. In particular, we statically extract a data-flow feature capturing how user inputs trigger sensitive critical operations, a property referred to as user-trigger dependence. This feature is extracted through nontrivial Android-specific static program analysis and can be used in various quantitative analytical methods. Our evaluation on thousands of malicious apps and free popular apps gives a detection accuracy (2% false negative rate and false positive rate) that is better than, or at least competitive with, the state of the art. Furthermore, our method discovers new malicious apps available in the Google Play store that had not previously been detected by anti-virus scanning tools. Second, we present a new app collusion detection approach and algorithms to analyze pairs or groups of communicating apps. App collusion is a new technique utilized by attackers to evade standard detection: two or more apps, each appearing benign, communicate to perform a malicious task. Most existing solutions assume the attack model of a stand-alone malicious app and hence cannot detect app collusion. We first present experimental evidence of the technical challenges associated with detecting app collusion. Then, we address these challenges by introducing a scalable, in-depth cross-app static flow analysis approach to identify the risk level associated with communicating apps. Our approach statically analyzes the sensitivity and the context of each inter-app communication with low analysis complexity, and defines fine-grained security policies for inter-app communication risk detection. Our evaluation results on thousands of free popular apps indicate that our technique is effective. It generates four times fewer false positives than the state-of-the-art collusion-detection solution, enhancing the detection capability. The advantages of our inter-app communication analysis approach are its scalability with low complexity and its substantially improved detection accuracy compared to the state-of-the-art solution. These kinds of proactive defense solutions allow defenders to stay ahead of constantly evolving malware threats. / Ph. D.
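The following Java sketch caricatures the user-trigger dependence idea: it flags a toy "app" in which a sensitive operation is reachable without passing through any user-input handler. The call graph, method names, and the plain reachability check are illustrative assumptions; the analysis described in the thesis is a static data-flow analysis over real Android apps, not this simplification.

```java
// Illustrative sketch of the user-trigger dependence idea: flag apps in which a
// sensitive operation is reachable without passing through a user-input handler.
// The call graph here is a toy map; a real analysis would build it from bytecode.
import java.util.*;

public class UserTriggerCheck {

    /** Toy call graph: caller -> callees. */
    static final Map<String, List<String>> CALLS = Map.of(
            "onCreate",            List.of("startBackgroundTask"),
            "onClickSend",         List.of("sendSms"),   // user-triggered path
            "startBackgroundTask", List.of("sendSms"));  // non-user-triggered path

    static final Set<String> USER_HANDLERS = Set.of("onClickSend");
    static final Set<String> SENSITIVE = Set.of("sendSms");

    /** Does some path from 'entry' reach a sensitive call without a user handler on it? */
    static boolean reachesSensitiveWithoutUser(String entry) {
        Deque<String> worklist = new ArrayDeque<>(List.of(entry));
        Set<String> seen = new HashSet<>();
        while (!worklist.isEmpty()) {
            String m = worklist.pop();
            if (!seen.add(m) || USER_HANDLERS.contains(m)) continue; // stop at user handlers
            if (SENSITIVE.contains(m)) return true;
            worklist.addAll(CALLS.getOrDefault(m, List.of()));
        }
        return false;
    }

    public static void main(String[] args) {
        // onCreate -> startBackgroundTask -> sendSms has no user trigger on the path,
        // so this toy "app" would be flagged as suspicious.
        System.out.println(reachesSensitiveWithoutUser("onCreate"));    // true
        System.out.println(reachesSensitiveWithoutUser("onClickSend")); // false
    }
}
```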
39

Analýza datových toků pro knihovny se složitými vzory interakcí / Data Lineage Analysis of Frameworks with Complex Interaction Patterns

Hýbl, Oskar January 2020 (has links)
Manta Flow is a tool for analyzing data flow in an enterprise environment. It features a Java scanner, a module that uses static analysis to determine the flows through Java applications. To analyze an application built on some framework, the scanner requires a dedicated plugin for that framework. Although the Java scanner provides plugins for several frameworks, to be usable for real applications it is essential that the scanner support as many frameworks as possible, which requires the implementation of new plugins. Applications using Apache Spark, a framework for cluster computing, are increasingly popular. We therefore designed and implemented a Java scanner plugin that allows the scanner to analyze Spark applications. As Spark focuses on data processing, this presented several challenges that had not been encountered with other frameworks. In particular, it was necessary to resolve the data schema in various scenarios and to track schema changes throughout any operations invoked on the data. Of the multiple APIs Spark provides for data processing, we focused on the Spark SQL module, notably on Dataset, omitting the legacy RDD API. We also implemented support for data access, covering JDBC and chosen file formats. The implementation has been thoroughly tested and is proven to work correctly as a part of Manta Flow, which features the plugin in...
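The schema-tracking challenge mentioned above can be illustrated with a small, framework-free Java sketch that propagates column-level lineage through operations shaped like Dataset.select and withColumn. The data model and helper names are assumptions for illustration; this is neither Manta Flow's internal model nor Spark's actual API.

```java
// Illustrative sketch (not Manta's model): propagating column-level lineage
// through schema-changing operations similar to Dataset.select / withColumn.
import java.util.*;

public class ColumnLineage {

    /** Each output column maps to the set of source columns it was derived from. */
    static Map<String, Set<String>> identity(List<String> schema) {
        Map<String, Set<String>> lineage = new LinkedHashMap<>();
        for (String col : schema) lineage.put(col, Set.of(col));
        return lineage;
    }

    /** A derived column (e.g. withColumn("total", price * quantity)). */
    static Map<String, Set<String>> withColumn(Map<String, Set<String>> in,
                                               String newCol, Set<String> sources) {
        Map<String, Set<String>> out = new LinkedHashMap<>(in);
        Set<String> resolved = new HashSet<>();
        for (String s : sources) resolved.addAll(in.getOrDefault(s, Set.of(s)));
        out.put(newCol, resolved);
        return out;
    }

    /** A projection (e.g. select("id", "total")) keeps only the listed columns. */
    static Map<String, Set<String>> select(Map<String, Set<String>> in, String... cols) {
        Map<String, Set<String>> out = new LinkedHashMap<>();
        for (String c : cols) out.put(c, in.getOrDefault(c, Set.of(c)));
        return out;
    }

    public static void main(String[] args) {
        var lineage = identity(List.of("id", "price", "quantity"));
        lineage = withColumn(lineage, "total", Set.of("price", "quantity"));
        lineage = select(lineage, "id", "total");
        // "id" maps to itself; "total" traces back to price and quantity.
        System.out.println(lineage);
    }
}
```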
40

Exploring the use of call stack depth limits to reduce regression testing costs

Bogren, Patrik, Kristola, Isak January 2021 (has links)
Regression testing is performed after existing source code has been modified, to verify that no new faults have been introduced by the changes. Test case selection can be used to reduce the effort of regression testing by selecting a smaller subset of the test suite for later execution. Several criteria and objectives can be used as constraints that should be satisfied by the selection process. One common criterion is function coverage, which can be represented by a coverage matrix that maps test cases to methods under test. The process of generating and evaluating these matrices can be very time consuming for large matrices, since their complexity increases exponentially with the number of tests included. To the best of our knowledge, no techniques for reducing execution matrix size have been proposed. This thesis develops a matrix-reduction technique based on analysis of call stack data. It studies the effects of limiting the call stack depth in terms of coverage accuracy, matrix size, and generation costs. Further, it uses a tool that instruments Java projects through Java's instrumentation API to collect coverage information on open-source Java projects for varying depth limits of the call stack. Our results show that the stack depth limit can be significantly reduced while retaining high coverage, and that matrix size can be decreased by up to 50%. The metric we used to indicate the difficulty of splitting up the matrix closely resembled the curve for coverage. However, we did not see any significant differences in execution time for lower depth limits.
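The following Java sketch illustrates, on assumed data, how a test-to-method coverage matrix might be derived from recorded call stacks truncated at a configurable depth limit. The data model, the direction in which the limit truncates the stack, and all names are assumptions for illustration, not the instrumentation tooling used in the thesis.

```java
// Illustrative sketch: building a test-to-method coverage matrix from recorded
// call stacks, truncated at a configurable depth limit (hypothetical data model).
import java.util.*;

public class DepthLimitedCoverage {

    /** stacksPerTest: test name -> observed call stacks (innermost frame first). */
    static Map<String, Set<String>> coverage(Map<String, List<List<String>>> stacksPerTest,
                                             int depthLimit) {
        Map<String, Set<String>> matrix = new LinkedHashMap<>();
        for (var e : stacksPerTest.entrySet()) {
            Set<String> covered = new TreeSet<>();
            for (List<String> stack : e.getValue()) {
                // Keep only the frames within the depth limit.
                covered.addAll(stack.subList(0, Math.min(depthLimit, stack.size())));
            }
            matrix.put(e.getKey(), covered);
        }
        return matrix;
    }

    public static void main(String[] args) {
        Map<String, List<List<String>>> stacks = Map.of(
                "testCheckout", List.of(List.of("applyDiscount", "computeTotal", "checkout")),
                "testLogin",    List.of(List.of("hashPassword", "login")));

        // Full depth covers every method on each stack; a limit of 1 keeps only the
        // innermost frame, shrinking the matrix at the cost of coverage detail.
        System.out.println(coverage(stacks, Integer.MAX_VALUE));
        System.out.println(coverage(stacks, 1));
    }
}
```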
