1

CONTROL FLOW OBFUSCATION COMPLEXITY

KUMAR, AMIT 13 July 2005 (has links)
No description available.
2

An Empirical Study of Alias Analysis Techniques

Tran, Andrew T 01 June 2018 (has links)
As software projects become larger and more complex, software optimization at that scale is only feasible through automated means. One such component of software optimization is alias analysis, which attempts to determine which variables in a program refer to the same area in memory, and is used to relocate instructions to improve performance without interfering with program execution. Several alias analyses have been proposed over the past few decades, with varying degrees of precision and of time and space complexity, but few studies have compared these techniques with one another or validated them against program data to confirm their accuracy. Such validation is normally out of scope for alias analyses, because these processes are static and can rely only on the input source code. We address these limitations by instrumenting several benchmarks and combining their runtime data with commonly used alias analyses to objectively measure the accuracy of those analyses. We also gather further program statistics to determine which programs are the most suitable for evaluating subsequent alias analysis techniques.
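To make the property being measured concrete, here is a minimal C sketch (a generic illustration of aliasing, not an example taken from the thesis): whether the second load below is redundant depends entirely on whether the two pointers alias, which is exactly the question an alias analysis tries to answer statically and which instrumentation can answer with runtime data.

```c
#include <stdio.h>

/* Whether the compiler may fold the second load of *a into the first
 * depends entirely on whether a and b alias: the store to *b may or
 * may not change *a. */
int sum_twice(int *a, int *b) {
    int x = *a;   /* load 1 */
    *b = 42;      /* store: if b aliases a, this changes *a */
    int y = *a;   /* load 2: redundant only if a and b do NOT alias */
    return x + y;
}

int main(void) {
    int v = 1;
    /* Aliased call: both arguments refer to the same memory. */
    printf("%d\n", sum_twice(&v, &v));   /* prints 43, not 2 */
    int w = 1, u = 0;
    /* Non-aliased call: load 2 could safely be folded into load 1. */
    printf("%d\n", sum_twice(&w, &u));   /* prints 2 */
    return 0;
}
```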
3

Binary Redundancy Elimination

Fernández Gómez, Manuel 13 April 2005 (has links)
Two of the most important performance limiters in today's processors come from the memory wall and from handling control dependencies. To address these issues, cache memories and branch predictors are well-known hardware proposals that exploit, among other things, temporal memory reuse and branch correlation. In other words, they try to exploit the dynamic redundancy existing in programs. This redundancy comes partly from the way programmers write source code, but also from limitations in the traditional compilation model, which introduces unnecessary memory and conditional branch instructions. We believe that today's optimizing compilers should be very aggressive in optimizing programs, and should therefore be expected to optimize a significant part of this redundancy away.
On the other hand, optimizations performed at link time or applied directly to final program executables have received increased attention in recent years, due to limitations in the traditional compilation model. Even when performing sophisticated interprocedural analyses and transformations, a traditional compiler does not have the opportunity to optimize the program as a whole. A similar problem arises when applying profile-directed compilation techniques: large projects are forced to rebuild every source file to take advantage of profile information. By contrast, it would be more convenient to build the full application, instrument it to obtain profile data, and then re-optimize the final binary without recompiling a single source file.
In this thesis we present new profile-guided compiler optimizations for eliminating the redundancy encountered in executable programs at the binary level (that is, binary redundancy), even when these programs have been compiled with full optimizations using a state-of-the-art commercial compiler. Our Binary Redundancy Elimination (BRE) techniques are targeted at eliminating both redundant memory operations and redundant conditional branches, which are the most important ones for addressing the performance issues mentioned above in today's microprocessors. These proposals are mainly based on Partial Redundancy Elimination (PRE) techniques for eliminating partial redundancies in a path-sensitive fashion. Our results show that, by applying our optimizations, we achieve a 14% execution time reduction on our benchmark suite.
In this work we also review the problem of alias analysis at the executable-program level, identifying why memory disambiguation is one of the weak points of object code modification. We then propose several alias analyses to be applied in the context of link-time or executable-code optimizers. First, we present a must-alias analysis to recognize memory dependencies in a path-sensitive fashion, which is used in our optimization for eliminating redundant memory operations. Next, we propose two speculative may-alias data-flow algorithms to recognize memory independencies. These may-alias analyses are based on introducing unsafe speculation at analysis time, which increases alias precision on important portions of code while keeping the analysis reasonably cost-efficient. Our results show that our analyses are very useful for increasing the memory disambiguation accuracy of binary code, which translates into opportunities for applying optimizations.
All our algorithms, both for the analyses and the optimizations, have been implemented within a binary optimizer, which overcomes most of the existing limitations of traditional source-code compilers. Our work therefore also points out the most relevant issues of applying our algorithms at the executable-code level, since most of the high-level information available in traditional compilers is lost.
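As a rough illustration of the kind of redundancy PRE targets, here is a hypothetical C-level sketch (the thesis performs the transformation on binaries, not on source code): the load of `p->len` at the return is only partially redundant, since it repeats work already done on one path but not the other.

```c
#include <stdio.h>

struct buf { int len; };

/* Before PRE: the load at the return is partially redundant -- it
 * repeats the load already performed when flag is true. */
int before(struct buf *p, int flag) {
    int n = 0;
    if (flag)
        n = p->len;        /* load of p->len on this path only */
    return n + p->len;     /* second load: redundant whenever flag is true */
}

/* After path-sensitive PRE: a compensation load is inserted on the
 * other path, so every path performs exactly one load of p->len.
 * This is valid only if a must-alias analysis proves that nothing
 * stores to p->len in between. */
int after(struct buf *p, int flag) {
    int n = 0, len;
    if (flag) {
        len = p->len;      /* original load */
        n = len;
    } else {
        len = p->len;      /* compensation load inserted by PRE */
    }
    return n + len;        /* redundancy eliminated on all paths */
}

int main(void) {
    struct buf b = { 21 };
    printf("%d %d\n", before(&b, 1), after(&b, 1));  /* 42 42 */
    return 0;
}
```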
4

Expression data flow graph: precise flow-sensitive pointer analysis for C programs

Thiessen, Rei Unknown Date
No description available.
5

Combining Conditional Constant Propagation And Interprocedural Alias Analysis

Nandakumar, K S 05 1900 (has links) (PDF)
No description available.
6

A Hybrid Software Change Impact Analysis for Large-scale Enterprise Systems

Chen, Wen 11 1900 (has links)
This work is concerned with analysing the potential impact of direct changes to large-scale enterprise systems and, in particular, with how to minimise testing efforts for such changes. A typical enterprise system may consist of hundreds of thousands of classes and millions of methods, so it is extremely costly and difficult to apply conventional testing techniques to such a system. Retesting everything after a change is very expensive and, in practice, generally not necessary. Selective testing can be more effective; however, it requires a deep understanding of the target system, and a lack of that understanding can lead to insufficient test coverage. Change impact analysis can be used to estimate the impacts of the changes to be applied, giving developers and testers confidence in selecting necessary tests and identifying untested entities. Conventional change impact analysis approaches include static analysis, dynamic analysis, or a hybrid of the two. They have proved useful on small or medium-sized programs, providing users with an inside view of the system within an acceptable running time. However, large-scale enterprise systems are orders of magnitude larger, and conventional approaches often run into resource problems such as insufficient memory and/or unacceptable running time (up to weeks). More critically, those approaches can generate a large number of false negatives and false positives. In this work, a conservative static analysis capable of dealing with inheritance was conducted on an enterprise system and its associated changes to obtain all the potential impacts. An aspect-based dynamic analysis was then used to instrument the system and collect a set of dynamic impacts at run time. We are careful not to discard impacts unless we can show that they are definitely not affected by the change. Reachability analysis examines the program to see whether a given path in a program representation corresponds to a possible execution path; in other words, we employ reachability analysis to eliminate infeasible paths (i.e., mismatched calls and returns) identified in the control flow of the program. Furthermore, in the alias analysis phase, we aim to identify paths that are feasible but cannot be affected by the direct changes to the system, by searching a set of possible pairs of accesses that may be aliased at each program point of interest (see the sketch below). Our contributions are as follows: we designed a hybrid approach that combines static analysis and dynamic analysis with reachability analysis and alias/pointer analysis; it can be used to (1) solve the scalability problem on large-scale systems, (2) reduce false positives without introducing false negatives, (3) extract both direct and indirect changes, and (4) identify impacts even before making the changes. Using our approach, organizations can focus on a much smaller, relevant subset of the overall test suite instead of blindly running their entire suite of tests. It also enables testers to augment the test suite with tests covering previously uncovered impacts. We include an empirical study that illustrates the savings that can be attained. / Thesis / Doctor of Philosophy (PhD)
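A hypothetical C sketch of the alias-pruning idea described above (the names and scenario are invented for illustration, and the thesis targets class-based enterprise systems rather than C): a naive field-based impact analysis flags every reader of `record.status` as impacted by the change, while an alias/points-to analysis proves the two objects are distinct and lets the impact edge, and the tests selected for it, be pruned.

```c
#include <stdio.h>

struct record { int status; };

static struct record audit = { 0 };  /* written only by the changed code */
static struct record live  = { 7 };  /* read by the candidate test target */

/* The directly changed function writes audit.status. */
void changed_function(void) { audit.status = 1; }

/* A naive field-based view flags this reader of record.status as
 * impacted; points-to analysis proves &audit and &live never alias,
 * so this impact -- and its tests -- can be safely discarded. */
int candidate_target(void) { return live.status; }

int main(void) {
    changed_function();
    printf("%d\n", candidate_target());  /* 7: unaffected by the change */
    return 0;
}
```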
7

Optimalizace v překladači C pro VLIW architektury / Optimizations in C Compiler for VLIW Architectures

Baručák, Robert January 2014 (has links)
This thesis presents an implementation of an alias analysis algorithm that was integrated into the LLVM framework. The properties and limitations of various alias analysis algorithms are discussed, and different approaches to working with predicates, and to integrating these principles with LLVM, are demonstrated. One of the outcomes of this master's thesis is the design and implementation of an algorithm for profile-guided if-conversion.
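A hypothetical C-level sketch of what if-conversion does (the thesis implements it inside LLVM for a VLIW target; this source-level analogy is only illustrative): the branch is replaced by unconditional computation of both arms plus a select, which a VLIW machine can schedule as straight-line code. Profile data decides when the conversion pays off: a balanced, hard-to-predict branch favours conversion, while a heavily biased one favours keeping the branch.

```c
#include <stdio.h>

/* Branchy form: a hard-to-predict branch stalls a wide-issue machine. */
int with_branch(int a, int b, int cond) {
    int r;
    if (cond)
        r = a + b;
    else
        r = a - b;
    return r;
}

/* If-converted form: both arms execute unconditionally and a select
 * picks the result, so the block schedules as straight-line code. */
int if_converted(int a, int b, int cond) {
    int t_then = a + b;              /* both arms computed speculatively */
    int t_else = a - b;
    return cond ? t_then : t_else;   /* a select, not a branch */
}

int main(void) {
    printf("%d %d\n", with_branch(5, 3, 1), if_converted(5, 3, 0));  /* 8 2 */
    return 0;
}
```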
8

Vylepšení analýzy živých proměnných pomocí points-to analýzy / Improvement of Live Variable Analysis Using Points-to Analysis

Raiskup, Pavel January 2012 (has links)
Languages such as C use pointers very heavily, yet implementing operations on dynamically linked structures is quite difficult, which can cause the programmer to make more mistakes than usual. One way of dealing with this situation is to use static analysis tools. This thesis elaborates on an extension to the Code Listener architecture, an interface for building static analysis tools. Code Listener can construct a call graph or a control flow graph for a given source code and send it to the analysing tool. The architecture can also conduct live variable analysis internally: it detects places in the control flow graph where some subset of variables may be killed. The problem was that, previously, any variable whose address had been assigned to a pointer could never be killed, because there was no assurance that the variable would never be used through that pointer. The goal of this work was therefore to design and incorporate a points-to analysis able to exclude some references from the set of considered pointers, improving the live variable analysis.
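A minimal C sketch of the situation the extension addresses (a hypothetical example, not taken from the thesis): once a variable's address escapes to a pointer, naive liveness must keep it alive indefinitely; points-to information restores the ability to kill it.

```c
#include <stdio.h>

int main(void) {
    int x = 10;
    int *p = &x;        /* x becomes address-taken */
    printf("%d\n", *p); /* x is used here, through p */
    p = NULL;           /* from here on, p no longer points to x */
    /* Without points-to information, x must stay live to the end of
     * the function because its address escaped to p. Points-to
     * analysis shows p cannot reference x past this point, so the
     * improved live variable analysis may kill x here. */
    printf("done\n");
    return 0;
}
```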
