81

Efficient Instrumentation for Object Flow Profiling

Mudduluru, Rashmi January 2015 (has links) (PDF)
Profiling techniques to detect performance bugs in applications are usually customized to detect a specific bug pattern and involve significant engineering effort. In spite of this effort, many techniques either suffer from high runtime overheads or are imprecise. This necessitates the design of a common and efficient instrumentation substrate that profiles the flow of objects during an execution. Designing such a substrate that generates profiles precisely and with low overhead is non-trivial because of the number of objects created and accessed, and the paths they traverse, in an execution. In this thesis, we design and implement an instrumentation substrate that efficiently generates object flow profiles for Java programs, without requiring any modifications to the underlying virtual machine. We achieve this by applying Ball-Larus numbering on a specialized hybrid flow graph (hfg). The hfg path profiles collected at runtime are post-processed offline to derive the object flow profiles. We extend the design to handle inter-procedural object flows by constructing flow summaries for each method and incorporating them appropriately. We have implemented the substrate and validated its efficacy by applying it to programs from popular benchmark suites including DaCapo and Java Grande. The results demonstrate the scalability of our approach, which handles 0.2M to 0.55B object accesses with an average runtime overhead of 8x. We also demonstrate the effectiveness of the generated profiles by implementing three client analyses that consume the profiles to detect performance bugs. The analyses detect 38 performance bugs which, when refactored, yield significant performance gains (up to 30%) in running times.
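Ball-Larus numbering itself is a well-documented algorithm; as a point of reference only (this is not the thesis' hybrid flow graph, and the graph and variable names below are invented for the sketch), a minimal C++ rendition of the classic edge-increment assignment on an acyclic control-flow graph looks like this:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Classic Ball-Larus edge-increment assignment on an acyclic CFG whose
    // vertex indices are already in topological order (0 = entry, last = exit).
    // Summing the increments along any entry-to-exit path yields a unique
    // integer in [0, numPaths[entry]).
    int main() {
        // Illustrative diamond CFG: 0 -> {1,2}, 1 -> 3, 2 -> 3, 3 = exit.
        std::vector<std::vector<int>> succ = {{1, 2}, {3}, {3}, {}};
        int n = succ.size();

        std::vector<std::uint64_t> numPaths(n, 0);
        // increment[v][i] is the value added when taking the i-th edge out of v.
        std::vector<std::vector<std::uint64_t>> increment(n);

        for (int v = n - 1; v >= 0; --v) {          // reverse topological order
            if (succ[v].empty()) { numPaths[v] = 1; continue; }
            std::uint64_t sum = 0;
            for (int w : succ[v]) {
                increment[v].push_back(sum);        // paths counted so far
                sum += numPaths[w];
            }
            numPaths[v] = sum;
        }

        std::cout << "paths from entry: " << numPaths[0] << "\n";   // 2
        for (int v = 0; v < n; ++v)
            for (std::size_t i = 0; i < succ[v].size(); ++i)
                std::cout << v << " -> " << succ[v][i]
                          << "  increment " << increment[v][i] << "\n";
    }

Because each path sums to a distinct small integer, a single counter increment per edge suffices to maintain per-path frequencies at run time, which is what keeps this style of profiling cheap.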
82

TIREX: A textual target-level intermediate representation for virtual execution environment, compiler information exchange and program analysis

Pietrek, Artur 02 October 2012 (has links)
Some environments require several compilers, for instance one for the operating system, supporting the full C/C++ standard, and one for the applications, potentially supporting less of the standard but able to deliver more performance. Maintaining several compilers for a target platform requires considerable effort; it is therefore easier to implement and maintain target-dependent optimizations in a single, external tool. This requires a way of connecting these compilers with the target-dependent optimizer, preferably passing along some internal compiler data structures that would be time-consuming, difficult or even impossible to reconstruct from assembly language, for instance. In this thesis we introduce Tirex, a Textual Intermediate Representation for EXchanging target-level information between compilers, optimizers, and other tools in the compilation toolchain. Our intermediate representation contains an instruction stream of the target processor, but still keeps the explicit program structure and supports the SSA (Static Single Assignment) form. It is easily extensible and highly flexible, which allows any data deemed useful to the optimizer to be passed along.
We build Tirex by extending the existing Minimalist Intermediate Representation (MinIR), itself expressed as a YAML textual encoding of compiler structures. Our extensions in Tirex include lowering the representation to the target level, preserving the program's data flow, and adding loop-structure information and data dependencies. Tirex is currently produced by the Open64 and LLVM compilers, with a GCC producer under development. It is consumed by the Linear Assembly Optimizer (LAO), a specialized, target-specific code optimizer. We show that Tirex is versatile and can be used in a variety of applications, such as a virtual execution environment (VEE), and provides a strong basis for a program analysis framework. As part of the VEE, we present an interpreter for the Static Single Assignment (SSA) form and a just-in-time (JIT) compiler. We show how interpreting a target-level representation eliminates most of the complexities of mixed-mode execution. We also explore the issues related to efficiently interpreting an SSA-form program representation.
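As a rough illustration of what "a target-level instruction stream that keeps explicit program structure and SSA form" can look like once parsed into memory, the sketch below defines one possible in-memory shape in C++. The type and field names are invented for this sketch and are not the actual Tirex/MinIR schema.

    #include <string>
    #include <utility>
    #include <vector>

    // Illustrative in-memory form of a target-level, SSA-based IR:
    // functions own basic blocks, blocks own phi nodes and target
    // instructions, and every value is an SSA name defined exactly once.
    struct Operand    { std::string ssaName; };                // e.g. "%v12"
    struct PhiNode    { Operand def;
                        std::vector<std::pair<std::string, Operand>> incoming; };
    struct Instr      { std::string opcode;                    // target mnemonic, e.g. "add_w"
                        std::vector<Operand> defs, uses; };
    struct BasicBlock { std::string label;
                        std::vector<PhiNode> phis;
                        std::vector<Instr> body;
                        std::vector<std::string> successors; };
    struct Function   { std::string name;
                        std::vector<BasicBlock> blocks; };     // block 0 is the entry

    int main() {
        // Build a one-block function "f" ending in a return of %v1.
        Instr ret;  ret.opcode = "ret_w";  ret.uses.push_back({"%v1"});
        BasicBlock entry;  entry.label = "entry";  entry.body.push_back(ret);
        Function f;  f.name = "f";  f.blocks.push_back(entry);
        return 0;
    }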
83

Generation of dynamic control-dependence graphs for binary programs

Pogulis, Jakob January 2014 (has links)
Dynamic analysis of binary files serves many purposes. It is useful for debugging software in a development environment, when the developer needs to know which statements affected the value of a specific variable, but also for analyzing software for potential vulnerabilities, where data controlled by a malicious user could cause the software to execute adverse commands or malicious code. In this thesis a tool has been developed that performs dynamic analysis of x86 binaries in order to generate dynamic control-dependence graphs over an execution. These graphs can be used to determine which conditional statements led to a certain outcome. The tool has been developed for x86 Linux systems using PIN, the dynamic binary instrumentation framework developed and maintained by Intel. Techniques that exploit the additional control-flow information available during dynamic analysis to improve the control-flow graph have been implemented and tested. The basic theory of dynamic analysis and dynamic slicing is discussed, and an overview of the implementation of a dynamic analysis tool is presented. The techniques used to improve the control-flow graph have a significant impact on the tool's performance, but approaches to improving that performance are discussed.
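For readers unfamiliar with PIN, a minimal Pintool that records each conditional branch and whether it was taken — the kind of trace from which dynamic control-dependence edges can later be derived — follows the standard instrumentation pattern sketched below. This is an illustrative sketch assuming the usual PIN API, not the tool developed in the thesis.

    #include "pin.H"
    #include <fstream>

    static std::ofstream Trace("branches.out");

    // Analysis routine: runs before every instrumented conditional branch.
    static VOID RecordBranch(ADDRINT ip, BOOL taken) {
        Trace << std::hex << ip << (taken ? " T" : " N") << "\n";
    }

    // Instrumentation routine: called by PIN once per instruction.
    static VOID Instruction(INS ins, VOID*) {
        if (INS_IsBranch(ins) && INS_HasFallThrough(ins)) {   // conditional branch
            INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordBranch,
                           IARG_INST_PTR, IARG_BRANCH_TAKEN, IARG_END);
        }
    }

    int main(int argc, char* argv[]) {
        if (PIN_Init(argc, argv)) return 1;     // parse PIN's command line
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_StartProgram();                      // never returns
        return 0;
    }

Post-processing such a branch trace against the static control-flow graph is then what turns taken/not-taken decisions into dynamic control-dependence edges.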
84

A Framework for Call Graph Construction

Honar, Elnaz, Mortazavi Jahromi, Seyed AmirHossein January 2010 (has links)
In object-oriented programming, a Call Graph represents the calling relationships between a program's methods. More precisely, a Call Graph is a rooted directed graph in which each node represents a method and each edge (u, v) represents a call from method u to method v. The focus of this thesis is on building a framework for Call Graph construction algorithms that can be used in program analysis. Our framework can be initialized by different front-ends and supports various Call Graph construction algorithms. Here, we instantiate the framework with two bytecode readers (ASM and Soot) as front-ends and implement three Call Graph construction algorithms (CHA, RTA and CTA). We first use the two bytecode readers to read the bytecode of a given Java program, then find the reachable methods for each invoked method, storing the obtained details in our own data structures. Defining data structures for the required information about classes, methods, fields and statements lets us implement an independent framework for applying well-known Call Graph algorithms: Call Graph construction no longer depends on the bytecode readers, since whenever we read the bytecode of a program we accumulate all necessary information in these pre-defined data structures and build our Call Graphs from the accumulated data. The result is a framework for different Call Graph construction algorithms, which is the goal of this thesis. We tested and evaluated the algorithms on a variety of benchmark programs and compared the bytecode readers, as well as the three Call Graph algorithms, in several respects.
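Of the three algorithms, Class Hierarchy Analysis (CHA) is the simplest to sketch: a virtual call is resolved to every subtype of the receiver's static type that provides the invoked method, and the call graph is grown from the entry points with a worklist. The C++ sketch below is only illustrative (string-keyed, and it ignores inherited method bodies); the thesis' framework instead operates on the data structures it fills from ASM or Soot.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    using Method = std::string;                 // e.g. "B.foo"
    using Class  = std::string;

    int main() {
        // Toy hierarchy: B extends A. subtypes[C] = C and all its subclasses.
        std::map<Class, std::set<Class>> subtypes = {{"A", {"A", "B"}}, {"B", {"B"}}};
        // Methods declared somewhere in the program.
        std::set<Method> declared = {"A.foo", "B.foo", "Main.main"};
        // Call sites per method: (static receiver type, invoked method name).
        std::map<Method, std::vector<std::pair<Class, std::string>>> sites = {
            {"Main.main", {{"A", "foo"}}}};

        std::set<Method> reachable = {"Main.main"};
        std::set<std::pair<Method, Method>> edges;
        std::queue<Method> work;
        work.push("Main.main");

        while (!work.empty()) {
            Method m = work.front();
            work.pop();
            for (const auto& [recv, name] : sites[m]) {
                // CHA: any subtype of the static receiver type may be the target.
                for (const Class& c : subtypes[recv]) {
                    Method target = c + "." + name;
                    if (!declared.count(target)) continue;
                    edges.insert({m, target});
                    if (reachable.insert(target).second) work.push(target);
                }
            }
        }
        for (const auto& [caller, callee] : edges)
            std::cout << caller << " -> " << callee << "\n";  // Main.main -> A.foo, B.foo
    }

RTA and CTA refine this by restricting the candidate subtypes to classes actually instantiated (globally, or per calling context), which is why they produce smaller call graphs from the same stored bytecode facts.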
85

Creation of a Sparse Adapter for the Code Listener Infrastructure

Pokorný, Jan January 2012 (has links)
Program checking is indisputably important, especially when grounded in formal methods. VeriFIT at FIT BUT uses the custom Code Listener (CL) infrastructure, which modularly interconnects a front-end, typically a code parser adapter, and a back-end, typically an analyser. Our aim is to offer the former as a compact alternative to the existing GCC compiler plug-in. This adapter uses linearized code provided by the sparse library for the static analysis of C programs. In experiments with one of the main CL analysers, the Predator tool and its test suite, our product, the clsp program, succeeds in roughly 75% of the cases in comparison with the GCC plug-in. Further improvements are expected.
86

An Analysis Of A Large Urban School District's Eighth-grade Summer Reading Camp Curriculum And Student Performance Knowledge Voids

Sochocki, Eric 01 January 2013 (has links)
This study sought to determine whether the 2012 Eighth Grade Summer Reading Camp curriculum was aligned with students' needs. To determine whether curriculum alignment existed, the researcher completed a qualitative and quantitative study. The qualitative portion consisted of interviewing the school district's program development team to ascertain how the curriculum was designed. The quantitative portion involved running descriptive statistics on student performance on the Pre-program Benchmark Examination. The identified student knowledge voids were compared with the amount of instructional time spent teaching those individual benchmarks to ascertain whether the curriculum was aligned with student need. The curriculum was determined not to be aligned with the students' performance deficiencies.
87

THE RELATIONAL DATABASE: A NEW STATIC ANALYSIS TOOL?

Dutko, Adam M. 19 August 2011 (has links)
No description available.
88

Program Analyses for Understanding the Behavior and Performance of Traditional and Mobile Object-Oriented Software

Yan, Dacong 20 October 2014 (has links)
No description available.
89

Logic-based techniques for program analysis and specification synthesis

Feliú Gabaldón, Marco Antonio 19 November 2013 (has links)
This thesis investigates agile techniques within the declarative paradigm to address two problems: program analysis and the inference of specifications from programs written in multi-paradigm languages and in imperative languages with types, objects, structures and pointers. Regarding the current state of the thesis, the program analysis part is already consolidated, while the specification inference part remains under active development. The first part provides solutions for executing pointer analyses specified in Datalog. Two techniques for executing such Datalog specifications have been developed: one uses solvers for Boolean equation systems, and the other uses rewriting logic as efficiently implemented in the Maude language. The second part develops techniques for inferring specifications from programs; two inference methods have been developed. The first method was developed for the functional logic language Curry and infers equational specifications by abstract interpretation of programs. The second method is being developed for realistic imperative languages and has been applied to a subset of the C programming language. It infers specifications in the form of rules that represent the relations between the properties that a program's state satisfies before and after its execution. Moreover, these properties are expressible in terms of the program's own functional abstractions, resulting in a very high-level and therefore more easily understandable specification. / Feliú Gabaldón, MA. (2013). Logic-based techniques for program analysis and specification synthesis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33747
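For context, pointer analyses specified in Datalog are sets of Horn clauses over relations extracted from the program. A standard Andersen-style points-to specification — shown here purely as an illustration, with invented predicate names, and not taken from the thesis — reads:

    \begin{align*}
    \mathit{pointsTo}(v,h) &\leftarrow \mathit{alloc}(v,h)\\
    \mathit{pointsTo}(v,h) &\leftarrow \mathit{assign}(v,w) \wedge \mathit{pointsTo}(w,h)\\
    \mathit{heapPointsTo}(h_1,f,h_2) &\leftarrow \mathit{store}(v,f,w) \wedge \mathit{pointsTo}(v,h_1) \wedge \mathit{pointsTo}(w,h_2)\\
    \mathit{pointsTo}(v,h_2) &\leftarrow \mathit{load}(v,w,f) \wedge \mathit{pointsTo}(w,h_1) \wedge \mathit{heapPointsTo}(h_1,f,h_2)
    \end{align*}

Computing the least fixpoint of such rules is the task that the two execution techniques described above encode, either as a Boolean equation system or as a rewriting-logic theory in Maude.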
90

Hardware-Aided Privacy Protection and Cyber Defense for IoT

Zhang, Ruide 08 June 2020 (has links)
With recent advances in electronics and communication technologies, our daily lives are immersed in an environment of Internet-connected smart things. Despite the great convenience brought by these technologies, privacy concerns and security issues are two topics that deserve more attention. On one hand, as smart things grow in their ability to sense the physical world and to send information out through the Internet, they can potentially be used to surveil individuals secretly. Nevertheless, people tend to adopt wearable devices without fully understanding what private information can be inferred and leaked through sensor data. On the other hand, security issues become even more serious and lethal as the world embraces the Internet of Things (IoT). Failures in computing systems are common; in IoT, however, a failure may harm people's lives. As demonstrated in both academic research and industrial practice, a software vulnerability hidden in a smart vehicle may lead to a remote attack that subverts a driver's control of the vehicle. Our approach to the aforementioned challenges starts by understanding privacy leakage in the IoT era and follows with adding defense layers to the IoT system against attackers of increasing capability. The first question we ask ourselves is "what new privacy concerns does IoT bring?" We focus on discovering information leakage, beyond people's common sense, from even seemingly benign signals, and explore how much private information we can extract by designing information extraction systems. Through our research, we argue for stricter access control on newly introduced sensors. Having noted the importance of the data collected by IoT, we trace where sensitive data goes. In the IoT era, edge nodes are used to process sensitive data; however, a capable attacker may compromise them. Our second piece of research therefore applies trusted hardware to build trust in large-scale networks under this circumstance, protecting sensitive data from compromised edge nodes. Nonetheless, if an attacker becomes more powerful and embeds malicious logic into the code for trusted hardware during the development phase, he can still secretly steal private data. In our third piece of research, we design a static analyzer for detecting malicious logic hidden inside code for trusted hardware. Beyond the privacy of collected data, another important aspect of IoT is that it affects the physical world. Our last piece of research work enables a user to verify the continuous execution state of an unmanned vehicle, so that people can trust the integrity of the vehicle's past and present state. / Doctor of Philosophy / The past few years have witnessed rapid advances in computing and networking technologies. Such advances enable the new paradigm, IoT, which brings great convenience to people's lives. Large technology companies like Google, Apple and Amazon are creating smart devices such as smartwatches, smart homes, drones, etc. Compared to the traditional Internet, IoT can provide services beyond digital information by interacting with the physical world through its sensors and actuators. While the deployment of IoT brings value to various aspects of our society, the lucrative rewards from cyber-crime also increase in the upcoming IoT era. Two unique privacy and security concerns are emerging for IoT.
On one hand, IoT brings a large volume of new sensors that are deployed ubiquitously and collect data 24/7. Users' privacy is a major concern in this circumstance because the collected sensor data may be used to infer a user's private activities. On the other hand, cyber-attacks now harm not only cyberspace but also the physical world. A failure in IoT devices could result in loss of human life. For example, a remotely hacked vehicle could shut down its engine on the highway regardless of the driver's operation. Our approach to these emerging privacy and security concerns takes two directions. The first targets privacy protection: we first look at the privacy impact of upcoming ubiquitous sensing and argue for stricter access control on smart devices; we then follow the flow of private data and propose solutions to protect it within the networking and cloud computing infrastructure. The other direction aims at protecting the physical world: we propose an innovative method to verify the cyber state of IoT devices.
