51

Normalizer: Augmenting Code Clone Detectors Using Source Code Normalization

Ly, Kevin 01 March 2017 (has links) (PDF)
Code clones are duplicate fragments of code that perform the same task. As software code bases increase in size, the number of code clones also tends to increase. These code clones, possibly created through copy-and-paste methods or unintentional duplication of effort, increase maintenance cost over the lifespan of the software. Code clone detection tools exist to identify clones where a human search would prove infeasible; however, the quality of the clones found may vary. I demonstrate that the performance of such tools can be improved by normalizing the source code before use. I developed Normalizer, a tool that transforms C source code into normalized source code written as consistently as possible. By preserving the code's function while enforcing a strict format, the variability introduced by programmer style is removed. Thus, code clones may be easier for tools to detect regardless of how the code was originally written. Normalized code is achieved by reordering statements, removing useless code, and renaming identifiers. Using a small variety of code clone detection tools, Normalizer was used to show that more clones can be found in Introduction to Computer Networks assignments when the source code is normalized than when the original source code is used.
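
A minimal sketch of the identifier-renaming normalization described above, assuming a toy regex-based tokenizer; the keyword list, the v1/v2 naming scheme, and the example fragments are illustrative assumptions, not taken from Normalizer itself:

```python
import re

# C keywords and a few common library names left untouched (illustrative, not exhaustive).
KEYWORDS = {"int", "float", "double", "char", "void", "return", "for", "while",
            "if", "else", "printf", "sizeof"}

def normalize_identifiers(source: str) -> str:
    """Rename identifiers to v1, v2, ... in order of first appearance.

    This removes naming differences between clones, one of the normalizations
    the abstract mentions (statement reordering and dead-code removal are not
    sketched here).
    """
    mapping = {}
    def rename(match):
        name = match.group(0)
        if name in KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = f"v{len(mapping) + 1}"
        return mapping[name]
    return re.sub(r"\b[A-Za-z_]\w*\b", rename, source)

# Two fragments that differ only in identifier choice...
a = "int total = 0; for (int i = 0; i < n; i++) total += arr[i];"
b = "int acc = 0; for (int k = 0; k < len; k++) acc += data[k];"

# ...become textually identical after normalization, so a simple text-based
# clone detector would now report them as clones.
assert normalize_identifiers(a) == normalize_identifiers(b)
print(normalize_identifiers(a))
```
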
52

Sensitivity of Seismic Response of a 12 Story Reinforced Concrete Building to Varying Material Properties

Leung, Colin 01 December 2011 (has links) (PDF)
The main objective of this investigation is to examine how various material properties, governed by code specification, affect the seismic response of a twelve-story reinforced concrete building. This study incorporates pushover and response history analyses to examine how varying steel yield strength (Fy), 28-day nominal concrete compressive strength (f’c), modes, and ground motions may affect the base shear capacity and displacements of a reinforced concrete structure. Different steel and concrete strengths were found to have minimal impact on the initial stiffness of the structure. However, during the post-yielding phase, higher steel and concrete compressive strengths resulted in base shear capacities up to 22% larger. The geometric median of the base shear capacity increased as f’c or Fy increased, and the dispersion measure of the base shear capacity decreased as f’c or Fy increased. Higher-mode results were neglected in this study because the corresponding pushover analyses did not converge. According to the response history analysis, larger steel yield and concrete compressive strengths result in lower roof displacement; the difference in roof displacement was less than 12% throughout. This indicates the robustness of both analysis methods, since material properties have an insignificant impact on seismic response. Therefore, acceptable yield and compressive strengths governed by seismic code will result in acceptable building performance.
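
A rough sketch of the kind of parametric study described above, sweeping over assumed material strengths; the pushover_base_shear function is a purely hypothetical placeholder for the actual structural model, and every number here is illustrative rather than taken from the thesis:

```python
from itertools import product

def pushover_base_shear(fy_mpa: float, fc_mpa: float) -> float:
    """Placeholder for a real pushover analysis of the 12-story frame.

    Hypothetical trend only: capacity grows mildly with both material
    strengths, echoing the up-to-22% increase reported in the abstract.
    """
    return 5000.0 * (fy_mpa / 420.0) ** 0.3 * (fc_mpa / 28.0) ** 0.15  # kN

# Parameter grid: steel yield strength Fy and 28-day concrete strength f'c (assumed values).
fy_values = [380.0, 420.0, 460.0]   # MPa
fc_values = [24.0, 28.0, 35.0]      # MPa

results = {(fy, fc): pushover_base_shear(fy, fc)
           for fy, fc in product(fy_values, fc_values)}

baseline = results[(420.0, 28.0)]
for (fy, fc), v in sorted(results.items()):
    print(f"Fy={fy:5.0f} MPa  f'c={fc:4.0f} MPa  base shear={v:7.1f} kN  "
          f"({100.0 * (v / baseline - 1):+.1f}% vs. baseline)")
```
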
53

A language-independent static checking system for coding conventions

Mount, Sarah January 2013 (has links)
Despite decades of research aiming to ameliorate the difficulties of creating software, programming still remains an error-prone task. Much work in Computer Science deals with the problem of specification, or writing the right program, rather than the complementary problem of implementation, or writing the program right. However, many desirable software properties (such as portability) are obtained via adherence to coding standards, and therefore fall outside the remit of formal specification and automatic verification. Moreover, code inspections and manual detection of standards violations are time consuming. To address these issues, this thesis describes Exstatic, a novel framework for the static detection of coding standards violations. Unlike many other static checkers, Exstatic can be used to examine code in a variety of languages, including program code, in-line documentation, markup languages and so on. This means that checkable coding standards adhered to by a particular project or institution can be handled by a single tool. Consequently, a major challenge in the design of Exstatic has been to invent a way of representing code from a variety of source languages. Therefore, this thesis describes ICODE, an intermediate language suitable for representing code from a number of different programming paradigms. To substantiate the claim that ICODE is a universal intermediate language, a proof strategy has been developed: for a number of different programming paradigms (imperative, declarative, etc.), a proof is constructed to show that a semantics-preserving translation exists from an exemplar language (such as IMP or PCF) to ICODE. The usefulness of Exstatic has been demonstrated by the implementation of a number of static analysers for different languages. These include a checker for technical documentation written in Javadoc, which validates documents against the Sun Microsystems (now Oracle) Coding Conventions, and a checker for HTML pages against a site-specific standard. A third system targets python-csp, a variant of the Python language written by the author and based on Hoare's Communicating Sequential Processes.
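
A minimal sketch of checking a convention over a language-independent intermediate form, in the spirit of the approach described above; the IRNode representation and the naming rules below are invented for illustration and are not the actual ICODE or Exstatic designs:

```python
from dataclasses import dataclass
import re

@dataclass
class IRNode:
    """A deliberately tiny intermediate form: just enough to check conventions.

    Real ICODE is far richer; this stand-in only records what a declaration is
    called, what kind of thing it is, and where it came from.
    """
    kind: str        # e.g. "class", "function", "constant"
    name: str
    source_file: str
    line: int

def check_naming(nodes):
    """Flag declarations that violate a (hypothetical) project convention.

    Because the check runs on the IR rather than on source text, the same rule
    applies whether the nodes were produced from Java, Python, or markup.
    """
    rules = {
        "class": re.compile(r"^[A-Z][A-Za-z0-9]*$"),      # CamelCase classes
        "function": re.compile(r"^[a-z_][a-z0-9_]*$"),    # snake_case functions
        "constant": re.compile(r"^[A-Z][A-Z0-9_]*$"),     # UPPER_CASE constants
    }
    violations = []
    for node in nodes:
        rule = rules.get(node.kind)
        if rule and not rule.match(node.name):
            violations.append(f"{node.source_file}:{node.line}: "
                              f"{node.kind} '{node.name}' violates naming convention")
    return violations

# Nodes as they might look after lowering two different source languages.
ir = [
    IRNode("class", "httpServer", "Server.java", 12),     # violates CamelCase
    IRNode("function", "ParseHeader", "util.py", 40),     # violates snake_case
    IRNode("constant", "MAX_CONNECTIONS", "config.py", 3),
]
for msg in check_naming(ir):
    print(msg)
```
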
54

Improving dynamic analysis with data flow analysis

Chang, Walter Chochen 26 October 2010 (has links)
Many challenges in software quality can be tackled with dynamic analysis. However, these techniques are often limited in their efficiency or scalability because they are applied uniformly to an entire program. In this thesis, we show that dynamic program analysis can be made significantly more efficient and scalable by first performing a static data flow analysis, so that the dynamic analysis can be selectively applied only to important parts of the program. We apply this general principle to the design and implementation of two different systems, one for runtime security policy enforcement and the other for software test input generation. For runtime security policy enforcement, we enforce user-defined policies using a dynamic data flow analysis that is more general and flexible than previous systems. Our system uses the user-defined policy to drive a static data flow analysis that identifies and instruments only the statements that may be involved in a security vulnerability, often eliminating the need to track most objects and greatly reducing the overhead. For taint analysis on a set of five server programs, the slowdown is only 0.65%, two orders of magnitude lower than previous taint tracking systems. Our system also has negligible overhead on file disclosure vulnerabilities, a problem that taint tracking cannot handle. For software test case generation, we introduce the idea of targeted testing, which focuses testing effort on select parts of the program instead of treating all program paths equally. Our “Bullseye” system uses a static analysis performed with respect to user-defined “interesting points” to steer the search down certain paths, thereby finding bugs faster. We also introduce a compiler transformation that allows symbolic execution to automatically perform boundary condition testing, revealing bugs that could be missed even if the correct path is tested. For our set of 9 benchmarks, Bullseye finds bugs an average of 2.5× faster than a conventional depth-first search and finds numerous bugs that DFS could not. In addition, our automated boundary condition testing transformation allows both Bullseye and depth-first search to find numerous bugs that they could not find before, even when all paths were explored.
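
A toy sketch of the core idea above, using an invented straight-line program: a static taint pass decides which statements the dynamic analysis needs to instrument, so statements that can never carry taint are skipped entirely. The representation and policy are illustrative assumptions, not the thesis's C/C++ infrastructure:

```python
# Each statement is (line, target, sources). Taint enters at 'recv' and must not
# reach 'exec_query' (the sink).
program = [
    (1, "req",   ["recv"]),        # taint source
    (2, "name",  ["req"]),
    (3, "greet", ["name", "lit"]),
    (4, "count", ["lit"]),         # never touches tainted data
    (5, "query", ["greet"]),
    (6, "exec_query", ["query"]),  # sink
]
SOURCES = {"recv"}

def tainted_vars(stmts):
    """Flow-insensitive taint propagation to a fixpoint."""
    tainted = set(SOURCES)
    changed = True
    while changed:
        changed = False
        for _, target, sources in stmts:
            if target not in tainted and any(s in tainted for s in sources):
                tainted.add(target)
                changed = True
    return tainted

def statements_to_instrument(stmts):
    """Only statements that can carry taint need runtime tracking."""
    tainted = tainted_vars(stmts)
    return [line for line, target, sources in stmts
            if target in tainted or any(s in tainted for s in sources)]

# Line 4 is left uninstrumented: the static pass shows it can never carry taint,
# which is where the large runtime savings reported in the abstract come from.
print(statements_to_instrument(program))   # -> [1, 2, 3, 5, 6]
```
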
55

Impacts of liquefaction and lateral spreading on bridge pile foundations from the February 22nd 2011 Christchurch earthquake

Winkley, Anna Margaret Mathieson January 2013 (has links)
The Mw 6.2 February 22nd 2011 Christchurch earthquake (and others in the 2010-2011 Canterbury sequence) provided a unique opportunity to study the devastating effects of earthquakes first-hand and learn from them for future engineering applications. All major events in the Canterbury earthquake sequence caused widespread liquefaction throughout Christchurch’s eastern suburbs, and the liquefaction was particularly extensive and severe during the February 22nd event. Along large stretches of the Avon River banks (and to a lesser extent along the Heathcote) significant lateral spreading occurred, affecting bridges and the infrastructure they support. The first stage of this research involved conducting detailed field reconnaissance to document liquefaction- and lateral spreading-induced damage to several case study bridges along the Avon River. The case study bridges cover a range of ages and construction types, but all are reinforced concrete structures with relatively short, stiff decks. These factors combined led to a characteristic deformation mechanism involving deck-pinning and abutment back-rotation, with consequent damage to the abutment piles and slumping of the approaches. The second stage of the research involved using pseudo-static analysis, a simplified seismic modelling tool, to analyse two of the bridges. An advantage of pseudo-static analysis over more complicated modelling methods is that it uses conventional geotechnical data as inputs, such as SPT blowcount and CPT cone resistance and local friction. Pseudo-static analysis can also be applied without excessive computational power or specialised knowledge, yet it has been shown to capture the basic mechanisms of pile behaviour. Single-pile and whole-bridge models were constructed for each bridge, and both the cyclic and lateral spreading phases of loading were investigated. Parametric studies were carried out which varied the values of key parameters to identify their influence on pile response, and computed displacements and damage were compared with observations made in the field. It was shown that pseudo-static analysis was able to capture the characteristic damage mechanisms observed in the field; however, the treatment of key parameters affecting pile response is of primary importance. Recommendations were made concerning the treatment of these governing parameters. In this way the future application of pseudo-static analysis as a tool for analysing and designing bridge pile foundations in liquefying and laterally spreading soils is enhanced.
56

Towards a Gold Standard for Points-to Analysis

Gutzmann, Tobias January 2010 (has links)
Points-to analysis is a static program analysis that computes reference information for a given input program. It serves as input to many client applications in optimizing compilers and software engineering tools. Unfortunately, the Gold Standard – i.e., the exact reference information for a given program – is impossible to compute automatically for all but trivial cases, and thus, little can be said about the accuracy of points-to analysis. This thesis aims at paving the way towards a Gold Standard for points-to analysis. For this, we discuss theoretical implications and practical challenges that occur when comparing results obtained by different points-to analyses. We also show ways to improve points-to analysis by different means, e.g., combining different analysis implementations, and a novel approach to path sensitivity. We support our theories with a number of experiments.
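
A small sketch of one of the improvement ideas mentioned above, combining two analysis implementations: if both results are sound over-approximations of the (unknown) Gold Standard, their per-variable intersection is also sound and at least as precise as either one. The variables, abstract objects, and sets below are invented for illustration:

```python
# Points-to results from two hypothetical sound analyses, each mapping a
# variable to the set of abstract objects it may reference.
analysis_a = {
    "x": {"o1", "o2"},
    "y": {"o2", "o3", "o4"},
    "z": {"o5"},
}
analysis_b = {
    "x": {"o1"},
    "y": {"o2", "o3"},
    "z": {"o5", "o6"},
}

def combine(a, b):
    """Intersect per-variable points-to sets from two sound analyses."""
    return {var: a[var] & b[var] for var in a.keys() & b.keys()}

def precision(result):
    """Smaller total set size = more precise (closer to the Gold Standard)."""
    return sum(len(objs) for objs in result.values())

combined = combine(analysis_a, analysis_b)
print(combined)   # e.g. x -> {o1}, y -> {o2, o3}, z -> {o5}
print(precision(analysis_a), precision(analysis_b), precision(combined))   # 6 5 4
```
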
57

Static analyses over weak memory

Nimal, Vincent P. J. January 2014 (has links)
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet, interleavings are not sufficient to correctly model the executions of modern multicore processors. These executions follow rules that are weaker than those observed under interleavings, often leading to reorderings in the sequence of updates to and reads from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and these possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically-defined weak memory models. The method that we designed also allows re-use of automated verification tools like model checkers or abstract interpreters that were not designed for weak memory consistency, by modification of the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency. We then provide a method to explicitly reveal the resulting reorderings so that usual verification techniques can handle the program semantics under a weaker memory consistency. We also provide a technique that synthesises synchronisations so that the program behaves as if only interleavings were allowed. Finally, we test these approaches on artificial and real software. We justify our choice of an axiomatic model with the scalability of the approach and the runtime performance of the programs modified by our method.
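
A toy sketch of the locate-and-restore idea under an invented, drastically simplified store-to-load reordering rule; the actual dissertation works with axiomatic memory models and off-the-shelf verification tools rather than this ad hoc criterion:

```python
# Each event is (op, location). Under a simplified TSO-like rule, a store
# followed by a later load to a *different* location may be observed out of
# order; inserting a fence between them restores the interleaving-only view.
thread = [
    ("store", "x"),
    ("load",  "y"),   # may be reordered before the store to x
    ("store", "y"),
    ("load",  "y"),   # same location: not reorderable in this toy model
]

def reorderable_pairs(events):
    """Adjacent store->load pairs on different locations (toy criterion)."""
    pairs = []
    for i in range(len(events) - 1):
        (op1, loc1), (op2, loc2) = events[i], events[i + 1]
        if op1 == "store" and op2 == "load" and loc1 != loc2:
            pairs.append(i)
    return pairs

def insert_fences(events):
    """Place a fence after every store that could be bypassed by a later load."""
    risky = set(reorderable_pairs(events))
    out = []
    for i, ev in enumerate(events):
        out.append(ev)
        if i in risky:
            out.append(("fence", None))
    return out

for ev in insert_fences(thread):
    print(ev)
```
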
58

Statická analýza programů v C# / Static analysis of C# programs

Malý, Petr January 2014 (has links)
The goal of this diploma thesis is to study and implement selected methods of static code analysis for C# programs translated into the Common Intermediate Language. The results of this work are integrated into the ParallaX Development Environment system. This diploma thesis focuses on structural, points-to, and dependence analyses.
59

Implementace rezoluce řízení toku v dynamickém jazyce / Implementing control flow resolution in dynamic language

Šindelář, Štěpán January 2014 (has links)
Dynamic programming languages allow us to write code without type information, and the types of variables can change during execution. Although easier to use and suitable for fast prototyping, dynamic typing can lead to error-prone code and is challenging for compilers or interpreters. Programmers often use documentation comments to provide the type information, but the correspondence between the documentation and the actual code is usually not checked by tools. In this thesis, we focus on one of the most popular dynamic programming languages: PHP. We have developed a framework for static analysis of PHP code as part of the Phalanger project -- the PHP to .NET compiler. The framework supports any kind of analysis, but in particular we implemented a type inference analysis with emphasis on discovering possible type-related errors and mismatches between the documentation and the actual code. The implementation was evaluated on real PHP applications and discovered several real errors and documentation mismatches with a low ratio of false positives.
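
A rough sketch of the documentation-versus-code comparison described above, using regular expressions over a PHP snippet; the docblock parsing and literal-based inference here are toy placeholders, nothing like the Phalanger-based framework:

```python
import re

# A PHP function with a docblock, and a call site, both as plain strings.
php_source = """
/**
 * @param int    $count
 * @param string $label
 */
function render($count, $label) { /* ... */ }

render("three", "items");
"""

def documented_types(src):
    """Map parameter name -> type declared in @param lines of the docblock."""
    return {name: ptype
            for ptype, name in re.findall(r"@param\s+(\w+)\s+\$(\w+)", src)}

def inferred_call_types(src, func, params):
    """Crudely infer types of literal arguments at the first call to `func`."""
    call = re.search(rf"{func}\s*\(([^)]*)\)\s*;", src)
    args = [a.strip() for a in call.group(1).split(",")]
    types = ["string" if a.startswith('"') else "int" if a.isdigit() else "unknown"
             for a in args]
    return dict(zip(params, types))

declared = documented_types(php_source)
actual = inferred_call_types(php_source, "render", ["count", "label"])

for param in declared:
    if actual.get(param) not in (None, "unknown", declared[param]):
        print(f"${param}: documented as {declared[param]}, "
              f"called with {actual[param]}")   # flags $count: int vs string
```
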
60

Demand-Driven Static Analysis of Heap-Manipulating Programs

Chenguang Sun (5930306) 16 August 2019 (has links)
Modern Java application frameworks present significant challenges for existing static analysis algorithms. Such challenges include large-scale code bases, heap-carried dependency, and asynchronous control flow caused by message passing.

Existing analysis algorithms are not suitable to deal with these challenges. One reason is that analyses are typically designed to operate homogeneously on the whole program. This leads to scalability problems when the analysis algorithms are used on applications built as plug-ins of large frameworks, since the framework code is analyzed together with the application code. Moreover, the asynchronous message passing of the actor model adopted by most modern frameworks leads to control flows which are not modeled by existing analyses.

This thesis presents several techniques for more powerful debugging and program understanding tools based on slicing. In general, slicing-based techniques aim to discover interesting properties of a large program by reasoning precisely only about the relevant part of the program (typically a small amount of code), abstracting away the behavior of the rest of the program.

The key contribution of this thesis is a demand-driven framework that enables precise and scalable analyses of programs built on large frameworks. A slicing algorithm, which can handle heap-carried dependence, is used to identify the program elements relevant to an analysis query. We instantiated the framework to infer correlations between registration call sites and callback methods, and to resolve asynchronous control flows caused by asynchronous message passing.
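
A loose sketch of the demand-driven flavour of slicing: the query names a single registration call site, and a backward pass visits only the statements that can affect it, leaving unrelated framework setup untouched. The fact representation and the toy program are invented for illustration; the thesis handles heap-carried dependences and real framework code:

```python
# A toy program as a list of (line, defined_var, used_vars) facts. The query asks
# which callback object flows into the register() call on line 9.
facts = [
    (1, "config",   []),
    (2, "logger",   ["config"]),     # irrelevant to the query
    (3, "handler",  []),             # new OrderHandler()
    (4, "wrapped",  ["handler"]),
    (5, "metrics",  ["logger"]),     # irrelevant to the query
    (9, "register", ["wrapped"]),    # registration call site (the query)
]

def backward_slice(facts, query_line):
    """Collect only the lines that the queried statement transitively depends on."""
    defs = {d: (line, uses) for line, d, uses in facts}
    line, uses = next((l, u) for l, d, u in facts if l == query_line)
    relevant, worklist = {line}, list(uses)
    while worklist:
        var = worklist.pop()
        if var in defs:
            def_line, def_uses = defs[var]
            if def_line not in relevant:
                relevant.add(def_line)
                worklist.extend(def_uses)
    return sorted(relevant)

# Lines 2 and 5 (the logging/metrics setup) are never visited: the analysis
# effort is proportional to the slice, not to the whole program.
print(backward_slice(facts, 9))   # -> [3, 4, 9]
```
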
