51

Effective Static Debugging via Componential Set-Based Analysis

January 1997 (has links)
Sophisticated software systems are inherently complex. Understanding, debugging and maintaining such systems requires inferring high-level characteristics of the system's behavior from a myriad of low-level details. For large systems, this quickly becomes an extremely difficult task. MrSpidey is a static debugger that augments the programmer's ability to deal with such complex systems. It statically analyzes the program and uses the results of the analysis to identify and highlight any program operation that may cause a run-time fault. The programmer can then investigate each potential fault site and, using the graphical explanation facilities of MrSpidey, determine if the fault will really happen or whether the corresponding correctness proof is beyond the analysis's capabilities. In practice, MrSpidey has proven to be an effective tool for debugging programs under development and understanding existing programs. The key technology underlying MrSpidey is componential set-based analysis. This is a constraint-based, whole-program analysis for object-oriented and functional programs. The analysis first processes each program component (e.g., module or package) independently, generating and simplifying a constraint system describing the data flow behavior of that component. The analysis then combines and solves these simplified constraint systems to yield invariants characterizing the run-time behavior of the entire program. This component-wise approach yields an analysis that handles significantly larger programs than previous analyses of comparable accuracy. The simplification of constraint systems raises a number of questions. In particular, we need to ensure that simplification preserves the observable behavior, or solution space, of a constraint system. This dissertation provides a complete proof-theoretic and algorithmic characterization of the observable behavior of constraint systems, and establishes a close connection between the observable equivalence of constraint systems and the equivalence of regular tree grammars. We exploit this connection to develop a complete algorithm for deciding the observable equivalence of constraint systems, and to adapt a variety of algorithms for simplifying regular tree grammars to the problem of simplifying constraint systems. The resulting constraint simplification algorithms yield an order of magnitude reduction in the size of constraint systems for typical program expressions.
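To make the componential idea concrete, the sketch below models a component's data-flow constraints as simple "value flows into variable" and "variable flows into variable" facts: `simplify` eliminates a component's internal variables while preserving the flows observable at its interface, and `solve` combines simplified systems into whole-program invariants. This is an illustrative toy, not MrSpidey's constraint language or simplification algorithm; the names `solve`, `simplify`, and the `val:` prefix are assumptions made for the example.

```python
from collections import defaultdict

def _split(constraints):
    """Separate 'value flows into var' facts from 'var flows into var' edges."""
    edges, consts = defaultdict(set), defaultdict(set)
    for lhs, rhs in constraints:
        if isinstance(lhs, str) and lhs.startswith("val:"):
            consts[rhs].add(lhs)       # abstract value lhs reaches variable rhs
        else:
            edges[lhs].add(rhs)        # values of variable lhs flow into rhs
    return edges, consts

def _reachable(edges, start):
    """Set variables reachable from `start` by following flow edges."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in edges[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def simplify(constraints, interface):
    """Eliminate internal variables; keep only flows observable at the interface."""
    edges, consts = _split(constraints)
    small = set()
    for var in interface:                          # interface-to-interface flows
        small.update((var, other) for other in _reachable(edges, var)
                     if other in interface and other != var)
    for var, values in consts.items():             # values reaching the interface
        for other in _reachable(edges, var):
            if other in interface:
                small.update((v, other) for v in values)
    return small

def solve(constraints):
    """Whole-program step: map each variable to the abstract values reaching it."""
    edges, consts = _split(constraints)
    result = defaultdict(set)
    for var, values in consts.items():
        for other in _reachable(edges, var):
            result[other] |= values
    return dict(result)

# Component with internal variable "tmp"; simplification removes it.
component = {("val:int", "x"), ("x", "tmp"), ("tmp", "out")}
summary = simplify(component, interface={"x", "out"})
# Link the simplified summary with a constraint from another component and solve.
print(solve(summary | {("out", "y")}))   # x, out, y all receive {"val:int"}
```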
52

Implementation of discoverable digital clone library for knowledge transfer and improved productivity.

Gadebe, Moses Lesiba. January 2013 (has links)
M. Tech. Information Networks / A code clone is a code portion in one source code fragment that is similar or identical to a code portion in another source code fragment. Clones in applications are inevitable within an organization's intranet. There are a great number of clone detection tools to help maintenance programmers locate and refactor code clones where they exist. Currently, the clone detection process has not been fully explored as a way to construct digital libraries that store clones for reuse and shareability. This is because most clone detection techniques produce indexed statistical reports as textual files showing related groups of code fragments. Other techniques visualize clones to depict clone version histories as genealogies. Furthermore, current techniques do not indicate the reusability and shareability worthiness of the detected clones in a taxonomy. In this mini-dissertation, a Clone Wrapper Detection Technique prototype is developed to detect commonly used structural clones and store them in a Discoverable Digital Clone Library hosted in a Fedora Repository. The stored clone metadata are then extracted to induce a Clone Family Tree Ontology of related class clones in a taxonomy based on an abstraction (inheritance and composition hierarchy) process.
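As an illustration of the detect-and-catalogue pipeline, the sketch below hashes normalized code fragments so that fragments differing only in identifier names and whitespace fall into the same clone class; each class is the kind of record whose metadata could then be deposited in a repository such as Fedora. The normalization rules, the `detect_clones` function, and the record fields are assumptions for the example, not the Clone Wrapper Detection Technique itself.

```python
import hashlib
import re
from collections import defaultdict

# Keywords preserved during normalization; every other identifier becomes "ID".
KEYWORDS = {"if", "else", "for", "while", "return", "class", "def",
            "public", "private", "static", "void", "int", "new"}

def normalize(code: str) -> str:
    """Drop comments and whitespace and rename identifiers to a placeholder."""
    code = re.sub(r"//.*|#.*", "", code)
    code = re.sub(r"\b[A-Za-z_]\w*\b",
                  lambda m: m.group(0) if m.group(0) in KEYWORDS else "ID",
                  code)
    return re.sub(r"\s+", " ", code).strip()

def detect_clones(fragments: dict) -> list:
    """Group fragments (name -> source) that normalize to the same text."""
    groups = defaultdict(list)
    for name, source in fragments.items():
        digest = hashlib.sha1(normalize(source).encode()).hexdigest()
        groups[digest].append(name)
    # Each group with two or more members is a candidate clone class whose
    # metadata (id, members, size) could be stored in the clone library.
    return [{"clone_id": h, "members": sorted(m), "size": len(m)}
            for h, m in groups.items() if len(m) > 1]

fragments = {
    "Billing.total": "int total(int a, int b) { return a + b; }  // sum",
    "Cart.add":      "int add(int x, int y) { return x + y; }",
}
print(detect_clones(fragments))   # both fragments normalize to one clone class
```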
53

Fault Location via Precise Dynamic Slicing

Zhang, Xiangyu January 2006 (has links)
Developing automated techniques for identifying a fault candidate set (i.e., the subset of executed statements that contains the faulty code responsible for the failure during a program run) can greatly reduce the effort of debugging. Over 15 years ago, precise dynamic slicing was proposed to identify a fault candidate set as consisting of all executed statements that influence the computation of an incorrect value through a chain of data and/or control dependences. However, the challenge of making precise dynamic slicing practical has not been addressed. This dissertation addresses this challenge and makes precise dynamic slicing useful for debugging realistic applications. First, the cost of computing precise dynamic slices is greatly reduced. Second, innovative ways of using precise dynamic slicing are identified to produce small failure candidate sets. The key cause of the high space and time cost of precise dynamic slicing is the very large size of the dynamic dependence graphs that are constructed and traversed for computing dynamic slices. By developing a novel series of optimizations, the size of the dynamic dependence graph is greatly reduced, leading to a compact representation that can be rapidly traversed. The average space needed is reduced from 2 Gigabytes to 94 Megabytes for dynamic dependence graphs corresponding to executions with average lengths of 130 million instructions. The precise dynamic slicing time is reduced from up to 20 minutes for a demand-driven algorithm to 16 seconds. A compression algorithm is developed to further reduce dependence graph sizes. The resulting representation is space-efficient enough that the dynamic execution history of a couple of billion executed instructions can be held in a Gigabyte of memory. To further scale precise dynamic slicing to longer program runs, a novel approach is proposed that uses checkpointing/logging to enable collection of the dynamic history of only the relevant window of execution. Classical backward dynamic slicing can often produce fault candidate sets that contain thousands of statements, making the task of identifying faulty code very time consuming for the programmer. Novel techniques are proposed to improve the effectiveness of dynamic slicing for fault location. The merit of these techniques lies in identifying multiple forms of dynamic slices in a failed run and then intersecting them to produce smaller fault candidate sets. Using these techniques, the fault candidate set size corresponding to the backward dynamic slice is reduced by nearly a factor of 3. A fine-grained statistical pruning technique based on value profiles is also developed, and this technique reduces the sizes of backward dynamic slices by a factor of 2.5. In conclusion, this dissertation greatly reduces the cost of precise dynamic slicing and presents techniques to improve its effectiveness for fault location.
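The core of backward dynamic slicing can be sketched in a few lines: an execution trace is a set of statement instances, each recording the instances it data- or control-depends on, and the slice is everything reachable backwards from the instance that produced the incorrect value. The dissertation's compact graph representations, compression, and checkpointing are not modeled here, and where it intersects different kinds of slices this toy simply intersects slices taken from different criteria; the names `Instance`, `backward_slice`, and `fault_candidates` are illustrative.

```python
from collections import namedtuple

# One node per executed statement instance: which static statement it came
# from, and the ids of the instances it data- or control-depends on.
Instance = namedtuple("Instance", ["id", "stmt", "deps"])

def backward_slice(trace, criterion):
    """Static statements in the backward dynamic slice of instance `criterion`."""
    seen, stack = {criterion}, [criterion]
    while stack:
        for dep in trace[stack.pop()].deps:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return {trace[i].stmt for i in seen}

def fault_candidates(trace, failing_instance, *other_criteria):
    """Intersect slices from several criteria to shrink the candidate set."""
    candidates = backward_slice(trace, failing_instance)
    for crit in other_criteria:
        candidates &= backward_slice(trace, crit)
    return candidates

# Tiny trace: instance 4 printed a wrong value that depends on instances 1 and 3.
trace = {
    1: Instance(1, "s1: x = read()", []),
    2: Instance(2, "s2: y = read()", []),
    3: Instance(3, "s3: z = x * 2",  [1]),
    4: Instance(4, "s4: print(z)",   [1, 3]),
}
print(backward_slice(trace, 4))   # s2 is excluded from the fault candidate set
```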
54

Automatic test data generation

Offutt, Andrew Jefferson, VI 08 1900 (has links)
No description available.
55

Tools and Methods for Analysis, Debugging, and Performance Improvement of Equation-Based Models

Sjölund, Martin January 2015 (has links)
Equation-based object-oriented (EOO) modeling languages such as Modelica provide a convenient, declarative method for describing models of cyber-physical systems. Because of the ease of use of EOO languages, large and complex models can be built with limited effort. However, current state-of-the-art tools do not provide the user with enough information when errors appear or simulation results are wrong. It is of paramount importance that such tools give the user enough information to correct errors or understand where the problems that lead to wrong simulation results are located. However, understanding the model translation process of an EOO compiler is a daunting task that requires knowledge not only of the numerical algorithms that the tool executes during simulation, but also of the complex symbolic transformations being performed. As part of this work, methods have been developed and explored where the EOO tool, an enhanced Modelica compiler, records the transformations during the translation process in order to provide better diagnostics, explanations, and analysis. This information is used to generate better error messages during translation. It is also used to provide better debugging for a simulation that produces unexpected results or where numerical methods fail. Meeting deadlines is particularly important for real-time applications. It is usually essential to identify possible bottlenecks and either simplify the model or give hints to the compiler that enable it to generate faster code. When profiling and measuring execution times of parts of the model, the recorded information can also be used to find out why a particular system model executes slowly. Combined with debugging information, it is possible to find out why a particular system of equations is slow to solve, which helps in understanding what can be done to simplify the model. A tool with a graphical user interface has been developed to make debugging and performance profiling easier. Both debugging and profiling have been combined into a single view, so that performance metrics are mapped to equations, which in turn are mapped to debugging information. The algorithmic part of Modelica was extended with meta-modeling constructs (MetaModelica) for language modeling. In this context, a quite general approach to debugging and compilation from (extended) Modelica to C code was developed, which makes it possible to use the same executable format for simulation executables as for compiler bootstrapping when the compiler written in MetaModelica compiles itself. Finally, a method and tool prototype suitable for speeding up simulations has been developed. It works by partitioning the model at appropriate places and compiling a simulation executable for a suitable parallel platform.
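A minimal sketch of the transformation-recording idea is given below (it does not reflect the compiler's actual data structures): every symbolic rewrite of an equation stores the rule applied together with the equation's form before and after, so a failing or slow equation in the generated code can be traced back through its transformation chain. The class and method names, and the example rules, are assumptions made for the illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Equation:
    source_name: str                      # name/location in the original model
    text: str                             # current (possibly rewritten) form
    history: list = field(default_factory=list)

    def rewrite(self, rule: str, new_text: str):
        """Apply a symbolic transformation and record its provenance."""
        self.history.append((rule, self.text, new_text))
        self.text = new_text
        return self

    def explain(self) -> str:
        """Replay the transformation chain for diagnostics or profiling views."""
        lines = [f"{self.source_name}: {before}  --[{rule}]-->  {after}"
                 for rule, before, after in self.history]
        return "\n".join(lines) or f"{self.source_name}: untouched"

# Example: an alias elimination followed by a solve-for step.
eq = Equation("Tank.massBalance", "der(m) = inflow - outflow")
eq.rewrite("alias elimination", "der(m) = qin - qout")
eq.rewrite("solve for der(m)", "der(m) := qin - qout")
print(eq.explain())
```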
56

Perfiles de testing aplicados a modelos de software [Testing profiles applied to software models]

Palacios, Luis Fernando January 2010 (has links)
The complexity of software systems has increased. Software changes and evolves throughout the development life cycle, so it is essential to have a testing process that detects errors and faults in the implementation at every stage, while also guaranteeing the quality of the final product. Validation and verification techniques can also be applied to software test models, making it possible to automate the creation and execution of test cases, increasing productivity and reducing costs. Model-Driven software Development (MDD) proposes a new mechanism for building software through a process guided by models, ranging from the most abstract (Platform Independent Model, PIM) to the most concrete (Platform Specific Model, PSM), applying successive transformations and/or refinements that lead to the code through a final transformation. Within the context of MDD, Model-Driven Testing (MDT) is a form of black-box testing [Bei 95] that uses structural and behavioral models to automate the generation of test cases. To this end, MDT uses a language defined with profile mechanisms based on the UML 2.0 Testing Profile (U2TP) [U2TP 04]. This language makes it possible to design the artifacts of test systems and to identify the essential concepts of the domain in question, adapted to technological platforms and specific domains. The UML Testing Profile specification also provides a formal framework for defining a test model under the black-box approach, including the rules that must be applied to transform that model into executable code. Tools currently exist that are based on formal program validation and verification techniques and on model checking, and they focus mainly on how to express the transformations. However, they address automatic validation and verification through a practical alternative such as model-driven testing to a lesser extent. Testing is the process of exercising a product to verify that it satisfies the requirements and to identify differences between actual and expected behavior (IEEE Standard for Software Test Documentation, 1983), which is simpler and does not require experience in formal methods compared with the techniques mentioned above. Both UML and its extensions, such as the UML Testing Profile, are defined through a technology specification standardized by the OMG (Object Management Group) called MOF [MOF] (Meta-Object Facility). MOF is a meta-metamodel used to create metamodels that can be transformed into text through tools that support the MOF definition. MOFScript [Oldevik 06] is a textual language based on QVT [QVT] ("Queries, Views and Transformations") that can be used to transform MOF metamodels into text. The goal of this thesis is to develop a tool that automatically transforms structural and behavioral test models into JUnit [JUnit] code.
To achieve this goal, we define the language for modeling test domains using the UML Testing Profile, as well as the formal rules for transforming U2TP models into JUnit testing code, based on the MOFScript language. This thesis is organized as follows. Chapter 2 introduces the concepts of model-driven software development. Chapter 3 describes model-driven software testing. Chapter 4 defines the rules for transforming test models into JUnit code. Chapter 5 describes the implementation of the tool that automatically transforms models defined with the UML Testing Profile into JUnit code, and also describes the architecture used in the project. Chapter 6 presents a case study of the work from the perspective of the end user. Chapter 7 reviews related work. Chapter 8 presents the final conclusions and cites future work.
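To give a flavor of the final transformation step, the sketch below turns a test case described as a plain dictionary into JUnit source text. The thesis defines the actual rules in MOFScript over U2TP models; the dictionary fields (`test_context`, `test_case`, `calls`, `expected`) and the template are illustrative assumptions, not part of U2TP.

```python
# Minimal model-to-text sketch: a behavioral test model rendered as a JUnit class.
JUNIT_TEMPLATE = """import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class {context}Test {{
    @Test
    public void {name}() {{
{body}
        assertEquals({expected}, actual);
    }}
}}
"""

def to_junit(test_model: dict) -> str:
    """Generate a JUnit test class from a simple test-case description."""
    calls = "\n".join(f"        {line}" for line in test_model["calls"])
    return JUNIT_TEMPLATE.format(
        context=test_model["test_context"],
        name=test_model["test_case"],
        body=calls,
        expected=test_model["expected"],
    )

model = {
    "test_context": "Account",
    "test_case": "depositIncreasesBalance",
    "calls": ["Account a = new Account(100);",
              "int actual = a.deposit(50);"],
    "expected": 150,
}
print(to_junit(model))
```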
57

Semi-automatic fault localization

Jones, James Arthur. January 2008 (has links)
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008. / Committee Chair: Harrold, Mary Jean; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Reiss, Steven; Committee Member: Rugaber, Spencer.
58

Testing, tracing und debugging bei embedded systems [Testing, tracing, and debugging in embedded systems]

Langer, Josef. January 2008 (has links)
Also: Linz, Universität, Diss., 2008.
59

SOFTVIZ, a step forward

Singh, Mahim. January 2004 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Eclipse plug-in; tracer; timeline; software visualization; sunburst; SoftViz; ParaVis; error categorization framework; debugging; program understanding. Includes bibliographical references (p. 85-89).
60

Eine feingranulare SESAM-Variante [A fine-grained SESAM variant]

Hampp, Tilmann. January 2001 (has links)
Stuttgart, Univ., diploma thesis, 2001.
