1

Perfiles de testing aplicados a modelos de software

Palacios, Luis Fernando January 2010 (has links)
The complexity of software systems has increased. Software undergoes changes and evolves throughout the entire development life cycle, so it is essential to have a testing process that detects errors and faults in the implementation at every stage, while also guaranteeing the quality of the final product. Validation and verification techniques can also be applied to software test models, making it possible to automate the creation and execution of test cases, increasing productivity and reducing costs. Model-Driven software Development (MDD) proposes a new mechanism for building software through a process guided by models, ranging from the most abstract (Platform Independent Model, PIM) to the most concrete (Platform Specific Model, PSM), through successive transformations and/or refinements that lead to the code by applying a final transformation. Within the context of MDD, Model-Driven Testing (MDT) is a form of black-box testing [Bei 95] that uses structural and behavioral models to automate the generation of test cases. To do so, MDT uses a language defined with profile mechanisms based on the UML 2.0 Testing Profile (U2TP) [U2TP 04]. This language makes it possible to design the artifacts of test systems and to identify the essential concepts of the domain in question, adapted to specific technological platforms and domains. The UML Testing Profile specification also provides a formal framework for defining a test model under the black-box approach, including the rules that must be applied to transform that model into executable code. Tools based on formal program validation and verification techniques and on model checking currently exist, but they focus mainly on how to express the transformations; automatic validation and verification through a practical alternative such as model-driven testing is addressed to a lesser extent. Testing is the process of exercising a product to verify that it satisfies its requirements and to identify differences between actual and expected behavior (IEEE Standard for Software Test Documentation, 1983); compared with the techniques mentioned above, it is simpler and does not require experience in formal methods. Both UML and its extensions, including the UML Testing Profile, are defined through a technology specification standardized by the OMG (Object Management Group) called MOF [MOF] (Meta-Object Facility). MOF is a meta-metamodel used to create metamodels that can be transformed into text by tools that support the MOF definition. MOFScript [Oldevik 06] is a textual language based on QVT [QVT] ("Queries, Views and Transformations") that can be used to transform MOF metamodels into text. The goal of this thesis is to develop a tool that automatically transforms structural and behavioral test models into JUnit code [JUnit].
To achieve this goal, we define the language for modeling test domains using the UML Testing Profile, together with formal rules, based on the MOFScript language, for transforming U2TP models into JUnit test code. This thesis is organized as follows. Chapter 2 introduces the concepts of model-driven software development. Chapter 3 describes model-driven software testing. Chapter 4 defines the transformation rules from test models to JUnit code. Chapter 5 describes the implementation of the tool that automatically transforms models defined with the UML Testing Profile into JUnit code, as well as the architecture used in the project. Chapter 6 presents a case study of the work from the end user's perspective. Chapter 7 discusses related work. Chapter 8 presents the final conclusions and outlines future work.
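As an illustration of the transformation target, the following is a minimal sketch of the kind of JUnit 4 test such a U2TP-to-JUnit transformation might emit; the Account class, its methods, and the test values are hypothetical stand-ins for whatever system under test the model would describe, and the code is not taken from the thesis.

```java
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountTestContext {

    // Minimal hypothetical SUT, included only to make the sketch self-contained.
    static class Account {
        private int balance;
        Account(int initial) { balance = initial; }
        void withdraw(int amount) { balance -= amount; }
        int getBalance() { return balance; }
    }

    private Account sut;  // plays the <<SUT>> role from the U2TP test model

    @Before
    public void setUp() {
        // Test configuration: would correspond to the setup of the U2TP test context.
        sut = new Account(100);
    }

    @Test
    public void withdrawReducesBalance() {
        // Behavior that a generated test case might derive from an interaction diagram.
        sut.withdraw(30);
        assertEquals(70, sut.getBalance());
    }
}
```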
2

Automatic relative debugging

Searle, Aaron James January 2006 (has links)
Relative Debugging is a paradigm that assists users in locating errors in programs that have been corrected or enhanced. In particular, the contents of key data structures in the development version are compared with the contents of the corresponding data structures in an existing version as the two programs execute. If the values of two corresponding data structures differ at points where they should not, an error may exist and the user is notified. Relative Debugging requires users to identify the corresponding data structures within the two programs and the locations at which the comparisons should be performed. Identifying useful data structures and comparison points quickly and effectively requires a detailed knowledge of the two programs under consideration; without that knowledge, the task can quickly become difficult and time-consuming. Prior to the research detailed in this thesis, the Relative Debugging paradigm did not provide any assistance to help users quickly and effectively identify suitable data structures and program points for discovering the source of an error. Our research efforts have been directed at enhancing the Relative Debugging paradigm. The outcome of this research is a set of techniques that make Relative Debugging users more productive and significantly enhance the paradigm. Specifically, the research has resulted in the following three contributions: 1. A Systematic Approach to Relative Debugging. 2. Data Flow Browsing for Relative Debugging. 3. Automatic Relative Debugging. These contributions have enhanced the Relative Debugging paradigm and allow errors to be localized with little human interaction. Minimizing the user's involvement reduces the cost of debugging programs that have been corrected or enhanced, and has a significant impact on current debugging practices.
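As a rough illustration of the core check the paradigm relies on, the plain-Java sketch below compares a key data structure captured from the existing version with its counterpart in the development version at one comparison point; the class, method names, tolerance, and values are invented for the example and do not reflect any particular relative-debugging tool's interface.

```java
// Hypothetical sketch: compare corresponding data structures from two program
// versions at a comparison point and flag a divergence for the user.
public class RelativeCheck {

    static boolean matches(double[] reference, double[] development, double tolerance) {
        if (reference.length != development.length) return false;
        for (int i = 0; i < reference.length; i++) {
            if (Math.abs(reference[i] - development[i]) > tolerance) {
                return false;  // divergence: a candidate location for the error
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[] fromOldVersion = {1.0, 2.0, 3.0};   // captured from the existing version
        double[] fromNewVersion = {1.0, 2.0, 3.5};   // captured from the development version
        if (!matches(fromOldVersion, fromNewVersion, 1e-9)) {
            System.out.println("State diverges at this comparison point; notify the user.");
        }
    }
}
```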
3

Improving Stability and Parameter Selection of Data Processing Programs

Wen-Chuan Lee (8206287) 07 January 2020 (has links)
Data-processing programs are becoming increasingly important in the Big-data era. However, two notable problems of these programs may cause sub-optimal data-processing results. On one hand, these programs contain a large number of floating-point computations. Due to the limited precision of floating-point representations, errors are introduced, propagated and accumulated in series of computations, making the computation results unreliable. We call this problem floating-point instability. On the other hand, these programs are heavily parameterized. As no universal optimal parameter configuration exists for all possible inputs, the program parameters should be carefully chosen and tuned for each input; otherwise, the result would be sub-optimal. Manual tuning is infeasible because the number of parameters and the range of each parameter value may be large.

We address these two challenges in this dissertation. For the floating-point instability problem, we develop a novel runtime technique to capture different output variations in the presence of instability. It features the idea of transforming every floating-point value into a vector of multiple values: the values added to create the vector are obtained by introducing artificial errors that are upper bounds of actual errors. The propagation of artificial errors models the propagation of actual errors. When values in vectors result in discrete execution differences (e.g., following different paths), the execution is forked to capture the resulting output variations.

For parameterized data-processing programs, we develop a white-box program tuning framework to tune the program parameter configuration toward an optimal data-processing result for each program input. To further reduce the parameter configuration overhead, we propose the first general framework to inject artificial intelligence (AI) into the program, so that the intelligent program is able to predict the parameter configuration for each incoming input directly. However, as in many other ML/AI applications, the crucial challenge lies in feature selection, i.e., the selection of the feature variables for predicting the target parameter specified by the users. We therefore propose a novel approach that combines program analysis and statistical analysis for better selection of program feature variables, which in turn helps predict the target parameter better and improves the result.
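A minimal sketch of the value-vector idea described above, assuming a simple multiplicative error model: each floating-point value is carried together with perturbed companions, and a discrete decision taken differently by the companions signals an output variation. The ShadowDouble class, the chosen relative error, and the threshold are illustrative assumptions, not the dissertation's actual runtime technique or its error bounds.

```java
// Hypothetical sketch: a floating-point value plus companions carrying injected
// artificial errors, so the spread of the vector tracks error propagation.
public class ShadowDouble {
    final double[] values;  // values[0] is the original; the rest carry injected error

    ShadowDouble(double v, double relativeError) {
        values = new double[] { v, v * (1 + relativeError), v * (1 - relativeError) };
    }

    private ShadowDouble(double[] vs) { values = vs; }

    ShadowDouble add(ShadowDouble other) {
        double[] r = new double[values.length];
        for (int i = 0; i < values.length; i++) r[i] = values[i] + other.values[i];
        return new ShadowDouble(r);
    }

    // A branch decided differently by the companions indicates a discrete
    // execution difference that plain execution would miss.
    boolean divergesOn(double threshold) {
        boolean first = values[0] > threshold;
        for (double v : values) if ((v > threshold) != first) return true;
        return false;
    }

    public static void main(String[] args) {
        ShadowDouble one = new ShadowDouble(1.0, 1e-7);
        ShadowDouble sum = one;
        for (int i = 0; i < 1_000_000; i++) sum = sum.add(one);  // errors accumulate
        System.out.println("Branch divergence at threshold 1_000_001.0: "
                + sum.divergesOn(1_000_001.0));
    }
}
```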
4

Enfoque para pruebas de unidad basado en la generación aleatoria de objetos

Barrientos, Pablo Andrés 28 April 2014 (has links)
Software testing is a crucial yet highly challenging task within the software development process. Testing makes it possible to find errors and problems in the software with respect to its specification and plays a fundamental role in assuring product quality. Among the kinds of tests that can be performed on software are unit, load, integration, and functional tests. Each has different goals and is performed at different stages of software development. In the first kind, tests are developed for individual components of a software system. Developers specify and code tests to cover all, or at least a significant part, of the possible states/configurations of the software artifact or unit, in order to simulate the component's environment and discover the presence of errors or "bugs". Since writing all of those tests manually is costly, unit testing is usually carried out inefficiently or simply left aside. Beyond the effort involved, the picture is even worse because testing cannot be used to prove the absence of errors in software, only their presence. It is therefore necessary to attack the problem from different approaches, each with its own strengths and advantages. Many techniques for software testing currently exist, and most of them are based on automating steps or execution paths, with fixed values or predefined (hard-coded) or static components and specific conditions. This master's thesis presents an approach to unit testing in object-oriented programming based on random object generation. The basic foundation of the proposed approach is random testing. It also presents a unit testing tool that uses this approach, written in a widely used object-oriented language. Random testing (RT) as a technique is not new, nor is the generation of random values for tests. In the functional paradigm, there is a well-known tool for checking specifications of functions called QuickCheck. This tool (written in Haskell) and its underlying ideas are used as the foundation for the tool created in this work. The tool developed here also covers features that are inherent to the object-oriented paradigm, such as object state (in particular, stateful singleton objects), abstract classes, and interfaces, which do not exist in pure functional programming. The contribution of this master's thesis is the presentation of an alternative way of performing unit tests in object-oriented programming (OOP), based on earlier work for the functional paradigm. It also presents a tool called YAQC4J that embodies these ideas in a widely used object-oriented language. Finally, examples are included that illustrate the use of the tool, and a comparison is presented with existing tools that have attempted to implement this testing approach. This work is aimed at software developers interested in alternative solutions for unit testing, complementary to existing unit testing approaches.
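As a hand-rolled illustration of the random-testing idea the thesis builds on, the following sketch generates random input objects and checks a simple property on each run. It deliberately avoids guessing at YAQC4J's or QuickCheck's actual APIs, so every name and the chosen property are illustrative only.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of QuickCheck-style random testing in plain Java: generate many random
// lists and check that reversing a list twice yields the original list.
public class ReverseTwiceProperty {

    static List<Integer> randomList(Random rnd) {
        int size = rnd.nextInt(20);
        List<Integer> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) list.add(rnd.nextInt(1000));
        return list;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);  // fixed seed so failures are reproducible
        for (int run = 0; run < 100; run++) {
            List<Integer> input = randomList(rnd);
            List<Integer> twiceReversed = new ArrayList<>(input);
            Collections.reverse(twiceReversed);
            Collections.reverse(twiceReversed);
            if (!twiceReversed.equals(input)) {
                throw new AssertionError("Property violated for input: " + input);
            }
        }
        System.out.println("Property held on 100 randomly generated lists.");
    }
}
```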
5

Addressing high dimensionality and lack of feature models in testing of software product lines

SOUTO, Sabrina de Figueirêdo 31 March 2015 (has links)
Software Product Lines (SPLs) allow engineers to systematically build families of software products, defined by a unique combination of features (increments in functionality), improving both the efficiency of the software development process and the quality of the software developed. However, testing these kinds of systems is challenging, as it may require running each test against a combinatorial number of products. We call this problem the High Dimensionality Problem. Another obstacle to product line testing is the absence of Feature Models (FMs), making it difficult to discover the real causes for test failures. We call this problem the Lack of Feature Model Problem. The High Dimensionality Problem is associated with the large space of possible configurations that an SPL can reach. If an SPL has n boolean features, for example, there are 2^n possible feature combinations. Therefore, systematically testing this kind of system may require running each test against all those combinations, in the worst case. The Lack of Feature Model Problem is related to the absence of feature models. The FM enables accurate categorization of failing tests as failures of programs or the tests themselves, rather than as failures due to inconsistent combinations of features. For this reason, the lack of an FM presents a huge challenge in discovering the true causes for test failures. Aiming to solve these problems, we propose two lightweight techniques: SPLat and SPLif. SPLat is a new approach to dynamically prune irrelevant configurations: the configurations to run for a test can be determined during test execution by monitoring accesses to configuration variables. As a result, SPLat reduces the number of configurations. Consequently, SPLat is lightweight compared to prior work that used static analysis and heavyweight dynamic execution. SPLif is a technique for testing SPLs that does not require a priori availability of feature models. Our insight is to use a profile of passing and failing test runs to quickly identify test failures that are indicative of a problem (in test or code) as opposed to a manifestation of execution against an inconsistent combination of features. Experimental results show that SPLat effectively identifies relevant configurations with a low overhead. We also applied SPLat to two large configurable systems (Groupon and GCC), and it scaled without much engineering effort. Experimental results demonstrate that SPLif is useful and effective in quickly finding tests that fail on consistent configurations, regardless of how complete the FMs are. Furthermore, we evaluated SPLif on one large, extensively tested configurable system, GCC, where it helped reveal 5 new bugs, 3 of which have been fixed after our bug reports.
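A minimal sketch of the observation behind SPLat, assuming a configuration object whose feature reads can be intercepted; the class and feature names are hypothetical, and the abstract only states that the real technique monitors accesses to configuration variables during test execution rather than using a wrapper like this one.

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: a test only depends on the configuration variables it
// actually reads, so configurations differing only in unread variables can be pruned.
public class MonitoredConfig {
    private final Map<String, Boolean> features;
    private final Set<String> accessed = new LinkedHashSet<>();

    MonitoredConfig(Map<String, Boolean> features) { this.features = features; }

    boolean isEnabled(String feature) {
        accessed.add(feature);                       // record the access during test execution
        return features.getOrDefault(feature, false);
    }

    Set<String> accessedFeatures() { return accessed; }

    public static void main(String[] args) {
        MonitoredConfig config = new MonitoredConfig(
                Map.of("LOGGING", true, "CACHE", false, "ENCRYPTION", true));
        // The test body touches only LOGGING, so the 2^2 settings of the other
        // two features are irrelevant for this test and need not be re-executed.
        if (config.isEnabled("LOGGING")) {
            System.out.println("log something");
        }
        System.out.println("Features this test depends on: " + config.accessedFeatures());
    }
}
```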
6

Practical High-Coverage Sound Predictive Race Detection

Roemer, Jake 02 October 2019 (has links)
No description available.
7

Acceptability-Oriented Computing

Rinard, Martin C. 01 1900 (has links)
We discuss a new approach to the construction of software systems. Instead of attempting to build a system that is as free of errors as possible, the designer instead identifies key properties that the execution must satisfy to be acceptable to its users. Together, these properties define the acceptability envelope of the system: the region that it must stay within to remain acceptable. The developer then augments the system with a layered set of components, each of which enforces one of the acceptability properties. The potential advantages of this approach include more flexible, resilient systems that recover from errors and behave acceptably across a wide range of operating environments, an appropriately prioritized investment of engineering resources, and the ability to productively incorporate unreliable components into the final software system. / Singapore-MIT Alliance (SMA)
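As a toy illustration of an acceptability-enforcing layer of the kind described above, the sketch below wraps a single property (array indices stay in bounds) and repairs violations by clamping so execution stays within the acceptability envelope; the class name and the clamping policy are invented for the example and are not taken from the paper.

```java
// Hypothetical sketch: a layered component that enforces one acceptability
// property (0 <= index < size) and repairs violations instead of failing.
public class BoundedIndexEnforcer {

    private final int size;

    BoundedIndexEnforcer(int size) { this.size = size; }

    // Clamp the requested index into the acceptable range.
    int acceptableIndex(int requested) {
        if (requested < 0) return 0;
        if (requested >= size) return size - 1;
        return requested;
    }

    public static void main(String[] args) {
        int[] buffer = new int[8];
        BoundedIndexEnforcer enforcer = new BoundedIndexEnforcer(buffer.length);
        // A buggy computation may produce an out-of-range index; the enforcing
        // layer keeps the execution acceptable rather than letting it crash.
        int buggyIndex = 12;
        buffer[enforcer.acceptableIndex(buggyIndex)] = 42;
        System.out.println("Wrote to clamped index " + enforcer.acceptableIndex(buggyIndex));
    }
}
```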
8

Learning Finite State Machine Specifications from Test Cases / Lernen von Spezifikationen in Form von endlichen Zustandsmaschinen aus Testfällen

Werner, Edith Benedicta Maria 01 June 2010 (has links)
No description available.
9

Quality Assurance of Test Specifications for Reactive Systems / Qualitätssicherung von Testspezifikationen für Reaktive Systeme

Zeiß, Benjamin 02 June 2010 (has links)
No description available.
10

Automated Performance Test Generation and Comparison for Complex Data Structures - Exemplified on High-Dimensional Spatio-Temporal Indices

Menninghaus, Mathias 23 August 2018 (has links)
There exist numerous approaches to index either spatio-temporal or high-dimensional data, but none of them can efficiently index hybrid data, that is, data which is both spatio-temporal and high-dimensional. As the best high-dimensional indexing techniques can only index point data, not now-relative data, and the best spatio-temporal indexing techniques suffer from the curse of dimensionality, this thesis introduces the Spatio-Temporal Pyramid Adapter (STPA). The STPA maps spatio-temporal data to points, maps now-values to the median of the data set, and indexes them with the pyramid technique. For high-dimensional and spatio-temporal index structures, no generally accepted benchmark exists. Most index structures are evaluated only by custom benchmarks and compared to a tiny set of competitors. Benchmarks may be biased, as a structure may be designed to perform well in a certain benchmark, or a benchmark may not cover a certain speciality of the investigated structures. In this thesis, the Interface Based Performance Comparison (IBPC) technique is introduced. It automatically generates test sets with high code coverage on the system under test (SUT) on the basis of all functions defined by a certain interface which all competitors support. Every test set is executed on every SUT, and the performance results are weighted by the achieved coverage and summed up. These weighted performance results are then used to compare the structures. An implementation of the IBPC, the Performance Test Automation Framework (PTAF), is compared to a classic custom benchmark, a workload generator whose parameters are optimized by a genetic algorithm, and a specific PTAF alternative which incorporates the specific behavior of the systems under test. This is done for a set of two high-dimensional spatio-temporal indices and twelve variants of the R-tree. The evaluation indicates that PTAF performs at least as well as the other approaches in terms of minimal test cases with maximized coverage. Several case studies on PTAF demonstrate its widespread abilities.
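A small sketch of the scoring step described above, in which each test set's performance result is weighted by the coverage it achieved and the weighted results are summed; the class name, the numbers, and the interpretation of the score are illustrative assumptions, not values or code from the thesis.

```java
// Hypothetical sketch of coverage-weighted performance aggregation: one runtime
// and one achieved-coverage value per generated test set, combined into a score.
public class CoverageWeightedScore {

    static double score(double[] runtimesMillis, double[] coverages) {
        double total = 0.0;
        for (int i = 0; i < runtimesMillis.length; i++) {
            total += coverages[i] * runtimesMillis[i];  // weight each runtime by its coverage
        }
        return total;
    }

    public static void main(String[] args) {
        // Example numbers are illustrative only, not measurements from the thesis.
        double[] runtimes = { 120.0, 95.0, 210.0 };   // one entry per generated test set
        double[] coverage = { 0.80, 0.65, 0.90 };     // fraction of the SUT code covered
        System.out.println("Coverage-weighted performance score: "
                + score(runtimes, coverage));
    }
}
```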
