21

Investigation of similarity-based test case selection for specification-based regression testing.

OLIVEIRA NETO, Francisco Gomes de. 10 April 2018 (has links)
Previous issue date: 2014-07-30. / During software maintenance, several modifications may be performed on a specification model in order to satisfy new requirements. Performing regression testing on modified software is known to be a costly and laborious task. Test case selection, test case prioritization, and test suite minimisation, among other methods, aim to reduce these costs by selecting or prioritizing a subset of test cases so that less time, effort, and thus money are involved in performing regression testing. In this doctoral research, we explore the general problem of automatically selecting test cases in a model-based testing (MBT) process where specification models have been modified. Our technique, named Similarity Approach for Regression Testing (SART), selects subsets of test cases that traverse modified regions of a software system's specification model. The strategy relies on similarity-based test case selection, in which similarities between test cases from different software versions are analysed to identify modified elements in a model. In addition, we propose an evaluation approach named Search Based Model Generation for Technology Evaluation (SBMTE), which is based on stochastic model generation and search-based techniques to generate large samples of realistic models, enabling experiments with model-based techniques.
Based on SBMTE, researchers can develop model generator tools that create a space of models based on statistics from real industrial models, and then draw samples from that space to perform experiments. Here we developed a generator that creates instances of Annotated Labelled Transition Systems (ALTS) to be used as input for our MBT process, and then performed an experiment with SART. In this experiment, we concluded that SART's test suite size reduction is robust: it selects subsets with, on average, 92% fewer test cases, while ensuring coverage of all model modifications and revealing defects linked to those modifications. Both SART and our experiment are executable through the LTS-BT tool, enabling researchers to use our selection strategy and reproduce our experiment.
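The similarity-based selection idea can be sketched in a few lines. This is an illustrative toy, not the SART implementation: here a test case is reduced to the set of model transitions it traverses, and a test from the new suite whose best Jaccard similarity against every test in the old suite falls below a threshold is assumed to exercise modified regions of the model. The suite contents and the threshold value are invented for the example.

```python
def jaccard(a, b):
    """Similarity of two test cases, each represented as a set of model transitions."""
    return len(a & b) / len(a | b) if a | b else 1.0

def select_modified(old_suite, new_suite, threshold=0.8):
    """Select tests from the new suite whose best match in the old suite is
    below the threshold, i.e. tests that likely traverse modified regions."""
    selected = []
    for name, steps in new_suite.items():
        best = max((jaccard(steps, s) for s in old_suite.values()), default=0.0)
        if best < threshold:
            selected.append(name)
    return selected

old = {"t1": {"a", "b", "c"}, "t2": {"a", "d"}}
new = {"t1": {"a", "b", "c"}, "t3": {"a", "e", "f"}}  # t3 exercises new transitions
print(select_modified(old, new))  # ['t3']
```

The unchanged test t1 matches its old counterpart exactly and is skipped, while t3, which covers transitions absent from every old test, is selected for regression.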
22

Modelo baseado em agentes para estimar a geração e a distribuição de viagens intraurbanas / Agent based model to estimate the generation and distribution of intra-urban trip

Rochele Amorim Ribeiro 13 December 2011 (has links)
In this work, an agent-based model to estimate trip generation and trip distribution in an intra-urban context (the GDA model) is proposed. The model runs simulations in Multiagent Systems (MAS), with input data concerning dwellers and land use. To estimate trip generation, the MAS simulation was used to build a synthetic population based on sociodemographic information about the dwellers and to obtain an activity plan for each dweller. To estimate trip distribution, the MAS simulation was used to obtain an Origin-Destination (OD) matrix based on the dwellers' activity plans and the land use characteristics. To define the trip distribution rules, theories alternative to gravitational force, such as scale-free networks and path dependence, were tested. The GDA model was applied to the urban area of São Carlos (Brazil), and its estimates were compared both to observed data from an OD survey and to estimates from a gravity model applied to the same area. The results showed that the estimates from the GDA model are as accurate as those from the gravity model. The GDA model also has an advantage over the gravity model in applicability: instead of traffic survey data, which are often expensive and difficult to obtain, it uses dweller and land use information, which is easy to collect and to update periodically.
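As a minimal sketch of the distribution step (not the GDA model itself), each agent trip can be assigned a destination zone with probability proportional to that zone's land-use attraction, with the choices accumulated into an OD matrix. The zones, weights, and trip counts below are invented for illustration.

```python
import random
from collections import Counter

def distribute_trips(trip_origins, attractions, seed=42):
    """trip_origins: one origin zone per trip; attractions: zone -> land-use weight.
    Returns a Counter mapping (origin, destination) pairs to trip counts."""
    rng = random.Random(seed)
    zones = list(attractions)
    weights = [attractions[z] for z in zones]
    od = Counter()
    for origin in trip_origins:
        # Destination choice weighted by the zone's attraction (e.g. jobs, services).
        dest = rng.choices(zones, weights=weights)[0]
        od[(origin, dest)] += 1
    return od

od_matrix = distribute_trips(["A"] * 100 + ["B"] * 50,
                             {"A": 10, "B": 30, "C": 60})
```

Every simulated trip ends up in exactly one OD cell, so the matrix totals match the number of agent trips; richer rules (activity plans, path dependence) would replace the simple weighted draw.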
23

A Model Integrated Meshless Solver (MIMS) for Fluid Flow and Heat Transfer

Gerace, Salvadore 01 January 2010 (has links)
Numerical methods for solving partial differential equations are commonplace in the engineering community and their popularity can be attributed to the rapid performance improvement of modern workstations and desktop computers. The ubiquity of computer technology has allowed all areas of engineering to have access to detailed thermal, stress, and fluid flow analysis packages capable of performing complex studies of current and future designs. The rapid pace of computer development, however, has begun to outstrip efforts to reduce analysis overhead. As such, most commercially available software packages are now limited by the human effort required to prepare, develop, and initialize the necessary computational models. Primarily due to the mesh-based analysis methods utilized in these software packages, the dependence on model preparation greatly limits the accessibility of these analysis tools. In response, the so-called meshless or mesh-free methods have seen considerable interest as they promise to greatly reduce the necessary human interaction during model setup. However, despite the success of these methods in areas demanding high degrees of model adaptability (such as crack growth, multi-phase flow, and solid friction), meshless methods have yet to gain acceptance as a viable alternative to more traditional solution approaches in general solution domains. Although this may be due (at least in part) to the relative youth of the techniques, another potential cause is the lack of focus on developing robust methodologies. The failure to approach development from a practical perspective has prevented researchers from obtaining commercially relevant meshless methodologies which reach the full potential of the approach.
The primary goal of this research is to present a novel meshless approach called MIMS (Model Integrated Meshless Solver) which establishes the method as a generalized solution technique capable of competing with more traditional PDE methodologies (such as the finite element and finite volume methods). This was accomplished by developing a robust meshless technique as well as a comprehensive model generation procedure. By closely integrating the model generation process into the overall solution methodology, the presented techniques are able to fully exploit the strengths of the meshless approach to achieve levels of automation, stability, and accuracy currently unseen in the area of engineering analysis. Specifically, MIMS implements a blended meshless solution approach which utilizes a variety of shape functions to obtain a stable and accurate iteration process. This solution approach is then integrated with a newly developed, highly adaptive model generation process which employs a quaternary triangular surface discretization for the boundary, a binary-subdivision discretization for the interior, and a unique shadow layer discretization for near-boundary regions. Together, these discretization techniques are able to achieve directionally independent, automatic refinement of the underlying model, allowing the method to generate accurate solutions without need for intermediate human involvement. In addition, by coupling the model generation with the solution process, the presented method is able to address the issue of ill-constructed geometric input (small features, poorly formed faces, etc.) to provide an intuitive, yet powerful approach to solving modern engineering analysis problems.
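A core building block behind meshless solvers like the one described is interpolation with radial basis shape functions, which needs only point-to-point distances and no mesh connectivity. The sketch below is an illustration of that building block, not the MIMS solver: it fits a multiquadric RBF interpolant through scattered 1-D data points, with the sample points and shape parameter chosen arbitrarily.

```python
import numpy as np

def rbf_interpolant(centers, values, c=1.0):
    """Fit a multiquadric RBF interpolant through scattered (centers, values)."""
    r = np.abs(centers[:, None] - centers[None, :])   # pairwise distances
    phi = np.sqrt(r**2 + c**2)                        # multiquadric shape function
    coeffs = np.linalg.solve(phi, values)             # collocation: phi @ coeffs = values
    def evaluate(x):
        return np.sqrt((x - centers) ** 2 + c**2) @ coeffs
    return evaluate

xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
f = rbf_interpolant(xs, np.sin(xs))
# The interpolant reproduces the data exactly at the scattered centers.
```

Because only distances enter the shape functions, points can be added or refined locally without regenerating any global connectivity, which is the property that meshless methods exploit for automated model generation.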
24

An Artificial Intelligence-Driven Model-Based Analysis of System Requirements for Exposing Off-Nominal Behaviors

Madala, Kaushik 05 1900 (has links)
With the advent of autonomous systems and deep learning systems, safety pertaining to these systems has become a major concern. Existing failure analysis techniques are not enough to thoroughly analyze safety in these systems. Moreover, because these systems are created to operate in various conditions, they are susceptible to unknown safety issues. Hence, we need mechanisms that can take into account the complexity of operational design domains, identify safety issues other than failures, and expose unknown safety issues. In addition, existing safety analysis approaches require a lot of effort and time and do not consider machine learning (ML) safety. To address these limitations, in this dissertation we discuss an artificial-intelligence-driven, model-based methodology that aids in identifying unknown safety issues and in analyzing ML safety. Our methodology consists of four major tasks: 1) automated model generation, 2) automated analysis of component state transition model specifications, 3) undesired states analysis, and 4) causal factor analysis. We identify unknown safety issues by finding undesired combinations of components' states and environmental entities' states, as well as the causes that result in these undesired combinations. We refer to the behaviors that occur because of undesired combinations as off-nominal behaviors (ONBs). To identify undesired combinations and ONBs with less effort and time, we propose various approaches for each task and perform corresponding empirical studies. We also discuss machine learning safety analysis from the perspective of machine learning engineers as well as system and software safety engineers. The results of the studies conducted as part of our research show that the proposed methodology helps to identify unknown safety issues effectively.
Our results also show that combinatorial methods are effective in reducing the effort and time needed to analyze off-nominal behaviors without overlooking dependencies among a system's components and environmental entities. We also found that safety analysis of machine learning components differs from that of conventional software components, and we detail the aspects that need to be considered for ML safety.
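The core of undesired-states analysis can be illustrated as enumerating combinations of component and environmental-entity states and flagging the undesired ones. This is a toy, not the dissertation's tooling: the components, their states, and the "undesired" predicate are invented for the example.

```python
from itertools import product

# Hypothetical system: two components plus one environmental entity.
components = {
    "brake":  ["engaged", "released"],
    "sensor": ["ok", "degraded"],
    "road":   ["dry", "icy"],          # environmental entity
}

def undesired(state):
    """Invented rule: braking decisions from a degraded sensor on an icy road
    form an undesired combination, i.e. a source of off-nominal behavior."""
    return state["sensor"] == "degraded" and state["road"] == "icy"

names = list(components)
combos = [dict(zip(names, vals)) for vals in product(*components.values())]
onb_states = [s for s in combos if undesired(s)]
print(len(combos), len(onb_states))  # 8 2
```

Exhaustive enumeration grows exponentially with the number of components, which is why the dissertation turns to combinatorial (e.g. pairwise-style) methods to cut the analysis effort without dropping inter-component dependencies.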
25

[pt] SISTEMAS DE PROVA E GERAÇÃO DE CONTRA EXEMPLO PARA LÓGICA PROPOSICIONAL MINIMAL IMPLICACIONAL / [en] SYSTEMS FOR PROVABILITY AND COUNTERMODEL GENERATION IN PROPOSITIONAL MINIMAL IMPLICATIONAL LOGIC

23 November 2021 (has links)
[en] This thesis presents a new sequent calculus, called LMT→, that is terminating, sound, and complete for Propositional Minimal Implicational Logic (M→). LMT→ is aimed at proof search in M→ in a bottom-up approach. Termination of the calculus is guaranteed by a rule-application strategy that forces an ordered way to search for proofs, such that all possible combinations are explored. For an initial formula α, proofs in LMT→ have an upper bound of |α|·2^(|α|+1+2·log₂|α|), which together with the system strategy ensures decidability. The system's rules are conceived to deal with the necessity of hypothesis repetition and the context-splitting nature of the →-left rule, avoiding the occurrence of loops and the use of backtracking. Therefore, LMT→ steers the proof search in an always forward, deterministic manner. LMT→ allows the extraction of counter-models from failed proof searches (bicompleteness), i.e., the attempted proof tree of a fully expanded branch produces a Kripke model that falsifies the initial formula. Counter-model generation (using Kripke semantics) is achieved as a consequence of the completeness of the system. LMT→ is implemented as an interactive theorem prover based on the calculus proposed here. We compare our calculus with other known deductive systems for M→, especially with Fitting-style tableaux, a method that also has the bicompleteness property. We also propose a translation of LMT→ into the Dedukti proof checker, as a way to evaluate the correctness of our implementation with respect to the system specification and to make our system easier to compare with others.
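The counter-model side of the story can be illustrated compactly. This sketch is not the LMT→ prover; it only checks whether a given finite Kripke model falsifies an implicational formula: an atom holds at a world iff the valuation says so, and an implication holds at a world iff it holds at every world reachable from it. The two-world model below is a standard counter-model for Peirce's law, which is unprovable in M→.

```python
def forces(world, formula, succ, val):
    """succ[w]: worlds reachable from w (including w itself, for reflexivity);
    val[w]: set of atoms true at w. Formulas are atoms (str) or ("->", a, b)."""
    if isinstance(formula, str):                      # atom
        return formula in val[world]
    _, a, b = formula                                 # implication a -> b
    return all(not forces(v, a, succ, val) or forces(v, b, succ, val)
               for v in succ[world])

# Two-world model w0 <= w1 with p true only at w1.
succ = {"w0": ["w0", "w1"], "w1": ["w1"]}
val = {"w0": set(), "w1": {"p"}}

# Peirce's law ((p -> q) -> p) -> p fails at the root w0.
peirce = ("->", ("->", ("->", "p", "q"), "p"), "p")
print(forces("w0", peirce, succ, val))  # False
```

At w0 the antecedent (p → q) → p holds vacuously (no reachable world forces p → q, since w1 has p without q), yet p fails at w0, so the whole implication is falsified, exactly the kind of Kripke counter-model a failed bottom-up proof search yields.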
26

The Paragon Corporation : Exploring Corporate Responsibility and Shared Value for Profitability

Paulsson, John January 2013 (has links)
This thesis is a two-part exploratory inquiry into how actions of Corporate Responsibility (CR) create economic value for the company performing them, in addition to social and environmental value. The purpose of the thesis is to describe the CR initiatives of a theoretical "paragon corporation": a corporation that excels in its CR initiatives and sees financial gain in them. The report begins with a literature review, describing the CR context that companies operate in today and related work. A model for describing CR activities as business activities is drawn from Nancy Bocken's concept of Business Model Archetypes, and it is proposed as a possible tool for describing economic value creation from CR activities. The first part of the study is a word frequency analysis of the annual financial reports of the companies listed on the FTSE 100, in which words connected to CR are counted. The sustainability reports of the five companies that mention CR terminology most often in the first study are then analyzed in detail in the second study and characterized using Bocken's archetypes. Findings show that the paragon corporation should have CR initiatives that can be modeled after the archetypes, enabling the CR initiatives to create direct economic value for the company. The archetypes can be used when formulating a CR strategy from the ground up or when evaluating an existing CR strategy. The thesis ends with suggestions for how this can be explored further.
