131

Distributed Support Vector Machine Learning

Armond, Kenneth C., Jr. 07 August 2008 (has links)
Support Vector Machines (SVMs) are used for a growing number of applications. A fundamental constraint on SVM learning is the management of the training set, because the computational cost grows as the square of the training-set size. A training set of 1,000 examples (say, 500 positives and 500 negatives) can typically be handled on a PC without hard-drive thrashing; a training set of 10,000, however, simply cannot be managed with PC-based resources. For this reason most SVM implementations rely on some form of chunking, training on parts of the data at a time (10 chunks of 1,000, for example, to learn the 10,000). Sequential and multi-threaded chunking methods provide a way to run the SVM on large datasets while retaining accuracy. The multi-threaded distributed SVM described in this thesis is implemented in Java RMI and has been developed to run on a network of multi-core/multi-processor computers.
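The chunking scheme described above is easy to sketch. The following is a minimal illustration under stated assumptions — scikit-learn's SVC in place of the thesis's Java RMI implementation, an arbitrary chunk size and kernel — showing the sequential variant, where each chunk's support vectors are carried into the next working set:

```python
import numpy as np
from sklearn.svm import SVC

def chunked_svm(X, y, chunk_size=1000):
    """Sequential chunking: train on one chunk at a time, carrying the
    support vectors forward into the next chunk's working set."""
    model = SVC(kernel="rbf", C=1.0)   # kernel and C are placeholders
    sv_X = np.empty((0, X.shape[1]))
    sv_y = np.empty(0)
    for start in range(0, len(X), chunk_size):
        work_X = np.vstack([sv_X, X[start:start + chunk_size]])
        work_y = np.concatenate([sv_y, y[start:start + chunk_size]])
        model.fit(work_X, work_y)
        # Only the support vectors constrain the solution, so only they
        # need to be remembered between chunks.
        sv_X = model.support_vectors_
        sv_y = work_y[model.support_]
    return model
```

The multi-threaded variant in the thesis distributes the chunks across machines and merges support vectors; this single-process loop only conveys why chunking keeps memory bounded.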
132

Modeling, Simulation, Dynamic Optimization and Control of a Semibatch Emulsion Polymerization Process

Gil, Iván-Dario 03 June 2014 (has links)
In this work, the modeling, simulation, dynamic optimization and nonlinear control of an industrial emulsion polymerization process producing poly(vinyl acetate) (PVAc) are studied. The reaction is modeled as a two-phase system composed of an aqueous phase and a particle phase. A detailed model is used to calculate the weight-average molecular weight, the number-average molecular weight and the dispersity. The moments of the growing and dead chains represent the state of the polymer and are used to calculate the molecular weight distribution (MWD). The case study corresponds to an industrial reactor operated at a chemical company in Bogotá: an industrial-scale reactor (11 m3 capacity) is simulated in which a semibatch emulsion polymerization of vinyl acetate is performed.
The dynamic optimization problem is solved directly using a nonlinear programming solver, with the differential equations integrated by a Runge-Kutta method. Three optimization problems are solved, from the simplest (one control variable: reactor temperature) to the most complex (three control variables: reactor temperature, initiator flow rate and monomer flow rate), all minimizing the reaction time. A 25% reduction of the batch time is achieved with respect to the normal operating conditions applied at the company. The results show that it is possible to minimize the reaction time while the desired polymer qualities (conversion, molecular weight and solids content) satisfy the defined constraints. A nonlinear geometric control technique based on input/output linearization is adapted to reactor temperature control. An extended Kalman filter (EKF) is implemented to estimate unmeasured states and is tested in several cases, including a robustness study in which model errors are introduced to verify its performance. After verification of the controller performance, some process changes were proposed to improve process productivity and polymer quality. Finally, the optimal temperature profile and the optimal monomer and initiator feed policies obtained in the dynamic optimization step provide the optimal set points for the nonlinear controller. The results show that the nonlinear controller designed here is well suited to tracking the previously computed optimal temperature trajectories.
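As an illustration of the direct approach just described — parameterize the controls, integrate the model with a Runge-Kutta scheme, and hand the batch time to a nonlinear programming solver — here is a minimal sketch. The first-order kinetics, the constants, the 98% conversion target and the temperature bounds are placeholder assumptions, not the thesis's polymerization model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

A, EA, R = 5.0e6, 5.0e4, 8.314   # illustrative Arrhenius constants

def conversion(tf, temps):
    """Integrate dx/dt = k(T)(1 - x) over [0, tf] with a
    piecewise-constant temperature profile, using Runge-Kutta (RK45)."""
    def rhs(t, x):
        T = temps[min(int(len(temps) * t / tf), len(temps) - 1)]
        return [A * np.exp(-EA / (R * T)) * (1.0 - x[0])]
    return solve_ivp(rhs, (0.0, tf), [0.0], method="RK45").y[0, -1]

# Decision vector z = [batch time, T_1 .. T_5]; minimize the batch time
# subject to reaching at least 98% conversion.
cons = {"type": "ineq", "fun": lambda z: conversion(z[0], z[1:]) - 0.98}
z0 = np.concatenate([[8.0 * 3600.0], np.full(5, 338.0)])   # 8 h at 65 C
bounds = [(3600.0, 12 * 3600.0)] + [(323.0, 353.0)] * 5    # 50-80 C
res = minimize(lambda z: z[0], z0, bounds=bounds, constraints=cons,
               method="SLSQP")
print(res.x[0] / 3600.0, "hours")
```

Adding the initiator and monomer flow rates as further decision variables, as in the thesis's most complex case, only enlarges z and the model state.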
133

Factor analysis of dynamic PET images

Cruz Cavalcanti, Yanna 31 October 2018 (has links) (PDF)
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool for quantifying biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven effective in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed herein introduce an additional degree of freedom in the factors related to specific binding: a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either learned beforehand or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. We therefore propose to consider a popular and quite general loss function, the β-divergence, which generalizes conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
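The β-divergence mentioned at the end has a compact closed form worth writing out. The sketch below is the standard definition from the matrix-factorization literature, not code from the thesis; it assumes strictly positive inputs:

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Sum of elementwise beta-divergences d_beta(x | y).
    beta = 2 -> least-squares (Gaussian), beta = 1 -> Kullback-Leibler
    (Poisson), beta = 0 -> Itakura-Saito (Gamma). Assumes x, y > 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:
        return np.sum(x / y - np.log(x / y) - 1.0)
    return np.sum((x**beta + (beta - 1.0) * y**beta
                   - beta * x * y**(beta - 1.0)) / (beta * (beta - 1.0)))
```

Minimizing this loss between the observed TACs and the factor model, with β chosen to match the reconstruction noise, is what the last sentence of the abstract refers to.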
134

Quadratic Criteria for Optimal Martingale Measures in Incomplete Markets

McWalter, Thomas Andrew 22 February 2007 (has links)
This dissertation considers the pricing and hedging of contingent claims in a general semimartingale market. Initially the focus is on a complete market, where it is possible to price uniquely and hedge perfectly; in this context the two fundamental theorems of asset pricing are explored. The market is then extended to incorporate risk that cannot be fully hedged, thereby making it incomplete. Using quadratic cost criteria, optimal hedging approaches are investigated, leading to the derivation of the minimal martingale measure and the variance-optimal martingale measure. These quadratic approaches are then applied to the problem of minimizing the basis risk that arises when an option on a non-traded asset is hedged with a correlated asset. Closed-form solutions based on the Black-Scholes equation are derived, and the numerical results compare encouragingly with those of a utility-maximization approach.
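For concreteness, the two measures named above can be stated in their textbook forms. The block below is the standard Föllmer–Schweizer formulation under the usual assumptions (continuous martingale part), not a formula quoted from the dissertation:

```latex
% Decompose the price process as S = S_0 + M + A with
% A = \int \lambda \, d\langle M \rangle. The minimal martingale
% measure \widehat{P} is given by the stochastic exponential
\frac{d\widehat{P}}{dP}
  = \mathcal{E}\Big(-\int \lambda \, dM\Big)_T
  = \exp\Big(-\int_0^T \lambda_t \, dM_t
      - \tfrac{1}{2}\int_0^T \lambda_t^2 \, d\langle M \rangle_t\Big),
% while the variance-optimal martingale measure minimizes
% \mathbb{E}\big[(dQ/dP)^2\big] over all signed martingale
% measures Q with square-integrable density.
```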
135

Monocular hand motion tracking and gesture recognition

Ben Henia, Ouissem 12 April 2012 (has links)
Hand gestures play a fundamental role in everyday human communication, and their use has become an important part of human-computer interaction over the last two decades. Building a fast and effective vision-based hand motion tracker is nevertheless challenging, owing to the high dimensionality of the pose space, ambiguities due to occlusion, the lack of visible surface texture and significant appearance variations due to shading. In this thesis we investigate two approaches to monocular hand tracking. The first uses a parametric 3D hand model: tracking is formulated as an optimization task in which a dissimilarity function between the projection of the hand model under articulated motion and the observed image features is minimized; a two-step iterative algorithm is proposed for this minimization, and two dissimilarity functions are proposed. The second is a data-driven method to track hand gestures and animate a 3D hand model, which exploits a database of hand gestures represented as 3D point clouds. In order to track a large number of hand poses with a database as small as possible, we process the hand gestures with Principal Component Analysis (PCA): applied to each point cloud, PCA produces a representation of the hand pose that is independent of its position and orientation in 3D space. To explore the database quickly and efficiently, we use a comparison function based on the 3D distance transform. Experimental results on synthetic and real data demonstrate the potential of our methods.
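The PCA step described above — making each point-cloud pose comparable regardless of where the hand sits in space — can be sketched in a few lines. This is an illustrative sketch, not the thesis's pipeline, and it ignores the residual sign ambiguity of principal axes:

```python
import numpy as np

def pca_normalize(points):
    """Map an (n, 3) point cloud to a canonical frame: translate its
    centroid to the origin, then rotate onto its principal axes, so two
    copies of the same pose at different positions/orientations align
    (up to axis sign flips)."""
    centered = points - points.mean(axis=0)
    # Rows of vt are the principal directions of the cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T
```

Comparing two normalized clouds — in the thesis, via a 3D distance transform — then measures pose similarity directly.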
136

Reconstruction of geometry based on connectivity and mesh samples

CATIUSCIA ALBUQUERQUE BENEVENTE BORGES 31 August 2007 (has links)
This work aims at reconstructing the geometry of a mesh from its connectivity and a small set of control points whose geometry is known. The problem is formulated as a maximization of surface smoothness with the positions of the control points held fixed. With this formulation, the method reduces to solving a sparse linear system by least squares. Several proposals for the selection of the control points, the minimization method and the construction of the linear system are presented and compared.
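A minimal version of this formulation — smoothness expressed as a uniform Laplacian driven to zero, control points as weighted soft constraints, everything solved by sparse least squares — might look like the sketch below. The uniform Laplacian weights and the penalty w are assumptions; the thesis compares several such choices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def reconstruct(n, edges, ctrl_idx, ctrl_pos, w=10.0):
    """Recover n vertex positions from connectivity alone: each vertex
    should sit at the average of its neighbors (uniform Laplacian = 0),
    and the control points are pinned with weight w, all in the
    least-squares sense."""
    L = sp.lil_matrix((n, n))
    deg = np.zeros(n)
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
        deg[i] += 1.0
        deg[j] += 1.0
    L.setdiag(deg)
    C = sp.lil_matrix((len(ctrl_idx), n))   # weighted control-point rows
    for r, i in enumerate(ctrl_idx):
        C[r, i] = w
    A = sp.vstack([L, C]).tocsr()
    X = np.zeros((n, 3))
    for d in range(3):                      # one system per coordinate
        b = np.concatenate([np.zeros(n), w * ctrl_pos[:, d]])
        X[:, d] = lsqr(A, b)[0]
    return X
```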
137

Waste Minimization

Källström, Matilda, Lennartsson, Sofia January 2019 (has links)
The construction sector currently generates large amounts of waste: in Sweden in 2016, approximately 8.9 million tonnes of primary construction and demolition waste were measured, and the sector accounts for 31% of all waste generated. According to the Swedish Environmental Protection Agency, Sweden is not on track to reach the national target of at least 70% recycling of construction and demolition waste by 2020. The purpose of this work is to give a clear picture of how waste is managed in construction today and where projects can make improvements in production and design to reduce waste sent to landfill; reducing landfill waste improves the chance of reaching the national recycling target by 2020. A qualitative method was used, consisting of literature studies, a site visit and three interviews with officials from Skanska.
The work focuses on waste management on the construction site but also considers what can be improved during the design and planning phase of a project. Only new buildings were studied, and the work covers only what happens to material that leaves the project for reuse, not for landfill. The results examine the following waste-reduction measures: sustainability competence, training, a waste management system, supplier take-back of material, and avoiding the mixed-waste fraction. The Lund project uses all of them. The Kungsmässan project uses training and a good waste management system; the training is the short course Skanska uses on all construction projects, and the waste management system consists of an employee responsible for waste handling. The Regionens hus project uses training and supplier take-back, and has avoided the mixed-waste fraction; its training is the same as at Kungsmässan, its take-back of material through the supplier was not very successful, and the site has had no container for mixed waste. To reach its goal of 0 kg of waste to landfill, the Lund project brings in sustainability competence early in planning and runs a thorough waste management system with several checks during production; it has contracted a sorting-oriented waste contractor and uses the insulation supplier's take-back service for unused insulation. The results show that waste to landfill can be reduced by:
• bringing in sustainability competence at an early stage
• training employees
• using a good waste management system
• sorting as much as possible and avoiding the mixed-waste fraction
• taking advantage of suppliers' take-back services.
138

A contribution to the minimization of the number of stubs in the integration testing of aspect-oriented programs

Ré, Reginaldo 31 March 2009 (has links)
Aspect-oriented programming is an approach that uses principles of separation of concerns to improve software modularization. Testing aspect-oriented programs is one of the new challenges this approach raises. This thesis proposes two class-and-aspect ordering strategies to support the integration testing of aspect-oriented programs; their objective is to reduce the cost of testing by minimizing the number of stubs implemented during integration testing. The strategies rely on an aspectual dependency model and on a diagram describing dependencies among classes and aspects, called the AORD (Aspect and Object Relation Diagram), both also proposed in this work and defined from the syntax and semantics of the AspectJ language. Since the ordering strategies are ideally applied in the design phase, a process for mapping design models in the UML and MATA notations into an AORD is proposed to support them; the mapping process consists of rules showing how to map both object-oriented and aspect-oriented dependencies. To validate the ordering strategies, the aspectual dependency model and the AORD, an exploratory characterization study was conducted with three systems implemented in AspectJ. Samples of stub and test-driver implementations were collected during the study, analyzed and classified, and from this analysis a catalog of test stubs and drivers is presented.
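For intuition about why the ordering matters, the toy heuristic below counts one stub for every dependency that is still unresolved when a unit is integrated, and greedily integrates the cheapest unit first. It is an illustrative assumption, not the sufficient-condition-based strategies the thesis proposes, and the names in the example are made up:

```python
def test_order(deps):
    """Greedy integration order over a dependency graph: each unit
    integrated before something it depends on costs one stub."""
    units = set(deps)
    for targets in deps.values():
        units |= set(targets)
    done, order, stubs = set(), [], 0
    while len(done) < len(units):
        u = min(units - done,
                key=lambda v: len(set(deps.get(v, ())) - done))
        stubs += len(set(deps.get(u, ())) - done)
        done.add(u)
        order.append(u)
    return order, stubs

# A depends on B, B on C, C back on A: one stub is unavoidable.
print(test_order({"A": ["B"], "B": ["C"], "C": ["A"], "D": []}))
```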
139

Infill optimization to reduce uncertainty in a synthetic copper deposit

Ramos, Gustavo Zanco 19 September 2016 (has links)
New drillhole information is acquired through infill drilling, a practice used at several stages of mineral exploration. Optimization methods are widely used across the mine life cycle, for example in pit optimization and mine scheduling, but optimizing the locations of infill drill holes is unusual. This work proposes using mathematical optimization to improve the spatial distribution of the new drill holes and to define the appropriate number of holes to drill. Metaheuristic optimization methods were tested to minimize two objective functions that capture the uncertainty of the simulated data: the sum of the simulated block variances and the sum of the simulated block coefficients of variation. For both objective functions, the best results at the lowest processing time and computational cost were obtained with a simulated annealing method using fast cooling and memory. Based on this optimization method, the two objective functions were compared: the 11 drill hole locations selected by each objective function were sampled in the synthetic ore body, and the comparisons made were descriptive statistics of the infill data against the population, and Q-Q plots between the e-type of the simulations computed on the infilled database and the population. The descriptive statistics showed that the updated sample set (the initial samples plus the new ones) was more representative than the initial sampling. Based on the Q-Q plots, the simulation computed with the infill locations chosen by minimizing the sum of the coefficients of variation adhered most closely to the population.
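The optimizer that performed best here — simulated annealing with fast cooling and memory — has a compact generic skeleton. The sketch below uses a toy objective; the real objective would re-simulate the block model for each candidate set of hole locations and sum the block variances or coefficients of variation, which is far more expensive:

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal(objective, x0, step, n_iter=2000, t0=1.0, alpha=0.99):
    """Simulated annealing with fast (geometric) cooling and memory:
    the best configuration ever visited is kept and returned."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.normal(0.0, step, size=x.shape)  # move the holes
        fc = objective(cand)
        # Accept improvements always, deteriorations with Boltzmann odds.
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < fbest:                 # the "memory" part
            best, fbest = x, fx
        t *= alpha                     # fast geometric cooling
    return best, fbest

# x holds (x, y) coordinates for 11 candidate infill holes; the toy
# objective is a placeholder for the block-uncertainty sums.
toy = lambda x: float(np.sum(x**2))
print(anneal(toy, rng.uniform(-5.0, 5.0, size=(11, 2)), step=0.5))
```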
140

Test suite minimization for finite state machines

Mello Neto, Lúcio Felippe de 09 May 2008 (has links)
Model-based testing aims at generating test suites from formal specifications such as Finite State Machines. Test suites can be obtained either from classical test derivation methods or from some ad hoc approach. It is desirable to produce a test suite that detects all possible faults of an implementation and is small enough for its application to be feasible. For practical reasons, applying the whole generated test suite may not be possible; a subset of test cases must then be selected, i.e., the test suite must be minimized. It is important, however, that the minimization reduce the cost of applying the tests while keeping their effectiveness in revealing faults. In this work, an algorithm is proposed for the minimization of test suites generated from Finite State Machines. The algorithm is based on sufficiency conditions under which completeness with respect to fault detection is maintained. The algorithm was used in two different contexts: with randomly generated test suites, to verify the minimization obtained, and to reduce the effort of obtaining a test suite with full fault coverage.
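The selection step can be pictured as a greedy covering problem: keep the test case that satisfies the most still-unsatisfied requirements. The sketch below shows only that skeleton under an assumed encoding (each test mapped to a set of requirement ids); the thesis's actual contribution — the sufficiency conditions that make discarding tests safe with respect to fault detection — is not modeled here:

```python
def minimize_suite(suite):
    """Greedy minimization: suite maps each test case to the set of
    requirements it satisfies; repeatedly keep the test covering the
    most requirements that are still uncovered."""
    uncovered = set().union(*suite.values())
    kept = []
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        kept.append(best)
        uncovered -= suite[best]
    return kept

suite = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3, 4}}
print(minimize_suite(suite))   # ['t3'] alone covers everything
```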
