  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

[en] OPTIMAL HYDROTHERMAL OPERATION: THE CASE WITH HYDRO PLANTS DISPOSED IN PARALLEL / [es] OPERACIÓN ÓPTIMA DE UN SISTEMA HIDROTÉRMICO: EL CASO DE HIDROELÉCTRICAS EN PARALELO / [pt] OPERAÇÃO ÓTIMA DE UM SISTEMA HIDROTÉRMICO: O CASO DE HIDRELÉTRICAS EM PARALELO

PAULA VARELLA CALUX LOPES 29 October 2001 (has links)
[pt] Neste trabalho estudamos o problema de planejamento hidrotérmico para um sistema onde as hidrelétricas estão em paralelo, buscando estender os resultados obtidos por Bortolossi, Pereira e Tomei. Com uma conveniente formulação contínua, estabelecemos um teorema que garante a existência de solução para este problema, e caracterizamos os ótimos interiores. / [en] In this work we study the hydrothermal scheduling problem for a system where the hydroelectric power stations are disposed in parallel, seeking to extend the results obtained by Bortolossi, Pereira and Tomei. With a convenient continuous formulation, we establish a theorem that guarantees the existence of a solution to this problem, and we characterize the interior optima. / [es] En este trabajo estudiamos el problema de planeamiento hidrotérmico para un sistema donde las hidroeléctricas están en paralelo, con el objetivo de extender los resultados obtenidos por Bortolossi, Pereira y Tomei. Con una formulación continua conveniente, establecemos un teorema que garantiza la existencia de solución para este problema, y caracterizamos los óptimos interiores.
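The parallel-hydro scheduling problem described above can be illustrated with a deliberately tiny numerical toy: two hydro plants share the load with one thermal plant over two periods, thermal cost is convex, and water releases are brute-forced over a grid. All figures are invented, and the model ignores inflows, head effects, and network constraints; it sketches the shape of the problem, not the thesis's continuous formulation.

```python
# Toy hydrothermal dispatch: two hydro plants in parallel plus one thermal
# plant over two periods. Thermal cost is convex (quadratic), so the optimum
# uses the limited water to equalize thermal output across periods.
from itertools import product

DEMAND = [10.0, 14.0]           # load per period (made-up numbers)
WATER = {"h1": 8.0, "h2": 6.0}  # total energy available per hydro plant

def thermal_cost(g):
    return g * g  # convex fuel cost

def solve(step=0.5):
    best = None
    grid = [i * step for i in range(int(max(WATER.values()) / step) + 1)]
    for u1, u2 in product(grid, repeat=2):  # period-1 release of each plant
        if u1 > WATER["h1"] or u2 > WATER["h2"]:
            continue
        # whatever water remains is released in period 2
        hydro = [u1 + u2, (WATER["h1"] - u1) + (WATER["h2"] - u2)]
        thermal = [max(d - h, 0.0) for d, h in zip(DEMAND, hydro)]
        cost = sum(thermal_cost(g) for g in thermal)
        if best is None or cost < best[0]:
            best = (cost, thermal)
    return best

cost, thermal = solve()
```

With total demand 24 and total water 14, ten units must come from the thermal plant, and convexity splits them evenly across the two periods.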
12

Eficácia em problemas inversos: generalização do algoritmo de recozimento simulado e função de regularização aplicados a tomografia de impedância elétrica e ao espectro de raios X / Efficiency in inverse problems: generalization of simulated annealing algorithm and regularization function applied to electrical impedance tomography and X-rays spectrum

Olavo Henrique Menin 08 December 2014 (has links)
A modelagem de processos em física e engenharia frequentemente resulta em problemas inversos. Em geral, esses problemas apresentam difícil resolução, pois são classificados como mal-postos. Resolvê-los, tratando-os como problemas de otimização, requer a minimização de uma função objetivo, que mede a discrepância entre os dados experimentais e os obtidos pelo modelo teórico, somada a uma função de regularização. Na maioria dos problemas práticos, essa função objetivo é não-convexa e requer o uso de métodos de otimização estocásticos. Dentre eles, tem-se o algoritmo de recozimento simulado (Simulated Annealing), que é baseado em três pilares: i) distribuição de visitação no espaço de soluções; ii) critério de aceitação; e iii) controle da estocasticidade do processo. Aqui, propomos uma nova generalização do algoritmo de recozimento simulado e da função de regularização. No algoritmo de otimização, generalizamos o cronograma de resfriamento, que usualmente é considerado algébrico ou logarítmico, e o critério de Metropolis. Com relação à função de regularização, unificamos as versões mais utilizadas, em uma única fórmula. O parâmetro de controle dessa generalização permite transitar continuamente entre as regularizações de Tikhonov e entrópica. Por meio de experimentos numéricos, aplicamos nosso algoritmo na resolução de dois importantes problemas inversos na área de Física Médica: a determinação do espectro de um feixe de raios X, a partir de sua curva de atenuação, e a reconstrução da imagem na tomografia de impedância elétrica. Os resultados mostram que o algoritmo de otimização proposto é eficiente e apresenta um regime ótimo de parâmetros, relacionados à divergência do segundo momento da distribuição de visitação. / Modeling of processes in Physics and Engineering frequently yields inverse problems. These problems are normally difficult to solve since they are classified as ill-posed. 
Solving them as optimization problems requires the minimization of an objective function, which measures the discrepancy between experimental and theoretical data, added to a regularization function. For most practical inverse problems, this objective function is non-convex and calls for a stochastic optimization method. Among these is the Simulated Annealing algorithm, which is based on three fundamentals: i) the visitation distribution in the search space; ii) the acceptance criterion; and iii) the control of the stochasticity of the process. Here, we propose a new generalization of the simulated annealing algorithm and of the regularization function. In the optimization algorithm, we generalize both the cooling schedule, which is usually algebraic or logarithmic, and the Metropolis acceptance criterion. Regarding the regularization function, we unify the most used versions into a single equation. The generalization's control parameter allows one to shift continuously between Tikhonov and entropic regularization. Through numerical experiments, we applied our algorithm to solve two important inverse problems in Medical Physics: the determination of an X-ray beam spectrum from its attenuation curve, and image reconstruction in electrical impedance tomography. The results show that the proposed algorithm is efficient and presents an optimal regime of parameters, related to the divergence of the second moment of the visitation distribution.
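A miniature of the scheme described above: simulated annealing with an algebraic cooling family and the Metropolis acceptance rule, applied to a one-dimensional double-well function. The parametrization of the schedule is a generic illustration, not the generalization proposed in the thesis.

```python
import math, random

def simulated_annealing(f, x0, t0=1.0, q=1.0, steps=20000, seed=0):
    """Minimize f with an algebraic cooling schedule T_k = t0 / (1 + k)**q
    and the Metropolis acceptance criterion. The exponent q is the
    generalization knob here (purely illustrative): small q cools slowly,
    large q cools fast."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        t = t0 / (1 + k) ** q
        y = x + rng.gauss(0.0, 1.0)      # visitation distribution
        fy = f(y)
        # Metropolis criterion: always accept downhill, sometimes uphill
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

f = lambda x: (x * x - 1.0) ** 2  # double well, global minima at x = -1 and 1
best, fbest = simulated_annealing(f, x0=3.0)
```

Starting from x = 3, the chain should settle near one of the two wells, with the best value found close to zero.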
13

Métodos de inteligência computacional com otimização evolucionária para a estimativa de propriedades mecânicas do concreto de agregado leve

Andrade, Jonata Jefferson 27 September 2017 (has links)
Submitted by Geandra Rodrigues (geandrar@gmail.com) on 2018-01-11T16:56:36Z No. of bitstreams: 1 jonatajeffersonandrade.pdf: 3871423 bytes, checksum: e67d44781c780adff8ab0f791d6a9f1c (MD5) / Approved for entry into archive by Adriana Oliveira (adriana.oliveira@ufjf.edu.br) on 2018-01-23T13:43:37Z (GMT) / Made available in DSpace on 2018-01-23T13:43:37Z (GMT). Previous issue date: 2017-09-27 / No concreto de agregado leve, a resistência à compressão e o módulo de elasticidade são as propriedades mecânicas mais importantes e consequentemente as mais comumente analisadas. A relação entre os componentes do concreto de agregado leve e suas propriedades mecânicas é altamente não linear, e o estabelecimento de um modelo de previsão abrangente de tais características é usualmente problemático. Existem trabalhos que buscam encontrar essa relação de formas empíricas. Há também trabalhos que buscam aplicar técnicas de inteligência computacional para prever essas propriedades a partir dos componentes do concreto. Prever com precisão as propriedades mecânicas do concreto de agregado leve é um problema crítico em projetos de engenharia que utilizam esse material. O objetivo desta dissertação é avaliar o desempenho de diferentes métodos de inteligência computacional para prever o módulo de elasticidade e a resistência à compressão aos 28 dias de concretos de agregados leves em função do fator água/cimento, volume de agregado leve, quantidade de cimento e densidade do agregado leve. Para a escolha da melhor configuração de cada método, foi definida uma metodologia utilizando o algoritmo de otimização PSO (Particle Swarm Optimization). 
Por fim, é verificada a capacidade de generalização dos métodos através do processo de validação cruzada de modo a encontrar o método que apresenta o melhor desempenho na aproximação das duas propriedades mecânicas. / In lightweight aggregate concrete, the compressive strength, the elastic modulus and the specific weight are the most important properties and consequently the most commonly analyzed. The relationship between lightweight aggregate concrete components and its mechanical properties is highly nonlinear, and establishing a comprehensive predictive model of such characteristics is usually problematic. Some works seek to find this relationship empirically; others apply computational intelligence techniques to predict these properties from the concrete components. Accurately predicting the mechanical properties of lightweight aggregate concrete is a critical problem in engineering projects that use this material. The objective of this dissertation is to evaluate the performance of different computational intelligence methods in predicting the elastic modulus and the 28-day compressive strength of lightweight aggregate concrete as a function of the water/cement ratio, lightweight aggregate volume, cement content and lightweight aggregate density. In order to choose the best configuration of each method, a methodology was defined using the Particle Swarm Optimization (PSO) algorithm. Finally, the generalization capability of the methods is verified through cross-validation, in order to find the method with the best performance in approximating the two mechanical properties.
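The PSO-driven configuration search can be sketched generically. Below is a minimal particle swarm optimizer; the objective is a synthetic bowl standing in for the real cross-validated prediction error, and all parameter names and values are illustrative, not those used in the dissertation.

```python
import random

def pso(f, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm: each particle is pulled toward its own best
    position (pbest) and the swarm's best position (gbest)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Stand-in objective: the true surface would be a cross-validation error;
# here we tune two "hyperparameters" of a bowl with its optimum at (3, -1).
err = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2
best, best_f = pso(err, [(-10, 10), (-10, 10)])
```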
14

Um método Newton-GMRES globalmente convergente com uma nova escolha para o termo forçante e algumas estratégias para melhorar o desempenho de GMRES(m) / A globally convergent Newton-GMRES method with a new choice for the forcing term and some strategies to improve GMRES(m)

Toledo Benavides, Julia Victoria 17 June 2005 (has links)
Orientadores: Marcia A. Gomes Ruggiero, Vera Lucia da Rocha Lopes / Tese (doutorado) - Universidade Estadual de Campinas. Instituto de Matemática, Estatística e Computação Científica / Made available in DSpace on 2018-08-04T14:53:24Z (GMT). No. of bitstreams: 1 ToledoBenavides_JuliaVictoria_D.pdf: 2835915 bytes, checksum: 1b77270a65a21cc42d9aa81819e4acc4 (MD5) Previous issue date: 2005 / Resumo: Neste trabalho, apresentamos um método de Newton inexato através da proposta de uma nova escolha para o termo forçante. O método obtido é globalizado através de uma busca linear robusta e suas propriedades de convergência são demonstradas. O passo de Newton inexato é obtido pela resolução do sistema linear através do método GMRES com recomeços, GMRES(m). Em testes computacionais observamos a ocorrência da estagnação em GMRES(m) e um acréscimo inaceitável na norma da função nas primeiras iterações do método. Para contornar estas dificuldades são propostas estratégias de implementação computacional simples e que não exigem alterações internas no algoritmo do GMRES, possibilitando a interação com softwares já disponíveis. Exaustivos testes numéricos foram realizados, os quais nos permitiram concluir que a proposta para o termo forçante e as estratégias introduzidas foram bem sucedidas, resultando em um algoritmo robusto, com propriedade de convergência global e taxa superlinear de convergência / Abstract: In this work we present an inexact Newton method based on a new choice for the forcing term. The new method is globalized by introducing a robust line search strategy, and its convergence properties are proved. The inexact Newton step is obtained through the restarted GMRES, GMRES(m), applied to solve the linear systems. Numerical experiments showed stagnation of GMRES(m) and also a large increase in the norm of the function at the initial iterations. Some strategies were proposed to avoid these drawbacks. 
These strategies are characterized by their simplicity of implementation and by the fact that they do not need internal modifications of the GMRES algorithm, so the interaction with available software is straightforward. Extensive numerical experiments were performed, from which we conclude that the new choice for the forcing term and the strategies incorporated in the algorithm were successful. The resulting algorithm is robust, globally convergent, and has a superlinear convergence rate / Doutorado / Doutor em Matemática Aplicada
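The inexact Newton idea, solving the Newton system only up to a tolerance set by a forcing term, can be sketched in a few lines. This toy replaces GMRES(m) with a crude damped Richardson inner iteration and uses a standard residual-based forcing choice, not the new choice proposed in the thesis, and no globalizing line search.

```python
def norm(v):
    return sum(t * t for t in v) ** 0.5

def inexact_solve(J, b, eta):
    """Crude inner solver (damped Richardson) stopped as soon as the
    relative residual drops below the forcing term eta. A real
    implementation would use restarted GMRES here."""
    n = len(b)
    s = [0.0] * n
    for _ in range(500):
        r = [b[i] - sum(J[i][j] * s[j] for j in range(n)) for i in range(n)]
        if norm(r) <= eta * norm(b):
            break
        for i in range(n):
            s[i] += 0.1 * r[i]
    return s

def F(x):   # toy nonlinear system with a root at (1, 1)
    return [x[0] ** 2 - x[1], x[0] + x[1] - 2.0]

def JF(x):  # its Jacobian
    return [[2 * x[0], -1.0], [1.0, 1.0]]

x = [2.0, 0.0]
for _ in range(30):
    fx = F(x)
    if norm(fx) < 1e-10:
        break
    eta = min(0.5, norm(fx))  # residual-based forcing term (illustrative)
    s = inexact_solve(JF(x), [-v for v in fx], eta)
    x = [xi + si for xi, si in zip(x, s)]
```

Far from the root the forcing term stays at 0.5 and the linear solves are loose; as the residual shrinks, the solves tighten and the outer iteration speeds up toward the root.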
15

Optimization Models in Retail Reverse Supply Chains

Coskun, Mehmet Erdem 16 June 2022 (has links)
Unlike most of the existing literature on reverse supply chains, which focuses on product recovery or waste management, in this thesis we consider reverse supply chain operations for an independent retailer, whose forward and reverse supply chains are independent of the manufacturers. We study three major problems related to Retail Reverse Supply Chains (RRSC) for independent retailers. In RRSCs, each retail store holds some products that are not selling (and/or under-selling) and wishes to salvage them optimally. We refer to these products as Ineffective Inventory. Salvage can take many forms and occurs by relocating a product within the reverse supply chain (RSC), such as sending the product from a franchise store back to a Distribution/Return Center (RC) and then forward to another franchise store, returning it to a vendor, liquidation, etc. The RRSC network may include system members such as stores (retailer owned and/or franchise), RCs, warehouses, vendors and liquidators. Each of the stores carries some inventory that is underselling, and it is important to reduce the inventory of such products in order to refill the space with inventory that is more likely to sell. In the first problem, we consider a basic RRSC with retail stores, vendors and a warehouse. The retail company allocates a budget for its RRSC activities. We refer to this budget as a Profit-Loss budget, due to the lost income from the items that will be removed from the stores, which was part of the gains in the previous year's tax calculations. The objective is to use this Profit-Loss budgetary limitation as effectively as possible, with the most suitable products, to relocate products within the supply chain and/or return them to their vendor. A heuristic algorithm that exploits the problem structure is developed to solve this problem, and results are compared with the solutions of an exact state-of-the-art commercial solver. 
In the second problem, we consider a network optimization model with inventory decisions. The goal is to optimize ineffective inventory levels in stores and the disposition of their returns. We model a comprehensive RRSC network with multiple stores that may be Company-Owned or Franchise Stores, multiple warehouses, multiple RCs, multiple vendors, and liquidators. The objective of the retailer is to minimize the costs of relocating some of this ineffective inventory within the network or scrapping it. However, individual franchise stores have their own goals for how their excessive inventory should be handled. The franchisee goals may conflict with those of the franchisor in terms of how much inventory should be chosen from each store to be relocated. In turn, this may lead to a conflict among franchise stores. This issue is addressed and resolved through inventory transparency among all the supply chain members. The tactical decision of which RC should be used for handling returns is incorporated into the model. In order to overcome the complexity of the large-size problem, a multi-stage heuristic is developed to solve this problem within reasonable time. The results are then compared with the solutions of a state-of-the-art commercial solver. In the third problem, we focus on the strategic decision of developing optimal vendor contract parameters for the retailer, using optimization models. Specifically, we identify optimal return penalties and associated return thresholds between an independent retailer and its vendors. This model supports the retailer in re-negotiating its contracts for its RSC activities. Vendors use a multi-layered penalty structure that assigns higher penalties to higher returns. The objective is to find the optimal penalties and/or optimal return thresholds that should be negotiated with the vendors in order to pay a lower penalty in the upcoming return cycles compared to existing penalty structures. 
We first design a Mixed Integer Non-Linear Program (MINLP) in which the model decides vendor penalty fees and return thresholds simultaneously for each vendor. We generate small- to large-size problems and solve them via MINLP solvers such as DICOPT and ANTIGONE. In order to gain insight into the inner workings of the MINLP, the decision variables (vendor penalty fees and return thresholds) are then treated as parameters, and two models are designed to find the optimal penalty structure and the optimal return thresholds, respectively. Useful insights are derived from both models' solutions in order to generate rule-of-thumb methodologies that find approximate solutions close to the optimal penalty percentages and return thresholds, by identifying all possible scenarios that can exist in the problem structure. / Thesis / Doctor of Philosophy (PhD) / This thesis deals with Retail Reverse Supply Chain (RRSC) management. We consider the ineffective inventory of an independent retail company and its franchise stores, which may consist of unsold, under-selling, slow-moving, customer-returned, end-of-life, end-of-use, damaged, and faulty products. We take into account the retailer's reverse supply chain structure and investigate the following problems: 1) how to manage a store's product returns under a given budgetary limitation, imposed for financial planning and taxation reasons due to lost income from returned items; 2) inventory optimization taking into account the reverse supply chain structure of the retailer; and 3) providing insight to the retailer on how it can best re-negotiate its vendor (buy-back) contracts for its product returns. 
The thesis covers decision making at all three levels: day-to-day operational decisions, such as which products to return and where to allocate them among the reverse supply chain options; mid-term tactical decisions, such as which Return Centers (RCs) to activate for the Reverse Logistics (RL) activities; and long-term strategic decisions, such as the optimal contract terms to re-negotiate with the vendors in order to cut future return costs.
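A stripped-down caricature of the first problem: with a fixed Profit-Loss budget, choosing which items to pull reduces to a 0/1 knapsack over salvage benefit and budget consumption. All names and figures below are made up, and the thesis's actual models are far richer than this.

```python
def select_returns(items, budget):
    """items: list of (name, benefit, budget_cost). Exact dynamic program
    over the budget; returns (best total benefit, chosen item names)."""
    scale = 10  # work in tenths so costs become integers
    cap = int(budget * scale)
    best = [(0.0, [])] * (cap + 1)
    for name, benefit, cost in items:
        c = int(cost * scale)
        for b in range(cap, c - 1, -1):  # descending: each item used once
            cand = best[b - c][0] + benefit
            if cand > best[b][0]:
                best[b] = (cand, best[b - c][1] + [name])
    return best[cap]

# Hypothetical candidates: (SKU, salvage benefit, Profit-Loss budget cost)
items = [("slow_tv", 40.0, 3.0), ("old_phone", 30.0, 2.0),
         ("stale_toy", 35.0, 2.5), ("dead_sku", 0.5 * 20, 0.5)]
value, chosen = select_returns(items, budget=5.0)
```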
16

Controle da pressão seletiva em algoritmo genético aplicado a otimização de demanda em infra-estrutura aeronáutica. / Selective pressure control in genetic algorithms applied to demand optimization in aeronautical infrastructure.

Camargo, Gilberto de Menezes 18 August 2006 (has links)
A busca por entender e copiar o majestoso mundo que nos cerca fez do homem um curioso por natureza. Santos Dumont realizou a proeza de alcançar um dos sonhos mais antigos do homem, voar. Charles Darwin escreveu a Teoria da Evolução como um paradigma para a nossa existência, inspirando John Holland a desenvolver os Algoritmos Genéticos. Atualmente, com o grande crescimento da demanda no transporte aéreo, o homem volta seus esforços na busca por soluções que garantam a segurança da sociedade. Recentemente o pesquisador Naufal reuniu todos esses conceitos e desenvolveu um Modelo de Otimização de Demanda para o setor aeronáutico. Tal modelo visa amenizar a carga de trabalho dos controladores de tráfego aéreo na busca por aumentar a qualidade do serviço prestado por esse profissional, garantindo dessa forma níveis aceitáveis de segurança. Embora o modelo tenha se mostrado eficiente, ele apresentou uma deficiência quanto aos tempos despendidos para alcançar bons resultados. Na tentativa de otimizar os tempos do modelo atual, este trabalho de pesquisa adicionou o conceito de pressão seletiva, que representa a influência do meio ambiente. A representação da influência que o meio ambiente tem dentro da teoria da evolução de Darwin pode gerar uma implementação mais realista dos algoritmos genéticos. Este trabalho propõe a aplicação dos métodos de controle da pressão seletiva como alternativa de diminuir os tempos despendidos pelo modelo de otimização de demanda na busca por aumentar a segurança do setor aeroviário. / The search for understanding and copying the magnificent world that surrounds us has made man curious by nature. Santos Dumont achieved one of man's most ancient dreams, to fly. Charles Darwin wrote the Theory of Evolution as a paradigm of our existence, inspiring John Holland to develop genetic algorithms. 
Presently, because of the growth of demand in air transportation, men concentrate their efforts in the search for solutions that can guarantee the safety of our society. Recently, researcher Naufal has gathered all these concepts and developed a Demand Optimization Model for the aeronautical sector. This model aims to ease the workload of air traffic controllers, in a search for increasing the quality of this service and guaranteeing acceptable levels of safety. Although his model has proved to be efficient, it has presented a weak point when it comes to time spent to reach good results. In an attempt to optimize time in the existing model, this research added the concept of selective pressure, which represents the influence of the environment. The representation of the influence that the environment has inside Darwin's Theory of Evolution can generate a more realistic implementation of genetic algorithms. This work proposes an application of selective pressure control methods as an alternative to diminish time spent by the Demand Optimization Model in a search for increasing safety for the aeronautical sector.
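The notion of selective pressure can be made concrete with tournament selection, where the tournament size is the pressure knob. The sketch below is a generic bit-string GA on the classic OneMax toy problem; it is not the thesis's demand-optimization model nor its specific pressure-control methods.

```python
import random

def evolve(fitness, pop_size=40, genes=12, tourn_k=3, gens=60, seed=2):
    """Tiny bit-string GA. The tournament size tourn_k is the selective-
    pressure knob: k = 1 degenerates to random drift, while larger k
    favors the fittest individuals ever more strongly."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    champ = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # selective pressure: each parent is the best of tourn_k contenders
            mom = max(rng.sample(pop, tourn_k), key=fitness)
            dad = max(rng.sample(pop, tourn_k), key=fitness)
            cut = rng.randrange(1, genes)          # one-point crossover
            child = mom[:cut] + dad[cut:]
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(genes)] ^= 1
            nxt.append(child)
        pop = nxt
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(champ):
            champ = gen_best
    return champ

best = evolve(sum)  # OneMax: fitness is simply the number of ones
```

Raising tourn_k speeds up convergence at the price of diversity, which is exactly the trade-off that pressure-control schemes try to manage.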
17

New bounds for information complexity and quantum query complexity via convex optimization tools

Brandeho, Mathieu 28 September 2018 (has links) (PDF)
This thesis gathers three works on information complexity and on quantum query complexity. These fields of study share the mathematical tools used to investigate them, namely optimization problems. The first two works concern quantum query complexity and generalize the following important result: in [LMRSS11], the authors characterize quantum query complexity by means of the adversary method, a semidefinite program introduced by A. Ambainis in [Ambainis2000]. However, this characterization is restricted to discrete-time models with bounded error. The first work generalizes their result to continuous-time models, while the second is an attempt, not yet complete, to characterize quantum query complexity in the exact and unbounded-error settings. In the first work, to characterize quantum query complexity in continuous-time models, we adapt the discrete-time proof by constructing a universal adiabatic query algorithm. The principle of this algorithm rests on the adiabatic theorem [Born1928], together with an optimal solution of the dual of the adversary method. Notably, the analysis of the running time of our adiabatic algorithm is based on a proof that does not require a gap in the spectrum of the Hamiltonian. In the second work, we aim to characterize quantum query complexity for unbounded or zero error. To that end, we revisit and improve the adversary method through a Lagrangian-mechanics approach, in which we construct a Lagrangian indicating the number of queries needed to move through phase space, allowing us to define a "query action". Since this Lagrangian is expressed as a semidefinite program, its classical study via the Euler-Lagrange equations requires the envelope theorem, a powerful tool from mathematical economics. The last work, further afield, concerns information complexity (and, by extension, communication complexity) for simulating nonlocal correlations, or more precisely the amount of (Shannon) information that two parties must exchange to obtain these correlations. To this end, we define a new complexity, called the zero-information complexity IC_0, via the model without communication. This complexity has the advantage of being expressed as a convex optimization problem. For the CHSH correlations, we solve the optimization problem in the one-way case, where we recover a known result. For the two-way scenario, we give numerical evidence of the validity of this bound, and we solve a relaxed form of IC_0, which is a new result. / Doctorate in Engineering Sciences and Technology / info:eu-repo/semantics/nonPublished
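The CHSH correlations mentioned in the last part have two classic reference values that are easy to verify numerically: deterministic local strategies cannot exceed a CHSH score of 2, while quantum measurements on a Bell state reach 2√2. The snippet below checks both; the quantum side uses the standard optimal angles for the |Φ+⟩ state, whose correlations for measurement angles in the x-z plane are E(a, b) = cos(a − b).

```python
import itertools, math

def classical_chsh():
    """Best CHSH score S = E00 + E01 + E10 - E11 over all deterministic
    local strategies: outcomes a0, a1 (Alice) and b0, b1 (Bob) in {-1, +1}."""
    best = -4
    for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4):
        best = max(best, a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1)
    return best

def quantum_chsh():
    """CHSH value for |Phi+> with E(a, b) = cos(a - b), evaluated at the
    standard optimal measurement angles."""
    a = [0.0, math.pi / 2]
    b = [math.pi / 4, -math.pi / 4]
    E = lambda x, y: math.cos(a[x] - b[y])
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

c, q = classical_chsh(), quantum_chsh()
```

The gap between the two values is what any protocol simulating the quantum correlations with shared randomness and communication must pay for.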
19

Um framework para a geração semiautomática de solos de guitarra

Cunha, Nailson dos Santos 24 February 2016 (has links)
Submitted by Fernando Souza (fernandoafsou@gmail.com) on 2017-08-17T15:58:41Z No. of bitstreams: 1 arquivototal.pdf: 2509071 bytes, checksum: 7e5e11344bc01beb22fdb94bee3ccdcf (MD5) / Made available in DSpace on 2017-08-17T15:58:41Z (GMT). Previous issue date: 2016-02-24 / This work deals with the development of a framework based on computational and optimization methods for algorithmic composition, more precisely, for the generation of guitar solos. The proposed approach is considered semiautomatic because it makes use of small melodic fragments (licks) previously created from human models. The solos generated are in the Blues musical style and are applied over a well-known harmonic model called the 12-Bar Blues. A lick database was created, from which small instances containing a subset of the licks were randomly drawn so as to diversify the possible candidates for the solo to be generated. Once the instance is created, an optimization problem that consists of determining the optimal sequence of a subset of licks is solved using an integer linear programming model. A set of rules was implemented for creating a matrix that defines the transition cost between the licks. The outputs generated were stored in the MusicXML format and can be read by most applications that provide support for this type of file and are capable of displaying it in tablature format. The solos created were evaluated by a sample of 173 subjects, classified as beginner, intermediate and professional musicians. A web application was developed to streamline the evaluation process. The results obtained show that the solos whose licks were optimally sequenced were evaluated statistically much better than those randomly sequenced, which indicates that the proposed methodology was capable of producing, on average, solos with a favorable percentage of acceptance. 
/ Este trabalho trata do desenvolvimento de um framework baseado em métodos computacionais e de otimização para a composição algorítmica, mais especificamente, para a geração de solos de guitarra. A abordagem proposta foi considerada semiautomática pois faz uso de pequenos fragmentos melódicos (licks) previamente criados a partir de modelos humanos. Os solos gerados possuem características do estilo musical Blues e são aplicados sobre um modelo de harmonia bastante conhecido denominado 12-Bar Blues. Um banco de dados de licks foi criado, do qual são realizados sorteios de instâncias menores do conjunto, diversificando os possíveis candidatos a estarem no solo a ser gerado. De posse da instância, um problema de otimização, que consiste em sequenciar de forma otimizada um subconjunto de licks, é resolvido utilizando um modelo de programação linear inteira. Implementou-se um conjunto de regras para a criação de uma matriz que define o custo de transição entre os licks. As saídas geradas são armazenadas no formato MusicXML e podem ser lidas pela maioria dos aplicativos que possuam suporte a esse tipo de arquivo e disponibilizem visualização no formato de tablaturas. Os solos criados foram avaliados por uma amostra de 173 indivíduos, classificados como músicos iniciantes, intermediários e profissionais. Uma aplicação web foi desenvolvida para agilizar o processo de avaliação. Os resultados obtidos demonstram que os solos cujos licks foram sequenciados de forma otimizada foram estatisticamente mais bem avaliados que aqueles sequenciados aleatoriamente, indicando que a metodologia proposta foi capaz de produzir, em média, solos com percentual de aceitação favorável.
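The sequencing subproblem, ordering licks so that the summed transition costs are minimal, can be shown in miniature. Here an exhaustive search over a made-up 5×5 transition-cost matrix stands in for the integer linear program; a real instance would be solved with an ILP solver or a heuristic.

```python
from itertools import permutations

# Toy transition-cost matrix between five licks (invented values standing
# in for the rule-based cost matrix described in the abstract).
COST = [
    [0, 2, 9, 5, 4],
    [2, 0, 3, 7, 6],
    [9, 3, 0, 1, 8],
    [5, 7, 1, 0, 2],
    [4, 6, 8, 2, 0],
]

def best_sequence(n=5):
    """Exhaustive stand-in for the integer program: find the ordering of
    all licks with the cheapest sum of pairwise transitions (open path)."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(COST[perm[i]][perm[i + 1]] for i in range(n - 1))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

seq, cost = best_sequence()
```

This is a shortest Hamiltonian path, so brute force only works for a handful of licks; the ILP formulation is what makes realistic instance sizes tractable.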
20

SCOUT: a multi-objective method to select components in designing unit testing

Freitas, Eduardo Noronha de Andrade 15 February 2016 (has links)
Submitted by Marlene Santos (marlene.bc.ufg@gmail.com) on 2016-06-09T17:02:10Z No. of bitstreams: 2 Tese - Eduardo Noronha de Andrade Freitas - 2016.pdf: 1936673 bytes, checksum: 4336d187b0e552ae806ef83b9f695db0 (MD5) license_rdf: 19874 bytes, checksum: 38cb62ef53e6f513db2fb7e337df6485 (MD5) / Approved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2016-06-10T11:14:00Z (GMT) / Made available in DSpace on 2016-06-10T11:14:00Z (GMT). Previous issue date: 2016-02-15 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG / The creation of a unit testing suite is preceded by the selection of which components (code units) should be tested. This selection is a significant challenge, usually made based on team members' experience or guided by defect prediction or fault localization models. We model the selection of components for unit testing with limited resources as a multi-objective problem, addressing two different objectives: maximizing benefits and minimizing cost. To measure the benefit of a component, we make use of important metrics from static analysis (cost of future maintenance), dynamic analysis (risk of fault, and frequency of calls), and business value. We tackle gaps and challenges in the literature to formulate an effective method, the Selector of Software Components for Unit testing (SCOUT). SCOUT is structured in two stages: an automated extraction of all necessary data and a multi-objective optimization process. 
The Android platform was chosen to perform our experiments, and nine leading open-source applications were used as our subjects. SCOUT was compared with two of the most frequently used strategies in terms of efficacy. We also compared the effectiveness and efficiency of seven algorithms in solving the multi-objective component selection problem: a random technique; a constructive heuristic; Gurobi, a commercial tool; a genetic algorithm; SPEA-II; NSGA-II; and NSGA-III. The results indicate the benefits of using multi-objective evolutionary approaches such as NSGA-II and demonstrate that SCOUT has significant potential to reduce market vulnerability. To the best of our knowledge, SCOUT is the first method to assist software testing managers in selecting components at the method level for the development of unit testing in an automated way, based on a multi-objective approach exploring static and dynamic metrics and business value. / (No abstract)
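The cost/benefit trade-off behind multi-objective component selection can be illustrated with plain Pareto dominance: a component survives if no other component is both cheaper to test and at least as beneficial. The component names and scores below are invented, and real methods like NSGA-II add ranking and diversity mechanisms on top of this basic relation.

```python
def pareto_front(components):
    """components: dict name -> (cost, benefit). Returns the non-dominated
    names: nothing else is at least as cheap AND at least as beneficial
    with one of the two comparisons strict."""
    def dominates(a, b):
        (ca, ba), (cb, bb) = a, b
        return ca <= cb and ba >= bb and (ca < cb or ba > bb)

    front = []
    for name, score in components.items():
        if not any(dominates(other, score)
                   for n, other in components.items() if n != name):
            front.append(name)
    return sorted(front)

comps = {  # made-up (testing cost, expected benefit) per code unit
    "parse": (3, 9), "auth": (5, 9), "cache": (2, 4),
    "log":   (4, 2), "net":  (6, 10),
}
front = pareto_front(comps)
```

Here "auth" is dominated by "parse" (same benefit, lower cost) and "log" by "cache", so only the genuinely incomparable trade-offs remain.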
