281

Contribution à l'optimisation multi-objectif des paramètres de coupe en usinage et apport de l'analyse vibratoire : application aux matériaux métalliques et composites / Contribution to the multi-objective optimization of cutting parameters in machining and contribution of vibration analysis: application to metallic and composite materials

Chibane, Hicham 05 April 2013 (has links)
Les procédés de fabrication de pièces mécaniques par enlèvement de matière (tournage, fraisage, perçage, ...) connaissent une utilisation massive dans l'industrie aéronautique et automobile. Les pièces obtenues par ces procédés doivent satisfaire à des propriétés géométriques, métallurgiques et à des caractéristiques de qualité. Pour répondre à ces exigences, plusieurs essais expérimentaux basés sur le choix des conditions de coupe sont souvent nécessaires avant d'aboutir à une pièce satisfaisante. Actuellement, ces méthodes empiriques, basées sur l'expérience des fabricants et des utilisateurs des outils coupants, sont souvent très longues et coûteuses et ne donnent qu'une large plage de choix des paramètres en fonction des besoins. Toutefois, le coût très élevé d'un essai limite fondamentalement le nombre d'expériences ; obtenir une pièce respectant les caractéristiques souhaitées à un coût acceptable devient donc une tâche difficile. / Manufacturing processes that produce mechanical parts by material removal (turning, milling, drilling, ...) are used extensively in the aeronautics and automotive industries. The parts obtained with these processes must satisfy geometric, metallurgical and quality requirements. To meet these requirements, several experimental trials based on the selection of cutting conditions are often necessary before a satisfactory part is obtained. Currently, these empirical methods, based on the experience of manufacturers and users of cutting tools (charts, diagrams of experimental findings, ...), are often very lengthy and costly. However, the high cost of a trial limits the number of experiments, so obtaining a part that meets the desired characteristics at an acceptable cost is a difficult task. The constraints on the cutting conditions also depend on the type of material to be machined, since it determines the machining behaviour.
282

Multi-objective optimization for Green Supply Chain Management and Design : Application to the orange juice agrofood cluster / Optimisation multi-objectif pour la gestion et la conception d'une chaîne logistique verte : Application au cas de la filière agroalimentaire du jus d'orange

Miranda Ackerman, Marco Augusto 05 November 2015 (has links)
La gestion de la chaîne logistique a gagné en maturité depuis l’extension de son champ d’application qui portait sur des problématiques opérationnelles et économiques s’est élargi à des questions environnementales et sociales auxquelles sont confrontées les organisations industrielles actuelles. L’addition du terme «vert» aux activités de la chaîne logistique vise à intégrer une conscience écologique dans tous les processus de la chaîne d'approvisionnement. Le but de ce travail est de développer un cadre méthodologique pour traiter la gestion de la chaîne logistique verte (GrSCM) basée sur une approche d'optimisation multi-objectif, en se focalisant sur la conception, la planification et les opérations de la chaîne agroalimentaire, à travers la mise en oeuvre des principes de gestion et de logistique de la chaîne d'approvisionnement verte. L'étude de cas retenu est la filière du jus d'orange. L'objectif du travail consiste en la minimisation de l'impact environnemental et la maximisation de la rentabilité économique pour des catégories de produits sélectionnés. Ce travail se concentre sur l'application de la GrSCM à deux questions stratégiques fondamentales visant les chaînes d'approvisionnement agroalimentaire. La première est liée au problème de la sélection des fournisseurs en produits « verts » (GSS) pour les systèmes de production agricole et à leur intégration dans le réseau globalisé de la chaîne d'approvisionnement. Le second se concentre sur la conception globale du réseau de la chaîne logistique verte (GSCND). Ces deux sujets complémentaires sont finalement intégrés afin d'évaluer et exploiter les caractéristiques des chaînes d'approvisionnement agro-alimentaire en vue du développement d’un éco-label. La méthodologie est basée sur le couplage entre analyse du cycle de vie (ACV), optimisation multi-objectifs par algorithmes génétiques et technique d’aide à la décision multicritère (de type TOPSIS). L’approche est illustrée et validée par le développement et l'analyse d'une étude de cas de la chaîne logistique de jus d'orange, modélisée comme une chaîne logistique verte (GrSC) à trois échelons composés de la production d’oranges, de leur transformation en jus, puis de leur distribution, chaque échelon étant modélisé de façon plus fine en sous-composants. D’un point de vue méthodologique, le travail a démontré l’intérêt du cadre de modélisation et d’optimisation de GrSC dans le contexte des chaînes d'approvisionnement, notamment pour le développement d’un éco-label dans le domaine de l’agro-alimentaire. Il peut aider les décideurs pour gérer la complexité inhérente aux décisions de conception de la chaîne d'approvisionnement agroalimentaire, induite par la nature multi-objectifs multi-acteurs multi-périodes du problème, empêchant ainsi une prise de décision empirique et segmentée. D’un point de vue expérimental, sous les hypothèses utilisées dans l'étude de cas, les résultats du travail soulignent que si l’on restreint l’éco-label "bio" à l'aspect agricole, seule une faible, voire aucune amélioration sur la performance environnementale de la chaîne d'approvisionnement n’est atteinte. La prise en compte des critères environnementaux pertinents sur l’ensemble du cycle de vie s’avère être une meilleure option pour les stratégies publiques et privées afin de tendre vers des chaînes agro-alimentaires plus durables. 
/ Supply chain and operations management has matured from a field that addressed only operational and economic concerns to one that comprehensively considers the broader environmental and social issues that face industrial organizations of today. Adding the term “green” to supply chain activities seeks to incorporate environmentally conscious thinking in all processes in the supply chain. The aim of this work is to develop a Green Supply Chain (GrSC) framework based on a multi-objective optimization approach, with specific emphasis on agro-food supply chain design, planning and operations through the implementation of appropriate green supply chain management and logistics principles. The case study is the orange juice cluster. The research objective is the minimization of the environmental burden and the maximization of economic profitability of the selected product categories. This work focuses on the application of GrSCM to two fundamental strategic issues targeting agro-food supply chains. The former is related to the Green Supplier Selection (GSS) problem devoted to the farming production systems and the way they are integrated into the global supply chain network. The latter focuses on the global Green Supply Chain Network Design (GSCND) as a whole. These two complementary and ultimately integrated strategic topics are framed in order to evaluate and exploit the unique characteristics of agro-food supply chains in relation to eco-labeling. The methodology is based on the use of Life Cycle Assessment, Multi-objective Optimization via Genetic Algorithms and Multiple-criteria Decision Making tools (TOPSIS type). The approach is illustrated and validated through the development and analysis of an Orange Juice Supply Chain case study modelled as a three-echelon GrSC composed of the supplier, manufacturing and market levels that in turn are decomposed into more detailed subcomponents. Methodologically, the work has shown that the proposed GrSCM modelling and optimization framework is useful in the context of eco-labeled agro-food supply chains and feasible in particular for the orange juice cluster. The proposed framework can help decision makers handle the complexity that characterizes agro-food supply chain design decisions and that is brought on by the multi-objective and multi-period nature of the problem as well as by the multiple stakeholders, thus preventing decisions from being made in a segmented, empirical manner. Experimentally, under the assumptions used in the case study, the work highlights that by focusing only on the “organic” eco-label to improve the agricultural aspect, low to no improvement on overall supply chain environmental performance is reached in relative terms. In contrast, the environmental criteria resulting from a full lifecycle approach are a better option for future public and private policies to reach more sustainable agro-food supply chains.
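The TOPSIS-type multiple-criteria decision step mentioned in this abstract has a standard, compact form. Below is a minimal sketch of a generic TOPSIS ranking, not the author's implementation; the two criteria (profit to be maximized, life-cycle emissions to be minimized), the weights and the candidate scores are illustrative assumptions.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (n_alternatives, n_criteria) raw scores
    weights : criteria weights summing to 1
    benefit : True if a criterion is to be maximized, False if minimized
    """
    M = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = w * M / np.linalg.norm(M, axis=0)
    # Ideal and anti-ideal points depend on whether the criterion is a benefit or a cost.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)   # closeness: 1 = closest to the ideal

# Hypothetical supply-chain configurations scored on profit (maximize)
# and life-cycle CO2-eq emissions (minimize).
scores = [[1.2e6, 950.0],
          [1.0e6, 620.0],
          [0.8e6, 400.0]]
closeness = topsis(scores, weights=[0.5, 0.5], benefit=[True, False])
print(np.argsort(-closeness))  # alternatives ranked from best to worst
```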
283

Optimisation multi-objectif sous incertitudes de phénomènes de thermique transitoire / Multi-objective optimization under uncertainty of transient thermal phenomena

Guerra, Jonathan 20 October 2016 (has links)
L'objectif de cette thèse est la résolution d’un problème d’optimisation multi-objectif sous incertitudes en présence de simulations numériques coûteuses. Une validation est menée sur un cas test de thermique transitoire. Dans un premier temps, nous développons un algorithme d'optimisation multi-objectif basé sur le krigeage nécessitant peu d’appels aux fonctions objectif. L'approche est adaptée au calcul distribué et favorise la restitution d'une approximation régulière du front de Pareto complet. Le problème d’optimisation sous incertitudes est ensuite étudié en considérant des mesures de robustesse pires cas et probabilistes. Le superquantile intègre tous les évènements pour lesquels la valeur de la sortie se trouve entre le quantile et le pire cas mais cette mesure de risque nécessite un grand nombre d’appels à la fonction objectif incertaine pour atteindre une précision suffisante. Peu de méthodes permettent de calculer le superquantile de la distribution de la sortie de fonctions coûteuses. Nous développons donc un estimateur du superquantile basé sur une méthode d'échantillonnage préférentiel et le krigeage. Il permet d’approcher les superquantiles avec une faible erreur et une taille d’échantillon limitée. De plus, un couplage avec l’algorithme multi-objectif permet la réutilisation des évaluations. Dans une dernière partie, nous construisons des modèles de substitution spatio-temporels capables de prédire des phénomènes dynamiques non linéaires sur des temps longs et avec peu de trajectoires d’apprentissage. Les réseaux de neurones récurrents sont utilisés et une méthodologie de construction facilitant l’apprentissage est mise en place. / This work aims at solving multi-objective optimization problems in the presence of uncertainties and costly numerical simulations. A validation is carried out on a transient thermal test case. First of all, we develop a multi-objective optimization algorithm based on kriging and requiring few calls to the objective functions. This approach is suited to distributed computing and favors the restitution of a regular approximation of the complete Pareto front. The optimization problem under uncertainties is then studied by considering worst-case and probabilistic robustness measures. The superquantile integrates every event for which the output value lies between the quantile and the worst case. However, it requires a large number of calls to the uncertain objective function to be accurately evaluated. Few methods make it possible to approach the superquantile of the output distribution of costly functions. To this end, we have developed an estimator based on importance sampling and kriging. It makes it possible to approach superquantiles with little error using a limited number of samples. Moreover, a coupling with the multi-objective algorithm allows some of those evaluations to be reused. In the last part, we build spatio-temporal surrogate models capable of predicting non-linear dynamic phenomena over long time horizons using few learning trajectories. The construction is based on recurrent neural networks, and a construction methodology that facilitates training is proposed.
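The superquantile described above (the average of the output over all outcomes between the quantile and the worst case) can be illustrated with a plain Monte-Carlo estimator. The sketch below is only that baseline estimator, not the kriging and importance-sampling estimator developed in the thesis; the Gaussian output is a made-up stand-in for a costly simulation.

```python
import numpy as np

def superquantile(samples, alpha):
    """Empirical superquantile (CVaR_alpha): the mean of the worst (1 - alpha)
    fraction of the samples, i.e. of all outcomes beyond the alpha-quantile."""
    y = np.sort(np.asarray(samples, dtype=float))
    q = np.quantile(y, alpha)
    return y[y >= q].mean()

rng = np.random.default_rng(0)
out = rng.normal(loc=10.0, scale=2.0, size=100_000)  # stand-in for an uncertain objective
print(superquantile(out, alpha=0.95))  # roughly mu + 2.06 * sigma for a Gaussian output
```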
284

Estudo avaliativo de um algoritmo genético auto-organizável e multiobjetivo utilizando aprendizado de máquina para aplicações de telecomunicações / Evaluative study of a self-organizing, multi-objective genetic algorithm using machine learning for telecommunications applications

Martins, Sinara da Rocha 15 August 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique that is usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to create methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. Initially, GAs considered only one evaluation function and a single optimization objective. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are implementations that allow many components of the algorithm to be changed dynamically (self-organizing algorithms). At the same time, combinations of GAs with machine learning techniques, intended to improve some of their performance and usability characteristics, are also common. In this work, a GA with a machine learning technique was analyzed and applied to an antenna design problem. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on the knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the degree of fitness of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, for example in the design of antennas and frequency-selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two variables of the antenna geometry - the length (Ls) and width (Ws) of a slit in the ground plane - with respect to three objectives: radiated signal bandwidth, return loss and central-frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one Spline for each optimization objective, to compose a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with the measured result of a physical prototype of the antenna built in the laboratory. In the present study, the algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability and accuracy. At the end of the study, an increase in execution time was observed in comparison with a common GA, due to the time required for the machine learning process. On the plus side, we noticed an appreciable gain in the flexibility and accuracy of the results, and a promising path that indicates how the algorithm could be extended to optimize problems with η variables / Este trabalho apresenta um estudo avaliativo dos efeitos da utilização de uma técnica de aprendizado de máquina nas características principais de um algoritmo genético (GA) multiobjetivo e auto-organizável.
Um GA típico pode ser visto como uma técnica de busca que é normalmente aplicada em problemas que envolvem complexidade não polinomial. Originalmente, estes algoritmos foram idealizados para criar métodos que buscam soluções aceitáveis para problemas em que os ótimos globais são inacessíveis ou são de difícil obtenção. A princípio, os GAs consideravam apenas uma função de avaliação e um único objetivo de otimização. Hoje, entretanto, são comuns as implementações que consideram diversos objetivos de otimização simultaneamente (algoritmos multiobjetivos), além de permitir a alteração de diversos componentes do algoritmo dinamicamente (algoritmos auto-organizáveis). Ao mesmo tempo, são comuns também as combinações dos GAs com técnicas de aprendizado de máquina para melhorar algumas de suas características de desempenho e utilização. Neste trabalho, um GA com recursos de aprendizado de máquina foi analisado e aplicado em um projeto de antena. Utilizou-se uma técnica variante de interpolação bicúbica, denominada Spline 2D, como técnica de aprendizado de máquina para estimar o comportamento de uma função de fitness dinâmica, a partir do conhecimento obtido de um conjunto de experimentos realizados em laboratório. Esta função de fitness é também denominada de função de avaliação e é responsável pela determinação do grau de aptidão de uma solução candidata (indivíduo) em relação às demais de uma mesma população. O algoritmo pode ser aplicado em diversas áreas, inclusive no domínio das telecomunicações, como nos projetos de antenas e de superfícies seletivas de frequência. Neste trabalho em particular, o algoritmo apresentado foi desenvolvido para otimizar o projeto de uma antena de microfita, comumente utilizada em sistemas de comunicação sem fio e projetada para aplicação em sistemas de banda ultra larga (Ultra-Wideband - UWB). O algoritmo permitiu a otimização de duas variáveis da geometria da antena - o Comprimento (Ls) e a Largura (Ws) de uma fenda no plano de terra - com relação a três objetivos: largura de banda do sinal irradiado, perda de retorno e desvio da frequência central. As duas dimensões (Ls e Ws) são usadas como variáveis em três distintas funções de interpolação, sendo uma Spline para cada objetivo da otimização, para compor uma função de fitness agregada e multiobjetiva. O resultado final proposto pelo algoritmo foi comparado com o resultado obtido de um programa simulador e com o resultado medido de um protótipo físico da antena construída em laboratório. No estudo apresentado, o algoritmo foi analisado com relação ao seu grau de sucesso, no que diz respeito a quatro características importantes de um GA multiobjetivo auto-organizável: desempenho, flexibilidade, escalabilidade e exatidão. Ao final do estudo, observou-se um aumento no tempo de execução do algoritmo em comparação a um GA comum, por conta do tempo necessário para o processo de aprendizagem. Como ponto positivo, notou-se um ganho sensível com relação à flexibilidade e à exatidão dos resultados apresentados, além de um caminho próspero que indica direções para que o algoritmo permita a otimização de problemas com η variáveis
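The surrogate-fitness idea described in this abstract (one 2D spline per objective over the slit dimensions Ls and Ws, aggregated into a single fitness that the GA can evaluate cheaply) can be sketched as follows. The grid, the measurement tables, the units and the weighted-sum aggregation are illustrative assumptions, not the dissertation's data or its actual aggregation rule.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical design grid for the slit length Ls and width Ws (assumed in millimetres).
Ls = np.linspace(4.0, 12.0, 5)
Ws = np.linspace(0.5, 3.0, 5)

# Placeholder measurement tables, one per objective, shaped (len(Ls), len(Ws)):
# bandwidth (maximize), return-loss magnitude (maximize), centre-frequency deviation (minimize).
rng = np.random.default_rng(1)
bandwidth = 3.0 + rng.random((5, 5))
return_loss = 15.0 + 5.0 * rng.random((5, 5))
freq_dev = 0.1 + 0.2 * rng.random((5, 5))

# One bicubic spline surrogate per objective (the "Spline 2D" learning step).
s_bw = RectBivariateSpline(Ls, Ws, bandwidth)
s_rl = RectBivariateSpline(Ls, Ws, return_loss)
s_fd = RectBivariateSpline(Ls, Ws, freq_dev)

def fitness(ls, ws, w=(1.0, 1.0, 1.0)):
    """Aggregate multi-objective fitness estimated from the spline surrogates."""
    return (w[0] * s_bw.ev(ls, ws)
            + w[1] * s_rl.ev(ls, ws)
            - w[2] * s_fd.ev(ls, ws))   # frequency deviation is a cost, so it is subtracted

print(fitness(7.3, 1.8))  # cheap estimate a GA can call instead of running a lab experiment
```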
285

Multi-objective optimization in learn to pre-compute evidence fusion to obtain high quality compressed web search indexes

Pal, Anibrata 19 April 2016 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The world of information retrieval revolves around web search engines. Text search engines are one of the most important sources for retrieving information. Web search engines index huge volumes of data and handle billions of documents. Learning-to-rank methods have been adopted in recent years to generate high-quality answers for search engines. The ultimate goal of these systems is to provide high-quality results and, at the same time, reduce the computational time for query processing. These two goals are directly connected: reading from smaller, more compact indexes accelerates data access and, in other words, reduces the computational time during query processing. In this thesis we study the use of a learning-to-rank method not only to produce a high-quality ranking of search results, but also to optimize another important aspect of search systems, the compression achieved in their indexes. We show that it is possible to achieve impressive gains in search engine index compression with virtually no loss in the final quality of results by using simple, yet effective, multi-objective optimization techniques in the learning process. We also used basic pruning techniques to assess the impact of pruning on index compression. In our best approach, we were able to achieve more than 40% compression of the existing index, while keeping the quality of results on par with methods that disregard compression. / Máquinas de busca para a web indexam grandes volumes de dados, lidando com coleções que muitas vezes são compostas por dezenas de bilhões de documentos. Métodos de aprendizagem de máquina têm sido adotados para gerar as respostas de alta qualidade nesses sistemas e, mais recentemente, há métodos de aprendizagem de máquina propostos para a fusão de evidências durante o processo de indexação das bases de dados. Estes métodos servem então não somente para melhorar a qualidade de respostas em sistemas de busca, mas também para reduzir custos de processamento de consultas. O único método de fusão de evidências em tempo de indexação proposto na literatura tem como foco exclusivamente o aprendizado de funções de fusão de evidências que gerem bons resultados durante o processamento de consulta, buscando otimizar este único objetivo no processo de aprendizagem. O presente trabalho apresenta uma proposta onde utiliza-se o método de aprendizagem com múltiplos objetivos, visando otimizar, ao mesmo tempo, tanto a qualidade de respostas produzidas quanto o grau de compressão do índice produzido pela fusão de rankings.
Os resultados apresentados indicam que a adoção de um processo de aprendizagem com múltiplos objetivos permite que se obtenha melhora significativa na compressão dos índices produzidos sem que haja perda significativa na qualidade final do ranking produzido pelo sistema.
286

Um algoritmo exato para obter o conjunto solução de problemas de portfólio / An exact algorithm to obtain the solution set to portfolio problems

Villela, Pedro Ferraz, 1982- 25 August 2018 (has links)
Orientador: Francisco de Assis Magalhães Gomes Neto / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Resumo: Neste trabalho, propomos um método exato para obter o conjunto solução de um problema biobjetivo quadrático de otimização de carteiras de investimento, que envolve variáveis binárias. Nosso algoritmo é baseado na junção de três algoritmos específicos. O primeiro encontra uma curva associada ao conjunto solução de problemas biobjetivo contínuos por meio de um método de restrições ativas, o segundo encontra o ótimo de um problema de programação quadrática inteira mista pelo método Branch-and-Bound, e o terceiro encontra a interseção de duas curvas associadas a problemas biobjetivo distintos. Ao longo do texto, algumas heurísticas e métodos adicionais também são introduzidos, com o propósito de acelerar a convergência do algoritmo proposto. Além disso, o nosso método pode ser visto como uma nova contribuição na área, pois ele determina, de forma exata, a curva associada ao conjunto solução do problema biobjetivo inteiro misto, algo que é incomum na literatura, pois o problema alvo geralmente é abordado via métodos meta-heurísticos. Ademais, ele mostrou ser eficiente do ponto de vista do tempo computacional, pois encontra o conjunto solução do problema em poucos segundos / Abstract: In this work, we propose an exact method to find the solution set of a mixed quadratic bi-objective portfolio optimization problem. Our method is based on the combination of three specific algorithms. The first one obtains a curve associated with the solution set of a continuous bi-objective problem through an active set algorithm, the second one solves a mixed quadratic optimization problem through the Branch-and-Bound method, and the third one searches for the intersection of two curves associated with distinct bi-objective problems. Throughout the text, some heuristics are also introduced in order to accelerate the performance of the method. Moreover, our method can be seen as a new contribution to the field, since it finds, in an exact way, the curve related to the solution set of the mixed integer bi-objective problem, something uncommon in the corresponding literature, where the target problem is usually approached by metaheuristic methods. Additionally, it has also proven to be efficient in terms of running time, being capable of finding the problem's solution set within a few seconds / Doutorado / Matemática Aplicada / Doutor em Matemática Aplicada
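For readers unfamiliar with the underlying bi-objective portfolio model, the following sketch traces an approximate mean-variance frontier by a plain weighted-sum sweep over the continuous relaxation (no binary variables). It is not the exact algorithm proposed in the thesis, and the return and covariance figures are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns and covariance for four assets.
mu = np.array([0.08, 0.12, 0.10, 0.07])
cov = np.array([[0.10, 0.02, 0.01, 0.00],
                [0.02, 0.12, 0.03, 0.01],
                [0.01, 0.03, 0.09, 0.02],
                [0.00, 0.01, 0.02, 0.05]])
n = len(mu)
budget = {'type': 'eq', 'fun': lambda x: x.sum() - 1.0}   # weights sum to one
bounds = [(0.0, 1.0)] * n                                 # no short selling

frontier = []
for lam in np.linspace(0.05, 0.95, 10):
    # Scalarized objective: trade risk (variance) against expected return.
    obj = lambda x, l=lam: l * (x @ cov @ x) - (1.0 - l) * (mu @ x)
    res = minimize(obj, np.full(n, 1.0 / n), method='SLSQP',
                   bounds=bounds, constraints=[budget])
    x = res.x
    frontier.append((x @ cov @ x, mu @ x))   # (variance, return) of one efficient point

for risk, ret in frontier:
    print(f"variance={risk:.4f}  return={ret:.4f}")
```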
287

Approches de résolution exacte et approchée en optimisation combinatoire multi-objectif, application au problème de l'arbre couvrant de poids minimal / Exact and approximate solving approaches in multi-objective combinatorial optimization, application to the minimum weight spanning tree problem

Lacour, Renaud 02 July 2014 (has links)
On s'attache dans cette thèse à plusieurs aspects liés à la résolution de problèmes multi-objectifs, sans se limiter au cas biobjectif. Nous considérons la résolution exacte, dans le sens de la détermination de l'ensemble des points non dominés, ainsi que la résolution approchée dans laquelle on cherche une approximation de cet ensemble dont la qualité est garantie a priori. Nous nous intéressons d'abord au problème de la détermination d'une représentation explicite de la région de recherche. La région de recherche, étant donné un ensemble de points réalisables connus, exclut la partie de l'espace des objectifs que dominent ces points et constitue donc la partie de l'espace des objectifs où les efforts futurs doivent être concentrés dans la perspective de déterminer tous les points non dominés. Puis nous considérons le recours aux algorithmes de séparation et évaluation ainsi qu'aux algorithmes de ranking afin de proposer une nouvelle méthode hybride de détermination de l'ensemble des points non dominés. Nous montrons que celle-ci peut également servir à obtenir une approximation de l'ensemble des points non dominés. Cette méthode est implantée pour le problème de l'arbre couvrant de poids minimal. Les quelques propriétés de ce problème que nous passons en revue nous permettent de spécialiser certaines procédures et d'intégrer des prétraitements spécifiques. L'intérêt de cette approche est alors soutenu à l'aide de résultats expérimentaux. / This thesis deals with several aspects related to solving multi-objective problems, without restriction to the bi-objective case. We consider exact solving, which generates the nondominated set, and approximate solving, which computes an approximation of the nondominated set with an a priori guarantee on the quality. We first consider the determination of an explicit representation of the search region. The search region, defined with respect to a set of known feasible points, excludes from the objective space the part which is dominated by these points. Future efforts to find all nondominated points should therefore be concentrated on the search region. Then we review branch and bound and ranking algorithms and we propose a new hybrid approach for the determination of the nondominated set. We show how the proposed method can be adapted to generate an approximation of the nondominated set. This approach is instantiated on the minimum spanning tree problem. We review several properties of this problem which enable us to specialize some procedures of the proposed approach and integrate specific preprocessing rules. This approach is finally supported through experimental results.
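Both the nondominated set and the search region above are defined through Pareto dominance. As a point of reference, here is a naive O(n²) dominance filter (all objectives minimized); it is only a baseline illustration, not the hybrid branch-and-bound/ranking method of the thesis, and the sample points are made up.

```python
import numpy as np

def nondominated(points):
    """Return the nondominated subset of `points` (every objective minimized).
    A point is dominated if another point is no worse in every objective
    and strictly better in at least one."""
    P = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(P):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(P) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)

# Toy bi-objective example, e.g. total weights of a spanning tree under two cost functions.
pts = [(4, 9), (5, 5), (7, 3), (6, 6)]
print(nondominated(pts))   # (6, 6) is dominated by (5, 5); the other three points remain
```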
288

Optimisation des corrections de forme dans les engrenages droits et hélicoïdaux : Approches déterministes et probabilistes / Optimization of tooth modifications for spur and helical gears : Deterministic and probabilistic approaches

Ghribi, Dhafer 21 February 2013 (has links)
Cette thèse a pour objectif de mener une optimisation des corrections de forme des engrenages cylindriques, droits et hélicoïdaux. Le travail se décompose en quatre parties principales. Dans la première partie, on présente un état de l’art sur les différents types de corrections de forme proposées dans la littérature. Une analyse des travaux d’optimisation, menés jusqu’à présent, est conduite. La deuxième partie est focalisée sur une approche déterministe visant à cerner l’influence des corrections de dentures sur les principaux critères de performance. Dans ce contexte, on propose un développement analytique qui caractérise les fluctuations d’erreur de transmission quasi-statique permettant d’obtenir des relations approchées originales. En présence de plusieurs paramètres de corrections, un algorithme génétique est utilisé afin d’identifier, en un temps réduit, les solutions optimales. Nous proposons, en troisième partie, une étude probabiliste pour caractériser les corrections robustes. Ainsi, on définit une fonction objectif de robustesse faisant intervenir des paramètres statistiques. Après une étape de validation, l’estimation de ces paramètres est effectuée en utilisant les formules de quadrature de Gauss. Plusieurs études paramétriques sont ensuite menées et qui reflètent entre autre l’influence des classes de qualité, la forme de la correction, etc. Enfin, on a conduit une optimisation multicritère en utilisant un algorithme d’optimisation spécifique : « NSGA-II ». / The objective of this PhD thesis is to define optimum tooth shape modifications for spur and helical gears with regard to a number of design parameters. The memoir is divided into four parts. A literature review on tooth modification along with optimization techniques is presented in the first section. The second part of the text is centred on a deterministic approach to the performance induced by tooth modifications on several design criteria commonly used in gearing. Some original analytical developments on transmission errors are presented which are combined with a genetic algorithm in order to define optimum profile relief. In the third part of the memoir, a probabilistic analysis is conducted based on Gaussian quadrature leading to robust tooth modifications. A number of results are presented which illustrate the influence of the quality grade, the tooth modification shapes, etc. Finally, the results delivered by a specific multi-criterion optimization algorithm “NSGA-II” are displayed and commented upon.
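The probabilistic part of this abstract estimates statistical parameters of the response by Gauss quadrature. The sketch below shows the general Gauss-Hermite recipe for the mean and standard deviation of a response under a Gaussian input; the response function, its nominal value and the robust objective mu + 3*sigma are illustrative assumptions, not the thesis's gear model.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_stats(f, mean, std, n=8):
    """Mean and standard deviation of f(X), X ~ Normal(mean, std),
    estimated with an n-point Gauss-Hermite quadrature rule."""
    x, w = hermgauss(n)                       # nodes/weights for integral of exp(-x^2) g(x)
    vals = f(mean + np.sqrt(2.0) * std * x)   # change of variable to the physical space
    m1 = np.sum(w * vals) / np.sqrt(np.pi)
    m2 = np.sum(w * vals**2) / np.sqrt(np.pi)
    return m1, np.sqrt(max(m2 - m1**2, 0.0))

# Placeholder response: say, peak-to-peak transmission error as a function of the
# actual profile-relief amplitude, which deviates randomly from its nominal value.
response = lambda relief: 1.0 + 0.05 * (relief - 20.0) ** 2

mu, sigma = gauss_stats(response, mean=20.0, std=2.0)
print(mu, sigma)
# A robust objective can then combine both statistics, e.g. mu + 3 * sigma,
# and be fed to a multi-objective optimizer such as NSGA-II.
```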
289

Méthodes et applications industrielles en optimisation multi-critère de paramètres de processus et de forme en emboutissage / Methods and industrial applications in multicriteria optimization of process parameters in sheet metal forming

Oujebbour, Fatima Zahra 12 March 2014 (has links)
Face aux exigences concurrentielles et économiques actuelles dans le secteur automobile, l'emboutissage a l'avantage, comme étant un procédé de mise en forme par grande déformation, de produire, en grandes cadences, des pièces de meilleure qualité géométrique par rapport aux autres procédés de fabrication mécanique. Cependant, il présente des difficultés de mise en œuvre, cette dernière s'effectue généralement dans les entreprises par la méthode classique d'essai-erreur, une méthode longue et très coûteuse. Dans la recherche, le recours à la simulation du procédé par la méthode des éléments finis est une alternative. Elle est actuellement une des innovations technologiques qui cherche à réduire le coût de production et de réalisation des outillages et facilite l'analyse et la résolution des problèmes liés au procédé. Dans le cadre de cette thèse, l'objectif est de prédire et de prévenir, particulièrement, le retour élastique et la rupture. Ces deux problèmes sont les plus répandus en emboutissage et présentent une difficulté en optimisation puisqu'ils sont antagonistes. Une pièce mise en forme par emboutissage à l'aide d'un poinçon sous forme de croix a fait l'objet de l'étude. Nous avons envisagé, d'abord, d'analyser la sensibilité des deux phénomènes concernés par rapport à deux paramètres caractéristiques du procédé d'emboutissage (l'épaisseur du flan initial et de la vitesse du poinçon), puis par rapport à quatre (l'épaisseur du flan initial, de la vitesse du poinçon, l'effort du serre flan et le coefficient du frottement) et finalement par rapport à la forme du contour du flan. Le recours à des méta-modèles pour optimiser les deux critères était nécessaire. / The processing of sheet metal forming is of vital importance to a large range of industries as production of car bodies, cans, appliances, etc. It generates complex and precise parts. Although, it is an involved technology combining elastic-plastic bending and stretch deformation of the workpiece. These deformations can lead to undesirable problems in the desired shape and performance of the stamped. To perform a successful stamping process and avoid shape deviations such as springback and failure defects, process variables should be optimized.In the present work, the objective is the prediction and the prevention of, especially, springback and failure. These two phenomena are the most common problems in stamping process that present much difficulties in optimization since they are two conflicting objectives. The forming test studied in this thesis concern the stamping of an industrial workpiece stamped with a cross punch. To solve this optimization problem, the approach chosen was based on the hybridization of an heuristic and a direct descent method. This hybridization is designed to take advantage from both disciplines, stochastic and deterministic, in order to improve the robustness and the efficiency of the hybrid algorithm. For the multi-objective problem, we adopt methods based on the identification of Pareto front. To have a compromise between the convergence towards the front and the manner in which the solutions are distributed, we choose two appropriate methods. This methods have the capability to capture the Pareto front and have the advantage of generating a set of Pareto-optimal solutions uniformly spaced. The last property can be of important and practical.
290

Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization of Stochastic Systems : Improving the efficiency of time-constrained optimization

Siegmund, Florian January 2016 (has links)
In preference-based Evolutionary Multi-objective Optimization (EMO), the decision maker is looking for a diverse, but locally focused non-dominated front in a preferred area of the objective space, as close as possible to the true Pareto-front. Since solutions found outside the area of interest are considered less important or even irrelevant, the optimization can focus its efforts on the preferred area and find the solutions that the decision maker is looking for more quickly, i.e., with fewer simulation runs. This is particularly important if the available time for optimization is limited, as is the case in many real-world applications. Although previous studies using this kind of guided search with preference information, for example, with the R-NSGA-II algorithm, have shown positive results, only very few of them considered the stochastic outputs of simulated systems. In the literature, this phenomenon of stochastic evaluation functions is sometimes called noisy optimization. If an EMO algorithm is run without any countermeasure to noisy evaluation functions, the performance will deteriorate, compared to the case where the true mean objective values are known. While, in general, static resampling of solutions to reduce the uncertainty of all evaluated design solutions can allow EMO algorithms to avoid this problem, it will significantly increase the required simulation time/budget, as many samples will be wasted on candidate solutions which are inferior. In comparison, a Dynamic Resampling (DR) strategy can allow the exploration and exploitation trade-off to be optimized, since the required accuracy of the objective values varies between solutions. In a dense, converged population, it is important to know the accurate objective values, whereas noisy objective values are less harmful when an algorithm is exploring the objective space, especially early in the optimization process. Therefore, a well-designed Dynamic Resampling strategy which resamples solutions carefully, according to the resampling need, can help an EMO algorithm achieve better results than a static resampling allocation. While there are abundant studies in Simulation-based Optimization that considered Dynamic Resampling, the survey done in this study has found that there is no related work that considered how combinations of Dynamic Resampling and preference-based guided search can further enhance the performance of EMO algorithms, especially if the problems under study involve computationally expensive evaluations, like production systems simulation. The aim of this thesis is therefore to study, design and compare new combinations of preference-based EMO algorithms with various DR strategies, in order to improve the solution quality found by simulation-based multi-objective optimization with stochastic outputs, under a limited function evaluation or simulation budget. Specifically, based on the advantages and flexibility offered by interactive, reference point-based approaches, studies of the performance enhancements of R-NSGA-II when augmented with various DR strategies, with increasing degrees of statistical sophistication, as well as several adaptive features in terms of optimization parameters, have been made. The research results have clearly shown that optimization results can be improved, if a hybrid DR strategy is used and adaptive algorithm parameters are chosen according to the noise level and problem complexity.
In the case of a limited simulation budget, the results allow the conclusions that both decision maker preferences and DR should be used at the same time to achieve the best results in simulation-based multi-objective optimization. / Vid preferensbaserad evolutionär flermålsoptimering försöker beslutsfattaren hitta lösningar som är fokuserade kring ett valt preferensområde i målrymden och som ligger så nära den optimala Pareto-fronten som möjligt. Eftersom lösningar utanför preferensområdet anses som mindre intressanta, eller till och med oviktiga, kan optimeringen fokusera på den intressanta delen av målrymden och hitta relevanta lösningar snabbare, vilket betyder att färre lösningar behöver utvärderas. Detta är en stor fördel vid simuleringsbaserad flermålsoptimering med långa simuleringstider eftersom antalet olika konfigurationer som kan simuleras och utvärderas är mycket begränsat. Även tidigare studier som använt fokuserad flermålsoptimering styrd av användarpreferenser, t.ex. med algoritmen R-NSGA-II, har visat positiva resultat men enbart få av dessa har tagit hänsyn till det stokastiska beteendet hos de simulerade systemen. I litteraturen kallas optimering med stokastiska utvärderingsfunktioner ibland "noisy optimization". Om en optimeringsalgoritm inte tar hänsyn till att de utvärderade målvärdena är stokastiska kommer prestandan vara lägre jämfört med om optimeringsalgoritmen har tillgång till de verkliga målvärdena. Statisk upprepad utvärdering av lösningar med syftet att reducera osäkerheten hos alla evaluerade lösningar hjälper optimeringsalgoritmer att undvika problemet, men leder samtidigt till en betydande ökning av antalet nödvändiga simuleringar och därigenom en ökning av optimeringstiden. Detta är problematiskt eftersom det innebär att många simuleringar utförs i onödan på undermåliga lösningar, där exakta målvärden inte bidrar till att förbättra optimeringens resultat. Upprepad utvärdering reducerar ovissheten och hjälper till att förbättra optimeringen, men har också ett pris. Om flera simuleringar används för varje lösning så minskar antalet olika lösningar som kan simuleras och sökrymden kan inte utforskas lika mycket, givet att det totala antalet simuleringar är begränsat. Dynamisk upprepad utvärdering kan däremot effektivisera flermålsoptimeringens avvägning mellan utforskning och exploatering av sökrymden baserat på det faktum att den nödvändiga precisionen i målvärdena varierar mellan de olika lösningarna i målrymden. I en tät och konvergerad population av lösningar är det viktigt att känna till de exakta målvärdena, medan osäkra målvärden är mindre skadliga i ett tidigt stadium i optimeringsprocessen när algoritmen utforskar målrymden. En dynamisk strategi för upprepad utvärdering med en noggrann allokering av utvärderingarna kan därför uppnå bättre resultat än en allokering som är statisk. Trots att finns ett rikligt antal studier inom simuleringsbaserad optimering som använder sig av dynamisk upprepad utvärdering så har inga relaterade studier hittats som undersöker hur kombinationer av dynamisk upprepad utvärdering och preferensbaserad styrning kan förbättra prestandan hos algoritmer för flermålsoptimering ytterligare. Speciell avsaknad finns det av studier om optimering av problem med långa simuleringstider, som t.ex. simulering av produktionssystem. Avhandlingens mål är därför att studera, konstruera och jämföra nya kombinationer av preferensbaserade optimeringsalgoritmer och dynamiska strategier för upprepad utvärdering. 
Syftet är att förbättra resultatet av simuleringsbaserad flermålsoptimering som har stokastiska målvärden när antalet utvärderingar eller optimeringstiden är begränsade. Avhandlingen har speciellt fokuserat på att undersöka prestandahöjande åtgärder hos algoritmen R-NSGA-II i kombination med dynamisk upprepad utvärdering, baserad på fördelarna och flexibiliteten som interaktiva referenspunktbaserade algoritmer erbjuder. Exempel på förbättringsåtgärder är dynamiska algoritmer för upprepad utvärdering med förbättrad statistisk osäkerhetshantering och adaptiva optimeringsparametrar. Resultaten från avhandlingen visar tydligt att optimeringsresultaten kan förbättras om hybrida dynamiska algoritmer för upprepad utvärdering används och adaptiva optimeringsparametrar väljs beroende på osäkerhetsnivån och komplexiteten i optimeringsproblemet. För de fall där simuleringstiden är begränsad är slutsatsen från avhandlingen att både användarpreferenser och dynamisk upprepad utvärdering bör användas samtidigt för att uppnå de bästa resultaten i simuleringsbaserad flermålsoptimering.
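One way to make the dynamic-resampling idea above concrete: give each candidate solution only as many replications as its own noise, and the stage of the search, justify. The sketch below is a generic standard-error-driven allocation loop, not any of the specific DR strategies evaluated in the thesis; the simulation stand-in and the thresholds are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(x):
    """Stand-in for one stochastic simulation replication of design x."""
    return (x - 3.0) ** 2 + rng.normal(scale=2.0)

def resample(x, progress, min_n=2, max_n=30, target_se=0.5):
    """Dynamic resampling: keep adding replications until the standard error of the
    mean objective is small enough. The accuracy requirement tightens as the
    optimization progresses (progress in [0, 1]), so early, exploratory evaluations
    stay cheap while late, converged ones become precise."""
    required_se = target_se * (1.0 - 0.8 * progress)   # stricter near the end of the run
    samples = [simulate(x) for _ in range(min_n)]
    while len(samples) < max_n:
        se = np.std(samples, ddof=1) / np.sqrt(len(samples))
        if se <= required_se:
            break
        samples.append(simulate(x))
    return np.mean(samples), len(samples)

print(resample(2.0, progress=0.1))   # few replications, rough estimate
print(resample(2.0, progress=0.9))   # many replications, tight estimate
```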
