161

Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings

Joseph, Ajin George January 2017 (has links) (PDF)
Optimization is a very important field with diverse applications in the physical, social and biological sciences and in various areas of engineering. It appears widely in machine learning, information retrieval, regression, estimation, operations research and a wide variety of computing domains. The subject is deeply studied, both theoretically and experimentally, and several algorithms are available in the literature. These algorithms, which can be executed (sequentially or concurrently) on a computing machine, explore the space of input parameters to seek high-quality solutions to the optimization problem, with the search mostly guided by certain structural properties of the objective function. In certain situations, the setting might additionally demand the “absolute optimum” or solutions close to it, which makes the task even more challenging. In this thesis, we propose an optimization algorithm which is “gradient-free”, i.e., it does not employ any knowledge of the gradient or higher-order derivatives of the objective function, but rather utilizes the objective function values themselves to steer the search. The proposed algorithm is particularly effective in a black-box setting, where a closed-form expression of the objective function is unavailable and the gradient or higher-order derivatives are hard to compute or estimate. Our algorithm is inspired by the well-known cross entropy (CE) method. The CE method is a model-based search method for solving continuous/discrete multi-extremal optimization problems where the objective function has minimal structure. The proposed method searches the statistical manifold of the parameters that identify the probability distribution/model defined over the input space, seeking the degenerate distribution concentrated on the global optima (assumed to be finite in number). 
In the early part of the thesis, we propose a novel stochastic approximation version of the CE method for the unconstrained optimization problem, where the objective function is real-valued and deterministic. The basis of the algorithm is a stochastic process of model parameters which is probabilistically dependent on the past history, where we reuse all the previous samples obtained in the process up to the current instant based on discounted averaging. This approach saves on the overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as stability, computational and storage efficiency and better accuracy. We further investigate, both theoretically and empirically, the asymptotic behaviour of the algorithm and find that the proposed algorithm exhibits global optimum convergence for a particular class of objective functions. Further, we extend the algorithm to solve the simulation/stochastic optimization problem. In stochastic optimization, the objective function possesses a stochastic characteristic, where the underlying probability distribution is in most cases hard to comprehend and quantify. This yields a more challenging optimization problem, whose difficulty stems primarily from the hardness of computing the objective function values for various input parameters with absolute certainty: one can only hope to obtain noise-corrupted objective function values. Settings of this kind arise in scenarios where the objective function is evaluated using a continuously evolving dynamical system or through a simulation. We propose a multi-timescale stochastic approximation algorithm, where we integrate an additional timescale to accommodate the noisy measurements and asymptotically attenuate the effects of the noise. 
We find that if the objective function and the measurement noise are well behaved and the timescales are compatible, then our algorithm can generate high-quality solutions. In the later part of the thesis, we propose algorithms for reinforcement learning/Markov decision processes (MDPs) using the optimization techniques developed earlier. An MDP can be considered a generalized framework for modelling planning under uncertainty. We provide a novel algorithm for the problem of prediction in reinforcement learning, i.e., estimating the value function of a given stationary policy of a model-free MDP (with large state and action spaces) using the linear function approximation architecture. Here, the value function is defined as the long-run average of the discounted transition costs. The resource requirement of the proposed method, in terms of computational and storage cost, scales quadratically in the size of the feature set. The algorithm is an adaptation of the multi-timescale variant of the CE method proposed in the earlier part of the thesis for simulation optimization. We also provide both theoretical and empirical evidence to corroborate the credibility and effectiveness of the approach. In the final part of the thesis, we consider a modified version of the control problem in a model-free MDP with large state and action spaces. The control problem most commonly addressed in the literature is to find an optimal policy which maximizes the value function, i.e., the long-run average of the discounted transition payoffs. Contemporary methods also presume access to a generative model/simulator of the MDP, with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. In this thesis, we consider a modified version, where the cost function to be optimized is a real-valued performance function (possibly non-convex) of the value function. 
Additionally, one has to seek the optimal policy without presuming access to the generative model. We propose a stochastic approximation algorithm for this particular control problem. The only information presumed available to the algorithm is a sample trajectory generated using an a priori chosen behaviour policy. The algorithm is data (sample trajectory) efficient, stable and robust, as well as computationally and storage efficient. We provide a proof of convergence of our algorithm to a high-performing policy relative to the behaviour policy.
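The CE machinery described above can be sketched in a few lines. Below is a generic, illustrative implementation with a Gaussian model over the input space and smoothed parameter updates standing in loosely for the thesis's discounted averaging; the function names and constants are assumptions, not the thesis's algorithm:

```python
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=100, elite_frac=0.2,
                           smoothing=0.7, iterations=50, seed=0):
    """Basic cross-entropy method: maintain a Gaussian model over the
    input space and repeatedly refit it to the elite (lowest-cost)
    samples so the distribution concentrates near a global optimum."""
    rng = np.random.default_rng(seed)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iterations):
        x = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
        # Smoothed update: new parameters blend elite statistics with history
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return mu

# Example: a shifted sphere function with global optimum at (3, 3)
best = cross_entropy_minimize(lambda x: float(((x - 3.0) ** 2).sum()),
                              mu=np.zeros(2), sigma=np.full(2, 5.0))
```

Note that only function values `f(x)` are used, which is exactly what makes the approach "gradient-free" and suitable for black-box objectives.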
162

Estimação de parâmetros em modelos para eliminação enzimática de substratos no fígado: um estudo via otimização global / Parameter estimation applied to enzymatic elimination models of liver substracts: a study via global optimization

Ana Carolina Rios Coelho 26 February 2009 (has links)
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, we attack a parameter optimization problem from biophysics, where the aim is to obtain the mean substrate concentration rate in the liver. This problem is highly non-linear, multimodal, and has a non-differentiable objective function. We solve it using optimization methods from the literature and three methods introduced in this work. The latter are based on the hybridization of a stochastic technique, which explores the search space, with a deterministic direct-search technique, which performs a more refined local search in the most promising areas of that space. Our results show that the new optimization methods perform better than those from the literature.
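The hybridization described in the abstract, stochastic global exploration followed by a deterministic direct local search, can be illustrated with a minimal sketch (uniform random probing plus compass search; all details are assumptions, not the authors' methods):

```python
import random

def hybrid_minimize(f, bounds, n_probes=200, seed=0):
    """Hybrid scheme: stochastic exploration (uniform random probing of the
    box) followed by a deterministic direct search (compass search) that
    refines the best probe. A generic sketch of the hybridization idea."""
    rng = random.Random(seed)
    # Phase 1: stochastic exploration of the search space
    best = min((tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                for _ in range(n_probes)), key=f)
    # Phase 2: deterministic compass search around the best probe
    x, step = list(best), 0.5
    while step > 1e-6:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5   # no axis move helped: refine the search scale
    return x

x_best = hybrid_minimize(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                         [(-5.0, 5.0), (-5.0, 5.0)])
```

The direct-search phase never uses derivatives, which is what makes such hybrids attractive for non-differentiable objectives like the one above.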
163

Aplicação de técnicas de programação linear e extensões para otimização da alocação de água em sistemas de recursos hídricos, utilizando métodos de pontos interiores. / Application of linear programming techniques and extensions for optimization of water allocation in water resource systems, using interior points methods.

André Schardong 13 April 2006 (has links)
This work presents an optimization tool for analyzing water allocation problems in watersheds, using linear and piecewise-linear programming techniques integrated with a channel flow-routing model. The optimization is performed globally, using linear programming solvers based on interior point methods. The methodology consists of obtaining an “optimal” solution for situations in which the available water is insufficient for all conflicting uses in the basin. The tool is being attached and incorporated into AcquaNet, a decision support system (DSS) for the analysis of water resource systems that uses a network flow algorithm to optimize water allocation. The linear programming formulation allows the system to be analyzed as a whole, so a better use of the available water is expected, whether as a smaller deficit in meeting demands or as greater storage in the reservoirs. Linear programming with interior point methods is nowadays a well-known and well-developed technique. Several freely available computational packages provide efficient implementations of interior point methods, which motivated their use in this work.
164

Využití prostředků umělé inteligence pro podporu rozhodování v podniku / The Use of Means of Artificial Intelligence for the Decision Making Support in the Firm

Jágr, Petr January 2012 (has links)
The master’s thesis deals with the use of artificial intelligence as support for managerial decision making in a company. The thesis contains an application which utilizes genetic and graph algorithms to optimize the location of production facilities and logistics warehouses with respect to transport costs.
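A minimal sketch of the genetic-algorithm idea, selecting warehouse sites to minimize transport cost, might look as follows (the instance, operators and parameters are illustrative assumptions, not the thesis application):

```python
import random

def ga_facility_location(sites, customers, k, generations=60, pop_size=40, seed=1):
    """Tiny genetic algorithm choosing k warehouse locations (from a list of
    candidate site coordinates) to minimize total rectilinear transport
    distance to customers."""
    rng = random.Random(seed)
    n = len(sites)

    def cost(chromo):
        # Each customer is served from the nearest selected site
        total = 0.0
        for cx, cy in customers:
            total += min(abs(sites[s][0] - cx) + abs(sites[s][1] - cy)
                         for s in chromo)
        return total

    pop = [rng.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                  # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = rng.sample(list(set(a) | set(b)), k)  # crossover: mix parents' sites
            if rng.random() < 0.3:                        # mutation: swap in a new site
                i = rng.randrange(k)
                child[i] = rng.choice([s for s in range(n) if s not in child])
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)

sites = [(0, 0), (10, 10), (5, 5), (0, 10)]
customers = [(0, 0)] * 3 + [(10, 10)] * 3
best_sites, best_cost = ga_facility_location(sites, customers, k=2)
```

With elitism, the best chromosome found so far always survives, so the reported cost never worsens across generations.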
165

Systems level generation of mammalian circadian rhythms and its connection to liver metabolism

Pett, Jan Patrick 16 May 2019 (has links)
Circadian clocks are endogenous oscillators that generate 24-hour rhythms. They allow many organisms to synchronize their physiology and behaviour with daily changes of the environment. In mammals such clocks are based on transcriptional-translational feedback loops; however, it is not fully understood which feedback loops contribute to rhythm generation. Within an organism, different clocks are distinguished by their localization in different organs. One of the key physiological functions of circadian clocks in various organs seems to be the temporal alignment of metabolic processes. In the first project we introduced and applied a method to systematically test regulations in a data-driven mathematical model of the core clock. Surprisingly, we discovered a feedback loop that had previously not been considered in the context of the mammalian circadian clock. This repressilator is consistent with knockout studies and further perturbation experiments. It could constitute an explanation for the different phases observed between Cryptochromes, which are part of the core clock. In the second project we repeatedly fitted the same mathematical model to tissue-specific data sets and identified essential feedback loops in all model versions. Interestingly, for all tissue-specific data sets we found synergies of loops generating rhythms together. Further, we found that the synergies differ depending on the tissue. In the third project we examined the circadian aspects of metabolism. We identified rhythmic data in different omics studies, then integrated and mapped them onto a metabolic network. Our analysis confirmed that many metabolic pathways may follow circadian rhythms. Interestingly, we also found that the average peak times of rhythmic components differ between pathways. Such differences might reflect a temporal alignment of metabolic functions to the times when they are required.
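Peak times of rhythmic components, as analyzed in the third project, are commonly estimated by harmonic regression; the following is a generic sketch of that technique, not the thesis's actual pipeline:

```python
import numpy as np

def fit_phase(t, y, period=24.0):
    """Harmonic regression y ≈ m + a·cos(wt) + b·sin(wt): returns the
    amplitude and peak time of the fitted rhythm with the given period."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    peak_time = (np.arctan2(b, a) / w) % period   # time of the fitted maximum
    return amplitude, peak_time

# Example: a noiseless rhythm peaking at 8 h with amplitude 2
t = np.linspace(0.0, 48.0, 97)
amplitude, peak_time = fit_phase(t, 1.0 + 2.0 * np.cos(2 * np.pi / 24 * (t - 8.0)))
```

Comparing such fitted peak times across pathways is one simple way to quantify the phase differences the abstract describes.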
166

Global Optimization of Dynamic Process Systems using Complete Search Methods

Sahlodin, Ali Mohammad 04 1900 (has links)
Efficient global dynamic optimization (GDO) using spatial branch-and-bound (SBB) requires the ability to construct tight bounds for the dynamic model. This thesis works toward efficient GDO by developing effective convex relaxation techniques for models with ordinary differential equations (ODEs). In particular, a novel algorithm, based upon a verified interval ODE method and the McCormick relaxation technique, is developed for constructing convex and concave relaxations of solutions of nonlinear parametric ODEs. In addition to better convergence properties, the relaxations so obtained are guaranteed to be no looser than their underlying interval bounds, and are typically tighter in practice. Moreover, they are rigorous in the sense of accounting for truncation errors. Nonetheless, the tightness of the relaxations is affected by overestimation from the dependency problem of interval arithmetic, which is not addressed systematically in the underlying interval ODE method. To handle this issue, the relaxation algorithm is extended to a Taylor model ODE method, which can provide generally tighter enclosures with better convergence properties than the interval ODE method. This way, an improved version of the algorithm is achieved where the relaxations are generally tighter than those computed with the interval ODE method, and offer better convergence. Moreover, they are guaranteed to be no looser than the interval bounds obtained from Taylor models, and are usually tighter in practice. However, the nonlinearity and (potential) nonsmoothness of the relaxations impede their fast and reliable solution. Therefore, the algorithm is finally modified by incorporating polyhedral relaxations in order to generate relatively tight and computationally cheap linear relaxations for the dynamic model. The resulting relaxation algorithm, along with a SBB procedure, is implemented in the MC++ software package. GDO utilizing the proposed relaxation algorithm is demonstrated to have significantly reduced computational expense, up to orders of magnitude, compared to existing GDO methods. / Doctor of Philosophy (PhD)
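For reference, the McCormick relaxation named above has a simple closed form for a bilinear term; a minimal sketch of its convex and concave envelopes over a box (variable names are illustrative):

```python
def mccormick_bilinear(xL, xU, yL, yU):
    """McCormick envelopes of the bilinear term w = x*y on the box
    [xL, xU] x [yL, yU]: returns the convex underestimator and the
    concave overestimator as functions of (x, y)."""
    def under(x, y):
        # pointwise maximum of the two supporting linear underestimators
        return max(xL * y + x * yL - xL * yL,
                   xU * y + x * yU - xU * yU)
    def over(x, y):
        # pointwise minimum of the two supporting linear overestimators
        return min(xU * y + x * yL - xU * yL,
                   xL * y + x * yU - xL * yU)
    return under, over

under, over = mccormick_bilinear(0.0, 2.0, 1.0, 3.0)
gap_at_center = over(1.0, 2.0) - under(1.0, 2.0)
```

The envelopes sandwich `x*y` everywhere on the box and coincide with it at the corners; the gap between them shrinks as the box is subdivided, which is what branch-and-bound exploits.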
167

Resource Allocation on Networks: Nested Event Tree Optimization, Network Interdiction, and Game Theoretic Methods

Lunday, Brian Joseph 08 April 2010 (has links)
This dissertation addresses five fundamental resource allocation problems on networks, all of which have applications to support Homeland Security or industry challenges. In the first application, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions, and to ameliorate losses in case of a successful attack. Given the disparate nature of prioritizing capital and material investments by federal, state, local, and private agencies to combat terrorism, our model and accompanying solution procedure represent an innovative, comprehensive, and quantitative approach to coordinate resource allocations from various agencies across the breadth of domains that deal with preventing attacks and mitigating their consequences. Adopting a nested event tree optimization framework, we present a novel formulation for the problem as a specially structured nonconvex factorable program, and develop two branch-and-bound schemes based respectively on utilizing a convex nonlinear relaxation and a linear outer-approximation, both of which are proven to converge to a global optimal solution. We also investigate a fundamental special-case variant for each of these schemes, and design an alternative direct mixed-integer programming model representation for this scenario. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. 
The developed set of algorithmic implementation strategies and enhancements are shown to outperform BARON over a set of simulated test instances, where the best proposed methodology produces an average optimality gap of 0.35% (compared to 4.29% for BARON) and reduces the required computational effort by a factor of 33. A sensitivity analysis is also conducted to explore the effect of certain key model parameters, whereupon we demonstrate that the prescribed algorithm can attain significantly tighter optimality gaps with only a near-linear corresponding increase in computational effort. In addition to enabling effective comprehensive resource allocations, this research permits coordinating agencies to conduct quantitative what-if studies on the impact of alternative resourcing priorities. The second application is motivated by the author's experience with the U.S. Army during a tour in Iraq, during which combined operations involving U.S. Army, Iraqi Army, and Iraqi Police forces sought to interdict the transport of selected materials used for the manufacture of specialized types of Improvised Explosive Devices, as well as to interdict the distribution of assembled devices to operatives in the field. In this application, we model and solve the problem of minimizing the maximum flow through a network from a given source node to a terminus node, integrating different forms of superadditive synergy with respect to the effect of resources applied to the arcs in the network. Herein, the superadditive synergy reflects the additional effectiveness of forces conducting combined operations, vis-à-vis unilateral efforts. We examine linear, concave, and general nonconcave superadditive synergistic relationships between resources, and accordingly develop and test effective solution procedures for the underlying nonlinear programs. 
For the linear case, we formulate an alternative model representation via Fourier-Motzkin elimination that reduces average computational effort by over 40% on a set of randomly generated test instances. This test is followed by extensive analyses of instance parameters to determine their effect on the levels of synergy attained using different specified metrics. For the case of concave synergy relationships, which yields a convex program, we design an inner-linearization procedure that attains solutions on average within 3% of optimality with a reduction in computational effort by a factor of 18 in comparison with the commercial codes SBB and BARON for small- and medium-sized problems; and outperforms these solvers on large-sized problems, where both failed to attain an optimal solution (and often failed to detect a feasible solution) within 1800 CPU seconds. Examining a general nonlinear synergy relationship, we develop solution methods based on outer-linearizations, inner-linearizations, and mixed-integer approximations, and compare these against the commercial software BARON. Considering increased granularities for the outer-linearization and mixed-integer approximations, as well as different implementation variants for both these approaches, we conduct extensive computational experiments to reveal that, whereas both these techniques perform comparably with respect to BARON on small-sized problems, they significantly improve upon its performance for medium- and large-sized problems. Our best-performing procedure reduces the computational effort by a factor of 461 for the subset of test problems for which the commercial global optimization software BARON could identify a feasible solution, while also achieving solutions of objective value 0.20% better than BARON. 
The third application is likewise motivated by the author's military experience in Iraq, both by several instances in which coalition forces attempted to interdict the transport of a kidnapping victim by a sectarian militia and, from the opposite perspective, by instances in which coalition forces transported detainees between internment facilities. For this application, we examine the network interdiction problem of minimizing the maximum probability of evasion by an entity traversing a network from a given source to a designated terminus, while incorporating novel forms of superadditive synergy between resources applied to arcs in the network. Our formulations examine either linear or concave (nonlinear) synergy relationships. Conformant with military strategies that frequently involve a combination of overt and covert operations to achieve an operational objective, we also propose an alternative model for sequential overt and covert deployment of subsets of interdiction resources, and conduct theoretical as well as empirical comparative analyses between models for purely overt (with or without synergy) and composite overt-covert strategies to provide insights into absolute and relative threshold criteria for recommended resource utilization. In contrast to existing static models, in a fourth application, we present a novel dynamic network interdiction model that improves realism by accounting for interactions between an interdictor deploying resources on arcs in a digraph and an evader traversing the network from a designated source to a known terminus, wherein the agents may modify strategies in selected subsequent periods according to respective decision and implementation cycles. 
We further enhance the realism of our model by considering a multi-component objective function, wherein the interdictor seeks to minimize the maximum value of a regret function that consists of the evader's net flow from the source to the terminus; the interdictor's procurement, deployment, and redeployment costs; and penalties incurred by the evader for misperceptions as to the interdicted state of the network. For the resulting minimax model, we use duality to develop a reformulation that facilitates a direct solution procedure using the commercial software BARON, and examine certain related stability and convergence issues. We demonstrate cases for convergence to a stable equilibrium of strategies for problem structures having a unique solution to minimize the maximum evader flow, as well as convergence to a region of bounded oscillation for structures yielding alternative interdictor strategies that minimize the maximum evader flow. We also provide insights into the computational performance of BARON for these two problem structures, yielding useful guidelines for other research involving similar non-convex optimization problems. For the fifth application, we examine the problem of apportioning railcars to car manufacturers and railroads participating in a pooling agreement for shipping automobiles, given a dynamically determined total fleet size. This study is motivated by the existence of such a consortium of automobile manufacturers and railroads, for which the collaborative fleet sizing and efforts to equitably allocate railcars amongst the participants are currently orchestrated by the TTX Company in Chicago, Illinois. In our study, we first demonstrate potential inequities in the industry standard resulting either from failing to address disconnected transportation network components separately, or from utilizing the current manufacturer allocation technique that is based on average nodal empty transit time estimates. 
We next propose and illustrate four alternative schemes to apportion railcars to manufacturers, respectively based on total transit time that accounts for queuing; two marginal cost-induced methods; and a Shapley value approach. We also provide a game-theoretic insight into the existing procedure for apportioning railcars to railroads, and develop an alternative railroad allocation scheme based on capital plus operating costs. Extensive computational results are presented for the ten combinations of current and proposed allocation techniques for automobile manufacturers and railroads, using realistic instances derived from representative data of the current business environment. We conclude with recommendations for adopting an appropriate apportionment methodology for implementation by the industry. / Ph. D.
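The interdiction applications above revolve around maximum flows and capacity reductions with superadditive synergy. A minimal sketch of the two ingredients follows: a textbook Edmonds-Karp max-flow routine, and one illustrative, assumed form of a synergistic capacity reduction (not the dissertation's actual models):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dense capacity matrix cap[u][v]."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                      # no augmenting path: flow is maximal
        bottleneck, v = float("inf"), t
        while v != s:                         # find the path's bottleneck capacity
            bottleneck = min(bottleneck, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                         # push flow along the path
            flow[parent[v]][v] += bottleneck
            flow[v][parent[v]] -= bottleneck
            v = parent[v]
        total += bottleneck

def interdicted_capacity(c, r1, r2, synergy=0.5):
    """Residual arc capacity after two agents apply resources r1 and r2;
    the bilinear term is one assumed form of superadditive synergy."""
    return max(0.0, c - r1 - r2 - synergy * r1 * r2)
```

Minimizing the maximum flow then amounts to choosing the `r1`, `r2` allocations, subject to budgets, so that the resulting `max_flow` over the interdicted capacities is as small as possible; the synergy term makes combined deployments reduce capacity more than the sum of unilateral ones.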
168

Ανάπτυξη και θεμελίωση νέων μεθόδων υπολογιστικής νοημοσύνης, ευφυούς βελτιστοποίησης και εφαρμογές / Development and foundation of new methods of computational intelligence, intelligent optimization and applications

Επιτροπάκης, Μιχαήλ 17 July 2014 (has links)
The main subject of this thesis revolves around the development and foundation of new methods of computational intelligence and intelligent optimization. The thesis is organized into three parts. First, we briefly present an overview of the field of Computational Intelligence, describing its main branches: Evolutionary Computation, Artificial Neural Networks and Fuzzy Systems. In the second part, we provide a detailed description of the newly developed families of algorithms for solving unconstrained numerical optimization problems in continuous spaces with at least one global optimum. The proposed families are based on two well-known and widely used algorithms, namely Particle Swarm Optimization (PSO) and Differential Evolution (DE). Both DE and PSO are the basic components of almost all methodologies proposed in the thesis. The proposed methodologies are based on common observations of the dynamics and of the structural and spatial characteristics of the DE and PSO algorithms. Four novel families are presented in this part, each exploiting these characteristics in a different way; they are efficient methods with quite interesting properties and dynamics. The presentation of our research contribution ends with the third and last part of the thesis, which includes the study and development of novel global optimization methodologies for training higher-order Artificial Neural Networks in serial and parallel/distributed computational environments. The thesis ends with a brief summary, conclusions and a discussion of its contribution.
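A canonical global-best PSO, one of the two base algorithms of the thesis, can be sketched as follows (the parameters and structure are the textbook variant, not the proposed families):

```python
import numpy as np

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (global-best) Particle Swarm Optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive (personal) + social (global) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3,
                                bounds=(-5.0, 5.0))
```

The swarm's shared `gbest` term is precisely the kind of structural characteristic (information flow between particles) that modified PSO families tend to redesign.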
169

Adaptive techniques for ultrafast laser material processing

Stoian, Razvan 18 November 2008 (has links) (PDF)
The need for very high precision in laser material processing has strongly encouraged studies of the effects of ultrashort pulses for structuring materials at the micro- and nanometre scale. Minimal energy diffusion and a strongly nonlinear interaction allow significant energy confinement at the smallest possible scales. The possibility of inducing rapid phase changes, and even of creating new states of matter with optimized properties and improved functions, makes ultrashort pulses serious candidates for use in very precise material transformation and structuring devices. The study of these structuring mechanisms and, in particular, of their dynamic characteristics is a key to optimizing the laser-matter interaction according to many criteria relevant to laser processing: efficiency, precision, quality. This manuscript synthesizes the author's work on the static and dynamic study of ultrafast energy deposition, with applications to laser processing. Knowledge of the dynamic response of materials after ultrashort laser irradiation shows that relaxation times drive the light-matter interaction. It is then possible to adapt the deposited energy to the material response using recently developed spatio-temporal beam-shaping techniques. Optimal energy coupling makes it possible to steer the material response toward a desired result, offering great flexibility in process control and, no doubt, the first step in the development of "intelligent" processes.
170

Anwendung von Line-Search-Strategien zur Formoptimierung und Parameteridentifikation

Clausner, André 05 June 2013 (has links) (PDF)
The continuous development and improvement of technical processes today relies on stochastic and deterministic optimization strategies combined with numerical simulation of these processes. Since FE simulation of forming processes is usually very time-consuming, deterministic methods lend themselves to optimizing such processes, as fewer optimization steps, and hence fewer FE simulations, are required. An important requirement for such optimization methods is global convergence to local minima, since the optimal parameter sets are not always approximately known. The two most important strategies for extending the limited convergence radius of the basic optimization methods (Newton-step-based methods and gradient methods) are the line-search strategy and the trust-region strategy. The fundamentals of the line-search strategy are reviewed and the most important sub-algorithms implemented. This method is then examined with respect to an efficient combination of sub-algorithms and method parameters. Subsequently, the performance of an optimization method with a line-search strategy is compared with that of an also-implemented optimization method with a scaled trust-region strategy. After integrating the implemented methods into the program SPC-Opt, the tests are carried out on the solution of a least-squares problem from material parameter identification and on the shape optimization of a forming tool.
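The line-search strategy studied in the thesis is typically built around a sufficient-decrease (Armijo) backtracking rule; the following is a generic sketch of that building block, not the SPC-Opt implementation:

```python
import numpy as np

def armijo_line_search(f, grad, x, direction, alpha0=1.0, c=1e-4, tau=0.5):
    """Backtracking line search: shrink the step until the Armijo
    sufficient-decrease condition f(x + a*d) <= f(x) + c*a*g'd holds."""
    fx = f(x)
    slope = grad(x) @ direction        # must be negative for a descent direction
    alpha = alpha0
    while f(x + alpha * direction) > fx + c * alpha * slope:
        alpha *= tau
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Steepest descent globalized by the Armijo line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - armijo_line_search(f, grad, x, -g) * g
    return x

x_star = gradient_descent(lambda x: float(np.sum((x - 1.0) ** 2)),
                          lambda x: 2.0 * (x - 1.0), [4.0, -3.0])
```

The same backtracking rule extends the convergence radius of Newton-step-based methods: one simply passes the Newton direction instead of the negative gradient, falling back to shorter steps whenever the full step does not decrease the objective sufficiently.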
