51

Machine Learning and Optimization Algorithms for Intra- and Intermolecular Interaction Prediction

Roche, Rahmatullah 30 July 2024 (has links)
Computational prediction of intra- and intermolecular interactions, specifically intra-protein residue-residue interactions and the interaction sites between proteins and other macromolecules, is critical for understanding numerous biological processes. The existing methods fall short in estimating the quality of intra-protein interactions. Moreover, the methods for predicting intermolecular interactions fail to harness some of the latest technological advancements, such as pretrained protein and RNA language models, and struggle to effectively integrate predicted structural information, thus limiting their predictive modeling accuracy. Hence, my objectives include (1) the development of computational methods for protein structure modeling through the estimation of intra-protein interactions, (2) the development of computational methods for predicting protein-protein interaction sites leveraging the latest deep learning architectures and predicted structural information, and (3) extending the scope beyond protein-protein interactions to develop novel computational methods to predict protein-nucleic acid interactions informed by protein and RNA language models. The major benefits of achieving these objectives for the broader scientific community are the following: (1) intra-protein interaction estimation methods have the potential to enhance the accuracy of protein structure modeling, and (2) the methods for predicting protein-protein and protein-nucleic acid interactions will deepen our understanding of biomolecular interactions in the cell, even when experimentally determined molecular structures are not available. / Doctor of Philosophy / My research focuses on developing accurate computational predictive modeling methods centered on biomolecular interactions. Proteins, among the most important biomolecules, fold into stable three-dimensional forms to perform specific tasks in the cell. 
Recognizing the importance of this information in the absence of ground-truth three-dimensional structures, I developed computational methods to predict the folded three-dimensional structures of proteins, utilizing intra-protein atomic interactions, and to estimate the quality of those interactions. Since proteins not only fold but also interact with other proteins and biomolecules such as nucleic acids, which is crucial for many biological processes, I expanded my research from intra-protein interactions to predicting interactions between proteins and other molecules. In particular, using advanced computational techniques, I developed methods for predicting protein-protein and protein-nucleic acid interactions. The research outcomes not only outperform existing state-of-the-art computational methods by overcoming their limitations but also have potential applications in designing effective therapies and combating diseases, ultimately benefiting the health sector through their large-scale predictive capability. All the scientific tools resulting from the research are publicly available, facilitating knowledge sharing and collaboration within and beyond the scientific community.
52

Fertigungsrestriktionsmodell zur Unterstützung des algorithmisierten PEP fertigungsgerechter Blechprodukte

Albrecht, Katharina, Weber Martins, Thiago, Anderl, Reiner 10 December 2016 (has links) (PDF)
The development of branched sheet metal structures, which are produced using the innovative linear flow splitting and linear bend splitting processes, requires an extension of the product development approach in use. For this purpose, the algorithm-based product development process was introduced in Collaborative Research Centre 666. In contrast to established product development processes, mathematical optimization algorithms with boundary conditions derived from the formalized requirements, such as installation-space restrictions or loads, are used to develop optimized branched sheet metal structures. Since not all manufacturing information can be taken into account in the mathematical optimization, the optimized branched sheet metal structures must subsequently be optimized for manufacturability. To this end, this contribution first introduces a manufacturing restriction model. Based on the insights gained from the model, an approach for the semi-automated, manufacturing-oriented optimization of branched sheet metal structures is then introduced and validated by means of an example.
53

Role pokročilých oceňovacích metod opcí empirické testy na neuronových sítích / The Role of Advanced Option Pricing Techniques Empirical Tests on Neural Networks

Brejcha, Jiří January 2011 (has links)
This thesis is concerned with a comparison of two advanced option-pricing techniques applied to European-style DAX index options. Specifically, the study examines the performance of both the stochastic volatility model based on asymmetric nonlinear GARCH, which was proposed by Heston and Nandi (2000), and the artificial neural network, where the conventional Black-Scholes-Merton model serves as a benchmark. These option-pricing models are tested on a dataset covering the period 3rd July 2006 - 30th October 2009 as well as on its two subsets labelled as "before crisis" and "in crisis" data, where the breakthrough day is the 17th March 2008. Finding the most appropriate option-pricing method for the whole period as well as for both the "before crisis" and the "in crisis" datasets is the main focus of this work. The first two chapters introduce core issues involved in option pricing, while the subsequent third section provides a theoretical background for all of the above-mentioned pricing methods. At the same time, the reader is provided with an overview of the theoretical frameworks of various nonlinear optimization techniques, i.e. gradient descent, the quasi-Newton method, backpropagation and the Levenberg-Marquardt algorithm. The empirical part of the thesis then shows that none of the...
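The Black-Scholes-Merton model that serves as the benchmark in this abstract has a well-known closed form. As an illustrative aside (a minimal sketch, not the thesis's code), the European call price can be computed as:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call.

    S: spot, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money example: S = K = 100, one year, r = 5%, sigma = 20%.
price = bsm_call(100.0, 100.0, 1.0, 0.05, 0.2)   # ≈ 10.45
```

The GARCH and neural-network models studied in the thesis would replace this closed form with a fitted conditional-volatility process or a learned pricing function, but the same benchmark comparison applies.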
54

Contribution à l'identification des paramètres rhéologiques des suspensions cimentaires à l'état frais / Contribution to the identification of rheological parameters of fresh cementitious suspensions

Anglade, Célimène 03 February 2017 (has links)
Le travail de thèse s'inscrit dans la modélisation numérique de l'écoulement des matériaux cimentaires à l'état frais couplée à un outil d'identification des paramètres. Il traite en particulier l'étape de mise en place de l'identification par analyse inverse. D'abord, une analyse de la littérature fait ressortir l'existence d'outils rhéométriques dédiés aux suspensions cimentaires ; le passage des grandeurs macroscopiques à celles locales est faite, soit par le biais de l'utilisation de géométries conventionnelles, soit au moyen de méthodes de calibration. Néanmoins, ces outils ne permettent pas de trouver une signature rhéologique unique pour une même suspension. De plus, les stratégies d'identification des paramètres relatifs aux matériaux cimentaires frais sont peu nombreuses et limitées aux données locales. Ensuite, une stratégie qui consiste à identifier les paramètres d'une loi supposée, directement à partir des mesures macroscopiques simulées (couples, vitesses de rotation imposées au mobile de cisaillement) a été développée et validée en 2D, en discutant notamment de l'efficacité des algorithmes d'optimisation testés (méthode du simplexe et algorithmes génétiques), en fonction du degré de connaissances que l'utilisateur a du matériau. Enfin, la méthode a été appliquée en 3D sur des fluides modèles supposés homogènes. Elle apparaît efficace en fluide pseudo-plastique, notamment par combinaison des algorithmes d'optimisation. Mais il reste des obstacles à lever en fluide visco-plastique, vraisemblablement liés aux outils expérimentaux plutôt qu'à la procédure d'identification elle-même. / The thesis work is part of the numerical modeling of the flow of cementitious materials in the fresh state coupled with an identification procedure of the parameters. It deals in particular with the step of the development of the identification by inverse analysis. 
First, the literature review reveals the existence of rheometric tools dedicated to cementitious suspensions; the passage from macroscopic quantities to local ones is made either by the use of conventional geometries or by means of calibration methods. Nevertheless, these tools do not make it possible to find the expected single rheological signature for a given suspension. In addition, there are few studies reporting strategies for identifying constitutive parameters in the case of fresh cement-based materials, and they are limited to local data. Then, a strategy consisting in identifying the parameters of an assumed law, directly on the basis of the simulated macroscopic measurements (torques, rotational speeds imposed on the shearing tool), was developed and validated in 2D, discussing in particular the efficiency of the optimization algorithms tested (simplex method and genetic algorithms), according to the degree of knowledge that the user has of the material. Finally, the method has been applied in 3D to model fluids assumed to be homogeneous. The method appears effective in the case of pseudo-plastic fluids, in particular by combining both optimization algorithms used. But there remain obstacles to overcome in the case of visco-plastic fluids, probably related to the experimental tools rather than to the identification procedure itself.
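The identification-by-inverse-analysis strategy described here (fit constitutive parameters directly to simulated macroscopic torque/speed measurements using a simplex search) can be sketched on a hypothetical power-law torque model; the forward model below is a stand-in for the thesis's flow simulation, not its actual code:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: torque T = K * omega**n (power-law fluid),
# standing in for the full numerical flow simulation.
def torque_model(params, omega):
    K, n = params
    return K * omega**n

omega = np.linspace(0.5, 10.0, 20)           # imposed rotation speeds
true_params = (3.0, 0.6)                     # "unknown" consistency and index
measured = torque_model(true_params, omega)  # simulated macroscopic torques

def residual(params):
    # Least-squares mismatch between model and "measured" torques.
    return np.sum((torque_model(params, omega) - measured) ** 2)

# Simplex (Nelder-Mead) search, one of the two algorithms compared in the thesis.
result = minimize(residual, x0=[1.0, 1.0], method="Nelder-Mead")
K_hat, n_hat = result.x
```

A genetic algorithm would be swapped in at the `minimize` call when the user has little prior knowledge of the material, which is exactly the trade-off the abstract discusses.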
55

Techniques for construction of phylogenetic trees / Técnicas para construção de árvores filogenéticas

Gerardo Valdísio Rodrigues Viana 27 April 2007 (has links)
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico / Phylogenetic tree structures express similarities, ancestry, and relationships between species or groups of species, and are also known as evolutionary trees or phylogenies. Phylogenetic trees have leaves that represent species (taxons), and internal nodes that correspond to hypothetical ancestors of the species. In this thesis we first present the elements necessary to the comprehension of phylogenetic tree systematics; then efficient algorithms to build them are described. Molecular biology concepts, life evolution, and biological classification are important to the understanding of phylogenies. Phylogenetic information may provide important knowledge to biological research work, such as organ transplantation from animals, and drug toxicology tests performed in other species as a prediction of their effect in human beings. To solve a phylogeny problem implies that a phylogenetic tree must be built from known data about a group of species, according to an optimization criterion. The approach to this problem involves two main steps: the first refers to the discovery of perfect phylogenies; in the second step, information extracted from perfect phylogenies is used to infer more general ones. The techniques that are used in the second step take advantage of evolutionary hypotheses. The problem becomes NP-hard for a number of interesting hypotheses, which justifies the use of inference methods based on heuristics, metaheuristics, and approximation algorithms. The description of an innovative technique based on local search with multiple starts over a diversified neighborhood summarizes our contribution to solve the problem. Moreover, we used parallel programming in order to speed up the intensification stage of the search for the optimal solution. 
More precisely, we developed an efficient algorithm to obtain approximate solutions for a phylogeny problem which infers an optimal phylogenetic tree from character matrices of various species. The designed data structures and the binary data manipulation in some routines accelerate simulation and illustration of the experimental tests. Well-known instances have been used to compare the proposed algorithm's results with those previously published. We hope that this work may raise researchers' interest in the topic and contribute to the Bioinformatics area. / Árvores filogenéticas são estruturas que expressam a similaridade, ancestralidade e relacionamentos entre as espécies ou grupos de espécies. Conhecidas como árvores evolucionárias ou simplesmente filogenias, as árvores filogenéticas possuem folhas que representam as espécies (táxons) e nós internos que correspondem aos seus ancestrais hipotéticos. Neste trabalho, além das informações necessárias para o entendimento de toda a sistemática filogenética, são apresentadas técnicas algorítmicas para construção destas árvores. Os conceitos básicos de biologia molecular, evolução da vida e classificação biológica, aqui descritos, permitem compreender o que é uma Filogenia e qual sua importância para a Biologia. As informações filogenéticas fornecem, por exemplo, subsídios importantes para decisões relativas aos transplantes de órgãos ou tecidos de outras espécies para o homem e para que testes de reação imunológica ou de toxicidade sejam feitos antes em outros sistemas biológicos similares ao ser humano. Resolver um Problema de Filogenia corresponde à construção de uma árvore filogenética a partir de dados conhecidos sobre as espécies em estudo, obedecendo a algum critério de otimização. 
A abordagem dada a esse problema envolve duas etapas, a primeira, referente aos casos em que as filogenias são perfeitas, cujos procedimentos desenvolvidos serão utilizados na segunda etapa, quando deve ser criada uma técnica de inferência para a filogenia num caso geral. Essas técnicas consideram de forma peculiar as hipóteses sobre o processo de evolução. Para muitas hipóteses de interesse o problema se torna NP-Difícil, justificando-se o uso de métodos de inferência através de heurísticas, meta-heurísticas e algoritmos aproximativos. Nossa contribuição neste trabalho consiste em apresentar uma técnica de resolução desse problema baseada em buscas locais com partidas múltiplas em vizinhanças diversificadas. Foi utilizada a programação paralela para minimizar o tempo de execução no processo de intensificação da busca pela solução ótima do problema. Desta forma, desenvolvemos um algoritmo para obter soluções aproximadas para um Problema da Filogenia, no caso, para inferir, a partir de matrizes de características de várias espécies, uma árvore filogenética que mais se aproxima da história de sua evolução. Uma estrutura de dados escolhida adequadamente aliada à manipulação de dados em binário em algumas rotinas facilitaram a simulação e ilustração dos testes realizados. Instâncias com resultados conhecidos na literatura foram utilizadas para comprovar a performance do algoritmo. Esperamos com este trabalho despertar o interesse dos pesquisadores da área de Computação, consolidando, assim, o crescimento da Bioinformática.
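The multi-start local search strategy described in this abstract can be sketched generically. The bit-string objective below is a hypothetical stand-in for a parsimony score over character data, not the thesis's algorithm; the structure (random restarts + a flip neighborhood) is what carries over:

```python
import random

def hill_climb(objective, sol):
    # Descend over the one-bit-flip neighborhood until no flip improves.
    cost = objective(sol)
    improved = True
    while improved:
        improved = False
        for i in range(len(sol)):
            sol[i] ^= 1
            c = objective(sol)
            if c < cost:
                cost, improved = c, True
            else:
                sol[i] ^= 1  # undo non-improving flip
    return sol, cost

def multistart(objective, n_bits, n_starts=20, seed=1):
    # Multiple random starts diversify the regions of the search space visited.
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_starts):
        start = [rng.randint(0, 1) for _ in range(n_bits)]
        sol, cost = hill_climb(objective, start)
        if cost < best_cost:
            best, best_cost = sol, cost
    return best, best_cost

# Stand-in objective: disagreement with a hidden character pattern,
# playing the role of a parsimony score.
target = [1, 0, 1, 1, 0, 0, 1, 0]
score = lambda s: sum(a != b for a, b in zip(s, target))
best, cost = multistart(score, len(target))
```

In the thesis the independent restarts are what the parallel programming accelerates, since each start can run on its own processor.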
57

Medical Imaging Centers in Central Indiana: Optimal Location Allocation Analyses

Seger, Mandi J. 01 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / While optimization techniques have been studied since 300 B.C., when Euclid first considered the minimal distance between a point and a line, it wasn't until 1966 that location optimization was first applied to a problem in healthcare. Location optimization techniques are capable of increasing efficiency and equity in the placement of many types of services, including those within the healthcare industry, thus enhancing quality of life. Medical imaging is a healthcare service which helps to determine medical diagnoses in acute and preventive care settings. It provides physicians with information guiding treatment and returning a patient back to optimal health. In this study, a retrospective analysis of the locations of current medical imaging centers in central Indiana is performed, and alternate placement as determined using optimization techniques is considered and compared. This study focuses on reducing the drive time experienced by the population within the study area to their nearest imaging facility. Location optimization models such as the P-Median model, the Maximum Covering model, and Clustering and Partitioning are often used in the field of operations research to solve location problems, but are less well known within the discipline of Geographic Information Science. This study was intended to demonstrate the capabilities of these powerful algorithms and to increase understanding of how they may be applied to problems within healthcare. While the P-Median model is effective at reducing the overall drive time for a given network set, individuals within the network may experience lengthy drive times. The results further indicate that while the Maximum Covering model is more equitable than the P-Median model, it produces large sets of assigned individuals that overwhelm the capacity of a single imaging center. 
Finally, the Clustering and Partitioning method is effective at limiting the number of individuals assigned to a given imaging center, but it does not provide information regarding average drive time for those individuals. In the end, it is determined that a capacitated Maximal Covering model would be the preferred method for solving this particular location problem.
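The P-Median model compared in this thesis minimizes total travel cost from demand points to their nearest open facility. A brute-force sketch on a made-up toy instance (illustrative only; real studies like this one use heuristic or GIS solvers, since enumeration is feasible only for tiny networks):

```python
from itertools import combinations

def p_median(dist, p):
    """Exhaustive P-Median: choose p facility sites minimizing the total
    distance from every demand point to its nearest open facility."""
    n = len(dist)
    best_sites, best_total = None, float("inf")
    for sites in combinations(range(n), p):
        total = sum(min(dist[i][j] for j in sites) for i in range(n))
        if total < best_total:
            best_sites, best_total = sites, total
    return best_sites, best_total

# Hypothetical symmetric drive-time matrix (minutes) between five towns,
# each town being both a demand point and a candidate imaging-center site.
dist = [
    [0, 7, 9, 14, 20],
    [7, 0, 4, 10, 16],
    [9, 4, 0, 6, 12],
    [14, 10, 6, 0, 5],
    [20, 16, 12, 5, 0],
]
sites, total = p_median(dist, p=2)
```

The Maximum Covering variant discussed in the abstract replaces the summed-distance objective with "maximize the population within a fixed drive-time radius", and a capacitated version adds per-site assignment limits.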
58

First-Order Algorithms for Communication Efficient Distributed Learning

Khirirat, Sarit January 2019 (has links)
Technological developments in devices and storage have made large volumes of data collections more accessible than ever. This transformation leads to optimization problems with massive data in both volume and dimension. In response to this trend, the popularity of optimization on high performance computing architectures has increased unprecedentedly. These scalable optimization solvers can achieve high efficiency by splitting computational loads among multiple machines. However, these methods also incur large communication overhead. To solve optimization problems with millions of parameters, communication between machines has been reported to consume up to 80% of the training time. To alleviate this communication bottleneck, many optimization algorithms with data compression techniques have been studied. In practice, they have been reported to significantly reduce communication costs while exhibiting convergence almost comparable to that of the full-precision algorithms. To put this intuition on firm ground, we develop theory and techniques in this thesis to design communication-efficient optimization algorithms. In the first part, we analyze the convergence of optimization algorithms with direct compression. First, we outline definitions of compression techniques which cover many compressors of practical interest. Then, we provide a unified analysis framework for optimization algorithms with compressors, which can be either deterministic or randomized. In particular, we show how the tuning parameters of compressed optimization algorithms must be chosen to guarantee performance. Our results show explicit dependency on compression accuracy and on delay effects due to the asynchrony of algorithms. This allows us to characterize the trade-off between iteration and communication complexity under gradient compression. In the second part, we study how error compensation schemes can improve the performance of compressed optimization algorithms. 
Even though convergence guarantees of optimization algorithms with error compensation have been established, there is very limited theoretical support which guarantees improved solution accuracy. We therefore develop theoretical explanations, which show that error compensation guarantees arbitrarily high solution accuracy from compressed information. In particular, error compensation helps remove accumulated compression errors, thus improving solution accuracy especially for ill-conditioned problems. We also provide strong convergence analysis of error compensation on parallel stochastic gradient descent across multiple machines. In particular, the error-compensated algorithms, unlike direct compression, result in significant reduction in the compression error. Applications of the algorithms in this thesis to real-world problems with benchmark data sets validate our theoretical results. / Utvecklandet av kommunikationsteknologi och datalagring har gjort stora mängder av datasamlingar mer tillgängliga än någonsin. Denna förändring leder till numeriska optimeringsproblem med datamängder med stor skala i volym och dimension. Som svar på denna trend har populariteten för högpresterande beräkningsarkitekturer ökat mer än någonsin tidigare. Skalbara optimeringsverktyg kan uppnå hög effektivitet genom att fördela beräkningsbördan mellan ett flertal maskiner. De kommer dock i praktiken med ett pris som utgörs av betydande kommunikationsomkostnader. Detta orsakar ett skifte i flaskhalsen för prestandan från beräkningar till kommunikation. När lösning av verkliga optimeringsproblem sker med ett stort antal parametrar, dominerar kommunikationen mellan maskiner nästan 80% av träningstiden. För att minska kommunikationsbelastningen, har ett flertal kompressionstekniker föreslagits i litteraturen. 
Även om optimeringsalgoritmer som använder dessa kompressorer rapporteras vara lika konkurrenskraftiga som sina motsvarigheter med full precision, dras de med en förlust av noggrannhet. För att ge en uppfattning om detta, utvecklar vi i denna avhandling teori och tekniker för att designa kommunikations-effektiva optimeringsalgoritmer som endast använder information med låg precision. I den första delen analyserar vi konvergensen hos optimeringsalgoritmer med direkt kompression. Först ger vi en översikt av kompressionstekniker som täcker in många kompressorer av praktiskt intresse. Sedan presenterar vi ett enhetligt analysramverk för optimeringsalgoritmer med kompressorer, som kan vara antingen deterministiska eller randomiserade. I synnerhet visas val av parametrar i komprimerade optimeringsalgoritmer som avgörs av kompressorns parametrar som garanterar konvergens. Våra konvergensgarantier visar beroende av kompressorns noggrannhet och fördröjningseffekter på grund av asynkronicitet hos algoritmer. Detta låter oss karakterisera avvägningen mellan iterations- och kommunikations-komplexitet när kompression används. I den andra delen studerar vi hög prestanda hos felkompenseringsmetoder för komprimerade optimeringsalgoritmer. Även om konvergensgarantier med felkompensering har etablerats finns det väldigt begränsat teoretiskt stöd för konkurrenskraftiga konvergensgarantier med felkompensering. Vi utvecklar därför teoretiska förklaringar, som visar att användande av felkompensering garanterar godtyckligt hög lösningsnoggrannhet från komprimerad information. I synnerhet bidrar felkompensering till att ta bort ackumulerade kompressionsfel och förbättrar därmed lösningsnoggrannheten speciellt för illa konditionerade kvadratiska optimeringsproblem. Vi presenterar också stark konvergensanalys för felkompensering tillämpat på stokastiska gradientmetoder med ett kommunikationsnätverk innehållande ett flertal maskiner. 
De felkompenserade algoritmerna resulterar, i motsats till direkt kompression, i betydande reducering av kompressionsfelet. Simuleringar av algoritmer i denna avhandling på verkliga problem med referensdatamängder validerar våra teoretiska resultat.
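The error-compensation scheme analyzed in this thesis is commonly sketched as error-feedback SGD: each worker compresses the gradient plus the error carried over from previous rounds, transmits the compressed result, and remembers what was dropped. A toy single-worker sketch with top-k sparsification on a quadratic (illustrative, not the thesis's algorithm):

```python
import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd_step(x, grad, memory, lr=0.1, k=1):
    """One error-feedback step: compress (gradient + carried error),
    apply the sparse update, and remember what was dropped."""
    corrected = grad + memory
    compressed = top_k(corrected, k)
    new_memory = corrected - compressed   # accumulated compression error
    return x - lr * compressed, new_memory

# Quadratic toy problem f(x) = 0.5 * ||x||^2, so grad(x) = x.
x = np.array([1.0, -2.0, 3.0])
memory = np.zeros_like(x)
for _ in range(300):
    x, memory = ef_sgd_step(x, x.copy(), memory, lr=0.1, k=1)
```

Even though each round transmits only one coordinate, the memory term eventually feeds every dropped component back in, so the iterate still converges; dropping the memory (direct compression) is what the thesis shows to be strictly worse.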
59

Geometrical and kinematic optimization of closed-loop multibody systems / Optimisation géométrique et cinématique de systèmes multicorps avec boucles cinématiques

Collard, Jean-François 16 November 2007 (has links)
In order to improve the design of mechanical or mechatronic systems, mathematical optimization techniques have become an efficient and attractive tool with the increasing development of computer resources. However, the application of such optimization methods to multibody systems (MBS) remains a challenge when the MBS analysis requires the solving of assembly constraints. Hence, this PhD research focuses on the optimization of such closed-loop MBS, particularly when the objective function is of geometrical or kinematic nature. For kinematic optimization of MBS, we propose two penalty strategies to deal with assembly constraints during optimization. Both strategies are compared and illustrated via applications such as the isotropy maximization of parallel manipulators: the 3-dof Delta robot and the 6-dof Hunt platform. Following the same strategies, geometrical optimization of MBS is then investigated. However, due to a higher complexity, we propose to relax the problem, combining two modeling approaches: rigid-body and extensible-link formulations. This leads to a two-step strategy which is then successfully applied to synthesize mechanisms for path-following or function-generation problems. Through these applications, the existence of multiple local optima is highlighted. Therefore, instead of focusing on the unique global optimum solution, we have developed original methods to search for and propose several local solutions to the design problem. This approach is called morphological optimization. This enables the designer to finally choose the best solution among several local optima using additional design criteria. 
Such morphological optimization techniques open the door to the topology optimization of MBS, which remains a challenging problem for future research. / Afin d'améliorer la conception de systèmes mécaniques ou mécatroniques, les techniques d'optimisation mathématique sont devenues un outil efficace et attrayant étant donné le développement croissant des ressources informatiques. Cependant, l'application de telles méthodes d'optimisation sur les systèmes multicorps demeure un défi quand l'analyse du système nécessite la résolution de contraintes d'assemblage. C'est pourquoi cette recherche doctorale se concentre sur l'optimisation de tels systèmes multicorps, particulièrement lorsque la fonction objectif est de nature géométrique ou cinématique. Pour l'optimisation cinématique des systèmes multicorps, nous proposons deux stratégies de pénalité pour traiter les contraintes d'assemblage en cours d'optimisation. Ces deux stratégies sont comparées et illustrées par des applications telles que la maximisation d'isotropie de manipulateurs parallèles. Suivant les mêmes stratégies, l'optimisation géométrique des systèmes multicorps est alors étudiée. Cependant, vu la plus grande complexité, nous proposons de relaxer le problème en combinant deux approches de modélisation : une formulation en termes de corps rigides et une autre en termes de liens extensibles. Ceci nous mène à une stratégie en deux étapes qui est alors appliquée avec succès pour la synthèse de mécanismes. A travers ces applications, on a mis en évidence l'existence d'optimums locaux multiples. Dès lors, plutôt que de se focaliser sur l'unique optimum global, nous avons développé des méthodes originales afin de rechercher et proposer plusieurs solutions locales pour le problème de conception. Cette approche est baptisée "optimisation morphologique". Elle permet au concepteur de choisir finalement la meilleure solution parmi plusieurs optimums locaux en utilisant des critères supplémentaires de conception. 
De telles techniques d'optimisation morphologique ouvrent les portes pour l'optimisation topologique des systèmes multicorps qui demeure un challenge motivant pour des recherches futures.
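The penalty strategies for assembly constraints described in this abstract can be illustrated with a generic quadratic-penalty sketch. The circle constraint below is a hypothetical stand-in for a closed-loop assembly condition c(x) = 0, not the thesis's MBS formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: pull the design point toward (2, 0).
def objective(x):
    return (x[0] - 2.0) ** 2 + x[1] ** 2

# Hypothetical assembly constraint: the point must lie on the unit circle.
def constraint(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0    # closed-loop condition c(x) = 0

def penalized(x, mu):
    # Quadratic penalty: violations of c(x) = 0 are charged mu * c(x)^2.
    return objective(x) + mu * constraint(x) ** 2

x = np.array([0.5, 0.5])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Re-solve with an increasing penalty weight, warm-starting each stage,
    # so the iterates are driven onto the constraint manifold.
    x = minimize(lambda v: penalized(v, mu), x, method="BFGS").x
```

The constrained optimum here is (1, 0); as mu grows the unconstrained minimizer approaches it, which is the basic mechanism behind handling assembly constraints by penalization instead of explicit constraint solving.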
60

Algoritmos de otimização e criticalidade auto-organizada / Optimization algorithms and self-organized criticality

Castro, Paulo Alexandre de 22 April 2002 (has links)
As teorias científicas surgiram da necessidade do homem entender o funcionamento das coisas. Novos métodos e técnicas são então criados com o objetivo não só de melhor compreender, mas também de desenvolver essas próprias teorias. Nesta dissertação, vamos estudar várias dessas técnicas (aqui chamadas de algoritmos) com o objetivo de obter estados fundamentais em sistemas de spin e de revelar suas possíveis propriedades de auto-organização crítica. No segundo capítulo desta dissertação, apresentamos os algoritmos de otimização: simulated annealing, algoritmo genético, otimização extrema (EO) e evolutivo de Bak-Sneppen (BS). No terceiro capítulo apresentamos o conceito de criticalidade auto-organizada (SOC), usando como exemplo o modelo da pilha de areia. Para uma melhor compreensão da importância da criticalidade auto-organizada, apresentamos vários outros exemplos de onde o fenômeno é observado. No quarto capítulo apresentamos o modelo de relógio quiral de p-estados que será nosso sistema de testes. No caso unidimensional, determinamos a matriz de transferência e utilizamos o teorema de Perron-Frobenius para provar a inexistência de transição de fase a temperaturas finitas. Esboçamos os diagramas de fases dos estados fundamentais que obtivemos de maneira analítica e numérica para os casos de p = 2, 3, 4, 5 e 6, no caso numérico fazendo uso do algoritmo de Bak-Sneppen com sorteio (BSS). Apresentamos ainda um breve estudo do número de mínimos locais para o modelo de relógio quiral de p-estados, para os casos de p = 3 e 4. Por último, no quinto capítulo, propomos uma dinâmica Bak-Sneppen com ruído (BSR) como uma nova técnica de otimização para tratar sistemas discretos. O ruído é introduzido diretamente no espaço de configuração de spins. Conseqüentemente, o fitness (adaptabilidade) passa a assumir valores contínuos, num pequeno intervalo em torno do seu valor original (discreto). 
Os resultados dessa dinâmica indicam a presença de criticalidade auto-organizada, evidenciada pelo decaimento em leis de potências das correlações espacial e temporal. Também estudamos o método EO e obtivemos uma confirmação numérica de que sua dinâmica exibe um comportamento não crítico com alcance espacial infinito e decaimento exponencial das avalanches. Finalmente, para o modelo de relógio quiral, comparamos a eficiência das três dinâmicas (EO, BSS e BSR) no que tange às suas habilidades de encontrar o estado fundamental do sistema. / In order to understand how things work, man has formulated scientific theories. New methods and techniques have been created not only to increase our understanding on the subject but also to develop and even expand those theories. In this thesis, we study several techniques (here called algorithms) designed with the objective to get the ground states of some spin systems and eventually to reveal possible properties of critical self-organization. In the second chapter, we introduce four fundamental optimization algorithms: simulated annealing, genetic algorithms, extremal optimization (EO) and Bak-Sneppen (BS). In the third chapter we present the concept of self-organized criticality (SOC), using as an example the sandpile model. To understand the importance of the self-organized criticality, we show many other situations where the phenomenon can be observed. In the fourth chapter, we introduce the p-states chiral clock model. This will be our test or toy system. For the one-dimensional case, we first determined the corresponding transfer-matrix and then proved the nonexistence of phase transitions by using the Perron-Frobenius theorem. We calculate the ground state phase diagrams both analytically and numerically in the cases of p = 2, 3, 4, 5 and 6. We also present a brief study of the number of local minima for the cases p = 3 and 4 of the chiral clock model. 
Finally, in the fifth chapter, we propose a Bak-Sneppen dynamics with noise (BSN) as a new optimization technique to treat discrete systems. The noise is introduced directly into the spin configuration space. Consequently, the fitness now takes values in a continuous but small interval around its original (discrete) value. The results of this dynamics indicate the presence of self-organized criticality, which becomes evident through the power-law scaling of the spatial and temporal correlations. We also study the EO algorithm and found numerical confirmation that it does not show critical behavior, since it has an infinite spatial range and an exponential decay of the avalanches. At the end, we compare the efficiency of the three dynamics (EO, BSD and BSN) for the chiral clock model, concerning their abilities to find the system's ground state.
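The Bak-Sneppen dynamics underlying the variants in this dissertation can be sketched in its standard one-dimensional form (an illustrative minimal version, not the dissertation's code): repeatedly replace the least-fit site and its two neighbours with fresh random fitnesses, and the population self-organizes above a critical threshold.

```python
import random

def bak_sneppen(n=64, steps=20000, seed=42):
    """Minimal Bak-Sneppen dynamics on a ring of n sites."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)   # extremal (least-fit) site
        # Replace the extremal site and its two ring neighbours.
        for j in (i - 1, i, (i + 1) % n):            # i - 1 wraps via Python indexing
            fitness[j] = rng.random()
    return fitness

fitness = bak_sneppen()
```

After a long run, most fitnesses sit above the critical threshold (about 2/3 in the 1D model), and the avalanches of replacements show the power-law statistics that the dissertation uses as its signature of self-organized criticality.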
