31 |
Land Leveling Using Optimal Earthmoving Vehicle RoutingMcInvale, Howard D. 30 April 2002 (has links)
This thesis presents new solution approaches for land leveling using optimal earthmoving vehicle routing. It addresses the Shortest Route Cut and Fill Problem (SRCFP) developed by Henderson, Vaughan, Wakefield and Jacobson [2000]. The SRCFP is a discrete optimization search problem, proven to be NP-hard, that describes the process of reshaping terrain through a series of cuts and fills, as is commonly done when leveling land for homes, parking lots, and the like. The model used to represent this natural system is a variation of the Traveling Salesman Problem, designed to limit the time needed to operate expensive earthmoving vehicles: it finds a vehicle route that minimizes the total time required to travel between cut and fill locations while leveling the site. An optimal route is one requiring the least travel time for an individual earthmoving vehicle.
This research addresses the SRCFP by evaluating minimum function values across an unknown response surface. The result is a cost-estimating strategy that lets construction planners contour terrain as cheaply as possible. Other applications of this research include rapid runway repair and robotic vehicle routing. / Master of Science
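The TSP-style routing formulation can be illustrated with a simple nearest-neighbor baseline over hypothetical site coordinates. This is only a sketch of the routing idea, not the thesis' solution method: the actual SRCFP additionally tracks cut and fill volumes and vehicle load, and an optimal route generally beats this greedy tour.

```python
import math

def nearest_neighbor_route(sites, start=0):
    """Greedy TSP-style tour: always move to the closest unvisited site.

    `sites` is a list of (x, y) coordinates standing in for cut and fill
    locations; the SRCFP proper also models earth volumes and vehicle
    capacity, which this sketch omits.
    """
    unvisited = set(range(len(sites))) - {start}
    route, current, total = [start], start, 0.0
    while unvisited:
        # pick the unvisited site with the smallest travel distance
        nxt = min(unvisited, key=lambda j: math.dist(sites[current], sites[j]))
        total += math.dist(sites[current], sites[nxt])
        unvisited.remove(nxt)
        route.append(nxt)
        current = nxt
    return route, total
```

A heuristic tour like this gives an upper bound on the minimum travel time, which exact or near-optimal SRCFP methods then improve on.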
|
32 |
Simulation and optimization models for scheduling and balancing the public bicycle-sharing systemsKadri, Ahmed Abdelmoumene 11 December 2015 (has links)
Developed countries today face many public transport problems, including traffic congestion, air pollution, rising oil prices, and global warming. In this context, public bike-sharing systems are one of the solutions recently implemented in many large cities around the world. Despite their apparent success, the operation and management of such transportation systems pose crucial operational challenges for operators, while few scientific works are available to support such complex dynamical systems. This thesis therefore addresses scheduling and balancing in public bicycle-sharing systems, the questions most crucial to their operational efficiency and economic viability. Bike-sharing systems are balanced by redistributing bicycles from one station to another, a procedure generally carried out by specific redistribution vehicles. Two hard optimization problems therefore arise: finding the best tour for the redistribution vehicles (scheduling) and determining the numbers of bicycles to be reassigned and of vehicles to be used (balancing of the stations). In this context, the thesis contributes to modelling and optimizing the performance of bicycle-sharing systems so as to ensure coherent scheduling and balancing strategies.
Several optimization methods have been proposed and tested. These methods incorporate different simulation and optimization approaches: Petri nets, genetic algorithms, greedy search algorithms, local search algorithms, arborescent branch-and-bound algorithms, the elaboration of upper and lower bounds, and others. Different variants of the problem have been studied: the static mode, the dynamic mode, and scheduling and balancing with a single vehicle or multiple vehicles. To demonstrate the coherence and suitability of our approaches, the thesis also contains several real-world applications and experiments.
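The greedy approach mentioned above can be illustrated with a single-vehicle sketch. This is a hypothetical illustration, not the thesis' algorithm: the vehicle repeatedly serves the station where it can currently move the most bikes toward that station's target level, ignoring travel distances entirely.

```python
def greedy_rebalance(inventory, target, capacity):
    """Single-vehicle greedy rebalancing sketch (illustrative only).

    `inventory` and `target` map station -> bike count; `capacity` is the
    vehicle's capacity. Each move strictly reduces the total imbalance,
    so the loop terminates.
    """
    load, moves = 0, []
    stations = dict(inventory)

    def movable(s):
        surplus = stations[s] - target[s]
        if surplus > 0:                      # vehicle could pick up bikes here
            return min(surplus, capacity - load)
        return min(-surplus, load)           # vehicle could drop off bikes here

    while True:
        best = max(stations, key=movable)
        k = movable(best)
        if k == 0:                           # nothing more the vehicle can do
            break
        if stations[best] > target[best]:
            stations[best] -= k
            load += k
            moves.append((best, -k))         # k bikes picked up at `best`
        else:
            stations[best] += k
            load -= k
            moves.append((best, +k))         # k bikes dropped off at `best`
    return moves, stations
```

A real scheduler must also decide the visiting order to minimize tour length, which is where the routing optimization studied in the thesis comes in.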
|
33 |
Determinação de caminhos mínimos em aplicações de transporte público: um estudo de caso para a cidade de Porto AlegreBastos, Rodrigo 27 September 2013 (has links)
SIMTUR - Sistema Inteligente de Monitoramento de Tráfego Urbano / The increasing use of automobiles and motorcycles has caused a continuous degradation of traffic in large cities. This scenario is worsened by shortcomings in current public transportation systems, caused in part by the lack of information provided to users. This study presents a computational model for a public transportation user information system.
Unlike other studies based on the classical Dijkstra's algorithm, the presented approach uses the A* algorithm to solve the shortest-path problem arising in this context, reducing response time so that the model can be used in a real-time information system. The proposed model takes into account multiple decision criteria, such as total distance traveled and number of transfers, and was evaluated with real data from Porto Alegre's public transportation. The results were compared with those obtained using Dijkstra's algorithm and indicate that combining A* with acceleration techniques significantly reduces space complexity, processing time, and the number of transfers.
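The A* search at the core of such a model can be sketched as follows. The graph encoding, edge costs, and heuristic below are illustrative stand-ins, not the transit network or heuristics used in the case study; A* only guarantees optimal paths when the heuristic never overestimates the remaining cost.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* shortest path. `graph[u]` maps neighbor -> edge cost, and
    `heuristic(u)` is an admissible lower bound on the cost from `u`
    to `goal` (e.g. straight-line distance over a transit network)."""
    frontier = [(heuristic(start), 0, start, [start])]
    best = {start: 0}                         # best known cost to each node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best.get(node, float('inf')):  # stale queue entry
            continue
        for nbr, w in graph[node].items():
            ng = g + w
            if ng < best.get(nbr, float('inf')):
                best[nbr] = ng
                heapq.heappush(frontier, (ng + heuristic(nbr), ng, nbr, path + [nbr]))
    return None, float('inf')
```

With `heuristic = lambda n: 0`, A* degenerates to Dijkstra's algorithm, which is exactly the comparison baseline the study reports against.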
|
34 |
Quantum Information Processing By NMR : Quantum State Discrimination, Hadamard Spectroscopy, Liouville Space Search, Use Of Geometric Phase For Gates And AlgorithmsGopinath, T 07 1900 (has links)
The progress in NMRQIP can be outlined in four parts: 1) implementation of theoretical protocols on small numbers of qubits; 2) demonstration of QIP on various NMR systems; 3) design and implementation of algorithms for mixed initial states; and 4) development of techniques for coherent and decoherent control on larger numbers (up to 15) of qubits.
This thesis contains efforts in the direction of the first three points.
Quantum-state discrimination has important applications in quantum communication and quantum cryptography. One of the characteristic features of quantum mechanics is that no measurement can distinguish nonorthogonal states perfectly; however, one can distinguish them with a finite probability by an appropriate measurement strategy. In Chapter 2, we describe the implementation of a theoretical protocol for a programmable quantum-state discriminator on a two-qubit NMR system. The projective measurement is simulated by adding two experiments. This device performs unambiguous discrimination of a pair of states of the data qubit that are symmetrically located about a fixed state, and is used to discriminate both linearly polarized and elliptically polarized states. The maximum probability of successful discrimination is achieved by suitably preparing the ancilla qubit.
The last step of any QIP protocol is the readout. In NMR-QIP the readout is done by density-matrix tomography. Ernst and co-workers first proposed that a two-dimensional method can be used to correlate input and output states. This method uses an extra (ancilla) qubit whose transitions indicate the quantum states of the remaining qubits; the 2D spectrum of the ancilla qubit represents the input and output states along the F1 and F2 dimensions, respectively. However, the 2D method requires several t1 increments to achieve the required spectral width and resolution in the indirect dimension, leading to long experimental times. In Chapter 3, the conventional 2D NMRQIP method is sped up by using Hadamard spectroscopy. The Hadamard method is used to implement various two-qubit, three-qubit, and qutrit gates. We also use Hadamard spectroscopy for information storage under spatial encoding and to implement a parallel search algorithm. Various slices of a water sample can be spatially encoded by using a multi-frequency pulse under a field gradient, so that the information of each slice is projected onto frequency space. Each slice represents a classical bit, where excitation and no excitation correspond to the binary values 0 and 1, respectively. Ordinarily, one has to perform a separate experiment for each item of binary information by synthesizing a suitable multi-frequency pulse. In this work we show that, by recording the data obtained with various Hadamard-encoded multi-frequency pulses, one can decode any binary information without performing further experiments.
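The Hadamard encoding/decoding idea can be illustrated numerically: acquisitions are taken under Hadamard-patterned combinations of the slices, and any slice's content is recovered by applying the scaled transform once to the recorded data. This NumPy toy stands in for the NMR experiment; the slice contents and the order-4 matrix are illustrative, not experimental parameters from the thesis.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Toy illustration: 4 "slices" hold binary information `s`; each recorded
# trace is one Hadamard-weighted combination of the slice signals.
s = np.array([1, 0, 1, 1])       # hypothetical slice contents
H = hadamard(4)
recorded = H @ s                 # 4 encoded acquisitions
decoded = (H.T @ recorded) // 4  # H is symmetric with H.T @ H = 4 I
```

Because the Hadamard matrix is orthogonal up to a factor of n, the n encoded acquisitions jointly carry the same information as n separate single-slice experiments, which is the source of the speed-up described above.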
Geometric phases depend only on the geometry of the path executed in the projective Hilbert space and are therefore resilient to certain types of errors, raising the possibility of intrinsically fault-tolerant quantum computation. In liquid-state NMRQIP, controlled phase-shift gates are achieved by using qubit-selective pulses and J evolutions, and also by using geometric phases. To reach larger numbers of qubits in NMR, one exploits dipolar couplings, which are larger in magnitude and yield strongly coupled spectra. In such systems it is difficult to apply qubit-selective pulses, since the Hamiltonian contains strong coupling terms. Nevertheless, such systems have been used for NMRQIP by treating the 2^n eigenstates as the basis states of an n-qubit system. In Chapter 4, it is shown that non-adiabatic geometric phases can be used to implement controlled phase-shift gates in strongly dipolar coupled systems. A detailed theoretical explanation of non-adiabatic geometric phases in NMR is given using single-transition operators. Using such controlled phase-shift gates, the implementation of the Deutsch-Jozsa and parity algorithms is demonstrated.
Search algorithms play an important role in the field of information processing. Grover's quantum search algorithm achieves a polynomial speed-up over classical search. Bruschweiler proposed a Liouville-space search algorithm that also achieves a polynomial speed-up, but it requires a weakly coupled system with a mixed initial state. In Chapter 5 we modify Bruschweiler's algorithm so that it can be implemented on weakly as well as strongly coupled systems. The experiments are performed on a strongly dipolar coupled four-qubit system, formed by four spin-1/2 nuclei of a molecule oriented in a liquid-crystal matrix.
Chapter 6 describes the implementation of controlled phase-shift gates on a quadrupolar spin-7/2 nucleus using non-adiabatic geometric phases. The eight energy levels of the spin-7/2 nucleus form a three-qubit system. A general procedure is given for implementing a controlled phase-shift gate on a system consisting of any number of energy levels. Finally, Collins' version of the three-qubit DJ algorithm is implemented in the spin-7/2 system using multi-frequency pulses.
|
35 |
Research Ontology Data Models for Data and Metadata Exchange RepositoryKamenieva, Iryna January 2009 (has links)
Research in data mining and machine learning requires a variety of input data sets, and researchers now maintain databases of such sets. Examples of such systems are the UCI Machine Learning Repository, the Data Envelopment Analysis Dataset Repository, the XMLData Repository, and the Frequent Itemset Mining Dataset Repository. Alongside these statistical repositories, a whole range of stores, from simple file stores to specialized repositories, can be used by researchers when solving applied tasks and studying their own algorithms and scientific problems. At first sight, the user's only difficulty would seem to be finding and understanding the structure of such scattered information stores. A detailed study of these repositories, however, reveals deeper problems in data usage: a complete mismatch and rigidity of data file structures with respect to SDMX (Statistical Data and Metadata Exchange), the standard and structure used by many European organizations; the impossibility of adapting data in advance to a concrete applied task; and the lack of a history of data usage for particular scientific and applied tasks.
There are now many data mining (DM) methods, as well as large quantities of data stored in various repositories. The repositories, however, contain no DM methods, and the methods are not linked to application areas. An essential problem is linking the subject (problem) domain, the DM methods, and the datasets appropriate to each method. This work therefore considers the problem of building ontological models of DM methods, describing the interaction between methods and the repository data corresponding to them, and providing intelligent agents that allow the statistical repository user to choose the method and data appropriate to the task being solved.
In this work a system structure is proposed, and an intelligent search agent over the ontological model of DM methods, which takes the user's personal inquiries into account, is realized. For the implementation of an intelligent data and metadata exchange repository, an agent-oriented approach has been selected. The model uses a service-oriented architecture. The implementation employs the cross-platform programming language Java, the multi-agent platform Jadex, the database server Oracle Spatial 10g, and the ontology development environment Protégé version 3.4.
|
36 |
Otimiza??o de superf?cies seletivas de frequ?ncia com elementos pr?-fractais utilizando rede neural MLP e algoritmos de busca populacionalSilva, Marcelo Ribeiro da 27 January 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This thesis describes design methodologies for frequency selective surfaces (FSSs) composed of periodic arrays of pre-fractal metallic patches on single-layer dielectrics (FR4, RT/duroid). The shapes of the Sierpinski island and T fractal geometries are exploited for the simple design of efficient band-stop spatial filters with applications in the microwave range. Initial results are discussed in terms of the electromagnetic effects of varying parameters such as the fractal iteration number (or fractal level), the fractal iteration factor, and the periodicity of the FSS, depending on the pre-fractal element used (Sierpinski island or T fractal). The transmission properties of the proposed periodic arrays are investigated through simulations performed with the commercial full-wave solvers Ansoft Designer™ and Ansoft HFSS™. To validate the methodology, FSS prototypes are selected for fabrication and measurement. The results point to interesting features of FSS spatial filters: compactness, with high frequency compression factors, as well as stable frequency responses under oblique incidence of plane waves. As its main focus, this thesis also applies an alternative electromagnetic (EM) optimization technique to the analysis and synthesis of FSSs with fractal motifs. In application examples of this technique, Vicsek and Sierpinski pre-fractal elements are used in the optimal design of FSS structures. Based on computational intelligence tools, the proposed technique overcomes the high computational cost associated with full-wave parametric analyses. To this end, fast and accurate multilayer perceptron (MLP) neural network models are developed using different parameters as design input variables. These neural network models calculate the cost function in the iterations of population-based search algorithms. The continuous genetic algorithm (GA), particle swarm optimization (PSO), and the bees algorithm (BA) are used to optimize FSSs for specific resonant frequencies and bandwidths. The performance of these algorithms is compared in terms of computational cost and numerical convergence. Consistent results are verified by the excellent agreement between simulations and measurements for FSS prototypes built with a given fractal iteration.
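The surrogate-assisted optimization loop described above can be sketched generically. In this sketch a cheap analytic function stands in for the trained MLP surrogate, and the continuous GA uses truncation selection, blend crossover, and Gaussian mutation; all names, targets, and parameter values are illustrative, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_cost(x):
    # Stand-in for the trained MLP surrogate: a cheap analytic function of
    # two hypothetical design variables; the real model would be a network
    # trained on full-wave simulation results.
    target = np.array([2.4, 0.3])   # hypothetical (resonant frequency, bandwidth)
    return np.sum((x - target) ** 2, axis=-1)

def continuous_ga(cost, lo, hi, pop=40, gens=80, mut=0.05):
    """Minimal continuous GA: the surrogate replaces the expensive
    full-wave simulation inside the evaluation step."""
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = cost(P)
        elite = P[np.argsort(f)[: pop // 2]]                      # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        alpha = rng.random((pop, 1))
        P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]   # blend crossover
        P = np.clip(P + rng.normal(0.0, mut, P.shape), lo, hi)    # Gaussian mutation
    f = cost(P)
    return P[np.argmin(f)], f.min()

best, best_cost = continuous_ga(surrogate_cost,
                                lo=np.array([0.5, 0.05]),
                                hi=np.array([6.0, 1.0]))
```

Because each cost evaluation is now a fast model call instead of a full-wave run, population-based methods such as GA, PSO, or BA become affordable, which is the point of the surrogate approach.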
|
38 |
Theoretical and practical aspects of ant colony optimizationBlum, Christian 23 January 2004 (has links)
Combinatorial optimization problems are of high academic as well as practical importance. Many instances of relevant combinatorial optimization problems are, due to their dimensions, intractable for complete methods such as branch and bound. Therefore, approximate algorithms such as metaheuristics have received much attention in the past 20 years. Examples of metaheuristics are simulated annealing, tabu search, and evolutionary computation. One of the most recent metaheuristics is ant colony optimization (ACO), developed by Prof. M. Dorigo (the supervisor of this thesis) and colleagues. This thesis deals with theoretical as well as practical aspects of ant colony optimization.

* A survey of metaheuristics. Chapter 1 gives an extensive overview of today's most important metaheuristics. This overview points out two central concepts in metaheuristics: intensification and diversification.

* The hyper-cube framework. Chapter 2 introduces a new framework for implementing ACO algorithms, with two main benefits. First, from the point of view of the theoretician: we prove that Ant System (the first ACO algorithm proposed in the literature), implemented in the hyper-cube framework, generates solutions whose expected quality increases monotonically with the number of algorithm iterations when applied to unconstrained problems. Second, from the point of view of the experimental researcher: we show through examples that implementing ACO algorithms in the hyper-cube framework increases their robustness and makes handling the pheromone values easier.

* Deception. In the first part of Chapter 3 we formally define the notions of first- and second-order deception in ant colony optimization. First-order deception corresponds to deception as defined in the field of evolutionary computation and is therefore a bias introduced by the problem (instance) to be solved. Second-order deception is an ACO-specific phenomenon: it describes the observation that the quality of the solutions generated by ACO algorithms may decrease over time in certain settings. In the second part of Chapter 3 we propose different ways of avoiding second-order deception.

* ACO for the KCT problem. In Chapter 4 we outline an ACO algorithm for the edge-weighted k-cardinality tree (KCT) problem. This algorithm is implemented in the hyper-cube framework and uses a pheromone model determined to work well in Chapter 3. Together with the evolutionary computation and tabu search approaches also developed in Chapter 4, this ACO algorithm belongs to the current state-of-the-art algorithms for the KCT problem.

* ACO for the GSS problem. Chapter 5 describes a new ACO algorithm for the group shop scheduling (GSS) problem, a general shop scheduling problem that includes, among others, the well-known job shop scheduling (JSS) and open shop scheduling (OSS) problems. This algorithm, implemented in the hyper-cube framework with a new pheromone model experimentally tested in Chapter 3, is currently the best ACO algorithm for both the JSS and the OSS problem. In particular, when applied to OSS problem instances it obtains excellent results, improving the best known solution for several OSS benchmark instances.

A final contribution of this thesis is the development of a general method for the solution of combinatorial optimization problems which we refer to as Beam-ACO, a hybrid between ACO and a tree search technique known as beam search. We show that Beam-ACO is currently a state-of-the-art method for the existing open shop scheduling (OSS) problem instances. / Doctorat en sciences appliquées
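The hyper-cube pheromone update described above can be sketched as follows. This is an illustrative reading, with made-up variable names rather than the thesis' notation: assuming binary solution vectors and positive quality scores, each new pheromone value is a convex combination of the old value and a quality-weighted average of the solutions, so it provably stays in [0, 1].

```python
def hypercube_update(pheromone, solutions, quality, rho=0.1):
    """Hyper-cube-style Ant System update (sketch).

    `pheromone[j]` is the pheromone on solution component j,
    `solutions` are binary vectors (component j used or not), and
    `quality[i] > 0` scores solution i. Because the contribution is a
    weighted average in [0, 1] and rho is in (0, 1], the updated values
    remain inside the unit hyper-cube.
    """
    total = sum(quality)
    for j in range(len(pheromone)):
        contrib = sum(q * s[j] for q, s in zip(quality, solutions)) / total
        pheromone[j] = (1 - rho) * pheromone[j] + rho * contrib
    return pheromone
```

Keeping pheromone values normalized in this way is what makes their handling easier and the algorithm's behavior independent of the absolute scale of the solution qualities.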
|
39 |
k-ary search on modern processorsSchlegel, Benjamin, Gemulla, Rainer, Lehner, Wolfgang 19 May 2022 (has links)
This paper presents novel tree-based search algorithms that exploit the SIMD instructions found in virtually all modern processors. The algorithms are a natural extension of binary search: While binary search performs one comparison at each iteration, thereby cutting the search space in two halves, our algorithms perform k comparisons at a time and thus cut the search space into k pieces. On traditional processors, this so-called k-ary search procedure is not beneficial because the cost increase per iteration offsets the cost reduction due to the reduced number of iterations. On modern processors, however, multiple scalar operations can be executed simultaneously, which makes k-ary search attractive. In this paper, we provide two different search algorithms that differ in terms of efficiency and memory access patterns. Both algorithms are first described in a platform independent way and then evaluated on various state-of-the-art processors. Our experiments suggest that k-ary search provides significant performance improvements (factor two and more) on most platforms.
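The k-ary idea can be sketched in scalar form. This illustrates the control flow only, not the paper's SIMD implementation or memory layouts: on vector hardware the k-1 separator comparisons of each iteration would map to a single instruction.

```python
def k_ary_search(arr, key, k=4):
    """k-ary search over a sorted array: each iteration probes k-1
    separator elements, cutting the search space into k pieces
    (binary search is the special case k = 2). Returns an index of
    `key` in `arr`, or -1 if absent."""
    lo, hi = 0, len(arr)
    while lo < hi:
        # k-1 separator indices splitting [lo, hi) into k pieces
        seps = [lo + (hi - lo) * i // k for i in range(1, k)]
        lower = lo
        for s in seps:
            if key == arr[s]:
                return s
            if key < arr[s]:
                lo, hi = lower, s   # key lies in the piece before arr[s]
                break
            lower = s + 1           # key lies after arr[s]
        else:
            lo = lower              # key lies in the last piece
    return -1
```

With k = 2 this reduces to ordinary binary search; larger k trades more comparisons per iteration for fewer iterations, which only pays off when the comparisons run in parallel.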
|
40 |
Anwendung von Line-Search-Strategien zur Formoptimierung und ParameteridentifikationClausner, André 05 June 2013 (has links) (PDF)
The continuous development and improvement of technical processes today rests on stochastic and deterministic optimization strategies combined with numerical simulation of these processes. Since the FE simulation of forming processes is generally very time-consuming, deterministic methods suggest themselves for optimizing such processes, as they require fewer optimization steps and therefore fewer FE simulations. An important requirement for such optimization methods is global convergence to local minima, since the optimal parameter sets are not always approximately known. The two most important strategies for extending the limited convergence radius of the basic optimization methods (Newton-step-based methods and gradient methods) are the line-search strategy and the trust-region strategy. The fundamentals of the line-search strategy are reviewed and the most important subalgorithms are implemented. The method is then examined with respect to an efficient combination of subalgorithms and method parameters. Subsequently, the performance of an optimization method with a line-search strategy is compared with that of an optimization method, also implemented here, with a scaled trust-region strategy. After integrating the implemented methods into the program SPC-Opt, the tests are carried out on a least-squares problem from material parameter identification and on the shape optimization of a forming tool.
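A central line-search subalgorithm is backtracking under the Armijo sufficient-decrease condition. The sketch below shows this standard building block on a toy objective; it is not the thesis' implementation, which couples such steps to FE simulations inside SPC-Opt.

```python
def backtracking_line_search(f, grad_f, x, p, alpha=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking: shrink the step length alpha until the
    sufficient-decrease condition
        f(x + alpha p) <= f(x) + c * alpha * grad_f(x) . p
    holds. `p` must be a descent direction, so the loop terminates."""
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad_f(x), p))  # directional derivative
    while f([xi + alpha * di for xi, di in zip(x, p)]) > fx + c * alpha * slope:
        alpha *= beta
    return alpha
```

Wrapped around a Newton or gradient step, this globalizes the method: far from a minimum the step is shortened until it actually decreases the objective, while near the minimum the full step is accepted.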
|