1

Irreducible Infeasible Subsystem Decomposition for Probabilistically Constrained Stochastic Integer Programs

Gallego Arrubla, Julian Andres 16 December 2013 (has links)
This dissertation explores methods for finding irreducible infeasible subsystems (IISs) of systems of inequalities with binary decision variables and for solving probabilistically constrained stochastic integer programs (SIP-C). Finding IISs for binary systems is useful in decomposition methods for SIP-C. SIP-C has many important applications, including modeling of strategic decision-making problems in wildfire initial response planning. New theoretical results and two new algorithms to find IISs for systems of inequalities with binary variables are developed. The first algorithm uses the new theory and the method of the alternative polyhedron within a branch-and-bound (BAB) approach. The second algorithm applies the new theory and the method of the alternative polyhedron to a system in which zero/one box constraints are appended. Decomposition schemes using IISs for binary systems can be used to solve SIP-C. SIP-C is challenging to solve due to its generally non-convex feasible region. In addition, very weak lower (upper) bounds on the objective function are obtained from the linear programming (LP) relaxation of the deterministic equivalent problem (DEP) to SIP-C. This work develops a branch-and-cut (BAC) method based on IIS inequalities to solve SIP-C with random technology matrix and random right-hand-side vector. Computational results show that the LP relaxation of the DEP to SIP-C can be strengthened by the IIS inequalities. SIP-C modeling can be applied to wildfire initial response planning. A new methodology for wildfire initial response that includes a fire behavior simulation model, a wildfire risk model, and SIP-C is developed and tested. The new methodology assumes a known standard response needed to contain a fire of given size. This methodology is used to evaluate deployment decisions in terms of the number of firefighting resources positioned at each base, the expected number of escaped and contained fires, and the wildfire risk associated with fires not receiving a standard response. A study based on Texas district 12 (TX12), one of the Texas A&M Forest Service (TFS) fire planning units in east Texas, demonstrates the effectiveness of the new methodology for making strategic deployment decisions in wildfire initial response planning.
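For context on the alternative polyhedron mentioned above, the classical Gleeson–Ryan characterization for a linear system Ax ≤ b is sketched below; the dissertation's contribution lies in extending such machinery to systems with binary variables, which this standard LP statement does not cover.

```latex
% Farkas / Gleeson--Ryan view of IISs for an infeasible system Ax <= b:
% the system is infeasible iff the alternative polyhedron P is nonempty,
\[
  P \;=\; \{\, y \in \mathbb{R}^m \;:\; y^{\top}A = 0^{\top},\ y^{\top}b \le -1,\ y \ge 0 \,\},
\]
% and the supports of the vertices of P are exactly the IISs:
\[
  \{\, i \;:\; \bar{y}_i > 0 \,\} \ \text{is an IIS of } Ax \le b
  \quad\text{for every vertex } \bar{y} \text{ of } P.
\]
```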
2

Novelty Search and demands in evolutionary algorithms : A comparison between FINS and PMOEA for generating dungeon levels with demands

Bergström, Anton January 2019 (has links)
Evolutionary algorithms have proven effective for generating game levels. However, there is still a need for levels that both satisfy the demands the games impose and are as diverse as possible, to encourage repeated playthroughs. Novelty Search can be used to achieve this, but it lacks mechanisms that drive the population toward satisfying the demands placed on the levels. This work therefore compares two Novelty Search-based algorithms that both encourage demand satisfaction: Feasible Infeasible Novelty Search (FINS) and a Pareto-based multi-objective evolutionary algorithm (PMOEA) with two objectives, demands and novelty. The study compares the algorithms on three measures: the proportion of the population that satisfies the stated demands, how well these individuals solve a level-related problem, and the diversity among these individuals. In addition to PMOEA and FINS, a plain Novelty Search algorithm and a traditional evolutionary algorithm were implemented. Three experiments were conducted, varying the level size and the number of demands. The results show that PMOEA was better at producing individuals that satisfied all demands, and that these individuals were generally better at optimizing solutions than plain Novelty Search and FINS. However, FINS had higher diversity among its individuals than all the other algorithms tested. A weakness of the study is that the results are specific to the algorithms' configuration in the artifact; future work should therefore explore new configurations to generalize the results.
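For readers unfamiliar with the novelty metric underlying both FINS and the novelty objective of the PMOEA, a minimal sketch follows; the behavior descriptors, the Euclidean distance, and k = 15 are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

def novelty_score(descriptor, neighbors, k=15):
    """Mean distance from `descriptor` to its k nearest neighbors, where
    `neighbors` holds the behavior descriptors of the rest of the population
    plus the novelty archive (the individual itself excluded by the caller).
    k = 15 and Euclidean distance are illustrative choices."""
    dists = np.linalg.norm(np.asarray(neighbors) - np.asarray(descriptor), axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())
```

Individuals with high novelty scores are preferred for selection, which rewards levels that behave unlike anything seen so far rather than levels that score well on a fixed fitness function.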
3

Algorithms for irreducible infeasible subset detection in CSP - Application to frequency planning and graph k-coloring

Hu, Jun 27 November 2012 (has links) (PDF)
The frequency assignment problem (FAP) consists in assigning frequencies to the radio links of a network while satisfying the electromagnetic interference constraints among the links. Given the limited spectrum resources for each application, the frequency resources are often insufficient to deploy a wireless network without interference. In this case, the network is over-constrained and the problem is infeasible. Our objective is to identify an area with heavy interference. The work presented here concerns the detection of one of these areas with an algorithmic approach based on modeling the problem as a CSP. The frequency assignment problem can be modeled as a constraint satisfaction problem (CSP), which is represented by a triple: a set of variables (radio links), a set of constraints (electromagnetic interference) and a set of available frequencies. The interfered area in the CSP can be considered an irreducible infeasible subset (IIS). An IIS is an infeasible subproblem of irreducible size, that is to say, all proper subsets of an IIS are feasible. The identification of an IIS in a CSP serves two general interests. First, locating an IIS can easily prove the infeasibility of the problem: because the size of an IIS is assumed to be small compared to the entire problem, its infeasibility is relatively easier to prove. Second, we can locate the reason for infeasibility; in this case, the decision maker can propose solutions to relax the constraints inside the IIS, which perhaps leads to a feasible solution to the problem. This work proposes algorithms to identify an IIS in an over-constrained CSP. These algorithms have been tested on well-known benchmarks of the FAP and of the graph k-coloring problem. The results show a significant improvement on FAP instances compared to known methods.
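The abstract's definition (every proper subset of an IIS is feasible) is exactly the invariant behind the classic deletion filter. The sketch below is that generic textbook procedure, not the algorithms proposed in the thesis; `is_feasible` is an assumed user-supplied oracle, e.g. a CSP or LP solver call.

```python
def deletion_filter(constraints, is_feasible):
    """Shrink an infeasible constraint set to an IIS: drop each constraint
    in turn and keep the drop whenever the remainder is still infeasible."""
    assert not is_feasible(constraints), "input system must be infeasible"
    core = list(constraints)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]
        if is_feasible(candidate):
            i += 1            # constraint i is necessary for infeasibility
        else:
            core = candidate  # still infeasible without it: remove for good
    return core  # infeasible as a whole, feasible if any one constraint is dropped
```

The result is an IIS by construction, at the cost of one solver call per constraint, which is why specialized detection algorithms such as those proposed here matter on large over-constrained instances.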
4

Infeasible Path Detection : a Formal Model and an Algorithm

Aïssat, Romain 30 January 2017 (has links)
White-box, path-based testing is largely used for the validation of programs. Given the control-flow graph (CFG) of the program under test, a test suite is generated by selecting a collection of paths of interest, then trying to provide, for each path, some concrete input values that will make the program follow that path during a run. For the first step, there are various ways to define paths of interest: structural testing methods select some set of paths that fulfills coverage criteria related to elements of the graph; in random-based techniques, paths are selected according to a given distribution of probability over these elements (for instance, uniform probability over all paths of length less than a given bound). Both approaches can be combined as in structural statistical testing. The random-based methods have the advantage of providing a way to assess the quality of a test set as the minimal probability of covering an element of a criterion. The second step requires producing for each path its path predicate, i.e. the conjunction of the constraints over the input parameters that must hold for the system to run along that path. This is done using symbolic execution. Then, constraint solving is used to compute test data. If there are no input values for which the path predicate evaluates to true, the path is infeasible. It is very common for a program to have infeasible paths, and such paths can largely outnumber feasible paths. Infeasible paths selected during the first step do not contribute to the final test suite, and there is no better choice than to select another path, hoping for its feasibility. Handling infeasible paths is a serious limitation of structural methods, since most of the time is spent selecting useless paths. It is also a major challenge for all techniques in static analysis of programs, since the quality of the approximations they provide is lowered by data computed for paths that do not correspond to actual program runs. To overcome this problem, different methods have been proposed, like concolic testing or random testing based on the input domain. In path-biased random testing, paths are drawn according to a given distribution and their feasibility is checked in a second step. We present an algorithm that builds better approximations of the behavior of a program than its CFG, producing a transformed CFG which still over-approximates the set of feasible paths but with fewer infeasible paths. This transformed graph is used for drawing paths at random. We modeled our graph transformations and formally proved, using the interactive theorem proving environment Isabelle/HOL, the key properties that establish the correctness of our approach. Our algorithm uses symbolic execution and constraint solving, which allow detecting whether some paths are infeasible. Since programs can contain loops, their graphs can contain cycles. In order to avoid following a cyclic path indefinitely, we enrich symbolic execution with the detection of subsumptions. A subsumption can be interpreted as the fact that some node met during the analysis is a particular case of another node met previously: there is no need to explore the successors of the subsumed node, as they are subsumed by the successors of the subsumer. Our algorithm has been implemented in a prototype, whose design closely follows the formalization, giving a good level of confidence in its correctness. In this thesis, we introduce the theoretical concepts on which our approach relies, its formalization in Isabelle/HOL, the algorithms our prototype implements, and the various experiments done and results obtained using it.
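To make the path-predicate idea concrete, the sketch below assembles the predicate of one CFG path and checks its satisfiability with the Z3 SMT solver. Z3 and this toy program are illustrative assumptions; the thesis's prototype follows its Isabelle/HOL formalization rather than this encoding.

```python
from z3 import Int, Solver, unsat  # pip install z3-solver

# Toy program:  if (x > 10) {...}  if (x < 5) {...}
# The path taking BOTH 'then' branches is infeasible: x > 10 and x < 5 conflict.
x = Int("x")
path_predicate = [x > 10, x < 5]   # constraints collected by symbolic execution

solver = Solver()
solver.add(*path_predicate)
if solver.check() == unsat:
    print("infeasible path: no input drives the program along it")
else:
    print("feasible path, witness input:", solver.model())
```

A path selector working on a transformed CFG in which such contradictory branch pairs have been split apart simply cannot draw this path, which is the point of the graph transformation described above.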
5

Using maximal feasible subset of constraints to accelerate a logic-based Benders decomposition scheme for a multiprocessor scheduling problem

Grgic, Alexander, Andersson, Filip January 2022 (has links)
Logic-based Benders decomposition (LBBD) is a strategy for solving discrete optimisation problems. In LBBD, the optimisation problem is divided into a master problem and a subproblem, and each part is solved separately. LBBD methods that combine mixed-integer programming and constraint programming have been successfully applied to solve large-scale scheduling and resource allocation problems. Such combinations typically solve an assignment-type master problem and a scheduling-type subproblem. However, a challenge with LBBD methods that have feasibility subproblems is that they do not provide a feasible solution until an optimal solution is found. In this thesis, we show that feasible solutions can be obtained by finding and combining feasible parts of an infeasible master problem assignment. We use these insights to develop an acceleration technique for LBBD that solves a series of subproblems, according to algorithms for constructing a maximal feasible subset of constraints (MaFS). Using a multiprocessor scheduling problem as a benchmark, we study the computational impact of this technique. We evaluate three variants of LBBD schemes: the first uses MaFS, the second uses irreducible infeasible subsets of constraints (IIS), and the third combines MaFS with IIS. Computational tests were performed on an instance set of multiprocessor scheduling problems. In total, 83 instances were tested, with between 2,794 and 10,661 tasks each. The results showed that when applying our acceleration technique in the decomposition scheme, the pessimistic bounds were strong, but the convergence was slow. The decomposition scheme combining our acceleration technique with the IIS-based acceleration technique showed potential to accelerate the method.
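A minimal sketch of the greedy construction behind a maximal feasible subset of constraints, assuming a user-supplied feasibility oracle; the thesis's technique solves a series of scheduling subproblems to this end, so this shows only the shape of the idea, not their scheme.

```python
def maximal_feasible_subset(constraints, is_feasible):
    """Greedily grow a maximal feasible subset: test each constraint once
    and keep it iff the accumulated set stays feasible.
    `is_feasible` is an assumed oracle, e.g. a scheduling-subproblem solve."""
    kept = []
    for c in constraints:
        if is_feasible(kept + [c]):
            kept.append(c)
    return kept  # maximal: no rejected constraint can be added back
```

Because infeasibility is monotone under adding constraints, any constraint rejected early remains unaddable later, so the returned set is maximal, though not necessarily of maximum cardinality; the parts of the assignment it covers are exactly the "feasible parts" that can be combined into an early feasible solution.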
6

Heuristic Algorithms for Graph Coloring Problems

Sun, Wen 29 November 2018 (has links)
This thesis concerns four NP-hard graph coloring problems, namely graph coloring (GCP), equitable coloring (ECP), weighted vertex coloring (WVCP) and k-vertex-critical subgraphs (k-VCS). These problems are extensively studied in the literature, not only for their theoretical intractability, but also for their real-world applications in many domains. Given that they belong to the class of NP-hard problems, it is computationally difficult to solve them exactly in the general case. For this reason, this thesis is devoted to developing effective heuristic approaches to tackle these challenging problems. We develop a reduction memetic algorithm (RMA) for the graph coloring problem, a feasible and infeasible search algorithm (FISA) for the equitable coloring problem, an adaptive feasible and infeasible search algorithm (AFISA) for the weighted vertex coloring problem, and an iterated backtrack-based removal (IBR) algorithm for the k-VCS problem. All these algorithms were experimentally evaluated and compared with state-of-the-art methods.
7

Algorithms for irreducible infeasible subset detection in CSP - Application to frequency planning and graph k-coloring

Hu, Jun 27 November 2012 (has links)
The frequency assignment problem (FAP) consists in assigning frequencies to the radio links of a network while satisfying the electromagnetic interference constraints among the links. Given the limited spectrum resources for each application, the frequency resources are often insufficient to deploy a wireless network without interference. In this case, the network is over-constrained and the problem is infeasible. Our objective is to identify an area with heavy interference. The work presented here concerns the detection of one of these areas with an algorithmic approach based on modeling the problem as a CSP. The frequency assignment problem can be modeled as a constraint satisfaction problem (CSP), which is represented by a triple: a set of variables (radio links), a set of constraints (electromagnetic interference) and a set of available frequencies. The interfered area in the CSP can be considered an irreducible infeasible subset (IIS). An IIS is an infeasible subproblem of irreducible size, that is to say, all proper subsets of an IIS are feasible. The identification of an IIS in a CSP serves two general interests. First, locating an IIS can easily prove the infeasibility of the problem: because the size of an IIS is assumed to be small compared to the entire problem, its infeasibility is relatively easier to prove. Second, we can locate the reason for infeasibility; in this case, the decision maker can propose solutions to relax the constraints inside the IIS, which perhaps leads to a feasible solution to the problem. The search for an IIS thus amounts to solving a fundamental problem that belongs among decision-support tools. This work proposes algorithms to identify an IIS in an over-constrained CSP. These algorithms have been tested on well-known benchmarks of the FAP and of the graph k-coloring problem. The results show a significant improvement on FAP instances compared to known methods.
8

Lookup of data flow properties to improve worst-case execution time estimations

Ruiz, Jordy 21 December 2017 (has links)
The search for an upper bound on the execution time of a program is an essential part of the verification of real-time critical systems. The execution times of the programs of such systems generally vary a lot, and it is difficult, or impossible, to predict the range of the possible times. Instead, it is better to look for an approximation of the Worst-Case Execution Time (WCET). A crucial requirement of this estimate is that it must be safe, that is, it must be guaranteed to be above the real WCET. Because we are looking to prove that the system in question terminates reasonably quickly, an overapproximation is the only acceptable form of approximation. The guarantee of such a safety property could not sensibly be obtained without static analysis, as a result based on a battery of tests could not be safe without an exhaustive handling of test cases. Furthermore, in the absence of a certified compiler (and technique for the safe transfer of properties to the binaries), the extraction of properties must be done directly on binary code to warrant their soundness. However, this approximation comes at a cost: an important pessimism, i.e. the gap between the estimated WCET and the real WCET, would lead to superfluous extra hardware costs in order for the system to respect the imposed timing requirements. It is therefore important to improve the precision of the WCET estimate by reducing this gap, while maintaining the safety property, so that the gap is low enough not to lead to immoderate costs. A major cause of overestimation is the inclusion of semantically impossible paths, called infeasible paths, in the WCET computation. This is due to the use of the Implicit Path Enumeration Technique (IPET), which works on a superset of the possible execution paths. When the Worst-Case Execution Path (WCEP), corresponding to the estimated WCET, is infeasible, the precision of that estimation is negatively affected. In order to deal with this loss of precision, this thesis proposes an infeasible path detection technique, enabling the improvement of the precision of static analyses (namely for WCET estimation) by notifying them of the infeasibility of some paths of the program. This information is passed in the form of data flow properties, formatted in the portable annotation language FFX, allowing the communication of the results of our infeasible path analysis to other analyses. The methods presented in this thesis are included in the OTAWA framework, developed within the TRACES team at IRIT; they themselves rely on approximations to represent the possible states of the machine at different points of the program.
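For context, the IPET estimation that infeasible paths pollute is a small integer linear program over execution counts; in the hedged sketch below, an infeasible-path property is injected as one extra linear constraint. The example property and its encoding are assumptions for illustration; the exact constraints FFX annotations translate to may differ.

```latex
% IPET: the WCET estimate is an ILP over basic-block execution counts x_i,
% with per-block worst-case costs t_i and structural flow conservation:
\[
  \widehat{\mathrm{WCET}} \;=\; \max \sum_{i} t_i \, x_i
  \quad \text{s.t.} \quad
  \sum_{e \in \mathrm{in}(i)} x_e \;=\; \sum_{e \in \mathrm{out}(i)} x_e \;=\; x_i ,
  \qquad x_{\mathrm{entry}} = 1 .
\]
% An infeasible-path property such as "blocks a and b never both execute
% in one run" (an assumed example) tightens the ILP with:
\[
  x_a + x_b \;\le\; 1 .
\]
```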
9

Using Optimization's Algorithms by Designing of Structures

Fedorik, Filip Unknown Date (has links)
The application of optimization algorithms in the design of many economic and industrial problems currently represents a significant task. The development of high-powered computers allows the application of difficult mathematical techniques and physical phenomena to simulate real problems with sufficient accuracy. The optimization techniques used in engineering design are mostly represented by modified mathematical programming methods with extensions of their usability. The aim of the presented thesis, "Using Optimization's Algorithms by Designing of Structures", is to analyze the applicability of the optimization procedures available in the widely used computing system ANSYS to civil and mechanical engineering practice. The numerical analyses were performed on multi-extreme, one- to three-dimensional optimization problems, multi-dimensional problems expressed by minimizing the weight of a truss beam, and the efficient design of air-gap location in wooden studs from the point of view of the thermal features of the structure. The analyzed optimization processes are mostly verified against accurate manual computations and graphical solutions, and the accent is put on the optimization methods' potential to improve the robustness, efficiency and accuracy of optimization algorithms in civil engineering design problems. The optimization methods represent a suitable approach to improving the efficient design of a wide range of civil and mechanical engineering structures and elements. By combining their advantages with the FEM/FEA method it is possible to achieve very good results, although the robustness of the solutions is not guaranteed. The robustness and accuracy of the procedure could be increased by competent exploration of the design space and suitable selection of the optimization methods' features.
