331

Hybrid Particle Swarm Optimization Algorithm For Obtaining Pareto Front Of Discrete Time-cost Trade-off Problem

Aminbakhsh, Saman 01 January 2013 (has links) (PDF)
In pursuance of decreasing costs, both the client and the contractor strive to speed up the construction project. However, accelerating the project schedule imposes additional cost and is profitable only up to a certain limit. Paramount for construction management, the analysis of this trade-off between duration and cost is known as time-cost trade-off (TCT) optimization. The inadequacy of existing commercial software packages for such analyses, together with the prominence of discrete formulations, motivated the development of different paradigms of particle swarm optimizers (PSO) for three extensions of the discrete TCT problem (DTCTP). A sole-PSO algorithm for simultaneous minimization of time and cost is proposed, requiring only minimal adjustments to shift focus to the completion-deadline problem. A hybrid model is also developed to solve the time-cost curve extension of DTCTPs. Employing novel principles for the evaluation of cost slopes and of pbest/gbest positions, the hybrid SAM-PSO model combines the complementary strengths of overhauled versions of the Siemens Approximation Method (SAM) and the PSO algorithm. The effectiveness and efficiency of the proposed algorithms are validated on instances derived from the literature. In the computational experiments, a mixed-integer programming technique is used to establish the optimal non-dominated fronts of two specific benchmark problems for the first time in the literature. Another chief contribution of this thesis is the ability of the SAM-PSO model to locate the entire Pareto fronts of the tested instances within acceptable time frames and with reasonable deviations from the optima. Possible further improvements and applications of the SAM-PSO model are suggested in the conclusion.
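The abstract gives no implementation details; as a rough, self-contained illustration of what a discrete PSO with a Pareto archive looks like on a toy TCT instance, consider the sketch below. All activity data, parameter values, and function names are hypothetical, the activities are treated as sequential rather than as a project network, and Python is an assumption rather than the thesis's environment.

```python
import random

# Toy DTCTP instance (hypothetical data): each activity has discrete
# (duration, cost) execution modes; activities are assumed sequential
# here so the fitness evaluation stays trivial.
ACTIVITIES = [
    [(3, 100), (2, 180), (1, 300)],   # modes for activity 0
    [(5, 200), (4, 260), (2, 450)],   # modes for activity 1
    [(4, 150), (3, 210)],             # modes for activity 2
]

def evaluate(modes):
    """Project duration and direct cost for one mode assignment."""
    duration = sum(ACTIVITIES[i][m][0] for i, m in enumerate(modes))
    cost = sum(ACTIVITIES[i][m][1] for i, m in enumerate(modes))
    return duration, cost

def dominates(a, b):
    """Pareto dominance on (duration, cost): no worse in both, better in one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def discrete_pso(n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    n_act = len(ACTIVITIES)
    swarm = [[random.randrange(len(ACTIVITIES[i])) for i in range(n_act)]
             for _ in range(n_particles)]
    velocity = [[0.0] * n_act for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    archive = []  # non-dominated (duration, cost, modes) triples

    def update_archive(modes):
        obj = evaluate(modes)
        if any(dominates(a[:2], obj) for a in archive):
            return
        archive[:] = [a for a in archive if not dominates(obj, a[:2])]
        archive.append((obj[0], obj[1], modes[:]))

    for p in swarm:
        update_archive(p)
    for _ in range(n_iter):
        for k, p in enumerate(swarm):
            gbest = random.choice(archive)[2]  # guide drawn from the archive
            for i in range(n_act):
                velocity[k][i] = (w * velocity[k][i]
                                  + c1 * random.random() * (pbest[k][i] - p[i])
                                  + c2 * random.random() * (gbest[i] - p[i]))
                # round and clamp the continuous update back to a valid mode index
                p[i] = min(len(ACTIVITIES[i]) - 1,
                           max(0, int(round(p[i] + velocity[k][i]))))
            if dominates(evaluate(p), evaluate(pbest[k])):
                pbest[k] = p[:]
            update_archive(p)
    return sorted(archive)

for duration, cost, _ in discrete_pso():
    print(f"duration={duration}, cost={cost}")
```

On a real DTCTP the evaluation step would involve critical-path analysis of the activity network, which is where most of the computational effort goes.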
332

Multiple Objective Evolutionary Algorithms for Independent, Computationally Expensive Objectives

Rohling, Gregory Allen 19 November 2004 (has links)
This research augments current Multiple Objective Evolutionary Algorithms (MOEAs) with methods that dramatically reduce the time required to evolve toward a region of interest in objective space. MOEAs are superior to other optimization techniques when the search space is of high dimension and contains many local minima and maxima, and they are most interesting when applied to non-intuitive complex systems. But such systems are often computationally expensive to evaluate, and when each objective requires an independent computation, the expense grows with each additional objective. This research has developed methods that reduce the time required for evolution by reducing the number of objective evaluations, while still evolving solutions that are Pareto optimal. To date, all other MOEAs require the evaluation of all objectives before a fitness value can be assigned to an individual. The original contributions of this thesis are: 1. Development of a hierarchical search space description that allows association of crossover and mutation settings with elements of the genotypic description. 2. Development of a method for parallel evaluation of individuals that removes the need for synchronization delays. 3. Dynamic evolution of thresholds for objectives to allow partial evaluation of objectives for individuals. 4. Dynamic objective orderings to minimize the time spent on unnecessary objective evaluations. 5. Application of MOEAs to the computationally expensive flare pattern design domain. 6. Application of MOEAs to the optimization of fielded missile warning receiver algorithms. 7. Development of a new method of using MOEAs for automatic design of pattern recognition systems.
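Contributions 3 and 4 hinge on not paying for every objective evaluation. A minimal sketch of that idea, assuming independent objective functions and per-objective rejection thresholds (the names, signature, and toy objectives are illustrative, not the thesis's API):

```python
def partial_evaluate(individual, objectives, thresholds):
    """Evaluate objectives one at a time, in a (possibly dynamic) order,
    and stop as soon as one exceeds its rejection threshold, skipping
    the remaining expensive calls."""
    values = []
    for objective, limit in zip(objectives, thresholds):
        v = objective(individual)   # each objective is an independent, costly computation
        values.append(v)
        if v > limit:               # outside the region of interest: reject early
            return values, False
    return values, True

# Hypothetical usage: two "expensive" objectives with fixed thresholds.
objectives = [lambda x: x**2, lambda x: abs(x - 3)]
print(partial_evaluate(1.0, objectives, thresholds=[4.0, 1.0]))  # survives both checks
print(partial_evaluate(5.0, objectives, thresholds=[4.0, 1.0]))  # rejected after one call
```

Ordering the cheapest or most discriminating objective first maximizes the savings, which is what the dynamic objective orderings of contribution 4 aim to do automatically.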
333

Risk Measures Constituting Risk Metrics for Decision Making in the Chemical Process Industry

Prem, Katherine December 2010 (has links)
Catastrophic incidents in the process industry leave a marked legacy of staggering economic and societal losses incurred by the company, the government, and society. The work described herein is a novel approach proposed to help predict and mitigate potential catastrophes and to clarify the stakes at risk for better risk-informed decision making. The methodology includes societal impact as a risk measure alongside the monetization of tangible asset damage. Predicting incidents through leading metrics is pivotal to improving plant processes and to individual and societal safety in the vicinity of the plant (portfolio). From this study it can be concluded that a comprehensive judgment of all the risks and losses should entail analysis of the overall results of all possible incident scenarios. Value-at-Risk (VaR) is most suitable as an overall measure for many scenarios and for a large number of portfolio assets. FN-curves and F$-curves can be correlated, which is very beneficial for understanding the trends of historical incidents in the U.S. chemical process industry. Analyzing historical databases can provide valuable information on incident occurrences and their consequences as lagging metrics (or lagging indicators) for the mitigation of portfolio risks. This study also concludes that there is a strong statistical relationship between the different consequence tiers of the safety pyramid, and that Heinrich's safety pyramid is comparable to data mined from the HSEES database. Furthermore, any chemical plant operation is robust only when a strategic balance is struck between optimal plant operations and maintaining health and safety and sustaining the environment. The balance emerges from choosing the best option amidst several conflicting parameters, so strategies for normative decision making should be utilized for making choices under uncertainty. Hence, decision theory is used here to lay the framework for choosing the optimum portfolio option among several competing portfolios. To understand the strategic interactions of the different contributing representative sets that play a key role in determining the most preferred action for optimum production and safety, concepts of game theory are utilized, and a framework is provided as a novel application to the chemical process industry.
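The VaR measure mentioned above has a simple empirical form. A hedged sketch with synthetic loss data (the distribution and its parameters are placeholders, not values from the study):

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR: the loss level that is not exceeded with
    probability alpha across the simulated scenarios."""
    return np.quantile(losses, alpha)

# Hypothetical example: simulated total loss (asset damage + monetized
# societal impact) for 10,000 incident scenarios across a portfolio.
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=12.0, sigma=1.5, size=10_000)
print(f"95% VaR: ${value_at_risk(losses):,.0f}")
```

The appeal of VaR in this setting is exactly what the abstract notes: it collapses many scenarios over many portfolio assets into one comparable number per portfolio.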
334

Duality for convex composed programming problems

Vargyas, Emese Tünde 20 December 2004 (has links) (PDF)
The goal of this work is to present a conjugate duality treatment of composed programming as well as to give an overview of some recent developments in both scalar and multiobjective optimization. To this end, we first study a single-objective optimization problem in which the objective function as well as the constraints are given by composed functions. By means of the conjugacy approach based on perturbation theory, we attach three different dual problems to it and examine the relations between their optimal objective values. Under suitable convexity and monotonicity assumptions, we verify the equality of these optimal objective values and strong duality between the primal and the dual problems, respectively. Having proved strong duality, we derive optimality conditions for each of these duals. As special cases of the original problem, we study duality for the classical optimization problem with inequality constraints and for the unconstrained optimization problem, recovering the duality results known from the literature. The second part of this work is devoted to location analysis. Considering first the location model with monotonic gauges, it turns out that the same conjugate duality principle can also be used for solving this kind of problem. Taking several norms in the objective function instead of the monotonic gauges, duality investigations for different location problems are carried out; in particular, duality statements and optimality conditions are derived for the classical Weber and minimax location problems with gauges as objective functions. We finish our investigations with the study of composed multiobjective optimization problems. Here we first scalarize the problem and study the scalarized version using the conjugacy approach developed before. The optimality conditions obtained in this case allow us to construct a multiobjective dual problem to the primal one, and weak and strong duality are proved. In conclusion, some special cases of the composed multiobjective optimization problem are considered, among them the classical convex multiobjective problems with inequality and set constraints as well as vector-valued location problems; particularizing the results, we construct a multiobjective dual for each of them and verify weak and strong duality.
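For readers unfamiliar with the conjugacy approach the abstract references, the general pattern (standard textbook notation, not taken from the thesis itself) is: embed the primal problem (P) inf_x F(x) into a family of perturbed problems via a perturbation function Φ(x, p) with Φ(x, 0) = F(x), and pair it with a dual built from the conjugate Φ*:

```latex
% Perturbation-based conjugate duality (standard form; the thesis
% instantiates \Phi for composed objective and constraint functions).
\Phi^*(x^*, p^*) = \sup_{x,\,p}\bigl\{\langle x^*, x\rangle
                 + \langle p^*, p\rangle - \Phi(x, p)\bigr\},
\qquad
(D)\quad \sup_{p^*}\bigl\{-\Phi^*(0, p^*)\bigr\}.
```

Weak duality, sup(D) ≤ inf(P), always holds by construction; the convexity and monotonicity assumptions mentioned in the abstract are what upgrade it to strong duality, and different choices of Φ yield the three different dual problems.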
335

Automated estimation of time and cost for determining optimal machining plans

Van Blarigan, Benjamin 30 July 2012 (has links)
The process of taking a solid model to a machined part requires the time and skills of a range of professionals, and several hours of part review, process planning, and production. Much of this time is spent creating a methodical step-by-step process plan for producing the part from stock. The work presented here is part of a software package that performs automated process planning for a solid model. This software is capable not only of greatly decreasing the planning time for part production, but also of giving the designer valuable feedback about the part, namely the time and cost associated with manufacturing it. To generate these parameters, all aspects of creating the part must be simulated, and models that replicate these aspects are presented here. For milling, an automatic tool selection method is presented. Given this tooling, another model uses specific information about the part to generate a tool path length. A machining simulation model calculates the relevant parameters and estimates a machining time for the tool and tool path determined previously. This time value, along with the machining parameters, is used to estimate the wear on the tooling used in the process. From the machining time and the tool wear, a cost for the process can be determined. Other models capture non-machining production times, and all times are combined with the billing rates of machines and operators to give an overall cost for machining a feature on a part. If several such features are required to create the part, these models are applied to each feature until a complete process plan has been created. Further post-processing of the process plan is then required: using a list of available machines, this work considers creating the part on each machine, or on any combination of them. Candidates for creating the part on specific machines are generated and filtered on time and cost so that only the best candidates are kept; these can be returned to the user, who evaluates them and chooses one. Results are presented for several example parts.
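The cost roll-up described here (machining time from tool path length and feed rate, prorated tool wear, billed machine and operator time) can be sketched in a few lines; the formula, parameter names, and default values below are illustrative assumptions, not the software's actual model:

```python
def machining_cost(path_length_mm, feed_rate_mm_min, tool_cost, tool_life_min,
                   machine_rate_per_min, operator_rate_per_min, setup_min=10.0):
    """Crude feature cost: cutting time from path length and feed rate,
    tool wear prorated against rated tool life, plus billed time."""
    cutting_min = path_length_mm / feed_rate_mm_min
    wear_cost = tool_cost * (cutting_min / tool_life_min)
    billed_min = cutting_min + setup_min   # non-machining time folded into setup here
    return billed_min * (machine_rate_per_min + operator_rate_per_min) + wear_cost

# Hypothetical feature: 1.2 m of tool path at 300 mm/min on a ~$40/h machine.
print(machining_cost(path_length_mm=1200, feed_rate_mm_min=300,
                     tool_cost=25.0, tool_life_min=120,
                     machine_rate_per_min=0.67, operator_rate_per_min=0.50))
```

Summing such per-feature costs over all features, and repeating for each candidate machine assignment, gives the time/cost values on which candidates are filtered.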
336

Portfolio management using computational intelligence approaches : forecasting and optimising the stock returns and stock volatilities with fuzzy logic, neural network and evolutionary algorithms

Skolpadungket, Prisadarng January 2013 (has links)
Portfolio optimisation is subject to a number of constraints arising from practical matters and regulations. Closed-form mathematical solutions of portfolio optimisation problems usually cannot accommodate these constraints, and exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also impaired by estimation error, caused by our limited ability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the two-objective problem subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolio solutions that are evenly distributed along the efficient front, while MOGA is more time-efficient. An Evolutionary Artificial Neural Network (EANN) is proposed that automatically evolves the ANN's initial values and its structure of hidden nodes and layers. The EANN gives better stock-return forecasts than Ordinary Least Squares estimation and than Back Propagation and Elman Recurrent ANNs. Adaptation algorithms based on fuzzy-logic-like rules are proposed for selecting the pair of forecasting models best suited to a given economic scenario; their predictive performance is better than that of the comparison forecasting models. MOGA and SPEA2 are modified to include a third objective that handles model risk, and are evaluated and tested for their performance. The results show that they perform better than their counterparts without the third objective.
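A constraint-repair step is one common way such cardinality, floor, and round-lot constraints are enforced inside a genetic algorithm; the sketch below is a generic version of that idea, not the thesis's operator, and assumes the candidate weight vector has at least one positive entry:

```python
import numpy as np

def repair(weights, k=10, floor=0.01, lot=0.001):
    """Repair a candidate portfolio to satisfy cardinality (at most k assets),
    floor (minimum weight if held) and round-lot (weight granularity) constraints.
    Crude: the final renormalisation slightly perturbs exact lot sizes."""
    w = np.maximum(weights, 0.0)
    keep = np.argsort(w)[-k:]                  # cardinality: keep the k largest holdings
    mask = np.zeros_like(w, dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0
    w[mask] = np.maximum(w[mask], floor)       # floor constraint on held assets
    w /= w.sum()                               # renormalise to full investment
    w = np.round(w / lot) * lot                # round-lot granularity
    return w / w.sum()                         # re-normalise after rounding

rng = np.random.default_rng(1)
print(repair(rng.random(30), k=5))             # toy 30-asset candidate, 5 holdings kept
```

Repair keeps every individual feasible, so the genetic operators themselves can stay simple; penalty functions are the usual alternative.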
337

Simulation and optimization of energy consumption in wireless sensor networks

Zhu, Nanhao 11 October 2013 (has links) (PDF)
Major advances in embedded-systems technology over recent years have made it possible to combine sensing, data processing, and various wireless communication technologies in a single node. Wireless sensor networks (WSNs), which consist of large numbers of such nodes, have attracted worldwide attention in academia and industry, since their applications are widespread in fields such as environmental monitoring, the military domain, event tracking, and disaster detection. Because of their dependence on batteries, the energy consumption of sensor networks has always been the foremost concern. In this work, a mixed method is used for accurate energy evaluation of sensor networks: the design of a SystemC-based simulation environment at the system and transaction levels for energy exploration, and the construction of an energy-measurement platform for real-world testbed node measurements used to calibrate and validate both the node energy simulation model and the operating model. The detailed energy consumption of several different networks based on the node platform is studied and compared across different types of scenarios, and general energy-saving strategies are given after each scenario for developers and researchers focused on designing energy-efficient sensor networks. An optimization framework based on a genetic algorithm is designed and implemented in MATLAB for energy-aware sensor networks. Owing to the global search property of genetic algorithms, the framework can automatically and intelligently tune hundreds of candidate solutions to find the most suitable trade-off between energy consumption and other performance indicators. The high efficiency and reliability of the framework in finding trade-off solutions between node energy, network packet loss, and latency were demonstrated by tuning the parameters of the unslotted CSMA/CA algorithm (the non-beacon mode of IEEE 802.15.4) in our SystemC-based simulation via a weighted-sum cost function. In addition, the framework also supports multi-scenario, multi-objective optimization tasks, as shown through the study of a typical medical application on the human body.
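As an illustration of the weighted-sum GA tuning described above, here is a toy version in Python rather than MATLAB; the `simulate` stub stands in for the SystemC simulation, and all coefficients, weights, and parameter ranges are invented for the example rather than taken from the thesis:

```python
import random

def simulate(min_be, max_csma_backoffs):
    """Stand-in for the SystemC simulation: returns (energy, loss, latency).
    Purely synthetic response surface for illustration."""
    energy = 80 + 5 * min_be - 2 * max_csma_backoffs + random.gauss(0, 1)
    loss = max(0.0, 0.3 - 0.05 * min_be - 0.03 * max_csma_backoffs)
    latency = 10 + 4 * min_be + 3 * max_csma_backoffs
    return energy, loss, latency

def cost(params):
    e, p, l = simulate(*params)
    return 0.5 * e / 100 + 0.3 * p + 0.2 * l / 50  # weighted-sum scalarisation

def ga(pop_size=20, generations=40):
    # Individuals are (macMinBE, macMaxCSMABackoffs) pairs; ranges assumed.
    pop = [(random.randint(0, 8), random.randint(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                         # noisy fitness; ranking tolerates it
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = (a[0], b[1])                   # one-point crossover
            if random.random() < 0.2:              # mutation on the first gene
                child = (random.randint(0, 8), child[1])
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

print("best (macMinBE, macMaxCSMABackoffs):", ga())
```

The weighted-sum cost collapses the three indicators into one fitness value; the weights encode the trade-off the designer is willing to accept between energy, packet loss, and latency.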
338

The impact of price discrimination on tourism demand / Elizabeth Maria Fouché

Fouché, Elizabeth Maria January 2005 (has links)
The primary goal of this study was to determine the impact of price discrimination on tourism demand. Four objectives were defined with reference to this goal. The first was to analyse the concept of price discrimination and relevant theories by means of a literature study. In this regard it was found that price discrimination between markets is fairly common and occurs when the same goods are sold to different customers at different prices. Price discrimination becomes possible as soon as some monopoly power exists, and it is feasible when it is impossible, or at least impractical, for the buyers to trade among themselves. Three kinds of price discrimination can be applied: first-degree, second-degree and third-degree. The data also indicated that price discrimination is advantageous (it mainly increases profit) and has several other effects too. The second objective was to analyse examples of price discrimination by means of international case studies. In these case studies it was found that demand and supply, and therefore consumer and product, form the basis of price discrimination: if demand did not exist, it would be impossible to apply. The findings also indicated that, for an organisation to practise price discrimination, the markets must be separated effectively, and it will only succeed if there is a significant difference in demand elasticity between the different consumers. Furthermore, the ability to charge these different prices depends on the consumer's ability and willingness to pay. An organisation that decides to price discriminate can expect a higher profit, a more optimal pricing policy and an increase in sales. The third objective was to analyse national case studies, comparing data from a tourism organisation that price discriminates (Mosetlha Bush Camp, situated in the North West) with two organisations that do not (Kgalagadi Transfrontier Park in the Northern Cape and Golden Leopard Resort, also situated in the North West). It was found that a customer with low price elasticity is less deterred by a higher price than a customer with high price elasticity of demand. As long as the customer's price elasticity is less than one, it is very advantageous to increase the price: the seller in this case gets more money for fewer goods. As the price increases, the price elasticity tends to rise above one. The fourth objective was to draw conclusions and make recommendations. It was concluded that price discrimination can be applied successfully in virtually any organisation or industry, and that it does not always have a negative effect but can have a positive one as well; in particular, it can have a positive effect on tourism demand. The findings emphasised that the main reason for implementing price discrimination is to increase profit at the cost of reducing consumer surplus. From the results it was recommended that more research on this topic be conducted. / Thesis (M.Com. (Tourism))--North-West University, Potchefstroom Campus, 2006.
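The elasticity claim can be checked with one line of standard calculus (not taken from the thesis): with revenue R(p) = p·q(p) and price elasticity ε = −(p/q)·dq/dp,

```latex
\frac{dR}{dp} = q + p\,\frac{dq}{dp} = q\,(1 - \varepsilon),
```

so revenue rises with price exactly while ε < 1, matching the abstract's observation that raising the price pays off until elasticity climbs above one.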
339

Optimisation avec prise en compte des incertitudes dans la mise en forme par hydroformage

Ben Abdessalem, Mohamed Anis 08 June 2011 (has links) (PDF)
The hydroforming process is widely used in the automotive and aeronautics industries. Deterministic optimization has been used for process control and optimization over the last decade. However, under real conditions, various parameters such as material properties, geometric dimensions, and loads exhibit scatter that can affect the stability and reliability of the process. It is necessary to introduce these uncertainties into the parameters and to account for their variability. The main objective of this contribution is the reliability assessment and the optimization of the hydroforming process in the presence of uncertainties. The first part of this thesis proposes a general approach for evaluating the spatial probability of failure of the hydroforming process, mainly in the critical regions. With this approach, plastic instabilities can be avoided during a hydroforming operation. The method is based on Monte Carlo simulations coupled with metamodels, with the forming limit curve used as the failure criterion for potential plastic instabilities. The second part of this thesis addresses optimization of the hydroforming process under uncertainty. Using illustrative examples, it is shown that the probabilistic approach is an effective method for optimizing the process so as to reduce the probability of failure and render the process insensitive, or only slightly sensitive, to the sources of uncertainty. The difficulty lies in handling the reliability constraints, which require enormous computational effort and raise classical numerical problems such as convergence, accuracy, and stability. To circumvent this, the response surface method coupled with Monte Carlo simulations is used to evaluate the probabilistic constraints. The probabilistic approach can ensure the stability and reliability of the process and considerably reduces the percentage of defective parts. In this part, two methods are used: reliability-based optimization and robust optimization. The last part optimizes the process with a multi-objective (MO) strategy under uncertainty. Hydroforming is an MO problem in which more than one performance measure must be optimized simultaneously. The main objective is to study how the Pareto front evolves when uncertainties affect the objective functions or the parameters. A new methodology is proposed that presents the solutions in a new space and classifies them according to their probabilities of failure. This classification makes it possible to identify the best solution and provides insight into the reliability of each solution.
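The first part's core computation, a Monte Carlo estimate of the failure probability through a metamodel of the forming-limit margin, can be sketched as follows; the limit-state function and input distributions are stand-ins, not the thesis's calibrated models:

```python
import numpy as np

def failure_probability(g, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0], where g is a limit-state
    function (here it would be a metamodel of the forming-limit-curve margin)
    and `sample` draws n realisations of the uncertain parameters."""
    x = sample(n, np.random.default_rng(seed))
    return np.mean(g(x) <= 0.0)

# Hypothetical limit state: margin between the forming limit curve and the
# major strain reached, as a linear metamodel in two uncertain inputs
# (e.g. hardening exponent and friction coefficient scatter).
g = lambda x: 0.12 - 0.8 * x[:, 0] - 0.5 * x[:, 1]
sample = lambda n, rng: rng.normal([0.0, 0.0], [0.05, 0.05], size=(n, 2))
print(f"P_f = {failure_probability(g, sample):.4f}")
```

Replacing the finite-element hydroforming simulation with a cheap metamodel is what makes the 100,000 evaluations affordable; the same estimator then serves as the probabilistic constraint inside the reliability-based optimization loop.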
340

La résolution du problème de formation de cellules dans un contexte multicritère

Ahadri, Mohamed Zaki (has links)
Group technology techniques are now widely used in many manufacturing workshops; they decompose industrial systems into subsystems, or cells, made up of parts and machines. Finding the most effective grouping is formulated in operations research as the cell formation problem. Solving this problem yields several advantages, such as reduced inventory and simplified scheduling. Several criteria can be imposed through the problem's constraints, such as intercellular flow, intracellular load balancing, subcontracting costs, and machine duplication costs. The cell formation problem is an NP-hard optimization problem; exact methods therefore cannot solve large instances within a reasonable time, whereas heuristics can generate solutions of lower quality in a reasonable execution time. In this thesis, we consider the problem in a bi-objective setting specified in terms of a cell autonomy factor and load balancing between the cells. We present three types of metaheuristics for its resolution and compare them numerically. Furthermore, for small instances that can be solved exactly with CPLEX, we verify that these metaheuristics reach the optimal value.
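The two objectives can be made concrete with a small evaluation routine on the machine-part incidence matrix; the autonomy and balance definitions below are plausible readings of the abstract, not the thesis's exact formulations, and the toy instance is invented:

```python
import numpy as np

def objectives(A, part_cell, mach_cell, n_cells):
    """Two objectives for a candidate cell formation.
    A[i, j] = 1 if part j visits machine i (machine-part incidence matrix).
    Autonomy: fraction of operations served inside their own cell (maximise).
    Balance: spread of per-cell workload (minimise)."""
    inside = sum(A[i, j]
                 for i, j in zip(*np.nonzero(A))
                 if mach_cell[i] == part_cell[j])
    autonomy = inside / A.sum()
    loads = np.array([sum(A[i, :].sum() for i in range(A.shape[0])
                          if mach_cell[i] == c)
                      for c in range(n_cells)])
    return autonomy, loads.max() - loads.min()

# Toy instance: 4 machines, 5 parts, 2 cells (assignments are hypothetical).
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]])
print(objectives(A, part_cell=[0, 0, 1, 1, 1], mach_cell=[0, 0, 1, 1], n_cells=2))
```

A metaheuristic for this problem searches over the `part_cell` and `mach_cell` assignments, trading the two objective values off along a Pareto front.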
