  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Optimising production control for reduction of allergen contamination.

van Gestel, Patrick January 2014 (has links)
The purpose of this work was to provide an integrated solution to the problem of optimising plant production flow while also optimising allergen control: that is, to improve process flows, improve equipment utilisation, reduce work-in-process (WIP) inventory, and reduce unnecessary movement of stock, while also optimising allergen control in the area under investigation. Process improvements introduced to the plant during the project resulted in a 7% saving in labour cost, reduced plant variability, reduced allergen cross-contamination risk, reduced WIP, lower consumption of consumables, and increased equipment utilisation. Discrete event simulation software was used to determine the preferred strategy for implementing allergen control in a food-producing FMCG plant. Three preferred allergen control strategies were identified by the Company, which were then modelled and analysed for their impact on labour cost. Furthermore, the effect of plant layout on labour cost was studied.
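The batch-sequencing idea behind allergen changeover control can be illustrated with a toy discrete event model (a simplification, not the thesis's simulation; the processing times, clean-down times, and strategies here are invented for illustration):

```python
import random

def simulate(batches, changeover_min, process_min):
    """Total line time: per-batch processing plus an allergen changeover
    (clean-down) whenever consecutive batches differ in allergen status."""
    total, prev = 0.0, None
    for allergen in batches:
        if prev is not None and allergen != prev:
            total += changeover_min  # clean-down between allergen states
        total += process_min
        prev = allergen
    return total

random.seed(1)
batches = [random.choice([True, False]) for _ in range(20)]

# Strategy A: run batches in arrival order.
as_is = simulate(batches, changeover_min=45, process_min=30)
# Strategy B: sequence allergen-free batches first, allergen batches last.
sequenced = simulate(sorted(batches), changeover_min=45, process_min=30)
```

Sequencing allergen-free batches before allergen-containing ones bounds the number of clean-downs at one, which is the kind of trade-off a simulation study can quantify against labour cost and WIP.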
2

Mathematical modelling of nonlinear dynamic systems

Njabeleke, Ignatius Andem January 1995 (has links)
No description available.
3

Time-predefined and trajectory-based search : single and multiobjective approaches to exam timetabling

Bykov, Yuri January 2003 (has links)
No description available.
4

Modular design of a vehicle body using robust optimisation methods

Charrier, Martin 18 September 2018 (has links)
In the automotive industry, sales volumes are such that every kilogram saved on a vehicle generates huge gains: lower raw-material cost, and better customer value through reduced consumption and carbon tax. Finite-element simulation and optimisation are now an integral part of vehicle development processes: crash, NVH (Noise and Vibration Harshness), aerodynamics, fluid mechanics... Optimisation reduces the mass of a vehicle by varying design variables under the constraints of the specification. The number of simulations required by a classical optimisation is between 3 and 10 times the number of quantitative variables (e.g. thicknesses, shape parameters). This number grows rapidly if the problem contains qualitative variables (presence/absence/alternatives of parts). In that case, when simulations are as expensive as crash analyses, optimisation algorithms become inefficient. For this reason, optimisation is used late in the development cycle, once the number of qualitative variables has been reduced by several strategic decisions (body architecture, parts to be re-used, manufacturing plants...). Such decisions are not always well informed, especially when the schedule is tight and data are unavailable, and a poor choice at this stage can prove very costly later. The method proposed in the first chapters of this thesis uses a Branch and Bound algorithm to extend the scope of optimisation by allowing a large number of qualitative variables, as well as a rapid adaptation to possible changes of constraints during the optimisation. With these two features, new qualitative variables that are usually pre-constrained by strategic decisions can be taken into account as soon as numerical models are available, and different scenarios linked to different sets of strategic constraints can then be compared; decision-making tools such as Pareto frontiers help to visualise the optimal scenarios. The following chapters are dedicated to the ReCUR model reduction method, which complements the algorithmic improvement with a drastic reduction in the number of simulations needed to build a model of the vehicle's crash behaviour. The method overcomes the traditional curse of dimensionality by modelling in terms of variables expressed not at the level of parts but at the nodes (or elements) of the mesh. The two versions of ReCUR tested to date are presented, and each is applied to the optimisation of a complete vehicle body. Two methods that would allow ReCUR to take geometric part-alternative variables into account are also proposed.
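The Branch and Bound treatment of qualitative variables (presence/absence/alternatives of parts) can be sketched on a toy problem: minimise mass over discrete part alternatives subject to a stiffness constraint standing in for the crash/NVH specification (all numbers are invented; this is not the thesis's formulation):

```python
import math

# Each "qualitative variable" picks one alternative per part; each alternative
# carries a mass and a stiffness contribution. Minimise mass subject to a
# minimum total stiffness.
parts = [
    [(4.0, 10.0), (6.0, 16.0)],                  # (mass, stiffness) alternatives
    [(3.0, 8.0), (5.0, 14.0), (7.0, 20.0)],
    [(2.0, 5.0), (4.0, 12.0)],
]
MIN_STIFFNESS = 38.0

best = {"mass": math.inf, "choice": None}

def branch(i, mass, stiff, choice):
    # Bound: even the lightest remaining alternatives cannot beat the incumbent.
    lower = mass + sum(min(m for m, _ in p) for p in parts[i:])
    if lower >= best["mass"]:
        return
    # Bound: even the stiffest remaining alternatives cannot meet the constraint.
    reachable = stiff + sum(max(s for _, s in p) for p in parts[i:])
    if reachable < MIN_STIFFNESS:
        return
    if i == len(parts):
        best["mass"], best["choice"] = mass, tuple(choice)
        return
    for k, (m, s) in enumerate(parts[i]):
        branch(i + 1, mass + m, stiff + s, choice + [k])

branch(0, 0.0, 0.0, [])
```

Because infeasible or provably heavier subtrees are pruned before recursion, only a fraction of the alternative combinations is ever evaluated — the property that makes the approach attractive when each evaluation is an expensive simulation.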
5

From sequential to parallel: tree search and its application to quadratic programming in 0-1 variables /

Roucairol, Catherine, January 1900 (has links)
Doctoral thesis (Sciences), Paris VI, 1987. / Bibliography pp. 306-309.
6

Mass minimisation and improvement of the dynamic or static performance of thin axisymmetric structures.

Mazoyer, Marie-Josèphe, January 1900 (has links)
Doctoral engineering thesis, Besançon, 1978. No. 88.
7

Robust multi-objective vehicle routing problems

Bederina, Hiba 14 May 2018 (has links)
The main objective of this thesis is to help adapt vehicle routing problems (VRP) to real-world settings, with a focus on two main axes: handling uncertainty through robust optimisation, and optimising several criteria simultaneously through multi-objective optimisation. First, we model the VRP under uncertainty by proposing a new robustness criterion, called "Maximizing the Number of Scenarios Qualified by the Worst" (MNSQW), which was evaluated using two approaches: an exact method and a meta-heuristic. In the second part, we study the robust multi-objective resolution of the capacitated VRP (CVRP) with uncertainty on travel costs. A hybrid multi-objective evolutionary algorithm is proposed to optimise travel cost and fleet size simultaneously. Experiments were carried out on state-of-the-art instances, and the proposed approach was compared with an exact method and two meta-heuristics from the literature. The results show that our approach reaches almost all of the optimal (Pareto) solutions and establishes two new bounds on another instance; it improves on the entire set of results of the first meta-heuristic and is competitive with the second. The third part of the thesis is devoted to another VRP variant: the Team Orienteering Problem (TOP). We propose a hybrid multi-objective evolutionary approach to a multi-objective formulation of this problem, optimising the collected profit and the total travel cost simultaneously. The experiments confirm the conflicting behaviour of the optimised objectives, and a comparison with three approaches from the literature shows an improvement of some bounds (four new ones were obtained). Finally, we study a robust variant of the TOP (RTOP), which was solved by adapting the algorithm used for the deterministic variant.
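The multi-objective side of the abstract rests on Pareto dominance between objective pairs such as (travel cost, fleet size); a minimal non-dominated filter might look like this (illustrative data, not instances from the thesis):

```python
def pareto_front(solutions):
    """Keep solutions not dominated on (travel_cost, fleet_size):
    t dominates s if t is <= s in both objectives and differs in at least one."""
    front = []
    for s in solutions:
        dominated = any(
            t[0] <= s[0] and t[1] <= s[1] and t != s
            for t in solutions
        )
        if not dominated:
            front.append(s)
    return front

candidates = [(120.0, 4), (150.0, 3), (130.0, 4), (150.0, 5), (110.0, 5)]
front = pareto_front(candidates)  # (130.0, 4) and (150.0, 5) are dominated
```

An evolutionary algorithm of the kind described would apply such a filter to its population each generation, steering the search toward the whole trade-off curve rather than a single weighted optimum.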
8

Constrained, non-linear, derivative-free parallel optimization of continuous, high computing load, noisy objective functions.

Vanden Berghen, Frank 28 June 2004 (has links)
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the fewest possible evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can take from 2 minutes to 2 days). The algorithm therefore tries to minimize the number of evaluations of F(x), at the cost of a large amount of internal routine work. CONDOR is a derivative-free optimization tool (i.e., the derivatives of F(x) are not required). The only information needed about the objective function is a simple method (written in Fortran, C++,...) or a program (a Unix, Windows, Solaris,... executable) that can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise in the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers; different computer architectures can be mixed and used simultaneously to deliver huge computing power. The optimizer makes simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers and may be the fastest optimizer in its category (fastest in terms of the number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004). 
CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
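CONDOR's trust-region machinery is far more sophisticated, but the core loop of derivative-free minimisation under an evaluation budget can be illustrated with a simple compass (pattern) search, where the shrinking poll radius plays a role loosely analogous to the trust region (the objective and all numbers here are illustrative):

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    """Derivative-free minimisation: poll each coordinate direction,
    accept the first improving point, otherwise halve the step."""
    x, fx, evals = list(x0), f(x0), 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                evals += 1  # every call to f counts against the budget
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # no progress: shrink the poll radius

    return x, fx, evals

# A cheap quadratic stands in for the expensive simulation-based F(x).
objective = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
x, fx, evals = compass_search(objective, [0.0, 0.0])
```

Tracking `evals` explicitly mirrors the abstract's framing: when one evaluation takes minutes to days, the number of calls to F(x), not CPU time spent in the optimizer itself, is the figure of merit.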
9

Structural optimisation using the principle of virtual work

Walls, Richard Shaun 24 May 2011 (has links)
MSc (Eng), School of Civil and Environmental Engineering, Faculty of Engineering and the Built Environment, University of the Witwatersrand, 2010 / This dissertation presents a new method for the automated optimisation of structures. The method has been developed to: (1) select sections that satisfy strength and deflection requirements using minimum material, and (2) group members efficiently. The member selection method is based on the principle of virtual work and is called the Virtual Work Optimisation (VWO) method. It addresses multiple deflection and load case constraints simultaneously, and determines which sections provide the highest deflection and strength resistance per unit mass. When compared to several other methods in the literature, and to designs from industry, the VWO method produced savings of up to 15.1%. A parametric investigation of ungrouped, multi-storey frames is conducted using the VWO method to determine optimal mass and stiffness distributions. Unusual mass patterns were found: diagonal paths of increased stiffness form in the frames, which suggests truss-like behaviour. A grouping algorithm is presented which determines how to create a specified number of groups in a structure efficiently. The VWO method has been incorporated into the automated algorithm to optimise the grouped structures, with members grouped according to their mass per unit length. The algorithm carries out an exhaustive search of all feasible grouping permutations and selects the lightest structure. Results produced are up to 5.9% lighter than those obtained using ad hoc grouping configurations found in the literature and based on experience.
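The virtual-work selection principle — choose the lightest section whose contribution of the form ∫ mM/(EI) dx keeps the member's deflection share within budget — can be sketched as follows (a toy catalogue with invented section properties and constant moments along the member; this is not the VWO method itself):

```python
# Hypothetical section catalogue: (name, mass per metre in kg/m, I in m^4).
CATALOGUE = [
    ("S1", 20.0, 2.0e-5),
    ("S2", 30.0, 4.5e-5),
    ("S3", 45.0, 8.0e-5),
]
E = 210e9  # Young's modulus of steel, Pa

def deflection_contribution(m_virtual, M_real, length, I):
    """Virtual-work term for one member: integral of m*M/(E*I) dx,
    approximated here by assuming constant moments along the member."""
    return m_virtual * M_real * length / (E * I)

def select_section(m_virtual, M_real, length, budget):
    """Lightest catalogue section whose virtual-work deflection
    contribution stays within the member's share of the deflection limit."""
    for name, mass, I in sorted(CATALOGUE, key=lambda s: s[1]):
        if deflection_contribution(m_virtual, M_real, length, I) <= budget:
            return name, mass
    return CATALOGUE[-1][0], CATALOGUE[-1][1]  # fall back to the stiffest

# m_virtual: virtual moment (kNm per unit load), M_real: real moment (Nm).
section = select_section(m_virtual=0.8, M_real=50e3, length=6.0, budget=0.02)
```

Ranking members by such contributions per unit mass is what lets a virtual-work method direct material to where it buys the most stiffness, which is consistent with the diagonal stiffness paths reported in the abstract.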
10

Multi particle swarm optimisation algorithm applied to supervisory power control systems

Sallama, Abdulhafid Faraj January 2014 (has links)
Power quality problems come in numerous forms (commonly spikes, surges, sags, outages and harmonics), and their resolution can cost from a few hundred to millions of pounds, depending on the size and type of problem experienced by the power network. They commonly manifest as burnt-out motors, corrupt data on hard drives, unnecessary downtime and increased maintenance costs. To minimise such events, the network can be monitored and controlled with a specific control regime for dealing with particular faults. This study developed a control and optimisation system and applied it to the stability of electrical power networks using artificial intelligence techniques. An intelligent controller was designed to control and optimise simulated models of electrical power system stability: a fuzzy logic controller controlled the power generation, while particle swarm optimisation (PSO) techniques optimised the system's power quality in normal operating conditions and after faults. Different types of PSO were tested, and a multi-swarm (M-PSO) system was then developed to give better optimisation results in terms of accuracy and convergence speed. The developed optimisation algorithm was tested on seven benchmarks and compared to the single-swarm PSO variants. The controller and optimisation algorithm were then applied to power system stability control. Two electrical network models were used (with two and four generators), controlled by fuzzy logic controllers tuned using the optimisation algorithm. The system automatically selected the optimal controller parameters for normal and fault conditions during operation of the power network. A multi-objective cost function was used, based on minimising recovery time, overshoot and steady-state error. A supervisory control layer was introduced to detect and diagnose faults and then apply the correct controller parameters. Different fault scenarios were used to test the system performance. 
The results indicate that the proposed power system stabiliser has great potential as a superior alternative to conventional control systems.
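A multi-swarm PSO can be approximated, in a much-simplified form, by running several independently seeded swarms and keeping the best result (the abstract does not describe the actual M-PSO coordination scheme, so this sketch makes no claim to reproduce it; all coefficients are common textbook defaults):

```python
import random

def pso(f, dim, bounds, n_particles=15, iters=60, rng=None):
    """One basic PSO swarm: inertia + cognitive + social velocity update."""
    rng = rng or random.Random()
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

def multi_swarm(f, dim, bounds, n_swarms=4, seed=0):
    """Simplified multi-swarm scheme: independent swarms, best result kept."""
    runs = [pso(f, dim, bounds, rng=random.Random(seed + k))
            for k in range(n_swarms)]
    return min(runs, key=lambda r: r[1])

sphere = lambda v: sum(x * x for x in v)  # stand-in benchmark function
best_x, best_f = multi_swarm(sphere, dim=3, bounds=(-5.0, 5.0))
```

Running several swarms trades extra evaluations for robustness against any single swarm stagnating — one plausible reason a multi-swarm scheme can improve accuracy and convergence consistency over a single PSO.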
