181 |
A Study on Integrated Transportation and Facility Location Problem. Oyewole, Gbeminiyi John, January 2019.
The focus of this thesis is the development and solution of problems that simultaneously involve planning the location of facilities and transportation decisions from those facilities to consumers. Such problems are termed integrated distribution planning problems and have practical applications in logistics and manufacturing. The integration spans short-, medium- and long-term planning horizons, and sub-optimal decisions are likely when these horizons are considered separately.
Two categories of problems were considered under the integrated distribution models. The first is referred to as the Step-Fixed Charge Location and Transportation Problem (SFCLTP); the second is termed the Fixed Charge Solid Location and Transportation Problem (FCSLTP). In these models, the facility location problem is treated as a strategic, long-term decision. The short- to medium-term decisions considered are the Step-Fixed Charge Transportation Problem (SFCTP) and the Fixed Charge Solid Transportation Problem (FCSTP). Both SFCTP and FCSTP are extensions of the classical transportation problem, requiring a trade-off between fixed and variable costs along the transportation routes to minimize total transportation costs.
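For reference, the classical fixed charge transportation problem that both extensions build on can be stated as follows (the notation here is generic, not necessarily the thesis's own):

\[
\min \sum_{i=1}^{m}\sum_{j=1}^{n} \left( c_{ij}\, x_{ij} + f_{ij}\, y_{ij} \right)
\quad \text{s.t.} \quad
\sum_{j} x_{ij} \le a_i, \qquad
\sum_{i} x_{ij} = b_j, \qquad
0 \le x_{ij} \le M y_{ij}, \qquad
y_{ij} \in \{0,1\},
\]

where \(c_{ij}\) is the variable unit cost, \(f_{ij}\) the fixed charge incurred whenever route \((i,j)\) carries flow, \(a_i\) the supply at source \(i\), \(b_j\) the demand at destination \(j\), and \(M\) a sufficiently large constant. The step-fixed charge variant replaces the single charge \(f_{ij}\) with a staircase function of the quantity shipped, and the solid variant adds a third index for the conveyance (mode of transport).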
Linearization and subsequent local improvement search techniques were developed to solve the SFCLTP. The first technique is a hands-on solution, illustrated with a numerical example, in which linearization supplies the initial (primal) solution and structured perturbation logic then improves on it. The second technique likewise uses linearization for a base solution, combined with heuristics that construct transportation problems; solving these yields solutions competitive in effectiveness (solution value) with those obtainable from standard solvers such as CPLEX.
The FCSLTP is formulated and solved using the CPLEX commercial optimization suite. A Lagrange Relaxation Heuristic (LRH) and a Hybrid Genetic Algorithm (HGA) are presented as alternative solutions, and comparative studies between the FCSTP and FCSLTP formulations are reported. The LRH is demonstrated with a numerical example and extended with the aim of generating improved upper bounds. CPLEX produced better lower and upper bounds than the extended LRH, but its solution time grew exponentially with problem size. The FCSTP was recommended as a possible starting solution for the FCSLTP, since experimentation showed it has a lower solution time and reliably generates feasible solutions.
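The bounding logic of a Lagrangian relaxation heuristic can be sketched generically (a schematic, not the thesis's exact derivation): relaxing a set of complicating constraints \(Bx = b\) with multipliers \(\lambda\) gives

\[
L(\lambda) = \min_{(x,y) \in X} \; c^{\top}x + f^{\top}y + \lambda^{\top}(b - Bx) \;\le\; z^{*},
\qquad
\lambda^{k+1} = \lambda^{k} + s_k \left( b - Bx^{k} \right),
\]

so each evaluation of \(L(\lambda)\) yields a lower bound on the optimal value \(z^{*}\), the subgradient update (with step size \(s_k\)) tightens that bound across iterations, and repairing the relaxed solution \(x^{k}\) to feasibility supplies the upper bounds referred to above.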
The Hybrid Genetic Algorithm (HGA) developed integrates cost relaxation, a greedy heuristic, and a modified stepping-stone method into the GA framework to further explore the solution search space. Comparative studies tested the performance of the HGA against the Lagrange heuristics developed and against CPLEX. The results suggest that the HGA is competitive with a commercial solver such as CPLEX. / Thesis (PhD)--University of Pretoria, 2019. / Industrial and Systems Engineering / PhD / Unrestricted
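To make the hybrid scheme concrete, below is a minimal, self-contained sketch of a GA over facility-open vectors in which a greedy transportation heuristic stands in for the cost relaxation and stepping-stone components described above. All data and helper names are invented for illustration; this is not the thesis's implementation:

    import random

    # Toy instance: 3 candidate facilities x 4 customers (illustrative numbers)
    supply = [30, 25, 20]
    demand = [10, 15, 12, 8]
    open_cost = [50, 40, 60]
    unit_cost = [[4, 6, 5, 3],
                 [5, 4, 7, 6],
                 [6, 5, 4, 5]]

    def fitness(mask):
        """Facility fixed costs plus a greedy transportation cost (a stand-in
        for the cost relaxation / modified stepping-stone improvement)."""
        if not any(mask):
            return float("inf")
        left = supply[:]
        total = sum(f for f, o in zip(open_cost, mask) if o)
        for j, d in enumerate(demand):
            need = d
            # serve each customer from the cheapest open facilities first
            for i in sorted(range(len(supply)), key=lambda i: unit_cost[i][j]):
                if mask[i] and left[i] > 0 and need > 0:
                    q = min(left[i], need)
                    left[i] -= q
                    need -= q
                    total += q * unit_cost[i][j]
            if need > 0:
                return float("inf")  # demand unmet: infeasible
        return total

    def tournament(pop):
        a, b = random.sample(pop, 2)
        return a if fitness(a) <= fitness(b) else b

    def hybrid_ga(pop_size=20, gens=50, pm=0.1):
        pop = [[random.random() < 0.5 for _ in supply] for _ in range(pop_size)]
        best = min(pop, key=fitness)
        for _ in range(gens):
            nxt = []
            for _ in range(pop_size):
                p1, p2 = tournament(pop), tournament(pop)
                # uniform crossover, then bit-flip mutation
                child = [x if random.random() < 0.5 else y for x, y in zip(p1, p2)]
                child = [(not g) if random.random() < pm else g for g in child]
                nxt.append(child)
            pop = nxt
            cand = min(pop, key=fitness)
            if fitness(cand) < fitness(best):
                best = cand
        return best, fitness(best)

    print(hybrid_ga())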
|
182 |
Performance Evaluation of a Suburban Railway Line Partially Equipped with a CBTC System. Pochet, Juliette, 12 January 2018.
In high-density areas, the demand for railway transportation is continuously increasing. Operating companies are turning to new intelligent signaling and control systems, such as Communication Based Train Control (CBTC) systems, previously deployed only on metro systems. CBTC systems operate trains in automatic pilot and lead to significantly improved performance, increasing line capacity without expensive modification of infrastructure. They can also include a supervision module in charge of regulating train movements in case of disturbance, adapting train behavior to operating objectives and thus increasing the robustness of the traffic. In the literature on real-time traffic management, various methods have been proposed to supervise and reschedule trains, on the one hand for metro systems and on the other hand for heavy railway systems. Making the most of the state of the art in both fields, the work presented in this manuscript contributes to adapting the rescheduling functions of CBTC systems to the operation of suburban railway lines.
Our approach starts by designing the functional architecture of a supervision module for a standard CBTC system. We then propose a rescheduling method based on a model predictive control approach and a multi-objective optimization of automatic train commands. Precisely evaluating the performance of a suburban railway line equipped with a CBTC system requires a suitable microscopic simulation tool. This manuscript presents SIMONE, the SNCF tool that provides a functionally and dynamically realistic simulation of a railway system including a CBTC system; the objectives of this thesis naturally led us to take part, with the SNCF team, in the specification, design, and implementation of that tool. Finally, using SIMONE, the proposed rescheduling method was tested on scenarios involving disturbances and compared, to assess solution quality, with an individual rescheduling method based on a simple heuristic. The multi-objective method provides good solutions to the rescheduling problem, in most cases more satisfactory than those of the individual method, with a computation time judged acceptable. The manuscript ends with promising directions for future research.
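A highly simplified sketch of the receding-horizon idea behind such a rescheduling method follows; the toy plant model, command set, and objective priorities are invented for illustration (a lexicographic stand-in for the multi-objective search, not the thesis's models):

    import itertools

    def simulate(state, commands):
        """Toy plant: state = (delay_s, speed_frac); each command nudges speed,
        and higher speed recovers delay. A stand-in for the microscopic
        simulator (SIMONE) used in the thesis."""
        delay, speed = state
        for c in commands:
            speed = max(0.0, min(1.0, speed + c))
            delay = max(0.0, delay - speed * 10.0)
        return (delay, speed)

    def cost(state, commands):
        delay, _ = simulate(state, commands)
        energy = sum(abs(c) for c in commands)
        return (delay, energy)  # two objectives: punctuality first, then energy

    def mpc_step(state, horizon=3):
        # Enumerate coarse command profiles over the horizon and pick the best
        # in lexicographic order (delay, then energy).
        options = itertools.product([-0.2, 0.0, 0.2], repeat=horizon)
        best = min(options, key=lambda cs: cost(state, cs))
        return best[0]  # receding horizon: apply only the first command

    state = (60.0, 0.5)  # train 60 s late, at half of maximum speed
    for step in range(5):
        u = mpc_step(state)
        state = simulate(state, [u])
        print(step, round(u, 2), round(state[0], 1))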
|
183 |
Dynamic electronic asset allocation comparing genetic algorithm with particle swarm optimization. Islam, Md Saiful, 12 1900.
Indiana University-Purdue University Indianapolis (IUPUI) / The contribution of this research work can be divided into two main tasks: 1) implementing the Electronic Warfare Asset Allocation Problem (EWAAP) with a Genetic Algorithm (GA); 2) comparing the performance of the Genetic Algorithm to the Particle Swarm Optimization (PSO) algorithm. The Genetic Algorithm was implemented in C++, using Qt Data Visualization to display the three-dimensional space, pheromones, and terrain, and it preserved the coding style, data structures, and visualization of the existing PSO implementation. Although the Genetic Algorithm achieves higher fitness values and better global solutions for 3 or more receivers, it increases the running time. The Genetic Algorithm is around 15-30% more accurate for asset counts from 3 to 6 but requires 26-82% more computational time. When the allocation problem's complexity increases by adding 3D space, pheromones, and complex terrain, GA is 3.71% more accurate but 121% slower than PSO. In summary, the Genetic Algorithm gives a better global solution in some cases, but its computational time is higher than that of Particle Swarm Optimization.
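For readers comparing the two metaheuristics, the canonical PSO update that the GA is measured against here is

\[
v_i^{t+1} = \omega\, v_i^{t} + c_1 r_1 \left( p_i - x_i^{t} \right) + c_2 r_2 \left( g - x_i^{t} \right),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1},
\]

where \(p_i\) is particle \(i\)'s best known position, \(g\) the swarm's best, \(\omega\) the inertia weight, \(c_1, c_2\) acceleration coefficients, and \(r_1, r_2\) uniform random numbers. A GA instead evolves a population through selection, crossover, and mutation, which is one plausible source of the accuracy-versus-runtime trade-off reported above.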
|
184 |
COST-EFFECTIVE STRATEGY FOR THE INVESTIGATION AND REMEDIATION OF POLLUTED SOIL USING GEOSTATISTICS AND A GENETIC ALGORITHM APPROACH. Yongqiang, Cui, 23 March 2016.
Kyoto University / 0048 / New-system doctorate by coursework / Doctor of Engineering / Dissertation No. Kō 19697 / Engineering Doctorate No. 4152 / Call no. 新制||工||1641 (University Library) / 32733 / Department of Urban and Environmental Engineering, Graduate School of Engineering, Kyoto University / Examining committee: (Chair) Professor 米田 稔, Professor 清水 芳久, Associate Professor 藤川 陽子 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
185 |
Multi-objective design optimization framework for structural health monitoring. Parker, Danny Loren, 30 April 2011.
The purpose of this dissertation is to demonstrate that health monitoring systems can be designed from a systematic perspective and that, with proper sensor and actuator placement, damage occurring in a structure can be detected and tracked. To this end, a design optimization was performed to determine the best locations at which to excite the structure and to collect data while using the minimum number of sensors. The sensors used in this design optimization were uniaxial accelerometers, although the design techniques presented here are not limited to accelerometers: they allow for any type of sensor (thermal, strain, electromagnetic, etc.) and find the optimal locations with respect to defined objective functions (sensitivity, cost, etc.). The use of model-based optimization for the design of the monitoring system is driven by the desire to obtain the best possible performance given what is known about the system prior to implementation. A model is more systematic than human judgment: by exploiting information about the dynamical response of the system, it can take far more into account than even an experienced structural engineer. It is understood in the context of structural modeling that no model is 100% accurate, so any design produced using model-based techniques should be tolerant of modeling errors; demonstrations performed in the past have shown that poorly placed sensors can be very insensitive to damage development. To perform the optimization, a multi-objective genetic algorithm (GA) was employed. The objectives were to be highly sensitive to damage occurring in potential "hot spots" while maintaining the ability to detect damage occurring elsewhere in the structure, to remain robust to modeling errors, and to minimize the number of sensors and actuators used. The optimization considered only accelerometer placement, but it could have considered other sensor types (e.g., strain, magnetostrictive) or any combination thereof.
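As an illustration of the kind of trade-off such a multi-objective search navigates, the sketch below enumerates the Pareto front of a toy placement problem, trading a simple mode-shape-based sensitivity score (weighted toward assumed hot spots) against sensor count. The data and scoring rule are invented; a GA like the one in the dissertation would search this space when exhaustive enumeration is intractable:

    import itertools

    # Toy modal matrix: rows = candidate sensor locations, cols = mode shapes
    phi = [[0.9, 0.1],
           [0.5, 0.8],
           [0.2, 0.9],
           [0.7, 0.4],
           [0.3, 0.6]]
    hot = {0, 2}  # locations near assumed damage hot spots (weighted higher)

    def objectives(mask):
        """(negated sensitivity, sensor count): both to be minimized."""
        sens = sum(sum(v * v for v in phi[i]) * (2.0 if i in hot else 1.0)
                   for i, m in enumerate(mask) if m)
        return (-sens, sum(mask))

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    masks = list(itertools.product([0, 1], repeat=len(phi)))
    objs = {m: objectives(m) for m in masks}
    front = [m for m in masks
             if not any(dominates(objs[o], objs[m]) for o in masks)]
    for m in sorted(front, key=sum):
        print(m, objs[m])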
|
186 |
GENETIC ALGORITHMS FOR SAMPLE CLASSIFICATION OF MICROARRAY DATA. Liu, Dongqing, 23 September 2005.
No description available.
|
187 |
Adaptive Control Strategy for Isolated Intersection and Traffic Network. Shao, Chun, 09 June 2009.
No description available.
|
188 |
Reconstruction of the Temperature Profile Along a Blackbody Optical Fiber Thermometer. Barker, David Gary, 08 April 2003.
A blackbody optical fiber thermometer consists of an optical fiber whose sensing tip is given a metallic coating. The sensing tip of the fiber forms an isothermal cavity, and the emission from this cavity is approximately equal to the emission from a blackbody. Standard two-color optical fiber thermometry involves measuring the spectral intensity at the end of the fiber at two wavelengths. The temperature at the sensing tip of the fiber can then be inferred using Planck's law and the ratio of the spectral intensities. If, however, the length of the optical fiber is exposed to elevated temperatures, erroneous temperature measurements will occur due to emission by the fiber. This thesis presents a method to account for emission by the fiber and accurately infer the temperature at the tip of the optical fiber. Additionally, an estimate of the temperature profile along the fiber may be obtained.
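The two-color inference mentioned here follows from Planck's law; in the Wien approximation, the ratio of spectral intensities at two wavelengths gives the temperature in closed form (a standard result, included for context):

\[
R = \frac{I(\lambda_1, T)}{I(\lambda_2, T)}
\approx \left( \frac{\lambda_2}{\lambda_1} \right)^{5}
\exp\!\left[ \frac{C_2}{T}\left( \frac{1}{\lambda_2} - \frac{1}{\lambda_1} \right) \right]
\quad\Longrightarrow\quad
T = \frac{C_2 \left( 1/\lambda_2 - 1/\lambda_1 \right)}{\ln R - 5 \ln\left( \lambda_2/\lambda_1 \right)},
\]

with \(C_2 \approx 1.4388 \times 10^{-2}\ \mathrm{m \cdot K}\) the second radiation constant. It is emission along a heated fiber, unaccounted for in this ratio, that produces the erroneous measurements described above.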
A mathematical relation for radiation transfer down the optical fiber is developed. The radiation exiting the fiber and the temperature profile along the fiber are related to the detector signal by a signal measurement equation. Since the temperature profile cannot be solved for directly using the signal measurement equation, two inverse minimization techniques are developed to find the temperature profile. Simulated temperature profile reconstructions show the techniques produce valid and unique results. Tip temperatures are reconstructed to within 1.0%.
Experimental results are also presented. Limitations of the detection system and the optical fiber probe make the uncertainty in the signal measurement equation high, and limitations of the laboratory furnace and the optical detector make the measurement uncertainty high as well, so the reconstructions are not always accurate. Even so, the tip temperatures are reconstructed to within 1%, a significant improvement over the standard two-color technique under the same conditions. Improvements are recommended that would decrease both the measurement uncertainty and the uncertainty in the signal measurement equation, leading to a reliable and accurate temperature measurement device.
|
189 |
Food Shelf Life: Estimation and Experimental Design. Larsen, Ross Allen Andrew, 15 May 2006.
Shelf life is a parameter of the lifetime distribution of a food product, usually the time until a specified proportion (1-50%) of the product has spoiled according to taste. The data used to estimate shelf life typically come from a planned experiment in which sampled food items are observed at specified times, with the observation times usually selected adaptively using 'staggered sampling.' Ad-hoc methods based on linear regression have been recommended to estimate shelf life, while other methods based on maximum likelihood estimation (MLE) have been proposed, studied, and used; both approaches assume the Weibull distribution. The observed lifetimes in shelf life studies are censored, a fact that the ad-hoc methods largely ignore. One purpose of this project is to compare the statistical properties of the ad-hoc estimators and the maximum likelihood estimator. The simulation study showed that the MLE methods have higher coverage than the regression methods, better asymptotic properties with regard to bias, and lower median squared error (MSE) values, especially when shelf life is defined by smaller percentiles; they should therefore be used in practice. A genetic algorithm (Hamada et al. 2001) was then used to find near-optimal sampling designs and was successfully programmed for general shelf life estimation. The genetic algorithm generally produced designs with much smaller median squared errors than the staggered designs commonly used in practice, and these designs were radically different from the standard ones. The genetic algorithm may thus be used to plan future studies that have good estimation properties.
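A minimal sketch of the censored Weibull likelihood approach follows; the taste-panel data, censoring time, and the 10th-percentile shelf life definition are invented for illustration (this is not the thesis's code or data):

    import math
    from scipy.optimize import minimize

    # Toy taste-panel data: spoilage times in days; None = still acceptable
    # at day 30 (right-censored).
    obs = [12.0, 18.0, 22.0, None, 25.0, None, 15.0, None]
    CENSOR_AT = 30.0

    def neg_log_lik(params):
        k, lam = params  # Weibull shape and scale
        if k <= 0 or lam <= 0:
            return float("inf")
        ll = 0.0
        for t in obs:
            if t is None:  # censored item contributes log survival probability
                ll += -(CENSOR_AT / lam) ** k
            else:          # observed spoilage contributes log density
                ll += (math.log(k / lam) + (k - 1) * math.log(t / lam)
                       - (t / lam) ** k)
        return -ll

    fit = minimize(neg_log_lik, x0=[1.0, 20.0], method="Nelder-Mead")
    k_hat, lam_hat = fit.x
    # Shelf life defined as the time by which 10% of items spoil
    shelf_life = lam_hat * (-math.log(0.9)) ** (1.0 / k_hat)
    print(k_hat, lam_hat, shelf_life)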
|
190 |
Behavior of Variable-Length Genetic Algorithms Under Random Selection. Stringer, Harold, 01 January 2007.
In this work, we show how a variable-length genetic algorithm naturally evolves populations whose mean chromosome length grows shorter over time. This reduction in chromosome length occurs when fitness-based selection is absent from the GA, i.e., when parents are chosen at random. Specifically, we divide the mating space into five distinct areas and provide a probabilistic and empirical analysis of the ability of matings in each area to produce children shorter than the parent generation's average size. Diversity of size within a GA's population is shown to be a necessary condition for a reduction in mean chromosome length to take place. We show how a finite variable-length GA under random selection uses 1) diversity of size within the population, 2) over-production of shorter-than-average individuals, and 3) the imperfect nature of random sampling during selection to naturally reduce the average size of individuals from one generation to the next. In addition to these findings, this work provides GA researchers and practitioners with 1) a number of mathematical tools for analyzing possible size reductions for various matings and 2) new ideas to explore in the area of bloat control.
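The setting is easy to reproduce in a minimal simulation; the crossover scheme below (independent cut points in each parent, as in common variable-length GAs) is an illustrative stand-in, not the paper's exact operator:

    import random

    def next_generation(pop):
        """One generation under pure random selection with one-point crossover
        at independently chosen cut points, so offspring lengths vary."""
        nxt = []
        while len(nxt) < len(pop):
            a, b = random.choice(pop), random.choice(pop)
            ca = random.randint(0, len(a))
            cb = random.randint(0, len(b))
            nxt.append(a[:ca] + b[cb:])
        return nxt

    random.seed(1)
    pop = [[0] * random.randint(5, 50) for _ in range(200)]  # diverse sizes
    for gen in range(30):
        pop = next_generation(pop)
        mean_len = sum(len(c) for c in pop) / len(pop)
        print(gen, round(mean_len, 1))  # watch the mean length drift per generation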
|