Haney, Mark A.
01 January 2006
In today's competitive business environment, a firm's ability to make correct, critical decisions can translate into a great competitive advantage. Most of these critical real-world decisions involve the simultaneous optimization of multiple, often conflicting objectives, where improving one objective may degrade the performance of one or more of the others. Traditional approaches to multiobjective optimization typically scalarize the multiple objectives into a single objective, transforming the original multiobjective formulation into a single-objective problem with a single solution. The drawbacks of these traditional approaches, however, have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, as is the case in real-world environments. In such environments, a solution approach must be sufficiently robust and/or capable of handling the uncertainty in the objective values, which makes developing effective techniques that generate Pareto optimal solutions even more challenging than in deterministic settings. Furthermore, many real-world problems involve complicated, "black-box" objective functions that make a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate candidate solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial.
This research proposes two new multiobjective evolutionary algorithms (MOEAs), the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments, and SPGA uses a solution ranking strategy based on these concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. Both algorithms outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely recognized benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
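Both algorithms rest on the standard notion of Pareto dominance used throughout the MOEA literature. A minimal Python sketch of that notion (an illustration of the concept only, not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = nondominated(pts)
# (3, 4) is dominated by (2, 3); (5, 5) is dominated by every other point.
```

Nondominated sorting of this kind is the core ranking step in NSGA-II and in Pareto-based MOEAs generally.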
27 May 2016
This dissertation describes several power optimization techniques for energy-efficient datacenters. To achieve this goal, it approaches power dissipation holistically for entire datacenters, analyzing them layer by layer from (1) the infrastructure level, (2) the system level, and all the way down to (3) the micro-architecture level. First, for infrastructure-level power optimization of datacenters, this work presents infrastructure-level mathematical models and a holistic warehouse-scale datacenter power and performance simulator, SimWare. Experiments using SimWare show a high loss of cooling efficiency resulting from the non-uniform inlet air temperature distribution across servers. Second, this study describes a system-level technique, ATAC, which maximizes power efficiency while minimizing overheating. Finally, this dissertation describes a micro-architecture-level technique in the context of emerging non-volatile memory technologies. We first show that storing multiple bits per cell results in much higher soft-error rates than conventional single-bit technologies. However, multi-bit-per-cell technology can still be used as approximate storage. To this end, we propose a new class of multi-bit-per-cell memory in which both a precise bit and an approximate bit are located in the same physical cell. Together, these techniques reduce the power consumption of datacenters in a holistic way, eliminating one of the most important hurdles to the proliferation of cloud-computing environments.
Thesis made openly available per email from author, 5-26-2016.
Torralbo, Pilar Vicaria
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / Calibration processes consume a large quantity of resources: equipment and people, time and cost. As the number of calibration points increases, the resources required increase to the same extent. This automatic tool, aimed at reducing these resources, has been designed for commanding, managing, and analyzing in real time a large number of acquired data points coming from the specimen under calibration and the standards used in the calibration process, while applying the metrological algorithms that validate each calibration point. Its greatest achievement is the implementation of rules for accepting or discarding each data point and the level of automation of the process. In the last flight test campaign its use was crucial for providing the data on time with the high accuracy required: the commissioning of almost 200 temperature parameters was achieved in a short period of time, taking advantage of equipment whose nominal accuracy was not high enough for direct application.
Leung, Pak-kin, Richard, 梁柏堅
published_or_final_version / Mathematics / Master / Master of Philosophy
Chen, Zhibin, 陳智斌
published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy
This thesis concerns healthcare management and specifically addresses the problems of operating room planning and waiting list management. The operating room department is one of the most expensive areas within the healthcare system, requiring many costly resources such as staff, equipment, and medicine. The planning of operating rooms is a complex task involving many dependencies and conflicting factors, so careful operating room planning is critical to attaining high productivity. One part of the planning process is to determine a Master Surgery Schedule (MSS): a cyclic timetable that specifies the allocation of surgical groups to different blocks of operating room time. Using an optimization-based approach, this thesis investigates whether the MSS can be adapted to better meet varying surgery demand. Secondly, an extended optimization-based approach, including post-operative beds, is presented in which different policies related to priority rules are simulated to demonstrate their effect on the average waiting time. The problem of meeting the uncertainty in patient arrivals, as well as in surgery durations, is then incorporated. With a combination of simulation and optimization techniques, different policies for reserving operating room capacity for emergency cases, together with a policy of keeping additional staff on stand-by, are demonstrated. The results show that, by adopting a certain policy, the average patient waiting time and surgery cancellations are decreased while operating room utilization is increased. Furthermore, the thesis focuses on how different aspects of surgery pre-conditions affect performance measures related to operating room planning. Emergency surgery cases are omitted and the studies are delimited to the elective healthcare process only.
With a proposed simulation model, an experimental tool is offered in which a number of analyses related to the process of elective surgeries can be conducted. The hypothesis is that sufficiently good estimates of future surgery demand can be made at the referral stage. Based on this assumption, an experiment is conducted to explore how different policies for managing incoming referrals affect patient waiting times. Related to this study, the possibility of using data mining techniques to find indicators that can help estimate future surgery demand is also investigated. Finally, an agent-based simulation approach is investigated in parallel to address these types of problems. An agent-based approach would likely be relevant when multiple planners are involved. In a survey, a framework for describing applications of agent-based simulation is provided.
09 May 2008
In this work, we present a way to extend Ant Colony Optimization (ACO) so that it can be applied to both continuous and mixed-variable optimization problems. We demonstrate, first, how ACO may be extended to continuous domains. We describe the proposed algorithm, discuss the different design decisions made, and position it among other metaheuristics. Following this, we present the results of extensive simulation and testing, comparing the results obtained by the proposed algorithm on typical benchmark problems with those obtained by other methods used for tackling continuous optimization problems in the literature. Finally, we investigate how our algorithm performs on a real-world problem from the medical field: training a neural network used for pattern classification in disease recognition. Following an extensive analysis of the performance of ACO extended to continuous domains, we present how it may be further adapted to handle both continuous and discrete variables simultaneously. We thus introduce the first native mixed-variable version of an ACO algorithm. We then analyze and compare the performance of both the continuous and mixed-variable ACO algorithms on different benchmark problems from the literature. Through this research, we gain insight into the relationship between the formulation of mixed-variable problems and the best methods to tackle them. Furthermore, we demonstrate that the performance of ACO on various real-world mixed-variable optimization problems from the mechanical engineering field is comparable to the state of the art.
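The continuous extension of ACO described here is generally realized by replacing discrete pheromone tables with an archive of good solutions and sampling new candidates from Gaussian kernels centred on archive members (as in the ACO_R family). A minimal sketch in that spirit, on a simple sphere function; all parameter values and names here are illustrative, not taken from the dissertation:

```python
import random

def aco_continuous(f, dim=2, archive_size=10, ants=10, iters=100,
                   lo=-5.0, hi=5.0, xi=0.85, seed=1):
    """Archive-based continuous ACO sketch: sample around good solutions
    with a spread proportional to the archive's dispersion."""
    rng = random.Random(seed)
    # Initialize the solution archive with random solutions, best first.
    archive = sorted(
        ([rng.uniform(lo, hi) for _ in range(dim)] for _ in range(archive_size)),
        key=f)
    for _ in range(iters):
        new = []
        for _ in range(ants):
            # Bias guide selection toward the better half of the archive.
            guide = archive[rng.randrange(archive_size // 2)]
            x = []
            for d in range(dim):
                # Std deviation: scaled mean distance to other archive members.
                sigma = xi * sum(abs(s[d] - guide[d])
                                 for s in archive) / (archive_size - 1)
                x.append(min(hi, max(lo, rng.gauss(guide[d], sigma))))
            new.append(x)
        # Keep only the best archive_size solutions.
        archive = sorted(archive + new, key=f)[:archive_size]
    return archive[0], f(archive[0])

best, val = aco_continuous(lambda x: sum(v * v for v in x))
```

As the archive converges, the sampling spread shrinks automatically, which is what lets the same mechanism move from exploration to exploitation without a separate schedule.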
Urade, Hemlata S., Patel, Rahila
15 February 2012
Optimization has been an active area of research for several decades. As many real-world optimization problems become increasingly complex, better optimization algorithms are always needed. A constrained optimization problem can be formulated as a D-dimensional minimization problem as follows: minimize f(x), where x = [x1, x2, ..., xD] and D is the number of parameters to be optimized, subject to: g_i(x) <= 0, i = 1, ..., q; h_j(x) = 0, j = q+1, ..., m; x ∈ [X_min, X_max]^D, where q is the number of inequality constraints and m - q is the number of equality constraints (an unconstrained problem has q = 0 and m = 0). The particle swarm optimizer (PSO), introduced by Kennedy and Eberhart in 1995, is a relatively new technique that emulates the flocking behavior of birds to solve optimization problems. / In this paper the concept of dynamic particle swarm optimization is introduced. Dynamic PSO differs from existing PSO variants, including local versions of PSO, in terms of swarm size and topology. Experiments conducted on benchmark functions for single-objective optimization problems show better performance than the basic PSO. The paper also contains a comparative analysis of simple PSO and dynamic PSO, showing better results for dynamic PSO than for simple PSO.
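The basic (global-best) PSO that the paper builds on can be sketched as follows for the unconstrained, bound-limited case; parameter values here (inertia w, acceleration coefficients c1, c2, swarm size) are common defaults from the literature, not those of the paper:

```python
import random

def sphere(x):
    """Simple unimodal test function with minimum f(0) = 0."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, swarm=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                      # personal best positions
    pf = [f(x) for x in X]                     # personal best values
    g = min(range(swarm), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]                 # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Position update, clamped to the search bounds.
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf

best, val = pso(sphere)
```

In the global-best topology shown here every particle is pulled toward the single swarm-wide best; local PSO variants, mentioned above, restrict that social pull to a neighborhood, which is one of the dimensions along which dynamic PSO varies.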