1011

Integrated and Coordinated Relief Logistics Planning Under Uncertainty for Relief Logistics Operations

Kamyabniya, Afshin 22 September 2022 (has links)
In this thesis, we explore three critical emergency logistics problems faced by healthcare and humanitarian relief service providers in short-term post-disaster management. In the first manuscript, we investigate various integration mechanisms (fully integrated horizontal-vertical, horizontal, and vertical resource sharing) following a natural disaster for a multi-type, whole-blood-derived platelet, multi-patient logistics network. The goal is to reduce the shortage and wastage of platelets across blood groups in the response phase of relief logistics operations. To solve the logistics model for large-scale problems, we develop a hybrid exact solution approach combining augmented epsilon-constraint and Lagrangian relaxation algorithms and demonstrate the model's applicability in an earthquake case study. Because the number of injuries requiring multi-type blood-derived platelets is uncertain, we apply a robust-optimization version of the proposed model that captures the expected performance of the system. The results show that the platelet logistics network under coordinated and integrated mechanisms controls the level of shortage and wastage better than a non-integrated network. In the second manuscript, we propose a two-stage casualty evacuation model that involves routing patients with different injury levels during wildfires. The first stage deals with field hospital selection, and the second stage determines the number of patients that can be transferred to the selected hospitals or shelters via different routes of the evacuation network. The goal of this model is to reduce evacuation response time, which ultimately increases the number of people evacuated from assembly points within limited time windows. To solve the model for large-scale problems, we develop a two-step meta-heuristic algorithm. To account for multiple sources of uncertainty, a flexible robust approach that considers the worst-case and expected performance of the system simultaneously is applied to handle any realization of the uncertain parameters. The results show that the fully coordinated evacuation model, in which vehicles can freely pick up and drop off patients at different locations and may start their next operations without being forced to return to the departure point (evacuation assembly points), outperforms the non-coordinated and non-integrated evacuation models in terms of the number of evacuated patients. In the third manuscript, we propose an integrated transportation and hospital capacity model to optimize the assignment of relevant medical resources to multi-level-injury patients during a mass casualty incident (MCI). We develop a finite-horizon Markov decision process (MDP) to allocate resources and hospital capacities to injured people dynamically over a limited time horizon. We solve this model using the linear programming approach to approximate dynamic programming (ADP) and by developing a two-phase heuristic based on a column generation algorithm. The results show that better policies can be derived for allocating limited resources (i.e., vehicles) and hospital capacities to injured people compared with the benchmark. Each paper makes a worthwhile contribution to the humanitarian relief operations literature and can help relief and healthcare providers optimize resource and service logistics by applying the proposed integration and coordination mechanisms.
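For reference, the augmented epsilon-constraint scalarization mentioned above is commonly written, for a bi-objective minimization problem, in the following standard form (a sketch only; the thesis's actual model, objectives, and parameters are not reproduced here):

```latex
\begin{aligned}
\min_{x \in X,\; s \ge 0} \quad & f_1(x) \;-\; \delta \,\frac{s}{r_2} \\
\text{s.t.} \quad & f_2(x) + s = \varepsilon ,
\end{aligned}
```

where $f_1$ and $f_2$ are the two objectives (e.g., shortage and wastage), $\varepsilon$ is swept over the range $r_2$ of $f_2$, $s$ is a slack variable, and $\delta$ is a small positive constant that ensures only Pareto-optimal solutions are generated.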
1012

Solution of Constrained Clustering Problems through Homotopy Tracking

Easterling, David R. 15 January 2015 (has links)
Modern machine learning methods depend on active optimization research to improve the set of methods available for the efficient and effective extraction of information from large datasets. This, in turn, requires an intense and rigorous study of optimization methods and their possible applications to crucial machine learning problems in order to advance the potential benefits of the field. This thesis provides a study of several modern optimization techniques and supplies a mathematical inquiry into the effectiveness of homotopy methods for attacking a fundamental machine learning problem: effective clustering under constraints. The first part of this thesis provides an empirical survey of several popular optimization algorithms, along with one cutting-edge approach. It tests these algorithms against deeply challenging real-world problems with vast numbers of local minima, and compares and contrasts the benefits of each when confronted with problems of different natures. The second part of this thesis proposes a new homotopy map for use with constrained clustering problems. The thesis explores the connections between the map and the problem, provides several theorems to justify the use of the map, and uses modern homotopy tracking software to compare an optimization method that employs the map with several modern approaches to solving the same problem. / Ph. D.
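For context, homotopy tracking methods of this kind typically deform an easy problem into the target problem through a convex homotopy of the generic form below (the thesis proposes a new, problem-specific map, which is not reproduced here):

```latex
H(x, \lambda) \;=\; \lambda\, F(x) \;+\; (1 - \lambda)\, G(x), \qquad \lambda \in [0, 1],
```

where $G$ has a known zero, $F$ encodes the first-order optimality conditions of the constrained clustering objective, and a numerical tracker (e.g., probability-one homotopy software such as HOMPACK90) follows the zero curve of $H$ from $\lambda = 0$ to $\lambda = 1$.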
1013

Developing agile motor skills on virtual and real humanoids

Ha, Sehoon 07 January 2016 (has links)
Demonstrating strength and agility on virtual and real humanoids has been an important goal in computer graphics and robotics. However, developing physics-based controllers for various agile motor skills requires a tremendous amount of prior knowledge and manual labor due to the complex mechanisms of those skills. The focus of the dissertation is to develop a set of computational tools to expedite the design process of physics-based controllers that can execute a variety of agile motor skills on virtual and real humanoids. Instead of directly designing controllers for real humanoids, this dissertation takes an approach that develops appropriate theories and models in virtual simulation and systematically transfers the solutions to hardware systems. The algorithms and frameworks in this dissertation span various topics, from specific physics-based controllers to general learning frameworks. We first present an online algorithm for controlling falling and landing motions of virtual characters. The proposed algorithm is effective and efficient enough to generate falling motions for a wide range of arbitrary initial conditions in real time. Next, we present a robust falling strategy for real humanoids that can manage a wide range of perturbations by planning the optimal contact sequences. We then introduce an iterative learning framework, inspired by human learning techniques, to easily design various agile motions. The proposed framework is complemented by novel algorithms to efficiently optimize control parameters for the target tasks, especially when they have many constraints or parameterized goals. Finally, we introduce an iterative approach for exporting simulation-optimized control policies to robot hardware, reducing the number of hardware experiments, which incur high costs and labor.
1014

Wind models and optimum selection of wind turbine systems

Poch, Leslie Anton. January 1978 (has links)
Call number: LD2668 .T4 1978 P63 / Master of Science
1015

Optimizing Performance in Highly Utilized Multicores with Intelligent Prefetching

Khan, Muneeb January 2016 (has links)
Modern processors apply sophisticated techniques, such as deep cache hierarchies and hardware prefetching, to increase performance. These complex hardware structures have helped improve performance in general; however, their full potential is not realized because software often uses the memory hierarchy inefficiently. Performance can be improved further by ensuring careful interaction between software and hardware. Performance typically improves by increasing cache utilization and by conserving DRAM bandwidth, i.e., retaining more useful data in the caches and lowering the number of data requests to DRAM. One way to achieve this is to conserve space across the cache hierarchy and increase the opportunity for temporal reuse of cached data. Similarly, conserving DRAM bandwidth is essential for performance in highly utilized multicores, as it can easily become a critical resource. When multiple cores are active and the per-core share of DRAM bandwidth shrinks, its efficient utilization plays an important role in improving overall performance. Together, the cache hierarchy and the DRAM bandwidth play a significant role in defining overall performance in multicores. Based on deep insight from memory behavior modeling of software, this thesis explores five software-only methods to analyze and increase performance in multicores. The underlying philosophy that drives these techniques is to increase cache utilization and conserve DRAM bandwidth by 1) making data prefetching more accurate, and 2) lowering the miss rate in the cache hierarchy, either by preserving useful data longer through cache-bypassing of less useful data or via code size compaction using compiler options. First, we show how microarchitecture-independent memory access profiles can be used to analyze the instruction cache performance of software. We use this information in a compiler pass to recompile application phases (with high instruction cache miss rates) for smaller code size in an effort to improve the application's instruction cache behavior. Second, we demonstrate how a resource-efficient software prefetching method can be combined with hardware prefetching to improve performance in multicores when running software that exhibits irregular memory access patterns. Third, we show that hardware prefetching on high-performance commodity multicores is sub-optimal and demonstrate how a resource-efficient software-only prefetching method can perform better in fully utilized multicores. Fourth, we present an adaptive prefetching approach that dynamically combines software and hardware prefetching in a runtime system to improve performance in highly utilized multicores. Finally, in the fifth work, we develop a method to predict per-core prefetching configurations that deliver near-optimal overall multicore performance. These software techniques enable us to tap greater performance in multicores (up to 50%) without requiring more processing resources.
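As an illustration of the kind of software prefetching described above for irregular access patterns (a hedged sketch, not the thesis's runtime system; the prefetch distance and data layout are assumptions for illustration only):

```c
#include <stddef.h>

/* Gather-style loop with irregular accesses through an index array.
 * A software prefetch is issued a fixed distance ahead so the cache line
 * arrives before it is needed; PF_DIST is a tuning assumption. */
#define PF_DIST 16

double gather_sum(const double *data, const size_t *idx, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n) {
            /* rw = 0 (read), locality = 1 (low temporal reuse) */
            __builtin_prefetch(&data[idx[i + PF_DIST]], 0, 1);
        }
        sum += data[idx[i]];
    }
    return sum;
}
```

In practice, such prefetches pay off mainly for access patterns the hardware prefetcher cannot predict; issuing them indiscriminately wastes bandwidth, which is why accuracy and adaptivity are central themes above.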
1016

Optimization and decision strategies for medical preparedness and emergency response

Chen, Chien-Hung 12 January 2015 (has links)
Public health emergencies, such as bioterrorist attacks or pandemic outbreaks, have drawn serious public and government attention since the 2001 anthrax attacks and the SARS outbreak in 2003. These events require large-scale and timely dispensing of critical medical countermeasures to protect the general population. This thesis research focuses on developing mathematical models, real-time algorithms, and computerized decision support systems that enable (1) systematic coordination to tackle the multifaceted nature of mass dispensing, (2) a fast disease propagation module to allow immediate mitigation response to on-site uncertainties, and (3) a user-friendly platform to facilitate modeling-solution integration and cross-domain collaboration. The work translates operations research methodologies into practical decision support tools for public health emergency professionals. Under the framework of modeling and optimizing the public health infrastructure for biological and pandemic emergency responses, the task first determines an adequate number of point-of-dispensing (POD) sites, placing them strategically for the best possible population coverage. Individual POD layout design and associated staffing can then be optimized to maximize throughput and/or minimize the resources required for a given throughput. Mass dispensing creates a large influx of individuals to dispensing facilities, raising the risk of a high degree of intra-facility infection. Our work characterizes the interaction between POD operations and disease propagation. Specifically, fast genetic-algorithm-based heuristics were developed for solving the integer-programming-based facility location instances. The approach has been applied to the metro-Atlanta area, with a population of 5.2 million people spread over 11 districts. Among the 2,904 instances, a state-of-the-art specialized integer programming solver solved all except one instance to optimality within 300,000 CPU seconds and all except five to optimality within 40,000 CPU seconds. Our fast heuristic algorithm returns good feasible solutions that are within 8 percent of optimality in 15 minutes. This algorithm was embedded within an interactive web-based decision support system, RealOpt-Regional©. The system allows public health users to contour the region of interest and determine the network of PODs for their affected population. Along with the fast optimization engine, the system features geographical, demographical, and spatial visualization that facilitates real-time usage. The client-server architecture facilitates front-end interactive design on Google Maps©, while the facility location mathematical instances are generated and solved on the back-end server. In the analysis of disease propagation and mitigation strategies, we first extended a six-stage ordinary-differential-equation-based (ODE) compartmental model to accommodate POD operations. This allows us to characterize the intra-facility infections of highly contagious diseases during a local outbreak while large-scale dispensing is in process. The disease propagation module was then implemented in the CDC-RealOpt-POD© discrete-event simulation-optimization environment. CDC-RealOpt-POD is a widely used emergency response decision support system that includes simulation-optimization for determining optimal staffing and operations. We employed the CDC-RealOpt-POD environment to analyze the interactions between POD operations and disease parameters and identified effective mitigation strategies.
The disease propagation module allows us to analyze the efficient frontier between operational efficiency and intra-POD infections. Emergency response POD planners and epidemiologists can collaborate in the familiar CDC-RealOpt-POD environment, e.g., designing the most efficient plan by specifying and analyzing both POD operations and the disease compartmental model in a unified platform. The corresponding problem instances are formed automatically by combining and transforming graphical inputs and numerical parameters from users. To facilitate the operations of receiving, staging, and storage (RSS) of medical countermeasures, we expanded the CDC-RealOpt-POD layout design functions by integrating them with the process flow. The resulting RSS system allows modeling of system processes along with spatial constraints for optimal operations and process design. In addition, agent-based simulation was incorporated, in which the integrated process flow and layout design allow analysis of crowd movement and congestion. We developed a hybrid agent behavior in which individual agents make decisions through both the system-defined process flow and autonomous discretion. The system was applied successfully to determine guest movement strategies for the new Georgia Aquarium Dolphin Tales exhibit, where the goal was to enhance the guest experience while mitigating overall congestion.
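For reference, the POD siting question described above is often posed as a maximal covering location problem; a standard textbook form is sketched below (the thesis's actual instances, coverage definitions, and side constraints are not reproduced here):

```latex
\begin{aligned}
\max \quad & \sum_{i} a_i\, z_i \\
\text{s.t.} \quad & \sum_{j \in N_i} y_j \;\ge\; z_i \qquad \forall i \\
& \sum_{j} y_j \;=\; p \\
& y_j,\, z_i \in \{0, 1\},
\end{aligned}
```

where $a_i$ is the population of demand zone $i$, $y_j = 1$ if a POD is opened at candidate site $j$, $N_i$ is the set of sites that can serve zone $i$ within the travel standard, $z_i = 1$ if zone $i$ is covered, and $p$ is the number of PODs to open.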
1017

A dynamic scheduling model for construction enterprises

Fahmy, Amer January 2014 (has links)
The vast majority of research in the scheduling context has focused on finding optimal or near-optimal predictive schedules under different scheduling problem characteristics. In the construction industry, predictive schedules are often produced in advance to direct construction operations and to support other planning activities. However, construction projects operate in dynamic environments subject to various real-time events, which usually disrupt the predictive optimal schedules, leaving schedules that are neither feasible nor optimal. Accordingly, the development of a dynamic scheduling model that can accommodate these real-time events would be of great importance for the successful implementation of construction scheduling systems. This research sought to develop a dynamic-scheduling-based solution that can be used in practice for real-time analysis and scheduling of construction projects, in addition to resource optimization for construction enterprises. The literature reviews for scheduling, dynamic scheduling, and optimization showed that, despite the numerous studies and applications in the dynamic scheduling field within manufacturing and other industries, there was a dearth of dynamic scheduling literature relating to the construction industry. The research followed two main interacting paths: one related to the development of the practical solution, and another related to the core model development. The aim of the first path (the proposed practical solution) was to develop a computer-based dynamic scheduling framework that can be used in practical applications within the construction industry. Following the scheduling literature review, the construction project management community's opinions about the problem under study and the user requirements for the proposed solution were collected from 364 construction project management practitioners from 52 countries via a questionnaire survey and used to form the basis for the functional specifications of a dynamic scheduling framework. The framework takes the form of a software tool fully integrated with current planning/scheduling practices, with all the core modelling needed to integrate dynamic scheduling processes into the current planning/scheduling process while requiring minimal optimization experience from users. The second research path, the dynamic scheduling core model development, started with the development of a mathematical model based on the scheduling models in the literature, with several extensions reflecting the practical considerations of the construction industry investigated in the questionnaire survey. Scheduling problems are complex from an operational research perspective; so, for the proposed solution to be functional in optimizing construction schedules, an optimization algorithm was developed to suit the problem's characteristics and to be used as part of the dynamic scheduling model's core. The developed algorithm contains several contributions to the scheduling context (such as schedule justification heuristics and rectifications to schedule generation schemes), as well as modifications to the formulation and process of the adopted optimization technique (particle swarm optimization), leading to considerable improvement in this technique's outputs with respect to schedule quality.
After the completion of the model development path, the first research path was concluded by combining the gathered functional specifications and the developed dynamic scheduling model into a software tool, which was built to verify and validate the proposed model's functionalities and the overall solution's practicality and scalability. The verification process started with extensive testing of the model's static functionality using several well-recognized scheduling problem sets available in the literature, and the results showed that the developed algorithm can be ranked among the best state-of-the-art algorithms for solving resource-constrained project scheduling problems. To verify the software tool and the dynamic features of the developed model (i.e., the formulation of data transfers from one optimization stage to the next), a case study was carried out on a construction entity in the Arabian Gulf area with a mega project under construction, structured in all respects like an enterprise. The case study results showed that the proposed solution performed reasonably under large-scale practical application (all optimization targets were met in reasonable time) for all designed schedule preparation processes (baseline, progress updates, look-ahead schedules, and what-if schedules). Finally, to confirm and validate the effectiveness and practicality of the proposed solution, the solution's framework and the verification results were presented to field experts, and their opinions were collected through validation forms. The feedback received was very positive: field experts and practitioners confirmed that the proposed solution achieved the main functionalities as designed in the solution's framework and performed efficiently under the complexity of the applied case study.
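For context, the particle swarm optimization technique adopted above is conventionally defined by the velocity and position updates below; the thesis's specific reformulations and schedule-justification heuristics are not shown here:

```latex
\begin{aligned}
v_i^{k+1} &= w\, v_i^{k} \;+\; c_1 r_1 \left( p_i^{\mathrm{best}} - x_i^{k} \right) \;+\; c_2 r_2 \left( g^{\mathrm{best}} - x_i^{k} \right), \\
x_i^{k+1} &= x_i^{k} + v_i^{k+1},
\end{aligned}
```

where $x_i$ and $v_i$ are the position and velocity of particle $i$, $w$ is the inertia weight, $c_1$ and $c_2$ are the cognitive and social coefficients, and $r_1, r_2$ are uniform random numbers in $[0, 1]$.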
1018

Randomized coordinate descent methods for big data optimization

Takac, Martin January 2014 (has links)
This thesis consists of 5 chapters. We develop new serial (Chapter 2), parallel (Chapter 3), distributed (Chapter 4) and primal-dual (Chapter 5) stochastic (randomized) coordinate descent methods, analyze their complexity, and conduct numerical experiments on synthetic and real data of huge sizes (GBs/TBs of data, millions/billions of variables). In Chapter 2 we develop a randomized coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth separable convex function and prove that it obtains an ε-accurate solution with probability at least 1 - p in at most O((n/ε) log(1/p)) iterations, where n is the number of blocks. This extends recent results of Nesterov [43], which cover the smooth case, to composite minimization, while at the same time improving the complexity by a factor of 4 and removing ε from the logarithmic term. More importantly, in contrast with the aforementioned work, in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving the first true iteration complexity bounds. For strongly convex functions the method converges linearly. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Our analysis is also much simpler. In Chapter 3 we show that the randomized coordinate descent method developed in Chapter 2 can be accelerated by parallelization. The speedup, as compared to the serial method, and referring to the number of iterations needed to approximately solve the problem with high probability, is equal to the product of the number of processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there is no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also works in the mode where the number of coordinates updated at each iteration is random, which allows for modeling situations with a variable (busy or unreliable) number of processors. We demonstrate numerically that the algorithm is able to solve huge-scale l1-regularized least squares problems with a billion variables. In Chapter 4 we extend coordinate descent into a distributed environment. We initially partition the coordinates (features or examples, based on the problem formulation) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bounds on the number of iterations sufficient to approximately solve the problem with high probability, and show how it depends on the data and on the partitioning. We perform numerical experiments with a LASSO instance described by a 3TB matrix. Finally, in Chapter 5, we address the issue of using mini-batches in stochastic optimization of Support Vector Machines (SVMs). We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods and use it to derive novel variants of mini-batched (parallel) SDCA.
Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss. Our results in Chapters 2 and 3 are cast for blocks (groups of coordinates) instead of coordinates, and hence the methods are better described as block coordinate descent methods. While the results in Chapters 4 and 5 are not formulated for blocks, they can be extended to this setting.
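For reference, the composite problem and the randomized (block) coordinate step analyzed in Chapters 2 and 3 take the following standard form (simplified here; block norms and sampling distributions are omitted):

```latex
\begin{aligned}
& \min_{x \in \mathbb{R}^N} \; F(x) = f(x) + \Psi(x), \qquad \Psi(x) = \sum_{i=1}^{n} \Psi_i\!\left(x_{(i)}\right), \\
& x_{(i)}^{k+1} = \arg\min_{t} \left\{ \nabla_i f(x^k)^{\top}\!\left(t - x_{(i)}^{k}\right) + \frac{L_i}{2}\left\lVert t - x_{(i)}^{k} \right\rVert^2 + \Psi_i(t) \right\},
\end{aligned}
```

where a block $i$ is sampled at random in each iteration and $L_i$ is the block Lipschitz constant of $\nabla_i f$; with uniform sampling this yields the $O((n/\varepsilon)\log(1/p))$ bound quoted above.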
1019

A transportation and location optimization model: minimizing total cost of oilseed crushing facilities in Kansas

Luna Meiners, Shauna Nicole January 1900 (has links)
Master of Agribusiness / Department of Agricultural Economics / Jason Bergtold / Markets for alternative fuels are emerging and are of great interest to both public and private companies, as well as government agencies looking to differentiate fuel sources to achieve improved and sustainable operational efficiencies. This creates a growing need for innovation and an increased supply of biofuel feedstocks for bioenergy options such as bio-jet fuel. This thesis aims to assess the logistical feasibility of producing oilseed biofeedstocks and the practicality of building new crushing facilities specifically for bio-jet fuel production in Kansas. A logistics optimization model is built using data on the potential Kansas supply of rapeseed as a possible feedstock option, the transportation and facility costs associated with building, and proposed crushing facility sites, while considering the estimated demand for bio-jet fuel within Kansas. The optimization model determined that even average yields per acre and modest adoption rates by farmers willing to incorporate rapeseed into their crop rotations could provide enough feedstock to supply one or two crushing facilities, depending on a variety of additional factors, including bio-jet fuel demand in Kansas. Sensitivity analysis performed on key model factors determined that the most influential factor for both the size and the number of proposed crushing facilities was the market demand for bio-jet fuel. Ultimately, further research is required to better understand the actual market demand for bio-jet fuel within Kansas and how competition from, or supply supplementation by, other biofeedstocks can affect the size or number of proposed crushing facilities. There are currently six oilseed crushing facilities operating in Kansas, although all are dedicated to soybean or sunflower seed. Further studies may find these sites to be viable alternatives to building new crushing facilities for a separate type of feedstock.
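For reference, total-cost transportation-and-location models of the kind described above are typically formulated as fixed-charge facility location problems; a generic sketch follows (the supply regions, candidate sites, capacities, and cost data used in the thesis are not reproduced here):

```latex
\begin{aligned}
\min \quad & \sum_{j} f_j\, y_j \;+\; \sum_{i}\sum_{j} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j} x_{ij} \;\le\; s_i \qquad \forall i \\
& \sum_{i} x_{ij} \;\le\; K_j\, y_j \qquad \forall j \\
& \sum_{i}\sum_{j} x_{ij} \;\ge\; D \\
& x_{ij} \ge 0, \quad y_j \in \{0, 1\},
\end{aligned}
```

where $y_j$ indicates whether a crushing facility is built at candidate site $j$ (fixed cost $f_j$, capacity $K_j$), $x_{ij}$ is the rapeseed shipped from supply region $i$ (supply $s_i$) to site $j$ at unit transport cost $c_{ij}$, and $D$ is the feedstock required to meet the estimated bio-jet fuel demand.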
1020

Integrated Blankholder Plate for Double Action Stamping Die

Tatipala, Sravan, Suddapalli, Nikshep Reddy January 2016 (has links)
A blankholder is used to hold the edges of a metal sheet while it is being formed between a matrix (die) and a punch. An efficient way to design a stamping die is to integrate the blankholder plate into the die structure, which eliminates the time and cost of manufacturing separate blankholder plates. The integrated structure is called the integrated blankholder. The main focus of this thesis is structural analysis and optimization of the integrated blankholder. The structural analysis of the integrated blankholder model (used for the production of doors of the Volvo V70 car model) is performed using Hypermesh and Abaqus. The FE results were compared with analytical calculations of the fatigue limit. To increase the stiffness and reduce the stress levels in the integrated blankholder, topology and shape optimization are performed with Optistruct. Thereafter, a CAD model is set up in Catia based on the optimization results. Finally, structural analysis of this CAD model is performed and the results are compared with the original results. The results show a 70% reduction in stress levels and a more homogeneous stress distribution. The mass of the die increases by 17%, and overall a stiffer die is obtained. Based on the simulations and results, discussion and conclusions are formulated.
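For context, topology optimization of the kind performed with Optistruct is commonly posed as the SIMP (penalized-density) compliance-minimization problem sketched below; the actual load cases, stress responses, and manufacturing constraints used for the integrated blankholder are not reproduced here:

```latex
\begin{aligned}
\min_{\boldsymbol{\rho}} \quad & c(\boldsymbol{\rho}) = \mathbf{u}^{\top} \mathbf{K}(\boldsymbol{\rho})\, \mathbf{u} \\
\text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\, \mathbf{u} = \mathbf{f}, \qquad E_e(\rho_e) = \rho_e^{\,p}\, E_0, \\
& \sum_{e} \rho_e\, v_e \;\le\; V^{*}, \qquad 0 < \rho_{\min} \le \rho_e \le 1,
\end{aligned}
```

where $\rho_e$ is the element density variable, $p \approx 3$ is the penalization power, and $V^{*}$ is the allowed material volume.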
