31. Dynamic Learning and Human Interactions under the Extended Belief-Desire-Intention Framework for Transportation Systems

Kim, Sojung January 2015 (has links)
In recent years, multi-agent traffic simulation has been widely used to accurately evaluate the performance of a road network by considering the individual and dynamic movements of vehicles in a virtual roadway environment. Given initial traffic demands and road conditions, the simulation is executed over multiple iterations and provides users with converged roadway conditions for the performance evaluation. For an accurate traffic simulation model, the driver's learning behavior is one of the major components of concern, as it affects road conditions (e.g., traffic flows) at each iteration as well as the performance (e.g., accuracy and computational efficiency) of the traffic simulation. The goal of this study is to propose a realistic learning behavior model of drivers that accounts for their uncertain perception and interactions with other drivers. The proposed learning model is based on the Extended Belief-Desire-Intention (E-BDI) framework and two major decisions arising in the field of transportation (i.e., route planning and decision-making at an intersection). More specifically, the learning behavior is modeled via a dynamic evolution of a Bayesian network (BN) structure. The proposed dynamic learning approach rests on three underlying assumptions: 1) the limited memory of a driver, 2) learning with incomplete observations of the road conditions, and 3) non-stationary road conditions. Thus, the dynamic learning approach allows driver agents to understand real-time road conditions and estimate future road conditions based on their past knowledge. In addition, interaction behaviors are incorporated in the E-BDI framework to address the influence of interactions on the driver's learning behavior. In this dissertation work, five major human interactions adopted from the social science literature are considered: 1) accommodation, 2) collaboration, 3) compromise, 4) avoidance, and 5) competition. The first three interaction types mimic information-exchange behaviors between drivers (e.g., finding a route using a navigation system), while the last two correspond to behaviors that do not involve information exchange (e.g., finding a route based on a driver's own experiences). To calibrate the proposed learning behavior model and evaluate its performance in terms of inference accuracy and computational efficiency, drivers' decision data at intersections are collected via a human-in-the-loop experiment involving a driving simulator. Moreover, the proposed model is used to test and demonstrate the impact of the five interactions on drivers' learning behavior under an en-route planning scenario with real traffic data from Albany, New York, and Phoenix, Arizona. In this dissertation work, two major traffic simulation platforms, AnyLogic® and DynusT®, are used for demonstration purposes. The experimental results reveal that the proposed model is effective in modeling realistic learning behaviors of drivers in conjunction with their interactions with other drivers.
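To make the limited-memory, incomplete-observation learning idea concrete, the following minimal Python sketch shows one way a driver agent could maintain beliefs about route travel times. It is an illustration only, not the dissertation's E-BDI or Bayesian-network implementation; the class, parameter, and route names (DriverBelief, memory, prior, "A", "B") are hypothetical.

```python
from collections import deque
import random

class DriverBelief:
    """Minimal sketch of a driver agent's belief update: a fixed-length
    memory window (limited memory) over noisy, possibly missing
    observations of per-route travel times (non-stationary conditions)."""

    def __init__(self, routes, memory=10):
        self.memory = {r: deque(maxlen=memory) for r in routes}  # limited memory

    def observe(self, route, travel_time):
        """Record an observation; missing observations are simply skipped."""
        if travel_time is not None:          # incomplete observations
            self.memory[route].append(travel_time)

    def expected_time(self, route, prior=20.0):
        """Belief = average of remembered observations, falling back to a prior."""
        obs = self.memory[route]
        return sum(obs) / len(obs) if obs else prior

    def choose_route(self):
        """Intention: pick the route currently believed to be fastest."""
        return min(self.memory, key=self.expected_time)

# usage: simulate a driver who only occasionally observes route B
driver = DriverBelief(routes=["A", "B"], memory=5)
for day in range(20):
    driver.observe("A", random.gauss(18, 2))
    driver.observe("B", random.gauss(15, 2) if day % 3 == 0 else None)
print(driver.choose_route())
```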
32. Simulation-Based Robust Revenue Maximization of Coal Mines Using Response Surface Methodology

Nageshwaraniyergopalakrishnan, Saisrinivas January 2014 (has links)
A robust simulation-based optimization approach is proposed for truck-shovel systems in surface coal mines to maximize the expected value of revenue obtained from loading customer trains. To this end, a large surface coal mine in North America is considered as a case study. A data-driven modeling framework is developed and then applied to automatically generate a highly detailed simulation model of the mine in Arena. The framework comprises a formal information model based on the Unified Modeling Language (UML), which is used to input mine structural as well as production information. Petri net-based model generation procedures are applied to automatically generate the simulation model from the full set of simulation inputs. Factors encountered in material handling operations that may affect the robustness of revenue are then classified into 1) controllable and 2) uncontrollable categories. The controllable factors are the trucks locked to routes, while the uncontrollable factors are the inverses of the summed truck haul, shovel loading, and truck dumping times for each route. Historical production data of the mine contained in a data warehouse are used to derive probability distributions for the uncontrollable factors. The data warehouse is implemented in Microsoft SQL and contains snapshots of historical equipment statuses and production outputs taken at regular intervals in each shift of the mine. Response Surface Methodology is applied to derive an expression for the variance of revenue as a function of the controllable and uncontrollable factors. More specifically, 1) first-order and second-order effects for controllable factors, 2) first-order effects for uncontrollable factors, and 3) two-factor interactions between controllable and uncontrollable factors are considered. The Latin Hypercube Sampling method is applied to set the controllable factors and the means of the uncontrollable factors. In addition, the Common Random Numbers method is applied to generate the sequence of pseudo-random numbers for the uncontrollable factors in the simulation experiments, reducing variance between different design points of the metamodel. The variance of the metamodel is validated using leave-one-out cross-validation. It is later applied as an additional constraint in the mathematical formulation to maximize revenue in the simulation model using OptQuest. The decision variables in this formulation are the truck locks only. Revenue is a function of the actual quality of coal delivered to each customer and the corresponding quality specifications for premiums and penalties. OptQuest is an optimization add-on for Arena that uses Tabu search and Scatter search algorithms to arrive at the optimal solution. The upper bound on the variance, used as a constraint, is varied to obtain different combinations of the expected value and variance of the optimal revenue. Comparison with results obtained using OptQuest with random sampling and without the metamodel's variance expression shows that the proposed approach yields a decision variable set that results not only in a higher expected value but also in a narrower confidence interval for the optimal revenue. To the best of our knowledge, there are two major contributions from this research: 1) it is theoretically demonstrated, using 2-point and orthonormal k-point response surfaces, that Common Random Numbers reduce the error in estimating the variance of the metamodel of the simulation model; and 2) a data-driven modeling and simulation framework is proposed for automatically generating discrete-event simulation models of large surface coal mines, reducing the modeling time, expenditure, and human errors associated with manual development.
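As a rough illustration of how a second-order response surface can be fitted to simulation outputs sampled with Latin Hypercube Sampling, and how Common Random Numbers can be mimicked with shared replication seeds, consider the sketch below. It is a toy example with an invented revenue function and made-up factor ranges, not the mine model or the Arena/OptQuest workflow described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, dims):
    """Simple Latin Hypercube Sampling on the unit cube."""
    cut = (np.arange(n) + rng.random(n)) / n        # one point per stratum
    return np.column_stack([rng.permutation(cut) for _ in range(dims)])

def simulate_revenue(x, seed):
    """Stand-in for one simulation replication; reusing the same seed per
    replication index across design points plays the role of Common Random Numbers."""
    noise = np.random.default_rng(seed).normal(0, 0.5)
    return 10 + 4 * x[0] - 3 * x[0] ** 2 + 2 * x[1] + noise

# evaluate each design point with the same set of replication seeds (CRN)
X = latin_hypercube(n=30, dims=2)
reps = 5
Y = np.array([np.mean([simulate_revenue(x, seed=r) for r in range(reps)]) for x in X])

# second-order (quadratic) response surface fitted by least squares:
# intercept, first-order, pure quadratic, and two-factor interaction terms
design = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
print("metamodel coefficients:", coef.round(3))
```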
33. Reliability Analysis and Optimization of Systems Containing Multi-Functional Entities

Xu, Yiwen January 2015 (has links)
Enabling more than one function in an entity provides a new cost-effective way to develop a highly reliable system. In this dissertation, we study the reliability of systems containing multi-functional entities. We derive expressions for the reliability of one-shot systems and the reliability of each function. Going a step further, a redundancy allocation problem (RAP) with the objective of maximizing system reliability is formulated. Unlike constructing a system with only single-functional entities, the number of copies of a specific function to be included in each multi-functional entity (i.e., functional redundancy) needs to be determined as part of the design. Moreover, a start-up strategy for turning on specific functions in these components must be decided prior to system operation. We develop a heuristic algorithm and include it in a two-stage Genetic Algorithm (GA) to solve the new RAP. We also apply a modified Tabu search (TS) method for solving such NP-hard problems. Our numerical studies illustrate that the two-stage GA and the TS method are quite effective in searching for high-quality solutions. The concept of multi-functional entities can also be applied to the probabilistic site selection problem (PSSP). Unlike the traditional PSSP with failures either at nodes or on edges, we consider a more general problem in which both nodes and edges can fail and edge-level redundancy is included. We formulate the problem as an integer programming optimization problem. To reduce the search space, two corresponding simplified models, formulated as integer linear programming problems, are solved to provide a lower bound for the primal problem. Finally, a major challenge in reliability analysis is how to determine the failure distribution of components. This is especially significant for multi-functional entities as more levels of redundancy are considered. We provide an automated model-selection method to construct the best phase-type (PH) distribution for a given data set in terms of model complexity and the adequacy of statistical fitting. To efficiently utilize the Akaike Information Criterion for balancing the likelihood value and the number of free parameters, the proposed method is carried out in two stages. The detailed subproblems and the related solution procedures are developed and illustrated through numerical studies. The results verify the effectiveness of the proposed model-selection method in constructing PH distributions.
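The following minimal Monte Carlo sketch illustrates the basic idea behind systems built from multi-functional entities: the system works when every required function is still covered by at least one surviving entity that provides it. The entity set, function assignments, and survival probabilities are invented for illustration and do not come from the dissertation, which derives closed-form reliability expressions rather than simulating them.

```python
import random

def system_reliability(entities, functions, p_work, n_trials=100_000):
    """Monte Carlo sketch: the system succeeds when every required function
    is covered by at least one surviving entity that provides it.
    `entities` maps an entity name to the set of functions it can perform."""
    successes = 0
    for _ in range(n_trials):
        alive = {e for e in entities if random.random() < p_work[e]}
        covered = set().union(*(entities[e] for e in alive)) if alive else set()
        if functions <= covered:
            successes += 1
    return successes / n_trials

# hypothetical example: two multi-functional entities back each other up on f2
entities = {"E1": {"f1", "f2"}, "E2": {"f2", "f3"}, "E3": {"f3"}}
p_work = {"E1": 0.95, "E2": 0.90, "E3": 0.85}
print(system_reliability(entities, functions={"f1", "f2", "f3"}, p_work=p_work))
```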
34. A Unified Decision Framework for Multi-Modal Traffic Signal Control Optimization in a Connected Vehicle Environment

Zamanipour, Mehdi January 2016 (has links)
Motivated by recent advances in vehicle positioning and vehicle-to-infrastructure (V2I) communication, traffic signal controllers are able to make smarter decisions. Most current state-of-the-practice signal priority control systems provide priority for only one mode or operate on first-come-first-served logic. Considering priority control in a more general framework allows several different modes of travelers to request priority at any time from any approach, and allows other traffic control operating principles, such as coordination, to be considered within an integrated signal timing framework. This makes it possible to provide priority to connected, priority-eligible vehicles with minimal negative impact on regular vehicles. This dissertation focuses on providing a real-time decision-making framework for multi-modal traffic signal control that considers several transportation modes in a unified framework using Connected Vehicle (CV) technologies. The unified framework is based on a systems architecture for CVs that is applicable in both simulated and real-world (field) testing conditions. The system architecture is used to design both hardware-in-the-loop and software-in-the-loop CV simulation environments. A real-time priority control optimization model and an implementation algorithm are developed using data from priority-eligible vehicles. The optimization model is extended to include signal coordination concepts. As the penetration rate of CVs increases, the queue can be predicted more accurately, and it is shown that accurate queue prediction improves the performance of the optimization model in reducing the delay of priority-eligible vehicles. The model is generalized to consider regular CVs as well as priority vehicles and coordination priority requests in a unified mathematical model. It is shown that the model can react properly to decision makers' modal preferences.
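As a toy illustration of the kind of trade-off such a priority control model resolves, the sketch below enumerates green splits for a two-phase signal and keeps the one that minimizes a weighted delay, with priority-eligible vehicles weighted more heavily than regular vehicles. The delay model, weights, and cycle parameters are hypothetical simplifications and are not the dissertation's actual optimization formulation.

```python
def weighted_delay(green_a, cycle, arrivals_a, arrivals_b, weight_priority=10.0):
    """Toy delay model: vehicles arriving on a phase wait, on average, half of
    that phase's red time; priority-eligible vehicles are weighted more heavily."""
    red_a, red_b = cycle - green_a, green_a
    delay = 0.0
    for is_priority, count in arrivals_a:
        delay += (weight_priority if is_priority else 1.0) * count * red_a / 2
    for is_priority, count in arrivals_b:
        delay += (weight_priority if is_priority else 1.0) * count * red_b / 2
    return delay

def best_split(cycle, arrivals_a, arrivals_b, g_min=10):
    """Enumerate feasible green splits and keep the one with the least weighted delay."""
    candidates = range(g_min, cycle - g_min + 1)
    return min(candidates, key=lambda g: weighted_delay(g, cycle, arrivals_a, arrivals_b))

# hypothetical request set: a transit (priority) vehicle on approach A, cars on B
arrivals_a = [(True, 1), (False, 4)]   # (is_priority, number of vehicles)
arrivals_b = [(False, 12)]
print("green time for phase A:", best_split(cycle=90, arrivals_a=arrivals_a, arrivals_b=arrivals_b))
```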
35. Operational Decision Making in Compound Energy Systems Using Multi-Level Multi-Paradigm Simulation-Based Optimization

Mazhari, Esfandyar M. January 2011 (has links)
A two-level hierarchical simulation and decision modeling framework is proposed for electric power networks involving PV-based solar generators, various storage options, and a grid connection. The high-level model, from a utility company perspective, concerns operational decision making and the definition of regulations for customers, aiming at reduced cost and enhanced reliability. The lower-level model concerns changes in power quality and in demand behavior caused by customers' responses to the operational decisions and regulations made by the utility company at the high level. The higher-level simulation is based on system dynamics and agent-based modeling, while the lower-level simulation is based on agent-based modeling and circuit-level continuous-time modeling. The proposed two-level model incorporates a simulation-based optimization engine that combines three meta-heuristics, Scatter Search, Tabu Search, and Neural Networks, to find optimal operational decisions. In addition, a reinforcement learning algorithm based on Markov decision process tools is used to generate decision policies. An integration and coordination framework is developed that details the sequence, frequency, and types of interactions between the two models. The proposed framework is demonstrated with several case studies using real-time or historical data for solar insolation, storage units, demand profiles, and the grid price of electricity (i.e., avoided cost). The challenges addressed in the case studies and applications include 1) finding the best policy, optimal prices, and regulations for a utility company while keeping customers' electricity quality within the accepted range, 2) capacity planning of electricity systems with PV generators, storage systems, and the grid, and 3) finding the optimal threshold price used to decide how much energy should be bought from or sold to the grid to minimize cost. Mathematical formulations and the simulation and decision modeling methodologies are presented. A grid-storage arbitrage analysis is performed to explore whether, given future technological improvements in storage and the increasing cost of electrical energy, it will be beneficial to use storage systems alongside the grid. An information model is discussed that facilitates the interoperability of different applications in the proposed hierarchical simulation and decision environment for energy systems.
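A minimal sketch of the threshold-price idea is shown below: the storage unit charges from the grid when the electricity price is below a buy threshold and discharges when the price is above a sell threshold. The prices, thresholds, and storage parameters are invented for illustration; the dissertation's actual policies come from the simulation-based optimization and reinforcement learning described above.

```python
def storage_dispatch(price, soc, capacity, threshold_buy, threshold_sell, rate):
    """Toy threshold rule: charge from the grid when electricity is cheap,
    discharge to the grid when it is expensive, otherwise hold."""
    if price <= threshold_buy and soc < capacity:
        return "buy", min(rate, capacity - soc)
    if price >= threshold_sell and soc > 0:
        return "sell", min(rate, soc)
    return "hold", 0.0

# hypothetical hourly prices ($/kWh) and a 50 kWh storage unit
prices = [0.06, 0.05, 0.08, 0.15, 0.22, 0.18, 0.07]
soc, cash = 25.0, 0.0
for p in prices:
    action, amount = storage_dispatch(p, soc, capacity=50, threshold_buy=0.07,
                                      threshold_sell=0.16, rate=10)
    soc += amount if action == "buy" else -amount if action == "sell" else 0.0
    cash += amount * p if action == "sell" else -amount * p if action == "buy" else 0.0
print(f"final state of charge: {soc} kWh, arbitrage cash flow: ${cash:.2f}")
```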
36. Multistage Stochastic Decomposition and its Applications

Zhou, Zhihong January 2012 (has links)
In this dissertation, we focus on developing sampling-based algorithms for solving stochastic linear programs. The work covers both two-stage and multistage versions of stochastic linear programs. In particular, we first study the two-stage stochastic decomposition (SD) algorithm and present some extensions associated with SD. Specifically, we study two issues: a) are there conditions under which the regularized version of SD generates a unique solution? and b) in cases where a user is willing to sacrifice optimality, is there a way to modify the SD algorithm so that the user can trade off solution time against solution quality? We also present our preliminary approach to addressing these questions. Second, we investigate multistage stochastic linear programs and propose a new approach to solving multistage stochastic decision models in the presence of constraints. The motivation for proposing the multistage stochastic decomposition algorithm is to handle large-scale multistage stochastic linear programs. In our setting, the deterministic equivalent problems of the multistage stochastic linear program are too large to be solved exactly. Therefore, we seek an asymptotically optimal solution by simulating the SD algorithmic process, which was originally designed for two-stage stochastic linear programs (SLPs). More importantly, when SD is implemented in a time-staged manner, the algorithm begins to take on the flavor of a simulation, leading to what we refer to as optimization simulation. Multistage stochastic decomposition offers a couple of advantages that deserve mention. One benefit is that it can work directly with sample paths, which makes the new algorithm much easier to integrate within a simulation. Moreover, compared with other sampling-based algorithms for multistage stochastic programming, it also overcomes certain limitations, such as a stage-wise independence assumption.
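For readers unfamiliar with two-stage stochastic linear programs, the sketch below solves a tiny sampled instance (a sample-average-approximation style model, not the SD algorithm itself): a first-stage capacity is chosen before demand is known, and any shortfall is penalized in the second stage. The costs, demand distribution, and sample size are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Sampled two-stage stochastic LP: choose first-stage capacity x at unit cost 1;
# unmet random demand d is penalized at unit cost 4 in the second stage.
rng = np.random.default_rng(1)
demands = rng.uniform(50, 150, size=200)      # sampled demand scenarios
n, penalty = len(demands), 4.0

# variables: [x, y_1, ..., y_n], where y_i >= d_i - x models the shortfall
c = np.concatenate(([1.0], np.full(n, penalty / n)))
A_ub = np.hstack([-np.ones((n, 1)), -np.eye(n)])   # -x - y_i <= -d_i
b_ub = -demands
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
print(f"sampled optimal first-stage capacity: {res.x[0]:.1f}")
```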
