About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

Intelligent Traffic Control in a Connected Vehicle Environment

Feng, Yiheng January 2015 (has links)
Signal control systems have experienced tremendous development in both hardware and control strategies over the past 50 years since the advent of the first electronic traffic signal control device. State-of-the-art real-time signal control strategies rely heavily on infrastructure-based sensors, including in-pavement loop detectors and video-based detectors, for data collection. With the emergence of connected vehicle technology, mobility applications utilizing vehicle-to-infrastructure (V2I) communications enable the intersection to acquire a much more complete picture of nearby vehicle states. Based on this new source of data, traffic controllers should be able to make "smarter" decisions. This dissertation investigates traffic signal control strategies in a connected vehicle environment, considering mobility as well as safety. A system architecture for connected vehicle based signal control applications, in both a simulation environment and the real world, has been developed. The proposed architecture can be applied to applications such as adaptive signal control; signal priority, including transit signal priority (TSP), freight signal priority (FSP), and emergency vehicle preemption; and the integration of adaptive signal control and signal priority. Within the framework, the trajectory awareness component processes and stores connected vehicle data from the Basic Safety Message (BSM). A lane-level intersection map that represents the geometric structure was developed. Combining the map with vehicle information from BSMs, connected vehicles can be located on the map. Some important questions specific to connected vehicles are addressed in this component. A geo-fencing area ensures that the roadside equipment (RSE) receives BSMs only from vehicles on the roadway and within Dedicated Short-Range Communications (DSRC) range. A mechanism that maintains the anonymity of vehicle trajectories to ensure privacy is also developed.
Vehicle data from the trajectory awareness component can be used as input to a real-time phase allocation algorithm that addresses the mobility aspect of intersection operations. The phase allocation algorithm applies a two-level optimization scheme based on the dual-ring controller, in which phase sequence and duration are optimized simultaneously. Two objective functions are considered: minimization of total vehicle delay and minimization of queue length. Because of the low penetration rate of connected vehicles, an algorithm that estimates the states of unequipped vehicles from connected vehicle data is developed to construct a complete arrival table for the phase allocation algorithm. A real-world intersection is modeled in VISSIM to validate the algorithms. Dangerous driving behaviors may occur if a vehicle is trapped in the dilemma zone, which represents one safety aspect of signalized intersection operation. An analytical model for estimating the number of vehicles in the dilemma zone (NVDZ) is developed on the basis of signal timing, arterial geometry, traffic demand, and driving characteristics. The analytical NVDZ model is integrated into the signal optimization to provide dilemma zone protection. Delay and NVDZ are formulated as a multi-objective optimization problem addressing efficiency and safety together. Examples show that delay and NVDZ are competing objectives that cannot be optimized simultaneously. An economic model is applied to find the minimum combined cost of the two objectives using a monetized objective function. In the connected vehicle environment, the NVDZ can be calculated from connected vehicle data, and dilemma zone protection is integrated into the phase allocation algorithm. Given the complex nature of traffic control systems, it is desirable to use traffic simulation to test and evaluate the effectiveness and safety of new models before implementing them in the field.
Developing such a simulation platform is therefore very important. This dissertation proposes a simulation environment in VISSIM that can be applied to different connected vehicle signal control applications. Both hardware-in-the-loop (HIL) and software-in-the-loop (SIL) simulation are used. The simulation environment mimics real-world complexity and follows the Society of Automotive Engineers (SAE) J2735 standard for DSRC messaging, so that models and algorithms tested in simulation can be applied in the field with minimal modification. Comprehensive testing and evaluation of the proposed models are conducted on two simulation networks and one field intersection. Traffic signal priority is an operational strategy that applies special signal timings to reduce the delay of certain types of vehicles. The common way of serving signal priority is the "first-come, first-served" rule, which may not be optimal in terms of total priority delay. A priority system that can serve multiple requests with different priority levels should perform better than the current method. Traditionally, coordination is treated in a different framework from signal priority; however, the objectives of coordination and signal priority are similar. In this dissertation, adaptive signal control, signal priority, and coordination are integrated into a unified framework. The signal priority algorithm generates a feasible set of optimal signal schedules that minimize the priority delay. The phase allocation algorithm treats this set as additional constraints and minimizes the total regular vehicle delay within it. Different test scenarios, including coordination requests, priority vehicle requests, and combinations of the two, are developed and tested.
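The phase allocation idea above — choosing green durations against an arrival table to minimize delay — can be sketched in miniature. The following toy (illustrative only, not the dissertation's dual-ring optimizer: the two-phase layout, unit saturation flow, and function names are assumptions) enumerates integer green splits for a two-phase cycle and scores each by an approximate queueing delay:

```python
def total_delay(arrivals, greens, saturation=1.0):
    """Approximate total delay (veh-seconds) over one cycle.

    arrivals[p][t] = vehicles arriving for phase p during second t.
    Phases receive green in order; queued vehicles discharge at
    `saturation` veh/s during their green and wait otherwise."""
    cycle = sum(greens)
    starts = [sum(greens[:p]) for p in range(len(greens))]
    delay = 0.0
    for p, arr in enumerate(arrivals):
        queue = 0.0
        g0, g1 = starts[p], starts[p] + greens[p]
        for t in range(cycle):
            queue += arr[t] if t < len(arr) else 0.0
            if g0 <= t < g1:                  # phase p shows green
                queue = max(0.0, queue - saturation)
            delay += queue                    # each queued vehicle waits 1 s
    return delay

def best_split(arrivals, cycle, min_green=5):
    """Enumerate integer two-phase green splits and keep the cheapest."""
    best = None
    for g in range(min_green, cycle - min_green + 1):
        d = total_delay(arrivals, [g, cycle - g])
        if best is None or d < best[1]:
            best = ([g, cycle - g], d)
    return best
```

Running `best_split([[0.8]*20, [0.2]*20], 20)` assigns the heavier approach the longer green, which is the qualitative behavior any delay-minimizing allocation should reproduce.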
22

A Network Design Framework for Siting Electric Vehicle Charging Stations in an Urban Network with Demand Uncertainty

Tan, Jingzi January 2013 (has links)
We consider a facility location problem with uncertain flow-based customer demands, which we refer to as the stochastic flow capturing location allocation problem (SFCLAP). Potential applications include siting farmers' markets, emergency shelters, convenience stores, advertising boards, and so on. This dissertation studies the siting of electric vehicle charging stations for maximum accessibility at the lowest cost. We start by placing charging stations under the assumptions of pre-determined demands and uniform candidate facilities. Because this model cannot handle different scenarios of customer demand, a two-stage flow capturing location allocation programming framework is then constructed to incorporate demand uncertainty as SFCLAP. Several extensions are built for various situations, such as secondary coverage and treating facility capacity as a variable. Capacitated stochastic programming models are then considered for the system-optimal and user-optimal cases. System-optimal models are introduced with variations that include outsourcing overflow and forming alliances within the system. User-optimal models incorporate users' choices into the system's objectives. After introducing these models, we present an approximation method for bounding the problem as well as an exact solution method, the L-shaped method. Because the computation time of the user-optimal case surges as the network expands, a scenario reduction method is introduced to obtain near-optimal results within a reasonable time. Several cases, including tests with different numbers of scenarios and different sample generation methods, are then run for model validation. Finally, a simulation study is conducted on the actual network of the state of Arizona to evaluate the performance of the proposed framework.
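The flow-capturing logic — a flow is served if any open station lies on its path, and siting decisions must hedge across demand scenarios — can be illustrated on a toy instance. This sketch is illustrative only (the dissertation uses stochastic programming solved by the L-shaped method, not brute-force enumeration; the data layout here is an assumption):

```python
from itertools import combinations

def expected_capture(open_sites, scenarios):
    """Expected captured demand. scenarios: list of (prob, flows), where
    flows is a list of (demand, path_nodes); a flow is captured if any
    node on its path hosts an open station."""
    total = 0.0
    for prob, flows in scenarios:
        captured = sum(d for d, path in flows if open_sites & set(path))
        total += prob * captured
    return total

def site_stations(candidates, costs, budget, scenarios):
    """First-stage siting by exhaustive search (small instances only):
    maximize expected captured flow subject to a construction budget."""
    best = (frozenset(), 0.0)
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            s = frozenset(subset)
            if sum(costs[c] for c in s) > budget:
                continue
            val = expected_capture(s, scenarios)
            if val > best[1]:
                best = (s, val)
    return best
```

On a three-candidate network with two demand scenarios, the search correctly picks the station that sits on the high-demand paths in every scenario rather than the one capturing only occasional flows.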
23

Statistical Analysis of Operational Data for Manufacturing System Performance Improvement

Wang, Zhenrui January 2013 (has links)
The performance of a manufacturing system relies on four types of elements: operators, machines, the computer system, and the material handling system. To ensure the performance of these elements, operational data containing various aspects of information are collected for monitoring and analysis. This dissertation focuses on operator performance evaluation and machine failure prediction. The proposed research is motivated by the following challenges in analyzing operational data: (i) the complex relationships between variables, (ii) implicit information important to failure prediction, and (iii) data with outliers and missing or erroneous measurements. To overcome these challenges, the following research has been conducted. To compare operator performance, a methodology combining regression modeling and a multiple-comparisons technique is proposed. The regression model quantifies and removes the complex effects of other impacting factors on operator performance. A robust zero-inflated Poisson (ZIP) model is developed to reduce the impact of the excessive zeros and outliers in the performance metric, i.e., the number of defects (NoD), on the regression analysis. The model residuals are plotted in non-parametric statistical charts for performance comparison, and the estimated model coefficients are used to identify under-performing machines. To detect temporal patterns in operational data sequences, an algorithm is proposed for detecting interval-based asynchronous periodic patterns (APP). The algorithm effectively and efficiently detects patterns through modified clustering and a convolution-based template-matching method. To predict machine failures based on covariates with erroneous measurements, a new method is proposed for statistical inference of the proportional hazards model under a mixture of classical and Berkson errors.
The method estimates the model coefficients with an expectation-maximization (EM) algorithm whose expectation step is carried out by Monte Carlo simulation. The model estimated with the proposed method improves the accuracy of inference on machine failure probability. The research presented in this dissertation provides a package of solutions to improve manufacturing system performance. The effectiveness and efficiency of the proposed methodologies have been demonstrated and justified with both numerical simulations and real-world case studies.
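For context on the ZIP model mentioned above: a zero-inflated Poisson mixes a point mass at zero (probability pi of a "structural" zero) with an ordinary Poisson count. A minimal sketch of its pmf and log-likelihood follows; it is illustrative only, since the dissertation's robust ZIP regression additionally links the parameters to covariates and downweights outliers:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: P(0) = pi + (1-pi)e^{-lam};
    P(k) = (1-pi) e^{-lam} lam^k / k! for k >= 1."""
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1 - pi) * pois if k == 0 else (1 - pi) * pois

def zip_loglik(data, lam, pi):
    """Log-likelihood of observed counts under ZIP(lam, pi)."""
    return sum(math.log(zip_pmf(k, lam, pi)) for k in data)
```

Maximizing `zip_loglik` over `(lam, pi)` — analytically or with an EM algorithm treating the zero type as latent — recovers the usual ZIP estimates.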
24

A DDDAS-Based Multi-Scale Framework for Pedestrian Behavior Modeling and Interactions with Drivers

Xi, Hui January 2013 (has links)
A multi-scale agent-based simulation framework is first proposed to analyze pedestrian delays at signalized crosswalks in large urban areas under different conditions. The aggregated-level model runs under normal conditions, with each crosswalk represented as an agent. Pedestrian counts collected near crosswalks are used to derive the binary choice probability from a utility maximization model. The derived probability function is used, based on the extended Adam's model, to estimate the average pedestrian delay for the corresponding traffic flow rate and traffic light control at each crosswalk. When an abnormality is detected, the detailed-level model, with each pedestrian as an agent, runs in the affected subareas. Pedestrian decision-making under abnormal conditions, physical movement, and crowd congestion are considered in the detailed-level model, which contains two sub-level models: a tactical sub-level model for pedestrian route choice and an operational sub-level model for pedestrian physical interactions. The tactical sub-level model is based on Extended Decision Field Theory (EDFT) to represent the psychological preferences of pedestrians with respect to different route choice options during their deliberation process after evaluating their current surroundings. At the operational sub-level, physical interactions among pedestrians and the resulting congestion are represented using a Cellular Automata model, in which pedestrians perform a biased random walk, without backward steps, toward the destination assigned by the tactical sub-level model. In addition, a Dynamic-Data-Driven Application Systems (DDDAS) architecture has been integrated with the proposed multi-scale simulation framework for abnormality detection and appropriate fidelity selection (between the aggregate-level and detailed-level models) during simulation execution.
Various experiments have been conducted under varying conditions with a scenario of the Chicago Loop area to demonstrate the advantage of the proposed framework in balancing computational efficiency and model accuracy. In addition to signalized intersections, pedestrian crossing behavior under unsignalized conditions, which has been recognized as a main cause of pedestrian-vehicle crashes, is also analyzed in this dissertation. To this end, an agent-based model is proposed to mimic pedestrian crossing behavior together with drivers' yielding behavior in the midblock crossing scenario. In particular, the pedestrian-vehicle interaction is first modeled as a two-player Pareto game in which strategies are evaluated from two aspects, delay and risk, for each agent (i.e., pedestrian and driver). These evaluations are then used by Extended Decision Field Theory to mimic the decision making of each agent based on his or her aggressiveness and physical capabilities. A base car-following algorithm from NGSIM is employed to represent vehicles' physical movement and the execution of drivers' decisions. A midblock segment of a typical arterial in the Tucson area is adopted to illustrate the proposed model, which has been implemented for this scenario in AnyLogic® simulation software. Using the constructed simulation, experiments have been conducted to analyze different behaviors of pedestrians and drivers and their mutual impact, i.e., the average pedestrian delay resulting from different crossing behaviors (aggressive vs. conservative), and the average braking distance as affected by driving aggressiveness and drivers' awareness of pedestrians. The results provide useful insights for improving pedestrian safety during midblock crossings.
To the best of our knowledge, the proposed multi-scale modeling framework for pedestrians and drivers is one of the first efforts to estimate pedestrian delays in an urban area with adaptive resolution based on demand and accuracy requirements, as well as to address pedestrian-vehicle interactions under unsignalized conditions.
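The operational sub-level rule described above — a biased random walk on a grid with no backward steps — can be sketched as a minimal cellular-automaton update. The neighborhood, move weights, and Manhattan-distance bias below are illustrative assumptions, not the dissertation's calibrated model:

```python
import random

def step(pos, goal, occupied, rng):
    """One CA update: move to a free neighbouring cell with a bias toward
    the goal; moves that increase Manhattan distance (back steps) are
    excluded, so distance to the goal never grows."""
    x, y = pos
    d0 = abs(goal[0] - x) + abs(goal[1] - y)
    options, weights = [], []
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]:
        nx, ny = x + dx, y + dy
        d = abs(goal[0] - nx) + abs(goal[1] - ny)
        if d > d0 or (nx, ny) in occupied:
            continue                              # backward or blocked
        options.append((nx, ny))
        weights.append(3.0 if d < d0 else 1.0)    # bias toward the goal
    return rng.choices(options, weights=weights)[0] if options else pos

def walk(start, goal, occupied=frozenset(), max_steps=500, seed=0):
    """Steps a single pedestrian until the goal; returns the step count,
    or None if max_steps is exhausted (e.g. when boxed in)."""
    rng, pos = random.Random(seed), start
    for t in range(max_steps):
        if pos == goal:
            return t
        pos = step(pos, goal, occupied, rng)
    return None
```

On an empty grid the walker reaches the goal quickly because every update either holds or shrinks the remaining distance; congestion effects enter through the `occupied` set when many walkers share the grid.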
25

Design of Statistically and Energy Efficient Accelerated Life Tests

Zhang, Dan January 2014 (has links)
Because of the need to produce highly reliable products and reduce product development time, Accelerated Life Testing (ALT) has been widely used in new product development as an alternative to traditional testing methods. The basic idea of ALT is to expose a limited number of test units of a product to harsher-than-normal operating conditions to expedite failures. Based on the failure time data collected in a short time period, an ALT model incorporating the underlying failure time distribution and life-stress relationship can be developed to predict product reliability under normal operating conditions. However, ALT experiments often consume a significant amount of energy because the harsher-than-normal operating conditions must be created and controlled by the test equipment, a challenge that may obstruct successful implementations of ALT in practice. In this dissertation, a new ALT design methodology is developed to improve both the reliability estimation precision and the efficiency of energy utilization in ALT. The methodology involves two types of ALT design procedures: a sequential optimization approach and a simultaneous optimization alternative with a fully integrated double-loop design architecture. In the sequential optimum ALT design procedure, the statistical estimation precision of the ALT experiment is improved first, followed by energy minimization through the optimum design of the controller for the test equipment. Alternatively, the statistical estimation precision and energy consumption of an ALT plan can be optimized simultaneously by solving a multi-objective optimization problem using a controlled elitist genetic algorithm. With either method, the resulting statistically and energy efficient ALT plan depends not only on the reliability of the product to be evaluated but also on the physical characteristics of the test equipment and its controller.
In particular, the statistical efficiency of each candidate ALT plan must be evaluated, and the corresponding controller capable of providing the required stress loadings must be designed and simulated in order to evaluate the total energy consumption of the plan. Moreover, the realistic physical constraints and tracking performance of the test equipment are also addressed in the proposed methods to improve the accuracy of the test environment. This dissertation provides mathematical formulations, computational algorithms, and simulation tools to handle such complex experimental design problems. To the best of our knowledge, this is the first methodological investigation of the experimental design of statistically precise and energy efficient ALT. The new methodology differs from most previous work on planning ALT in that (1) the energy consumption of an ALT experiment, which depends on both the designed stress loadings and the controllers, cannot be expressed as a simple function of the related decision variables; and (2) the associated optimum experimental design procedure involves tuning the parameters of the controller and evaluating the objective function via computer experiments (simulation). Numerical examples demonstrate the effectiveness of the proposed methodology in improving reliability estimation precision while minimizing total energy consumption in ALT. The robustness of the sequential optimization method is also verified through sensitivity analysis.
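The simultaneous procedure above searches for trade-offs between estimation variance and energy use. The core bookkeeping that any multi-objective search (including a controlled elitist genetic algorithm) relies on is a non-dominated filter over candidate (variance, energy) pairs; a generic sketch, with made-up candidate values, looks like this:

```python
def pareto_front(plans):
    """Keep the non-dominated (variance, energy) pairs: a plan is
    dominated if another plan is at least as good in both objectives
    and strictly better in one."""
    front = []
    for i, (v, e) in enumerate(plans):
        dominated = any(v2 <= v and e2 <= e and (v2 < v or e2 < e)
                        for j, (v2, e2) in enumerate(plans) if j != i)
        if not dominated:
            front.append((v, e))
    return sorted(front)
```

A decision maker (or a monetized objective, as in the NVDZ example of an earlier entry) then picks one point off the returned front; everything not on the front is strictly worse in both statistical precision and energy.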
26

Application of Optimization Techniques to Water Supply System Planning

Lan, Fujun January 2014 (has links)
Water supply system planning is concerned with the design of water supply infrastructure for distributing water from sources to users. Population growth, economic development, and diminishing freshwater supplies are posing growing challenges for water supply system planning in many urban areas. Besides the need to exploit water sources beyond conventional surface and groundwater supplies, such as reclaimed water, a systematic point of view must be taken for the efficient management of all potential water resources, so that issues of water supply, storage, treatment, and reuse are considered not separately but in the context of their interactions. The focus of this dissertation is to develop mathematical models and optimization algorithms for water supply system planning in which the interaction of different system components is explicitly considered. A deterministic nonlinear programming model is first proposed to decide pipe and pump sizes in a regional water supply system that satisfies given potable and non-potable user demands over a certain planning horizon. A branch-and-bound algorithm based on the reformulation-linearization technique is then developed for solving the model to global optimality. To handle uncertainty in the planning process, a stochastic programming (SP) model and a robust optimization (RO) model are successively proposed to deal with random water supply and demand and with the risk of facility failure, respectively. Both models consider the decision of building additional treatment and recharge facilities for recycling wastewater on-site. While the objective of the SP model is to minimize the total system design cost and expected operation cost, the RO model seeks a favorable trade-off between system cost and system robustness, where robustness is defined in terms of meeting given user demands under the worst-case failure mode.
The Benders decomposition method is then applied to solve both models by exploiting their special structure.
27

An Integrated Simulation, Learning and Game-theoretic Framework for Supply Chain Competition

Xu, Dong January 2014 (has links)
An integrated simulation, learning, and game-theoretic framework is proposed to address the dynamics of supply chain competition. The proposed framework is composed of 1) a simulation-based game platform, 2) a game solving and analysis module, and 3) a multi-agent reinforcement learning module. The simulation-based game platform supports multi-paradigm modeling, such as agent-based modeling, discrete-event simulation, and system dynamics modeling. The game solving and analysis module includes strategy refinement, data sampling, game solving, equilibrium conditions, solution evaluation, and comparative statics under varying parameter values. The learning module facilitates the decision making of each supply chain competitor in stochastic and uncertain environments under different learning strategies. The proposed framework is illustrated for a supply chain system under the newsvendor problem setting in several phases. In phase 1, an extended newsvendor competition considering both product sale price and service level under uncertain demand is studied. Assuming that each retailer has full knowledge of the other retailer's decision space and profit function, we derive the existence and uniqueness conditions of a pure-strategy Nash equilibrium with respect to price and service dominance under additive and multiplicative demand forms; we also compare the bounds and obtain various managerial insights. In phase 2, to extend the number of decision variables and enrich the payoff function of the phase 1 problem, a hybrid simulation-based framework involving system dynamics and agent-based modeling is presented, followed by a novel game solving procedure whose components include strategy refinement, data sampling, game solving, and performance evaluation.
Various numerical analyses based on the proposed procedure are presented, covering equilibrium accuracy, quality, and asymptotic/marginal stability. In phase 3, a multi-agent reinforcement learning technique is employed for competition scenarios under a partial/incomplete information setting, where each retailer can only observe the opponent's behaviors and adapt to them. Under this setting, we study different learning policies and learning rates with different decay patterns for the two competitors, and discuss convergence issues. Finally, the best learning strategies under different problem scenarios are devised.
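As a stylized illustration of the game solving module, consider two retailers setting prices under linear substitutable demand. The demand form, parameter values, and solve-by-iteration scheme are illustrative assumptions, not the dissertation's newsvendor model; the point is that iterating best responses finds the pure-strategy Nash equilibrium when each best-response map is a contraction:

```python
def best_response(p_other, base=100.0, b=2.0, c=1.0, cost=10.0):
    """Profit (p - cost) * (base - b*p + c*p_other) is concave in p,
    so the unique maximizer is the closed-form best response."""
    return (base + c * p_other + b * cost) / (2 * b)

def nash_by_iteration(p0=(0.0, 0.0), tol=1e-10, max_iter=1000):
    """Iterate simultaneous best responses to a fixed point: a price
    pair where neither retailer can profit by deviating."""
    p1, p2 = p0
    for _ in range(max_iter):
        n1, n2 = best_response(p2), best_response(p1)
        if abs(n1 - p1) + abs(n2 - p2) < tol:
            return n1, n2
        p1, p2 = n1, n2
    return p1, p2
```

With the default parameters the symmetric equilibrium price solves p = (100 + p + 20)/4, i.e. p = 40, and the iteration converges geometrically because the best-response slope c/(2b) = 0.25 is below one.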
28

Advanced Data Analysis and Test Planning for Highly Reliable Products

Zhang, Ye January 2014 (has links)
Accelerated life testing (ALT) has been widely used to collect failure time data of highly reliable products. Most parametric ALT models assume that the ALT data follow a specific probability distribution; however, the assumed distribution may not adequately describe the underlying failure time distribution. In this dissertation, a more generic method based on a phase-type (PH) distribution is presented to model ALT data. To estimate the parameters of such Erlang-Coxian-based ALT models, both a mathematical programming approach and a maximum likelihood method are developed. To the best of our knowledge, this dissertation demonstrates, for the first time, the potential of using PH distributions for ALT data analysis. To shorten the test time of ALT, degradation tests have been studied as a useful alternative. Among degradation tests, destructive degradation tests (DDT) have attracted much attention in reliability engineering. Moreover, some materials and products start degrading only after a random degradation initiation time that is often not observable. In this dissertation, two-stage delayed-degradation models are developed to evaluate the reliability of a product with a random initiation time. Fixed-effects and random-effects Gamma processes are considered for homogeneous and heterogeneous populations, respectively. An expectation-maximization algorithm is developed to facilitate the maximum likelihood estimation of the model parameters, and a bootstrap method is used to construct confidence intervals for the reliability index of interest. With an accelerated DDT (ADDT) model, an optimal test plan is presented to improve statistical efficiency. In designing the ADDT experiment, decision variables related to the experiment must be determined under constraints on limited resources, such as the number of test units and the total testing time.
In this dissertation, the number of test units and the stress levels are pre-determined in planning an ADDT experiment. The goal is to improve statistical efficiency by appropriately allocating the test units to different stress levels so as to minimize the asymptotic variance of the estimator of the p-quantile of the failure time. In particular, considering the random degradation initiation time, a three-level constant-stress destructive degradation test is studied, and a mathematical programming problem is formulated to minimize the asymptotic variance of the reliability estimate.
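For context on the phase-type approach in the abstract above: the Erlang distribution — the time to pass through k sequential exponential phases — is the simplest PH distribution, and finite mixtures of Erlangs can approximate any positive failure-time distribution. A minimal sketch (illustrative only; the Erlang-Coxian family used in the dissertation generalizes this structure):

```python
import math

def erlang_cdf(t, k, lam):
    """CDF of Erlang(k, lam): absorption time through k sequential
    exponential phases with rate lam each."""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t)**n / math.factorial(n)
                     for n in range(k))

def mixture_cdf(t, comps):
    """Finite Erlang mixture, a flexible PH subclass:
    comps is a list of (weight, k, lam) with weights summing to 1."""
    return sum(w * erlang_cdf(t, k, lam) for w, k, lam in comps)
```

Fitting the weights, phase counts, and rates of such a mixture to failure-time data (e.g. by maximum likelihood) gives a distribution-free alternative to assuming, say, a Weibull model.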
29

Analyzing Cyber-Enabled Social Movement Organizations: A Case Study with Crowd-Powered Search

Zhang, Qingpeng January 2012 (has links)
Advances in social media and social computing technologies have dramatically changed the way people interact, organize, and collaborate. The use of social media also makes large-scale data revealing human behavior accessible to researchers and practitioners. The analysis and modeling of social networks formed from relatively stable online communities have been extensively studied. Research on the structural and dynamical patterns of large-scale crowds motivated by accomplishing common goals, known as cyber movement organizations (CMO) or cyber-enabled social movement organizations (CeSMO), is however still limited to anecdotal case studies. This research is one of the first steps toward understanding CMO/CeSMO based on real data collected from online social media. My research focuses on an important type of CMO/CeSMO, crowd-powered search behavior (also known as human flesh search, HFS), in which a large number of Web users voluntarily gather to find out the truth of an event or information about a person that could not be identified by a single person or by simple online searches. In this research, I have collected a comprehensive HFS dataset. I first introduce the phenomenon of HFS and review the study of online social groups and communities. I then present empirical studies of both individual HFS episodes and aggregated HFS communities, unveiling their unique topological properties. Based on the empirical findings, I propose two models to simulate the evolution and topology of individual HFS networks. I conclude the dissertation with a discussion of future research on CMO/CeSMO.
30

Perceptions of Model-Based Systems Engineering As the Foundation for Cost Estimation and Its Implications to Earned Value Management

Balram, Sara January 2012 (has links)
Model-based systems engineering (MBSE) is an emerging systems engineering methodology that, by replacing traditional document-centric systems engineering methods, has the potential to reduce project costs, time, effort, and risk. The potential benefits of applying MBSE to a project are widely discussed but remain largely anecdotal. Throughout the systems engineering and project management industries, there is a strong desire to quantify these benefits, particularly within organizations looking to apply MBSE to their complex system-of-systems projects. The objective of this thesis was to quantify the benefits that model-based systems engineering presents, particularly in terms of project cost estimates. To quantify this qualitative data, statistical analysis was conducted on perceptions collected from industry experts and professionals. The results of this work identify future research that should be completed in order to make MBSE an industry-wide standard for the development and estimation of projects.
