631

Exact simulation algorithms with applications in queueing theory and extreme value analysis

Liu, Zhipeng January 2020 (has links)
This dissertation focuses on the development and analysis of exact simulation algorithms with applications in queueing theory and extreme value analysis. We first introduce the first algorithm that samples max_{n≥0} {S_n − n^α}, where S_n is a mean-zero random walk and n^α with α ∈ (1/2, 1) defines a nonlinear boundary. We apply this algorithm to construct the first exact simulation method for the steady-state departure process of a GI/GI/∞ queue where the service time distribution has infinite mean. Next, we consider the random field M(t) = sup_{n≥1} {−log A_n + X_n(t)}, t ∈ T, for a set T ⊂ ℝ^m, where (X_n) is an iid sequence of centered Gaussian random fields on T and 0 < A_1 < A_2 < ... are the arrivals of a general renewal process on (0, ∞), independent of (X_n). In particular, a large class of max-stable random fields with Gumbel marginals has such a representation. Assume that the number of function evaluations needed to sample X_n at d locations t_1, ..., t_d ∈ T is c(d). We provide an algorithm which samples M(t_1), ..., M(t_d) with complexity O(c(d)^{1+o(1)}), as measured in the L_p-norm sense for any p ≥ 1. Moreover, if X_n has an a.s. converging series representation, then M can be a.s. approximated with error δ uniformly over T and with complexity O(((1/δ) log(1/δ))^{1/α}), where α relates to the Hölder continuity exponent of the process X_n (so, if X_n is Brownian motion, α = 1/2). In the final part, we introduce a class of unbiased Monte Carlo estimators for multivariate densities of max-stable fields generated by Gaussian processes. Our estimators take advantage of recent results on the exact simulation of max-stable fields, combined with identities studied in the Malliavin calculus literature and ideas developed in the multilevel Monte Carlo literature. Our approach allows estimating multivariate densities of max-stable fields with precision ε at a computational cost of order O(ε^{−2} log log log(1/ε)).
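As a rough illustration of the record representation M(t) = sup_{n≥1} {−log A_n + X_n(t)} used above (not the dissertation's exact algorithm, which avoids truncation error through a careful stopping rule), the sketch below samples a truncated version of such a field on a grid; the Poisson renewal process, the squared-exponential covariance, and the truncation level are illustrative assumptions.

    import numpy as np

    def sample_truncated_field(ts, n_terms=200, rate=1.0, length_scale=0.5, seed=0):
        """Approximate M(t) = sup_n { -log A_n + X_n(t) } on a grid ts by truncating
        the supremum at n_terms (illustration only; the exact algorithm does not
        rely on such a truncation)."""
        rng = np.random.default_rng(seed)
        ts = np.asarray(ts, dtype=float)
        # Renewal arrivals 0 < A_1 < A_2 < ...; a Poisson process here.
        arrivals = np.cumsum(rng.exponential(1.0 / rate, size=n_terms))
        # Centered Gaussian fields X_n with a squared-exponential covariance on ts.
        cov = np.exp(-0.5 * (ts[:, None] - ts[None, :]) ** 2 / length_scale ** 2)
        chol = np.linalg.cholesky(cov + 1e-10 * np.eye(len(ts)))
        X = rng.standard_normal((n_terms, len(ts))) @ chol.T
        # Pointwise supremum over the truncated sequence of shifted fields.
        return np.max(-np.log(arrivals)[:, None] + X, axis=0)

    # Example: sample the field at 50 locations in [0, 1].
    grid = np.linspace(0.0, 1.0, 50)
    M = sample_truncated_field(grid)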
632

Debriefing For Meaningful Learning: Fostering Development of Clinical Reasoning Through Simulation

Dreifuerst, Kristina Thomas 25 February 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A critical need for faculty, a shortage of clinical sites, and an emphasis on quality and safety initiatives drive the increasing use of simulation in nursing education. Debriefing is an essential component of simulation, yet faculty are not consistently prepared to facilitate it such that meaningful learning, demonstrated through clinical reasoning, occurs from the experience. The purpose of this exploratory, quasi-experimental, pre-test/post-test study was to discover the effect of a simulation teaching strategy, Debriefing for Meaningful Learning (DML), on the development of clinical reasoning in nursing students. Clinical reasoning was measured in 238 participant students from a Midwestern university school of nursing taking an adult health course that uses simulation. Participants were assigned to either the experimental or the control group, where the DML was compared to customary debriefing using the Health Sciences Reasoning Test (HSRT) before and after the debriefing experience, and the Debriefing Assessment for Simulation in Healthcare©–Student Version (DASH©–SV), with four supplemental questions about the DML (DMLSQ) process, during the post-debriefing assessment. This research sought to understand whether the DML debriefing strategy positively influenced the development of clinical reasoning skills in undergraduate nursing students, as compared to usual and customary debriefing. The data revealed a statistical difference between total mean test scores measured by the HSRT. There was, additionally, statistical significance in the change in scores between pre-test and post-test for those who used the DML as compared to the control. There was also a difference in students' perception of the quality of the debriefing measured by the DASH©–SV, with the DML rated statistically higher than usual debriefing. Finally, there was a significant correlation, demonstrated through regression analysis, between the change in HSRT scores and students' perception of quality debriefing and the use of the DML. This study contributes to the growing body of knowledge about simulation pedagogy, provides tools for use in debriefing, and informs faculty on best practices in debriefing.
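The analysis described above (a between-group comparison of HSRT change scores plus a regression of score change on perceived debriefing quality) can be sketched as follows; the file and column names are hypothetical stand-ins, not the study's dataset or its exact statistical procedure.

    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    # Hypothetical data set; columns mirror the measures named in the abstract.
    df = pd.read_csv("hsrt_scores.csv")              # one row per participant
    df["hsrt_change"] = df["hsrt_post"] - df["hsrt_pre"]

    # Between-group comparison of change scores (DML vs. customary debriefing).
    dml = df.loc[df["group"] == "DML", "hsrt_change"]
    control = df.loc[df["group"] == "control", "hsrt_change"]
    t_stat, p_value = stats.ttest_ind(dml, control, equal_var=False)

    # Regression of HSRT change on perceived debriefing quality (DASH-SV total).
    model = smf.ols("hsrt_change ~ dash_sv_total + C(group)", data=df).fit()
    print(t_stat, p_value)
    print(model.summary())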
633

Automatic Modeling and Simulation of Networked Components

Bruce, Nathaniel William January 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Testing and verification are essential to safe and consistent products. Simulation is a widely accepted method used for verification and testing of distributed components. Generally, one of the major hurdles in using simulation is the development of detailed and accurate models. Since there are time constraints on projects, fast and effective methods of simulation model creation are essential for testing. This thesis proposes to address these issues by presenting a method to automatically generate a simulation model and run a random walk simulation using that model. The method is automated so that a modeler spends as little time as possible creating a simulation model, and the errors normally associated with manual modeling are eliminated. The simulation is automated so that a human can focus attention on the device being tested. The communications transactions between two nodes on a network are recorded as a trace file. This trace file is used to automatically generate a finite state machine model. The model can be adjusted by a designer to add missing information and then simulated in real time using a software-in-the-loop approach. The innovations in this thesis include the adaptation of a synthesis method for use in simulation, the introduction of a random simulation method, and the introduction of a practical evaluation method for two finite state machines. Test results indicate that nodes can be adequately replaced by models generated automatically by these methods. In addition, model construction time is reduced compared with building models from scratch.
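A minimal sketch of the general workflow described above (derive a finite state machine from a recorded trace of transactions, then exercise it with a random walk); the trace format and the transition rule are simplifications assumed for illustration, not the thesis's synthesis method.

    import random
    from collections import defaultdict

    def build_fsm(trace):
        """Derive a state machine from a trace of (request, response) pairs:
        each observed request becomes a state, and consecutive transactions
        define its outgoing transitions (a deliberate simplification)."""
        transitions = defaultdict(set)
        for (req, _), (next_req, next_resp) in zip(trace, trace[1:]):
            transitions[req].add((next_req, next_resp))
        return transitions

    def random_walk(fsm, start, steps=10, seed=0):
        """Replay the model by choosing uniformly among outgoing transitions."""
        rng = random.Random(seed)
        state, replayed = start, []
        for _ in range(steps):
            choices = list(fsm.get(state, []))
            if not choices:
                break
            state, resp = rng.choice(choices)
            replayed.append((state, resp))
        return replayed

    # Example trace of network transactions between two nodes.
    trace = [("GET_STATUS", "OK"), ("SET_MODE", "ACK"), ("GET_STATUS", "OK"),
             ("READ_SENSOR", "42"), ("SET_MODE", "ACK")]
    print(random_walk(build_fsm(trace), start="GET_STATUS"))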
634

Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models

Bai, Yuanlu January 2023 (has links)
Stochastic simulation has been widely applied in many domains. More recently, however, the rapid surge of sophisticated problems such as safety evaluation of intelligent systems has posed various challenges to conventional statistical methods. Motivated by these challenges, in this thesis we develop novel methodologies with theoretical guarantees and numerical applications to tackle them from different perspectives. In particular, our works can be categorized into two areas: (1) rare-event estimation (Chapters 2 to 5), where we develop approaches to estimating the probabilities of rare events via simulation; and (2) model calibration (Chapters 6 and 7), where we aim at calibrating the simulation model so that it is close to reality. In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery in large deviations and sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstration of our approach. In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to account for all of them via mixing to achieve efficiency. We argue that the traditional analysis recipe could suffer from intrinsic looseness by using relative error as an efficiency criterion. Thus, we propose the new efficiency notion to tighten this gap. In particular, we show that under the standard Gartner-Ellis large deviations regime, an importance sampler that uses only the most significant dominating points is sufficient to attain this efficiency notion. In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivate one to look at naive estimators. In this chapter, we construct confidence intervals for the target probability using this naive estimator from various techniques, and then analyze their validity as well as tightness, respectively quantified by the coverage probability and relative half-width. In Chapter 5, we propose the use of extreme value analysis, in particular the peak-over-threshold method that is popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data on which to fit a generalized Pareto distribution. We test this idea on several numerical examples. The results show that, in the absence of efficient variance reduction schemes, it appears to offer potential benefits to enhance crude Monte Carlo estimates. In Chapter 6, we investigate a framework to develop calibration schemes in parametric settings that satisfies rigorous frequentist statistical guarantees via a basic notion that we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature extraction-then-aggregation approach to construct these sets that target multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to calibration of a limit order book market simulator. In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling. The main computation machinery in this integrated methodology amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, and numerical performance on parameter calibration and downstream decision and risk evaluation tasks.
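As an illustration of the peak-over-threshold idea in Chapter 5 (not the thesis's exact procedure), the sketch below fits a generalized Pareto distribution to the exceedances of crude Monte Carlo output over a high threshold and extrapolates a small tail probability; the lognormal toy model and the 99th-percentile threshold are assumptions made for the example.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)

    # Crude Monte Carlo samples of some performance measure (toy model).
    samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

    # Peak-over-threshold: keep exceedances above a high empirical quantile.
    u = np.quantile(samples, 0.99)
    exceedances = samples[samples > u] - u

    # Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
    shape, _, scale = genpareto.fit(exceedances, floc=0.0)

    # Extrapolate the rare-event probability P(X > x) for x far beyond the data.
    x = 10 * u
    p_rare = np.mean(samples > u) * genpareto.sf(x - u, shape, loc=0.0, scale=scale)
    print(p_rare)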
635

Modeling Spatiotemporal Dependence for Integrated Climate Risk Assessment of Energy Infrastructure Systems

Amonkar, Yash Vijay January 2023 (has links)
The quality of modern life is intrinsically tied to the development and maintenance of infrastructure systems. Modern energy and electricity infrastructure systems have high reliability requirements, with people expecting power at the flip of a switch. The complex market structure and public-private partnerships at multiple levels in power generation and transmission systems make ensuring high reliability even more difficult. The 21st century brings with it multiple challenges and opportunities within these sectors. A large portion of the infrastructure fleet, like dams and fossil fuel generation plants, is old and needs replacement. Further, the decarbonization of the power sector is poised to result in the inclusion of large amounts of variable renewable energy sources, thereby introducing stochasticity in supply. The research presented in this dissertation seeks to assess and improve energy infrastructure resilience against regional spatiotemporal climate risk in the face of the upcoming decarbonization of the power sector. This dissertation seeks to develop our understanding of climate risk to energy infrastructure systems at a regional level. The analysis is focused on the identification of organized modes of climate variability that lead to space-time clustering of risk. These investigations are accompanied by specific case studies in the contiguous United States and are applicable to electricity grids and river basins. Overall, I focus on the ability to simulate and predict extreme climate events that pose reliability and failure risks to energy infrastructure systems. Since such events are rare, I propose methods that establish event exceedance probabilities accounting for their underlying uncertainties. In Chapter I, I present a novel statistical simulation model that can produce realistic, synthetic realizations of hydroclimatic fields across a large region. This k-nearest neighbor-based space-time simulator can be applied to single or multiple hydroclimatic fields across a large domain. The algorithm facilitates the estimation of the probability of extreme events that are not well represented in relatively short observational records. I apply this algorithm to wind and solar fields across the Texas Interconnection. Many regions plan to integrate more wind and solar generation into the energy grid, increasing power supply variability that can pose risks of under-supply. This simulation tool facilitates the estimation of the probability of regional wind and solar energy "droughts" and hence allows for the estimation of the storage needed to achieve desired supply-side reliability. In Chapter II, I present a clustering-based variant of the simulator developed in Chapter I. I show how the algorithm developed in Chapter I is a special case of a general class of algorithms. In Chapter II, I generalize the algorithm by introducing clustering on the neighbor likelihoods, thereby allowing for the identification of sub-regions with different state-space evolution characteristics. This allows for the application of the generalized algorithm to cases with greater heterogeneity, for example, increased temporal resolution. The clustering-based k-nearest neighbor space-time simulator was developed to generate synthetic simulations of wind-solar data at an hourly timescale. I present an application of this algorithm to hourly wind-solar data across the Texas Interconnection, and demonstrate its use in estimating the underlying uncertainty and risk faced by power producers in entering short-term power supply contracts. In Chapter III, I present a retrospective analysis of inferred energy demand trends across the contiguous United States. Future net-zero scenarios generally require replacing all fossil-fuel heating with electric heat, thereby precipitating higher electricity peak loads during winter. Assuming 100% penetration of efficient electric space heating and cooling, this chapter carries out a spatially explicit trend analysis of temperature-based proxies of electricity demand over the past 70 years. As expected, annual mean heating demand decreases and annual mean cooling demand increases over most of the contiguous US. Peak thermal load is generally dominated by heating and shows large inter-annual and decadal variability, thus far not displaying statistically significant decreasing trends. In the south, peak cooling demand has started to dominate peak demand, but the possibility of an occasional high peak heating demand cannot be discounted. Conversely, in the north, the average thermal loads are declining while the peak thermal loads are not. This points to the need for an improved pre-season forecast of peak winter heating loads. In Chapter IV, I present a method for the diagnosis of low-frequency climate variability from multi-site data, which leads to spatiotemporal clustering of flooding risk at a regional level. Disruptions to energy infrastructure systems are often caused by flooding, and the characterization of climate risk to energy infrastructure from flooding is explored in this context. The approach is demonstrated using the Ohio River Basin as a case study. I show that the dominant timescales of flood risk within the Ohio River Basin are in the interannual (6-7 years), decadal (11-12 years), and long-term (secular) scales, with different sub-regions responding to different climate forcings. These leading modes are associated with the El Niño-Southern Oscillation and secular trends. Further, the secular trend points to an east-to-west shift in flood incidence and changes in the storm track, which are consistent with certain climate change projections. Overall, the results point to the presence of compound climate risk inherent at regional levels, with the low-frequency climate variability translating into periods of increased and decreased flood risk, which all stakeholders should consider.
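A stripped-down sketch of a k-nearest-neighbor space-time resampler in the spirit of Chapter I; the lag-1 state definition, rank-decaying weights, and synthetic data below are common simplifications assumed for illustration rather than the dissertation's algorithm.

    import numpy as np

    def knn_space_time_simulate(fields, n_days, k=10, seed=0):
        """Generate a synthetic sequence of daily spatial fields by resampling:
        at each step, find the k historical days closest to the current field
        (Euclidean distance over all sites) and jump to the day following one
        of them, with probabilities decaying in rank."""
        rng = np.random.default_rng(seed)
        fields = np.asarray(fields, dtype=float)   # shape (n_hist_days, n_sites)
        weights = 1.0 / np.arange(1, k + 1)
        weights /= weights.sum()
        current = rng.integers(0, len(fields) - 1)
        out = [fields[current]]
        for _ in range(n_days - 1):
            dists = np.linalg.norm(fields[:-1] - fields[current], axis=1)
            neighbors = np.argsort(dists)[:k]      # indices of the k closest days
            current = rng.choice(neighbors, p=weights) + 1   # successor day
            out.append(fields[current])
        return np.stack(out)

    # Example: three years of daily capacity factors at 25 sites (synthetic).
    hist = np.random.default_rng(1).uniform(0.0, 1.0, size=(3 * 365, 25))
    synthetic = knn_space_time_simulate(hist, n_days=365)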
636

Exact Simulation Methods for Functionals of Constrained Brownian Motion Processes and Stochastic Differential Equations

Somnath, Kumar 19 September 2022 (has links)
No description available.
637

A FRAMEWORK FOR ENHANCING PEDESTRIAN SERVICE AT SIGNALIZED INTERSECTIONS

Abdullah Jalal Nafakh (15353704) 27 April 2023 (has links)
Historically, roadway performance measures have focused almost exclusively on vehicular movement. In most urban settings, pedestrian movements typically outnumber vehicular movements significantly; however, there has historically been no way to collect such data at scale in a systematic manner. With the widespread introduction of cameras for monitoring vehicular flow, there is an opportunity to leverage this infrastructure to acquire insights into the patterns and trends of pedestrian activities at signalized intersections in an automated and systematic manner. Such data and performance measures are critical inputs for detailed analysis of pedestrian movements. Overall, addressing this issue is vital for transportation agencies that seek to provide equitable treatment of all transportation system users, including vulnerable road users. This dissertation addresses the gap in the literature regarding detailed characterization of pedestrian movement patterns and trends. The dissertation leverages data from signalized intersection cameras to (1) quantify the required duration for the pedestrian walk interval based on pedestrian volume and geometric features of the intersection, (2) carry out time series analysis to acquire insights on pedestrian demand patterns and the influential variables, and (3) build machine learning algorithms to accurately predict pedestrian volumes and tie them to signal timing, to enhance service for all roadway users.

The first study provides quantitative guidance for walk-interval time selection. This part reports on 1,500 pedestrian movement observations from 12 signalized intersections with varying pedestrian demand, pedestrian storage areas, and pedestrian push-button locations. These data were used to develop a model predicting start-up time with an R2 of 0.89. The study concludes by presenting a quantitative table with four timing categories, ranging from negligible volume to high volume, and corresponding appropriate durations for the pedestrian walk interval, based on the demand per cycle, the storage area for pedestrians, and the offset of the pedestrian push-button from the crosswalk.

The second study describes several scalable techniques for measuring and analyzing the movement of pedestrians on a typical university campus. Approximately 35.6 million pedestrian movements over 19 months were tabulated in 15-minute counts of pedestrian volumes by intersection. The counts are used to evaluate the dependency of pedestrian activity on select explanatory variables at both the network and intersection levels at each time step for the entire analysis period. The study reports on time series correlation and cross-correlation and measures the time dependency between pedestrian activities and influential factors such as the academic calendar, football games, basketball games, and graduation ceremonies. It provides a comprehensive understanding of the factors that most influence pedestrian volumes at intersections.

The third study presents a data-driven approach to predict pedestrian volume per intersection quadrant at 15-minute intervals and to connect this information to signal timing. Machine learning random forest and XGBoost classification models were trained on a large dataset of pedestrian counts consisting of approximately 2.6 million observations collected over 19 months at 13 exclusive pedestrian service intersections. The predicted pedestrian volumes were then categorized per the pedestrian walk-interval categories to provide optimal signal timing for each intersection quadrant, thus enabling potential dynamic pedestrian signal timing at exclusive service intersections. The results of this study showed that the developed models accurately predict pedestrian volumes per 15-minute interval for each quadrant of an intersection, with a high degree of precision and a prediction accuracy of 82.3%. Signal timing optimization based on predicted pedestrian volume can significantly improve pedestrian mobility and maximize traffic flow.

The findings of this study provide valuable insights for traffic engineers and planners interested in developing and deploying dynamic pedestrian signal timing systems. It is a practical and effective solution for improving mobility for all roadway users at intersections with exclusive pedestrian service.
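A simplified sketch of the prediction step in the third study (a random forest classifier mapping calendar and location features to 15-minute pedestrian volume categories); the file, feature names, and category thresholds are hypothetical placeholders, not the dissertation's dataset or tuned models.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical 15-minute counts per intersection quadrant with calendar features.
    df = pd.read_csv("pedestrian_counts_15min.csv")
    df["volume_category"] = pd.cut(df["ped_count"],
                                   bins=[-1, 5, 25, 75, float("inf")],
                                   labels=["negligible", "low", "medium", "high"])

    # Features assumed to be numerically encoded (e.g., quadrant as 1-4).
    features = ["hour", "day_of_week", "quadrant", "is_game_day", "is_class_day"]
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["volume_category"], test_size=0.2, random_state=0)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))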
638

A Governance Reference Model For Service-oriented Architecture-based Common Data Initialization: A Case Study Of Military Simulation Federation Systems

Lanman, Jeremy Thomas 01 January 2010 (has links)
Military simulation and command and control federations have become large, complex distributed systems that integrate with a variety of legacy and current simulations and real command and control systems, locally as well as globally. As these systems become increasingly complex, so does the data that initializes them. This increased complexity has introduced a major problem in data initialization coordination, which has been handled by many organizations in various ways. Service-oriented architecture (SOA) solutions have been introduced to promote easier data interoperability through the use of standards-based reusable services and common infrastructure. However, current SOA-based solutions do not incorporate formal governance techniques to drive the architecture in providing reliable, consistent, and timely information exchange. This dissertation identifies the need to establish governance for common data initialization service development oversight, presents current research and applicable solutions that address some aspects of SOA-based federation data service governance, and proposes a governance reference model for the development of SOA-based common data initialization services in military simulation and command and control federations.
639

Towards Calibration Of Optical Flow Of Crowd Videos Using Observed Trajectories

Elbadramany, Iman K 01 January 2011 (has links)
The need exists for a quantitative method for validating crowd simulations. One approach is to use optical flow of videos of real crowds to obtain velocities that can be used for comparison to simulations. Optical flow, in turn, needs to be calibrated to be useful. It is essential to show that optical flow velocities obtained from crowd videos can be mapped to the spatially averaged velocities of the observed trajectories of crowd members, and to quantify the extent of the correlation of the results. This research investigates methods to uncover the best conditions for a good correlation between optical flow and the average motion of individuals in crowd videos, with the aim that this will help in the quantitative validation of simulations. The first approach was to use a simple linear proportionality relation, with a single coefficient, alpha, between the velocity vector of the optical flow and the observed velocity of crowd members in a video or simulation. Since there are many variables that affect alpha, an attempt was made to find the best possible conditions for determining alpha by varying experimental and optical flow settings. The measure of a good alpha was chosen to be that alpha does not vary excessively over a number of video frames. The best conditions for a low coefficient of variation of alpha using the Lucas-Kanade optical flow algorithm were found to be when a larger aperture of 15x15 pixels was used, combined with a smaller threshold. Adequate results were found at a cell size of 40x40 pixels; the improvement in detecting details when smaller cells are used did not reduce the variability of alpha, and required much more computing power. Reduction in the variability of alpha can be obtained by spreading the tracked location of a crowd member from a pixel into a rectangle. The Particle Image Velocimetry optical flow algorithm had better correspondence with the velocity vectors of manually tracked crowd members than results obtained using the Lucas-Kanade method. Here, also, it was found that 40x40 pixel cells were better than 15x15. A second attempt at quantifying the correlation between optical flow and actual crowd member velocities was studied using simulations. Two processes were researched, both of which utilized geometrical correction of the perspective distortion of the crowd videos. One process geometrically corrects the video and then obtains optical flow data. The other obtains optical flow data from the video and then geometrically corrects the data. The results indicate that the first process worked better. Correlation was calculated between sets of data obtained from the average of twenty frames; this was found to be higher than correlations calculated between the velocities of cells in each pair of frames. Finally, an experiment was carried out to predict crowd tracks using optical flow and a calculated parameter, beta, which seems to give promising results.
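A minimal sketch of the calibration idea discussed above (Lucas-Kanade optical flow with a 15x15 aperture, then a single proportionality coefficient alpha relating flow velocities to observed velocities of tracked crowd members); the video file, tracked points, and observed velocities are placeholders.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("crowd.mp4")                 # placeholder video file
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Manually tracked crowd-member positions and velocities (placeholders).
    tracked_pts = np.array([[[120.0, 240.0]], [[300.0, 180.0]]], dtype=np.float32)
    tracked_vel = np.array([[2.0, 0.5], [1.5, -0.3]])   # observed pixels per frame

    ok, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Lucas-Kanade flow with a 15x15 aperture, as in the settings discussed above.
    flow_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, tracked_pts, None, winSize=(15, 15), maxLevel=2)
    flow_vel = (flow_pts - tracked_pts).reshape(-1, 2)

    # Single-coefficient calibration: least-squares alpha with observed ~ alpha * flow.
    alpha = np.sum(flow_vel * tracked_vel) / np.sum(flow_vel * flow_vel)
    print("alpha =", alpha)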
640

Simulation Methods at Epiroc within Underground Rock Excavation : Simuleringsmetoder vid Epiroc inom underjordsdivisionen.

Granberg, Simon January 2022 (has links)
Epiroc Underground Rock Excavation is a manufacturer of machines that operate in underground mines. The machines are complex and advanced, with different functions and systems that interact with each other. Simulation provides a picture of the machine design, its functions, and how the real-world result would be. This method enables an iterative way of working with several different design solutions. Simulation is a crucial method for reducing cost and time-to-market. Recently, Epiroc has taken initiatives regarding development and continuous improvement within the underground division. A lack of documentation about which simulation methods are used, and for what purposes, makes it difficult to work with development and improvements. The long-term target for Epiroc is to move the physical verification of machine systems, functions, and components to a digital environment. The mission of this project has been to collect and identify the simulation methods used within the division and to collect key information for future work. The information has been collected through interviews with simulation experts, and the identified information is presented to give an understanding of the current state. Furthermore, waste and improvement areas are highlighted within this thesis. / Epiroc's underground division manufactures machines that are used in underground work. The machines are complex and advanced, with many different functions and systems that must interact with each other. Simulations give a first view of how the design and the machine's functions will later work in reality. Simulations enable an iterative way of working with several different design solutions in order to arrive at the best one. This is crucial for keeping costs down and reducing time-to-market. Recently, initiatives have been taken to work on, among other things, development and continuous improvement within the underground division. A lack of documentation about which simulation methods are used, and for what purposes, makes it difficult to pursue development work on new products. The long-term goal for Epiroc is that a machine's parts, systems, and functions can be verified in a virtual environment instead of on a physical machine. The goal of this project has been to collect and identify the simulation methods used within the division today, in order to prepare for future work. This has been done by conducting interviews with simulation experts and through an analysis of the current state. Furthermore, waste and improvement areas have also been identified.
