531 |
Modeling phosphor space charge in alternating-current thin-film electroluminescent devices / Keir, Paul D., 11 August 1995
This thesis presents the development of three models for simulating space charge generation in the phosphor layer of alternating-current thin-film electroluminescent (ACTFEL) devices, together with simulation results for each. First, a single sheet charge model is developed and simulated.
The single sheet charge model simplifies the problem of modeling an arbitrary distribution of space charge across the phosphor layer by lumping all of the space charge into a single sheet of charge at a specified location. In
this model and all subsequent models, space charge creation is assumed to occur by
field emission from bulk traps or by impact ionization of deep-level traps. A fairly
exhaustive parametric variation study of the single sheet charge model is performed
and the results are presented and discussed. The results show space charge effects
that are quite dependent on several parameters such as the number of bulk traps in
the phosphor layer, the location of the sheet of charge, the capture efficiency for space
charge annihilation, and the characteristic field for impact ionization of the deep-level
traps. The second model considered is a logical extension of the single sheet charge
model, the two sheet charge model, which models the space charge distribution as
two sheets of charge rather than one. This model has potential application in the
simulation of ACTFEL devices which exhibit large and/or symmetrical space charge
effects. The final model developed is an equivalent circuit/SPICE version of the single sheet charge model. In fact, two SPICE models are developed: one for space charge creation by field emission and one for creation by impact ionization of deep-level traps. Two models are required because the functional dependencies governing space charge creation differ between the two mechanisms. SPICE simulation results showing overshoot are given for the field emission equivalent circuit. / Graduation date: 1996
|
532 |
Signal decompositions using trans-dimensional Bayesian methods / Roodaki, Alireza, 14 May 2012
This thesis addresses the challenges encountered in signal decomposition problems with an unknown number of components in a Bayesian framework. In particular, we focus on the issue of summarizing the variable-dimensional posterior distributions that typically arise in such problems. Such posterior distributions are defined over a union of subspaces of differing dimensionality, and can be sampled using modern Monte Carlo techniques, for instance the increasingly popular Reversible-Jump MCMC (RJ-MCMC) sampler. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract component-specific parameters from them. One of the main challenges to be addressed to this end is the label-switching issue, which is caused by the invariance of the posterior distribution to permutation of the components. We propose a novel approach to this problem, which consists in approximating the complex posterior of interest by a "simple" but still variable-dimensional parametric distribution. We develop stochastic EM-type algorithms, driven by the RJ-MCMC sampler, to estimate the parameters of the model through the minimization of a divergence measure between the two distributions. Two signal decomposition problems are considered to show the capability of the proposed approach both for relabeling and for summarizing variable-dimensional posterior distributions: the classical problem of detecting and estimating sinusoids in white Gaussian noise on the one hand, and a particle counting problem motivated by the Pierre Auger project in astrophysics on the other.
|
533 |
Behavioral Level Simulation Methods for Early Noise Coupling Quantification in Mixed-Signal Systems / Lundgren, Jan, January 2005
In this thesis, noise coupling simulation is introduced at the behavioral level. Methods and models for simulating on-chip noise coupling at a behavioral level in a design flow are presented and verified for accuracy and validity. Designs of electronic systems are becoming denser, and more and more mixed-signal systems such as Systems-on-Chip (SoC) are being devised. This raises problems when the electronic components start to interfere with each other. Often, digital components disturb analog components, introducing noise into the system that degrades performance or even causes functional errors. Today, these effects can only be simulated at a very late stage in the design process, causing large design iterations and increased cost when designers must go back and alter decisions made at a much earlier stage. This is why this work focuses on extracting noise coupling simulation models that can be used at a very early design stage, such as the behavioral level, and then followed through the subsequent design stages. To realize this, SystemC is selected as the platform and implementation example for the behavioral level models. SystemC supports design refinement, which means that as designs are refined across design levels, the noise coupling models can be refined along with them to suit the current design. This new way of thinking in primarily mixed-signal design is called Behavioral level Noise Coupling (BeNoC) simulation; it shows great promise for reducing the cost of design iterations due to component cross-talk, and it simplifies the work of mixed-signal system designers. / Electronics Design Division
|
534 |
Specification and Automatic Generation of Simulation Models with Applications in Semiconductor Manufacturing / Mueller, Ralph, 21 May 2007
The creation of large-scale simulation models is a difficult and time-consuming task. Yet simulation is one of the techniques most frequently used by practitioners in Operations Research and Industrial Engineering, as it is less limited by modeling assumptions than many analytical methods. The effective generation of simulation models is an important challenge. Due to the rapid increase in computing power, it is possible to simulate significantly larger systems than in the past. However, the verification and validation of these large-scale simulations is typically a very challenging task.
This thesis introduces a simulation framework that can generate a large variety of manufacturing simulation models. The models are described with a simulation data specification, which is then used to generate a simulation model represented as a Petri net. This approach reduces the effort required for model verification.
The proposed Petri net data structure has extensions for time and token priorities. Since it builds on existing theory for classical Petri nets, it is possible to make certain assertions about the behavior of the generated simulation model.
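To make the idea concrete, a minimal sketch of such a data structure, with transition delays for time and prioritized tokens, might look like the following Python; the class names, the event-driven loop, and the toy net are illustrative assumptions, not the thesis framework itself.

```python
import heapq
import itertools

class Place:
    """A place holding prioritized tokens (smaller number = higher priority)."""
    def __init__(self, name):
        self.name = name
        self.tokens = []                      # heap of (priority, label) pairs

class Transition:
    """A timed transition: consumes one token from each input place and, after
    `delay` time units, deposits one token into each output place."""
    def __init__(self, name, inputs, outputs, delay):
        self.name, self.inputs, self.outputs, self.delay = name, inputs, outputs, delay

    def enabled(self):
        return bool(self.inputs) and all(p.tokens for p in self.inputs)

def simulate(transitions, horizon=100.0):
    """Event-driven execution of the timed, prioritized net."""
    clock, events, tie = 0.0, [], itertools.count()
    while clock <= horizon:
        for t in transitions:                 # fire everything enabled right now
            while t.enabled():
                prio = min(heapq.heappop(p.tokens)[0] for p in t.inputs)
                heapq.heappush(events, (clock + t.delay, next(tie), t, prio))
        if not events:
            break
        clock, _, t, prio = heapq.heappop(events)     # advance to next token deposit
        for p in t.outputs:
            heapq.heappush(p.tokens, (prio, t.name))  # token keeps its priority
    return clock

# toy net: p1 --t--> p2, firing takes 2.5 time units
p1, p2 = Place("p1"), Place("p2")
heapq.heappush(p1.tokens, (0, "job-A"))
print(simulate([Transition("t", [p1], [p2], delay=2.5)]))   # -> 2.5
```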
The elements of the proposed framework and the simulation execution mechanism are described in detail. Measures of complexity for simulation models that are built with the framework are also developed.
The applicability of the framework to real-world systems is demonstrated by means of a semiconductor manufacturing system simulation model.
|
535 |
Finding a representative day for simulation analyses / Watson, Jebulan Ryan, 23 November 2009
Many models exist in the aerospace industry that attempt to replicate the National Airspace System (NAS). The complexity of the NAS makes it a system that can be modeled in a variety of ways. While some NAS models are very detailed and take many factors into account, the runtime of these simulations can be on the order of hours to simulate a single day. Other models forgo detail in order to decrease their runtime. Most models are capable of simulating a 24-hour period in the NAS. An analysis of an entire year would therefore mean running the simulation for every day in the year, resulting in a very long run time.
The following thesis work presents a tool that gives the user a day that can be used in a simulation and will produce results similar to simulating the entire year. Taking in parameters chosen by the user, the tool outputs a single day, multiple days, or a composite day (based on percentages of days). Statistical methods are used to compare each candidate day to the overall year. In addition to finding a single representative day, the ability to find a composite day was added. A brute-force search technique for finding the composite day was implemented first, but its long runtime was deemed inconvenient for the user. To solve this problem, a heuristic search method was created that searches the solution space in a short time and still outputs a composite day that represents the year. With a short runtime, the user is able to run the program multiple times. Once the heuristic method was implemented, it was found to perform well enough to be offered as an option to the user.
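The thesis's own search procedure is not reproduced here; purely as an illustration, a heuristic composite-day search of this general flavor can be sketched as a random-sampling search over day weights (Python; the metric matrix, Dirichlet weighting, and error measure are all assumptions).

```python
import numpy as np

def composite_day_search(daily_metrics, n_days=3, iters=5000, seed=0):
    """Heuristic search for a weighted blend of days whose averaged metrics best
    match the annual mean (a stand-in for the statistical comparison in the tool).

    daily_metrics : (365, k) array of per-day NAS metrics (delays, operation counts, ...)
    Returns (indices of chosen days, their weights/percentages, residual error).
    """
    rng = np.random.default_rng(seed)
    target = daily_metrics.mean(axis=0)              # annual "ground truth"
    best = (None, None, np.inf)
    for _ in range(iters):
        days = rng.choice(len(daily_metrics), size=n_days, replace=False)
        w = rng.dirichlet(np.ones(n_days))           # random percentages summing to 1
        composite = w @ daily_metrics[days]
        err = np.linalg.norm((composite - target) / target)
        if err < best[2]:
            best = (days, w, err)
    return best

# usage with synthetic data: 365 days x 4 metrics
metrics = np.abs(np.random.default_rng(1).normal(100.0, 20.0, size=(365, 4)))
days, weights, err = composite_day_search(metrics)
print(days, weights.round(2), round(err, 4))
```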
The final version of this tool was used to find a representative day, and the result was compared with output data from a NAS simulation model. Because the tool finds the representative day based on historical data, it can be used to validate the effectiveness of the simulation model. This thesis describes in detail how the tool, the Representative Day Finder, was created.
|
536 |
A federated simulation approach to modeling port and roadway operations / Wall, Thomas Aubrey, 08 April 2010
This research develops a computer simulation method for federating an Arena© port operations model and a VISSIM© roadway network operations model. The development of this method is inspired by the High Level Architecture (HLA) standard for federating simulations, and incorporates several elements of the HLA principles into its design. The federated simulation model is then tested using a time-lag experiment to demonstrate the presence of feedback loops between federated model components wherein changes to input parameters of one model during runtime can be shown to affect the operational performance of the other model. This experiment also demonstrates how several initial transient phase and steady state operating characteristics of the federated system can be determined from the federation output data.
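The commercial Arena/VISSIM federation itself cannot be shown here, but the time-stepped exchange with a lag between two coupled models can be illustrated with a toy sketch (Python; both component models, their parameters, and the one-step publication lag are invented stand-ins for the HLA-inspired mechanism).

```python
from collections import deque

class PortModel:
    """Toy stand-in for the Arena port model: trucks queue for a gate."""
    def __init__(self):
        self.queue = 0
    def step(self, arriving_trucks):
        self.queue += arriving_trucks
        served = min(self.queue, 5)                # assumed gate capacity per step
        self.queue -= served
        return served                              # trucks released onto the road

class RoadModel:
    """Toy stand-in for the VISSIM roadway model: congestion delays trucks."""
    def __init__(self):
        self.on_road = 0
    def step(self, released_trucks):
        self.on_road += released_trucks
        arriving = max(self.on_road - 3, 0) // 2   # crude congestion effect
        self.on_road -= arriving
        return arriving                            # trucks reaching the port gate

def federate(steps=20, lag=1):
    """Time-stepped federation: each model consumes the other's output
    published `lag` steps earlier (mimicking the time-lag experiment)."""
    port, road = PortModel(), RoadModel()
    to_port = deque([4] * lag)                     # seed demand entering the port side
    to_road = deque([0] * lag)
    for t in range(steps):
        released = port.step(to_port.popleft())
        arrived = road.step(to_road.popleft())
        to_road.append(released)                   # publish with a one-step delay
        to_port.append(arrived + 4)                # 4 new external trucks per step
        print(t, port.queue, road.on_road)

federate()
```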
The results indicate that the method developed in this study is capable of capturing the dynamic interaction of two models in federated simulation. It is shown that feedback loops can exist between two models in federated simulation. Most notably, the federation output shows that increased traffic volume in the roadway network model influences the accumulation of containers in the port terminal queue of the port model. The federation output also shows that increased container volume leaving the port terminal model affects both port and road truck utilization, as well as the total number of port trucks in the roadway network model.
Challenges and future directions for research in federating transportation-related simulations are also presented.
|
537 |
Penalized method based on representatives and nonparametric analysis of gap data / Park, Soyoun, 14 September 2010
When there are a large number of predictors and few observations, building a regression model to explain the behavior of a response variable such as a patient's medical condition is very challenging. This is a "p ≫ n" variable selection problem encountered often in modern applied statistics and data mining. Chapter one of this thesis proposes a rigorous procedure which groups predictors into clusters of "highly correlated" variables, selects a representative from each cluster, and uses a subset of the representatives for regression modeling. The proposed Penalized method based on Representatives (PR) extends the Lasso to p ≫ n data with highly correlated variables, building a sparse model that is practically interpretable while maintaining prediction quality. Moreover, we provide the PR-Sequential Grouped Regression (PR-SGR) to make computation of the PR procedure efficient. Simulation studies show the proposed method outperforms existing methods such as the Lasso/Lars. A real-life example from a mental health diagnosis illustrates the applicability of the PR-SGR. In the second part of the thesis, we study the analysis of time-to-event data, called gap data, in which missing time intervals (gaps) may occur prior to the first observed event time. If a gap occurs prior to the first observed event, then the first observed event may or may not be the first true event. This incomplete knowledge makes gap data different from the well-studied regular interval-censored data. We propose a Non-Parametric Estimate for the Gap data (NPEG) to estimate the survival function for the first true event time, derive its analytic properties, and demonstrate its performance in simulations. We also extend the Imputed Empirical Estimating method (IEE), an existing nonparametric method for gap data with at most one gap, to handle gap data with multiple gaps.
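A rough sketch of the grouping-plus-representative idea, not the actual PR-SGR algorithm, could look like the following Python (assuming scikit-learn and SciPy; the clustering rule, representative choice, and correlation cutoff are illustrative assumptions).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LassoCV

def penalized_representatives(X, y, corr_cut=0.7):
    """Sketch of the PR idea: cluster highly correlated predictors, keep one
    representative per cluster, then run a penalized (Lasso) fit on them."""
    corr = np.corrcoef(X, rowvar=False)
    dist = np.clip(1.0 - np.abs(corr), 0.0, None)    # correlation-based distance
    tree = linkage(dist[np.triu_indices_from(dist, 1)], method="average")
    labels = fcluster(tree, t=1.0 - corr_cut, criterion="distance")
    reps = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # representative = member most correlated with the rest of its cluster
        score = np.abs(corr[np.ix_(members, members)]).mean(axis=1)
        reps.append(members[np.argmax(score)])
    model = LassoCV(cv=5).fit(X[:, reps], y)         # penalized fit on representatives
    return reps, model

# synthetic p >> n example: 30 observations, 200 highly correlated predictors
rng = np.random.default_rng(0)
Z = rng.normal(size=(30, 20))
X = np.repeat(Z, 10, axis=1) + 0.1 * rng.normal(size=(30, 200))
y = Z[:, 0] + 0.5 * Z[:, 1] + 0.1 * rng.normal(size=30)
reps, model = penalized_representatives(X, y)
print(len(reps), model.coef_.round(2))
```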
|
538 |
Statistical validation and calibration of computer models / Liu, Xuyuan, 21 January 2011
This thesis deals with modeling, validation and calibration problems in experiments of computer models. Computer models are mathematical representations of real systems developed for understanding and investigating the systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations.
As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods.
One challenge is to deal with computer models with random inputs (random effects). This kind of computer model is very common in engineering applications. For example, in a thermal experiment at Sandia National Laboratories (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable. The reason is that integration over the random variable distribution is needed for the joint likelihood, and the integration cannot always be expressed in closed form. In this research, we propose a new approach which combines the nonlinear mixed effects model and the Gaussian process model (Kriging model). Different model formulations are also studied to gain a better understanding of validation and calibration activities, using the thermal problem.
Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional response is sparse. Dimension reduction techniques can be used to overcome the complexity of functional responses; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data, and the estimated parameters of those models are then treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point and then functional curves are fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model to relate functional responses to both design and time variables in one single step. Secondly, we propose a functional kriging model which performs variable selection by imposing a penalty function. We show that the proposed model performs better than dimension reduction based approaches and the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
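As a simplified illustration of the single-step idea only, and not the proposed penalized functional kriging model, time can be treated as just another kriging input alongside the design variables; in the sketch below (Python with scikit-learn) the test function, kernel, and design are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# synthetic computer experiment: one design variable x, functional output observed over time t
rng = np.random.default_rng(0)
designs = rng.uniform(0, 1, size=8)
times = np.linspace(0, 1, 25)
X = np.array([(x, t) for x in designs for t in times])        # (design, time) input pairs
y = np.sin(6 * X[:, 1]) * (1 + X[:, 0]) + 0.05 * rng.normal(size=len(X))

# single-step model: time enters the kriging model as just another input dimension
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.2]) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)

# predict the whole response curve for an unseen design setting
x_new = 0.55
curve, sd = gp.predict(np.column_stack([np.full_like(times, x_new), times]),
                       return_std=True)
print(curve[:3].round(3), sd.mean().round(3))
```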
|
539 |
Framework for robust design: a forecast environment using intelligent discrete event simulation / Beisecker, Elise K., 29 March 2012
The US Navy is shifting to power projection from the sea, which stresses the capabilities of its current fleet and exposes the need for a new surface connector. The design of complex systems in the presence of changing requirements, rapidly evolving technologies, and operational uncertainty continues to be a challenge. Furthermore, the design of future naval platforms must take into account the interoperability of a variety of heterogeneous systems and their role in a larger system-of-systems context. To date, methodologies to address these complex interactions and optimize the system at the macro level have lacked clear direction and structure and have largely been applied in an ad hoc fashion. Traditional optimization has centered on individual vehicles with little regard for the impact on the overall system. A key enabler in designing a future connector is the ability to rapidly analyze technologies and perform trade studies using a system-of-systems level approach.
The objective of this work is a process that can quantitatively assess the impacts of new capabilities and vessels at the system-of-systems level. This new methodology must be able to investigate diverse, disruptive technologies acting on multiple elements within the system-of-systems architecture. Illustrated through a test case for a Medium Exploratory Connector (MEC), the method must be capable of capturing the complex interactions between elements and the architecture, and must be able to assess the impacts of new systems. Following a review of current methods, six gaps were identified, including the need to break the problem into subproblems in order to incorporate a heterogeneous, interacting fleet, dynamic loading, and dynamic routing. For the robust selection of design requirements, analysis must be performed across multiple scenarios, which requires the method to include parametric scenario definition.
The identified gaps are investigated and methods are recommended to address them, enabling overall operational analysis across scenarios. Scenarios are fully defined by a scheduled set of demands, distances between locations, and physical characteristics that can be treated as input variables. Introducing matrix manipulation into the discrete event simulation enables the abstraction of sub-processes at an object level and reduces the effort required to integrate new assets. Incorporating these linear algebra principles enables resource management for individual elements and abstraction of decision processes. Although the run time is slightly greater than that of traditional if-then formulations, the gain in data handling ability enables the abstraction of the loading and routing algorithms.
The loading and routing problems are abstracted, and solution options are developed and compared. Realistic loading of vessels and other assets is needed to capture the cargo delivery capability of the modeled mission. The dynamic loading algorithm is based on the traditional knapsack formulation, where a linear program is formulated using the lift and area of the connector as constraints. The schedule of demands from the scenarios provides additional constraints and the reward equation. Available cargo is distributed among cargo sources, so an assignment problem formulation is added to the linear program, requiring the cargo selected to load on a single connector to be available from a single load point.
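A small sketch of such a lift- and area-constrained knapsack, with the single-load-point requirement handled by simple enumeration rather than the thesis's assignment formulation, might look like the following (Python, using scipy.optimize.milp from SciPy 1.9+; all cargo data are invented).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def load_connector(reward, weight, area, max_lift, max_area, source):
    """Knapsack-style loading sketch: choose cargo items (0/1) to maximize delivered
    reward subject to the connector's lift and deck-area limits, with all selected
    cargo drawn from a single load point (enforced here by enumerating load points)."""
    n = len(reward)
    cons = LinearConstraint(np.vstack([weight, area]), ub=[max_lift, max_area])
    best, best_src = None, None
    for s in np.unique(source):
        allow = (source == s).astype(float)          # items at other load points forced to 0
        res = milp(c=-np.asarray(reward, float),     # milp minimizes, so negate reward
                   constraints=cons,
                   integrality=np.ones(n),
                   bounds=Bounds(0, allow))
        if res.success and (best is None or res.fun < best.fun):
            best, best_src = res, s
    return best_src, np.round(best.x).astype(int)

# toy data: six cargo items staged at two load points
src, pick = load_connector(reward=[5, 4, 7, 3, 6, 2],
                           weight=[10, 8, 14, 6, 12, 5],
                           area=[4, 3, 6, 2, 5, 2],
                           max_lift=25, max_area=10,
                           source=np.array([0, 0, 0, 1, 1, 1]))
print(src, pick)       # expected: load point 0, items 0 and 2 selected
```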
Dynamic routing allows a reconfigurable supply chain to maintain robust and flexible operation in response to changing customer demands and operating environment. Algorithms based on vehicle routing and computer packet routing are compared across five operational scenarios, testing each algorithm's ability to route connectors without introducing additional wait time. Predicting interface wait times from the connectors already en route, combined with reconsidering which interface to use upon arrival, performed consistently, especially when stochastic load times were introduced, and is expandable to large-scale applications. This algorithm selects the quickest load-unload location pairing based on the connectors routed to those locations and the interfaces selected for those connectors. A future connector could have the ability to unload at multiple locations if a single load exceeds the demand at one unload location. This capability for multiple unload locations is treated as a special case in the calculation of the unload location during routing. To determine the unload locations to visit, a traveling salesman formulation is added to the dynamic loading algorithm. Balancing the cost to travel to and unload at locations against the additional cargo that could be delivered, the order and locations to visit are selected. Predicting the workload at load and unload locations to route vessels, with reconsideration to handle disturbances, accommodates multiple unload locations and creates a robust and flexible routing algorithm.
The incorporation of matrix manipulation, dynamic loading, and dynamic routing enables a robust investigation of the design requirements for a new connector. The robust process uses shortfall, capturing the delay and lack of cargo delivered, and fuel usage as measures of performance. The design parameters for the MEC, including the number of vessels available and vessel characteristics such as speed and size, were analyzed across four ways of testing the noise space. The four testing methods are: a single scenario, a selected number of scenarios, full coverage of the noise space, and the feasible noise space. The feasible noise space is defined using uncertainty around scenarios of interest. The number available, maximum lift, maximum area, and SES speed were consistently design drivers. There was a trade-off between the number available and size, along with speed. When looking at the feasible space, the relationship between size and number available was strong enough to reverse the preference on the number available, toward fewer and larger ships. The secondary design impacts come from factors that directly affect the time per trip, such as the time between repairs and the time to repair. As the noise sampling moved from four scenarios to full coverage to the feasible space, the option to use interfaces was replaced in importance by the time to load at these locations and the time to unload at the beach. The change in impact can be attributed to the reduction in the number of needed trips with the feasible space. The four scenarios had higher average demand than the feasible space sampling, making loading options more important. The selection of the noise sampling had an impact on the design requirements selected for the MEC, indicating the importance of developing a method to investigate future naval assets across multiple scenarios at a system-of-systems level.
|
540 |
A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations / Hernandez Moreno, Andres Felipe, 04 April 2012
This thesis describes the implementation of metamodeling approaches as a solution to approximate multivariate, stochastic and dynamic simulations. In statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built based on simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on this simulated dynamic data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next discrete time step. It is therefore possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale for creating such an approximate dynamic representation is that the empirical models (metamodels) are much more affordable to compute than the original dynamic simulation, while having an acceptable prediction error.
The successful implementation of metamodeling approaches as approximations of complex dynamic simulations requires understanding how error propagates during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into its future predictions. This propagation of error means that the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on this idea, the Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it illustrates relevant issues to be considered if other metamodeling approaches were used for this purpose.
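A minimal sketch of this one-step-ahead metamodeling loop, substituting a cheap toy oscillator for the expensive dynamic simulation (Python with scikit-learn; the dynamics, kernel, and sampling scheme are assumptions), might read:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# stand-in for the "expensive" dynamic simulation: a damped oscillator stepped at dt = 0.1
def simulate_step(state, dt=0.1):
    x, v = state
    return np.array([x + dt * v, v - dt * (0.5 * v + 4.0 * x)])

# build one-step training pairs (state_t -> state_{t+1}) from pre-recorded simulations
rng = np.random.default_rng(0)
states = rng.uniform(-2, 2, size=(300, 2))
targets = np.array([simulate_step(s) for s in states])

kernel = RBF(length_scale=[0.5, 0.5]) + WhiteKernel(1e-6)
metamodel = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
             .fit(states, targets[:, i]) for i in range(2)]   # one GP per state output

# approximate the dynamics by iterating the one-step metamodel from an initial state
state_true = state_meta = np.array([0.8, 0.0])
for step in range(50):
    state_true = simulate_step(state_true)
    state_meta = np.array([gp.predict(state_meta.reshape(1, -1))[0] for gp in metamodel])
print(np.abs(state_true - state_meta))   # accumulated (propagated) prediction error
```

Iterating the trained one-step model, as in the final loop, is exactly where the propagated error accumulates: each predicted state is fed back as the next input, so early errors compound over the rollout.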
|