541 |
Design space exploration of stochastic system-of-systems simulations using adaptive sequential experiments. Kernstine, Kemp H. 25 June 2012 (has links)
Our surrounding environments are becoming increasingly diverse, more integrated, and continually more difficult to predict and characterize. These modeling complexities are ever more prevalent in System-of-Systems (SoS) simulations, where computational times can surpass real time and behavior is often dictated by stochastic processes and non-continuous emergent behaviors. As the number of connections in modeling environments continues to increase and the number of external noise variables continues to multiply, these SoS simulations can no longer be explored by traditional means without significantly wasting computational resources.
This research develops and tests an adaptive sequential design of experiments to reduce the computational expense of exploring these complex design spaces. Prior to developing the algorithm, the defining statistical attributes of these spaces are researched and identified. Following this identification, various techniques capable of capturing these features are compared and an algorithm is synthesized. The final algorithm will be shown to improve the exploration of stochastic simulations over existing methods by increasing the global accuracy and computational speed, while reducing the number of simulations required to learn these spaces.
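As a rough illustration of adaptive sequential sampling of a noisy simulator (not the algorithm developed in the dissertation; the Gaussian-process surrogate, scikit-learn, and the maximum-predictive-variance selection rule are this sketch's assumptions):

    # Hypothetical sketch of adaptive sequential sampling for a noisy simulator.
    # The surrogate (Gaussian process) and the variance-based selection rule are
    # illustrative choices, not the algorithm developed in the dissertation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def noisy_simulation(x):
        """Stand-in for an expensive stochastic SoS simulation."""
        return np.sin(3.0 * x) + np.random.normal(scale=0.2)

    rng = np.random.default_rng(0)
    candidates = np.linspace(0.0, 2.0, 201).reshape(-1, 1)   # design space grid
    X = rng.uniform(0.0, 2.0, size=(5, 1))                    # initial space-filling runs
    y = np.array([noisy_simulation(x[0]) for x in X])

    for _ in range(20):                                        # sequential simulation budget
        gp = GaussianProcessRegressor(alpha=0.2**2).fit(X, y)  # alpha absorbs simulation noise
        _, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std)]                    # sample where the surrogate is least certain
        X = np.vstack([X, x_next])
        y = np.append(y, noisy_simulation(x_next[0]))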
542 |
Modified selection mechanisms designed to help evolution strategies cope with noisy response surfaces. Gadiraju, Sriphani Raju. January 2003 (has links)
Thesis (M.S.)--Mississippi State University. Department of Industrial Engineering. / Title from title screen. Includes bibliographical references.
543 |
Towards a novel unified framework for developing formal, network and validated agent-based simulation models of complex adaptive systems. Niazi, Muaz A. K. January 2011 (has links)
Literature on the modeling and simulation of complex adaptive systems (cas) has primarily advanced vertically within individual scientific domains, with scientists developing a variety of domain-specific approaches and applications. However, while cas researchers are inherently interested in interdisciplinary comparison of models, to the best of our knowledge there is currently no single unified framework for facilitating the development, comparison, communication and validation of models across different scientific domains. In this thesis, we propose first steps towards such a unified framework using a combination of agent-based and complex network-based modeling approaches, together with guidelines formulated as a set of four levels of usage. These levels allow multidisciplinary researchers to adopt a suitable framework level on the basis of the available data types, their research objectives and their expected outcomes, and thus to better plan and conduct their case studies. First, the complex network modeling level entails the development of appropriate complex network models for the case where interaction data of cas components is available, with the aim of detecting emergent patterns in the cas under study. The exploratory agent-based modeling level allows for the development of proof-of-concept models of the cas, primarily to explore the feasibility of further research. The descriptive agent-based modeling level provides a formal step-by-step approach for developing agent-based models, coupled with a quantitative complex network and pseudocode-based specification of the model, which in turn facilitates interdisciplinary cas model comparison and knowledge transfer. Finally, the validated agent-based modeling level is concerned with building in-simulation verification and validation of agent-based models using a proposed Virtual Overlay Multiagent System approach within a systematic, team-oriented model development process. The proposed framework is evaluated and validated using seven detailed case studies selected from various scientific domains, including ecology, the social sciences and a range of complex adaptive communication networks. The successful case studies demonstrate the framework's potential to appeal to multidisciplinary researchers as a methodological approach to the modeling and simulation of cas, facilitating effective communication and knowledge transfer across scientific disciplines without requiring extensive learning curves.
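As a minimal sketch of the complex network modeling level, assuming pairwise interaction records of cas components are available (networkx and the specific emergent-pattern checks are this example's choices, not the thesis's):

    # Illustrative only: build a network from hypothetical interaction logs and
    # look at simple structural statistics as a first pass at emergent patterns.
    import networkx as nx
    from collections import Counter

    # Hypothetical interaction data: (component_a, component_b) pairs observed in a cas.
    interactions = [("n1", "n2"), ("n2", "n3"), ("n1", "n3"), ("n3", "n4"), ("n1", "n4")]

    G = nx.Graph()
    G.add_edges_from(interactions)

    degree_counts = Counter(dict(G.degree()).values())
    print("degree distribution:", dict(degree_counts))
    print("average clustering:", nx.average_clustering(G))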
544 |
Adhesion of particles on indoor flooring materials. Lohaus, James Harold, 1968- 14 June 2012 (has links)
This dissertation involved a theoretical and experimental investigation of the adhesive forces between spherical particles of four different diameters and two selected flooring materials under different air velocities. Previous theoretical work and experiments described in the literature tended to be conducted with idealized surfaces and therefore have limited applicability to indoor environments. Controlled experiments were designed, constructed and executed to measure the air velocity required to overcome the adhesion forces. The diameters of the particles investigated were 0.5, 3.0, 5.0 and 9.9 µm, and the flooring materials were linoleum and wooden flooring. The critical velocity, the air velocity at which 50% of the particles detached, is presented as a function of particle diameter for each surface. The measured values were then compared to empirical and theoretical models as well as to a scaling analysis that considers the component forces acting on a particle-surface system. The results suggest that the critical velocity decreases with increasing particle diameter and that existing models have limited applicability to resuspension from flooring materials. / text
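For intuition only, a plausible scaling argument (assuming van der Waals adhesion and inertial drag, with A the Hamaker constant and z the particle-surface separation, none of which are taken from the dissertation) is consistent with the reported trend:

    % Illustrative scaling only; the dissertation's own scaling analysis may differ.
    % Sphere-plate van der Waals adhesion grows linearly with diameter d,
    % while the aerodynamic removal force grows roughly as u^2 d^2.
    \[
    F_{\mathrm{adh}} \sim \frac{A\,d}{12\,z^{2}}, \qquad
    F_{\mathrm{drag}} \sim \rho_{\mathrm{air}}\,u^{2}d^{2}, \qquad
    F_{\mathrm{adh}} = F_{\mathrm{drag}} \;\Rightarrow\;
    u_{c} \sim \sqrt{\frac{A}{12\,\rho_{\mathrm{air}}\,z^{2}\,d}} \;\propto\; d^{-1/2},
    \]
    % so the critical velocity decreases as the particle diameter increases.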
545 |
Stochastic tomography and Gaussian beam depth migration. Hu, Chaoshun, 1976- 25 September 2012 (has links)
Ocean-bottom seismometers (OBS) allow wider-angle recording and therefore have the potential to significantly enhance imaging of deep subsurface structures. Currently, conventional OBS data analysis still uses first-arrival traveltime tomography and prestack Kirchhoff depth migration. However, using first-arrival traveltimes to build a velocity model has its limitations. In the Taiwan region, subduction and collision cause very complex subsurface structures and generate extensive basalt-like anomalies. Because the velocity beneath these basalt-like anomalies is lower than that of the high-velocity anomalies themselves, no first-arrival refractions from the target areas occur. Thus, conventional traveltime tomography is not accurate, and amplitude-constrained traveltime tomography can be dangerous. Here, a new first-arrival stochastic tomography method for automatic background velocity estimation is proposed. Our method uses the local beam semblance of each common-shot or common-receiver gather instead of first-arrival picking. Both the ray parameter and traveltime information are utilized. The use of the Very Fast Simulated Annealing (VFSA) method also allows for easier implementation of the uncertainty analysis. Synthetic and real data benchmark tests demonstrate that this new method is robust, efficient, and accurate.

In addition, migrated images of low-fold data or data with limited observation geometry, such as OBS data, are often corrupted by migration aliasing. Incorporating prestack instantaneous-slowness information into the imaging condition can significantly reduce migration artifacts and noise and improve the image quality in areas of poor illumination. Here I combine slowness information with Gaussian beam depth migration and implement a new slowness-driven Gaussian beam prestack depth migration. The prestack instantaneous-slowness information, denoted by ray parameter gathers p(x,t), is extracted from the original OBS or shot gathers using local slant stacking and subsequent local-semblance analysis. In migration, we propagate both the seismic energy and the principal instantaneous-slowness information backward. At a specific image location, the beam summation is localized in the resolution-dependent Fresnel zone, where instantaneous-slowness-related weights are used to control the beams. The effectiveness of the new method is illustrated using two synthetic data examples: a simple model and a more realistic, complicated sub-basalt model. / text
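A minimal sketch of the VFSA loop at the heart of such a stochastic tomography, with a placeholder objective standing in for the beam-semblance misfit (the parameter bounds, cooling constants, and misfit function are illustrative assumptions):

    # Very Fast Simulated Annealing (VFSA) sketch for a small velocity-parameter vector.
    # The misfit is a stand-in; in practice it would be a (negative) stacked local
    # beam semblance over common-shot / common-receiver gathers.
    import numpy as np

    def misfit(m):
        """Placeholder objective with a known minimum at (2.5, 3.2, 4.1) km/s."""
        return np.sum((m - np.array([2.5, 3.2, 4.1])) ** 2)

    rng = np.random.default_rng(1)
    lower = np.array([1.5, 2.0, 3.0])      # illustrative per-layer velocity bounds (km/s)
    upper = np.array([4.0, 5.0, 6.0])
    m = (lower + upper) / 2.0              # starting model
    E = misfit(m)
    T0, c, ndim = 1.0, 1.0, len(m)

    for k in range(1, 2001):
        T = T0 * np.exp(-c * k ** (1.0 / ndim))           # VFSA cooling schedule
        u = rng.uniform(size=ndim)
        step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
        trial = np.clip(m + step * (upper - lower), lower, upper)
        dE = misfit(trial) - E
        if dE < 0 or rng.uniform() < np.exp(-dE / T):      # Metropolis acceptance
            m, E = trial, E + dE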
546 |
Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms. Bauman, Paul Thomas, 1980- 29 August 2008 (has links)
Scientific theories that explain how physical systems behave are described by mathematical models which provide the basis for computer simulations of events that occur in the physical universe. These models, being only mathematical characterizations of actual phenomena, are obviously subject to error because of the inherent limitations of all mathematical abstractions. In this work, new theory and methodologies are developed to quantify such modeling error in a special way that resolves a fundamental and long-standing issue: multiscale modeling, the development of models of events that transcend many spatial and temporal scales. Specifically, we devise the machinery for a posteriori estimates of relative modeling error between a model of fine scale and another of coarser scale, and we use this methodology as a general approach to multiscale problems. The target application is one of critical importance to nanomanufacturing: imprint lithography of semiconductor devices. The development of numerical methods for multiscale modeling has become one of the most important areas of computational science. Technological developments in the manufacturing of semiconductors hinge upon the ability to understand physical phenomena from the nanoscale to the microscale and beyond. Predictive simulation tools are critical to the advancement of nanomanufacturing semiconductor devices. In principle, they can displace expensive experiments and testing and optimize the design of the manufacturing process. The development of such tools rests at the edge of contemporary methods and high-performance computing capabilities and is a major open problem in computational science. In this dissertation, a molecular model is used to simulate the deformation of polymeric materials used in the fabrication of semiconductor devices. Algorithms are described which lead to a complex molecular model of polymer materials designed to produce an etch barrier, a critical component in imprint lithography approaches to semiconductor manufacturing. Each application of this so-called polymerization process leads to one realization of a lattice-type model of the polymer, a molecular statics model of enormous size and complexity. This is referred to as the base model for analyzing the deformation of the etch barrier, a critical feature of the manufacturing process. To reduce the size and complexity of this model, a sequence of coarser surrogate models is generated. These surrogates are the multiscale models critical to the successful computer simulation of the entire manufacturing process. The surrogate involves a combination of particle models, the molecular model of the polymer, and a coarse-scale model of the polymer as a nonlinear hyperelastic material. Coefficients for the nonlinear elastic continuum model are determined using numerical experiments on representative volume elements of the polymer model. Furthermore, a simple model of initial strain is incorporated in the continuum equations to model the inherent shrinking of the polymer. A coupled particle and continuum model is constructed using a special algorithm designed to provide constraints on a region of overlap between the continuum and particle models. This coupled model is based on the so-called Arlequin method, which was introduced in the context of coupling two continuum models with differing levels of discretization.
It is shown that the Arlequin problem for the particle-to-continuum model is well posed in a one-dimensional setting involving linear harmonic springs coupled with a linearly elastic continuum. Several numerical examples are presented. Numerical experiments in three dimensions are also discussed in which the polymer model is coupled to a nonlinear elastic continuum. Error estimates in local quantities of interest are constructed in order to estimate the modeling error due to the approximation of the particle model by the coupled multiscale surrogate model. The estimates of the error are computed by solving an auxiliary adjoint, or dual, problem that incorporates as data the quantity of interest or its derivatives. The solution of the adjoint problem indicates how the error in the approximation of the polymer model influences the error in the quantity of interest. The error in the quantity of interest represents the relative error between the value of the quantity evaluated for the base model, a quantity typically unavailable or intractable, and the value of the quantity of interest provided by the multiscale surrogate model. To estimate the error in the quantity of interest, a theorem is employed that establishes that the error coincides with the value of the residual functional acting on the adjoint solution plus a higher-order remainder. For each surrogate in a sequence of surrogates generated, the residual functional acting on various approximations of the adjoint is computed. These error estimates are used to construct an adaptive algorithm whereby the model is adapted by supplying additional fine-scale data in certain subdomains in order to reduce the error in the quantity of interest. The adaptation algorithm involves partitioning the domain and selecting which subdomains are to use the particle model, the continuum model, and where the two overlap. When the algorithm identifies that a region contributes a relatively large amount to the error in the quantity of interest, it is scheduled for refinement by switching the model for that region to the particle model. Numerical experiments on several configurations representative of nano-features in semiconductor device fabrication demonstrate the effectiveness of the error estimate in controlling the modeling error as well as the ability of the adaptive algorithm to reduce the error in the quantity of interest. There are two major conclusions of this study: (1) an effective and well-posed multiscale model that couples particle and continuum models can be constructed as a surrogate to molecular statics models of polymer networks, and (2) the modeling error for such systems can be estimated with sufficient accuracy to provide the basis for very effective multiscale modeling procedures. The methodology developed in this study provides a general approach to multiscale modeling. The computational procedures, computer codes, and results could provide a powerful tool in understanding, designing, and optimizing an important class of semiconductor manufacturing processes. The study in this dissertation involves all three components of the CAM graduate program requirements: Area A, Applicable Mathematics; Area B, Numerical Analysis and Scientific Computation; and Area C, Mathematical Modeling and Applications.
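Schematically, the goal-oriented estimate described above can be written as follows; the notation (base-model solution u, surrogate solution u_0, quantity of interest Q, residual functional R, adjoint solution p) is introduced only for this illustration and is not taken from the dissertation:

    \[
    Q(u) - Q(u_0) \;=\; \mathcal{R}(u_0;\,p) \;+\; \Delta,
    \]
    % Delta is a remainder of higher order in the modeling error u - u_0;
    % in practice R(u_0; p) is evaluated with computable approximations of the
    % adjoint p, and regions contributing large values are refined to the
    % particle model.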
The multiscale modeling approach developed here is based on the construction of continuum surrogates and coupling them to molecular statics models of polymer as well as a posteriori estimates of error and their adaptive control. A detailed mathematical analysis is provided for the Arlequin method in the context of coupling particle and continuum models for a class of one-dimensional model problems. Algorithms are described and implemented that solve the adaptive, nonlinear problem proposed in the multiscale surrogate problem. Large scale, parallel computations for the base model are also shown. Finally, detailed studies of models relevant to applications to semiconductor manufacturing are presented. / text
547 |
Iteratively coupled reservoir simulation for multiphase flow in porous media. Lu, Bo, 1979- 29 August 2008 (has links)
Not available / text
548 |
Reservoir simulation studies for coupled CO₂ sequestration and enhanced oil recovery. Ghomian, Yousef, 1974- 29 August 2008 (has links)
Compositional reservoir simulation studies were performed to investigate the effect of uncertain reservoir parameters, flood design variables, and economic factors on coupled CO₂ sequestration and EOR projects. Typical sandstone and carbonate reservoir properties were used to build generic reservoir models. A large number of simulations was needed to quantify the impact of all these factors and their corresponding uncertainties, taking into account various combinations of the factors. The design of experiments method, along with response surface methodology and Monte Carlo simulation, was used to maximize the information gained from each uncertainty analysis. The two objective functions were project profit, expressed in $/bbl of oil produced, and the amount of CO₂ sequestered in the reservoir. The optimized values of the objective functions predicted by the design of experiments and response surface methods were found to be close to the values obtained by the simulation study, but at only a small fraction of the computational time. After statistical analysis of the simulation results, the most to least influential factors for maximizing both profit and the amount of stored CO₂ are the produced gas-oil ratio constraint, production and injection well types, and well spacing. For WAG injection scenarios, the Dykstra-Parsons coefficient and combinations of WAG ratio and slug size are important parameters. Also, for a CO₂ flood, no significant reduction of profit occurred when only the storage of CO₂ was maximized. In terms of the economic parameters, it was demonstrated that the oil price dominates CO₂ EOR and storage. This study showed that sandstone reservoirs have a higher probability of needing CO₂ incentives. In addition, a higher CO₂ credit is needed for WAG injection scenarios than for continuous CO₂ injection. In the second part of this study, scaling groups for miscible CO₂ flooding in a three-dimensional oil reservoir were derived using inspectional analysis, with special emphasis on the equations related to phase behavior. Some of these scaling groups were used to develop a new MMP correlation. This correlation was compared with published correlations using a wide range of reservoir fluids and was found to give more accurate predictions of the MMP. / text
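A hedged sketch of the design-of-experiments / response-surface / Monte Carlo workflow described above; the factors, coded ranges, and toy simulator below are placeholders rather than the study's compositional reservoir model:

    # Fit a cheap response surface to a small designed set of "simulation" runs,
    # then run Monte Carlo on the surface instead of the expensive simulator.
    import numpy as np
    import itertools

    def reservoir_simulator(wag_ratio, slug_size, well_spacing):
        """Stand-in for a compositional simulation returning profit ($/bbl)."""
        return 4.0 + 1.5 * wag_ratio - 0.8 * slug_size - 0.5 * well_spacing \
               + 0.3 * wag_ratio * slug_size

    # Two-level full factorial design in coded units (-1, +1) for three factors.
    design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
    response = np.array([reservoir_simulator(*run) for run in design])

    # First-order-plus-interaction response surface fitted by least squares.
    X = np.column_stack([np.ones(len(design)), design,
                         design[:, 0] * design[:, 1],
                         design[:, 0] * design[:, 2],
                         design[:, 1] * design[:, 2]])
    coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

    # Monte Carlo propagation of factor uncertainty through the surrogate.
    rng = np.random.default_rng(0)
    samples = rng.uniform(-1.0, 1.0, size=(10000, 3))
    Xs = np.column_stack([np.ones(len(samples)), samples,
                          samples[:, 0] * samples[:, 1],
                          samples[:, 0] * samples[:, 2],
                          samples[:, 1] * samples[:, 2]])
    profit = Xs @ coeffs
    print("P10/P50/P90 profit:", np.percentile(profit, [10, 50, 90]))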
549 |
Scheduling of Generalized Cambridge Rings. Bauer, Daniel Howard 14 October 2009 (has links)
A Generalized Cambridge Ring is a queueing system that can be used as an approximate model of some material handling systems used in modern factories. It consists of one or more vehicles that carry cargo from origins to destinations around a loop, with queues forming when cargo temporarily exceeds the capacity of the system. For some Generalized Cambridge Rings that satisfy the usual traffic conditions for stability, it is demonstrated that some nonidling scheduling policies are unstable. A good scheduling policy will increase the efficiency of these systems by reducing waiting times and thereby also reducing work in process (WIP). Simple heuristic policies are developed which provide substantial improvements over the commonly used first-in-first-out (FIFO) policy. Variances are incorporated into previously developed fluid models, which used only means, to produce a more accurate partially discrete fluid mean-variance model that is used to further reduce waiting times. Optimal policies are obtained for some simple special cases, and simulations are used to compare policies in more general cases. The methods developed may be applicable to other queueing systems. / text
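For illustration, a toy single-vehicle ring simulation comparing FIFO with a simple nearest-origin heuristic; the ring size, arrival process, and heuristic are assumptions of this sketch, not the policies analyzed in the dissertation:

    # Toy loop with one vehicle: cargo requests arrive one per tick at random
    # stations; compare mean waiting time under FIFO vs. a nearest-origin rule.
    import random

    N_STATIONS = 8
    N_JOBS = 2000
    random.seed(0)
    jobs = [(random.randrange(N_STATIONS), random.randrange(N_STATIONS), t)
            for t in range(N_JOBS)]   # (origin, destination, arrival_time)

    def ring_dist(a, b):
        """Travel time moving forward around the loop from a to b."""
        return (b - a) % N_STATIONS

    def simulate(policy):
        pos, clock, waiting, total_wait = 0, 0.0, [], 0.0
        pending = list(jobs)
        while pending or waiting:
            # Release arrivals whose time has come.
            while pending and pending[0][2] <= clock:
                waiting.append(pending.pop(0))
            if not waiting:
                clock = float(pending[0][2])
                continue
            # Choose the next request according to the scheduling policy.
            if policy == "fifo":
                job = waiting.pop(0)
            else:  # nearest origin ahead of the vehicle
                job = min(waiting, key=lambda j: ring_dist(pos, j[0]))
                waiting.remove(job)
            origin, dest, arrival = job
            clock += ring_dist(pos, origin)      # travel empty to the origin
            total_wait += clock - arrival        # waiting time until pickup
            clock += ring_dist(origin, dest)     # carry the cargo to its destination
            pos = dest
        return total_wait / N_JOBS

    for policy in ("fifo", "nearest"):
        print(policy, "mean wait:", round(simulate(policy), 2))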
550 |
AN EXPERIMENTAL STUDY OF THE PLATO COMPUTERIZED DOUBLE-AUCTION MARKET MECHANISM. Williams, Arlington Walton. January 1978 (has links)
No description available.