31

A Dual Metamodeling Perspective for Design and Analysis of Stochastic Simulation Experiments

Wang, Wenjing 17 July 2019 (has links)
Fueled by a growing number of applications in science and engineering, the development of stochastic simulation metamodeling methodologies has gained momentum in recent years. A majority of the existing methods, such as stochastic kriging (SK), focus only on efficiently metamodeling the mean response surface implied by a stochastic simulation experiment. As the simulation outputs are stochastic, with the simulation variance varying significantly across the design space, suitable methods for variance modeling are required. This thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously for achieving an improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-square support vector regression and propose to use a “dense and shallow” initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of “transductive combination of GP experts.” / Doctor of Philosophy / In solving real-world complex engineering problems, it is often helpful to learn the relationship between the decision variables and the response variables to better understand the real system of interest. Directly conducting experiments on the real system can be impossible or impractical, due to the high cost or time involved. Instead, simulation models are often used as surrogates to model the complex stochastic systems for conducting simulation-based design and analysis. However, even simulation models can be very expensive to run. To alleviate the computational burden, a metamodel is often built, based on the outputs of simulation runs at selected design points, to map the performance response surface as a function of the controllable decision variables or uncontrollable environmental variables, and thereby approximate the behavior of the original simulation model. A plethora of work in the simulation research community has been dedicated to studying stochastic simulation metamodeling methodologies suitable for analyzing stochastic simulation experiments in science and engineering. A majority of the existing methods, such as stochastic kriging (SK), are known to be effective metamodeling tools for approximating a mean response surface implied by a stochastic simulation. Although SK has been extensively used as an effective metamodeling methodology for stochastic simulations, SK and similar metamodeling techniques still face four methodological barriers: 1) lack of study of variance estimation methods; 2) absence of an efficient experimental design for simultaneous mean and variance metamodeling; 3) lack of flexibility to accommodate situations where simulation replications are not available; and 4) lack of scalability.
To overcome the aforementioned barriers, this thesis takes a dual metamodeling perspective and aims at exploiting the benefits of fitting the mean and variance functions simultaneously for achieving an improved predictive performance. We first explore the effects of replacing the sample variances with various smoothed variance estimates on the performance of SK and propose a dual metamodeling approach to obtain an efficient simulation budget allocation rule. Second, we articulate the links between SK and least-square support vector regression and propose to use a “dense and shallow” initial design to facilitate selection of important design points and efficient allocation of the computational budget. Third, we propose a variational Bayesian inference-based Gaussian process (VBGP) metamodeling approach to accommodate the situation where either one or multiple simulation replications are available at every design point. VBGP can fit the mean and variance response surfaces simultaneously, while taking into full account the uncertainty in the heteroscedastic variance. Lastly, we generalize VBGP for handling large-scale heteroscedastic datasets based on the idea of “transductive combination of GP experts.”
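As a minimal illustration of the “smoothed variance estimates” idea described above (this is a sketch, not the author's code: the toy simulator, the fixed kernel hyperparameters, and helper names such as gp_predict are assumptions introduced for the example), the snippet below first smooths the noisy log sample variances with their own Gaussian process and then plugs the smoothed variances, rather than the raw sample variances, into a stochastic-kriging-style mean predictor.

```python
# Minimal sketch: stochastic-kriging-style prediction where the sample
# variances at the design points are smoothed by a second (log-variance) GP
# before entering the mean GP. Hyperparameters are fixed illustrative values.
import numpy as np

def sq_exp(a, b, ls=0.3, var=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_tr, y_tr, noise, x_te, ls=0.3, var=1.0):
    """GP posterior mean with heteroscedastic noise variances `noise`."""
    K = sq_exp(x_tr, x_tr, ls, var) + np.diag(noise)
    k = sq_exp(x_te, x_tr, ls, var)
    return k @ np.linalg.solve(K, y_tr)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 15)                      # design points
reps = 10                                      # replications per design point
sims = np.sin(2 * np.pi * x)[:, None] + rng.normal(
    scale=0.2 + 0.8 * x[:, None], size=(15, reps))   # heteroscedastic outputs
ybar, s2 = sims.mean(1), sims.var(1, ddof=1)

# Step 1: smooth the noisy log sample variances with their own GP.
s2_smooth = np.exp(gp_predict(x, np.log(s2), np.full(15, 0.5), x))

# Step 2: plug the smoothed variances (divided by the replication count) into
# the mean GP, in place of the raw sample variances used by standard SK.
x_new = np.linspace(0, 1, 50)
mean_pred = gp_predict(x, ybar, s2_smooth / reps, x_new)
```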
32

Robust and Data-Efficient Metamodel-Based Approaches for Online Analysis of Time-Dependent Systems

Xie, Guangrui 04 June 2020 (has links)
Metamodeling is regarded as a powerful analysis tool to learn the input-output relationship of a system based on a limited amount of data collected when experiments with real systems are costly or impractical. As a popular metamodeling method, Gaussian process regression (GPR) has been successfully applied to analyses of various engineering systems. However, GPR-based metamodeling for time-dependent systems (TDSs) is especially challenging for three reasons. First, TDSs require an appropriate account of temporal effects; however, standard GPR cannot address temporal effects easily and satisfactorily. Second, TDSs typically require analytics tools with a sufficiently high computational efficiency to support online decision making, but standard GPR may not be adequate for real-time implementation. Lastly, reliable uncertainty quantification is a key to success for operational planning of TDSs in the real world; however, research on how to construct adequate error bounds for GPR-based metamodeling is sparse. Inspired by the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs), this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques for enhancing the computational and statistical efficiencies of GPR-based metamodeling to meet the requirements of practical implementations. Furthermore, an in-depth investigation on building uniform error bounds for stochastic kriging is conducted, which sets up a foundation for developing robust GPR-based metamodeling techniques for analyses of TDSs under the impact of strong heteroscedasticity. / Ph.D. / Metamodeling has been regarded as a powerful analysis tool to learn the input-output relationship of an engineering system with a limited amount of experimental data available. As a popular metamodeling method, Gaussian process regression (GPR) has been widely applied to analyses of various engineering systems whose input-output relationships do not depend on time. However, GPR-based metamodeling for time-dependent systems (TDSs), whose input-output relationships depend on time, is especially challenging for three reasons. First, standard GPR cannot properly address temporal effects for TDSs. Second, standard GPR is typically not computationally efficient enough for real-time implementations in TDSs. Lastly, research on how to adequately quantify the uncertainty associated with the performance of GPR-based metamodeling is sparse. To fill this knowledge gap, this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques for enhancing standard GPR to meet the requirements of practical implementations for TDSs. Effective solutions are provided to address the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs). Furthermore, an in-depth investigation on quantifying the uncertainty associated with the performance of stochastic kriging (a variant of standard GPR) is conducted, which sets up a foundation for developing robust GPR-based metamodeling techniques for analyses of more complex TDSs.
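As a hedged illustration of how temporal effects can enter a GPR load forecaster (not the dissertation's models: the kernel composition and the synthetic hourly load series are assumptions chosen for the example), a periodic kernel for the daily cycle can be combined with a smooth trend kernel and a noise term in scikit-learn:

```python
# Illustrative sketch: GPR load forecasting with an explicit temporal
# (daily-periodic) kernel component plus a smooth trend and observation noise.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(1)
t = np.arange(0, 72, 1.0)[:, None]                   # 72 hourly observations
load = (50 + 10 * np.sin(2 * np.pi * t[:, 0] / 24)   # daily cycle
        + 0.1 * t[:, 0]                              # slow trend
        + rng.normal(0, 1.0, t.shape[0]))            # noise

kernel = (ExpSineSquared(length_scale=5.0, periodicity=24.0)  # daily temporal cycle
          + RBF(length_scale=48.0)                            # smooth trend
          + WhiteKernel(noise_level=1.0))                     # observation noise
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, load)

t_future = np.arange(72, 96, 1.0)[:, None]           # next-day forecast horizon
mean, std = gpr.predict(t_future, return_std=True)   # point forecast + uncertainty
```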
33

Enterprise Architecture for Information System Analysis: Modeling and assessing data accuracy, availability, performance and application usage

Närman, Per January 2012 (has links)
Decisions concerning IT systems are often made without adequate decision support. This has led to unnecessary IT costs and failures to realize business benefits. This thesis presents a framework for analysis of four information system properties relevant to IT decision-making. The work is founded on enterprise architecture, a model-based IT and business management discipline. Based on the existing ArchiMate framework, a new enterprise architecture framework has been developed and implemented in a software tool. The framework supports modeling and analysis of data accuracy, service performance, service availability and application usage. To analyze data accuracy, data flows are modeled; the service availability analysis uses fault tree analysis; the performance analysis employs queuing networks; and the application usage analysis combines the Technology Acceptance Model and the Task-Technology Fit model. The accuracy of the framework's estimates was empirically tested. Data accuracy and service performance were evaluated in studies at the same power utility. Service availability was tested in multiple studies at banks and power utilities. Data was collected through interviews with system development or maintenance staff. The application usage model was tested in the maintenance management domain. Here, data was collected by means of a survey answered by 55 respondents from three power utilities, one manufacturing company and one nuclear power plant. The service availability studies provided estimates that were accurate within a few hours of logged yearly downtime. The data accuracy estimate was correct to within a percentage point when compared to a sample of data objects. Deviations for four out of five service performance estimates were within 15% of measured values. The application usage analysis explained a high degree of variation in application usage when applied to the maintenance management domain. During the studies of data accuracy, service performance and service availability, records were kept of the required modeling and analysis effort. The estimates were obtained with a total effort of about 20 man-hours per estimate. In summary, the framework should be useful for IT decision-makers requiring fairly accurate, but not too expensive, estimates of the four properties. / QC 20120912
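A minimal sketch of the two analysis styles combined in the framework, with made-up availability figures and arrival/service rates rather than the thesis' case-study data: series/parallel availability aggregation in the spirit of fault tree analysis, and an M/M/1 queuing estimate of mean service response time.

```python
# Hedged sketch, not the thesis tool: fault-tree-style availability roll-up
# and a single-queue (M/M/1) performance estimate.
def avail_and(*a):
    """All components required (series): availabilities multiply."""
    p = 1.0
    for x in a:
        p *= x
    return p

def avail_or(*a):
    """Redundant components (parallel): one minus product of unavailabilities."""
    q = 1.0
    for x in a:
        q *= (1.0 - x)
    return 1.0 - q

# Example: a service needs the database AND one of two redundant web nodes.
service_avail = avail_and(0.999, avail_or(0.99, 0.99))
yearly_downtime_h = (1.0 - service_avail) * 8760   # expected downtime hours/year

# M/M/1 response time: arrival rate lam and service rate mu (requests/second).
lam, mu = 40.0, 50.0
assert lam < mu, "queue must be stable"
response_time_s = 1.0 / (mu - lam)                 # mean time in system
```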
34

Enterprise Systems Modifiability Analysis: An Enterprise Architecture Modeling Approach for Decision Making

Lagerström, Robert January 2010 (has links)
Contemporary enterprises depend to a great extent on software systems. During the past decades the number of systems has been constantly increasing and these systems have become more integrated with one another. This has led to a growing complexity in managing software systems and their environment. At the same time, business environments today need to progress and change rapidly to keep up with evolving markets. As the business processes change, the systems need to be modified in order to continue supporting the processes. The increase in complexity and the growing demand for rapid change make the management of enterprise systems a very important issue. In order to achieve effective and efficient management, it is essential to be able to analyze system modifiability (i.e., to estimate the future change cost). This is addressed in the thesis by employing architectural models. The contribution of this thesis is a method for software system modifiability analysis using enterprise architecture models. The contribution includes an enterprise architecture analysis formalism, a modifiability metamodel (i.e., a modeling language), and a method for creating metamodels. The proposed approach allows IT decision-makers to model and analyze change projects. By doing so, high-quality decision support regarding change project costs is obtained. This thesis is a composite thesis consisting of five papers and an introduction. Paper A evaluates a number of analysis formalisms and proposes extended influence diagrams to be employed for enterprise architecture analysis. Paper B presents the first version of the modifiability metamodel. In Paper C, a method for creating enterprise architecture metamodels is proposed. This method aims to be general, i.e., it can be employed for other IT-related quality analyses such as interoperability, security, and availability. The paper does, however, use modifiability as a running case. The second version of the modifiability metamodel for change project cost estimation is fully described in Paper D. Finally, Paper E validates the proposed method and metamodel by surveying 110 experts and studying 21 change projects at four large Nordic companies. The validation indicates that the method and metamodel are useful, contain the right set of elements and provide good estimation capabilities. / QC20100716
35

Error Propagation and Metamodeling for a Fidelity Tradeoff Capability in Complex Systems Design

McDonald, Robert Alan 07 July 2006 (has links)
Complex man-made systems are ubiquitous in modern technological society. The national air transportation infrastructure and the aircraft that operate within it, the highways stretching coast-to-coast and the vehicles that travel on them, and global communications networks and the computers that make them possible are all complex systems. It is impossible to fully validate a systems analysis or a design process. Systems are too large, complex, and expensive for building test and validation articles. Furthermore, the operating conditions throughout the life cycle of a system are impossible to predict and control for a validation experiment. Error is introduced at every point in a complex systems design process. Every error source propagates through the complex system in the same way information propagates: feedforward, feedback, and coupling are all present with error. As with error propagation through a single analysis, error sources grow and decay when propagated through a complex system. These behaviors are further complicated by the interactions of a complete system. This complication and the loss of intuition that accompanies it make proper error propagation calculations even more important as an aid to the decision maker. Error allocation and fidelity trade decisions answer questions like: Is the fidelity of a complex systems analysis adequate, or is an improvement needed, and how is that improvement best achieved? Where should limited resources be invested for the improvement of fidelity? How does knowledge of the imperfection of a model impact design decisions based on the model and the certainty of the performance of a particular design? In this research, a fidelity trade environment was conceived, formulated, developed, and demonstrated. This development relied on the advancement of enabling techniques including error propagation, metamodeling, and information management. A notional transport aircraft is modeled in the fidelity trade environment. Using the environment, the designer is able to make design decisions while considering error, and to make decisions regarding the required tool fidelity as the design problem progresses. These decisions could not be made in a quantitative manner before the fidelity trade environment was developed.
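The following sketch illustrates Monte Carlo error propagation through a small coupled analysis with feedback; the two linear analyses and their error magnitudes are invented for the example and are not the fidelity trade environment itself. Each sample draws the disciplinary model errors, iterates the coupled system to convergence, and the spread of the system response quantifies how those errors propagate.

```python
# Illustrative Monte Carlo error propagation through two coupled analyses.
import numpy as np

rng = np.random.default_rng(2)

def coupled_system(x, err_a, err_b, iters=50):
    """Fixed-point iteration of two coupled analyses with additive model error."""
    a, b = 1.0, 1.0
    for _ in range(iters):
        a = 0.5 * x + 0.3 * b + err_a      # analysis A depends on B (feedback)
        b = 0.2 * x + 0.4 * a + err_b      # analysis B depends on A
    return a + b                           # system-level response

x_design = 2.0
samples = [coupled_system(x_design,
                          rng.normal(0, 0.05),   # fidelity error of analysis A
                          rng.normal(0, 0.10))   # fidelity error of analysis B
           for _ in range(5000)]
mean, std = np.mean(samples), np.std(samples)    # propagated uncertainty at the design
```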
36

Metamodeling for the HLA Federation Architectures

Topcu, Okan 01 December 2007 (has links) (PDF)
This study proposes a metamodel, named the Federation Architecture Metamodel (FAMM), for describing the architecture of a High Level Architecture (HLA) compliant federation. The metamodel provides a domain-specific language and a formal representation for the federation, adopting a Domain-Specific Metamodeling approach to HLA-compliant federations. The metamodel supports the definition of transformations both as source and as target. Specifically, it supports federate base code generation from a described federate behavior, and it supports transformations from a simulation conceptual model. A salient feature of FAMM is the behavioral description of federates based on live sequence charts (LSCs). It is formulated in metaGME, the meta-metamodel for the Generic Modeling Environment (GME). This thesis discusses specifically the following points: the approach to building the metamodel, the metamodel extension from Message Sequence Charts (MSCs) to LSCs, support for model-based code generation, and the integration of the action model and the domain-specific data model. Lastly, this thesis presents, through a series of modeling case studies, the Federation Architecture Modeling Environment (FAME), a domain-specific model-building environment provided by GME once FAMM is invoked as the base paradigm.
37

Simulation-Based Robust Revenue Maximization of Coal Mines Using Response Surface Methodology

Nageshwaraniyergopalakrishnan, Saisrinivas January 2014 (has links)
A robust simulation-based optimization approach is proposed for truck-shovel systems in surface coal mines to maximize the expected value of revenue obtained from loading customer trains. To this end, a large surface coal mine in North America is considered as a case study. A data-driven modeling framework is developed and then applied to automatically generate a highly detailed simulation model of the mine in Arena. The framework comprises a formal information model based on the Unified Modeling Language (UML), which is used to input mine structural as well as production information. Petri net-based model generation procedures are applied to automatically generate the simulation model from the whole set of simulation inputs. Factors encountered in material handling operations that may affect the robustness of revenue are then classified into 1) controllable and 2) uncontrollable categories. The controllable factors are the trucks locked to routes, while the uncontrollable factors are the inverses of the summed truck haul, shovel loading, and truck-dumping times for each route. Historical production data of the mine contained in a data warehouse is used to derive probability distributions for the uncontrollable factors. The data warehouse is implemented in Microsoft SQL, and contains snapshots of historical equipment statuses and production outputs taken at regular intervals in each shift of the mine. Response Surface Methodology is applied to derive an expression for the variance of revenue as a function of the controllable and uncontrollable factors. More specifically, 1) first-order and second-order effects for controllable factors, 2) first-order effects for uncontrollable factors, and 3) two-factor interactions between controllable and uncontrollable factors are considered. The Latin Hypercube Sampling method is applied for setting the controllable factors and the means of the uncontrollable factors. Also, the Common Random Numbers method is applied to generate the sequences of pseudo-random numbers for the uncontrollable factors in the simulation experiments, for variance reduction between different design points of the metamodel. The variance of the metamodel is validated using leave-one-out cross validation. It is later applied as an additional constraint in the mathematical formulation used to maximize revenue in the simulation model with OptQuest. The decision variables in this formulation are the truck locks only. Revenue is a function of the actual quality of coal delivered to each customer and their corresponding quality specifications for premiums and penalties. OptQuest is an optimization add-on for Arena that uses Tabu search and Scatter search algorithms to arrive at the optimal solution. The upper bound on the variance used as a constraint is varied to obtain different sets of expected values and variances of the optimal revenue. Comparison with results obtained using OptQuest with random sampling and without the variance expression of the metamodel shows that the proposed approach yields a decision variable set that results not only in a higher expected value but also in a narrower confidence interval for the optimal revenue. To the best of our knowledge, there are two major contributions from this research: 1) It is theoretically demonstrated, using 2-point and orthonormal k-point response surfaces, that Common Random Numbers reduce the error in estimating the variance of the metamodel of the simulation model.
2) A data-driven modeling and simulation framework has been proposed for automatically generating discrete-event simulation models of large surface coal mines, reducing modeling time and expenditure as well as the human errors associated with manual development.
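A hedged sketch of the sampling machinery described above (the one-line revenue simulator, its parameters, and the seed values are stand-in assumptions, not the Arena mine model): a Latin Hypercube design over the factors is combined with Common Random Numbers, i.e., the same pseudo-random streams are reused at every design point so that differences between design points are not masked by sampling noise.

```python
# Sketch: Latin Hypercube design plus Common Random Numbers around a toy
# stand-in for one replication of the truck-shovel simulation.
import numpy as np
from scipy.stats import qmc

def simulate_revenue(truck_locks, haul_rate, seed):
    """Toy stand-in for one replication of the mine simulation."""
    rng = np.random.default_rng(seed)                 # CRN: same seed at every design point
    cycle_times = rng.exponential(1.0 / haul_rate, size=200)
    tons_delivered = truck_locks * 100.0 / (1.0 + cycle_times.mean())
    return 25.0 * tons_delivered                      # $/ton times tonnage

# Latin Hypercube design over the controllable factor (truck locks) and the
# mean of an uncontrollable factor (haul rate).
design = qmc.LatinHypercube(d=2, seed=42).random(n=20)
lo, hi = np.array([5.0, 0.5]), np.array([15.0, 2.0])
points = qmc.scale(design, lo, hi)

crn_seeds = [7, 8, 9]                                 # identical replication seeds per design point
responses = np.array([[simulate_revenue(tl, hr, s) for s in crn_seeds]
                      for tl, hr in points])
rev_mean, rev_var = responses.mean(1), responses.var(1, ddof=1)
# rev_mean and rev_var would then feed the second-order response surface fit.
```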
38

A Metamodel for the High Level Architecture Object Model

Cetinkaya, Deniz 01 August 2005 (has links) (PDF)
The High Level Architecture (HLA), IEEE Std. 1516-2000, provides a general framework for distributed modeling and simulation applications, called federations. HLA focuses on the interconnection of interacting simulations, called federates, with special emphasis on reusability and interoperability. An HLA object model, be it a simulation object model (SOM), a federation object model (FOM) or the management object model (MOM), describes the data exchanged during federation execution. This thesis introduces a metamodel for the HLA Object Model, fully accounting for IEEE Std. 1516.2. The metamodel is constructed with GME (Generic Modeling Environment), a meta-programmable tool for domain-specific modeling developed at Vanderbilt University. Given the HLA OM metamodel as input, GME generates a design environment for HLA object models. This work can be regarded as a step toward bringing model-integrated computing to bear on HLA-based distributed simulation.
39

Systematic use of models of concurrency in executable domain-specific modelling languages

Latombe, Florent 13 July 2016 (has links)
Language-Oriented Programming (LOP) advocates designing eXecutable Domain-Specific Modeling Languages (xDSMLs) to facilitate the design, development, verification and validation of modern software-intensive and highly-concurrent systems. These systems place their need for rich concurrency constructs at the heart of modern software engineering processes. To ease their development, theoretical computer science has studied the use of dedicated paradigms for the specification of concurrent systems, called Models of Concurrency (MoCs). They enable the use of concurrency-aware analyses, such as detecting deadlocks or starvation situations, but are complex to understand and master. In this thesis, we develop and extend an approach that aims at reconciling LOP and MoCs by designing so-called concurrency-aware xDSMLs. In these languages, the systematic use of a MoC is specified at the language level, removing from the end-user the burden of understanding or using MoCs. It also allows the refinement of the language for specific execution platforms, and enables the use of concurrency-aware analyses on the systems.
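As a small, self-contained illustration of what a Model of Concurrency provides (a classic Petri net here, unrelated to the thesis' tooling and metalanguage), the sketch below lets the MoC alone decide which steps of a model are enabled and may execute concurrently.

```python
# Illustrative Petri-net MoC: the net's firing rule decides which steps run.
def enabled(transitions, marking):
    """Transitions whose input places all hold at least one token."""
    return [t for t, (pre, _) in transitions.items()
            if all(marking[p] >= 1 for p in pre)]

def fire(transitions, marking, t):
    """Fire transition t: consume input tokens, produce output tokens."""
    pre, post = transitions[t]
    for p in pre:
        marking[p] -= 1
    for p in post:
        marking[p] += 1

# Two producer steps feeding one consumer step through a shared buffer place.
net = {"produce_a": (["idle_a"], ["buffer", "idle_a"]),
       "produce_b": (["idle_b"], ["buffer", "idle_b"]),
       "consume":   (["buffer"], [])}
marking = {"idle_a": 1, "idle_b": 1, "buffer": 0}
print(enabled(net, marking))      # both producers may run concurrently
fire(net, marking, "produce_a")
print(enabled(net, marking))      # now the consumer is enabled as well
```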
40

Modelling of metal removal rate in titanium alloy milling

Andersson, Niklas January 2018 (has links)
Titanium is the fourth most abundant structural metal in the Earth's crust. It occurs in composition with other elements, forming titanium alloys. These alloys are used in many different areas, such as medical, energy and sports applications, but are most commonly used in aerospace. Titanium alloys have different solid phases, α, α+β and β, depending on temperature and the amount of α- and β-stabilizers. When machining titanium alloys, one of the most important factors to control is the temperature in the cutting zone. The heat built up at the cutting edge of the tool is connected to the titanium alloys' low thermal conductivity and high heat capacity, which means that the alloy conducts little heat away from the cutting zone. The temperature depends strongly on the cutting speed, which is the relative speed difference between the cutting tool and the workpiece. Many studies and research efforts have been devoted to this fact, focusing on physical and chemical quantities to model tool wear progression and how it affects tool life and metal removal. These models are often implemented and analyzed in finite element software, providing detailed but time-consuming solutions. The focus of this work has been on developing a suitable tool life expectancy model, using design of experiments in combination with metamodeling to establish a model connecting cutting parameters and measured responses in terms of tool life, based on a conducted milling experiment. These models were intended to provide a platform for customer recommendations and cutting data optimization to secure reliable machining operations. The study was limited to the common α+β titanium alloy Ti-6Al-4V. The conclusion of this study is that tool life is strongly connected to the choice of cutting speed and the radial width of cut, and that these parameters can be predicted by the two models that have been developed in this project. The models ensure the highest possible metal removal rate for the selected parameters.
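A hedged sketch of the kind of metamodel described (the design-of-experiments results and the purely quadratic model form are illustrative assumptions, not the thesis' measurements): an ordinary least-squares fit of a second-order response surface relating cutting speed and radial width of cut to tool life, followed by a prediction for a candidate parameter set.

```python
# Illustrative quadratic tool-life metamodel fitted to assumed DOE results.
import numpy as np

# Assumed example DOE results: (vc [m/min], ae [mm]) -> tool life [min]
vc = np.array([40.0, 40.0, 60.0, 60.0, 50.0, 50.0, 50.0])
ae = np.array([2.0,  6.0,  2.0,  6.0,  4.0,  2.0,  6.0])
life = np.array([38.0, 22.0, 20.0, 9.0, 18.0, 27.0, 13.0])

def quad_terms(vc, ae):
    """Second-order response-surface terms in the two cutting parameters."""
    return np.column_stack([np.ones_like(vc), vc, ae, vc**2, ae**2, vc * ae])

coeffs, *_ = np.linalg.lstsq(quad_terms(vc, ae), life, rcond=None)

# Predict tool life for a candidate parameter set before committing to a test.
vc_new, ae_new = np.array([55.0]), np.array([3.0])
predicted_life = quad_terms(vc_new, ae_new) @ coeffs
```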
