141

Estudos geoestatísticos aplicados à um depósito magmático de Ni-Cu / Geostatistical studies applied to a Ni-Cu magmatic deposit

Oliveira, Saulo Batista de 06 March 2009 (has links)
The studied deposit comprises a mafic-ultramafic suite with associated copper-nickel sulphide mineralization and has an extensive database of chemical analyses, density measurements, and lithological descriptions for the diamond drill-hole samples. This work applies different geostatistical techniques with two distinct aims: first, the calculation of the deposit's mineral resources through ordinary kriging, and second, the generation of a geological model from lithologies estimated through indicator kriging. To this end, the database was carefully validated using descriptive statistics and multiple regression analysis for the continuous variables and cluster analysis for the categorical variables. Three-dimensional modeling of the three geological units and of the ore bodies followed, after which nickel and copper grades were estimated by ordinary kriging and lithologies by indicator kriging. In this way it was possible to generate a probabilistic geological model useful for understanding the geometric and stratigraphic relationships of the rock bodies, and to compare the classical geological interpretation and the chemical grades with the estimated categorical data, presenting indicator kriging as an attractive alternative in deposit evaluation studies.
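To make the grade-estimation step concrete, the sketch below implements ordinary kriging with a spherical variogram in plain NumPy. It is a minimal illustration, not the workflow of the thesis: the drill-hole coordinates, nickel grades, and variogram parameters (nugget, sill, range) are all hypothetical placeholders.

```python
# Minimal ordinary-kriging sketch (illustrative only; variogram parameters are hypothetical).
import numpy as np

def spherical_gamma(h, nugget=0.1, sill=1.0, rang=150.0):
    """Spherical semivariogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h >= rang, sill, np.where(h == 0.0, 0.0, g))

def ordinary_krige(coords, values, target):
    """Estimate the value and kriging variance at `target` from sampled data."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Ordinary-kriging system: [Gamma 1; 1 0] [w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ values
    variance = b[:n] @ w + mu          # kriging variance
    return estimate, variance

rng = np.random.default_rng(0)
xy = rng.uniform(0, 500, size=(30, 2))               # drill-hole positions (made up)
ni = rng.lognormal(mean=-1.0, sigma=0.4, size=30)    # Ni grades in %, made up
print(ordinary_krige(xy, ni, np.array([250.0, 250.0])))
```

The same weighting machinery underlies indicator kriging; there the values are 0/1 indicators of lithology membership, and the kriged output is read as a probability of each lithology at the target location.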
142

Multiscale modeling of multimaterial systems using a Kriging based approach

Sen, Oishik 01 December 2016 (has links)
This work presents a framework for multiscale modeling of multimaterial flows using surrogate modeling techniques in the particular context of shocks interacting with clusters of particles. The work builds a framework for bridging scales in shock-particle interaction by using ensembles of resolved mesoscale computations of shocked particle-laden flows. The information from mesoscale models is “lifted” by constructing metamodels of the closure terms; the thesis analyzes several issues pertaining to surrogate-based multiscale modeling frameworks. First, to create surrogate models, the effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method, and a Dynamic Kriging (DKG) method, is evaluated. The rate of convergence of the error when these techniques are used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is fewer than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG method is favored for bridging scales in a multi-scale solver. After this, closure laws for drag are constructed in the form of surrogate models derived from resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data, i.e., using the fewest mesoscale simulations. It is shown that, unlike the DKG method, the MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. In macroscale models for shock-particle interactions, Subgrid Particle Reynolds’ Stress Equivalent (SPARSE) terms arise because of velocity fluctuations due to fluid-particle interaction in the subgrid/meso scales. Mesoscale computations are performed to calculate the SPARSE terms and the kinetic energy of the fluctuations for different values of Mach number and particle volume fraction. Closure laws for the SPARSE terms are constructed using the MBKG method. It is found that the directions normal and parallel to that of shock propagation are the principal directions of the SPARSE tensor. It is also found that the kinetic energy of the fluctuations is independent of the particle volume fraction and is 12-15% of the incoming shock kinetic energy for higher Mach numbers. Finally, the thesis addresses the cost of performing large ensembles of resolved mesoscale computations for constructing surrogates. Variable-fidelity techniques are used to construct an initial surrogate from ensembles of coarse-grid, relatively inexpensive computations, while the use of resolved high-fidelity simulations is limited to the correction of the initial surrogate. Different variable-fidelity techniques, viz. the Space Mapping method, RBFs, and the MBKG method, are evaluated based on their ability to correct the initial surrogate.
It is found that the MBKG method uses the fewest resolved mesoscale computations to correct the low-fidelity metamodel. Instead of using 56 high-fidelity computations to obtain a surrogate, the MBKG method constructs surrogates from only 15 resolved computations, resulting in a drastic reduction of computational cost.
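The basic ingredient of such closures is a Kriging-type surrogate over the two input parameters. As a rough sketch of that idea only, the example below fits a Gaussian-process (Kriging) surrogate for a drag coefficient over (Mach number, particle volume fraction) using scikit-learn; the training values and kernel choice are synthetic placeholders, not the DKG/MBKG formulations or mesoscale results of the thesis.

```python
# Sketch: Kriging-style surrogate for a drag closure over (Mach number, volume fraction).
# Training values are synthetic placeholders, not mesoscale simulation results.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Design points: (Mach number, particle volume fraction)
X = np.array([[1.5, 0.05], [1.5, 0.20], [2.5, 0.05], [2.5, 0.20],
              [3.5, 0.05], [3.5, 0.20], [2.0, 0.10], [3.0, 0.15]])
drag = np.array([1.2, 2.9, 1.8, 4.1, 2.3, 5.0, 1.9, 3.6])   # made-up drag values

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 0.1])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, n_restarts_optimizer=5)
gp.fit(X, drag)

# Query the surrogate where no mesoscale simulation was run.
query = np.array([[2.8, 0.12]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted drag {mean[0]:.2f} +/- {std[0]:.2f}")
```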
143

Efficient sampling-based RBDO by using virtual support vector machine and improving the accuracy of the Kriging method

Song, Hyeongjin 01 December 2013 (has links)
The objective of this study is to propose an efficient sampling-based RBDO approach using a new classification method to reduce the computational cost. In addition, accuracy improvement strategies for the Kriging method are proposed to reduce the number of expensive computer experiments. The current research effort involves: (1) developing a new classification method that is more efficient than conventional surrogate modeling methods while maintaining the required accuracy level; (2) developing a sequential adaptive sampling method that inserts samples near the limit state function; (3) improving the efficiency of the RBDO process by using a fixed hyper-spherical local window with an efficient uniform sampling method and identification of active/violated constraints; and (4) improving the accuracy of the Kriging method by introducing several strategies. In sampling-based RBDO, only accurate classification information is needed instead of an accurate response surface. On the other hand, in general, surrogates are constructed using all available DoE samples instead of focusing on the limit state function. Therefore, the computational cost of surrogates can be relatively high, and the accuracy of the limit state (or decision) function can be sacrificed in return for reducing the error in unnecessary regions away from the limit state function. In contrast, the support vector machine (SVM), which is a classification method, only uses support vectors, which are located near the limit state function, to focus on the decision function. Therefore, the SVM is very efficient and ideally applicable to sampling-based RBDO, if the accuracy of the SVM is improved by inserting virtual samples near the limit state function. The proposed sequential sampling method inserts new samples near the limit state function so that the number of DoE samples is minimized. In many engineering problems, expensive computer simulations are used, and thus the total computational cost needs to be reduced by using fewer DoE samples. Several efficiency strategies, such as (1) launching RBDO at a deterministic optimum design, (2) hyper-spherical local windows with an efficient uniform sampling method, (3) filtering of constraints, (4) sample reuse, and (5) improved virtual sample generation, are used for the proposed sampling-based RBDO using the virtual SVM. The number of computer experiments is also reduced by implementing accuracy improvement strategies for the Kriging method. Since the Kriging method is used for generating virtual samples and generating the response surface of the cost function, the number of computer experiments can be reduced by introducing (1) accurate correlation parameter estimation, (2) penalized maximum likelihood estimation (PMLE) for small sample sizes, (3) correlation model selection by MLE, and (4) mean structure selection by cross-validation (CV) error.
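The classification idea behind sampling-based RBDO can be sketched very simply: label DoE samples as feasible or violated with the limit state function, fit an SVM to the labels, and then run cheap Monte Carlo on the classifier instead of the expensive simulation. The toy limit state, sample counts, and distributions below are illustrative assumptions; the thesis's virtual-sample insertion and sequential refinement are not reproduced.

```python
# Sketch: SVM classification of feasible/violated samples for a sampling-based
# reliability estimate. The limit-state function g(x) below is a toy example.
import numpy as np
from sklearn.svm import SVC

def g(x):
    """Toy limit state: failure when g(x) < 0."""
    return 2.0 - x[:, 0] ** 2 / 10.0 - x[:, 1]

rng = np.random.default_rng(1)
X_doe = rng.normal(size=(80, 2)) * 2.0          # DoE samples spread wide to cover the space
labels = (g(X_doe) < 0).astype(int)             # 1 = violated constraint

svm = SVC(kernel="rbf", C=100.0, gamma="scale")
svm.fit(X_doe, labels)

# Cheap Monte Carlo on the classifier instead of the expensive simulation.
X_mc = rng.normal(size=(200_000, 2))            # input distribution (standard normal here)
pof = svm.predict(X_mc).mean()
print(f"estimated probability of failure: {pof:.4f}")
```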
144

A Spatio-Temporal Analysis of Dolphinfish, Coryphaena hippurus, Abundance in the Western Atlantic: Implications for Stock Assessment of a Data-Limited Pelagic Resource

Kleisner, Kristin Marie 26 July 2008 (has links)
Dolphinfish (Coryphaena hippurus) is a pelagic species that is ecologically and commercially important in the western Atlantic region. This species has been linked to dominant oceanographic features such as sea surface temperature (SST) frontal regions. This work first explored the linkages between the catch rates of dolphinfish and the oceanography (satellite-derived SST, distance-to-front calculations, bottom depth, and hook depth) using Principal Components Analysis (PCA). It was demonstrated that higher catch rates are found in relation to warmer SST and nearer to frontal regions. This environmental information was then included in standardizations of catch-per-unit-effort (CPUE) indices. It was found that including the satellite-derived SST and distance to front increases the confidence in the index. The second part of this work focused on addressing spatial variability in the catch rate data for a subsection of the sampling area: the Gulf of Mexico region. This study used geostatistical techniques to model and predict spatial abundances of two pelagic species with different habitat utilization patterns: dolphinfish (Coryphaena hippurus) and swordfish (Xiphias gladius). We partitioned catch rates into two components: the probability of encounter and the abundance given a positive encounter. We obtained separate variograms and kriged predictions for each component and combined them to give a single density estimate with corresponding variance. By using this two-stage approach we were able to detect patterns of spatial autocorrelation that differed distinctly between the two species, likely due to differences in vertical habitat utilization. The patchy distribution of many living resources necessitates a two-stage variogram modeling and prediction process in which the probability of encounter and the positive observations are modeled and predicted separately. Such a "geostatistical delta-lognormal" approach to modeling spatial autocorrelation has distinct advantages in allowing the probability of encounter and the abundance given an encounter to possess separate patterns of autocorrelation, and in modeling severely non-normally distributed data that are plagued by zeros.
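A compact sketch of the delta-lognormal combination is shown below: interpolate the probability of encounter and the log of the positive catch rates separately, then multiply the back-transformed surfaces into a single density estimate. A generic Gaussian-process regressor stands in for the variogram-based kriging actually used in the study, and all locations and catch data are synthetic.

```python
# Sketch of a delta-lognormal combination: kriged encounter probability times
# kriged positive abundance. Synthetic data; a generic GP stands in for the
# variogram-based kriging used in the study.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
lonlat = rng.uniform([-90, 20], [-75, 35], size=(120, 2))     # set locations (made up)
encounter = rng.binomial(1, 0.4, size=120)                    # 1 = dolphinfish caught
cpue_pos = np.where(encounter == 1, rng.lognormal(0.5, 0.8, 120), np.nan)

kern = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
gp_p = GaussianProcessRegressor(kernel=kern).fit(lonlat, encounter)   # indicator surface
pos = encounter == 1
gp_a = GaussianProcessRegressor(kernel=kern).fit(lonlat[pos], np.log(cpue_pos[pos]))

grid = np.array([[-82.0, 27.0], [-80.0, 25.0]])               # prediction points
p_hat = np.clip(gp_p.predict(grid), 0.0, 1.0)                 # encounter probability
mu, sd = gp_a.predict(grid, return_std=True)
density = p_hat * np.exp(mu + 0.5 * sd**2)                    # lognormal back-transform
print(density)
```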
145

Robust design using sequential computer experiments

Gupta, Abhishek 30 September 2004 (has links)
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. The use of computer models has certain advantages over running physical experiments, such as being cost-effective, making it easy to try out different design alternatives, and having a greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to calculate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments with the presence of random errors, are not very effective in dealing with deterministic FEA simulation outputs. In this thesis, we will utilize a spatial statistical method (i.e., a Kriging model) for analyzing deterministic computer simulation-based experiments. Subsequently, we will devise a sequential strategy, which allows us to explore the whole response surface in an efficient way. The overall number of computer experiments will be markedly reduced compared with the traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
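One common way to realize such a sequential strategy is to refit the Kriging model after each run and place the next simulation where the predictive uncertainty is largest. The loop below sketches that generic idea under stated assumptions: a cheap analytic function stands in for the expensive FEA code, and the maximum-variance infill criterion is an illustrative choice, not necessarily the one used in the thesis.

```python
# Sketch of a sequential computer-experiment loop: fit a Kriging (GP) model,
# add the candidate point with the largest predictive uncertainty, repeat.
# A cheap analytic function stands in for the expensive FEA response.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fea_response(x):
    return np.sin(8 * x[:, 0]) + 0.5 * x[:, 1] ** 2    # placeholder "stress" response

rng = np.random.default_rng(3)
X = rng.uniform(size=(6, 2))                 # small initial design
y = fea_response(X)
candidates = rng.uniform(size=(2000, 2))     # dense candidate pool

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)][None, :]          # most uncertain location
    X = np.vstack([X, x_new])
    y = np.append(y, fea_response(x_new))
    print(f"iteration {it}: added {x_new.ravel()}, max std {std.max():.3f}")
```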
146

An Efficient Robust Concept Exploration Method and Sequential Exploratory Experimental Design

Lin, Yao 31 August 2004 (has links)
Experimentation and approximation are essential for efficiency and effectiveness in concurrent engineering analyses of large-scale complex systems. The approximation-based design strategy is not fully utilized in industrial applications in which designers have to deal with multi-disciplinary, multi-variable, multi-response, and multi-objective analysis using very complicated and expensive-to-run computer analysis codes or physical experiments. With current experimental design and metamodeling techniques, it is difficult for engineers to develop acceptable metamodels for irregular responses and achieve good design solutions in large design spaces at low cost. To circumvent this problem, engineers tend to either adopt low-fidelity simulations or models with which important response properties may be lost, or restrict the study to very small design spaces. Information from expensive physical or computer experiments is often used as validation in late design stages rather than as an analysis tool in early-stage design. This increases the possibility of expensive re-design processes and lengthens the time-to-market. In this dissertation, two methods, the Sequential Exploratory Experimental Design (SEED) and the Efficient Robust Concept Exploration Method (E-RCEM), are developed to address these problems. The SEED and E-RCEM methods help develop acceptable metamodels for irregular responses with expensive experiments and achieve satisficing design solutions in large design spaces with limited computational or monetary resources. It is verified that more accurate metamodels are developed and better design solutions are achieved with SEED and E-RCEM than with traditional approximation-based design methods. SEED and E-RCEM enable full use of the simulation-and-approximation-based design strategy in engineering and scientific applications. Several preliminary approaches for metamodel validation with additional validation points are proposed in this dissertation, after verifying that the most widely used method, leave-one-out cross-validation, is theoretically inappropriate for testing the accuracy of metamodels. A comparison of the performance of kriging and MARS metamodels is also presented. Then a sequential metamodeling approach is proposed to utilize different types of metamodels along the design timeline. Several single-variable or two-variable examples and two engineering examples, the design of pressure vessels and the design of unit cells for linear cellular alloys, are used in this dissertation to facilitate our studies.
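The validation point can be illustrated by computing both error measures for the same Kriging metamodel: the leave-one-out cross-validation error from the training design, and the error at independent validation points. The test function, sample sizes, and kernel below are illustrative assumptions, not the cases studied in the dissertation.

```python
# Sketch: comparing leave-one-out cross-validation error with the error measured
# at independent validation points for a Kriging metamodel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut, cross_val_score

def response(x):
    return np.sin(6 * x[:, 0]) * np.exp(-x[:, 1])       # placeholder simulation

rng = np.random.default_rng(4)
X_train = rng.uniform(size=(20, 2))
y_train = response(X_train)
X_valid = rng.uniform(size=(200, 2))                    # additional validation points
y_valid = response(X_valid)

model = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)

loo_mse = -cross_val_score(model, X_train, y_train, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
model.fit(X_train, y_train)
valid_mse = np.mean((model.predict(X_valid) - y_valid) ** 2)
print(f"LOO-CV MSE: {loo_mse:.4f}, independent-validation MSE: {valid_mse:.4f}")
```

When the two numbers disagree substantially, the cross-validation score is giving a misleading picture of metamodel accuracy, which is the motivation for validating with additional points.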
148

Spatial analysis modeling for marine reserve planning: example of Kaomei wetland

Chen, Chun-te 16 July 2008 (has links)
It is internationally acknowledged that a marine protected area (MPA) is an important measure for maintaining biodiversity and rescuing endangered species. An MPA can also effectively curb human interference such as development and pollution discharge. The establishment of an MPA makes it possible to fulfill the goal of sustainable management, which is to conserve marine habitat for an integrated ecosystem and higher biodiversity. However, how to design an effective MPA remains an important research issue to be explored. In order to grasp the spatial distribution of the ecological data in the study area, this research uses the spatial interpolation tool, kriging, provided by Geographic Information System (GIS) software. Three spatial analytical models were then developed based on integer programming techniques, and all three models are guaranteed to find the globally optimal solutions for the best protected-area partitions. This quantitative approach is more efficient and effective than qualitative methods in many respects. The models are able to preserve the maximum ecological resources within a limited spatial area. In addition, the model formulation can be adjusted for different environmental impact factors to fulfill users' requirements. The case study of this research designs an MPA for the Kaomei wetland; however, the spatial analytical models developed here can also be applied to protected-area design on land.
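The integer-programming flavor of such site-selection models can be sketched as a small 0/1 problem: choose grid cells so that the kriged ecological value is maximized under an area budget. The sketch below uses PuLP as one possible solver; the cell values and budget are made up, and the thesis's three models include additional spatial considerations not shown here.

```python
# Sketch: selecting reserve cells to maximize kriged ecological value under an
# area limit, posed as a small 0/1 integer program. Values and the area budget
# are made up; real models add further spatial constraints.
import numpy as np
import pulp

rng = np.random.default_rng(5)
n_cells = 40
value = rng.gamma(shape=2.0, scale=1.5, size=n_cells)   # kriged ecological value per cell
max_cells = 12                                           # area budget

prob = pulp.LpProblem("reserve_selection", pulp.LpMaximize)
pick = [pulp.LpVariable(f"cell_{i}", cat="Binary") for i in range(n_cells)]

prob += pulp.lpSum(value[i] * pick[i] for i in range(n_cells))   # objective
prob += pulp.lpSum(pick) <= max_cells                            # area constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [i for i in range(n_cells) if pick[i].value() == 1]
print("selected cells:", chosen)
```

Because the formulation is an integer program, the solver returns a provably optimal selection rather than a heuristic one, which mirrors the guarantee of global optimality claimed for the three models.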
149

Statistical validation and calibration of computer models

Liu, Xuyuan 21 January 2011 (has links)
This thesis deals with modeling, validation and calibration problems in experiments of computer models. Computer models are mathematical representations of real systems developed for understanding and investigating the systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations. As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods. One challenge is to deal with computer models with random inputs (random effects). Such computer models are very common in engineering applications. For example, in a thermal experiment at Sandia National Laboratories (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable. The reason is that integration over the random variable distribution is needed for the joint likelihood, and the integral cannot always be expressed in closed form. In this research, we propose a new approach that combines the nonlinear mixed effects model and the Gaussian process model (Kriging model). Different model formulations are also studied to gain a better understanding of validation and calibration activities using the thermal problem. Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional responses is sparse. Dimension reduction techniques can be used to overcome the complexity of functional responses; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data. Then the estimated parameters of the models are treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point and then functional curves are fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model to relate functional responses to both design and time variables in a single step. Second, we propose a functional kriging model that uses variable selection methods by imposing a penalty function. We show that the proposed model performs better than dimension-reduction-based approaches and the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
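A schematic of the calibration step helps fix ideas: emulate the computer model with a Kriging (GP) model over both the design input and the calibration parameter, then choose the parameter value that best matches the physical observations. This sketch omits the mixed-effects and functional-output machinery developed in the thesis; the simulator, data, and single calibration parameter are hypothetical.

```python
# Sketch of calibration with a Kriging emulator: fit a GP to simulator runs over
# (design input x, calibration parameter theta), then pick theta minimizing the
# misfit to physical observations. All data are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x, theta):
    return np.exp(-theta * x) + 0.1 * x        # stand-in for the computer model

rng = np.random.default_rng(6)
x_sim = rng.uniform(0, 2, size=60)
theta_sim = rng.uniform(0.5, 2.0, size=60)
y_sim = simulator(x_sim, theta_sim)
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 0.5]),
                                    normalize_y=True)
emulator.fit(np.column_stack([x_sim, theta_sim]), y_sim)

x_obs = np.linspace(0.1, 1.9, 10)                           # physical experiment settings
y_obs = simulator(x_obs, 1.3) + rng.normal(0, 0.02, 10)     # "true" theta = 1.3

def misfit(theta):
    pred = emulator.predict(np.column_stack([x_obs, np.full_like(x_obs, theta)]))
    return np.sum((pred - y_obs) ** 2)

result = minimize_scalar(misfit, bounds=(0.5, 2.0), method="bounded")
print(f"calibrated theta: {result.x:.2f}")
```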
150

Effektive Beobachtung von zufälligen Funktionen unter besonderer Berücksichtigung von Ableitungen / Effective observation of random functions with particular consideration of derivatives

Holtmann, Markus 10 December 2009 (has links) (PDF)
Experimental design for the approximation of random functions is investigated, considering deterministic spline methods, stochastic-deterministic kriging methods, and regression methods, each using derivative samples. The mathematical framework for proving a general equivalence between kriging and spline methods is developed. For the case of finitely many non-Hermitian samples, which is important in practical applications, an experimental design procedure is developed for random functions with asymptotically vanishing correlation. Furthermore, the influence of derivatives on the variance of (local) regression estimators is examined. Finally, a design procedure is presented that, through regularization with perturbed covariance matrices, mimics principles of classical experimental design in the correlated case.
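The perturbed-covariance-matrix idea can be illustrated by adding a small nugget to a kriging covariance matrix, which keeps the matrix well conditioned when design points nearly coincide. This is a minimal sketch under an assumed Gaussian covariance function and a toy design, not the specific regularization developed in the thesis.

```python
# Sketch: regularizing a kriging covariance matrix with a small nugget so the
# design criterion stays well conditioned when sample points nearly coincide.
import numpy as np

def cov(h, length=0.3):
    return np.exp(-(h / length) ** 2)           # Gaussian covariance (illustrative)

x = np.array([0.10, 0.101, 0.45, 0.80])         # two nearly coincident design points
H = np.abs(x[:, None] - x[None, :])
K = cov(H)

for nugget in (0.0, 1e-6, 1e-3):
    K_reg = K + nugget * np.eye(len(x))          # perturbed covariance matrix
    print(f"nugget={nugget:g}: condition number {np.linalg.cond(K_reg):.3e}")
```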
