21

Numerical simulation of backward erosion piping in heterogeneous fields

Liang, Yue; Yeh, Tian-Chyi Jim; Wang, Yu-Li; Liu, Mingwei; Wang, Junjie; Hao, Yonghong
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate BEP behavior and are influenced by the heterogeneity of soil properties. To investigate the effects of heterogeneity on seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progression in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the ratio of the particle contents r of the media are represented as stochastic variables, characterized by their means and variances, spatial correlation structures, and cross-correlations. Results of the simulations reveal that heterogeneity accelerates the development of preferential flow paths, which profoundly increases the likelihood of seepage failures. To account for unknown heterogeneity, we define the probability of seepage instability (PI) to evaluate the failure potential of a given site. Using Monte Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and variance of ln k and by its spatial correlation scales, whereas the other parameters, such as the means and variances of e and r and their cross-correlations, have minor impacts. Based on PI analyses, we introduce a risk rating system that classifies the field into regions according to risk level. This rating system is useful for preventing seepage failures and assists decision-making when BEP occurs.
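As a rough illustration of the Monte Carlo workflow described in this abstract (not the authors' actual algorithm), the sketch below draws spatially correlated ln k fields and estimates a probability of instability from a deliberately simplified piping criterion; the field parameters, the critical gradient `i_crit`, and the one-dimensional seepage-path geometry are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lnk_field(n, mean, var, corr_len, dx):
    """One realization of a spatially correlated Gaussian ln k field
    (exponential covariance, sampled via Cholesky factorization)."""
    x = np.arange(n) * dx
    cov = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return mean + L @ rng.standard_normal(n)

def is_unstable(lnk, dx, head_drop, i_crit):
    """Hypothetical criterion: piping initiates if the local hydraulic
    gradient anywhere along a 1D seepage path exceeds a critical value."""
    k = np.exp(lnk)
    q = head_drop / np.sum(dx / k)   # Darcy flux through cells in series
    return np.max(q / k) > i_crit    # local gradient i_j = q / k_j

def probability_of_instability(n_mc=2000, n=100, dx=0.5, mean_lnk=-9.0,
                               var_lnk=1.0, corr_len=5.0,
                               head_drop=4.0, i_crit=0.3):
    """Monte Carlo estimate of PI: fraction of field realizations that fail."""
    fails = sum(is_unstable(lnk_field(n, mean_lnk, var_lnk, corr_len, dx),
                            dx, head_drop, i_crit)
                for _ in range(n_mc))
    return fails / n_mc

# PI grows with the variance of ln k, consistent with the trend reported above
for v in (0.25, 1.0, 2.25):
    print(f"var(ln k) = {v:4.2f}  ->  PI = {probability_of_instability(var_lnk=v):.3f}")
```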
22

Nonlinear methods for time series: prediction by Double Vector Quantization and delay selection in high dimensions

Simon, Geoffroy, 15 June 2007
Time series arise in many fields, from finance to climatology to industrial processes. Their analysis, modeling, and prediction remain challenging today, both scientifically and in these many application domains. As an alternative to linear models, nonlinear models are used here for the analysis, modeling, and prediction of time series. Nonlinear models are potentially more powerful than linear models, but model structure selection, long-term prediction, and regressor construction are harder problems in the nonlinear setting. The structural parameters of several models and methods for structure selection are first described. Structure selection by FastBootstrap is complemented with a statistical test that provides a theoretical argument for approximating the Bootstrap optimism term by linear regression. Double Vector Quantization (Double Quantification Vectorielle, DQV), a model for long-term time series prediction, is then introduced. Parameter determination is detailed for scalar series and for multidimensional series, to which DQV is easily applied. The stability of DQV for long-term prediction is established theoretically. The capabilities of the method are illustrated on various examples, for short-term and long-term prediction, in both scalar and multidimensional settings. Regressor construction is addressed through a study of whether applying clustering methods to regressors is meaningful. A methodology for comparing phase-space reconstructions of time series is described and applied to several series. The results illustrate the importance of the delay in regressor construction and allow a position to be taken in an ongoing scientific debate: applying clustering methods to regressors is meaningful. Regressor construction with a single selected delay is then generalized to multiple delays. Generalizations of the autocorrelation and mutual information criteria to more than two variables are proposed, and the geometric Distance to the Diagonal criterion is introduced. All of these multiple-delay selection criteria are compared experimentally.
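The following is a hedged sketch of the Double Vector Quantization idea as summarized above: two codebooks, one on delay-vector regressors and one on their one-step deformations, linked by an empirical transition table and iterated for long-term prediction. The original method uses self-organizing maps; this illustration substitutes k-means codebooks and a toy sine series, and the codebook sizes and delay are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_regressors(series, d):
    """Delay vectors x_t = [s_t, ..., s_{t+d-1}] and their one-step deformations."""
    X = np.array([series[t:t + d] for t in range(len(series) - d)])
    return X[:-1], X[1:] - X[:-1]

def fit_dqv(regressors, deformations, n1=30, n2=30, seed=0):
    """Two codebooks plus the empirical transition table between them."""
    km_r = KMeans(n_clusters=n1, n_init=10, random_state=seed).fit(regressors)
    km_d = KMeans(n_clusters=n2, n_init=10, random_state=seed).fit(deformations)
    trans = np.zeros((n1, n2))
    for i, j in zip(km_r.labels_, km_d.labels_):
        trans[i, j] += 1.0                    # which deformation follows which regressor
    trans /= trans.sum(axis=1, keepdims=True)
    return km_r, km_d, trans

def predict(km_r, km_d, trans, x0, horizon, rng):
    """Long-term prediction: repeatedly quantize the regressor, draw a
    deformation prototype, and add it to obtain the next regressor."""
    x, path = x0.copy(), []
    for _ in range(horizon):
        i = km_r.predict(x[None, :])[0]
        j = rng.choice(trans.shape[1], p=trans[i])
        x = x + km_d.cluster_centers_[j]
        path.append(x[-1])                    # newest component = predicted sample
    return np.array(path)

# toy demonstration on a noisy sine series
series = np.sin(0.07 * np.arange(1200)) + 0.05 * np.random.default_rng(0).standard_normal(1200)
R, D = build_regressors(series, d=6)
km_r, km_d, trans = fit_dqv(R, D)
forecast = predict(km_r, km_d, trans, R[-1], horizon=100, rng=np.random.default_rng(1))
```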
23

Using Decline Curve Analysis, Volumetric Analysis, and Bayesian Methodology to Quantify Uncertainty in Shale Gas Reserve Estimates

Gonzalez Jimenez, Raul, 14 March 2013
Probabilistic decline curve analysis (PDCA) methods have been developed to quantify uncertainty in production forecasts and reserves estimates. However, the application of PDCA in shale gas reservoirs is relatively new, and little work has been done on the performance of PDCA methods when only limited production data are available. In addition, PDCA methods have often been coupled with Arps' equations, which might not be the optimal decline curve analysis (DCA) model to use, as new DCA models for shale reservoirs have been developed. Also, decline curve methods are based on production data only and do not by themselves incorporate other types of information, such as volumetric data. My research objective was to integrate volumetric information with PDCA methods and DCA models to reliably quantify the uncertainty in production forecasts from hydraulically fractured horizontal shale gas wells, regardless of the stage of depletion. In this work, hindcasts of multiple DCA models coupled to different probabilistic methods were performed to determine the reliability of the probabilistic DCA methods. In a hindcast, only a portion of the historical data is matched; predictions are made for the remainder of the historical period and compared to the actual historical production. Most of the DCA models were well calibrated visually when used with an appropriate probabilistic method, regardless of the amount of production data available to match. Volumetric assessments, used as prior information, were incorporated to further enhance the calibration of production forecasts and reserves estimates when using Markov chain Monte Carlo (MCMC) as the PDCA method and the logistic growth DCA model. The proposed combination of the MCMC PDCA method, the logistic growth DCA model, and volumetric data provides an integrated procedure to reliably quantify the uncertainty in production forecasts and reserves estimates in shale gas reservoirs. Reliable quantification of uncertainty should yield more reliable expected values of reserves estimates, as well as more reliable assessment of upside and downside potential. This can be particularly valuable early in the development of a play, because decisions regarding continued development are based to a large degree on production forecasts and reserves estimates for early wells in the play.
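The sketch below illustrates, under toy assumptions, the combination highlighted in this abstract: a logistic growth decline model whose carrying capacity K carries a volumetric prior, fitted with a simple random-walk Metropolis sampler. The numbers, the Gaussian error model, and the specific logistic form are illustrative stand-ins, not the thesis's workflow or data.

```python
import numpy as np

def cum_logistic(t, K, a, n):
    """Logistic growth model for cumulative production: Q(t) = K t^n / (a + t^n)."""
    return K * t**n / (a + t**n)

def log_posterior(theta, t, Q_obs, sigma, K_prior_mean, K_prior_sd):
    K, a, n = theta
    if K <= 0 or a <= 0 or n <= 0:
        return -np.inf
    resid = Q_obs - cum_logistic(t, K, a, n)
    log_like = -0.5 * np.sum((resid / sigma) ** 2)
    # the volumetric assessment enters as a prior on recoverable volume K
    log_prior = -0.5 * ((K - K_prior_mean) / K_prior_sd) ** 2
    return log_like + log_prior

def metropolis(log_post, theta0, steps, step_sizes, rng, **data):
    """Random-walk Metropolis sampler over (K, a, n)."""
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta, **data)
    chain = np.empty((steps, theta.size))
    for s in range(steps):
        prop = theta + step_sizes * rng.standard_normal(theta.size)
        lp_prop = log_post(prop, **data)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[s] = theta
    return chain

# synthetic example: 36 months of cumulative gas (Bcf) and a volumetric prior on K
rng = np.random.default_rng(0)
t = np.arange(1.0, 37.0)
Q_obs = cum_logistic(t, K=2.0, a=30.0, n=1.1) + rng.normal(0, 0.02, t.size)
chain = metropolis(log_posterior, theta0=(1.5, 20.0, 1.0), steps=20000,
                   step_sizes=np.array([0.05, 1.0, 0.02]), rng=rng,
                   t=t, Q_obs=Q_obs, sigma=0.02, K_prior_mean=2.5, K_prior_sd=0.5)
p10, p50, p90 = np.percentile(chain[5000:, 0], [10, 50, 90])   # spread in ultimate recovery K
```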
24

Continuous reservoir simulation incorporating uncertainty quantification and real-time data

Holmes, Jay Cuthbert, 15 May 2009
A significant body of work has demonstrated both the promise and the difficulty of quantifying uncertainty in reservoir simulation forecasts. It is generally accepted that accurate and complete quantification of uncertainty should lead to better decision making and greater profitability. Many of the techniques presented in past work attempt to quantify uncertainty without sampling the full parameter space, saving on the number of simulation runs but inherently limiting and biasing the uncertainty quantification in the resulting forecasts. In addition, past work has generally looked at uncertainty in synthetic models and does not address the practical issues of quantifying uncertainty in an actual field. Both of these issues must be addressed in order to rigorously quantify uncertainty in practice. In this study a new approach to reservoir simulation is taken whereby the traditional one-time simulation study is replaced with a continuous process potentially spanning the life of the reservoir. In this process, reservoir models are generated and run 24 hours a day, seven days a week, allowing many more runs than previously possible and yielding a more thorough exploration of possible reservoir descriptions. In turn, more runs enable better estimates of uncertainty in the resulting forecasts. A key technology that allows this process to run continuously with little human interaction is real-time production and pressure data, which can be integrated into runs automatically. Two tests of this continuous simulation process were conducted. The first was conducted on the Production with Uncertainty Quantification (PUNQ) synthetic reservoir; comparison of our results with previous studies shows that the continuous approach gives consistent and reasonable estimates of uncertainty. The second was conducted in real time on a live field, demonstrating the continuous simulation process and showing that it is feasible and practical for real-world applications.
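A minimal sketch of the continuous workflow described above, using a toy exponential-decline "field" in place of a reservoir simulator: models are drawn repeatedly, screened against the latest "real-time" data, and the accepted ensemble defines the forecast uncertainty. The decline model, acceptance threshold, and synthetic data stream are all stand-ins, not the study's simulator or field.

```python
import numpy as np

rng = np.random.default_rng(1)

def fetch_realtime_data(day):
    """Toy stand-in for the live data feed: noisy exponential decline."""
    t = np.arange(day + 1, dtype=float)
    return t, 1000.0 * np.exp(-0.002 * t) + rng.normal(0.0, 10.0, t.size)

def sample_reservoir_model():
    """One realization of the uncertain parameters (initial rate, decline rate)."""
    return rng.normal(1000.0, 100.0), rng.normal(0.002, 0.0005)

def misfit(model, t, q_obs):
    qi, d = model
    return np.mean((q_obs - qi * np.exp(-d * t)) ** 2)

def forecast(model, horizon_days=3650):
    qi, d = model
    return qi * np.exp(-d * np.arange(horizon_days))

# continuous loop: each "day", generate more models, keep those consistent with
# the data available so far, and let the growing ensemble define uncertainty
ensemble = []
for day in range(0, 361, 30):
    t, q_obs = fetch_realtime_data(day)
    for _ in range(200):
        m = sample_reservoir_model()
        if misfit(m, t, q_obs) < 400.0:          # crude acceptance threshold
            ensemble.append(forecast(m))
p10, p50, p90 = np.percentile(np.array(ensemble), [10, 50, 90], axis=0)
```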
25

The Method of Manufactured Universes for Testing Uncertainty Quantification Methods

Stripling, Hayes Franklin, December 2010
The Method of Manufactured Universes (MMU) is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented for this research manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To further test the responses of these UQ methods, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws. Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior to a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to further develop a calibration method and to characterize the diffusion model discrepancy. Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that is an adequate representation of some real-world problem) will provide the modeler with an added understanding of the interaction between a given UQ method and his or her more complex problem of interest. The modeler can then apply this added understanding and make more informed predictive statements.
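To make the framework concrete, here is a deliberately small manufactured-universe exercise in the spirit described above: a manufactured "true" law, noisy "experiments," an imperfect exponential model with one uncertain input, and a simple grid-based Bayesian calibration standing in for the Gaussian process and Bayesian MARS emulators. The assessment step checks whether the claimed 95% intervals actually cover the manufactured truth; all functions and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# manufactured universe: the "true" law, hidden from the analyst
def truth(x):
    return np.exp(-1.3 * x) + 0.05 * np.sin(6.0 * x)

# "experiments": truth plus measurement error
x_obs = np.linspace(0.0, 1.0, 8)
sigma = 0.02
y_obs = truth(x_obs) + rng.normal(0.0, sigma, x_obs.size)

# imperfect model with one uncertain input (it misses the sine term entirely)
def model(x, a):
    return np.exp(-a * x)

# UQ method under test: Bayesian calibration of `a` on a grid
a_grid = np.linspace(0.5, 2.5, 400)
log_post = np.array([-0.5 * np.sum((y_obs - model(x_obs, a)) ** 2) / sigma**2
                     for a in a_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# predict new "experiments" and assess the claimed 95% intervals against truth
x_new = np.linspace(0.0, 1.0, 50)
a_samples = rng.choice(a_grid, size=2000, p=post)
preds = np.array([model(x_new, a) for a in a_samples])
lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)
coverage = np.mean((truth(x_new) >= lower) & (truth(x_new) <= upper))
print(f"empirical coverage of nominal 95% intervals: {coverage:.2f}")
```

Because this calibration ignores model discrepancy, the intervals under-cover the manufactured truth, which is exactly the kind of diagnosis the MMU framework is designed to expose.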
26

Integration and quantification of uncertainty of volumetric and material balance analyses using a Bayesian framework

Ogele, Chile, 01 November 2005
Estimating original hydrocarbons in place (OHIP) in a reservoir is fundamentally important to estimating reserves and potential profitability. Quantifying the uncertainties in OHIP estimates can improve reservoir development and investment decision-making for individual reservoirs and can lead to improved portfolio performance. Two traditional methods for estimating OHIP are volumetric and material balance methods. Probabilistic estimates of OHIP are commonly generated prior to significant production from a reservoir by combining volumetric analysis with Monte Carlo methods. Material balance is routinely used to analyze reservoir performance and estimate OHIP. Although material balance has uncertainties due to errors in pressure and other parameters, probabilistic estimates are seldom done. In this thesis I use a Bayesian formulation to integrate volumetric and material balance analyses and to quantify uncertainty in the combined OHIP estimates. Specifically, I apply Bayes' rule to the Havlena and Odeh material balance equation to estimate original oil in place, N, and relative gas-cap size, m, for a gas-cap drive oil reservoir. The analysis considers uncertainty and correlation in the volumetric estimates of N and m (reflected in the prior probability distribution), as well as uncertainty in the pressure data (reflected in the likelihood distribution). Approximation of the covariance of the posterior distribution allows quantification of uncertainty in the estimates of N and m resulting from the combined volumetric and material balance analyses. Several example applications are presented to illustrate the value of this integrated approach. Material balance data reduce the uncertainty in the volumetric estimate, and the volumetric data reduce the considerable non-uniqueness of the material balance solution, resulting in more accurate OHIP estimates than from the separate analyses. One advantage over reservoir simulation is that, with the smaller number of parameters in this approach, we can easily sample the entire posterior distribution, resulting in more complete quantification of uncertainty. The approach can also detect underestimation of uncertainty in either the volumetric data or the material balance data, indicated by insufficient overlap of the prior and likelihood distributions. When this occurs, the volumetric and material balance analyses should be revisited and the uncertainties of each reevaluated.
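A compact numerical sketch of the integration described above, with made-up numbers: the Havlena-Odeh equation for a gas-cap drive reservoir (neglecting water influx and rock/water compressibility) supplies the likelihood from "pressure-derived" expansion terms, the volumetric estimate supplies a correlated Gaussian prior on (N, m), and the small parameter space is swept exhaustively on a grid rather than approximated.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "observed" material-balance data; Havlena-Odeh, gas-cap drive:
#   F = N * (Eo + m * (Boi/Bgi) * Eg)
N_true, m_true = 200.0, 0.5                    # MMstb, fraction (illustrative)
Boi_over_Bgi = 750.0
Eo = np.linspace(0.01, 0.08, 10)               # oil/solution-gas expansion, rb/stb
Eg = np.linspace(5e-5, 4e-4, 10)               # gas-cap expansion term (scaled)
F_obs = N_true * (Eo + m_true * Boi_over_Bgi * Eg) + rng.normal(0.0, 0.3, 10)

# volumetric prior on (N, m): correlated bivariate Gaussian
mu = np.array([180.0, 0.6])
cov = np.array([[40.0**2, -0.3 * 40.0 * 0.15],
                [-0.3 * 40.0 * 0.15, 0.15**2]])
cov_inv = np.linalg.inv(cov)
sigma_F = 0.3                                  # pressure/measurement error in F

def log_posterior(N, m):
    resid = F_obs - N * (Eo + m * Boi_over_Bgi * Eg)
    d = np.array([N, m]) - mu
    return -0.5 * np.sum((resid / sigma_F) ** 2) - 0.5 * d @ cov_inv @ d

# few parameters -> the full posterior can be evaluated on a grid
N_grid = np.linspace(120.0, 280.0, 200)
m_grid = np.linspace(0.1, 1.1, 200)
lp = np.array([[log_posterior(N, m) for m in m_grid] for N in N_grid])
p = np.exp(lp - lp.max()); p /= p.sum()
N_mean = np.sum(p.sum(axis=1) * N_grid)
m_mean = np.sum(p.sum(axis=0) * m_grid)
print(f"posterior means: N = {N_mean:.0f} MMstb, m = {m_mean:.2f}")
```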
27

Error analysis for radiation transport

Tencer, John Thomas, 18 February 2014
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness; the application of these methods to the radiative transport equation is not substantially different from that for any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed, representing a range of application spaces, and the relative accuracy of each angular approximation is assessed over a range of optical thickness and scattering albedo. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work. The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken toward quantifying the error associated with the FSK method. The Multi-Source Full Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases. The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method treats the absorption line blackbody distribution function not as deterministic but rather as a stochastic process; the mean, covariance, and correlation structure are all fit empirically to data from a high-resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.
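The toy sketch below mirrors only the final idea above: treat an uncertain spectral quantity as a stochastic process with an empirically specified mean and covariance, propagate sampled realizations through a (here, drastically simplified gray-band) flux calculation, and take the standard deviation of the predictions as the error estimate. The "statistics" are synthetic placeholders, not values from a spectral database, and the flux model is not the FSK solver.

```python
import numpy as np

rng = np.random.default_rng(4)

g = np.linspace(0.0, 1.0, 64)                  # cumulative distribution variable

# hypothetical empirical statistics for ln k(g) (placeholder, not real spectral data)
mean_lnk = -2.0 + 4.0 * g**3
cov = 0.05 * np.exp(-np.abs(g[:, None] - g[None, :]) / 0.2)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(g.size))

def heat_flux(lnk, tau=1.0):
    """Toy gray-band flux: emissivity-like average over the reordered spectrum."""
    return np.mean(1.0 - np.exp(-np.exp(lnk) * tau))

# sample realizations of the stochastic process and propagate each one
fluxes = np.array([heat_flux(mean_lnk + L @ rng.standard_normal(g.size))
                   for _ in range(500)])
print(f"flux = {fluxes.mean():.3f}, error estimate (std) = {fluxes.std():.3f}")
```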
28

On goal-oriented error estimation and adaptivity for nonlinear systems with uncertain data and application to flow problems

Bryant, Corey Michael, 09 February 2015
The objective of this work is to develop a posteriori error estimates and adaptive strategies for the numerical solution of nonlinear systems of partial differential equations with uncertain data. Areas of application cover problems in fluid mechanics, including a Bayesian model selection study of turbulence comparing different uncertainty models. Accounting for uncertainties in model parameters may significantly increase the computational time when simulating complex problems. The premise is that using error estimates and adaptively refining the solution process can reduce the cost of such simulations while preserving their accuracy within some tolerance. New insights for goal-oriented error estimation for deterministic nonlinear problems are first presented. Linearization of the adjoint problems and quantities of interest introduces higher-order terms in the error representation that are generally neglected; their effects on goal-oriented adaptive strategies are investigated in detail here. Contributions on that subject include extensions of well-known theoretical results for linear problems to the nonlinear setting, computational studies in support of these results, and an extensive comparative study of goal-oriented adaptive schemes that do, and do not, include the higher-order terms. Approaches for goal-oriented error estimation for PDEs with uncertain coefficients have already been presented, but they lack the capability of distinguishing between the different sources of error. A novel approach is proposed here that decomposes the error estimate into contributions from the physical discretization and the uncertainty approximation. Theoretical bounds are proven and numerical examples are presented to verify that the approach identifies the predominant source of the error in a surrogate model. Adaptive strategies that use this error decomposition and refine the approximation space accordingly are designed and tested. All methodologies are demonstrated on benchmark flow problems: the Stokes lid-driven cavity, the 1D Burgers' equation, and 2D incompressible flows at low Reynolds numbers. The procedure is also applied to an uncertainty quantification study of RANS turbulence models in channel flows. Adaptive surrogate models are constructed to make parameter uncertainty propagation more efficient. Using surrogate models and adaptivity in a Bayesian model selection procedure, it is shown that significant computational savings can be gained over the full RANS model while maintaining similar accuracy in the predictions.
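As a much-simplified illustration of the error-splitting idea (not the adjoint-based estimators developed in the thesis), the sketch below takes a one-parameter diffusion model problem with an uncertain coefficient and separates the error in the expected quantity of interest into a spatial-discretization contribution and an uncertainty-approximation contribution by refining each approximation independently.

```python
import numpy as np

def solve_Q(a, n):
    """FD solve of -(a u')' = 1 on (0,1), u(0)=u(1)=0; QoI = integral of u."""
    h = 1.0 / (n + 1)
    A = (a / h**2) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    u = np.linalg.solve(A, np.ones(n))
    return h * np.sum(u)

def mean_Q(n_space, n_nodes):
    """E[Q] over a ~ U(1, 2) via Gauss-Legendre quadrature (the 'surrogate')."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    a_vals = 1.5 + 0.5 * nodes                 # map [-1, 1] -> [1, 2]
    Q_vals = np.array([solve_Q(a, n_space) for a in a_vals])
    return 0.5 * np.sum(weights * Q_vals)

coarse = mean_Q(n_space=10, n_nodes=2)
eta_h  = mean_Q(n_space=80, n_nodes=2) - coarse    # discretization contribution
eta_xi = mean_Q(n_space=10, n_nodes=8) - coarse    # uncertainty-approximation contribution
exact  = np.log(2.0) / 12.0                        # E[1/(12 a)] for a ~ U(1, 2)
print(f"total error = {exact - coarse:+.2e}, eta_h = {eta_h:+.2e}, eta_xi = {eta_xi:+.2e}")
```

Refinement-based splitting is used here purely for brevity; the point is only that the two contributions can be reported separately so the larger one can be targeted for adaptive refinement.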
29

Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations

Stripling, Hayes Franklin, 16 December 2013
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for the solution of two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but are also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
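A small worked example of the forward/adjoint pairing described above, using a toy two-nuclide decay chain rather than an actual reactor depletion model: one forward solve and one adjoint solve yield the sensitivities of the end-of-time quantity of interest with respect to both rate parameters, at a cost independent of the number of parameters. The chain, rates, and quantity of interest are assumptions for illustration; a finite-difference check is included.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# toy two-nuclide chain (illustrative, not a reactor model): dN/dt = A(p) N
# with A = [[-p1, 0], [p1, -p2]]; QoI is Q = c . N(T) with c = [0, 1]
p, N0, c, T = np.array([0.3, 0.1]), np.array([1.0, 0.0]), np.array([0.0, 1.0]), 10.0

def A(p):
    return np.array([[-p[0], 0.0], [p[0], -p[1]]])

def dA_dp(i):
    dA = np.zeros((2, 2))
    if i == 0:
        dA[0, 0], dA[1, 0] = -1.0, 1.0
    else:
        dA[1, 1] = -1.0
    return dA

# one forward solve and one adjoint solve (backward in time, lam(T) = c)
fwd = solve_ivp(lambda t, N: A(p) @ N, (0.0, T), N0, dense_output=True, rtol=1e-10, atol=1e-12)
adj = solve_ivp(lambda t, lam: -A(p).T @ lam, (T, 0.0), c, dense_output=True, rtol=1e-10, atol=1e-12)

# dQ/dp_i = int_0^T lam^T (dA/dp_i) N dt, for every parameter at once
ts = np.linspace(0.0, T, 2001)
Ns, lams = fwd.sol(ts), adj.sol(ts)
for i in range(2):
    integrand = np.einsum('jt,jk,kt->t', lams, dA_dp(i), Ns)
    dQ_adj = trapezoid(integrand, ts)
    # finite-difference check of the adjoint sensitivity
    p_eps = p.copy(); p_eps[i] += 1e-6
    N_eps = solve_ivp(lambda t, N: A(p_eps) @ N, (0.0, T), N0, rtol=1e-10, atol=1e-12).y[:, -1]
    dQ_fd = (c @ N_eps - c @ fwd.y[:, -1]) / 1e-6
    print(f"dQ/dp{i+1}: adjoint = {dQ_adj:+.6f}, finite difference = {dQ_fd:+.6f}")
```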
30

Prevention of cervical cancer through the characterization of E6 and E7 mRNA transcriptional activity as biological markers of human papillomavirus infections

Tchir, Jayme Dianna Radford Unknown Date
No description available.
