11

Thermomechanical Characterization and Modeling of Shape Memory Polymers

Volk, Brent L. 16 January 2010 (has links)
This work focuses on the thermomechanical characterization and constitutive model calibration of shape memory polymers (SMPs). These polymers can recover seemingly permanent large deformations under the appropriate thermomechanical load path. This work contributes to both existing experimental and modeling efforts. First, an experimental investigation subjects SMPs to a thermomechanical load path with varying applied deformations and temperature rates: specimens are deformed to tensile extensions of 10% to 100% at temperature rates from 1 °C/min to 5 °C/min, and the complete shape recovery profile is captured. The results show that the SMP in question recovers approximately 95% of the applied deformation, independent of the temperature rate during the test. The experimental data are then used to calibrate, in one dimension, two constitutive models developed to describe and predict the material response of SMPs: a model formulated in terms of general deformation gradients, capable of handling large deformations, and a linearized version of that model for small deformations. The material properties required for calibration are derived from portions of the experimental results, and the models are then used to predict the shape memory effect for an SMP undergoing various levels of deformation. The model predictions match the experimental data well.
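The headline result above is a simple ratio; as a minimal sketch (the function name and strain convention are illustrative, not from the thesis), percent shape recovery can be computed from the applied and residual extensions:

```python
def percent_recovery(applied_extension, residual_extension):
    """Percent of the applied deformation recovered on reheating.

    Extensions are engineering strains, e.g. 0.10 for a 10% tensile extension.
    """
    return 100.0 * (applied_extension - residual_extension) / applied_extension

# a specimen stretched to 100% extension that retains 5% residual strain
print(percent_recovery(1.00, 0.05))   # -> 95.0
```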
12

Extending and formalizing the energy signature method for calibrating simulations and illustrating with application for three California climates

Bensouda, Nabil 15 November 2004 (has links)
This thesis extends and formalizes the energy signature method developed by Wei et al. (1998) for the rapid calibration of cooling and heating energy consumption simulations for commercial buildings. The method is based on "calibration signatures", which characterize the difference between measured and simulated performance. By building a library of the signature shapes produced by certain known errors, referred to as "characteristic signatures", the analyst is given clues for identifying which simulation input errors may be causing the discrepancies. In this thesis, sets of characteristic signatures are produced for the climates typified by Pasadena, Sacramento, and Oakland, California, for each of the four major system types: single-duct variable-air-volume, single-duct constant-volume, dual-duct variable-air-volume, and dual-duct constant-volume. A detailed step-by-step description of the proposed methodology is given, and two examples and a real-world case study illustrate the use of the signature method.
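As a hedged sketch of the core idea (the exact normalization used in the thesis may differ, and the data here are invented), a calibration signature can be computed as the point-by-point difference between measured and simulated consumption, normalized by the maximum measured value:

```python
def calibration_signature(measured, simulated, max_measured):
    """Calibration signature: the normalized difference between measured and
    simulated energy use at each data point (typically plotted against
    outdoor temperature). A non-zero signature marks a discrepancy whose
    shape is matched against a library of characteristic signatures."""
    return [100.0 * (m - s) / max_measured for m, s in zip(measured, simulated)]

# daily cooling energy (arbitrary units) at four increasing outdoor temperatures
measured = [10.0, 20.0, 40.0, 80.0]
simulated = [10.0, 25.0, 50.0, 80.0]
print(calibration_signature(measured, simulated, max(measured)))
# -> [0.0, -6.25, -12.5, 0.0]
```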
13

Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification

Olalotiti-Lawal, Feyisayo 16 December 2013 (has links)
Rapid economic evaluations of investment alternatives in the oil and gas industry typically depend on fast and credible evaluations of reservoir models to make future forecasts. It is often important to also quantify the inherent risks and uncertainties in these evaluations. Ideally this requires many full-scale numerical simulations, which is time-consuming and often impractical, if not impossible, with conventional finite-difference simulators in real-life situations. This research aims to improve the efficiency of these tasks by exploring applications of Fast Marching Methods (FMM) to both conventional and unconventional reservoir characterization problems. We first applied the FMM to rapidly rank multiple equiprobable geologic models, demonstrating the suitability of drainage volume, efficiently calculated using FMM, as a surrogate parameter for field-wide cumulative oil production (FOPT). The probability distribution function (PDF) of the surrogate parameter was point-discretized to obtain three representative models for full simulation, and the simulation results were used to construct the PDF of the reservoir performance parameter. We also investigated a higher-order-moment-preserving approach, which yielded better uncertainty quantification than traditional model selection methods. Next, we applied the FMM to a model calibration problem for a hydraulically fractured tight oil reservoir, using the FMM geometric pressure approximation as a proxy for rapidly evaluating model proposals in a two-stage Markov chain Monte Carlo (MCMC) algorithm. We demonstrated that the FMM-based proxy is suitable for evaluating model proposals and obtained a significant improvement in efficiency over the conventional single-stage MCMC algorithm.
Finally, we investigated enhancing the computational efficiency of calculating the pressure field for both conventional and unconventional reservoirs using FMM. Good approximations of the steady-state pressure distribution were obtained for homogeneous conventional waterflood systems. For unconventional systems, we also recorded a slight improvement in computational efficiency when using FMM pressure approximations as the initial guess in pressure solvers.
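The abstract leans on the FMM throughout; as an illustrative sketch (the grid, speed field, and function names are assumptions, and the real pressure-front problem uses a speed derived from permeability and porosity rather than the uniform field shown), a first-order fast marching solver for the Eikonal equation on a 2D grid might look like:

```python
import heapq

def fast_marching(speed, src):
    """First-order fast marching solution of the Eikonal equation
    |grad T| = 1/speed on a uniform 2D grid with unit spacing.

    For the pressure-front problem, 'speed' would be derived from
    permeability and porosity and T would play the role of a diffusive
    time of flight; here it is just a generic speed map."""
    ny, nx = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * nx for _ in range(ny)]
    T[src[0]][src[1]] = 0.0
    frozen = [[False] * nx for _ in range(ny)]
    heap = [(0.0, src[0], src[1])]
    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i][j]:
            continue
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not frozen[a][b]:
                # smallest neighbour value along each axis (upwind values)
                tx = min((T[a][c] for c in (b - 1, b + 1) if 0 <= c < nx), default=INF)
                ty = min((T[r][b] for r in (a - 1, a + 1) if 0 <= r < ny), default=INF)
                f = 1.0 / speed[a][b]
                lo, hi = sorted((tx, ty))
                if hi >= lo + f:      # only one axis is upwind: one-sided update
                    t_new = lo + f
                else:                 # both axes upwind: quadratic update
                    t_new = 0.5 * (lo + hi + (2.0 * f * f - (hi - lo) ** 2) ** 0.5)
                if t_new < T[a][b]:
                    T[a][b] = t_new
                    heapq.heappush(heap, (t_new, a, b))
    return T

# homogeneous 5x5 medium, source at a corner: along an axis the arrival
# time equals the distance
T = fast_marching([[1.0] * 5 for _ in range(5)], (0, 0))
print(T[0][4])   # -> 4.0
```

Each grid cell is finalized exactly once in increasing order of arrival time, which is what makes the method fast enough to serve as a simulation proxy.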
14

Application of Fast Marching Method in Shale Gas Reservoir Model Calibration

Yang, Changdong 16 December 2013 (has links)
Unconventional reservoirs are typically characterized by very low permeabilities, so the pressure depletion from a producing well may not propagate far from the well during the life of a development. Currently, two approaches are widely used for unconventional reservoir analysis: analytical techniques, including decline curve analysis and pressure/rate transient analysis, and numerical simulation. Numerical simulation can rigorously account for complex well geometry and reservoir heterogeneity but is time-consuming. In this thesis, we propose and apply an efficient technique, the fast marching method (FMM), to analyze shale gas reservoirs. Our approach stands midway between analytical techniques and numerical simulation: in contrast to analytical techniques, it accounts for complex well geometry and reservoir heterogeneity, and it is less time-consuming than numerical simulation. The fast marching method efficiently provides the solution of the pressure front propagation equation, which can be expressed as an Eikonal equation. Our approach is based on a generalization of the concept of depth of investigation. Its application to unconventional reservoirs can provide the understanding necessary to describe and optimize the interaction between complex multi-stage fractured wells, reservoir heterogeneity, drainage volumes, pressure depletion, and well rates. The proposed method allows rapid approximation of reservoir simulation results without resorting to detailed flow simulation, and also provides the time evolution of the well drainage volume for visualization. Calibration of reservoir models to match historical dynamic data is necessary to increase confidence in simulation models and to minimize risks in decision making.
In this thesis, we propose an integrated workflow: applying a genetic algorithm (GA) to calibrate the model parameters, and utilizing the fast-marching-based approach for forward simulation. This workflow takes advantage of both the derivative-free character of the GA and the speed of the FMM. In addition, we provide a novel approach to incorporate micro-seismic events (if available) into our history matching workflow so as to further constrain and better calibrate our models.
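The GA-plus-fast-proxy workflow can be sketched as follows. This is a toy illustration, not the thesis code: the FMM forward simulator is stood in for by an arbitrary callable, here an exponential decline curve whose two parameters the GA recovers:

```python
import math
import random

def genetic_calibration(forward_model, observed, bounds, pop_size=30, gens=40, seed=1):
    """Derivative-free calibration: evolve parameter vectors until the fast
    forward model matches observed data. In the thesis workflow the forward
    model would be the FMM-based simulator; here it is any callable."""
    rng = random.Random(seed)

    def misfit(p):
        return sum((o - s) ** 2 for o, s in zip(observed, forward_model(p)))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 2]                      # selection: keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover: average parents
            k = rng.randrange(len(child))                 # mutation: jitter one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=misfit)

# toy forward model: decline curve q(t) = q0 * exp(-d*t); the GA recovers (q0, d)
times = list(range(10))
model = lambda p: [p[0] * math.exp(-p[1] * t) for t in times]
truth = model([5.0, 0.3])
best = genetic_calibration(model, truth, bounds=[(1.0, 10.0), (0.01, 1.0)])
print([round(x, 2) for x in best])
```

Because the GA only ever asks the forward model for outputs, swapping a full simulator for a fast proxy changes nothing in the calibration loop, which is exactly the appeal of the workflow.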
15

The Black-Scholes and Heston Models for Option Pricing

Ye, Ziqun 14 May 2013 (has links)
Stochastic volatility models for option pricing have received much study following the discovery of the non-flat implied volatility surface after the stock market crash of 1987. The most widely used stochastic volatility model was introduced by Heston (1993) because of its ability to generate volatility that satisfies the market observations of being non-negative and mean-reverting, while also providing a closed-form solution for European options. However, little research has been done on using the Heston model to price early-exercise options, presumably due largely to the absence of a closed-form solution and the increased computational requirements that complicate the required calibration exercise. This thesis examines the performance of the Heston model versus the Black-Scholes model for American-style equity options on Microsoft and index options on the S&P 100 index. We employ a finite difference method combined with a projected successive over-relaxation (PSOR) method for pricing an American put option under the Black-Scholes model, while an alternating direction implicit (ADI) method is utilized to decompose the multi-dimensional partial differential equation into several one-dimensional steps under the Heston model. For the calibration of the Heston model, we apply a two-step procedure: first, an indirect inference method is applied to historical stock prices to estimate the diffusion parameters under the physical probability measure; then a least squares method is used to estimate the instantaneous volatility and the market risk premium, which are used to switch from the physical measure to the risk-neutral measure. We find that the option price is positively related to the mean-reversion speed and the long-term variance, insensitive to the market price of risk, and negatively related to the risk-free rate and the volatility of volatility.
By comparing the European put option and the American put option under the Heston model, we observe that their implied volatilities generally follow similar patterns. Still, some interesting observations can be made from the comparison of the two put options. For the out-of-the-money category, the American and European options have rather comparable implied volatilities, with the American options' implied volatility slightly larger than the European options'. For the in-the-money category, by contrast, the implied volatility of the European options is notably higher than that of the American options. We also assess the performance of the Heston model by comparing its results with those of the Black-Scholes model, and observe that overall the Heston model performs better. In particular, the Heston model tends to underprice in-the-money options and overprice out-of-the-money options, whereas the Black-Scholes model is inclined to underprice both.
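The Heston ADI scheme is too long to sketch here, but the Black-Scholes benchmark the thesis compares against has a standard closed form for the European put (the American put additionally requires the PSOR iteration the abstract mentions). A minimal implementation:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, T, r, sigma):
    """Black-Scholes price of a European put on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# at-the-money one-year put: spot 100, strike 100, 5% rate, 20% volatility
print(round(bs_put(100.0, 100.0, 1.0, 0.05, 0.2), 2))   # -> 5.57
```

The American price always lies at or above this European value, which is why the thesis needs the free-boundary (PSOR) treatment rather than the closed form alone.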
16

Fast History Matching of Time-Lapse Seismic and Production-Data for High Resolution Models

Rey Amaya, Alvaro 2011 August 1900 (has links)
Seismic data are an established source of information for the construction of reservoir simulation models, most commonly for determining the modeled geologic structure and for populating static petrophysical properties (e.g., porosity, permeability). More recently, repeated seismic surveys acquired over time scales of years (i.e., 4D seismic) have shown promising results for qualitatively determining the changes in fluid phase distributions and pressure that are needed to identify areas of bypassed oil, swept volumes, and pressure maintenance mechanisms. Quantitatively, and currently the state of the art in reservoir model characterization, 4D seismic data have proven distinctly useful for calibrating geologic spatial variability, which ultimately contributes to improved reservoir development and management strategies. Among the limited variety of techniques for integrating dynamic seismic data into reservoir models, streamline-based techniques have been shown to be among the most efficient, owing to their analytical sensitivity formulations. Although streamline techniques have previously been used to integrate time-lapse seismic attributes, the applications were limited to simplified modeling scenarios with two-phase fluid flow and streamline geometry held invariant throughout the production schedule. This research builds upon and advances existing approaches to streamline-based seismic data integration to include both production and seismic data under varying field conditions. The proposed approach integrates data from reservoirs under active reservoir management, and the corresponding simulation models can be constrained using highly detailed, realistic schedules. Fundamentally, a new derivation of seismic sensitivities is proposed that can represent a complex reservoir evolution between consecutive seismic surveys.
The approach is further extended to handle compositional reservoir simulation with dissolution effects and gravity-convection-driven flows, which in particular are typical of CO2 transport behavior following injection into deep saline aquifers. As a final component of this research, the benefits of dynamic data integration for determining the volumes swept by injection and drained by production are investigated. Several synthetic and field reservoir modeling scenarios are used for an extensive demonstration of the efficacy and practical feasibility of the proposed developments.
17

Kalibrace mikrosimulačního modelu dopravy / Microscopic Traffic Simulation Model Calibration

Pokorný, Pavel January 2013 (has links)
This thesis's main focus is microscopic traffic simulation. Part of the work is the design and implementation of a microsimulation model based on a cellular automaton. The implemented model supports calibration with a genetic algorithm. The results of the calibration and of the simulations are included.
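The abstract does not specify which cellular automaton was implemented; as an illustrative example of the model class, one update step of the classic Nagel-Schreckenberg traffic CA (a common choice, assumed here rather than taken from the thesis) can be written as:

```python
import random

def nasch_step(road, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton on a
    circular road. road[i] is the speed of the vehicle in cell i, or None."""
    n = len(road)
    cars = [i for i, v in enumerate(road) if v is not None]
    new_road = [None] * n
    for idx, i in enumerate(cars):
        v = road[i]
        gap = (cars[(idx + 1) % len(cars)] - i - 1) % n   # empty cells ahead
        v = min(v + 1, vmax)              # 1. accelerate
        v = min(v, gap)                   # 2. brake to avoid a collision
        if v > 0 and rng.random() < p_slow:
            v -= 1                        # 3. random slowdown (driver imperfection)
        new_road[(i + v) % n] = v         # 4. advance
    return new_road

random.seed(0)
road = [0, None, None, 0, None, None, None, None]   # two stopped cars, 8 cells
for _ in range(5):
    road = nasch_step(road)
print(sum(v is not None for v in road))   # -> 2 (vehicles are conserved)
```

In a calibration setting, parameters such as `vmax` and `p_slow` are exactly the kind of quantities a genetic algorithm would tune against measured traffic data.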
18

Hydraulické posouzení stokové sítě obce Lipůvka / Hydraulic calculation of sewage system in Lipůvka

Balas, Jan January 2017 (has links)
This diploma thesis focuses on the hydraulic assessment of Lipůvka's sewer system, carried out with a dynamic rainfall-runoff simulation model. A chapter on mathematical models is therefore included, and commonly used programs for such analyses are listed as well. The next part of the thesis describes the monitoring campaign carried out in Lipůvka, whose results are used to calibrate the simulation model. The hydraulic assessment of the sewer system was then performed with this calibrated model.
19

Modelling and future performance assessment of Duvbacken wastewater treatment plant

Milathianakis, Emmanouil January 2017 (has links)
Duvbacken wastewater treatment plant in Gävle, Sweden, currently designed for 100,000 person equivalents (P.E.), is applying for a new permit for 120,000 P.E. due to the expected increase of the population in the community. Moreover, the recipient of the plant's effluent water was characterized as eutrophic in 2009. The plant's emissions are regulated with respect to seven-day biological oxygen demand (BOD7) and total phosphorus (Ptot). Yet there was no available computer model to simulate the plant's operations and investigate the emissions under the requested permit, and it was uncertain whether the available data would be sufficient for developing a new model. A model of the plant was eventually developed in BioWin® software under a number of assumptions and simplifications. A sensitivity analysis was conducted, but used in the reverse of the usual order: it was applied to the uncalibrated model in order to identify its sensitive parameters. The substrate half-saturation constant for ordinary heterotrophic organisms (KS) and the phosphorus/acetate release ratio for polyphosphate accumulating organisms (YP/acetic) were then used for model calibration. Subsequently, model validation confirmed the correctness of the calibrated model and the ability to develop a basic model despite data deficiency. The new model was used to investigate a loading scenario corresponding to 120,000 P.E., for which plant emissions meeting the current permits were predicted. Proposed measures included the installation of disc filters to further reduce effluent phosphorus, and BOD precipitation in cases of high influent concentrations. Should a nitrogen (N) permit be applied, the installation of membrane bioreactors together with full-scale chemical P removal was proposed as an alternative requiring a smaller footprint expansion of the plant.
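The sensitivity-analysis-first strategy described above can be sketched generically (the toy model and parameter names below are illustrative stand-ins, not BioWin internals): perturb each uncalibrated parameter one at a time and rank the parameters by the relative change they induce in the output of interest:

```python
def sensitivity_ranking(model, base_params, delta=0.1):
    """One-at-a-time sensitivity screening: perturb each parameter by
    +/- delta (relative) around its default and record the induced relative
    change in the model output. The most sensitive parameters are the
    natural candidates for subsequent calibration."""
    base = model(base_params)
    scores = {}
    for name, value in base_params.items():
        hi = dict(base_params, **{name: value * (1 + delta)})
        lo = dict(base_params, **{name: value * (1 - delta)})
        scores[name] = abs(model(hi) - model(lo)) / (2 * delta * abs(base))
    return sorted(scores.items(), key=lambda kv: -kv[1])

# toy effluent model: the output depends strongly on K_S, weakly on Y_P
toy = lambda p: 2.0 / (1.0 + p["K_S"]) + 0.05 * p["Y_P"]
ranking = sensitivity_ranking(toy, {"K_S": 20.0, "Y_P": 0.5})
print([name for name, _ in ranking])   # -> ['K_S', 'Y_P']
```

Restricting calibration to the top-ranked parameters is what makes the approach viable when data are scarce, as was the case here.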
20

  • Bedömning av prediktiv förmåga för Finita Elementberäkningar med optisk töjningsmätning (DIC) / Predictive Capability Assessment of Finite Element Model using Digital Image Correlation (DIC)

Zetterqvist, Albin, Hjelm, Linus January 2023 (has links)
The goal of this thesis is to improve the predictive capability of finite element (FE) analysis by gathering data from experimental tests and implementing the observed characteristics in the material model used. FE analysis is a commonly used method for predicting the mechanical behavior of materials and components under applied loads. It is therefore an important part of product development, since it offers an opportunity to lower costs and save resources by reducing the number of experimental tests. The method of this thesis was first to simulate tensile tests in Abaqus and analyze the results. Once all simulations were done, we replicated them with experimental tests, using Digital Image Correlation (DIC) to gather data. Since the goal is to see how the predictive capability of the FE simulation can be improved, the results are compared and discussed to determine what in the FE simulation matches the DIC results and what does not; this helps identify what in the material model needs to change to better match the testing. DIC is a non-contact method for measuring deformations and strain locally over an area, giving a more detailed view of the mechanical behavior of the material. The idea of using DIC in this thesis is to sample enough valuable data and apply it to the original material model of the FE simulations to increase the predictive capability. After the experimental results were analyzed, it was clear that there were both resemblances and differences: the Young's modulus in the FE calculations was higher than in the experimental tests, while the yield strength, the maximum load at fracture, and the elongation were all lower in the FE calculations than in the experimental tests.
The FE calculations were based on the assumption that the material was homogeneous, which was not the case in the experimental tests. Because the strain varied across the specimens, the material model could be improved by adding a statistical variation to the elements, giving them varying mechanical properties so that the simulation reproduces more correctly how the strain varies over the specimen.
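The proposed improvement, giving elements statistically varying properties, can be sketched as follows (the modulus value, variation level, and truncation rule are assumptions for illustration, not values from the thesis):

```python
import random

def assign_element_moduli(n_elements, mean_E=210e3, cov=0.05, seed=7):
    """Draw a Young's modulus (MPa) for every element from a normal
    distribution with the given mean and coefficient of variation, truncated
    at +/- 3 standard deviations to avoid non-physical values. Applying the
    resulting per-element moduli makes the simulated strain field vary over
    the specimen instead of being perfectly homogeneous."""
    rng = random.Random(seed)
    sd = cov * mean_E
    return [min(mean_E + 3 * sd, max(mean_E - 3 * sd, rng.gauss(mean_E, sd)))
            for _ in range(n_elements)]

moduli = assign_element_moduli(1000)
mean = sum(moduli) / len(moduli)
print(abs(mean - 210e3) / 210e3 < 0.01)   # the sample mean stays near the target
```

In a real workflow such a list would be written into per-element material assignments (e.g., via an Abaqus input-deck script) before rerunning the simulation.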
