381

Modeling the Transient Effects during the Hot-Pressing of Wood-Based Composites

Zombori, Balazs Gergely 27 April 2001 (has links)
A numerical model based on fundamental engineering principles was developed and validated to establish a relationship between process parameters and the final properties of wood-based composite boards. The model simulates the mat formation, then compresses the reconstituted mat to its final thickness in a virtual press. The number of interacting variables during the hot-compression process is prohibitively large to assess a wide variety of data by experimental means. The main advantage of the model-based approach is therefore that the effect of the hot-compression parameters on the final properties of wood-based composite boards can be monitored without extensive experimentation.

The mat formation part of the model is based on the Monte Carlo simulation technique to reproduce the spatial structure of the mat. The dimensions and the density of each flake are treated as random variables that follow certain probability density distributions; the parameters of these distributions are derived from data collected on industrial flakes by an image analysis technique. The model can simulate the structure of a three-layer oriented strandboard (OSB) mat as well as the structure of random fiber networks. A grid is superimposed on the simulated mat, and the number of flakes, the thickness, and the density of the mat at each grid point are computed. Additionally, the model predicts the change in several void volume fractions within the mat and the contact area between the flakes during consolidation. The void volume fractions are directly related to the physical properties of the mat, such as thermal conductivity, diffusivity, and permeability, and the contact area is an indicator of the effectively bonded area within the mat.

The heat and mass transfer part of the model predicts the change of air content, moisture content, and temperature at designated mesh points in the cross section of the mat during the hot-compression. The water content is subdivided into vapor and bound water components; the free water component is not considered in the model because of the low (typically 6-7 %) initial moisture content of the flakes. The gas phase (air and vapor) moves by bulk flow and diffusion, while the bound water moves only by diffusion across the mat. Heat flow occurs by conduction and convection. The spatial derivatives of the resulting coupled partial differential equations are discretized by finite differences, and the resulting ordinary differential equations in time are solved by a differential-algebraic system solver (DDASSL). This part of the hot-compression model can predict the internal environment within the mat under different initial and boundary conditions.

In the next phase of the research, the viscoelastic (time-, temperature-, and moisture-dependent) response of the flakes was modeled using the time-temperature-moisture superposition principle of polymers. A master curve was created from data available in the literature, which describes the changing relaxation modulus of the flakes as a function of moisture and temperature at different locations in the mat. The flake mat was then compressed in a virtual press. The stress-strain response is highly nonlinear due to the cellular structure of the mat, so Hooke's Law was modified with a nonlinear strain function to account for the behavior of the flake mat in transverse compression. This part of the model gives insight into the formation of the vertical density profile through the thickness of the mat.
Laboratory boards were produced to validate the model. A split-plot experimental design, with three initial mat moisture contents (5, 8.5, 12 %), three final densities (609, 641, 673 kg/m³, i.e. 38, 40, 42 lb/ft³), two press platen temperatures (150, 200 °C), and three press closing times (40, 60, 80 s), was applied to investigate the effect of production parameters on the internal mat conditions and the formation of the vertical density profile. The temperature and gas pressure at six locations in the mat, and the resultant density profiles of the laboratory boards, were measured. Adequate agreement was found between the model-predicted and the experimentally measured temperature, pressure, and vertical density profiles. The complete model uses pressing parameters (press platen temperature, press schedule) and mat properties (flake dimensions and orientation, density distribution, initial moisture content and temperature) to predict the resulting internal conditions and vertical density profile formation within the compressed board. The density profile is related to all the relevant mechanical properties (bending strength, modulus of elasticity, internal bond strength) of the final board. The model can assist in the optimization of the parameters for hot-pressing wood-based composites and improve the performance of the final panel. / Ph. D.
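As an illustration of the mat-formation stage described in this abstract, the following minimal Python sketch (not the author's code) draws flake dimensions and densities from assumed probability distributions, deposits the flakes at random positions, and accumulates local thickness and density on a superimposed grid. All flake statistics are hypothetical placeholders for the image-analysis-derived distributions mentioned above, and flake orientation is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flake statistics (the thesis derives these from image analysis)
N_FLAKES = 2000
MAT_SIZE = 0.5                 # m, square mat
GRID = 50                      # grid points per side
cell = MAT_SIZE / GRID

length = rng.normal(0.08, 0.01, N_FLAKES)          # m
width = rng.normal(0.02, 0.005, N_FLAKES)          # m
thickness = rng.normal(0.7e-3, 0.1e-3, N_FLAKES)   # m
density = rng.normal(450.0, 50.0, N_FLAKES)        # kg/m^3

# Random deposition: accumulate flake thickness and mass per unit area
mat_thick = np.zeros((GRID, GRID))
mat_mass = np.zeros((GRID, GRID))

for L, W, T, D in zip(length, width, thickness, density):
    x = rng.uniform(0, MAT_SIZE)                   # flake centre (axis-aligned)
    y = rng.uniform(0, MAT_SIZE)
    i0 = int(max(0.0, (x - L / 2) // cell))
    i1 = int(min(GRID - 1, (x + L / 2) // cell))
    j0 = int(max(0.0, (y - W / 2) // cell))
    j1 = int(min(GRID - 1, (y + W / 2) // cell))
    mat_thick[i0:i1 + 1, j0:j1 + 1] += T
    mat_mass[i0:i1 + 1, j0:j1 + 1] += T * D        # kg/m^2

# Local mat density wherever at least one flake is present
with np.errstate(invalid="ignore", divide="ignore"):
    mat_density = np.where(mat_thick > 0, mat_mass / mat_thick, 0.0)

print(f"mean mat thickness: {mat_thick.mean() * 1e3:.1f} mm")
print(f"mean local density: {mat_density[mat_thick > 0].mean():.0f} kg/m^3")
```

Repeating such a deposition many times yields the spatial statistics (flake counts, voids, contact) that the abstract links to thermal conductivity, diffusivity, and permeability.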
382

Uncertainty Quantification and Accuracy Improvement of the Double-Sensor Conductivity Probe for Two-Phase Flow Measurement

Wang, Dewei 29 October 2019 (has links)
The double-sensor conductivity probe is one of the most commonly used techniques for obtaining local time-averaged parameters in two-phase flows. The uncertainty of this measurement technique has not been well understood in the past, as it involves many different steps and influential factors in a typical measurement. This dissertation aims to address this gap by performing a systematic and comprehensive study of the measurement uncertainty of the probe. Three types of uncertainties are analyzed: that of measurands, of the model input parameters, and of the mathematical models. A Monte Carlo uncertainty evaluation framework that closely simulates the actual measuring process is developed to link the various uncertainty sources to the time-averaged two-phase flow quantities output by the probe. Based on this framework, an iteration method is developed to infer the true values of the quantities being measured, and a better understanding of the uncertainty of the double-sensor conductivity probe is obtained. Multiple advanced techniques, such as high-speed optical imaging and fast X-ray densitometry, have recently become mature and easily accessible. To further improve the accuracy of local two-phase flow measurement, a method is developed to integrate these techniques with the double-sensor conductivity probe by considering the measuring principles and unique advantages of each technique. It has been demonstrated that, after processing and synergizing the data from the different techniques with this integration method, the final results show improved accuracy for void fraction, gas velocity, and superficial gas velocity compared to the original probe measurements. High-resolution two-phase flow data is essential for the further development of various two-phase flow models and the validation of two-phase CFD codes. Therefore, a comprehensive high-accuracy database of two-phase flows is acquired. The gas-phase information is obtained by the integration method developed in this dissertation, and the recently developed Particle Image Velocimetry and Planar Laser Induced Fluorescence (PIV-PLIF) technique is utilized to measure liquid-phase velocity and turbulence characteristics. Flow characteristics of bubbly flow, slug flow, and churn-turbulent flow are investigated. The 1-D drift-flux model is re-evaluated against the newly obtained dataset, and the distribution parameter model is optimized based on a new void-profile classification method proposed in this study. The optimized drift-flux model shows significant improvements in predicting both gas velocity and void fraction. / Doctor of Philosophy / The double-sensor conductivity probe is a widely used technique for measuring local time-averaged parameters in two-phase flows. Although a number of studies have been carried out in the past, a good understanding of the uncertainty of this technique is still lacking. This dissertation aims to address this gap by performing a systematic and comprehensive study of the measurement uncertainty of the probe. Three types of uncertainties are analyzed: that of measurands, of the model input parameters, and of the mathematical models. A better understanding of the uncertainty of the double-sensor conductivity probe has been obtained. Considering the unique measuring principles and advantages of multiple advanced techniques, a method is developed to integrate these techniques with the double-sensor conductivity probe to further improve the accuracy of local two-phase flow measurement.
It has been demonstrated that the integration method significantly improves the accuracy of probe measurements. Recognizing the need for high-resolution two-phase flow data in the further development of various two-phase flow models and the validation of two-phase CFD codes, a comprehensive database of two-phase flows is acquired. The gas-phase and liquid-phase information are acquired by the new integration method and the recently developed Particle Image Velocimetry and Planar Laser Induced Fluorescence (PIV-PLIF) technique, respectively. The classical 1-D drift-flux model is re-evaluated against the newly obtained dataset. The distribution parameter model has been optimized, resulting in significant improvements in predicting both gas velocity and void fraction.
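The Monte Carlo uncertainty evaluation idea described above can be illustrated with a small sketch. The following Python example is an assumption-laden toy, not the dissertation's framework: it propagates a hypothetical 5 % timing uncertainty through the computation of a time-averaged void fraction from bubble residence times, yielding a distribution for the measured quantity rather than a single value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" probe signal: bubble residence times (all hypothetical)
T_TOTAL = 60.0                                   # s, measurement window
n_bubbles = 500
residence = rng.gamma(2.0, 0.01, n_bubbles)      # s, time each bubble covers the tip
true_alpha = residence.sum() / T_TOTAL           # time-averaged void fraction

# Monte Carlo propagation: perturb each measured residence time with an
# assumed 5 % relative timing uncertainty, then re-evaluate the output
N_MC = 10_000
alpha_samples = np.empty(N_MC)
for k in range(N_MC):
    perturbed = residence * (1 + rng.normal(0.0, 0.05, n_bubbles))
    alpha_samples[k] = perturbed.sum() / T_TOTAL

print(f"true void fraction: {true_alpha:.4f}")
print(f"MC mean +/- std:    {alpha_samples.mean():.4f} +/- {alpha_samples.std():.4f}")
print(f"95% interval:       [{np.percentile(alpha_samples, 2.5):.4f}, "
      f"{np.percentile(alpha_samples, 97.5):.4f}]")
```

The dissertation's framework additionally simulates the sensor geometry, signal processing, and model-form uncertainties; this sketch shows only the propagation pattern.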
383

Simulation of X-ray imaging systems for luggage inspection

Xie, Wei 23 December 2009 (has links)
This thesis describes XL, an x-ray imaging simulator for luggage inspection. This software system runs on a workstation and models x-ray sources, x-ray detectors and objects between them. A simple graphical interface permits the user to specify simulation parameters and inputs. XL then uses Monte Carlo methods to simulate x-ray interaction with matter, including the photoelectric effect, coherent scattering, and incoherent scattering. Finally, XL can produce x-ray images which agree closely with experimental data obtained from a commercial luggage scanner. The simulator will be a valuable tool in the development of future x-ray scanners, particularly those designed to detect explosives in luggage. / Master of Science
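A minimal sketch of the Monte Carlo photon transport that such a simulator performs is shown below. It is not XL's code: the cross-sections are hypothetical single-energy values, the geometry is a 1-D uniform slab, and scattered photons simply continue forward, whereas a full simulator samples new directions and energies per material.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical interaction coefficients at one photon energy (1/cm);
# a real simulator tabulates these per material and energy.
MU_PHOTOELECTRIC = 0.10
MU_COHERENT = 0.02
MU_INCOHERENT = 0.08
MU_TOTAL = MU_PHOTOELECTRIC + MU_COHERENT + MU_INCOHERENT
SLAB_THICKNESS = 10.0    # cm

def transmitted(n_photons: int) -> float:
    """Fraction of photons crossing a uniform slab (pencil beam, 1-D)."""
    count = 0
    for _ in range(n_photons):
        x = 0.0
        while True:
            x += rng.exponential(1.0 / MU_TOTAL)   # distance to next interaction
            if x >= SLAB_THICKNESS:
                count += 1                          # reached the detector side
                break
            u = rng.uniform(0.0, MU_TOTAL)
            if u < MU_PHOTOELECTRIC:
                break                               # absorbed
            # coherent/incoherent scattering: photon survives; a full model
            # would also sample a new direction and (incoherent) a new energy
    return count / n_photons

print(f"MC transmitted fraction (incl. forward-scattered): {transmitted(50_000):.4f}")
print(f"uncollided fraction, exp(-mu*t):                   "
      f"{np.exp(-MU_TOTAL * SLAB_THICKNESS):.4f}")
```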
384

Generalized Principal Component Analysis

Solat, Karo 05 June 2018 (has links)
The primary objective of this dissertation is to extend the classical Principal Components Analysis (PCA), which aims to reduce the dimensionality of a large number of interrelated Normal variables, in two directions. The first is to go beyond the static (contemporaneous or synchronous) covariance matrix among these interrelated variables to include certain forms of temporal (over time) dependence. The second direction extends the PCA model beyond the Normal multivariate distribution to the Elliptically Symmetric family of distributions, which includes the Normal, the Student's t, the Laplace, and the Pearson type II distributions as special cases. The result of these extensions is called Generalized Principal Component Analysis (GPCA). The GPCA is illustrated using both Monte Carlo simulations and an empirical study, in an attempt to demonstrate the enhanced reliability of these more general factor models in the context of out-of-sample forecasting. The empirical study examines the predictive capacity of the GPCA method in the context of exchange rate forecasting, showing how the GPCA method dominates forecasts based on existing standard methods, including random walk models with or without macroeconomic fundamentals. / Ph. D.
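For reference, the classical PCA that the GPCA generalizes reduces to an eigendecomposition of the sample covariance matrix. The sketch below (synthetic data, not from the dissertation) shows this baseline; the GPCA additionally accommodates temporal dependence and Elliptically Symmetric distributions, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: 200 observations of 5 correlated Normal variables
n, p = 200, 5
A = rng.normal(size=(p, p))
X = rng.normal(size=(n, p)) @ A           # induces a non-trivial covariance

# Classical PCA: eigendecomposition of the sample covariance matrix
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(S)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                     # retain two principal components
scores = Xc @ eigvecs[:, :k]              # component scores (factors)
explained = eigvals[:k].sum() / eigvals.sum()
print(f"variance explained by {k} PCs: {explained:.1%}")
```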
385

Maintenance Data Augmentation using Markov Chain Monte Carlo Simulation (Hamiltonian MCMC using NUTS)

Roohani, Muhammad Ammar January 2024 (has links)
Reliable and efficient utilization and operation of any engineering asset require carefully designed maintenance planning, and maintenance-related data in the form of failure times, repair times, Mean Time Between Failures (MTBF), condition data, and so on play a pivotal role in maintenance decision support. With the advancement of data analytics and industrial artificial intelligence, maintenance-related data are used for prognostics modeling to predict future maintenance requirements, which form the basis of maintenance design and planning in any maintenance-conscious industry such as railways. The lack of such data creates a number of different problems in data-driven prognostics modeling, and researchers have employed a few methods to counter them. The proposed methodology involves a data augmentation technique using Markov Chain Monte Carlo (MCMC) simulation to enhance maintenance data for use in maintenance prognostics modeling, which can serve as a basis for better maintenance decision support and planning.
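The following Python sketch illustrates the underlying idea under stated assumptions: a Weibull failure-time model is fit to a small, hypothetical set of failure times by MCMC, and posterior draws are then used to generate synthetic (augmented) failure data. A plain random-walk Metropolis kernel is used here for self-containment; the thesis itself employs Hamiltonian MCMC with the No-U-Turn Sampler (NUTS), which explores the posterior far more efficiently.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Sparse observed failure times in hours (hypothetical)
t_obs = np.array([120.0, 340.0, 95.0, 410.0, 220.0])

def log_post(theta):
    """Log-posterior of a Weibull(shape k, scale lam) model with weak Gamma priors."""
    k, lam = theta
    if k <= 0 or lam <= 0:
        return -np.inf
    return (stats.weibull_min.logpdf(t_obs, k, scale=lam).sum()
            + stats.gamma.logpdf(k, 2, scale=2)
            + stats.gamma.logpdf(lam, 2, scale=300))

# Random-walk Metropolis (stand-in for the thesis's Hamiltonian/NUTS sampler)
theta = np.array([1.0, 250.0])
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [0.1, 20.0])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5_000:])               # discard burn-in

# Data augmentation: draw synthetic failure times from posterior draws
idx = rng.integers(0, len(samples), 1000)
synthetic = stats.weibull_min.rvs(samples[idx, 0], scale=samples[idx, 1],
                                  random_state=rng)
print(f"posterior mean (shape, scale): {samples.mean(axis=0)}")
print(f"synthetic MTBF estimate:       {synthetic.mean():.0f} h")
```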
386

Unbonded Monostrands for Camber Adjustment

Sethi, Vivek 15 March 2006 (has links)
Prestressed concrete structural members camber upwards or downwards depending upon the location of application of the prestress force. Identical members do not camber equally due to variability in the factors influencing camber. Differential camber in the beams, if significant, results in excessively tall haunches or in the girder top flange extending into the bottom of the slab. For adjacent members that are to be transversely post-tensioned, such as deck bulb-tees and box girders, differential camber causes problems during the fit-up process. This variation is undesirable and hinders the smooth progress of construction work if not properly accounted for at the design stage. Various factors influence camber and camber growth in prestressed members, among them concrete strength and modulus, concrete creep and shrinkage properties, curing conditions, maturity of concrete at release of the prestress force, initial strand stress, climatic conditions in storage, and length of time in storage. Combinations of these variables result in variation of camber among otherwise similar beams at the time they are erected. One way to increase the precision of camber estimation is to use Monte Carlo simulation based upon the randomized parameters affecting camber and camber growth. In this method the parameters, each represented by a probability distribution function, are combined and passed through a deterministic model, resulting in camber and camber-growth predictions with narrowed probability bounds, as compared to the single definite value given by most contemporary methods. This outcome gives the expected range of cambers for a given girder design. After determining the expected range of camber, the ultimate goal is to provide guidelines for using unbonded monostrands for camber adjustment. / Master of Science
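A minimal sketch of this Monte Carlo approach, under assumed (hypothetical) parameter distributions and a deliberately simple elastic-beam camber model, might look as follows; the point is the probability bounds on the output, not the particular numbers.

```python
import numpy as np

rng = np.random.default_rng(5)

# Deterministic camber model: prestress hog minus self-weight sag
# (elastic beam theory; all parameter values below are hypothetical)
L = 20.0        # m, span
I = 0.012       # m^4, moment of inertia
e = 0.30        # m, strand eccentricity
w = 8.0e3       # N/m, self weight

N = 100_000
E = rng.normal(32e9, 3e9, N)                  # Pa, concrete modulus
P = rng.normal(2.0e6, 0.15e6, N)              # N, effective prestress force
creep = rng.lognormal(np.log(1.8), 0.1, N)    # long-term growth multiplier

hog = P * e * L**2 / (8 * E * I)              # m upward, from prestress
sag = 5 * w * L**4 / (384 * E * I)            # m downward, from self weight
camber = creep * (hog - sag)                  # m, long-term net camber

lo, hi = np.percentile(camber * 1e3, [5, 95])
print(f"median camber:          {np.median(camber) * 1e3:.1f} mm")
print(f"90% probability bounds: [{lo:.1f}, {hi:.1f}] mm")
```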
387

Probabilistic Post-Liquefaction Residual Shear Strength Analyses of Cohesionless Soil Deposits: Application to the Kocaeli (1999) and Duzce (1999) Earthquakes

Lumbantoruan, Partahi Mamora Halomoan 31 October 2005 (has links)
Liquefaction of granular soil deposits can have extremely detrimental effects on the stability of embankment dams, natural soil slopes, and mine tailings. The residual, or liquefied, shear strength of liquefiable soils is a very important parameter when evaluating the stability and deformation of level and sloping ground. Current procedures for estimating the liquefied shear strength are based on extensive laboratory testing programs or on the back-analysis of failures where liquefaction was involved and in-situ testing data were available. All available procedures use deterministic methods for estimating and selecting the liquefied shear strength. Over the past decade, there has been an increasing trend towards analyzing geotechnical problems using probability and reliability. This study presents procedures for assessing the liquefied shear strength of cohesionless soil deposits within a risk-based framework. Probabilistic slope stability procedures using reliability methods and Monte Carlo simulations are developed to incorporate the uncertainties associated with geometrical and material parameters. The probabilistic methods are applied to flow liquefaction case histories from the 1999 Kocaeli and Duzce, Turkey, earthquakes, where extensive liquefaction was observed. The methods presented here should aid in making better decisions about the design and rehabilitation of structures constructed of, or atop, liquefiable soil deposits. / Master of Science
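The probabilistic slope stability idea can be sketched compactly: sample the uncertain geometrical and material parameters, evaluate a deterministic factor of safety for each sample, and report the probability of failure as the fraction of samples with FS < 1. The infinite-slope model and all distributions below are hypothetical illustrations, not the study's case-history inputs.

```python
import numpy as np

rng = np.random.default_rng(6)

# Infinite-slope sketch: factor of safety = resisting / driving shear stress.
# All parameter distributions below are hypothetical.
N = 200_000
beta = np.radians(12.0)                       # slope angle
gamma = rng.normal(18.5, 0.5, N)              # kN/m^3, unit weight
z = rng.normal(4.0, 0.3, N)                   # m, depth of liquefied layer
s_r = rng.lognormal(np.log(20.0), 0.35, N)    # kPa, residual (liquefied) strength

driving = gamma * z * np.sin(beta) * np.cos(beta)   # kPa, static shear stress
FS = s_r / driving
p_failure = np.mean(FS < 1.0)

print(f"mean factor of safety:  {FS.mean():.2f}")
print(f"probability of failure: {p_failure:.3f}")
```

Monte Carlo simulation makes it straightforward to add correlations or further random variables, which is awkward in closed-form reliability methods.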
388

Scaling of Steady States in a Simple Driven Three-State Lattice Gas

Thies, Michael 15 September 1998 (has links)
Phase segregated states in a simple three-state stochastic lattice gas are investigated. A two-dimensional finite lattice with periodic boundary conditions is filled with one hole and two oppositely "charged" species of particles, subject to an excluded volume constraint. Starting from a completely disordered initial configuration, a sufficiently large external "electric" field E induces the phase segregation, by separating the charges into two strips and "trapping" the hole at an interface between them. Focusing on the steady state, the scaling properties of an appropriate order parameter, depending on drive and system size, are investigated by mean-field theory and Monte Carlo methods. Density profiles of the two interfaces in the ordered system are studied with the help of Monte Carlo simulations and are found to scale in the field-dependent variable ε = 2 tanh(E/2), for E ≲ 0.8. For larger values of E, independent approximations of the interfacial profiles, obtained within the framework of mean-field theory, exhibit significant deviations from the Monte Carlo data. Interestingly, the deviations can be reduced significantly by a slight modification of the mean-field theory. / Master of Science
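A heavily simplified Monte Carlo sketch of such a driven lattice gas is given below: a periodic lattice of positive and negative "charges" plus one hole, with nearest-neighbour exchanges accepted at a Metropolis rate biased by the field. The dynamics and the crude order parameter are illustrative assumptions, not the thesis's exact update rules.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sites hold +1 or -1 ("charges") or 0 (the single hole); periodic lattice.
L_SIZE, E_FIELD = 16, 2.0
n = L_SIZE * L_SIZE
config = [1] * (n // 2) + [-1] * (n // 2 - 1) + [0]     # one hole
lattice = rng.permutation(config).reshape(L_SIZE, L_SIZE)

moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def sweep(lat):
    """One Monte Carlo sweep of biased nearest-neighbour exchanges.
    The field drives charges along x: moving charge q by dx is accepted
    with the Metropolis rate min(1, exp(q * E * dx))."""
    for _ in range(lat.size):
        x, y = rng.integers(0, L_SIZE, 2)
        dx, dy = moves[rng.integers(0, 4)]
        x2, y2 = (x + dx) % L_SIZE, (y + dy) % L_SIZE
        q1, q2 = lat[x, y], lat[x2, y2]
        if q1 == q2:
            continue                         # identical contents: no-op
        if np.log(rng.uniform()) < (q1 - q2) * E_FIELD * dx:
            lat[x, y], lat[x2, y2] = q2, q1  # exchange site contents

for _ in range(500):                         # relax toward the driven steady state
    sweep(lattice)

# Crude order parameter: how strongly columns are charge-segregated
col_charge = lattice.sum(axis=1) / L_SIZE
print(f"max column charge density: {np.abs(col_charge).max():.2f}")
```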
389

Statistical arbitrage: Factor investing approach

Akyildirim, Erdinc, Goncu, A., Hekimoglu, A., Nguyen, D.K., Sensoy, A. 26 September 2023 (has links)
We introduce a continuous-time model for stock prices in a general factor representation with the noise driven by a geometric Brownian motion process. We derive the theoretical hitting probability distribution for the long-until-barrier strategies and the conditions for statistical arbitrage. We optimize our statistical arbitrage strategies with respect to the expected discounted returns and the Sharpe ratio. Bootstrapping results show that the theoretical hitting probability distribution is a realistic representation of the empirical hitting probabilities. We test the empirical performance of the long-until-barrier strategies using US equities and demonstrate that our trading rules can generate statistical arbitrage profits. / The full text of this article will be released for public view at the end of the publisher embargo on 16 Sep 2024.
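The hitting probabilities at the heart of long-until-barrier strategies can be checked numerically. The sketch below (hypothetical parameters; a single-asset geometric Brownian motion rather than the paper's general factor model) compares the closed-form probability that the drifted log-price reaches a barrier by time T against a Monte Carlo estimate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

# Geometric Brownian motion and an upper barrier (parameters hypothetical)
S0, B = 100.0, 110.0
mu, sigma, T = 0.08, 0.25, 1.0
nu = mu - 0.5 * sigma**2               # drift of the log-price
b = np.log(B / S0)                     # log-barrier (> 0)

# Closed-form first-passage probability for Brownian motion with drift
analytic = (norm.cdf((nu * T - b) / (sigma * np.sqrt(T)))
            + np.exp(2 * nu * b / sigma**2)
            * norm.cdf((-nu * T - b) / (sigma * np.sqrt(T))))

# Monte Carlo check on a discrete grid
n_paths, n_steps = 5_000, 1_000
dt = T / n_steps
z = rng.normal(size=(n_paths, n_steps))
logS = np.cumsum(nu * dt + sigma * np.sqrt(dt) * z, axis=1)
hit = (logS.max(axis=1) >= b).mean()

print(f"analytic hitting probability: {analytic:.4f}")
print(f"Monte Carlo estimate:         {hit:.4f}")
```

The small gap between the two numbers reflects discrete monitoring: a simulated path can cross and recross the barrier between grid points, so the grid estimate slightly understates the continuous-time probability.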
390

Prepayment Modeling in Mortgage Backed Securities : Independent and Strategic Approaches to Prepayment Timing

Andersson, Johanna January 2024 (has links)
Mortgage Backed Securities (MBS) are a type of security backed by mortgages as the underlying asset. They are created through a process called securitization, in which specific mortgages are grouped together, separated from the bank's other assets, and sold to investors. One of the risks for investors in MBS is prepayment of the underlying mortgages by borrowers, which creates uncertainty in the cash flows to be distributed among the investors. There is a correlation between falling market interest rates and an increase in prepayments: when market rates fall, borrowers have an incentive to refinance their mortgages at lower rates, leading to higher prepayment rates. The Public Securities Association (PSA) model is recognized as a standard benchmark for estimating prepayment rates in MBS. In this thesis, we introduce models that generate the time points of prepayments and compare how well these models match the PSA model. Most of these models determine the timing of each prepayment event with a Poisson process, using exponentially distributed waiting times, while one model employs the Gamma distribution. Additionally, we introduce a strategy in which prepayment is strategically triggered whenever the market rate falls below the contract rate, and we investigate when it is most beneficial to make a prepayment under this strategy. The results show that, among the models that generate prepayment events randomly, the Gamma distribution aligns best with the PSA rule. Regarding the strategic prepayment strategy, our findings suggest that it is most advantageous to make prepayments early in the mortgage term, which also corresponds to the most rational borrower behavior.
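A compact sketch of the kind of comparison described above: simulate prepayment months from Gamma-distributed waiting times (parameters hypothetical) and compare the resulting single monthly mortality (SMM) against the 100% PSA benchmark, whose conditional prepayment rate (CPR) ramps by 0.2% per month up to 6% at month 30 and stays constant thereafter.

```python
import numpy as np

rng = np.random.default_rng(9)

months = np.arange(1, 361)                   # 30-year mortgage pool

# 100% PSA benchmark: CPR ramps 0.2 %/month up to 6 % at month 30
cpr_psa = np.minimum(0.06, 0.002 * months)
smm_psa = 1 - (1 - cpr_psa) ** (1 / 12)      # equivalent monthly rate

# Simulated pool: each loan's prepayment month from a Gamma waiting time
n_loans = 100_000
prepay_month = np.ceil(rng.gamma(shape=2.0, scale=60.0, size=n_loans))

# Empirical SMM: share of surviving loans that prepay in each month
alive = np.array([(prepay_month > m).sum() for m in range(0, 361)])
smm_sim = (alive[:-1] - alive[1:]) / np.maximum(alive[:-1], 1)

print("month  SMM(sim)  SMM(100% PSA)")
for m in (12, 30, 60, 120):
    print(f"{m:5d}  {smm_sim[m - 1]:.4f}    {smm_psa[m - 1]:.4f}")
```

The Gamma waiting time produces a hazard that rises from zero and levels off, which is why it can track the PSA ramp more closely than a constant-rate exponential model.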
