101

Lab Scale Hydraulic Parameter Estimation

Hartz, Andrew Scott January 2011 (has links)
Hydraulic tomography has been tested at the field scale, at the lab scale, and in synthetic experiments. Recently, Illman and Berg conducted studies at the lab scale. Using their data, hydraulic tomography can be compared to homogeneous anisotropic solutions based on one pumping well or multiple pumping wells. Hydraulic tomography was found to outperform homogeneous methods at predicting hydraulic head in validation pumping experiments. This study also shows that homogeneous anisotropic tests exhibit scenario-dependent behavior. Additional tests performed to further validate these conclusions, including spatial moment analysis, response surface analysis, and synthetic hydraulic tomography, show consistent results and provide further validation of these findings. An additional study examining the principle of reciprocity proved inconclusive.
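The abstract does not give the validation metric used; a minimal sketch of the kind of head-prediction comparison it describes is a root-mean-square error over drawdown observations. All values and names below are illustrative, not from the thesis:

```python
import numpy as np

# Hypothetical drawdowns from a validation pumping test, with predictions
# from a tomography-based heterogeneous model and a homogeneous
# anisotropic model (illustrative numbers only).
observed = np.array([0.42, 0.35, 0.51, 0.28, 0.33])        # m
pred_tomography = np.array([0.40, 0.36, 0.49, 0.27, 0.35])
pred_homogeneous = np.array([0.30, 0.44, 0.41, 0.36, 0.25])

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed heads."""
    return np.sqrt(np.mean((pred - obs) ** 2))

# A lower RMSE indicates better head prediction; this is the kind of
# metric used to rank the two modeling approaches against each other.
print(f"tomography RMSE:  {rmse(pred_tomography, observed):.3f} m")
print(f"homogeneous RMSE: {rmse(pred_homogeneous, observed):.3f} m")
```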
102

A Mathematical Model for the Devolatilization of EPDM Rubber in a Series of Steam Stripping Vessels

Francoeur, Angelica 24 October 2012 (has links)
A steady-state mathematical model for the stripping section of an industrial EPDM rubber production process was developed for a three-tank process and two four-tank processes. The experiments conducted to determine model parameters, such as the equivalent radius of EPDM particles and the solubility and diffusivity parameters for hexane and ENB in EPDM polymer, are described. A single-particle, multiple-tank model was developed first, followed by a process model that accounts for the residence-time distribution of crumb particles. Plant data, as well as input data from an existing steady-state model, were used to estimate the tuning parameters in the multiple-particle, multiple-tank model. Using plant data to assess predictive accuracy, the resulting three-tank and four-tank process B models provide accurate predictions, with typical errors of 0.35 parts per hundred resin (phr) and 0.12 phr, respectively. The four-tank process A model provides less-accurate predictions of residual crumb concentrations in the second tank and has an overall typical error of 1.05 phr. Additional plant data from the three- and four-tank processes would improve the estimability of the parameter values in the parameter ranking and estimation steps and thus yield increased predictive accuracy. / Thesis (Master, Chemical Engineering) -- Queen's University, 2012-10-23 21:06:05.509
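The thesis's particle model is not reproduced in the abstract; a common starting point for devolatilization from a spherical crumb particle is the classical series solution of Fick's second law with a swept (zero-concentration) surface. The sketch below uses illustrative diffusivity and radius values, not the thesis's fitted parameters:

```python
import numpy as np

def residual_fraction(D, R, t, n_terms=200):
    """Fraction of volatile (e.g., hexane) remaining in a spherical
    particle of radius R after time t, from the classical series
    solution of Fick's second law with zero surface concentration:
        M_t / M_inf = (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / R^2)
    """
    n = np.arange(1, n_terms + 1)
    return (6.0 / np.pi**2) * np.sum(np.exp(-(n * np.pi / R) ** 2 * D * t) / n**2)

# Illustrative values only (not the thesis's estimated parameters):
D = 1e-10   # hexane diffusivity in EPDM, m^2/s
R = 2e-3    # equivalent particle radius, m
for t in (60, 300, 900):  # residence times, s
    print(f"t = {t:4d} s -> residual fraction = {residual_fraction(D, R, t):.3f}")
```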
103

Design and implementation of a special protection scheme to prevent voltage collapse

2012 March 1900 (has links)
The pressure to increase profits for owners, the deregulation of the utility market, and the need to obtain permission from regulatory agencies have forced electric power utilities to operate their systems close to the security limits of their generation, transmission and distribution systems. As a result, power systems are now exposed to substantial risk of voltage collapse. This phenomenon is complex and localized in nature but has widespread adverse consequences. The worst-case scenario of voltage collapse is partial or total outage of the power system, resulting in lost industrial productivity for the country and major financial loss to the utility. On-line monitoring of voltage stability is therefore becoming a vital practice that is increasingly adopted by electric power utilities. The phenomenon of voltage collapse has been studied for quite some time, and techniques for identifying voltage collapse situations have been suggested. Most examine the steady-state and dynamic behavior of the power system in off-line modes; very few on-line protection and control schemes have been proposed and implemented. In this thesis, a new technique for preventing voltage collapse is presented. The developed technique uses a subset of measurements from the local bus as well as neighbouring buses and considers not only the present state of the system but also future load and topology changes. The technique improves the robustness of local-measurement-based methods and can be implemented in on-line as well as off-line modes. It monitors voltages and currents and calculates the time to voltage collapse from those measurements. As the system approaches voltage collapse, control actions are implemented to relieve the system and prevent major disturbances. The developed technique was tested by simulating a variety of operating states and generating voltage collapse situations on the IEEE 30-bus test system. Some results from the simulation studies are reported in this thesis. The results obtained from the simulations indicate that the proposed technique is able to estimate the time to voltage collapse and can implement control actions as well as alert operators.
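The abstract does not spell out how the time to voltage collapse is computed from local measurements. One widely used local approach, shown here as an assumed stand-in rather than the thesis's actual method, tracks a Thevenin equivalent from consecutive voltage/current phasors and flags proximity to the maximum power transfer point:

```python
import numpy as np

def thevenin_from_phasors(V1, I1, V2, I2):
    """Estimate the Thevenin source and impedance seen from a load bus
    using two consecutive voltage/current phasor snapshots (V = E - Z_th*I)."""
    Z_th = (V2 - V1) / (I1 - I2)
    E = V1 + Z_th * I1
    return E, Z_th

# Hypothetical per-unit phasor snapshots as the load grows:
V1, I1 = 0.98 * np.exp(1j * 0.00), 0.60 * np.exp(-1j * 0.30)
V2, I2 = 0.95 * np.exp(-1j * 0.02), 0.70 * np.exp(-1j * 0.32)

E, Z_th = thevenin_from_phasors(V1, I1, V2, I2)
Z_load = V2 / I2

# Maximum power transfer (voltage collapse boundary) occurs when
# |Z_load| = |Z_th|; the ratio is a common proximity indicator.
margin = abs(Z_load) / abs(Z_th)
print(f"|Z_load|/|Z_th| = {margin:.2f}  (collapse proximity as ratio -> 1)")
```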
104

Crop model parameter estimation and sensitivity analysis for large scale data using supercomputers

Lamsal, Abhishes January 1900 (has links)
Doctor of Philosophy / Department of Agronomy / Stephen M. Welch / Global crop production must be doubled by 2050 to feed 9 billion people. Novel crop improvement methods and management strategies are the sine qua non for achieving this goal. This requires reliable quantitative methods for predicting the behavior of crop cultivars in novel, time-varying environments. In the last century, two different mathematical prediction approaches emerged: (1) quantitative genetics (QG) and (2) ecophysiological crop modeling (ECM). These methods are completely disjoint in terms of both their mathematics and their strengths and weaknesses. However, in the period from 1996 to 2006 a method for melding them emerged to support breeding programs. The method involves two steps: (1) exploiting ECM's to describe the intricate, dynamic and environmentally responsive biological mechanisms determining crop growth and development on daily/hourly time scales; (2) using QG to link genetic markers to the values of ECM constants (called genotype-specific parameters, GSP's) that encode the responses of different varieties to the environment. This can require huge amounts of computation because ECM's have many GSP's as well as site-specific properties (SSP's, e.g. soil water holding capacity). Moreover, one cannot employ QG methods unless the GSP's from hundreds to thousands of lines are known. Thus, the overall objective of this study is to identify better ways to reduce the computational burden without sacrificing ECM predictability. The study has three parts: (1) using the extended Fourier Amplitude Sensitivity Test (eFAST) to globally identify parameters of the CERES-Sorghum model that require accurate estimation under wet and dry environments; (2) developing a novel estimation method (Holographic Genetic Algorithm, HGA) applicable to both GSP and SSP estimation and testing it with the CROPGRO-Soybean model using 182 soybean lines planted in 352 site-years (7,426 yield observations); and (3) examining the behavior under estimation of the anthesis date prediction component of the CERES-Maize model. The latter study used 5,266 maize Nested Association Mapping lines and a total of 49,491 anthesis date observations from 11 plantings. Three major problems were discovered that challenge the ability to link QG and ECM's: (1) model expressibility, (2) parameter equifinality, and (3) parameter instability. Poor expressibility is the structural inability of a model to accurately predict an observation; it can only be solved by model changes. Parameter equifinality occurs when multiple parameter values produce equivalent model predictions. This can be solved by using eFAST as a guide to reduce the numbers of interacting parameters and by collecting additional data types. When parameters are unstable, it is impossible to know what values to use in environments other than those used in calibration. All of the methods that will have to be applied to solve these problems will expand the amount of data used with ECM's. This will require better optimization methods to estimate model parameters efficiently. The HGA developed in this study will be a good foundation to build on. Thus, future research should be directed towards solving these issues to enable ECM's to be used as tools to support breeders, farmers, and researchers addressing global food security issues.
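The Holographic Genetic Algorithm itself is not described in the abstract; as a rough illustration of GSP estimation, here is a minimal generic genetic algorithm fitted to a toy stand-in for an ECM. The crop_model function and all numeric values are hypothetical placeholders, not the thesis's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_model(gsp, env):
    """Stand-in for an ecophysiological crop model run; a real study
    would call CERES or CROPGRO here. Purely illustrative."""
    return gsp[0] * env + gsp[1]

def fitness(gsp, envs, observed):
    pred = np.array([crop_model(gsp, e) for e in envs])
    return -np.sqrt(np.mean((pred - observed) ** 2))   # negative RMSE

def simple_ga(envs, observed, pop=50, gens=100, bounds=(-5, 5)):
    """Minimal GA: tournament selection plus Gaussian mutation.
    (The thesis's HGA is more elaborate; this only shows the idea.)"""
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        f = np.array([fitness(p, envs, observed) for p in P])
        idx = rng.integers(0, pop, size=(pop, 2))
        winners = np.where((f[idx[:, 0]] > f[idx[:, 1]])[:, None],
                           P[idx[:, 0]], P[idx[:, 1]])
        P = np.clip(winners + rng.normal(0, 0.1, winners.shape), lo, hi)
    f = np.array([fitness(p, envs, observed) for p in P])
    return P[np.argmax(f)]

envs = np.linspace(0, 10, 20)
observed = 2.0 * envs + 1.0 + rng.normal(0, 0.2, envs.size)
print("estimated GSPs:", simple_ga(envs, observed))
```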
105

Assessing the Roof Structure of the Breeding Barn Using Truss Member Resonant Frequencies

Maille, Nathan James 17 June 2008 (has links)
The motivation for this research was to apply vibration-testing methods to determine the axial loads in the pin-ended truss members of the Breeding Barn. Vibration testing was necessary to determine the in-situ axial loads of the truss members because other common methods, such as strain gauges, were not useful for this application: strain gauges can only detect changes in strain and therefore only changes in load. Due to the size and weight of the Breeding Barn's roof, significant axial loads are produced in the truss members, and this in-situ axial load from the dead load of the roof is a significant portion of any additional loading and cannot be ignored. The ultimate goal of determining the axial loads in the truss members was to develop a model of the barn's roof structure that accurately predicts axial loads in the truss members over a range of loading conditions. Developing such a model was important in order to make a structural assessment of the Breeding Barn's roof structure. To determine the axial loads in the truss members, acceleration time histories of the individual truss members were collected using wireless accelerometers provided by MicroStrain of Williston, Vermont. Using the Fourier transform, power spectral densities were produced from the raw acceleration time histories, and from these plots the resonant frequencies of the truss members were determined. Knowing the resonant frequencies for a member and the beam vibration equation developed for pin-ended members, the axial load of each truss member was calculated. This process was done for each wrought iron truss member under three separate loading conditions, to provide enough experimental data for comparison with the predictions of several proposed frame models of the barn's roof structure. Ultimately, a model was chosen that best predicted the axial loads in the truss members for the three loading combinations tested. Using this frame model, an assessment of the barn's roof structure could be made.
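The pin-ended beam vibration relation the abstract refers to can be inverted for axial load once the fundamental frequency is picked from the power spectral density. A sketch of that workflow, with illustrative member properties and a synthetic acceleration record rather than the Breeding Barn's actual data:

```python
import numpy as np
from scipy.signal import welch

def axial_load_from_accel(accel, fs, L, EI, rho_A):
    """Estimate axial force in a pin-ended member from its measured
    fundamental frequency. For a pinned-pinned beam under tension P:
        (2*pi*f1)^2 = (pi^4*EI/(rho_A*L^4)) * (1 + P*L^2/(pi^2*EI))
    which inverts to P = 4*f1^2*rho_A*L^2 - pi^2*EI/L^2.
    """
    freqs, psd = welch(accel, fs=fs, nperseg=min(4096, len(accel)))
    f1 = freqs[np.argmax(psd)]          # dominant spectral peak, Hz
    P = 4.0 * f1**2 * rho_A * L**2 - np.pi**2 * EI / L**2
    return f1, P

# Illustrative rod properties (not the barn's actual members):
fs = 512.0                              # sample rate, Hz
t = np.arange(0, 30, 1 / fs)
accel = np.sin(2 * np.pi * 12.0 * t) + 0.3 * np.random.randn(t.size)
f1, P = axial_load_from_accel(accel, fs, L=4.0, EI=2.1e4, rho_A=7.0)
print(f"f1 = {f1:.2f} Hz -> axial load P = {P/1e3:.1f} kN")
```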
106

Calibration of discrete element modelling parameters for bulk materials handling applications

Guya, Solomon Ramas January 2018 (has links)
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Engineering to the Faculty of Engineering and the Built Environment, School of Mechanical, Industrial and Aeronautical Engineering, University of the Witwatersrand, Johannesburg, 2018 / The Discrete Element Method (DEM) models and simulates the flow of granular material through confining geometry. The method has the potential to significantly reduce the costs associated with the design and operation of bulk materials handling equipment. The challenge, however, is the difficulty of determining the required input parameters. Previous calibration approaches involved direct measurements and random parameter search. The aim of this research was to develop a sequential DEM calibration framework, identify appropriate calibration experiments and validate the framework on real flows in a laboratory-scale silo and chute. A systematic and sequential DEM calibration framework was developed. The framework categorises the DEM input parameters into three groups: directly measured input parameters, input parameters obtained from the literature, and calibrated parameter values obtained by linking physical experiments with DEM simulations. The directly measured parameters comprised the coefficients of restitution and the particle-to-wall-surface coefficient of rolling friction. The literature-obtained parameters were the Young's Modulus and Poisson's ratio. The calibrated parameters comprised the particle-to-wall-surface coefficient of sliding friction, calibrated from the wall friction angle; the particle-to-particle friction coefficients (sliding and rolling), calibrated from two independent angles of repose; particle density, calibrated from bulk density; and the adhesion and cohesion energy densities. The framework was then tested in the LIGGGHTS DEM software using iron ore with a particle size distribution between +2 mm and -4.75 mm. Validation of the obtained input parameter values in the silo and chute showed very good qualitative agreement between the measured and simulated flows. Quantitative predictions of flow rate were found to be particularly sensitive to variations in the particle-to-particle coefficient of sliding friction. It was concluded that, due to their inherent limitations, angle of repose tests are not totally reliable for calibrating the particle-to-particle coefficient of sliding friction. Sensitivity tests showed that in the quasi-static flow regime only the frictional parameters were dominant, while both the frictional and collisional parameters were dominant in the dynamic flow regime. These results are expected to lay a solid foundation for further research in systematic DEM calibration and greatly increase the effectiveness of DEM models in bulk materials handling applications. / XL2019
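A hedged sketch of the calibration step that links a physical angle-of-repose measurement to DEM simulations. Here simulate_angle is a hypothetical placeholder for a full DEM run (e.g., a scripted LIGGGHTS job that builds a heap and measures its angle), and the bisection assumes the simulated angle increases monotonically with the sliding friction coefficient:

```python
def calibrate_sliding_friction(simulate_angle, target_deg,
                               lo=0.1, hi=1.0, tol=0.5, max_iter=20):
    """Bisect over the particle-to-particle sliding friction coefficient
    until the simulated angle of repose matches the measured one.
    `simulate_angle(mu)` stands in for a complete DEM simulation and is
    assumed monotonically increasing in mu over [lo, hi].
    """
    mu = 0.5 * (lo + hi)
    for _ in range(max_iter):
        mu = 0.5 * (lo + hi)
        angle = simulate_angle(mu)
        if abs(angle - target_deg) < tol:
            break
        if angle < target_deg:
            lo = mu
        else:
            hi = mu
    return mu

# Toy stand-in for the DEM run, for demonstration only:
fake_dem = lambda mu: 15.0 + 30.0 * mu
print(calibrate_sliding_friction(fake_dem, target_deg=34.0))
```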
107

Various Approaches on Parameter Estimation in Mixture and Non-Mixture Cure Models

Unknown Date (has links)
Analyzing lifetime data with long-term survivors is an important topic in medical applications. Cure models are usually used to analyze survival data with a proportion of cured subjects or long-term survivors. In order to include the proportion of cured subjects, mixture and non-mixture cure models are considered. In this dissertation, we utilize both maximum likelihood and Bayesian methods to estimate model parameters. Simulation studies are carried out to verify the finite sample performance of the estimation methods. Real data analyses are reported to illustrate the goodness-of-fit via Fréchet, Weibull and Exponentiated Exponential susceptible distributions. Among the three parametric susceptible distributions, Fréchet is the most promising. Next, we extend the non-mixture cure model to include a change point in a covariate for right censored data. The smoothed likelihood approach is used to address the problem of a log-likelihood function which is not differentiable with respect to the change point. The simulation study is based on the non-mixture change point cure model with an exponential distribution for the susceptible subjects. The simulation results revealed a convincing performance of the proposed method of estimation. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
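A minimal sketch of maximum likelihood estimation for a mixture cure model with right censoring, using a Weibull susceptible distribution (one of the three families the dissertation considers). The data below are simulated purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(params, t, event):
    """Negative log-likelihood of a mixture cure model with a Weibull
    susceptible distribution: S(t) = pi + (1 - pi) * S0(t).
    params = (logit_pi, log_shape, log_scale); event = 1 if death observed.
    """
    pi = 1.0 / (1.0 + np.exp(-params[0]))        # cure fraction in (0, 1)
    shape, scale = np.exp(params[1]), np.exp(params[2])
    f0 = weibull_min.pdf(t, shape, scale=scale)  # susceptible density
    S0 = weibull_min.sf(t, shape, scale=scale)   # susceptible survival
    ll = np.where(event == 1,
                  np.log((1 - pi) * f0 + 1e-300),
                  np.log(pi + (1 - pi) * S0 + 1e-300))
    return -ll.sum()

# Simulated right-censored data with a 30% cure fraction (illustrative):
rng = np.random.default_rng(1)
n = 500
cured = rng.random(n) < 0.3
t_latent = np.where(cured, np.inf,
                    weibull_min.rvs(1.5, scale=2.0, size=n, random_state=rng))
c = rng.uniform(0, 6, n)                         # censoring times
t_obs, event = np.minimum(t_latent, c), (t_latent <= c).astype(int)

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0],
               args=(t_obs, event), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print(f"estimated cure fraction: {pi_hat:.2f}")
```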
108

A Monte-Carlo comparison of methods in analyzing structural equation models with incomplete data.

January 1991 (has links)
by Siu-fung Chan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. / Bibliography: leaves 38-41. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Analysis of the Structural Equation Model with Continuous Data --- p.6 / §2.1 --- The Model --- p.6 / §2.2 --- Methods of Handling Incomplete Data --- p.8 / §2.3 --- Design of the Monte-Carlo Study --- p.12 / §2.4 --- Results of the Monte-Carlo Study --- p.15 / Chapter 3 --- Analysis of the Structural Equation Model with Polytomous Data --- p.24 / §3.1 --- The Model --- p.24 / §3.2 --- Methods of Handling Incomplete Data --- p.25 / §3.3 --- Design of the Monte-Carlo Study --- p.27 / §3.4 --- Results of the Monte-Carlo Study --- p.31 / Chapter 4 --- Summary and Discussion --- p.36 / References --- p.38 / Tables --- p.42 / Figures --- p.78
109

Analyzing and Modeling Low-Cost MEMS IMUs for use in an Inertial Navigation System

Barrett, Justin Michael 30 April 2014 (has links)
Inertial navigation is a relative navigation technique commonly used by autonomous vehicles to determine their linear velocity, position and orientation in three-dimensional space. The basic premise of inertial navigation is that measurements of acceleration and angular velocity from an inertial measurement unit (IMU) are integrated over time to produce estimates of linear velocity, position and orientation. However, this process is a particularly involved one. The raw inertial data must first be properly analyzed and modeled in order to ensure that any inertial navigation system (INS) that uses the inertial data will produce accurate results. This thesis describes the process of analyzing and modeling raw IMU data, as well as how to use the results of that analysis to design an INS. Two separate INS units are designed using two different micro-electro-mechanical system (MEMS) IMUs. To test the effectiveness of each INS, each IMU is rigidly mounted to an unmanned ground vehicle (UGV) and the vehicle is driven through a known test course. The linear velocity, position and orientation estimates produced by each INS are then compared to the true linear velocity, position and orientation of the UGV over time. Final results from these experiments include quantifications of how well each INS was able to estimate the true linear velocity, position and orientation of the UGV in several different navigation scenarios as well as a direct comparison of the performances of the two separate INS units.
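A minimal sketch of the strapdown integration the abstract describes, reduced to the planar case and omitting the sensor bias and noise models a real INS design would include (all signal values below are synthetic):

```python
import numpy as np

def dead_reckon(gyro_z, accel_body, dt):
    """Minimal planar strapdown integration: integrate yaw rate to get
    heading, rotate body-frame accelerations into the world frame, then
    integrate twice for velocity and position. Real INS designs also
    model sensor biases and noise; those terms are omitted for clarity.
    """
    n = len(gyro_z)
    theta = np.zeros(n)
    vel = np.zeros((n, 2))
    pos = np.zeros((n, 2))
    for k in range(1, n):
        theta[k] = theta[k - 1] + gyro_z[k] * dt
        c, s = np.cos(theta[k]), np.sin(theta[k])
        a_world = np.array([[c, -s], [s, c]]) @ accel_body[k]
        vel[k] = vel[k - 1] + a_world * dt
        pos[k] = pos[k - 1] + vel[k] * dt
    return theta, vel, pos

# Synthetic IMU stream: constant forward acceleration with a gentle turn.
dt, n = 0.01, 1000
gyro = np.full(n, 0.05)                  # yaw rate, rad/s
accel = np.tile([0.2, 0.0], (n, 1))      # body-frame acceleration, m/s^2
theta, vel, pos = dead_reckon(gyro, accel, dt)
print("final heading (rad):", round(theta[-1], 3),
      "final position (m):", pos[-1].round(2))
```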
110

Improving Estimates of Seismic Source Parameters Using Surface-Wave Observations: Applications to Earthquakes and Underground Nuclear Explosions

Howe, Michael Joseph January 2019 (has links)
We address questions related to the parameterization of two distinct types of seismic sources: earthquakes and underground nuclear explosions (UNEs). For earthquakes, we focus on the improvement of location parameters, latitude and longitude, using relative measurements within spatial clusters of events. For underground nuclear explosions, we focus on the seismic source model, especially with regard to the generation of surface waves. We develop a procedure to improve relative earthquake location estimates by fitting predicted differential travel times to those measured by cross-correlating Rayleigh- and Love-wave arrivals for multiple earthquakes recorded at common stations. Our procedure can be applied to populations of earthquakes with arbitrary source mechanisms because we mitigate the phase delay that results from surface-wave radiation patterns by making source corrections calculated from the source mechanism solutions published in the Global CMT Catalog. We demonstrate the effectiveness of this relocation procedure by first applying it to two suites of synthetic earthquakes. We then relocate real earthquakes in three separate regions: two ridge-transform systems and one subduction zone. In each scenario, relocated epicenters show a reduction in location uncertainty compared to initial single-event location estimates. We apply the relocation procedure on a larger scale to the seismicity of the Eltanin Fault System, which comprises three large transform faults: the Heezen transform, the Tharp transform, and the Hollister transform. We examine the localization of seismicity in each transform, the locations of earthquakes with atypical source mechanisms, and the spatial extent of seismic rupture and repeating earthquakes in each transform. We show that improved relative location estimates, aligned with bathymetry, greatly sharpen the localization of seismicity on each of the three transforms. We also show how improved location estimates enhance the ability to use earthquake locations to address geophysical questions such as the presence of atypical earthquakes and the nature of seismic rupture along an oceanic transform fault. We investigate the physical basis for the mb-MS discriminant, which relies on differences between amplitudes of body waves and surface waves. We analyze observations for 71 well-recorded underground nuclear tests that were conducted between 1977 and 1989 at the Balapan test site near Semipalatinsk, Kazakhstan in the former Soviet Union. We combine revised mb values and earlier long-period surface-wave results with a new source model, which allows the vertical and horizontal forces of the explosive source to be different. We introduce a scaling factor between vertical and horizontal forces in the explosion model to reconcile differences between body wave and surface wave observations. We find that this parameter is well correlated with the scaled depth of burial for UNEs at this test site. We use the modified source model to estimate the scaled depth of burial for the 71 UNEs considered in this study.
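A sketch of the differential travel-time measurement underlying the relocation procedure: cross-correlating two events' surface-wave records at a common station and reading the delay off the correlation peak. The waveforms below are synthetic pulses, not real data:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def differential_time(trace_a, trace_b, fs):
    """Differential travel time between two events' surface-wave arrivals
    recorded at a common station, from the peak of their cross-correlation.
    A positive lag means trace_b arrives later than trace_a.
    """
    cc = correlate(trace_b, trace_a, mode="full")
    lags = correlation_lags(len(trace_b), len(trace_a), mode="full")
    return lags[np.argmax(cc)] / fs

# Synthetic Rayleigh-wave-like pulses, the second delayed by 2.5 s:
fs = 20.0
t = np.arange(0, 60, 1 / fs)
pulse = lambda t0: (np.exp(-0.5 * ((t - t0) / 2.0) ** 2)
                    * np.sin(2 * np.pi * 0.05 * (t - t0)))
print(f"measured delay: {differential_time(pulse(20.0), pulse(22.5), fs):.2f} s")
```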
