101

On Parameter Estimation Employing Sinewave Fit and Phase Noise Compensation in OFDM Systems

Negusse, Senay January 2015 (has links)
In today’s modern society, we are surrounded by a multitude of digital devices. The number of available digital devices is set to grow even more. As the trend continues, product life-cycle is a major issue in mass production of these devices. Testing and verification is responsible for a significant percentage of the production cost of digital devices. Time-efficient procedures for testing and characterization are therefore sought for. Moreover, the need for flexible and low-cost solutions in the design architecture of radio frequency devices, coupled with the demand for high data rate, has presented a challenge caused by interference from the analog circuit parts. Study of digital signal processing based techniques which would alleviate the effects of the analog impairments is therefore a pertinent subject.

In the first part of this thesis, we address parameter estimation based on waveform fitting. We look at the sinewave model for parameter estimation, which is eventually used to characterize the performance of a device. The underlying goal is to formulate and analyze a set of new parameter estimators which provide a more accurate estimate than well-known estimators. Specifically, we study the maximum-likelihood (ML) SNR estimator employing the three-parameter sine fit and derive alternative estimators based on its statistical distribution. We show that the mean square error (MSE) of the alternative estimators is lower than the MSE of the ML estimator for a small sample size, and a few of the new estimators are very close to the Cramér-Rao lower bound (CRB). Simply put, the number of acquired measurement samples translates to measurement time, implying that the fewer the number of samples required for a given accuracy, the faster the test would be. We also study a sub-sampling approach for the frequency estimation problem in a dual-channel sinewave model with common frequency. A coprime subsampling technique is used, where the signals from both channels are uniformly subsampled with a coprime pair of sparse samplers. Such a subsampling technique is especially beneficial to lower the sampling frequency required in applications with high bandwidth requirements. The CRB based on the coprime subsampled data set is derived, and numerical illustrations are given showing the relation between the cost in performance, based on the mean squared error, and the employed coprime factors for a given measurement time.

In the second part of the thesis, we deal with the problem of phase noise (PHN). First, we look at a scheme in an orthogonal frequency-division multiplexing (OFDM) system where pilot subcarriers are employed for joint PHN compensation, channel estimation and symbol detection. We investigate a method where the PHN statistics are approximated by a finite number of vectors to design a PHN codebook. A method of selecting the element in the codebook that is closest to the current PHN realization, together with the corresponding channel estimate, is discussed. We present simulation results showing improved performance compared to state-of-the-art techniques. We also look at a sequential Monte-Carlo based method for combined channel impulse response and PHN tracking employing known OFDM symbols. Such a technique allows time domain compensation of PHN such that simultaneous cancellation of the common phase error and reduction of the inter-carrier interference occurs. / QC 20150529
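The three-parameter sine fit referenced in this abstract is the standard least-squares fit with known frequency from IEEE Std 1057/1241; the thesis's alternative SNR estimators are not reproduced here, but a minimal sketch of the underlying fit and the plain fit-based SNR estimate might look like this (all signal values hypothetical):

```python
import numpy as np

def three_param_sine_fit(x, t, omega):
    """Least-squares fit of x[n] ~ A*cos(omega*t) + B*sin(omega*t) + C,
    with the angular frequency omega assumed known (IEEE Std 1057 style)."""
    # Regression matrix for the three linear parameters A, B, C.
    D = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(D, x, rcond=None)
    residual = x - D @ np.array([A, B, C])
    # Simple SNR estimate: fitted sinewave power over residual power.
    snr = 0.5 * (A**2 + B**2) / np.var(residual)
    return A, B, C, snr

# Hypothetical example: 100 samples of a noisy 50 Hz tone at 1 kHz sampling.
rng = np.random.default_rng(0)
t = np.arange(100) / 1e3
omega = 2 * np.pi * 50
x = 1.5 * np.cos(omega * t + 0.3) + 0.1 * rng.standard_normal(t.size)
print(three_param_sine_fit(x, t, omega))
```

The small-sample behavior of SNR estimators built on this fit is exactly what the thesis analyzes; the sketch above only shows the common starting point.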
102

Lab Scale Hydraulic Parameter Estimation

Hartz, Andrew Scott January 2011 (has links)
Hydraulic tomography has been tested at the field scale, at the lab scale and in synthetic experiments. Recently, Illman and Berg have conducted studies at the lab scale. Using their data, hydraulic tomography can be compared to homogeneous anisotropic solutions using one pumping well or multiple pumping wells. It has been found that hydraulic tomography outperforms homogeneous methods at predicting hydraulic head for validation pumping experiments. It has also been shown in this study that homogeneous anisotropic tests exhibit scenario-dependent behavior. Additional tests performed to further validate these conclusions include spatial moment analysis, response surface analysis, and synthetic hydraulic tomography; these show consistent results, providing additional validation of the findings. An additional study examining the principle of reciprocity proved inconclusive.
103

A Mathematical Model for the Devolatilization of EPDM Rubber in a Series of Steam Stripping Vessels

Francoeur, Angelica 24 October 2012 (has links)
A steady-state mathematical model for the stripping section of an industrial EPDM rubber production process was developed for a three-tank process and two four-tank processes. The experiments conducted to determine model parameters, such as the equivalent radius of EPDM particles and the solubility and diffusivity parameters for hexane and ENB in EPDM polymer, are described. A single-particle, multiple-tank model was developed first, followed by a process model that accounts for the residence-time distribution of crumb particles. Plant data, as well as input data from an existing steady-state model, were used to determine estimates for the tuning parameters used in the multiple-particle, multiple-tank model. Using plant data to assess the model’s predictive accuracy, the resulting three-tank and four-tank process B models provide accurate predictions, with typical errors of 0.35 parts per hundred resin (phr) and 0.12 phr, respectively. The four-tank process A model provides less accurate predictions of residual crumb concentrations in the second tank and has an overall typical error of 1.05 phr. Additional plant data from the three- and four-tank processes would improve the estimability of the parameter values in the parameter ranking and estimation steps and thus yield increased model predictive accuracy. / Thesis (Master, Chemical Engineering) -- Queen's University, 2012-10-23 21:06:05.509
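The thesis's particle model is not reproduced here, but single-particle devolatilization models of this kind are commonly built on the classical Fickian solution for diffusion out of a sphere (Crank); a sketch, with hypothetical values for the diffusivity and equivalent particle radius:

```python
import numpy as np

def residual_fraction(D, R, t, n_terms=50):
    """Fraction of solvent remaining in a sphere of radius R after time t,
    for diffusivity D (Crank's series solution, assuming uniform initial
    concentration and zero solvent concentration at the surface)."""
    n = np.arange(1, n_terms + 1)
    return (6 / np.pi**2) * np.sum(np.exp(-(n * np.pi / R) ** 2 * D * t) / n**2)

# Hypothetical numbers: D = 1e-11 m^2/s, 1 mm equivalent radius, 10 minutes.
print(residual_fraction(1e-11, 1e-3, 600))
```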
104

Design and implementation of a special protection scheme to prevent voltage collapse

2012 March 1900 (has links)
The drive to make more profit for owners, deregulation of the utility market, and the need to obtain permission from regulatory agencies have forced electric power utilities to operate their systems close to the security limits of their generation, transmission and distribution systems. As a result, power systems are now exposed to substantial risk of voltage collapse. This phenomenon is complex and localized in nature but has widespread adverse consequences. The worst scenario of voltage collapse is partial or total outage of the power system, resulting in loss of industrial productivity for the country and major financial loss to the utility. On-line monitoring of voltage stability is therefore becoming a vital practice that is being increasingly adopted by electric power utilities. The phenomenon of voltage collapse has been studied for quite some time, and techniques for identifying voltage collapse situations have been suggested. Most suggested techniques examine steady-state and dynamic behaviors of the power system in off-line modes. Very few on-line protection and control schemes have been proposed and implemented. In this thesis, a new technique for preventing voltage collapse is presented. The developed technique uses a subset of measurements from the local bus as well as neighbouring buses and considers not only the present state of the system but also future load and topology changes. The technique improves the robustness of local-measurement-based methods and can be implemented in on-line as well as off-line modes. It monitors voltages and currents and calculates from those measurements the time to voltage collapse. As the system approaches voltage collapse, control actions are implemented to relieve the system and prevent major disturbances. The developed technique was tested by simulating a variety of operating states and generating voltage collapse situations on the IEEE 30-Bus test system. Some results from the simulation studies are reported in this thesis. The results obtained from the simulations indicate that the proposed technique is able to estimate the time to voltage collapse and can implement control actions as well as alert operators.
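The thesis's specific time-to-collapse estimator is not described in enough detail to reproduce; as an illustration of what purely local voltage and current phasors can yield, the classical Thevenin impedance-matching indicator (collapse is approached as the apparent load impedance falls toward the Thevenin source impedance) can be sketched as follows. This is a well-known alternative local method, not the scheme developed in the thesis:

```python
import numpy as np

def thevenin_from_two_snapshots(V1, I1, V2, I2):
    """Estimate the Thevenin equivalent (E, Z) seen from a load bus using two
    complex phasor (V, I) measurements taken across a load change (I1 != I2):
    V = E - Z*I  =>  solve a 2x2 complex linear system."""
    A = np.array([[1.0, -I1], [1.0, -I2]], dtype=complex)
    b = np.array([V1, V2], dtype=complex)
    E, Z = np.linalg.solve(A, b)
    return E, Z

def stability_margin(V, I, Z_thev):
    """Impedance-matching indicator: the maximum-power-transfer limit is
    reached when |Z_load| = |Z_thev|; margin > 1 means still stable."""
    Z_load = V / I
    return abs(Z_load) / abs(Z_thev)
```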
105

Crop model parameter estimation and sensitivity analysis for large scale data using supercomputers

Lamsal, Abhishes January 1900 (has links)
Doctor of Philosophy / Department of Agronomy / Stephen M. Welch / Global crop production must be doubled by 2050 to feed 9 billion people. Novel crop improvement methods and management strategies are the sine qua non for achieving this goal. This requires reliable quantitative methods for predicting the behavior of crop cultivars in novel, time-varying environments. In the last century, two different mathematical prediction approaches emerged: (1) quantitative genetics (QG) and (2) ecophysiological crop modeling (ECM). These methods are completely disjoint in terms of both their mathematics and their strengths and weaknesses. However, in the period from 1996 to 2006 a method for melding them emerged to support breeding programs. The method involves two steps: (1) exploiting ECM’s to describe the intricate, dynamic and environmentally responsive biological mechanisms determining crop growth and development on daily/hourly time scales; (2) using QG to link genetic markers to the values of ECM constants (called genotype-specific parameters, GSP’s) that encode the responses of different varieties to the environment. This can require huge amounts of computation because ECM’s have many GSP’s as well as site-specific properties (SSP’s, e.g. soil water holding capacity). Moreover, one cannot employ QG methods unless the GSP’s from hundreds to thousands of lines are known. Thus, the overall objective of this study is to identify better ways to reduce the computational burden without sacrificing ECM predictability. The study has three parts: (1) using the extended Fourier Amplitude Sensitivity Test (eFAST) to globally identify parameters of the CERES-Sorghum model that require accurate estimation under wet and dry environments; (2) developing a novel estimation method (Holographic Genetic Algorithm, HGA) applicable to both GSP and SSP estimation and testing it with the CROPGRO-Soybean model using 182 soybean lines planted in 352 site-years (7,426 yield observations); and (3) examining the behavior under estimation of the anthesis date prediction component of the CERES-Maize model. The latter study used 5,266 maize Nested Association Mapping lines and a total of 49,491 anthesis date observations from 11 plantings. Three major problems were discovered that challenge the ability to link QG and ECM’s: 1) model expressibility, 2) parameter equifinality, and 3) parameter instability. Poor expressibility is the structural inability of a model to accurately predict an observation. It can only be solved by model changes. Parameter equifinality occurs when multiple parameter values produce equivalent model predictions. This can be addressed by using eFAST as a guide to reduce the number of interacting parameters and by collecting additional data types. When parameters are unstable, it is impossible to know what values to use in environments other than those used in calibration. All of the methods that will have to be applied to solve these problems will expand the amount of data used with ECM’s. This will require better optimization methods to estimate model parameters efficiently. The HGA developed in this study will be a good foundation to build on. Thus, future research should be directed towards solving these issues to enable ECM’s to be used as tools to support breeders, farmers, and researchers addressing global food security issues.
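The Holographic Genetic Algorithm is the thesis's own contribution and is not reproduced here; as a point of reference, a minimal generic real-coded GA for estimating GSP's by minimizing a prediction-error loss might look like this (population size, operators and rates are all illustrative choices):

```python
import numpy as np

def genetic_estimate(loss, bounds, pop=50, gens=100, rng=None):
    """Minimal real-coded genetic algorithm for parameter estimation.
    loss:   maps a parameter vector to a prediction error (e.g. yield RMSE)
    bounds: (n_params, 2) array of lower/upper parameter limits."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = rng.uniform(lo, hi, size=(pop, len(lo)))       # initial population
    for _ in range(gens):
        fit = np.array([loss(p) for p in P])
        elite = P[np.argsort(fit)][: pop // 2]          # truncation selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        alpha = rng.random((pop, len(lo)))
        P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        P += rng.normal(0, 0.01 * (hi - lo), P.shape)   # Gaussian mutation
        P = np.clip(P, lo, hi)
    fit = np.array([loss(p) for p in P])
    return P[np.argmin(fit)]
```

In the GSP-estimation setting, `loss` would wrap a full ECM run over all site-years for one line, which is why the thesis turns to supercomputers and more efficient optimizers.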
106

Assessing the Roof Structure of the Breeding Barn Using Truss Member Resonant Frequencies

Maille, Nathan James 17 June 2008 (has links)
The motivation for this research was to apply methods of vibration testing in order to determine axial loads in the pin-ended truss members of the Breeding Barn. This method of vibration testing was necessary in order to determine the in-situ axial loads of the truss members in the barn. Other common methods, such as strain gauges, were not useful for this application, because strain gauges can only detect changes in strain and therefore only changes in load. However, due to the size and weight of the roof of the Breeding Barn, significant axial loads are produced in the truss members. This in-situ axial load due to the dead load of the roof is a significant portion of any additional loading and cannot be ignored. The ultimate goal of determining the axial loads in the truss members was to develop a model for the roof structure of the barn that accurately predicts axial loads in the truss members over a range of loading conditions. Developing such a model was important in order to make a structural assessment of the Breeding Barn's roof structure. In order to determine the axial loads in the truss members, acceleration time histories of the individual truss members were collected using wireless accelerometers provided by MicroStrain of Williston, Vermont. Using the Fourier transform, power spectral densities were produced from the raw acceleration time histories. It was from these plots that the resonant frequencies of the truss members were determined. Knowing the resonant frequencies for a member and the beam vibration equation developed for pin-ended members, the axial load of each truss member was calculated. This process was done for each wrought iron truss member for three separate loading conditions. The purpose of this was to provide enough experimental data so that it could be compared with predictions of several proposed frame models of the barn's roof structure. Ultimately, a model was chosen that best predicted the axial loads in the truss members based upon the three loading combinations tested. Using this frame model, an assessment of the barn's roof structure could be made.
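For a pinned-pinned prismatic member carrying axial force P, the standard beam vibration relation is omega_n^2 = (n*pi/L)^4 (EI/rho_A) + (n*pi/L)^2 (P/rho_A), which can be inverted for P given a measured resonant frequency; a sketch with hypothetical section properties for a wrought-iron member:

```python
import numpy as np

def axial_load_from_frequency(f_n, n, L, E, I, rho_A):
    """Invert the pinned-pinned beam vibration relation
        omega_n^2 = (n*pi/L)^4 * (E*I/rho_A) + (n*pi/L)^2 * (P/rho_A)
    for the axial force P (tension positive).
    f_n: measured resonant frequency of mode n [Hz]
    L: member length [m]; E*I: bending stiffness [N m^2]
    rho_A: mass per unit length [kg/m]"""
    omega = 2 * np.pi * f_n
    k = n * np.pi / L
    return rho_A * omega**2 / k**2 - E * I * k**2

# Hypothetical example: 5 m rod, 30 mm diameter, first mode measured at 12 Hz.
E = 193e9                          # Pa, assumed wrought-iron modulus
d = 0.03
I = np.pi * d**4 / 64
rho_A = 7700 * np.pi * d**2 / 4    # kg/m, assumed density 7700 kg/m^3
print(axial_load_from_frequency(12.0, 1, 5.0, E, I, rho_A))   # ~75 kN tension
```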
107

Calibration of discrete element modelling parameters for bulk materials handling applications

Guya, Solomon Ramas January 2018 (has links)
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Engineering to the Faculty of Engineering and the Built Environment, School of Mechanical, Industrial and Aeronautical Engineering, University of the Witwatersrand, Johannesburg, 2018 / The Discrete Element Method (DEM) models and simulates the flow of granular material through confining geometry. The method has the potential to significantly reduce the costs associated with the design and operation of bulk materials handling equipment. The challenge, however, is the difficulty of determining the required input parameters. Previous calibration approaches involved direct measurements and random parameter search. The aim of this research was to develop a sequential DEM calibration framework, identify appropriate calibration experiments and validate the framework on real flows in a laboratory-scale silo and chute. A systematic and sequential DEM calibration framework was developed. The framework places the DEM input parameters into three categories: directly measured parameters, parameters obtained from the literature, and parameters calibrated by linking physical experiments with DEM simulations. The directly measured parameters comprised the coefficients of restitution and the particle-to-wall-surface coefficient of rolling friction. The literature-obtained parameters were the Young's modulus and Poisson's ratio. The calibrated parameters comprised the particle-to-wall-surface coefficient of sliding friction, calibrated from the wall friction angle; the particle-to-particle friction coefficients (sliding and rolling), calibrated from two independent angles of repose; particle density, calibrated from bulk density; and the adhesion and cohesion energy densities. The framework was then tested using iron ore with a particle size distribution between +2 mm and -4.75 mm in the LIGGGHTS DEM software. Validation of the obtained input parameter values in the silo and chute showed very good qualitative comparisons between the measured and simulated flows. Quantitative predictions of flow rate were found to be particularly sensitive to variations in the particle-to-particle coefficient of sliding friction. It was concluded that, due to their inherent limitations, angle of repose tests are not totally reliable for calibrating the particle-to-particle coefficient of sliding friction. Sensitivity tests showed that in the quasi-static flow regime only the frictional parameters were dominant, while both the frictional and collisional parameters were dominant in the dynamic flow regime. These results are expected to lay a solid foundation for further research in systematic DEM calibration and greatly increase the effectiveness of DEM models in bulk materials handling applications. / XL2019
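As an illustration of the "linking physical experiments with DEM simulations" step, one common pattern is a one-dimensional search on a friction coefficient until the simulated angle of repose matches the measured one. In this sketch, `run_sim` is a placeholder for a scripted DEM run (e.g. a LIGGGHTS case driven from the shell), not a real API, and monotonicity of the response is an assumption:

```python
def calibrate_friction(target_angle, run_sim, lo=0.1, hi=1.0, tol=0.5):
    """Bisection on the particle-to-particle sliding friction coefficient so
    that the simulated angle of repose matches the measured one.
    run_sim(friction=...) -> simulated angle of repose [deg] (placeholder);
    assumes the angle increases monotonically with friction."""
    while hi - lo > 1e-3:
        mid = 0.5 * (lo + hi)
        angle = run_sim(friction=mid)
        if abs(angle - target_angle) < tol:
            return mid
        if angle < target_angle:
            lo = mid          # more friction -> steeper pile
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The dissertation's conclusion that angle-of-repose tests alone cannot pin down this coefficient is one reason its framework calibrates from two independent angles of repose rather than a single loop like this.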
108

Various Approaches on Parameter Estimation in Mixture and Non-Mixture Cure Models

Unknown Date (has links)
Analyzing life-time data with long-term survivors is an important topic in medical applications. Cure models are usually used to analyze survival data with a proportion of cured subjects or long-term survivors. In order to include the proportion of cured subjects, mixture and non-mixture cure models are considered. In this dissertation, we utilize both maximum likelihood and Bayesian methods to estimate model parameters. Simulation studies are carried out to verify the finite sample performance of the estimation methods. Real data analyses are reported to illustrate the goodness-of-fit via Fréchet, Weibull and Exponentiated Exponential susceptible distributions. Among the three parametric susceptible distributions, Fréchet is the most promising. Next, we extend the non-mixture cure model to include a change point in a covariate for right censored data. The smoothed likelihood approach is used to address the problem of a log-likelihood function which is not differentiable with respect to the change point. The simulation study is based on the non-mixture change point cure model with an exponential distribution for the susceptible subjects. The simulation results revealed a convincing performance of the proposed method of estimation. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
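For reference, the standard forms of the two model families (the dissertation's change-point extension is not shown) are:

```latex
% Mixture cure model: a cured fraction \pi never experiences the event;
% susceptibles have survival function S_0(t).
S_{\text{mix}}(t) = \pi + (1 - \pi)\, S_0(t)

% Non-mixture (promotion-time) cure model: the cure fraction
% \pi = e^{-\theta} is the limit of S(t) as t \to \infty.
S_{\text{nm}}(t) = \exp\{-\theta\, F_0(t)\} = \pi^{F_0(t)}
```

Here F_0 = 1 - S_0 is the distribution function of the susceptible subjects; in the dissertation S_0 is taken to be Fréchet, Weibull or Exponentiated Exponential.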
109

A Monte-Carlo comparison of methods in analyzing structural equation models with incomplete data.

January 1991 (has links)
by Siu-fung Chan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. / Bibliography: leaves 38-41. / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Analysis of the Structural Equation Model with Continuous Data --- p.6 / Chapter §2.1 --- The Model --- p.6 / Chapter §2.2 --- Methods of Handling Incomplete Data --- p.8 / Chapter §2.3 --- Design of the Monte-Carlo Study --- p.12 / Chapter §2.4 --- Results of the Monte-Carlo Study --- p.15 / Chapter Chapter 3 --- Analysis of the Structural Equation Model with Polytomous Data --- p.24 / Chapter §3.1 --- The Model --- p.24 / Chapter §3.2 --- Methods of Handling Incomplete Data --- p.25 / Chapter §3.3 --- Design of the Monte-Carlo Study --- p.27 / Chapter §3.4 --- Results of the Monte-Carlo Study --- p.31 / Chapter Chapter 4 --- Summary and Discussion --- p.36 / References --- p.38 / Tables --- p.42 / Figures --- p.78
110

Analyzing and Modeling Low-Cost MEMS IMUs for use in an Inertial Navigation System

Barrett, Justin Michael 30 April 2014 (has links)
Inertial navigation is a relative navigation technique commonly used by autonomous vehicles to determine their linear velocity, position and orientation in three-dimensional space. The basic premise of inertial navigation is that measurements of acceleration and angular velocity from an inertial measurement unit (IMU) are integrated over time to produce estimates of linear velocity, position and orientation. However, this process is a particularly involved one. The raw inertial data must first be properly analyzed and modeled in order to ensure that any inertial navigation system (INS) that uses the inertial data will produce accurate results. This thesis describes the process of analyzing and modeling raw IMU data, as well as how to use the results of that analysis to design an INS. Two separate INS units are designed using two different micro-electro-mechanical system (MEMS) IMUs. To test the effectiveness of each INS, each IMU is rigidly mounted to an unmanned ground vehicle (UGV) and the vehicle is driven through a known test course. The linear velocity, position and orientation estimates produced by each INS are then compared to the true linear velocity, position and orientation of the UGV over time. Final results from these experiments include quantifications of how well each INS was able to estimate the true linear velocity, position and orientation of the UGV in several different navigation scenarios as well as a direct comparison of the performances of the two separate INS units.
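A minimal sketch of the integration step described above, assuming bias-corrected body-frame measurements and a first-order orientation update (real INS designs, including those in this thesis, layer sensor-error models on top of this):

```python
import numpy as np

def integrate_imu(accels, gyros, dt, p0, v0, R0):
    """Naive strapdown integration of IMU samples.
    accels: (N,3) specific force in body frame [m/s^2], bias-corrected
    gyros:  (N,3) angular velocity in body frame [rad/s]
    R0: initial body-to-world rotation matrix (z-up world frame)."""
    g = np.array([0.0, 0.0, -9.81])          # world-frame gravity
    p, v, R = p0.copy(), v0.copy(), R0.copy()
    for a_b, w_b in zip(accels, gyros):
        # Orientation update: first-order integration of R' = R [w]_x.
        wx = np.array([[0.0,    -w_b[2],  w_b[1]],
                       [w_b[2],  0.0,    -w_b[0]],
                       [-w_b[1], w_b[0],  0.0]])
        R = R @ (np.eye(3) + wx * dt)
        # Rotate specific force to world frame, add gravity back, integrate.
        a_w = R @ a_b + g
        v = v + a_w * dt
        p = p + v * dt
    return p, v, R
```

Because raw MEMS measurements are noisy and biased, errors in this double integration grow quickly with time, which is why the analysis and modeling of the raw IMU data described in this abstract matters so much.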
