111

Post stratified estimation using a known auxiliary variable

Bedier, Mostafa Abdellatif 18 September 1989
Post stratification is considered desirable in sample surveys for two reasons: it reduces the mean squared error when averaged over all possible samples, and it reduces the conditional bias when conditioned on stratum sample sizes. The problem studied in this thesis is post stratified estimation of a finite population mean when there is a known auxiliary variable for each population unit. The primary direction of the thesis follows the lines of Holt and Smith (1979). A method is given for using the auxiliary variable to select the stratum boundaries and, using this approach to determine strata, to compare post stratified estimates with the self-weighting estimates from the analytical and empirical points of view. Estimates studied are: the post stratified mean, the post stratified combined ratio, and the post stratified separate ratio. The thesis contains simulation results that explore the distributions of the self-weighting estimates and the post stratified estimates using conditional and unconditional inferences. The coverage properties of the confidence intervals are compared, and the design effect, i.e. the ratio of the variance of the self-weighting estimate to the variance of the post stratified estimate, is calculated from the samples and its distribution explored by the simulation study for several real and artificial populations. The confidence intervals of post stratified estimates using conditional variances had good coverage properties for each sample configuration used, and hence the correct coverage property over all possible samples, provided that the Central Limit Theorem applied. The comparisons indicated that post stratification is an effective approach when the boundaries are obtained by proper stratification using an auxiliary variable. Moreover, it is more efficient than estimation based on simple random sampling in reducing the mean squared error. Finally, there is strong evidence that the post stratified estimates are robust against poorly distributed samples, whereas empirical investigations suggested that the self-weighting estimates are very poor when the samples are unbalanced. / Graduation date: 1990
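The design effect defined in the abstract (variance of the self-weighting estimate over variance of the post stratified estimate) is straightforward to reproduce by simulation. The sketch below assumes a hypothetical population with a gamma-distributed auxiliary variable, stratum boundaries at its quartiles, and a sample large enough that every stratum is represented; none of these specifics come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: auxiliary variable x correlated with study variable y.
N = 10_000
x = rng.gamma(shape=2.0, scale=50.0, size=N)
y = 2.0 * x + rng.normal(0.0, 40.0, size=N)

# Stratum boundaries chosen from the known auxiliary variable (here: quartiles of x).
boundaries = np.quantile(x, [0.25, 0.5, 0.75])
stratum = np.digitize(x, boundaries)      # stratum label 0..3 for every unit
W = np.bincount(stratum) / N              # known stratum weights N_h / N

def post_stratified_mean(sample_y, sample_stratum):
    """Weight each stratum's sample mean by its known population share."""
    means = np.array([sample_y[sample_stratum == h].mean() for h in range(len(W))])
    return float(np.sum(W * means))

# Compare the self-weighting (plain sample) mean with the post stratified mean
# over repeated simple random samples, in the spirit of the simulation study.
n, reps = 200, 2000
sw_est, ps_est = [], []
for _ in range(reps):
    idx = rng.choice(N, size=n, replace=False)
    sw_est.append(y[idx].mean())
    ps_est.append(post_stratified_mean(y[idx], stratum[idx]))

deff = np.var(sw_est) / np.var(ps_est)    # design effect as defined in the abstract
print(f"true mean {y.mean():.2f}, design effect {deff:.2f}")
```

With an auxiliary variable this strongly correlated with y, the design effect comes out well above 1, i.e. post stratification reduces the variance relative to the self-weighting estimate.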
112

Estimation of Survival with a Combination of Prevalent and Incident Cases in the Presence of Length Bias

Makvandi-Nejad, Ewa 24 September 2012
In studying the natural history of a disease, incident studies provide the best quality estimates; in contrast, prevalent studies introduce a sampling bias which, if the onset time of the disease follows a stationary Poisson process, is called length bias. When both types of data are available, combining the samples under the assumption that failure times in the incident and prevalent cohorts come from the same distribution function could improve the estimation process relative to a prevalent sample alone. We verify this assumption using a Smirnov-type test and construct a likelihood function from the combined sample to estimate the survival parametrically through a maximum likelihood approach. Finally, we use Accelerated Failure Time models to compare the effect of covariates on survival in the incident, prevalent, and combined populations. Properties of the proposed test and the combined estimator are assessed using simulations, and illustrated with data from the Canadian Study of Health and Aging.
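As a concrete illustration of the combined-likelihood idea, the sketch below assumes exponentially distributed failure times with mean θ, in which case the length-biased density t·f(t)/θ observed in the prevalent cohort is Gamma(2, θ). The thesis's actual parametric model, truncation handling, and Smirnov-type test are not reproduced here; this only shows how one likelihood can pool the two cohorts.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
theta_true = 5.0   # hypothetical mean survival

incident = rng.exponential(theta_true, size=150)
# Length-biased draws: density t*f(t)/theta, which for an exponential is Gamma(2, theta).
prevalent = rng.gamma(shape=2.0, scale=theta_true, size=150)

def neg_log_lik(theta):
    # Incident cohort contributes Exp(theta) terms, prevalent cohort Gamma(2, theta) terms.
    ll_inc = -len(incident) * np.log(theta) - incident.sum() / theta
    ll_prev = (np.log(prevalent) - 2.0 * np.log(theta) - prevalent / theta).sum()
    return -(ll_inc + ll_prev)

res = minimize_scalar(neg_log_lik, bounds=(0.1, 50.0), method="bounded")
print(f"combined MLE of mean survival: {res.x:.2f} (true {theta_true})")
```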
113

Translating parameter estimation problems from EASY-FIT to SOCS

Donaldson, Matthew W 29 April 2008
Mathematical models often involve unknown parameters that must be fit to experimental data. These so-called parameter estimation problems have many applications that may involve differential equations, optimization, and control theory. EASY-FIT and SOCS are two software packages that solve parameter estimation problems. In this thesis, we discuss the design and implementation of a source-to-source translator called EFtoSOCS used to translate EASY-FIT input into SOCS input. This makes it possible to test SOCS on a large number of parameter estimation problems available in the EASY-FIT problem database that vary both in size and difficulty.

Parameter estimation problems typically have many locally optimal solutions, and the solution obtained often depends critically on the initial guess. A 3-stage approach is followed to enhance the convergence of solutions in SOCS, with each stage designed to use an initial guess that is progressively closer to the optimal solution found by EASY-FIT. Using this approach we run EFtoSOCS on all 691 translatable problems from the EASY-FIT database and find that all but 7 produce converged solutions in SOCS. We describe the reasons that SOCS was not able to solve these problems, compare the solutions found by SOCS and EASY-FIT, and suggest possible improvements to both EFtoSOCS and SOCS.
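The staged warm-starting strategy can be illustrated generically. The sketch below uses a hypothetical exponential-decay fitting problem and SciPy's least_squares in place of SOCS, with a known reference optimum standing in for the EASY-FIT solution; EFtoSOCS's actual translation rules and stage definitions are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical parameter estimation problem: fit y = p0 * exp(-p1 * t) to data.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 40)
p_ref = np.array([2.5, 1.3])          # stands in for the EASY-FIT optimum
y = p_ref[0] * np.exp(-p_ref[1] * t) + rng.normal(0.0, 0.02, t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - y

# Three stages, each starting closer to the reference optimum, mirroring the
# convergence-enhancement strategy described in the abstract.
stages = [np.ones(2), 0.5 * (np.ones(2) + p_ref), p_ref.copy()]
for k, p0 in enumerate(stages, start=1):
    fit = least_squares(residuals, p0)
    print(f"stage {k}: start {p0.round(2)} -> estimate {fit.x.round(3)}, cost {fit.cost:.2e}")
```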
114

Dynamic payload estimation in four wheel drive loaders

Hindman, Jahmy J. 22 December 2008
Knowledge of the mass of the manipulated load (i.e. payload) in off-highway machines is useful information for a variety of reasons, ranging from machine stability to compliance with transportation regulations. This knowledge is difficult to ascertain, however. This dissertation delineates the motivations for, and difficulties in, the development of a dynamic payload weighing algorithm, and describes how the new type of algorithm was developed and progressively overcame some of these difficulties.

The payload mass estimate depends upon many different variables within the off-highway vehicle. These include static variability, such as machining tolerances of the revolute joints in the linkage and the mass of the linkage members, as well as dynamic variability, such as whole-machine accelerations, hydraulic cylinder friction, and pin joint friction. Some initial effort was undertaken to understand the static variables by studying the effects of machining tolerances on the working linkage kinematics in a four-wheel-drive loader. This effort showed that if the linkage members were machined within the tolerances prescribed by the design of the linkage components, the tolerance stack-up of the machining variability had very little impact on overall linkage kinematics.

Once some of the static dependent variables were understood in greater detail, significant effort was undertaken to understand and compensate for the dynamic dependent variables of the estimation problem. The first algorithm took a simple approach, using the kinematic linkage model coupled with hydraulic cylinder pressure information to calculate a payload estimate directly. This algorithm did not account for many of the aforementioned dynamic variables (joint friction, machine acceleration, etc.) but was computationally expedient. It produced payload estimates with error far greater than the 1% full scale value being targeted, so a second algorithm was needed. The second algorithm was built on what was learned about the limitations of the first: a suitable method of compensating for the non-linear dependent dynamic variables was needed, and an artificial neural network approach was taken. The second algorithm utilised an artificial neural network to capture the kinematic linkage characteristics and all other dynamic dependent variable behaviour, and to estimate the payload based upon the linkage position and hydraulic cylinder pressures. This algorithm was trained using empirically collected data and then subjected to actual use in the field. This experiment showed that the dynamic complexity of the estimation problem was too large for a small (and computationally feasible) artificial neural network to characterize with an error below the 1% full scale requirement.

A third algorithm was constructed to take advantage of the kinematic model developed and to utilise the artificial neural network's ability to perform nonlinear mapping. The third algorithm uses the kinematic model output as an input to the artificial neural network. This change from the second algorithm keeps the network from having to characterize the linkage kinematics and only requires it to compensate for the dependent dynamic variables excluded by the kinematic linkage model. This algorithm showed significant improvement over the previous two but still did not meet the 1% full scale requirement. The promise it showed, however, was convincing enough that further effort was spent in refining it to improve the accuracy.

The fourth algorithm improved on the third by adding additional inputs to the artificial neural network that allowed the network to better compensate for the variables present in the problem. This effort produced an algorithm that, when subjected to actual field use, produced results very near the 1% full scale accuracy requirement. This algorithm could be improved upon slightly with better input data filtering and possibly additional network inputs.

The final algorithm produced results very near the desired accuracy. It was also novel in that the artificial neural network was not used solely as the means to characterize the problem for estimation purposes. Instead, much of the responsibility for the mathematical characterization of the problem was placed upon a kinematic linkage model that fed its own payload estimate into the neural network, where the estimate was further refined during network training with calibration data and additional inputs. This method of nonlinear state estimation (i.e. utilising a neural network to compensate for nonlinear effects in conjunction with a first-principles model) has not been seen previously in the literature.
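The hybrid structure of the final algorithm (a first-principles estimate refined by a neural network) can be sketched as follows. All signals, the toy "physics", and the network size are illustrative assumptions, and scikit-learn's MLPRegressor stands in for whatever network the dissertation used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical calibration data: cylinder pressure, boom angle, and whole-machine
# acceleration, with a synthetic "true" payload. Names and physics are illustrative.
n = 2000
pressure = rng.uniform(5e6, 2e7, n)      # Pa
angle = rng.uniform(-0.3, 1.0, n)        # rad
accel = rng.normal(0.0, 0.5, n)          # m/s^2
payload = 1.1e-4 * pressure * np.cos(angle) - 120.0 * accel + rng.normal(0.0, 20.0, n)

# First-principles (kinematic/pressure) estimate: right structure, but blind to the
# dynamic effects (here, acceleration) -- analogous to the first algorithm.
kinematic_est = 1.1e-4 * pressure * np.cos(angle)

# As in the final algorithm, the network does not learn the kinematics; it receives
# the model's own estimate plus dynamic signals and learns only the correction.
X = np.column_stack([kinematic_est, accel])
correction = payload - kinematic_est
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
net.fit(X[:1500], correction[:1500])

refined = kinematic_est[1500:] + net.predict(X[1500:])
print(f"residual std, kinematic only: {np.std(payload[1500:] - kinematic_est[1500:]):.1f}")
print(f"residual std, hybrid:         {np.std(payload[1500:] - refined):.1f}")
```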
115

Positron emission tomography (PET) image reconstruction by density estimation

Pawlak, Barbara 17 September 2007
PET (positron emission tomography) scans are still in the experimental phase as one of the newest breast cancer diagnostic techniques, and PET is becoming the new standard in neurology, oncology and cardiology. PET, like other nuclear medicine diagnostic and treatment techniques, involves the use of radiation. Because of the negative impact of radioactivity on our bodies, the radiation doses in PET should be small. The existing computing algorithms for calculating PET images can be divided into two broad categories: analytical and iterative methods. In the analytical approach the relation between the picture and its projections is expressed by a set of integral equations which are then solved analytically. The filtered backprojection (FBP) algorithm is a numerical approximation of this analytical solution. Iterative approaches use deterministic (ART = Algebraic Reconstruction Technique) or stochastic (EM = Expectation Maximization) algorithms. My proposed kernel density estimation (KDE) algorithm also falls into the category of iterative methods. However, in this approach each coincidence event is considered individually. The estimated location of the annihilation event that caused each coincidence event is based on the locations previously assigned to events processed earlier. To accomplish this, we construct a probability distribution along each coincidence line, generated from previous annihilation points by density estimation. It is shown that this density estimation approach to PET can reconstruct an image of an existing tumor using significantly less data than the standard CT algorithms, such as FBP. Therefore, it might be a very promising technique, allowing a reduced radiation dose for patients while retaining or improving image quality. / October 2007
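A minimal sketch of the event-by-event placement step described above: a kernel density estimate built from previously assigned annihilation points is evaluated along a new line of response, and the event's location is drawn from the resulting distribution. The 2-D geometry, coordinates, and scipy.stats.gaussian_kde default bandwidth are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Previously assigned annihilation points (2-D sketch of the iterative scheme).
prev_points = rng.normal(loc=[10.0, 5.0], scale=2.0, size=(500, 2))
kde = gaussian_kde(prev_points.T)

def place_event(p_a, p_b, kde, n_grid=200):
    """Assign a location to one coincidence event: evaluate the density built
    from earlier events along the line of response and sample from it."""
    s = np.linspace(0.0, 1.0, n_grid)
    line = p_a[None, :] + s[:, None] * (p_b - p_a)[None, :]
    w = kde(line.T)
    w = w / w.sum()                   # probability distribution along the line
    return line[rng.choice(n_grid, p=w)]

# One line of response between two detector positions (illustrative coordinates).
loc = place_event(np.array([0.0, 0.0]), np.array([20.0, 10.0]), kde)
print("assigned annihilation location:", loc.round(2))
```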
116

Error estimates for finite element approximations of effective elastic properties of periodic structures

Pettersson, Klas January 2010
Techniques for a posteriori error estimation for finite element approximations of an elliptic partial differential equation are studied. This extends previous work on localized error control in finite element methods for linear elasticity. The methods are then applied to the problem of homogenization of periodic structures. In particular, error estimates for the effective elastic properties are obtained. The usefulness of these estimates is twofold. First, adaptive methods using mesh refinements based on the estimates can be constructed. Secondly, one of the estimates can give a reasonable measure of the magnitude of the error. Numerical examples of this are given.
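For context, the effective coefficients targeted by these error estimates have the standard periodic-homogenization form, written here in the scalar elliptic case for brevity (the elasticity version replaces the unit vectors $e_j$ by unit strains), with $\chi_j$ the $Y$-periodic solution of the cell problem:

```latex
% Cell problem: find Y-periodic \chi_j with
%   -\nabla \cdot \bigl( A(y)\,( e_j + \nabla \chi_j(y) ) \bigr) = 0 \quad \text{in } Y.
% Effective (homogenized) coefficients:
\[
  A^{\mathrm{hom}}_{ij}
  = \frac{1}{|Y|} \int_Y A(y)\,\bigl(e_j + \nabla\chi_j(y)\bigr)\cdot\bigl(e_i + \nabla\chi_i(y)\bigr)\,dy .
\]
```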
119

Measurement enhancement for state estimation

Chen, Jian 15 May 2009
After the deregulation of the power industry, power systems are required to be operated efficiently and economically in today's strongly competitive environment. In order to achieve these objectives, it is crucial for power system control centers to accurately monitor the system operating state. State estimation is an essential tool in an energy management system (EMS). It is responsible for providing an accurate estimate of the system operating state based on the available measurements in the power system. A robust state estimator should be capable of keeping the system observable during different contingencies, as well as detecting and identifying gross errors in the measurement set and the network topology. This capability, however, depends directly on the system network configuration and the measurement locations; in other words, a reliable and redundant measurement system is the primary condition for robust state estimation. This dissertation focuses on the possible benefits to state estimation of using synchronized phasor measurements to improve the measurement system. The benefits are investigated with respect to the measurement redundancy, bad data, and topology error processing functions in state estimation. The dissertation studies how to utilize phasor measurements in traditional state estimation; the optimal placement of measurements to realize the maximum benefit is also considered, and practical algorithms are designed. It is shown that strategic placement of a few phasor measurement units (PMUs) in the system can significantly increase measurement redundancy, which in turn can improve the capability of state estimation to detect and identify bad data, even during loss of measurements. Meanwhile, strategic placement of traditional and phasor measurements can also improve the state estimator's topology error detection and identification capability, as well as its robustness against branch outages. The proposed procedures and algorithms are illustrated and demonstrated with test systems of different sizes, and numerical simulations verify the benefits gained in bad data and topology error processing.
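A minimal linear (DC) sketch of how a phasor measurement enters weighted-least-squares state estimation is given below. The three-bus topology, susceptances, and measurement variances are illustrative assumptions; practical estimators solve the nonlinear AC problem iteratively, but the redundancy argument is the same.

```python
import numpy as np

rng = np.random.default_rng(5)

# Three-bus DC sketch: states are the voltage angles at buses 2 and 3
# (bus 1 is the reference). Susceptances and topology are illustrative.
b12, b13, b23 = 10.0, 5.0, 8.0
theta_true = np.array([-0.05, -0.10])   # rad

# Measurement model z = H x + e: three line flows plus one PMU angle measurement.
H = np.array([
    [-b12,  0.0],   # flow 1-2 = b12 * (0 - th2)
    [ 0.0, -b13],   # flow 1-3 = b13 * (0 - th3)
    [ b23, -b23],   # flow 2-3 = b23 * (th2 - th3)
    [ 1.0,  0.0],   # synchronized phasor measurement of th2
])
sigma = np.array([0.02, 0.02, 0.02, 0.002])   # the PMU is far more accurate
z = H @ theta_true + rng.normal(0.0, sigma)

# Weighted least squares: x = (H^T W H)^{-1} H^T W z. Four measurements for two
# states -- the redundancy that makes residual-based bad data detection possible.
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated angles:", x_hat.round(4), "true:", theta_true)
```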
120

Comparison of the Performance of Different Time Delay Estimation Techniques for Ultrasound Elastography

Sambasubramanian, Srinath August 2010
Elastography is a non-invasive medical imaging modality that is used as a diagnostic tool for the early detection of several pathological changes in soft tissues. Elastography techniques provide the local strain distributions experienced by soft tissues due to compression; the resulting strain images are called "elastograms". In elastography, the local tissue strains are usually estimated as the gradient of local tissue displacement, and the local tissue displacements are estimated from the time delays between gated pre- and post-compression echo signals. The quality of the resulting elastograms is highly dependent on the accuracy of these local displacement estimates. While several time delay estimation (TDE) techniques have been proposed for elastography applications, there is a lack of systematic study statistically comparing their performance; such information could prove to be of great importance for improving currently employed elastographic clinical methods. This study investigates the performance of selected time delay estimators for elastography applications. Time delay estimators based on Generalized Cross Correlation (GCC), Sum of Squared Differences (SSD) and Sum of Absolute Differences (SAD) are proposed and implemented. Within the class of GCC algorithms, we further consider an FFT-based cross correlation algorithm (GCC-FFT), a hybrid time-domain and frequency-domain cross correlation algorithm with prior estimates (GCC-PE), and an algorithm based on the use of the fractional Fourier transform to compute the cross correlation (GCC-FRFT). Image quality factors of the elastograms obtained using the different TDE techniques are analyzed and the results are compared using standard statistical tools. The results of this research suggest that correlation-based techniques outperform SSD and SAD techniques in terms of SNRe, CNRe, dynamic range and robustness. The sensitivities of GCC-FFT and SSD were statistically similar and statistically higher than those of all other methods. Within the class of GCC methods, there is no statistically significant difference between the SNRe of GCC-FFT, GCC-PE and GCC-FRFT for most of the strain values considered in this study. However, in terms of CNRe, GCC-FFT and GCC-FRFT were significantly better than the other TDE algorithms. Based on these results, it is concluded that correlation-based algorithms are the most effective in obtaining high quality elastograms.
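A minimal sketch of the GCC-FFT idea named above: the cross-correlation of gated pre- and post-compression windows is computed through the FFT, and the correlation peak gives the integer-sample delay. The synthetic echo signals and sampling rate are assumptions; real implementations add windowing and subsample peak interpolation.

```python
import numpy as np

rng = np.random.default_rng(6)

fs = 40e6                                  # hypothetical RF sampling rate (Hz)
n = 256
pre = rng.normal(size=n)                   # gated pre-compression echo window
true_delay = 7                             # samples
post = np.roll(pre, true_delay) + 0.05 * rng.normal(size=n)  # delayed, noisy copy

# FFT-based cross-correlation: circular correlation via the frequency domain,
# with the peak location giving the integer-sample lag.
X = np.fft.fft(pre)
Y = np.fft.fft(post)
xcorr = np.fft.ifft(np.conj(X) * Y).real
lag = int(np.argmax(xcorr))
if lag > n // 2:                           # map circular lag to a signed shift
    lag -= n
print(f"estimated delay: {lag} samples ({lag / fs * 1e9:.1f} ns), true: {true_delay}")
```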
