51

Determining fuzzy link quality membership functions in wireless sensor networks

Kazmi, Syed Ali Hussain 01 April 2014 (has links)
Wireless Sensor Network routing protocols rely on estimating the quality of the links between nodes to determine a suitable path from the data source nodes to a data-collecting node. Several link estimators have been proposed, but most of them use only one link property. Fuzzy-logic-based link quality estimators that consider a number of link quality metrics have recently been proposed. The fuzzification of crisp values to fuzzy values is done through membership functions. The shape of the fuzzy link quality estimator membership functions is primarily chosen on the basis of qualitative knowledge, and an improper assignment of fuzzy membership functions can lead to poor route selection and hence to unacceptable packet losses. This thesis evaluated the Channel Quality membership function of an existing fuzzy link quality estimator and found that this membership function didn't perform as well as expected. This thesis presents an experimental approach to determining a suitable Channel Quality fuzzy membership function by varying the shape of the fuzzy set for a multipath wireless sensor network scenario and choosing an optimum shape that maximizes the Packet Delivery Ratio of the network. The computed fuzzy set membership functions were evaluated against an existing fuzzy link quality estimator under more complex scenarios, and it is shown that the performance of the experimentally refined membership function was better in terms of packet reception ratio and end-to-end delay. The fuzzy link quality estimator was applied in WiseRoute (a simple convergecast-based routing protocol), and it is shown that this SNR-based fuzzy link estimator performed better than the originally implemented RSSI-based link quality estimator used in WiseRoute.
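The abstract describes fuzzifying a crisp link metric (e.g. SNR) through a membership function whose shape is tuned to maximize Packet Delivery Ratio. A minimal Python sketch of that idea follows; the trapezoid breakpoints, SNR range, and stand-in PDR objective are illustrative assumptions, not the thesis's actual membership functions or simulation setup.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def simulated_pdr(breaks, snr_samples):
    """Stand-in objective for a WSN simulation returning Packet Delivery Ratio."""
    return np.mean([trapezoid(s, *breaks) for s in snr_samples])

rng = np.random.default_rng(0)
snr_samples = rng.normal(loc=10, scale=5, size=500)        # hypothetical SNR readings (dB)

# Brute-force search over candidate membership-function shapes, keeping the
# shape that maximizes the (simulated) packet delivery ratio.
candidates = [(-5, lo, hi, 25) for lo in range(0, 10, 2) for hi in range(12, 22, 2)]
best = max(candidates, key=lambda b: simulated_pdr(b, snr_samples))
print("best trapezoid breakpoints for the Channel Quality set:", best)
```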
52

Application of discrete choice models to samples based on complex endogenous sampling

KITAMURA, Ryuichi (北村, 隆一), SAKAI, Hiroshi (酒井, 弘), YAMAMOTO, Toshiyuki (山本, 俊行) 01 1900 (has links)
No description available.
53

A robust test of homogeneity in zero-inflated models for count data

Mawella, Nadeesha R. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Wei-Wen Hsu / Evaluating heterogeneity in the class of zero-inflated models has attracted considerable attention in the literature, where heterogeneity refers to instances of zero counts generated from two different sources. The mixture probability, or so-called mixing weight, in the zero-inflated model is used to measure the extent of such heterogeneity in the population. Typically, homogeneity tests are employed to examine the mixing weight at zero. Various testing procedures for homogeneity in zero-inflated models, such as the score test and Wald test, have been well discussed and established in the literature. However, it is well known that these classical tests require correct model specification in order to provide valid statistical inferences. In practice, the testing procedure could be performed under model misspecification, which could result in biased and invalid inferences. There are two common misspecifications in zero-inflated models: an incorrect specification of the baseline distribution and a misspecified mean function of the baseline distribution. As empirical evidence, intensive simulation studies revealed that the empirical sizes of the homogeneity tests for zero-inflated models can be extremely liberal and unstable under these misspecifications for both cross-sectional and correlated count data. We propose a robust score statistic to evaluate heterogeneity in cross-sectional zero-inflated data. Technically, the test is developed based on the Poisson-Gamma mixture model, which provides a more general framework to incorporate various baseline distributions without specifying their associated mean functions. The testing procedure is further extended to correlated count data. We develop a robust Wald test statistic for correlated count data using a working independence model assumption coupled with a sandwich estimator to adjust for any misspecification of the covariance structure in the data. The empirical performances of the proposed robust score test and Wald test are evaluated in simulation studies. It is worth mentioning that the proposed Wald test can be implemented easily with minimal programming effort in routine statistical software such as SAS. Dental caries data from the Detroit Dental Health Project (DDHP) and Girl Scout data from the Scouting Nutrition and Activity Program (SNAP) are used to illustrate the proposed methodologies.
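To make the quantity being tested concrete, the sketch below implements the classical (non-robust) score test for zero inflation under a constant-mean Poisson baseline, in the spirit of van den Broek (1995); the thesis's robust score and Wald tests, built on a Poisson-Gamma baseline with a sandwich variance, are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def zip_score_test(y):
    """Score test of H0: no zero inflation, for iid counts y (constant-mean Poisson null)."""
    y = np.asarray(y)
    n, lam = len(y), y.mean()
    p0 = np.exp(-lam)                                # P(Y = 0) under the Poisson null
    num = (((y == 0) - p0) / p0).sum() ** 2
    den = n * (1 - p0) / p0 - n * lam
    stat = num / den
    return stat, chi2.sf(stat, df=1)                 # reference: chi-square with 1 df

rng = np.random.default_rng(1)
y_pois = rng.poisson(2.0, size=500)                  # homogeneous counts
y_zip = np.where(rng.random(500) < 0.15, 0, y_pois)  # add ~15% structural zeros
print("Poisson data      :", zip_score_test(y_pois)) # large p-value expected
print("zero-inflated data:", zip_score_test(y_zip))  # small p-value expected
```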
54

Democratization and real exchange rates

Furlan, Benjamin, Gächter, Martin, Krebs, Bob, Oberhofer, Harald January 2016 (has links) (PDF)
In this article, we combine two previously separate strands of the economic literature and argue that democratization leads to a real exchange rate appreciation. We test this hypothesis empirically for a sample of countries observed from 1980 to 2007 by combining a difference-in-differences approach with propensity score matching estimators. Our empirical results reveal a strong and significant finding: democratization causes real exchange rates to appreciate. Consequently, the ongoing process of democratization observed in many parts of the world is likely to reduce exchange rate distortions.
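A toy sketch of the identification strategy described above (propensity score matching followed by a difference-in-differences comparison); the simulated data, covariates, and matching rule are stand-ins, not the authors' dataset or specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200                                                      # hypothetical countries
X = rng.normal(size=(n, 2))                                  # pre-treatment covariates
treated = rng.random(n) < 1 / (1 + np.exp(-0.8 * X[:, 0]))   # democratizing countries
rer_pre = 0.5 * X[:, 0] + rng.normal(size=n)                 # log real exchange rate, before
rer_post = rer_pre + 0.3 * treated + 0.5 * rng.normal(size=n)  # true appreciation effect 0.3

# 1) Propensity scores from pre-treatment covariates
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Nearest-neighbour matching of each treated country to a control on the score
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.argmin(np.abs(ps[t_idx][:, None] - ps[c_idx][None, :]), axis=1)]

# 3) Difference-in-differences on the matched sample
did = ((rer_post[t_idx] - rer_pre[t_idx]) - (rer_post[matches] - rer_pre[matches])).mean()
print(f"matched DiD estimate of the appreciation effect: {did:.3f}")
```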
55

EMG onset detection – development and comparison of algorithms

Magda, Mateusz January 2015 (has links)
Context. The EMG (electromyographic) signal is the response of a neuromuscular system to an electrical stimulus generated either by the brain or by the spinal cord. This thesis concerns the subject of onset detection in the context of muscle activity. Estimation is based on an EMG signal observed during a muscle activity. Objectives. The aim of this research is to propose new onset estimation algorithms and compare them with solutions currently existing in academia. Two benchmarks are considered to evaluate the algorithms' results: a muscle torque signal synchronized with an EMG signal, and a specialist's assessment. Bias, absolute value of the mean error, and standard deviation are the criteria taken into account. Methods. The research is based on EMG data collected in the physiological laboratory at Wroclaw University of Physical Education. Empty samples were removed from the dataset. The proposed estimation algorithms were constructed based on EMG signal analysis and a review of state-of-the-art solutions. In order to collate them with existing solutions, a simple comparison was conducted. Results. Two new onset detection methods are proposed. They are compared with two estimators taken from the literature review (sAGLR & Komi). One of the presented solutions seems to give promising results. Conclusions. One of the presented solutions, the Sign Changes algorithm, can be widely applied in the area of EMG signal processing. It is more accurate and less parameter-sensitive than the three other methods. This estimator can be recommended as part of an ensemble-algorithm solution in further development. / This is a Master's thesis completed in a Double Diploma Programme. Dr. Jarosław Drapała was the supervisor from my home university.
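For readers unfamiliar with onset detection, the sketch below shows a generic baseline-threshold detector of the kind surveyed in this literature; it is not the thesis's Sign Changes algorithm, whose details are not given in the abstract, and the window lengths and threshold multiplier are illustrative.

```python
import numpy as np

def detect_onset(emg, fs, baseline_s=0.5, k=3.0, min_above_s=0.025):
    """Index of the first sustained excursion above (baseline mean + k * std)."""
    rect = np.abs(emg - emg.mean())                    # rectify around the mean
    win = max(1, int(0.01 * fs))                       # 10 ms moving-average envelope
    env = np.convolve(rect, np.ones(win) / win, mode="same")
    base = env[: int(baseline_s * fs)]                 # resting-activity window
    thr = base.mean() + k * base.std()
    need = int(min_above_s * fs)                       # require sustained activity
    above = env > thr
    for i in range(len(above) - need):
        if above[i : i + need].all():
            return i
    return None

fs = 1000
rng = np.random.default_rng(3)
emg = np.concatenate([rng.normal(0, 0.05, fs), rng.normal(0, 0.4, fs)])  # quiet, then active
print("detected onset at sample:", detect_onset(emg, fs))  # expected near sample 1000
```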
56

Bayesian Estimation of Small Proportions Using Binomial Group Test

Luo, Shihua 09 November 2012 (has links)
Group testing has long been considered a safe and sensible alternative to one-at-a-time testing in applications where the prevalence rate p is small. In this thesis, we applied a Bayes approach to estimate p using a Beta-type prior distribution. First, we derived two Bayes estimators of p from a prior on p under two different loss functions. Second, we presented two more Bayes estimators of p from a prior on π according to two loss functions. We also presented credible and HPD intervals for p. In addition, we conducted intensive numerical studies. All results showed that the Bayes estimator is preferred over the usual maximum likelihood estimator (MLE) for small p. We also presented the optimal β for different p, m, and k.
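A hedged numerical sketch of the setup: with m pools of size k and x positive pools, a pool tests positive with probability π = 1 − (1 − p)^k, and a Beta prior on p combined with squared-error loss gives the posterior mean below. The prior parameters and loss function are illustrative; the thesis's specific estimators are not reproduced.

```python
import numpy as np

def posterior_mean_p(x, m, k, a=1.0, b=10.0, grid=20001):
    """Posterior mean of p given x positive pools out of m pools of size k."""
    p = np.linspace(1e-6, 1 - 1e-6, grid)
    pi = 1 - (1 - p) ** k                              # P(a pooled group tests positive)
    log_post = ((a - 1) * np.log(p) + (b - 1) * np.log(1 - p)
                + x * np.log(pi) + (m - x) * k * np.log(1 - p))  # log(1 - pi) = k*log(1 - p)
    w = np.exp(log_post - log_post.max())              # unnormalized posterior on the grid
    return float((w * p).sum() / w.sum())

x, m, k = 3, 50, 10                                    # 3 positive pools out of 50, pool size 10
mle = 1 - (1 - x / m) ** (1 / k)                       # usual MLE, for comparison
print(f"Bayes estimate: {posterior_mean_p(x, m, k):.4f}   MLE: {mle:.4f}")
```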
57

Estimating the Variance of the Sample Median

Price, Robert M., Bonett, Douglas G. 01 January 2001 (has links)
The small-sample bias and root mean squared error of several distribution-free estimators of the variance of the sample median are examined. A new estimator is proposed that is easy to compute and tends to have the smallest bias and root mean squared error.
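As a point of reference, the sketch below computes one standard distribution-free benchmark, a bootstrap estimate of the variance of the sample median; the article's proposed closed-form estimator is not reproduced here.

```python
import numpy as np

def bootstrap_median_variance(x, n_boot=5000, seed=0):
    """Monte Carlo estimate of Var(sample median) by resampling with replacement."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    meds = np.median(rng.choice(x, size=(n_boot, x.size), replace=True), axis=1)
    return meds.var(ddof=1)

rng = np.random.default_rng(4)
sample = rng.exponential(scale=1.0, size=25)       # small, skewed sample
print("estimated Var(sample median):", bootstrap_median_variance(sample))
```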
58

A Comparative Simulation Study of Robust Estimators of Standard Errors

Johnson, Natalie 10 July 2007 (has links) (PDF)
The estimation of standard errors is essential to statistical inference. Statistical variability is inherent within data, but is usually of secondary interest; still, some options exist to deal with this variability. One approach is to carefully model the covariance structure. Another approach is robust estimation. In this approach, the covariance structure is estimated from the data. White (1980) introduced a biased, but consistent, robust estimator. Long et al. (2000) added an adjustment factor to White's estimator to remove the bias of the original estimator. Through the use of simulations, this project compares restricted maximum likelihood (REML) with four robust estimation techniques: the Standard Robust Estimator (White 1980), the Long estimator (Long 2000), the Long estimator with a quantile adjustment (Kauermann 2001), and the empirical option of the MIXED procedure in SAS. The results of the simulation show the small-sample and asymptotic properties of the five estimators. The REML procedure is modelled under the true covariance structure, and is the most consistent of the five estimators. The REML procedure shows a slight small-sample bias as the number of repeated measures increases. The REML procedure may not be the best estimator in a situation in which the covariance structure is in question. The Standard Robust Estimator is consistent, but it has an extreme downward bias for small sample sizes. The Standard Robust Estimator changes little when complexity is added to the covariance structure. The Long estimator is an unstable estimator. As complexity is introduced into the covariance structure, the coverage probability with the Long estimator increases. The Long estimator with the quantile adjustment works as designed by mimicking the Long estimator at an inflated quantile level. The empirical option of the MIXED procedure in SAS works well for homogeneous covariance structures. The empirical option of the MIXED procedure in SAS reduces the downward bias of the Standard Robust Estimator when the covariance structure is homogeneous.
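For concreteness, a minimal sketch of the sandwich construction underlying these estimators: HC0 is White's (1980) estimator and HC1 adds a simple degrees-of-freedom correction, illustrating the kind of small-sample adjustment at issue (the Long and quantile-adjusted variants are not reproduced).

```python
import numpy as np

def ols_sandwich(X, y):
    """OLS coefficients with White-type robust standard errors."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ beta                               # residuals
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (e[:, None] ** 2 * X)             # sum_i e_i^2 * x_i x_i'
    hc0 = bread @ meat @ bread                     # White (1980)
    hc1 = hc0 * n / (n - p)                        # degrees-of-freedom adjustment
    return beta, np.sqrt(np.diag(hc0)), np.sqrt(np.diag(hc1))

rng = np.random.default_rng(5)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=n) * (1 + np.abs(X[:, 1]))  # heteroskedastic errors
beta, se_hc0, se_hc1 = ols_sandwich(X, y)
print("coefficients:", beta)
print("HC0 robust SE:", se_hc0)
print("HC1 robust SE:", se_hc1)
```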
59

Robustness in design of experiments in manufacturing course

Amana, Ahmed January 2022 (has links)
Design of experiments (DOE) is a statistical method for testing the effects of input factors on a process based on its responses or outputs. Since the influence of these factors and their interactions is studied from the process outputs, the quality of these outputs, or the measurements, plays a significant role in reaching a correct statistical conclusion about the significance of the factors and their interactions. Linear regression is a method that can be applied for DOE purposes; the parameters of such a regression model are estimated by the ordinary least-squares (OLS) method. This method is sensitive to the presence of any blunder in the measurements, meaning that blunders significantly affect the result of a regression using the OLS method. This research aims to perform a robustness analysis for some full factorial DOEs using different robust estimators as well as the Taguchi methodology. A full factorial DOE with three factors at three levels, performed with two and with three replicates, is studied. Taguchi's approach is conducted by computing the signal-to-noise ratio (S/N) from the three replicates, where lower noise means a stronger signal. The robust estimators of Andrews, Cauchy, Fair, Huber, Logistic, Talwar, and Welsch are applied to the DOE in different setups, adding different types and percentages of blunders or gross errors to the data to assess the success rate of each. The number and size of the blunders in the measurements are two important factors influencing the success rate of a robust estimator. For evaluation, the measurements are contaminated with blunders affecting up to different percentages of the data. Our study showed that the Talwar robust estimator is the best among the estimators considered and resists well against blunders affecting up to 80% of the data. Consequently, the use of this estimator instead of OLS is recommended for DOE purposes. The comparison between Taguchi's method and the robust estimators showed that blunders affect the signal-to-noise ratio, as the signal is significantly changed by them, whilst the robust estimators suppress the blunders well and the same conclusion as that from OLS with no blunders can be drawn from them.
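A minimal IRLS sketch of two of the robust estimators named above, Huber (which down-weights large residuals) and Talwar (which rejects them outright); the tuning constants follow common defaults, and the toy two-level design and injected blunder are illustrative, not the thesis's three-level, replicated setup.

```python
import numpy as np

def irls(X, y, weight_fn, iters=50):
    """Iteratively reweighted least squares with a robust weight function."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
        w = weight_fn(r / max(s, 1e-12))
        beta = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0]
    return beta

huber = lambda u, c=1.345: c / np.maximum(np.abs(u), c)     # down-weight beyond c
talwar = lambda u, c=2.795: (np.abs(u) <= c).astype(float)  # reject beyond c

# Toy two-factor design with an intercept, plus one gross blunder in y
X = np.column_stack([np.ones(8), np.tile([-1, 1], 4), np.repeat([-1, 1], 4)])
y = 5 + 2 * X[:, 1] - 3 * X[:, 2] + np.random.default_rng(6).normal(0, 0.1, 8)
y[0] += 50                                                  # injected blunder
print("OLS   :", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber :", irls(X, y, huber))
print("Talwar:", irls(X, y, talwar))
```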
60

Methodological advances in benefit transfer and hedonic analysis

Puri, Roshan 19 September 2023 (has links)
This dissertation introduces advanced statistical and econometric methods in two distinct areas of non-market valuation: benefit transfer (BT) and hedonic analysis. While the first and third chapters address the challenge of estimating the societal benefits of prospective environmental policy changes by adopting the locally weighted regression (LWR) technique in an environmental valuation context, the second chapter combines the output from traditional hedonic regression and matching estimators and provides guidance on the choice of model with low risk of bias in housing market studies. The economic and societal benefits associated with various environmental conservation programs, such as improvements in water quality or increases in wetland acreage, can be directly estimated using primary studies. However, conducting primary studies can be highly resource-intensive and time-consuming, as they typically involve extensive data collection, sophisticated models, and a considerable investment of financial and human resources. As a result, BT offers a practical alternative, which involves employing valuation estimates, functions, or models from prior primary studies to predict the societal benefit of conservation policies at a policy site. Existing studies typically fit one single regression model to all observations within the given metadata and generate a single set of coefficients to predict welfare (willingness-to-pay) at a prospective policy site. However, a single set of coefficients may not reflect the true relationship between dependent and independent variables, especially when multiple source studies/locations are involved in the data-generating process, which, in turn, degrades the predictive accuracy of the given meta-regression model (MRM). To address this shortcoming, we employ the LWR technique in an environmental valuation context. LWR allows a different set of coefficients to be estimated for each location and used for BT prediction. However, the empirical exercise carried out in the existing literature is computationally intensive and cumbersome for practical adaptation. In the first chapter, we simplify the experimental setup required for LWR-BT analysis by taking a closer look at the choice of weight variables for different window sizes and weight function settings. We propose a pragmatic solution by suggesting "universal weights" instead of striving to identify the best of thousands of different weight variable settings. We use the water quality metadata employed in the published literature and show that our universal weights generate more efficient and equally plausible BT estimates for policy sites than the best weight variable settings that emerge from a time-consuming cross-validation search over the entire universe of individual variable combinations. The third chapter expands the scope of LWR to wetland metadata. We use a conceptually similar set of weight variables as in the first chapter and replicate the methodological approach of that chapter. We show that LWR, under our proposed weight settings, generates substantial gains in both predictive accuracy and efficiency compared to the standard globally linear MRM. Our second chapter delves into a separate yet interrelated realm of non-market valuation, i.e., hedonic analysis.
Here, we explore the combined inferential power of traditional hedonic regression and matching estimators to provide guidance on model choice for housing market studies where researchers aim to estimate an unbiased binary treatment effect in the presence of unobserved spatial and temporal effects. We examine the potential sources of bias within both hedonic regression and basic matching. We discuss the theoretical routes to mitigate these biases and assess their feasibility in practical contexts. We propose a novel route towards unbiasedness, the "cancellation effect", and illustrate its empirical feasibility while estimating the impact of flood hazards on housing prices. / Doctor of Philosophy / This dissertation introduces novel statistical and econometric methods to better understand the value of environmental resources that do not have an explicit market price, such as the benefits we get from changes in water quality or wetland size, or the impact of flood risk zoning on the sale prices of residential properties. The first and third chapters tackle the challenge of estimating the value of environmental changes, such as cleaner water or more wetlands. To figure out how much people benefit from these changes, we can look at how much they would be willing to pay for such improved water quality or increased wetland area. This typically requires conducting a primary survey, which is expensive and time-consuming. Instead, researchers can draw insights from prior studies to predict welfare at a new policy site. This approach is analogous to applying a methodology and/or findings from one research work to another. However, the direct application of findings from one context to another assumes uniformity across the different studies, which is unlikely, especially when past studies are associated with different spatial locations. To address this, we propose a "locally-weighting" technique. This places greater emphasis on the studies that closely align with the characteristics of the new (policy) context. Determining the weight variables/factors that dictate this alignment is a question that requires empirical investigation. One recent study attempts this locally-weighting technique to estimate the benefits of improved water quality and suggests experimenting with different factors to find the similarity between the past and new studies. However, their approach is computationally intensive, making it impractical for adaptation. In our first chapter, we propose a more pragmatic solution: using a "universal weight" that does not require assessing multiple factors. With our proposed weights in an otherwise similar context, we find more efficient and equally plausible estimates of the benefits than previous studies. We expand the scope of the local weighting to the valuation of gains or losses in wetland areas in the third chapter. We use a conceptually similar set of weight variables and replicate the empirical exercise from the first chapter. We show that the local-weighting technique, under our proposed settings, substantially improves the accuracy and efficiency of the estimated benefits associated with changes in wetland acreage. This highlights the diverse potential of the local weighting technique in an environmental valuation context. The second chapter of this dissertation attempts to understand the impact of flood risk on housing prices.
We can use "hedonic regression" to understand how different features of a house, like its size, location, sales year, amenities, and flood zone location affect its price. However, if we do not correctly specify this function, then the estimates will be misleading. Alternatively, we can use "matching" technique where we pair the houses inside and outside of the flood zone in all observable characteristics, and differentiate their price to estimate the flood zone impact. However, finding identical houses in all aspects of household and neighborhood characteristics is practically impossible. We propose that any leftover differences in features of the matched houses can be balanced out by considering where the houses are located (school zone, for example) and when they were sold. We refer to this route as the "cancellation effect" and show that this can indeed be achieved in practice especially when we pair a single house in a flood zone with many houses outside that zone. This not only allows us to accurately estimate the effect of flood zones on housing prices but also reduces the uncertainty around our findings.
