11

A NON-PARAMETRIC TEST PROCEDURE BASED ON RANGE STATISTICS TO IDENTIFY CAUSES OF NON-NORMALITY IN SPECULATIVE PRICE CHANGE DISTRIBUTIONS.

ABRAHAMSON, ALLEN ARNOLD. January 1982 (has links)
Most models of asset pricing or market equilibrium require the assumption of stationary price change generation; that is, the mean and/or variance of the price change is hypothesized to be constant over time. On the other hand, the widely accepted models of speculative price change generation, such as the subordinated stochastic process models, have their basis in mixtures of random variables. These mixtures, or compositions, define non-stationary, non-Normally distributed forms; therefore, models based on mixtures cannot be reconciled with the requirement of stationarity. A contaminated process, such as that suggested by Mandelbrot, implies a continuously changing mean and/or variance. However, an alternative concept of mixture exists which is consistent with models requiring stationary moments. This process is referred to as slippage: a state in which moments are constant over intervals of time but change value between intervals. If speculative price changes were found to be characterized by slippage rather than by contamination, such a finding would still be consistent with the empirical distributions of price changes; more importantly, slippage would meet the requirement of stationarity imposed by the capital market and options models. This work advanced a methodology that discriminates between contamination-based and slippage-based non-stationarity in speculative price changes. Such a technique is necessary inasmuch as curve fitting or estimation of moments cannot so discriminate. The technique employs non-parametric range estimators. Any given form of non-Normality induces an identifiable pattern of bias in these estimators; once the pattern induced by a time series of price changes is identified, it indicates whether contamination or, alternatively, slippage generated the series. Owing to the composition and technique of the procedure developed here, it is referred to as a "Range Spectrum." The results examined here show that individual stocks do display contamination, as hypothesized by the subordinated stochastic process models. A broad-based index of price changes, however, displays the characteristics of slippage. This finding has implications for, and suggests directions for further research in, diversification, securities and options pricing, and market timing.
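A minimal sketch of the intuition behind range-based discrimination (this is not the dissertation's actual Range Spectrum procedure; the two generating processes, window size, and summary statistic are illustrative assumptions): under slippage the per-window ranges cluster at a few distinct levels, while under contamination they stay comparatively uniform from window to window.

```python
import numpy as np

rng = np.random.default_rng(0)
n, window = 2400, 40

# Slippage: the variance is constant within long regimes, then jumps.
regime_sigmas = [0.5, 2.0, 1.0, 0.5]
slippage = np.concatenate([rng.normal(0, s, n // 4) for s in regime_sigmas])

# Contamination: every observation is a mixture draw, so the effective
# variance changes continuously through the whole series.
outlier = rng.random(n) < 0.10
contamination = np.where(outlier, rng.normal(0, 2.0, n), rng.normal(0, 0.5, n))

def window_ranges(x, w):
    """Non-parametric scale proxy: max minus min over disjoint windows."""
    blocks = x[: len(x) // w * w].reshape(-1, w)
    return blocks.max(axis=1) - blocks.min(axis=1)

for name, series in (("slippage", slippage), ("contamination", contamination)):
    r = window_ranges(series, window)
    # Slippage spreads the quartiles of the window ranges far apart;
    # contamination keeps them comparatively tight.
    q25, q50, q75 = np.percentile(r, [25, 50, 75])
    print(f"{name:13s} range quartiles: {q25:.2f} / {q50:.2f} / {q75:.2f}")
```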
12

The application of the jackknife in geostatistical resource estimation: Robust estimator and its measure of uncertainty.

Adisoma, Gatut Suryoprapto January 1993 (has links)
The application of the jackknife in geostatistical resource estimation (in conjunction with kriging) is shown to yield two significant contributions. The first is a robust new estimator, called jackknife kriging, which retains ordinary kriging's simplicity and global unbiasedness while reducing its local bias and oversmoothing tendency. The second is the ability, through the jackknife standard deviation, to set a confidence limit for a reserve estimate of a general shape. Jackknifing the ordinary kriging estimate maximizes sample utilization, as well as the information in the samples' spatial correlation. The jackknife kriging estimator handles the high-grade smearing problem typical of ordinary kriging by assigning more weight to the closest sample(s); the result is a reduction in local bias without sacrificing global unbiasedness. When the data distribution is skewed, log transformation of the data prior to jackknifing is shown to improve the estimate by making the data behave better under jackknifing. The block kriging short-cut technique, combined with jackknifing, is shown to be an easy-to-use solution to the problem of estimating the grade of a general three-dimensional digitized shape and the uncertainty associated with that estimate. The results are a single jackknife kriging estimate for the shape and its corresponding jackknife variance. This approach solves the problem of combining independent block estimation variances and provides a simple way to set confidence levels for global estimates. Unlike the ordinary kriging variance, which is a measure of data configuration and is independent of data values, the jackknife kriging variance reflects the variability of the values being inferred, both at the individual block level and at the global level. Case studies involving two exhaustive (symmetric and highly skewed) data sets indicate the superiority of the jackknife kriging estimator over the original (ordinary kriging) estimator. Some instability of the log-transformed jackknife estimate is noted in the highly skewed situation, where the data do not generally behave well under standard jackknifing. A promising direction for future investigation lies in a weighted jackknife formulation, which should handle a wider spectrum of data distributions.
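A minimal sketch of the delete-one jackknife machinery the abstract builds on. The estimator here is a plain inverse-distance weighted average standing in for ordinary kriging, and the coordinates, grades, and target point are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for a kriging estimate: an inverse-distance
# weighted average of sampled grades around a target location.
coords = rng.uniform(0, 100, size=(12, 2))
grades = rng.lognormal(mean=1.0, sigma=0.6, size=12)
target = np.array([50.0, 50.0])

def estimate(c, g):
    w = 1.0 / np.linalg.norm(c - target, axis=1)
    return np.sum(w * g) / np.sum(w)

n = len(grades)
theta_full = estimate(coords, grades)

# Delete-one estimates and the corresponding jackknife pseudo-values.
theta_del = np.array([estimate(np.delete(coords, i, axis=0),
                               np.delete(grades, i)) for i in range(n)])
pseudo = n * theta_full - (n - 1) * theta_del

theta_jack = pseudo.mean()                 # bias-reduced point estimate
se_jack = np.sqrt(pseudo.var(ddof=1) / n)  # jackknife standard error

print(f"plug-in {theta_full:.3f}, jackknife {theta_jack:.3f} "
      f"+/- {1.96 * se_jack:.3f} (95% limits)")
```

The jackknife standard error is what lets the thesis attach a confidence limit to an estimate of arbitrary shape, something the ordinary kriging variance (which ignores the data values) cannot do.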
13

Summarizing research findings in structural equation modeling: a meta-analytic approach.

January 1999 (has links)
Mike W.L. Cheung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 52-58). / Abstract also in Chinese. / ACKNOWLEDGMENTS --- p.2 / ABSTRACT --- p.3 / TABLE OF CONTENTS --- p.5 / Chapter CHAPTER 1 --- INTRODUCTION OF RESEARCH TOPIC --- p.8 / Structural equation modeling as an important statistical technique in behavioral sciences --- p.8 / Meta-analysis as a statistical technique to combine information --- p.9 / Objectives of the present study --- p.10 / Chapter CHAPTER 2 --- COMMON METHODS USED IN SUMMARIZING RESEARCH FINDINGS AND THEIR PROBLEMS --- p.11 / Potential problems in individual studies --- p.11 / Overview of meta-analysis --- p.12 / Common methods of summarizing research findings in SEM --- p.13 / Stage 1: Estimating the pooled correlation matrix --- p.14 / Test for homogeneity before combining --- p.14 / Correction for artifacts --- p.15 / Method 1: Averaging by the weighted correlation coefficients --- p.16 / Method 2: Averaging by the weighted Fisher z transformed correlation coefficients --- p.17 / Choices of weightings --- p.17 / Potential problems of common methods in stage 1 --- p.19 / Stage 2: Fitting the proposed model --- p.20 / Potential problems of common methods in stage 2 --- p.20 / Chapter CHAPTER 3 --- PROCEDURES OF THE NEW PROPOSED TWO-STAGE METHOD --- p.23 / Introduction --- p.23 / Terminology of SEM in single group --- p.23 / Terminology of SEM in multigroup --- p.26 / The proposed two-stage method --- p.27 / Stage 1: Estimating the pooled correlation matrix --- p.27 / An artificial example --- p.29 / Stage 2: Fitting the SEM model --- p.30 / Advantages of the proposed two-stage method --- p.31 / Chapter CHAPTER 4 --- SIMULATION STUDY OF THE PROPOSED METHOD WITH THE COMMON METHODS --- p.33 / Introduction --- p.33 / Method --- p.33 / Results and discussion --- p.38 / Chapter CHAPTER 5 --- A REAL EXAMPLE FITTING PATH MODEL --- p.44 / Introduction --- p.44 / A real example --- p.44 / Results and discussion --- p.45 / Chapter CHAPTER 6 --- "LIMITATIONS, FUTURE DIRECTIONS AND CONCLUSION" --- p.48 / Summaries of the proposed two-stage method and findings --- p.48 / Limitations and future directions --- p.49 / REFERENCES --- p.52 / APPENDIX A --- p.59 / APPENDIX B --- p.60 / APPENDIX C --- p.62 / FOOTNOTES --- p.82 / TABLE 1 TO TABLE 8 --- p.84 / FIGURE CAPTION --- p.97 / FIGURE 1 --- p.98 / FIGURE 2 --- p.99
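The table of contents lists two stage-1 pooling methods; a minimal sketch of Method 2, weighted averaging of Fisher z transformed correlations (the per-study correlations and sample sizes below are invented for illustration):

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes.
r = np.array([0.30, 0.42, 0.25, 0.38])
n = np.array([120, 85, 200, 150])

# Method 2: average in Fisher z space, weighting each study by n - 3
# (the inverse of the asymptotic variance of z), then back-transform.
z = np.arctanh(r)
w = n - 3
z_bar = np.sum(w * z) / np.sum(w)
r_pooled = np.tanh(z_bar)

se_z = 1.0 / np.sqrt(np.sum(w))  # standard error of the pooled z
ci = np.tanh([z_bar - 1.96 * se_z, z_bar + 1.96 * se_z])
print(f"pooled r = {r_pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

The pooled matrix built element by element this way is what stage 2 then fits with an SEM model; the thesis's point is that treating it as an ordinary covariance matrix understates the uncertainty, which motivates the proposed two-stage method.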
14

Effectiveness of SETAR trading strategies: an empirical investigation.

January 2006 (has links)
Lam Tau Hing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 60-64). / Abstracts in English and Chinese. / Chapter Chapter 1. --- Introduction --- p.1 / Chapter Chapter 2. --- Trading Strategies --- p.6 / Chapter 2.1. --- SETAR Model --- p.6 / Chapter 2.2. --- AR Model --- p.8 / Chapter 2.3. --- Moving Average --- p.9 / Chapter Chapter 3. --- Data and Methodology --- p.11 / Chapter Chapter 4. --- Empirical Results --- p.16 / Chapter Chapter 5. --- Bootstrap Analysis --- p.25 / Chapter 5.1. --- Random-Walk Model --- p.28 / Chapter 5.2. --- GARCH-M Model --- p.30 / Chapter Chapter 6. --- Combined Strategy --- p.33 / Chapter Chapter 7. --- Conclusion --- p.34 / Tables --- p.36 / References --- p.60
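The contents list a SETAR model as the core trading rule; a minimal sketch (the threshold, delay, AR coefficients, and trading rule are invented for illustration, not taken from the thesis) of generating a two-regime SETAR return series and the sign-based signal such a model implies:

```python
import numpy as np

rng = np.random.default_rng(2)
T, d, threshold = 500, 1, 0.0
phi_low, phi_high = 0.6, -0.3   # regime-specific AR(1) coefficients

# SETAR(2; 1, 1): the AR coefficient switches with the lagged return level.
r = np.zeros(T)
for t in range(1, T):
    phi = phi_low if r[t - d] <= threshold else phi_high
    r[t] = phi * r[t - 1] + rng.normal(0, 0.01)

# Trading rule: hold the asset whenever the one-step SETAR forecast
# of the next return is positive, otherwise stay out of the market.
phi_t = np.where(r[:-1] <= threshold, phi_low, phi_high)
position = (phi_t * r[:-1] > 0).astype(int)   # forecast sign -> position

strategy = np.sum(position * r[1:])
buy_hold = np.sum(r[1:])
print(f"strategy return {strategy:.4f} vs buy-and-hold {buy_hold:.4f}")
```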
15

Tools for environmental statistics: creative visualization and estimating variance from complex surveys

Courbois, Jean-Yves Pip 07 January 2000 (has links)
Environmental monitoring poses two challenges to statistical analysis: complex data and complex survey designs. Monitoring for system health involves measuring physical, chemical, and biological properties that have complex relations, and exploring these relations is an integral part of understanding how systems are changing under stress. How does one explore high-dimensional data? Many current approaches rely on "black-box" mathematical methods, while visualization techniques are either restricted to low dimensions or hopelessly out of context. The first topic explored in this dissertation is a direct search method for use in projection pursuit guided tours. In Chapter 2, a direct search method for index optimization, the multidirectional pattern search, is explored for use in projection pursuit guided tours. The benefit of this method is that, unlike existing methods, it does not require the projection pursuit index to be continuously differentiable. Computational comparisons with test data revealed the feasibility and promise of the method: it successfully found hidden structure in 4 of 6 test data sets. The study demonstrates that the direct search method lends itself well to use in guided tours and allows for non-differentiable indices. Evaluating estimators of the population variance is covered in Chapter 3. Good estimates of the population variance are useful when designing a survey; these estimates may come from a pilot project or survey. Often in environmental sampling simple random sampling is not possible; instead, complex designs are used, and in this case there is no clear estimator for the population variance. We propose an estimator that is (1) based on a method of moments approach and (2) extendible to more complex variance component models. Several other estimators have been proposed in the literature, and this study compared our method of moments estimator to them. Unfortunately, our estimator did not do as well as some of the others, implying that these estimators do not perform as similarly as the literature suggests they do. Two estimators, the sample variance and a ratio estimator based on the Horvitz-Thompson theorem and a consistency argument, proved favorable. / Graduation date: 2000
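A minimal sketch of a derivative-free pattern search over projection directions (the largest-gap index and the 2-D data are invented for illustration; the dissertation's multidirectional pattern search and projection pursuit indices are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hidden clusters in 2-D; a good 1-D projection separates them.
X = np.vstack([rng.normal([-4, 0], 1.0, (100, 2)),
               rng.normal([4, 0], 1.0, (100, 2))])

def index(theta):
    """Projection pursuit index: the largest gap between consecutive
    projected points, scaled by the overall range. Piecewise in theta
    and non-differentiable, which rules out gradient-based optimizers."""
    p = np.sort(X @ np.array([np.cos(theta), np.sin(theta)]))
    return np.max(np.diff(p)) / (p[-1] - p[0])

# Pattern search on the projection angle: probe both directions from
# the current point; accept an improving probe, otherwise shrink the mesh.
theta, step = 1.2, 0.5          # start well away from the best direction
for _ in range(100):
    best = max((theta + step, theta - step), key=index)
    if index(best) > index(theta):
        theta = best
    else:
        step *= 0.5
print(f"direction {theta % np.pi:.3f} rad, index {index(theta):.3f}")
```

Because the search only compares index values at probe points, it never needs a derivative, which is exactly the property that lets such methods handle indices that gradient-based optimizers cannot.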
16

Exploiting linguistic knowledge for statistical natural language processing

Zhang, Lidan., 张丽丹. January 2011 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
17

The use of different statistical approaches in examining the longitudinal change in quality of life

王曉暉, Wong, Hiu-fai, Jennifer. January 2012 (has links)
Quality of life (QoL) is now firmly recognized as a significant outcome measure in public health, clinical, and patient care research (1, 2). Despite a growing trend toward conducting longitudinal QoL studies, longitudinal changes in QoL in the general population remain poorly understood due to the limited number of studies. Furthermore, few studies have discussed the use of different statistical methods in analyzing longitudinal change in QoL. This paper aimed to discuss the application of a traditional statistical approach (R-ANOVA) and newer statistical approaches (LMM and LGCA) in analyzing longitudinal change in QoL. The underlying assumptions, characteristics, and specifications of each statistical method are explained. Public health studies that examined longitudinal change in QoL are elaborated to show how the criteria of each statistical method were fulfilled in the research analysis. Additionally, the limitations of applying the traditional approach (R-ANOVA) and the newer approaches (LMM and LGCA) to longitudinal QoL data are discussed, with emphasis on how each analytical method overcomes the weaknesses of the others. Understanding the application of different statistical approaches to analyzing longitudinal change in QoL can advance the future development of a robust statistical approach for QoL research. / published_or_final_version / Community Medicine / Master / Master of Public Health
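A minimal sketch of the LMM approach the paper contrasts with R-ANOVA, using statsmodels; the long-format columns qol, time, and id and the simulated decline are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulated long-format QoL data: 50 subjects, 4 waves, random
# subject baselines plus a modest average decline over time.
n, waves = 50, 4
subject = np.repeat(np.arange(n), waves)
time = np.tile(np.arange(waves), n)
qol = (70 + rng.normal(0, 5, n)[subject]     # subject-specific baseline
       - 1.5 * time                          # average longitudinal change
       + rng.normal(0, 3, n * waves))        # occasion-level noise
df = pd.DataFrame({"id": subject, "time": time, "qol": qol})

# Random-intercept LMM: unlike R-ANOVA, it tolerates unbalanced data
# and missing waves, and models individual baselines explicitly.
model = smf.mixedlm("qol ~ time", df, groups=df["id"])
result = model.fit()
print(result.summary())
```

The fixed-effect coefficient on time estimates the average longitudinal change, and the group variance component captures the between-subject heterogeneity that R-ANOVA's sphericity assumption glosses over.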
18

Discrete-time insurance risk models with dependence structures

Wat, Kam-pui., 屈錦培. January 2012 (has links)
The dependence behaviour among different insurance claims, especially in non-life insurance, has been studied extensively in various models. In this thesis, several discrete-time risk models with dependence structures are investigated. One traditional discrete-time risk model is the time series risk model, in which dependence arises in two ways: time-correlated claims and dependent business classes. A general vector (multivariate) autoregressive moving average (VARMA) model is adopted to analyze the ruin probability of a surplus process, and an upper bound for the ruin probability is derived for multivariate time series claim models of general order. Simulation studies are carried out to compare models on finite-time ruin probabilities. Another class of risk model is the compound binomial risk model, where the dependence structure is based on the existence of a so-called by-claim in the claim process: the by-claim is incurred either in the same period as the main insurance claim or in the next period, with a certain probability. A randomized dividend payment scheme with a fixed threshold on the surplus level is also considered in this thesis. A methodology is developed to obtain the Gerber-Shiu expected penalty function for the extended model. The final model investigated in this thesis is the periodic time series risk model. The periodic structure gives a practical interpretation of the business cycle, with high and low seasons for the business. Some lower-order periodic time series models are considered for the claim structures. / published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
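A minimal sketch of estimating a finite-time ruin probability by simulation for a discrete-time surplus process with serially correlated claims (the premium, AR coefficient, claim distribution, and horizon are invented; the thesis treats the general VARMA case and derives analytic bounds rather than relying on simulation):

```python
import numpy as np

rng = np.random.default_rng(5)

u0, premium = 20.0, 6.0           # initial surplus, per-period premium
phi, horizon, n_paths = 0.5, 50, 20_000

def ruined(u0):
    """One surplus path U_t = U_{t-1} + premium - X_t, where the claim
    X_t = 3.0 + Z_t carries an AR(1) shock Z_t = phi * Z_{t-1} + eps_t."""
    u, z = u0, 0.0
    for _ in range(horizon):
        z = phi * z + rng.exponential(1.0)   # serially correlated shock
        u += premium - (3.0 + z)
        if u < 0:
            return True                      # surplus went negative: ruin
    return False

psi_hat = np.mean([ruined(u0) for _ in range(n_paths)])
print(f"estimated {horizon}-period ruin probability: {psi_hat:.4f}")
```

Positive autocorrelation in the claims (phi > 0) makes bad periods cluster, which is exactly why ruin probabilities under time series claim models differ from the independent-claims case.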
19

Advances in spatial analysis of traffic crashes: the identification of hazardous road locations

Yao, Shenjun., 姚申君. January 2013 (has links)
The identification of hazardous road locations is important to the improvement of road safety, yet there is still no consensus on the best method of identifying them. While traditional methods, such as the hot spot methodology, focus only on the physical distances separating road crashes, the hot zone methodology takes network contiguity into consideration and treats contiguous road segments as hazardous road locations. Compared with the hot spot method, the hot zone methodology is a relatively new direction, and a number of methodological issues remain in applying it to the identification of hazardous road locations. Hence, this study provides a GIS-based analysis of the identification of crash hot zones as hazardous road locations with both link-attribute and event-based approaches. It first explores the general procedures of the two approaches in identifying traffic crash hot zones, and then investigates their characteristics through a range of sensitivity analyses on the threshold value and crash intensity with both simulated and empirical data. The results suggest that it is better to use a dissolved road network than a raw link-node road network. The segmentation length and the interval of reference points have great impacts on the identification of hot zones; considering the stability of performance, both are best set at 100 meters. While employing a numerical definition to identify hot zones is a simple and effort-saving approach, using the Monte Carlo method avoids selection bias in choosing an appropriate threshold value. Comparing the two approaches, the link-attribute approach is more likely to cause false negatives, while the event-based approach is prone to false positives around road junctions. Whichever definition is used, the link-attribute approach requires less computing time in identifying crash hot zones. When a range of environmental variables must be taken into consideration, the link-attribute approach is superior to the event-based approach because it incorporates environmental variables into statistical models more easily. By investigating the hot zone methodology, this research is expected to enrich the theoretical knowledge of the identification of hazardous road locations and, practically, to provide policy-makers with more information on identifying road hazards. Further research efforts should be dedicated to the ranking of hot zones and the investigation of the false positive and false negative problems. / published_or_final_version / Geography / Doctoral / Doctor of Philosophy
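A minimal sketch of the link-attribute idea (the segment IDs, crash counts, and adjacency list are invented; the thesis works on real dissolved GIS networks with 100-meter segments): segments whose crash count reaches a threshold are kept, and contiguous runs of kept segments form hot zones.

```python
# Hypothetical 100 m segments on a single road: crash counts and
# network adjacency from a dissolved road network.
crashes = {1: 0, 2: 3, 3: 4, 4: 1, 5: 5, 6: 6, 7: 0, 8: 2}
adjacent = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5],
            5: [4, 6], 6: [5, 7], 7: [6, 8], 8: [7]}
threshold = 2  # a Monte Carlo null distribution could set this instead

# Keep segments at or above the threshold, then grow connected
# components over the network adjacency (depth-first search).
hot = {s for s, c in crashes.items() if c >= threshold}
seen, zones = set(), []
for seed in sorted(hot):
    if seed in seen:
        continue
    zone, stack = [], [seed]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        zone.append(s)
        stack.extend(t for t in adjacent[s] if t in hot and t not in seen)
    if len(zone) > 1:  # a hot zone requires contiguous hot segments
        zones.append(sorted(zone))

print(zones)  # -> [[2, 3], [5, 6]]; segment 8 is hot but isolated
```

Working over network adjacency rather than straight-line distance is what distinguishes the hot zone method from hot spot clustering, and attaching counts to segments (rather than locating crash events) is what makes the link-attribute variant fast and easy to join with environmental covariates.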
20

Two-stage adaptive designs in early phase clinical trials

Xu, Jiajing, 徐佳静 January 2013 (has links)
The primary goal of clinical trials is to collect enough scientific evidence about a new intervention. Despite the widespread use of equal randomization in clinical trials, response-adaptive randomization has attracted considerable interest on ethical grounds. In this thesis, delayed response problems and innovative designs for cytostatic agents in oncology clinical trials are studied. There is typically a prerun of equal randomization before response-adaptive randomization is implemented, yet it is often unclear how many subjects are needed in this prephase, and in practice an arbitrary number of patients is allocated to the equal randomization stage. In addition, real-time response-adaptive randomization requires patient response to be available immediately after treatment, whereas clinical response, such as tumor shrinkage, may take a relatively long time to manifest. In the first part of the thesis, a nonparametric fractional model and a parametric optimal allocation scheme are developed to tackle the common problem caused by delayed response. In addition, a two-stage procedure is investigated to balance power against the number of responders; it is equipped with a likelihood ratio test before skewing the allocation probability toward the better treatment. The operating characteristics of the two-stage designs are evaluated through extensive simulation studies, and an HIV clinical trial is used for illustration. Numerical results show that the proposed method satisfactorily resolves the issues involved in response-adaptive randomization and delayed response. In phase I clinical trials with cytostatic agents, toxicity endpoints as well as efficacy effects should be taken into consideration when identifying the optimal biological dose (OBD). In the second part of the thesis, a two-stage Bayesian mixture modeling approach is developed, which first locates the maximum tolerated dose (MTD) through a mixture of parametric and nonparametric models, and then determines the most efficacious dose using Bayesian adaptive randomization among multiple candidate models. In the first stage, searching for the MTD, a beta-binomial model in conjunction with a probit model is studied as a mixture modeling approach, and decisions are made based on the model that better fits the toxicity data; model fitting adequacy is measured by the deviance information criterion and the posterior model probability. In the second stage, searching for the OBD, the assumption that efficacy increases monotonically with dose is abandoned; instead, all the possibilities that each dose could have the highest efficacy effect are enumerated, so that the dose-efficacy curve can be increasing, decreasing, or umbrella-shaped. Simulation studies show the advantages of the proposed mixture modeling approach for pinpointing the MTD and OBD, and demonstrate its satisfactory performance with cytostatic agents. / published_or_final_version / Statistics and Actuarial Science / Master / Master of Philosophy
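A minimal sketch of the basic response-adaptive mechanism the thesis builds on: an equal-randomization prerun followed by skewing the allocation probability toward the apparently better arm via Beta-Bernoulli posteriors. The priors, true response rates, prerun length, and Thompson-style allocation rule are illustrative assumptions, not the thesis's fractional or optimal schemes:

```python
import numpy as np

rng = np.random.default_rng(6)

p_true = [0.3, 0.5]          # unknown true response rates of the two arms
succ, fail = [1, 1], [1, 1]  # Beta(1, 1) priors on each arm

n_equal, n_total = 40, 200
allocations = []
for t in range(n_total):
    if t < n_equal:
        arm = t % 2          # prerun: equal randomization
    else:
        # Skew allocation toward the arm more likely to be better
        # (a draw from each posterior; pick the larger).
        draws = [rng.beta(succ[k], fail[k]) for k in (0, 1)]
        arm = int(draws[1] > draws[0])
    y = rng.random() < p_true[arm]   # observe an immediate response
    succ[arm] += y
    fail[arm] += 1 - y
    allocations.append(arm)

n1 = sum(allocations)
means = [succ[k] / (succ[k] + fail[k]) for k in (0, 1)]
print(f"arm 1 received {n1}/{n_total} patients; posterior means "
      f"{means[0]:.3f}, {means[1]:.3f}")
```

The sketch assumes each response is observed immediately, which is precisely the assumption the thesis relaxes: with delayed responses the posteriors lag behind the allocations, motivating the fractional and optimal allocation schemes developed in the first part.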
