About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Optimal designs for mixture models

Sandhu, Manjinder Kaur. January 1995 (has links)
published_or_final_version / Industrial and Manufacturing Systems Engineering / Master / Master of Philosophy
72

Forecasting with smoothing techniques for inventory control

何添賢, Ho, Tim Yin, Timothy. January 1994 (has links)
published_or_final_version / Statistics / Master / Master of Philosophy
73

Development of a bioinformatics and statistical framework to integrate biological resources for genome-wide genetic mapping and its applications

Li, Miaoxin., 李淼新. January 2009 (has links)
published_or_final_version / Biochemistry / Doctoral / Doctor of Philosophy
74

Statistical methods and analyses in human gene mapping

Kwan, Sheung-him., 關尚謙. January 2009 (has links)
published_or_final_version / Psychiatry / Doctoral / Doctor of Philosophy
75

The application of geostatistical techniques in the analysis of joint data

Grady, Lenard Alden 22 January 2015 (has links)
No description available.
76

New statistics to compare two groups with heterogeneous skewness.

January 2012 (has links)
A new bivariate statistic, the weighted distance test, is introduced for comparing two groups. The test aims to provide reliable Type I error control and reasonable statistical power across different types of skewed data. It corrects the skewness of the data by applying a power transformation with a power index ranging between 0 and 1. The thesis also proposes a way of deciding the power index by considering the skewness difference between the two groups under comparison.

Four commonly used inferential statistics for two-group comparison are reviewed and compared with the weighted distance test under (1) a normal distribution, (2) skewed distributions with equal skewness across groups, and (3) skewed distributions with unequal skewness across groups. Monte Carlo simulations were run to evaluate the five tests. The weighted distance test was not the best test in any single situation, but it was the most stable: it provided accurate Type I error control and never produced catastrophically small power in any scenario, whereas each of the other four tests failed in some simulated scenarios through either inflated Type I error or unsatisfactory power. The weighted distance test is therefore suggested as an easy-to-use test that works fairly well across a wide range of situations.

Lee, Yung Ho. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 31-33). / Abstracts also in Chinese.
Contents:
Chapter One: Introduction (p.1); Common methods in comparing central tendency (p.2); T-test (p.2); Median and rank (p.3); Trimming (p.3); Power transformation (p.4)
Chapter Two: Weighted distance statistic (p.5); Definition (p.5); Statistical properties (p.5); Specification of Lambda λ (p.7); Estimation and inference (p.7)
Chapter Three: Simulation (p.10); Study 1 (p.12); Study 2 (p.15); Study 3 (p.18)
Chapter Four: Discussion (p.21); Summary (p.21); Limitation (p.22); Further development (p.23)
Appendix I: Proofs of theorems of the weighted distance statistic (p.24)
Appendix II: Table of numerical results of simulations (p.26)
Bibliography (p.30)
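The abstract describes the test only at a high level, so the following Python sketch illustrates the general "power-transform then compare" idea rather than the author's exact weighted distance statistic. The rule for choosing the power index from the skewness gap, the shift to positive support, and the use of Welch's t-test on the transformed data are all assumptions made for illustration.

```python
# Illustrative sketch only: the thesis's weighted distance statistic is not
# reproduced here; this shows the general "power-transform then compare" idea.
import numpy as np
from scipy import stats

def choose_power_index(x, y):
    """Hypothetical rule: shrink the index toward 0 as the skewness gap grows."""
    gap = abs(stats.skew(x) - stats.skew(y))
    return float(np.clip(1.0 / (1.0 + gap), 0.05, 1.0))  # kept inside (0, 1]

def power_transform_test(x, y):
    """Shift both samples to positive support, apply a common power transform
    with index in (0, 1], then compare means with Welch's t-test."""
    shift = 1.0 - min(x.min(), y.min())          # make every value >= 1
    lam = choose_power_index(x, y)
    tx, ty = (x + shift) ** lam, (y + shift) ** lam
    return stats.ttest_ind(tx, ty, equal_var=False).pvalue

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rejections = 0
    for _ in range(2000):  # small Monte Carlo check of the empirical rejection rate
        x = rng.lognormal(mean=0.0, sigma=1.0, size=30)       # skewed group
        y = rng.normal(loc=np.exp(0.5), scale=1.0, size=30)   # same raw mean, no skew
        rejections += power_transform_test(x, y) < 0.05
    print("empirical rejection rate under equal raw means:", rejections / 2000)
```

The simulated scenario mirrors the thesis's unequal-skewness setting: both groups share the same raw mean, and the printed rejection rate shows how far the transformed-mean comparison drifts from the nominal 5% level.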
77

The analysis of protein sequences: statistical modeling for short structural motifs

January 1997 (has links)
by Sze-wan Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 41-42).
Contents:
Chapter 1: Introduction (p.1)
Chapter 2: The probability model (p.8); 2.1 Introduction (p.8); 2.2 The coding system (p.11); 2.3 The likelihood estimates of hexamer codes (p.13); 2.4 A cross-validation study (p.15)
Chapter 3: An application of the likelihood ratio (p.21); 3.1 Introduction (p.21); 3.2 The Needleman-Wunsch algorithm (p.21): 3.2.1 Background (p.21), 3.2.2 The principle of the algorithm (p.21), 3.2.3 The algorithm (p.23); 3.3 Application of the structural information (p.25): 3.3.1 Basic idea (p.25), 3.3.2 Comparison between pairs of hexamer sequences (p.25), 3.3.3 The score of similarity of a pair of hexamer sequences (p.26)
Chapter 4: Application of the modified Needleman-Wunsch algorithm (p.27); 4.1 The structurally homologous pair (p.27): 4.1.1 The horse hemoglobin beta chain (p.28), 4.1.2 The antarctic fish hemoglobin beta chain (p.29); 4.2 Other proteins (p.31): 4.2.1 The acetylcholinesterase (p.31), 4.2.2 The lipase (p.32), 4.2.3 The two A and D chains of the deoxyribonuclease (p.33); 4.3 Evaluation of the significance of the maximum match (p.36)
Chapter 5: Conclusion and discussion (p.38)
References (p.41)
Tables (p.43)
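The table of contents names the Needleman-Wunsch algorithm, which the thesis modifies to score alignments with likelihood ratios of structural hexamer codes. As a point of reference only, here is a minimal Python sketch of the standard, unmodified global-alignment recursion; the simple match/mismatch scoring is a stand-in, not the thesis's structural scoring.

```python
# Minimal sketch of the standard Needleman-Wunsch global alignment recursion.
# The thesis replaces the scoring with likelihood ratios of structural hexamer
# codes; that scoring is not reproduced here.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("HEAGAWGHEE", "PAWHEAE"))  # maximum global alignment score
```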
78

Reliable techniques for survey with sensitive question

Wu, Qin 01 January 2013 (has links)
No description available.
79

Multi-period value-at-risk scaling rules: calculations and approximations

January 2011 (has links)
Zhou, Pengpeng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 76-89). / Abstract also in Chinese.
80

Fault probability and confidence interval estimation of random defects seen in integrated circuit processing

Hu, David T. 11 September 2003 (has links)
Various methods of estimating the fault probabilities of random defects seen in integrated circuit manufacturing from defect data are examined. Estimates of fault probabilities based on defect data are less costly than those based on critical area analysis and are potentially more reliable because they are derived from actual manufacturing data. Because sample sizes are limited, means of estimating the confidence intervals associated with these estimates are also examined. The mathematical expressions associated with defect-data-based estimates of the fault probabilities are not amenable to analytical confidence intervals, so bootstrapping was employed. The results show that one previously proposed method of estimating fault probabilities from defect data is not applicable when using typical in-line data. Furthermore, the results indicate that under typical fab conditions, the assumption of a Poisson random defect distribution gives accurate fault probabilities. The yields predicted by the fault probabilities estimated from the limited yield concept and kill ratio, and by those estimated from critical area simulation, are shown to be comparable to actual yields observed in the fab. It is also shown that with in-line data, the fault probability (FP) estimated for a given inspection step is a weighted average of the fault probabilities of the defect mechanisms operating at that inspection step. Four bootstrap-based methods of confidence interval estimation for fault probabilities of random defects are examined. The study is based on computer simulation of randomly distributed defects with pre-assigned fault probabilities on dice and the resulting counts of the different categories of die. The results show that all four methods perform well when the number of fatal defects is reasonably high but deteriorate in performance as the number of fatal defects decreases. The results also show that the BCa (bias-corrected and accelerated) method is more likely to succeed with a smaller number of fatal defects. This success is attributed to its ability to account for the change of the standard deviation of the sampling distribution of the FP estimates with the FP of the population, and to account for median bias in the sampling distribution. / Graduation date: 2004
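As an illustration of the kind of interval estimation the abstract describes, not the author's actual simulation, the Python sketch below simulates dice that each carry one random defect with a pre-assigned fault probability, estimates the FP as the fraction of defect-bearing dice that fail, and attaches a BCa bootstrap confidence interval using scipy.stats.bootstrap. The sample size, the assumed FP of 0.3, and the simple kill-ratio estimator are assumptions for illustration only.

```python
# Hedged sketch: simulate dice that each carry one random defect with a
# pre-assigned fault probability, estimate the FP as the observed kill ratio,
# and attach a BCa bootstrap confidence interval to that estimate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_FP = 0.3            # pre-assigned fault probability (assumption for illustration)
N_DEFECTIVE_DICE = 200   # dice observed with a defect at the inspection step

# 1 = die killed by its defect, 0 = die survives despite the defect
killed = rng.binomial(1, TRUE_FP, size=N_DEFECTIVE_DICE)

def fp_estimate(sample):
    """Point estimate of the fault probability: fraction of defect-bearing dice that fail."""
    return np.mean(sample)

point = fp_estimate(killed)

# The BCa interval adjusts for how the spread of the FP estimate changes with the
# underlying FP and for median bias, which matches the behaviour the abstract
# attributes to the BCa method.
res = stats.bootstrap((killed,), fp_estimate, confidence_level=0.95,
                      n_resamples=5000, method='BCa')

print(f"FP estimate: {point:.3f}")
print(f"95% BCa interval: ({res.confidence_interval.low:.3f}, "
      f"{res.confidence_interval.high:.3f})")
```

With fewer fatal defects (a smaller sample of killed dice), the interval widens and the percentile-style methods degrade faster than BCa, which is the pattern the abstract reports.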
