About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
501

Dealing with paucity of data in meta-analysis of binary outcomes. / CUHK electronic theses & dissertations collection

January 2006
A clinical trial may have no subject (0%) or every subject (100%) developing the outcome of concern in either of the two comparison groups. This causes a zero cell in the four-cell (2x2) table of a trial with a binary outcome and makes it impossible to estimate the odds ratio, a commonly used effect measure. A usual way to deal with this problem is to add 0.5 to each of the four cells in the 2x2 table, known as Haldane's approximation. In meta-analysis, Haldane's approximation can be applied in two ways: add 0.5 only to the trials with a zero cell, or to all the trials in the meta-analysis. Little is known about which approach is better when used in combination with different definitions of the odds ratio: the ordinary odds ratio, Peto's odds ratio and the Mantel-Haenszel odds ratio. / In addition, the odds ratio needs to be converted to a risk difference to aid decision making. Peto's odds ratio is preferable in some situations, and the risk difference is conventionally derived by taking Peto's odds ratio as an ordinary odds ratio. It is unclear whether this is appropriate. / Objectives. (1) We conducted a simulation study to examine the validity of Haldane's approximation as applied to meta-analysis, and (2) we derived and evaluated a new method to convert Peto's odds ratio to the risk difference, comparing it with the conventional conversion method. / Methods. For studying the validity of Haldane's approximation, we defined 361 types of meta-analysis. Each type is determined by a unique combination of the risks in the two compared groups and thus provides a unique true odds ratio. The number of trials in a meta-analysis is set at 5, 10 and 50, and the sample size of each trial varies at random but is made sufficiently small that at least one trial in a meta-analysis will have a zero cell. The number of outcome events in a comparison group of a trial is generated at random according to the pre-determined risk for that group. One thousand homogeneous meta-analyses and one thousand heterogeneous meta-analyses are simulated for each type of meta-analysis. Two Haldane's approximation approaches, in addition to no approximation, are evaluated for three definitions of the odds ratio; thus nine combined odds ratios are estimated for each type of meta-analysis and are all compared with the true odds ratio. The percentage of meta-analyses with the 95% confidence interval including the true odds ratio is the main index of validity of the correction methods. / A new formula is also derived for converting Peto's odds ratio to the risk difference. The risk difference derived through the new method was compared with the true risk difference and with the risk difference derived by taking Peto's odds ratio as the ordinary odds ratio. All simulations and analyses were conducted in the Statistical Analysis Software (SAS). / Results. Using the true ordinary odds ratio as the reference, the percentage of meta-analyses with the confidence interval containing the truth was lowest (from 23.2% to 53.6%) when Haldane's approximation was applied to all the trials, regardless of the definition of the odds ratio used. The percentage was highest for the Mantel-Haenszel odds ratio with no approximation applied (95.0%). The validity of the correction methods increases as the true odds ratio gets closer to one, as the number of trials in a meta-analysis decreases, as the heterogeneity decreases, and as the trial size increases. / The validity is relatively close (varying from 86.8% to 95.8%) across all combinations of correction methods and definitions of the odds ratio when the true odds ratio is between 1/3 and 3. However, Peto's odds ratio performed consistently best among the three definitions, regardless of the correction method (varying from 88% to 98.7%), if the true Peto's odds ratio is used as the truth for comparison. / The proposed new formula performed better than the conventional method: the mean relative difference between the true risk difference and the risk difference obtained from the new formula is -0.006%, while that for the conventional method is -10.9%. / Conclusions. The estimated confidence interval of a meta-analysis will mostly exclude the truth if an inappropriate correction method is used to deal with zero cells. Counter-intuitively, the combined result of a meta-analysis becomes worse as the number of studies included grows larger. The Mantel-Haenszel odds ratio without Haldane's approximation is recommended in general for dealing with sparse data in meta-analysis. The ordinary odds ratio with 0.5 added only to the trials with a zero cell can be used when the trials are heterogeneous and the odds ratio is close to 1. Applying Haldane's approximation to all trials in a meta-analysis should always be avoided. Peto's odds ratio without Haldane's approximation can always be considered, but the new formula should be used for converting Peto's odds ratio to the risk difference. / Tam Wai-san Wilson. / "Jan 2006." / Adviser: J. L. Tang. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6488. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 151-157). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
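The zero-cell problem and the two pooling behaviors discussed in this abstract can be sketched numerically. This is an illustrative sketch only (function names are mine, not the thesis's SAS code): Haldane's 0.5 correction applied per table, and the Mantel-Haenszel pooled odds ratio, which tolerates zero cells without any correction because only the sums in its numerator and denominator must be non-zero.

```python
import math

def odds_ratio_haldane(a, b, c, d):
    """Ordinary odds ratio for one 2x2 table (a, b = events/non-events in
    group 1; c, d = events/non-events in group 2). Haldane's 0.5 is added
    to every cell only when a zero cell is present."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)

def log_or_se(a, b, c, d):
    """Standard error of log(OR) with the same correction rule."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio over a list of (a, b, c, d)
    tables; a zero cell in one trial does not break the estimate."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

For example, a trial with no events in one arm, `(0, 10, 5, 5)`, still contributes to the Mantel-Haenszel numerator and denominator without any continuity correction.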
502

Analysis of health-related quality of life data in clinical trial with non-ignorable missing based on pattern mixture model. / CUHK electronic theses & dissertations collection

January 2006
Introduction. Health-related quality of life (HRQoL) is now included as a major endpoint in many cancer clinical trials, in addition to traditional endpoints such as tumor response and survival. It refers to how illness or its treatment affects patients' ability to function and whether it induces symptoms. Toxicity, progression and death are common outcomes affecting patients' QOL in cancer trials, and they often cause QOL assessments to be missing. Because such missing data do not occur at random, they are called non-ignorable missing data, and conventional methods of analysis are not appropriate. It is important to develop general methods for this problem so that treatments that improve patients' QOL, or those with serious side effects detrimental to it, can be identified. / Recently, joint models for the QOL outcomes and the indicators of drop-out have been used in longitudinal studies to correct for non-ignorable missingness. Two broad classes of joint models, the selection model and the pattern mixture model, are used. Most of the methodology has been developed for the selection model, while the pattern mixture model has attracted less attention because of its identifiability problem. Although the pattern mixture model has its own limitations, a modified version incorporating generalized estimating equations can be used in practice. / Methods. A generalized estimating equation based on a modified pattern mixture model is constructed to deal with the non-ignorable missing data problem. We conducted a simulation study to examine the performance of the model for different types of data. Two scenarios were examined: the first assumes that the two groups have quadratic trends with different rates of change; the second assumes that one group has a linear trend with time while the other has a quadratic trend. The second methodology is multiple imputation based on the modified pattern mixture model: the main idea is to resample the data within each pattern to create a full data set and then analyze it with standard methods. The two methods were compared in this study. / Result. The power of the generalized estimating equation alone is higher, and its bias smaller, than those of the pattern mixture model when data are missing at random. However, the pattern mixture model performs well when data are missing not at random. The modified pattern mixture model has higher power than the standard pattern mixture model when one group has a quadratic trend and the other a linear trend, but its power is similar to or worse than the standard model's when both groups have quadratic trends with different rates of change. The results of multiple imputation based on the modified pattern mixture model were similar, but its power was lower than that of the generalized estimating equation model. / Conclusion. Missing data are a common problem in clinical trials, and methodological development is urgently needed to detect differences between two treatments in patients' quality of life. The modified pattern mixture model, incorporating either the generalized estimating equation method or the multiple imputation method, provides a solution to the non-ignorable missing data problem. Different clinical trials, with various treatment schedules, will give rise to different missing data patterns; further studies are needed on the optimal choice of patterns under these methods. / Mo Kwok Fai. / "August 2006." / Adviser: Benny Zee. / Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6051. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 91-93). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
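The pattern-mixture idea in this abstract — stratify subjects by drop-out pattern, model each pattern separately, and mix the pattern-specific estimates by the observed pattern proportions — can be sketched crudely as follows. This is a toy numerical illustration under a linear-trend assumption within each pattern, not the thesis's GEE machinery; all names are illustrative.

```python
import numpy as np

def pm_final_mean(Y, times):
    """Pattern-mixture estimate of the mean HRQoL score at the final visit.

    Y: subjects x timepoints matrix with NaN after (monotone) drop-out.
    Subjects are grouped by drop-out pattern (number of observed visits);
    a linear time trend is fit to each pattern's mean profile and
    extrapolated to the last visit; the pattern-specific predictions are
    then mixed with weights equal to the observed pattern proportions."""
    Y = np.asarray(Y, dtype=float)
    n = Y.shape[0]
    n_obs = (~np.isnan(Y)).sum(axis=1)      # drop-out pattern per subject
    estimate = 0.0
    for k in np.unique(n_obs):
        block = Y[n_obs == k, :k]           # observed part of this pattern
        mean_profile = block.mean(axis=0)
        if k > 1:
            slope, intercept = np.polyfit(times[:k], mean_profile, 1)
        else:
            slope, intercept = 0.0, mean_profile[0]
        pred_final = intercept + slope * times[-1]
        estimate += (block.shape[0] / n) * pred_final
    return estimate
```

Subjects who drop out early thus still influence the final-visit estimate through their own pattern's extrapolated trend, which is the essential difference from a complete-case analysis.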
503

Statistical machine learning for data mining and collaborative multimedia retrieval. / CUHK electronic theses & dissertations collection

January 2006
Statistical machine learning techniques have been widely applied in data mining and multimedia information retrieval. While traditional methods such as supervised learning, unsupervised learning, and active learning have been extensively studied separately, there are few comprehensive schemes that investigate these techniques in a unified approach. This thesis proposes a Unified Learning Paradigm (ULP) framework that integrates several machine learning techniques, including supervised learning, unsupervised learning, semi-supervised learning, active learning and metric learning, in a synergistic way to maximize the effectiveness of a learning task. / Within the unified learning framework, the thesis further explores two important and challenging tasks. One is Batch Mode Active Learning (BMAL). In contrast to traditional approaches, the BMAL method searches for a batch of informative examples to label. To develop an effective algorithm, the BMAL task is formulated as a convex optimization problem and a novel bound optimization algorithm is proposed to solve it efficiently with global optima. Extensive evaluations on text categorization tasks show that the BMAL algorithm is superior to traditional methods. / The other issue studied in the framework is Distance Metric Learning (DML). Learning distance metrics is critical to many machine learning tasks, especially when contextual information is available. To learn effective metrics from pairwise contextual constraints, two novel methods, Discriminative Component Analysis (DCA) and Kernel DCA, are proposed to learn both linear and nonlinear distance metrics. Empirical results on data clustering validate the advantages of the algorithms. / Based on this unified learning framework, a novel scheme is suggested for learning Unified Kernel Machines (UKM). The UKM scheme combines supervised kernel machine learning, unsupervised kernel design, semi-supervised kernel learning, and active learning in an effective fashion. A key component of the UKM scheme is learning kernels from both labeled and unlabeled data. For this purpose, a new Spectral Kernel Learning (SKL) algorithm is proposed, which is related to a quadratic program. Empirical results show that the UKM technique is promising for classification tasks. / In addition to the above methodologies, the thesis addresses some practical issues in applying machine learning techniques to real-world applications. For example, in a time-dependent data mining application, marginalized kernel techniques are suggested to formulate an effective domain-specific kernel aimed at web data mining tasks. / Last, the thesis investigates statistical machine learning techniques with applications to multimedia retrieval and addresses practical issues such as robustness to noise and scalability. To bridge the semantic gap in multimedia retrieval, a Collaborative Multimedia Retrieval (CMR) scheme is proposed that exploits historical log data of users' relevance feedback to improve retrieval. Two types of learning tasks in the CMR scheme are identified, and two innovative algorithms are proposed to solve them effectively. / Hoi Chu Hong. / "September 2006." / Adviser: Michael R. Lyu. / Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1723. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 203-223). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
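The batch-mode active learning idea — query several informative unlabeled examples at once rather than one at a time — can be illustrated with a plain uncertainty heuristic. Note this is *not* the convex bound-optimization BMAL algorithm the thesis develops; it is only a minimal stand-in showing what "selecting a batch of informative examples" means.

```python
import numpy as np

def select_batch(probs, batch_size):
    """Pick a batch of unlabeled examples to send for labeling, ranked by
    predictive entropy (most uncertain first). probs is an (n_examples x
    n_classes) array of current model class probabilities."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    entropy = -(probs * np.log(probs)).sum(axis=1)   # per-example uncertainty
    return np.argsort(-entropy)[:batch_size].tolist()
```

A purely greedy entropy ranking ignores redundancy within the batch (two near-duplicate uncertain points may both be picked), which is precisely the weakness the thesis's joint optimization over the whole batch is designed to address.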
504

Some statistical analysis of handicap horse racing.

January 2001
Lau Siu Ping. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaf 44). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Pari-Mutuel System --- p.1 / Chapter 1.2 --- Different Types of Betting --- p.4 / Chapter 1.3 --- Overview --- p.6 / Chapter 2 --- Testing on Tipsters Prediction --- p.8 / Chapter 2.1 --- Introduction --- p.8 / Chapter 2.2 --- Summary Tables on Tipsters Performance --- p.11 / Chapter 2.3 --- Tipsters Prediction Vs Random Betting --- p.15 / Chapter 3 --- Multinomial Logistic Regression --- p.19 / Chapter 3.1 --- Review --- p.19 / Chapter 3.2 --- Proposed Models for the Horse Racing --- p.23 / Chapter 3.3 --- Simulation and Result --- p.26 / Chapter 3.4 --- Comparison between four Models --- p.35 / Chapter 3.5 --- Concluding Remarks --- p.36 / Appendix I --- p.37 / Reference --- p.44
505

Comparison of Bayesian and two-stage approaches in analyzing finite mixtures of structural equation model.

January 2003
Leung Shek-hay. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 53-55). / Abstracts in English and Chinese. / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Finite Mixtures of Structural Equation Model --- p.4 / Chapter Chapter 3 --- Bayesian Approach --- p.7 / Chapter Chapter 4 --- Two-stage Approach --- p.16 / Chapter Chapter 5 --- Simulation Study --- p.22 / Chapter 5.1 --- Performance of the Two Approaches --- p.22 / Chapter 5.2 --- Influence of Prior Information of the Two Approaches --- p.26 / Chapter 5.3 --- Influence of the Component Probability to the Two Approaches --- p.28 / Chapter 5.4 --- Performance of the Two Approaches when the Components are not well-separated --- p.29 / Chapter Chapter 6 --- A Real Data Analysis --- p.31 / Chapter Chapter 7 --- Conclusion and Discussion --- p.35 / Appendix A Derivation of the Conditional Distribution --- p.37 / Appendix B Manifest Variables in the ICPSR Example --- p.39 / Appendix C A Sample LISREL Program for a Classified Group in the Simulation Study --- p.40 / Appendix D A Sample LISREL Program for a Classified Group in the ICPSR Example --- p.41 / Tables 1-9 --- p.42 / Figures 1-2 --- p.51 / References --- p.53
506

Random Walk Models, Preferential Attachment, and Sequential Monte Carlo Methods for Analysis of Network Data

Bloem-Reddy, Benjamin Michael January 2017
Networks arise in nearly every branch of science, from biology and physics to sociology and economics. A signature of many network datasets is strong local dependence, which gives rise to phenomena such as sparsity, power law degree distributions, clustering, and structural heterogeneity. Statistical models of networks require a careful balance of flexibility to faithfully capture that dependence, and simplicity, to make analysis and inference tractable. In this dissertation, we introduce a class of models that insert one network edge at a time via a random walk, permitting the location of new edges to depend explicitly on the structure of the existing network, while remaining probabilistically and computationally tractable. Connections to graph kernels are made through the probability generating function of the random walk length distribution. The limiting degree distribution is shown to exhibit power law behavior, and the properties of the limiting degree sequence are studied analytically with martingale methods. In the second part of the dissertation, we develop a class of particle Markov chain Monte Carlo algorithms to perform inference for a large class of sequential random graph models, even when the observation consists only of a single graph. Using these methods, we derive a particle Gibbs sampler for random walk models. Fit to synthetic data, the sampler accurately recovers the model parameters; fit to real data, the model offers insight into the typical length scale of dependence in the network, and provides a new measure of vertex centrality. The arrival times of new vertices are the key to obtaining results for both theory and inference. In the third part, we undertake a careful study of the relationship between the arrival times, sparsity, and heavy tailed degree distributions in preferential attachment-type models of partitions and graphs. 
A number of constructive representations of the limiting degrees are obtained, and connections are made to exchangeable Gibbs partitions as well as to recent results on the limiting degrees of preferential attachment graphs.
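The growth mechanism described in this abstract — insert one edge at a time, with the new edge's location found by a random walk on the existing network — can be sketched in a few lines. This toy version (my own simplification, not the dissertation's exact model) connects each new vertex to the endpoint of a short random walk started at a uniformly chosen vertex; because walks tend to end at high-degree vertices, the mechanism behaves like preferential attachment while new edges depend explicitly on current network structure.

```python
import random

def grow_random_walk_graph(n_new, max_walk_len=3, seed=0):
    """Grow a graph one edge at a time. Each step: pick a uniform start
    vertex, take a random walk of random length (0..max_walk_len), and
    attach a brand-new vertex to wherever the walk ends. Returns an
    adjacency-list dict. A toy sketch of random-walk growth models."""
    random.seed(seed)
    adj = {0: [1], 1: [0]}                  # seed graph: a single edge
    for new in range(2, 2 + n_new):
        v = random.choice(list(adj))        # uniform random start vertex
        for _ in range(random.randint(0, max_walk_len)):
            v = random.choice(adj[v])       # walk along existing edges
        adj[new] = [v]                      # insert the new edge
        adj[v].append(new)
    return adj
```

Even this crude version tends to produce a few very high-degree hubs, the qualitative signature of the power-law degree behavior studied analytically in the dissertation.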
507

Dosimetria por imagem para o planejamento específico por paciente em iodoterapia / Patient-Specific Imaging Dosimetry for Radioiodine Treatment Planning

Daniel Luis Franzé 23 October 2015
Radioiodine therapy is the main form of treatment for patients with thyroid diseases such as hyperthyroidism caused by Graves' disease or thyroid cancer. The treatment consists in the intake of a radionuclide, the iodine isotope of atomic mass 131 (131I). Radioisotope therapy is applied to a variety of tumors and, since the patient receives the radioactive material intravenously or orally, a certain amount of the radionuclide reaches organs and tissues other than those intended; even the radioactive material accumulated in the region of interest contributes to the dose in healthy tissues. Prior treatment planning is therefore necessary. However, in about 80% of nuclear medicine therapies the administered activity is calculated from pre-determined quantities such as the patient's weight, age or height; patient-specific planning occurs in fewer than 20% of applications. Considering this, this work aims to conduct an image-based dosimetric study that could in the future be used in clinical routine for patient-specific radioiodine therapy planning. Tomographic (SPECT-CT) images were acquired of a thyroid phantom filled with 131I. The phantom was faithfully reproduced from the literature and improved to allow the insertion of thermoluminescent dosimeters (TLDs) into small cavities. The images were converted into a format readable by the GATE software, based on the GEANT4 toolkit, which simulates the interaction of radiation with matter by the Monte Carlo method, and a command script was developed to run simulations estimating the dose in each region of the image. Since the dosimeters remained exposed to the radioactive material for several days, it was necessary, to avoid excessive computational time, to extrapolate an equation and calculate the simulated dose over the same period during which the dosimeters were exposed. Two acquisitions were made, the first with an inhomogeneous source distribution and the second with a homogeneous one. For the inhomogeneous distribution, the simulated and TLD doses have the same order of magnitude and both vary in proportion to the distance from the source; the relative difference between them ranges from 1% to 39%, depending on the dosimeter. For the homogeneous distribution, the values are also of the same order of magnitude but much lower than expected, with a relative difference of up to 70%; the simulated doses are, in general, about half the TLD values. The technique is not yet ready to be implemented in clinical routine, but with studies of correction factors and new acquisitions it may be usable in the near future.
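The time-extrapolation step mentioned in this abstract — scaling a short simulated exposure up to the several days the TLDs actually spent next to the source — can be sketched under the simplest possible assumption: the dose rate decays purely with the physical half-life of 131I, with no biological clearance. The thesis does not spell out its exact equation, so this is one plausible form, not the author's formula.

```python
import math

I131_HALF_LIFE_D = 8.0252  # physical half-life of iodine-131, in days

def extrapolated_dose(dose_rate_0, days):
    """Total dose accumulated over `days`, assuming the initial dose rate
    dose_rate_0 decays exponentially with the physical half-life of 131I:
    D(T) = R0 * (1 - exp(-lambda*T)) / lambda, lambda = ln2 / T_half."""
    lam = math.log(2.0) / I131_HALF_LIFE_D
    return dose_rate_0 * (1.0 - math.exp(-lam * days)) / lam
```

For short exposures the result reduces to dose-rate times time, and for long exposures it saturates at `dose_rate_0 / lambda`, the total dose from complete decay of the source.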
508

Optimal allocation of simple step-stress model with Weibull distributed lifetimes under type-I censoring.

January 2010
Lo, Kwok Yuen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 52-53). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Scope of the thesis --- p.3 / Chapter 2 --- Lifetime Model --- p.4 / Chapter 2.1 --- Introduction --- p.4 / Chapter 2.2 --- Weibull Distribution --- p.4 / Chapter 2.3 --- Step-Stress Experiment --- p.5 / Chapter 3 --- Maximum Likelihood Estimation of Model Parameters --- p.9 / Chapter 3.1 --- Introduction --- p.9 / Chapter 3.2 --- Maximum Likelihood Estimation --- p.10 / Chapter 3.3 --- Fisher Information Matrix --- p.13 / Chapter 3.4 --- Numerical Methods improving Newton's method --- p.17 / Chapter 3.4.1 --- Initial values --- p.18 / Chapter 3.4.2 --- Fisher-Scoring method --- p.19 / Chapter 4 --- Optimal Experimental Design --- p.21 / Chapter 4.1 --- Introduction --- p.21 / Chapter 4.2 --- Optimal Criteria --- p.22 / Chapter 4.3 --- Optimal Stress-changing-time Proportion --- p.23 / Chapter 4.3.1 --- Optimal n versus the shape parameter B --- p.24 / Chapter 4.3.2 --- "Optimal n versus the parameters a0, a1" --- p.27 / Chapter 4.3.3 --- Optimal n versus the initial stress level x1 --- p.32 / Chapter 4.3.4 --- Optimal n versus the censoring time t2 --- p.33 / Chapter 4.4 --- Sensitivity Analysis --- p.34 / Chapter 4.4.1 --- Effects of the shape parameter B --- p.34 / Chapter 4.4.2 --- "Effects of the parameters a0, a1" --- p.37 / Chapter 5 --- Concluding Remarks and Further Research --- p.39 / Chapter A --- Simulation Algorithm for a Weibull Type-I Censored Simple Step-Stress Model --- p.41 / Chapter B --- Expected values of Fisher Information Matrix --- p.42 / Chapter C --- "Derivation of P(A1, A2)" --- p.50 / Bibliography --- p.52
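The experiment named in this thesis's title and Chapter 2 — a simple step-stress test with Weibull lifetimes and type-I censoring — can be sketched as a simulation under the standard cumulative-exposure model: units run at a low stress until the change time, after which the remaining life is rescaled by the ratio of the Weibull scales. This is an illustrative sketch with assumed parameter names, not the thesis's Appendix A algorithm.

```python
import random

def simulate_step_stress(n, shape, scale1, scale2, tau, censor_time, seed=1):
    """Simulate n units in a simple step-stress test.

    Lifetimes are Weibull(scale1, shape) under the initial stress; a unit
    surviving to the stress-change time tau has its remaining life rescaled
    by scale2/scale1 (the cumulative-exposure model). Failures after
    censor_time are type-I censored. Returns (observed_time, failed) pairs."""
    random.seed(seed)
    data = []
    for _ in range(n):
        t = random.weibullvariate(scale1, shape)   # life under low stress
        if t > tau:                                # survived to the switch
            t = tau + (t - tau) * scale2 / scale1  # cumulative-exposure rescale
        failed = t <= censor_time
        data.append((min(t, censor_time), failed))
    return data
```

With a harsher second stress (`scale2 < scale1`), survival past `tau` is compressed, which is exactly why step-stress designs yield failures quickly from high-reliability units.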
509

Exact simulation of SDE: a closed form approximation approach. / Exact simulation of stochastic differential equations: a closed form approximation approach

January 2010
Chan, Tsz Him. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 94-96). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Monte Carlo method in Finance --- p.6 / Chapter 2.1 --- Principle of MC and pricing theory --- p.6 / Chapter 2.2 --- An illustrative example --- p.9 / Chapter 3 --- Discretization method --- p.15 / Chapter 3.1 --- The Euler scheme and Milstein scheme --- p.16 / Chapter 3.2 --- Convergence of Mean Square Error --- p.19 / Chapter 4 --- Quasi Monte Carlo method --- p.22 / Chapter 4.1 --- Basic idea of QMC --- p.23 / Chapter 4.2 --- Application of QMC in Finance --- p.29 / Chapter 4.3 --- Another illustrative example --- p.34 / Chapter 5 --- Our Methodology --- p.42 / Chapter 5.1 --- Measure decomposition --- p.43 / Chapter 5.2 --- QMC in SDE simulation --- p.51 / Chapter 5.3 --- Towards a workable algorithm --- p.58 / Chapter 6 --- Numerical Result --- p.69 / Chapter 6.1 --- Case I Generalized Wiener Process --- p.69 / Chapter 6.2 --- Case II Geometric Brownian Motion --- p.76 / Chapter 6.3 --- Case III Ornstein-Uhlenbeck Process --- p.83 / Chapter 7 --- Conclusion --- p.91 / Bibliography --- p.96
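Chapter 3 of this thesis reviews the Euler and Milstein discretization schemes for SDEs. For geometric Brownian motion dS = mu*S dt + sigma*S dW, the two schemes differ only by Milstein's second-order correction term 0.5*sigma^2*S*(dW^2 - dt), which can be shown side by side on the same Brownian increments (an illustrative sketch, not code from the thesis):

```python
import math
import random

def euler_milstein_gbm(s0, mu, sigma, T, n_steps, seed=42):
    """Simulate one path of geometric Brownian motion with the Euler and
    Milstein schemes driven by the same Brownian increments; returns the
    pair of terminal values (euler, milstein)."""
    random.seed(seed)
    dt = T / n_steps
    e = m = s0
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))      # Brownian increment
        e, m = (e + mu * e * dt + sigma * e * dw,  # Euler step
                m + mu * m * dt + sigma * m * dw   # Milstein adds the
                + 0.5 * sigma * sigma * m * (dw * dw - dt))  # correction
    return e, m
```

When sigma = 0 both schemes coincide with the deterministic Euler recursion for dS = mu*S dt, a handy sanity check; with sigma > 0 the Milstein correction improves the strong order of convergence from 1/2 to 1.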
510

In search of diamond rules: Monte Carlo evaluations of goodness of fit indices.

January 2008
Wang, Chang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 139-145). / Abstracts in English and Chinese. / ABSTRACT --- p.3 / CHINESE ABSTRACT --- p.5 / ACKNOWLEDGMENTS --- p.6 / TABLE OF CONTENTS --- p.7 / LIST OF TABLES --- p.9 / LIST OF FIGURES --- p.10 / INTRODUCTION --- p.11 / Chapter 1.1 --- ISSUE OF MODEL FIT IN SEM --- p.11 / Chapter 1.2 --- CLASSIFICATION AND DEVELOPMENT OF FIT INDICES --- p.13 / Chapter 1.3 --- ORGANIZATION OF THIS THESIS --- p.18 / Chapter CHAPTER 2 --- ISSUES OF FIT INDICES IN ASSESSING MODEL FIT --- p.19 / Chapter 2.1 --- SENSITIVITY OF FIT INDICES TO MODEL PARAMETERS --- p.19 / Chapter 2.1.1 --- Sample size --- p.20 / Chapter 2.1.2 --- Model complexity --- p.21 / Chapter 2.1.3 --- Misspecification --- p.23 / Chapter 2.2 --- MEASUREMENT ERROR --- p.26 / Chapter 2.3 --- PERFECT FIT VS. APPROXIMATE FIT --- p.26 / Chapter 2.4 --- Minimum Fit Function chi-square vs. Normal-theory Weighted Least Squares chi-square --- p.29 / Chapter 2.5 --- RULE OF THUMB --- p.30 / Chapter 2.6 --- FIVE RESEARCH QUESTIONS --- p.37 / Chapter CHAPTER 3 --- SIMULATION --- p.39 / Chapter 3.1 --- FIT INDICES --- p.39 / Chapter 3.2 --- DESIGN OF MONTE CARLO SIMULATIONS --- p.38 / Chapter 3.3 --- MODEL COMPLEXITY AND MODEL SPECIFICATION --- p.39 / Chapter 3.4 --- SIMULATION PROCEDURE --- p.41 / Chapter CHAPTER 4 --- RESULTS --- p.45 / Chapter 4.1 --- MEASUREMENT ERROR AND CRONBACH'S ALPHA --- p.45 / Chapter 4.2 --- ANSWER TO Q1 --- p.45 / Chapter 4.3 --- ANSWER TO Q2 --- p.53 / Chapter 4.4 --- ANSWER TO Q3 --- p.56 / Chapter 4.5 --- ANSWER TO Q4 --- p.60 / Chapter 4.6 --- ANSWER TO Q5 --- p.62 / Chapter CHAPTER 5 --- DISCUSSION --- p.77 / Chapter 5.1 --- DISCUSSION OF Q1 --- p.77 / Chapter 5.2 --- DISCUSSION OF Q2 --- p.83 / Chapter 5.3 --- DISCUSSION OF Q3 --- p.85 / Chapter 5.4 --- DISCUSSION OF Q4 --- p.88 / Chapter 5.5 --- DISCUSSION OF Q5 --- p.89 / Chapter CHAPTER 6 --- LIMITATION --- p.99 / Chapter CHAPTER 7 --- CONCLUSION --- p.101 / REFERENCES --- p.139
