711. Statistical inference on some long memory volatility models. Li, Muyi (李木易). January 2011.
published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
712. A novel framework for expression quantitative trait loci mapping. Ai, Ni (艾妮). January 2011.
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
713. Spatio-temporal development of foreign investment in Guangdong, 1980-2006. Xu, Zhihua (许志桦). January 2010.
Since China's economic reform and opening up to the outside world, foreign investment has become the major dynamo of regional development in China. Under the polarized development strategy of the 1980s and 1990s, foreign investment was highly concentrated in the coastal regions, and this uneven distribution may be partly responsible for the significant regional disparity at the inter-provincial and even intra-provincial level. Since the early 2000s, with China's integration into the global economy and the dynamic economic restructuring of the coastal regions, foreign investment has been undergoing redistribution within the country.
Based on a combination of quantitative and qualitative research methods, this study examines the spatio-temporal pattern and dynamics of foreign investment from different sources at the intra-provincial level in Guangdong, covering the full period from China's opening up to recent years (1980-2006). The study demonstrates that the spatio-temporal pattern of foreign investment in Guangdong has transformed from a single growth pole in Shenzhen in the early stage to twin growth poles in Shenzhen and Guangzhou in later stages, with foreign investment diffusing widely along transportation corridors from the growth poles to peripheral regions within and beyond the province. The emergence of Shenzhen, rather than Guangzhou, as the initial growth pole enriches traditional theories and empirical accounts in which the historic economic and industrial centre serves as the growth pole in the early take-off stage of a region. Moreover, this study examines the theoretical base and effectiveness of the growth pole strategy and argues that it can be a means of transferring economic growth to new regions rather than a reinforcement of the cumulative effects of polarized development. Most failures of the growth pole strategy are due to flaws in implementation rather than to the theoretical base of the strategy.
To explain the initial and ongoing diffusion of foreign investment in Guangdong, this study proposes a multi-level analytical framework that includes the push and pull factors of home and host regions at the local level, the role of government at the regional level, the structure of global production networks at the global level, and the characteristics of foreign investment from different sources at the firm level. Specifically, four distinctive diffusion models of foreign investment are identified: the “cost-and-government-driven” model of Hong Kong firms, the “supply-chain-driven” model of Taiwanese firms, the “market-and-group-driven” model of Japanese firms, and the “market-and-institution-driven” model of US firms. This study suggests that local push and pull factors alone are far from sufficient to explain such dynamic and varied diffusion patterns, which should be placed in a multi-level context spanning local, regional, global and firm-level factors and considerations. In-depth knowledge of the spatio-temporal pattern and dynamics of foreign investment from different sources is a prerequisite for improving the efficiency of government development policies. / published_or_final_version / Urban Planning and Design / Doctoral / Doctor of Philosophy
714. Bayesian analysis in Markov regime-switching models. Koh, You Beng (辜有明). January 2012.
van Norden and Schaller (1996) develop a standard regime-switching model to study stock market crashes. In their seminal paper, they use maximum likelihood estimation to estimate the model parameters and show that a two-regime speculative bubble model has significant explanatory power for stock market returns in some observed periods. However, it is well known that maximum likelihood estimation can be biased if the likelihood has multiple local maxima or the estimation starts from poor initial values. A better approach to estimating the parameters of regime-switching models is therefore needed. One possibility is the Bayesian Gibbs-sampling approach, whose advantages are well discussed in Albert and Chib (1993). In this thesis, Bayesian Gibbs-sampling estimation is examined using two U.S. stock datasets: the CRSP monthly value-weighted index from Jan 1926 to Dec 2010 and the S&P 500 index from Jan 1871 to Dec 2010. It is found that the Gibbs-sampling estimation explains the U.S. data better than maximum likelihood estimation. Moreover, the existing standard regime-switching speculative behaviour model is extended by allowing time-varying transition probabilities governed by a first-order Markov chain. It is shown that time-varying first-order transition probabilities of Markov regime-switching speculative rational bubbles can lead stock market returns to exhibit a second-order Markov regime. In addition, a Bayesian Gibbs-sampling algorithm is developed to estimate the parameters of the second-order two-state Markov regime-switching model. / published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
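To make the setting concrete, the following is a minimal Python sketch (not the thesis's code) of a two-state Markov regime-switching return model, together with the Hamilton forward filter that underlies likelihood-based estimation; all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Regime-specific mean and volatility (state 0: "survival", state 1: "collapse")
mu = np.array([0.01, -0.05])
sigma = np.array([0.03, 0.10])

# First-order transition matrix: P[i, j] = Pr(s_t = j | s_{t-1} = i)
P = np.array([[0.98, 0.02],
              [0.20, 0.80]])

# Simulate T monthly returns from the switching model
T = 1000
s = 0
returns = np.empty(T)
for t in range(T):
    s = rng.choice(2, p=P[s])
    returns[t] = rng.normal(mu[s], sigma[s])

def hamilton_loglik(r, mu, sigma, P, pi0=(0.5, 0.5)):
    """Forward (Hamilton) filter: log-likelihood of the switching model."""
    probs = np.asarray(pi0, dtype=float)
    loglik = 0.0
    for x in r:
        pred = probs @ P                       # one-step-ahead state probabilities
        joint = pred * norm.pdf(x, mu, sigma)  # weight by regime densities
        loglik += np.log(joint.sum())
        probs = joint / joint.sum()            # filtered state probabilities
    return loglik

print(hamilton_loglik(returns, mu, sigma, P))
```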
715. On discrete-time risk models with dependence based on integer-valued time series processes. Li, Jiahui (黎嘉慧). January 2012.
In the actuarial literature, dependence structures in risk models have been extensively studied. The main theme of this thesis is to investigate some discrete-time risk models with claim numbers modeled by integer-valued time series processes.
The first model is a common shock risk model with temporal dependence between the claim numbers in each individual class of business. Specifically, the Poisson MA(1) and Poisson AR(1) processes are considered for the temporal dependence. To study the ruin probability, the equations associated with the adjustment coefficients are derived, and comparisons are made to assess the impact of the dependence structures on the ruin probability.
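As a rough illustration of this first model, the sketch below (not the thesis's code) simulates Poisson AR(1) claim numbers via binomial thinning and estimates a finite-horizon ruin probability by Monte Carlo; the unit-mean exponential claim sizes, premium loading and all other parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, lam = 0.4, 2.0                   # thinning parameter, innovation mean
mean_claims = lam / (1 - alpha)         # stationary mean claim number per period
u = 20.0                                # initial surplus
c = 1.1 * mean_claims                   # premium per period with a 10% loading
horizon, n_paths = 200, 10_000

def poisson_ar1_path(T):
    """Simulate N_t = alpha o N_{t-1} + eps_t, with eps_t ~ Poisson(lam)."""
    n = rng.poisson(mean_claims)        # start from the stationary distribution
    out = np.empty(T, dtype=int)
    for t in range(T):
        n = rng.binomial(n, alpha) + rng.poisson(lam)   # binomial thinning step
        out[t] = n
    return out

ruined = 0
for _ in range(n_paths):
    counts = poisson_ar1_path(horizon)
    # aggregate claims per period: unit-mean exponential severities
    aggregate = np.array([rng.exponential(1.0, k).sum() for k in counts])
    surplus = u + c * np.arange(1, horizon + 1) - aggregate.cumsum()
    ruined += (surplus < 0).any()

print("estimated finite-horizon ruin probability:", ruined / n_paths)
```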
Another model, involving both correlated classes of business and the time series approach, is then studied. A thinning dependence structure is adopted to model the dependence among classes of business, and the Poisson MA(1) and Poisson AR(1) processes are used to describe the claim-number processes. Adjustment coefficients and ruin probabilities are examined.
Finally, a discrete-time risk model in which the claim number follows a Poisson ARCH process is proposed. In this model, the conditional mean of the current claim number depends on previous observations. Within this framework, the equation for finding the adjustment coefficient is derived. Numerical studies are also carried out to examine the effect of the Poisson ARCH dependence structure on several risk measures, including the ruin probability, Value at Risk, and conditional tail expectation. / published_or_final_version / Statistics and Actuarial Science / Master / Master of Philosophy
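A minimal sketch of the Poisson ARCH claim-number recursion described above, with illustrative parameters; the INGARCH(1,0) form lambda_t = omega + a * N_{t-1} is assumed here, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

omega, a = 1.2, 0.4                     # conditional mean: omega + a * N_{t-1}
T = 500

counts = np.empty(T, dtype=int)
n_prev = rng.poisson(omega / (1 - a))   # initialise near the stationary mean
for t in range(T):
    lam_t = omega + a * n_prev          # mean depends on the previous count
    counts[t] = rng.poisson(lam_t)
    n_prev = counts[t]

print("sample mean:", counts.mean(), "stationary mean:", omega / (1 - a))
```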
716. A factor analysis approach to transcription regulatory network reconstruction using gene expression data. Chen, Wei (陈玮). January 2012.
Reconstruction of the Transcription Regulatory Network (TRN) and Transcription Factor Activities (TFAs) from gene expression data is an important problem in systems biology. Various factor analysis methods exist for TRN reconstruction, but most rest on specific assumptions not satisfied by real biological data. Network Component Analysis (NCA) avoids these limitations and is considered one of the most effective methods. The prerequisite for NCA is knowledge of the TRN structure. Such structure can be obtained from ChIP-chip or ChIP-seq experiments, which, however, have quite limited applicability. To cope with this difficulty, we resort to a heuristic optimization algorithm, Particle Swarm Optimization (PSO), to explore possible TRN structures and choose the most plausible one. For the structure estimation problem, we extend classical PSO and propose a novel probabilistic binary PSO. Furthermore, an improved NCA called FastNCA is adopted to compute the objective function accurately and quickly, which enables PSO to run efficiently. Since heuristic optimization cannot guarantee global convergence, we run PSO multiple times and integrate the results. GCV-LASSO (Generalized Cross Validation - Least Absolute Shrinkage and Selection Operator) is then performed to estimate the TRN. We apply our approach and other factor analysis methods to synthetic data; the results indicate that the proposed PSO-FastNCA-GCV-LASSO algorithm gives better estimation.
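As a rough sketch of the structure-search idea, the following implements a generic sigmoid-transfer binary PSO (in the Kennedy-Eberhart style, not the thesis's probabilistic variant); `score_structure` is a hypothetical placeholder for the FastNCA-based objective.

```python
import numpy as np

rng = np.random.default_rng(3)

def score_structure(bits):
    """Hypothetical placeholder fitness; in the thesis this role is played by
    a FastNCA-based fit of the candidate TRN connectivity to expression data."""
    return -abs(int(bits.sum()) - bits.size // 2)

n_particles, n_bits, n_iter = 30, 40, 100
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights

x = rng.integers(0, 2, size=(n_particles, n_bits))    # candidate structures
v = rng.normal(0.0, 1.0, size=(n_particles, n_bits))  # velocities
pbest = x.copy()
pbest_val = np.array([score_structure(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -6.0, 6.0)                # the usual Vmax clamp
    prob_one = 1.0 / (1.0 + np.exp(-v))      # sigmoid transfer function
    x = (rng.random(x.shape) < prob_one).astype(int)
    vals = np.array([score_structure(p) for p in x])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best score found:", pbest_val.max())
```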
To incorporate more prior information on TRN structure and gene expression dynamics into the linear factor analysis model, and thereby improve estimation of the TRN and TFAs, a linear Bayesian framework is adopted. Under this unified Bayesian framework, a Bayesian Linear Sparse Factor Analysis Model (BLSFM) and a Bayesian Linear State Space Model (BLSSM) are developed for instantaneous and dynamic TRNs, respectively. Various approaches to incorporating partial and ambiguous prior information on network structure are proposed to improve performance in practical applications. Furthermore, we propose a novel mechanism for estimating the hyper-parameters of the prior distributions in BLSFM and BLSSM, which can significantly improve estimation compared with traditional ways of setting hyper-parameters. With this development, reasonably good estimation of the TFAs and TRN can be obtained even without any structural prior on the TRN. Extensive numerical experiments investigate the developed methods under various settings, with comparison to some existing alternative approaches. It is demonstrated that the hyper-parameter estimation method improves the estimation of TFAs and the TRN in most settings and has superior performance, and that structure priors in general lead to improved estimation performance.
Regarding application to real biological data, we execute the PSO-FastNCA-GCV-LASSO algorithm developed in the thesis on E. coli microarray data and obtain sensible estimates of the TFAs and TRN. We apply BLSFM without structural priors on the TRN, and BLSSM both without structural priors and with partial structural priors, to yeast S. cerevisiae microarray data and likewise obtain reasonable estimates of the TFAs and TRN. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
717. Using statistical downscaling to project the future climate of Hong Kong. Cheung, Chi-shing, Calvin (張志成). January 2014.
The climate of Hong Kong is very likely to be modified by global climate change. In this study, the output of General Circulation Models (GCMs) was statistically downscaled to produce future climate projections for Hong Kong for the periods 2046–2065 and 2081–2100. The projections are based on two emission scenarios provided by the Intergovernmental Panel on Climate Change (IPCC): A1B (rapid economic growth with balanced energy technology) and B1 (global environmental sustainability), which make assumptions about future human development and the resulting emissions of greenhouse gases.
This study established a method to evaluate GCMs for use in statistical downscaling and utilised six GCMs selected from the 3rd phase of the Coupled Model Intercomparison Project (CMIP3). They were evaluated on three aspects of their performance in simulating past climate in the southeast China region: 1) monthly mean temperature; 2) sensitivity to greenhouse gases; and 3) climate variability. Three GCMs were selected for statistical downscaling and climate projection in this study.
Downscaling was undertaken by relating large-scale climate variables from the NCEP/NCAR reanalysis, a gridded data set incorporating observations and climate models, to local-scale observations. Temperature, specific humidity and wind speed were downscaled using multiple linear regression; rain occurrence was modelled using logistic regression and rainfall volume with a generalised linear model. The resulting statistical models were then applied to future climate projections.
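A minimal sketch of this downscaling step, with synthetic stand-ins for the reanalysis predictors, station observations and GCM output; the variable names and data below are illustrative assumptions, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)

# Synthetic stand-ins: 20 years of daily data, 5 large-scale predictors
X_reanalysis = rng.normal(size=(7300, 5))
beta = rng.normal(size=5)
y_station = X_reanalysis @ beta + rng.normal(0, 0.5, 7300)   # station temperature
wet_day = (X_reanalysis[:, 0] + rng.normal(0, 1, 7300)) > 1  # rain occurrence
X_gcm_future = rng.normal(0.3, 1.0, size=(7300, 5))          # GCM-derived predictors

# Calibrate transfer functions on the reanalysis era
temp_model = LinearRegression().fit(X_reanalysis, y_station)
rain_model = LogisticRegression().fit(X_reanalysis, wet_day)

# Project by applying the fitted relationships to GCM-simulated predictors
temp_future = temp_model.predict(X_gcm_future)
p_wet_future = rain_model.predict_proba(X_gcm_future)[:, 1]
print("projected mean temperature change:",
      temp_future.mean() - y_station.mean())
```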
Overall, all three GCMs, via statistical downscaling, show that daily average, minimum and maximum temperatures, along with specific humidity, will increase under future climate scenarios. Comparing the model ensemble mean projections with the current climate (1981–2010), the annual average temperature in Hong Kong is projected to increase by 1.0 °C (B1) to 1.6 °C (A1B) in 2046–2065, and by 1.4 °C (B1) to 2.2 °C (A1B) in 2081–2100. Furthermore, the projections in this study show a three- to four-fold increase in high temperature extremes (daily average temperature ≥ 29.6 °C) in 2046–2065 and a four- to five-fold increase in 2081–2100.
The projections of rainfall indicate that annual rainfall will increase in the future. Total annual rainfall is projected to increase by 4.9% (A1B) to 8% (B1) in 2046–2065, and by 8.7% (B1) to 21.5% (A1B) in 2081–2100. However, this change in rainfall is seasonally dependent: summer and autumn exhibit increases in rainfall whilst spring and winter exhibit decreases.
In order to test one possible impact of this change in climate, the downscaled climate variables were used to estimate how outdoor thermal comfort (using the Universal Thermal Comfort Index) might change under future climate scenarios in Hong Kong. Results showed that there will be a shift from 'No Thermal Stress' towards 'Moderate Heat Stress' and 'Strong Heat Stress' during the period 2046–2065, becoming more severe in the later period (2081–2100). The projections of future climate presented in this study will be important when assessing potential climate change impacts, along with adaptation and mitigation options, in Hong Kong. / published_or_final_version / Geography / Doctoral / Doctor of Philosophy
718. Bayesian variable selection for GLM. Wang, Xinlei. 28 August 2008.
Not available
719. Infant birthweight, gestational age and mortality by race/ethnicity: a non-parametric regression approach to birthweight optima identification. Echevarria-Cruz, Samuel, 1973-. 28 August 2008.
In order to better understand the statistical relationship between birthweight and gestational age and their effects on infant mortality, national vital statistics data were examined using non-parametric regression techniques (generalized additive models, GAM) that allow a sophisticated and detailed analysis of infant mortality models. These models allow various non-linear effects of birthweight and gestational age on infant mortality to be quantified, building on extant methodologies (Solis, Pullum and Frisbie, 2000). Using over-time, race/ethnic- and sex-specific approaches, "zones" of optimal birth outcomes are successfully identified on the basis of infant mortality probabilities, via a rigorous cross-classification of GAM-supplied birthweight and gestational age parameters. From these results, I find that Non-Hispanic Black infants still exhibit an infant mortality disadvantage relative to Non-Hispanic White and Mexican American infants. For the four birth outcome parameters and their interactions, I find evidence of an infant mortality disadvantage for infants born early or late, or small or heavy, relative to their race/ethnic-specific, birthweight-adjusted optima.
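A hedged sketch of this modelling approach: a logistic GAM of infant death on smooth terms in birthweight and gestational age, with the lowest-risk "zone" located on a grid. It assumes the pygam Python library and synthetic stand-in data, not the vital statistics files or the author's original software.

```python
import numpy as np
from pygam import LogisticGAM, s, te

rng = np.random.default_rng(5)
n = 20_000
bw = rng.normal(3300, 550, n)            # birthweight in grams (synthetic)
ga = rng.normal(39, 2, n)                # gestational age in weeks (synthetic)
# synthetic risk surface: mortality rises at the extremes of both measures
risk = 0.002 + 2e-8 * (bw - 3600) ** 2 + 0.004 * (ga < 34)
died = (rng.random(n) < risk).astype(int)

X = np.column_stack([bw, ga])
gam = LogisticGAM(s(0) + s(1) + te(0, 1)).fit(X, died)

# locate the optimum: the grid cell with the lowest predicted mortality
bw_grid, ga_grid = np.meshgrid(np.linspace(1500, 5500, 80),
                               np.linspace(30, 44, 60))
grid = np.column_stack([bw_grid.ravel(), ga_grid.ravel()])
p_death = gam.predict_proba(grid)
print("lowest-risk cell (grams, weeks):", grid[p_death.argmin()])
```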
720. A comparison of the performance of testlet-based computer adaptive tests and multistage tests. Keng, Leslie, 1974-. 29 August 2008.
Computer adaptive testing (CAT) has grown in both research and implementation. Test construction and security issues, however, have led many to reconsider the merits of CAT. Multistage testing (MST) is an alternative adaptive test design that purportedly addresses CAT's shortcomings, yet considerably less research has been conducted on MST. Most research in adaptive testing has also been based on item response theory (IRT). Many tests now make use of testlets, bundles of items administered together, often based on a common stimulus. The use of testlets violates local independence, a fundamental assumption of IRT. Testlet response theory (TRT) is a relatively new measurement model designed for testlet-based tests, but few studies have examined its use in testlet-based CAT and MST designs. This dissertation investigated the performance of testlet-based CATs and MSTs measured with the TRT model. The test designs compared were a CAT adaptive at the testlet level only (testlet-level CAT), a CAT adaptive at both the testlet and item levels (item-level CAT), and an MST design (MST). Test conditions manipulated included test length, item pool size, and examinee ability distribution. Examinee data were generated using TRT-calibrated item parameters based on data from a large-scale reading assessment. The three test designs were evaluated on measurement effectiveness and exposure control properties. The study found that all three adaptive test designs yielded similar and good measurement accuracy. Overall, the item-level CAT produced the best measurement precision, followed by the MST design; however, the MST and CAT designs achieved their best precision in different regions of the ability scale. All three test designs yielded acceptable exposure control properties at the testlet level. At the item level, the testlet-level CAT produced the best overall result. The item-level CAT had less-than-ideal pool utilization but met its pre-specified maximum exposure rate and maintained low item exposure rates. The MST had excellent pool utilization but a higher percentage of items with high exposure rates. Skewing the underlying ability distribution also had a notably negative effect on the exposure control properties of the MST.
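As a rough sketch of the adaptive logic compared here, the following simulates testlet-level selection by maximum Fisher information under a plain 2PL IRT model, ignoring the testlet random effect that TRT adds; the pool, parameters and six-stage design are illustrative assumptions, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(6)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """2PL Fisher information of an item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1 - p)

# Pool of 30 five-item testlets: (discriminations, difficulties)
pool = [(rng.uniform(0.8, 2.0, 5), rng.normal(0.0, 1.0, 5)) for _ in range(30)]
true_theta, theta = 0.8, 0.0
used, answered = set(), []

for stage in range(6):                       # administer six testlets
    # select the unused testlet with maximum summed information at current theta
    best = max((i for i in range(len(pool)) if i not in used),
               key=lambda i: item_info(theta, *pool[i]).sum())
    used.add(best)
    a, b = pool[best]
    resp = (rng.random(5) < p_correct(true_theta, a, b)).astype(float)
    answered.append((a, b, resp))

    # provisional ability estimate: grid-search maximum likelihood
    grid = np.linspace(-4, 4, 401)
    loglik = np.zeros_like(grid)
    for a_i, b_i, r_i in answered:
        p = p_correct(grid[:, None], a_i, b_i)
        loglik += (r_i * np.log(p) + (1 - r_i) * np.log(1 - p)).sum(axis=1)
    theta = grid[loglik.argmax()]

print("true ability:", true_theta, "final estimate:", theta)
```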