171

Multilevel modelling of event history data : comparing methods appropriate for large datasets

Stewart, Catherine Helen January 2010 (has links)
When analysing medical or public health datasets, it is often of interest to measure the time until a particular pre-defined event occurs, such as death from some disease. Since the health status of individuals living within the same area tends to be more similar than that of individuals from different areas, the event times of individuals from the same area may be correlated. Multilevel models must therefore be used to account for the clustering of individuals within the same geographical location, and when the outcome is the time until some event, multilevel event history models are required. Although software such as MLwiN exists for fitting multilevel event history models, computational requirements limit the use of these models on large datasets. For example, to fit the proportional hazards model (PHM), the most commonly used event history model for modelling the effect of risk factors on event times, MLwiN fits a Poisson model to a person-period dataset. The person-period dataset is created by rearranging the original dataset so that each individual has a line of data for every risk set they survive, until either censoring or the event of interest occurs. When time is treated as a continuous variable, so that each risk set corresponds to a distinct event time (as is the case for the PHM), the person-period dataset can be very large. This presents a problem for those working in public health, as the datasets used for measuring and monitoring public health are typically large. Long periods of follow-up also inflate the person-period dataset, and interest is often in modelling a rare event, resulting in a high proportion of censored observations, which is likewise problematic when estimating multilevel event history models.

Since multilevel event history models are important in public health, the aim of this thesis is to develop these models so they can be fitted to large datasets, considering in particular datasets with long periods of follow-up and rare events. Two datasets are used throughout the thesis to investigate three possible alternatives to fitting the multilevel proportional hazards model in MLwiN. The first, a moderately sized Scottish dataset, is the main focus of the thesis and is used as a 'training dataset' to explore the limitations of existing software packages for fitting multilevel event history models and to investigate alternative methods. The second dataset, from Sweden, is used to test the effectiveness of each alternative method on a much larger dataset. The adequacy of the alternative methods is assessed on three criteria: how effectively they reduce the size of the person-period dataset, how similar their parameter estimates are to those of the PHM, and how easy they are to implement. The first alternative method defines discrete-time risk sets and estimates discrete-time hazard models via multilevel logistic regression models fitted to a person-period dataset. The second aggregates the data of individuals within the same higher-level unit who have the same values for the covariates in a particular model, so that one line of data represents all such individuals, since they are at risk of experiencing the event of interest at the same time. This method is termed 'grouping according to covariates'; both continuous-time and discrete-time event history models can be fitted to the aggregated person-period dataset. These two methods are implemented in MLwiN using pseudo-likelihood methods of estimation. The third and final method fits Bayesian event history (frailty) models using Markov chain Monte Carlo (MCMC) methods of estimation in WinBUGS, a software package specially designed to make practical MCMC methods available to applied statisticians; an additive frailty model is adopted and a Weibull distribution is assumed for the survivor function.

Methodologically, the discrete-time method led to a successful reduction of the continuous-time person-period dataset, although it was necessary to experiment with the length of the time intervals in order to find the widest interval that did not influence parameter estimates. The grouping-according-to-covariates method worked best when there was, on average, a larger number of individuals per higher-level unit, there were few risk factors in the model, and few or none of the risk factors were continuous. The Bayesian method could be favourable, as no data expansion is required to fit the Weibull model in WinBUGS and time is treated as a continuous variable; however, models took much longer to run under MCMC estimation than under likelihood methods. This thesis showed that a re-parameterised version of the Weibull model, together with a variance expansion technique, can overcome slow convergence by reducing correlation in the Markov chains, which may be a more efficient way to reduce computing time than running further iterations.
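To make the person-period construction concrete, the sketch below (not code from the thesis; the column names `id`, `area`, `time`, and `event` are hypothetical) expands a one-row-per-subject survival dataset into the discrete-time person-period form that the first alternative method models with multilevel logistic regression:

```python
import pandas as pd

def expand_person_period(df, interval=1.0):
    """Expand one-row-per-subject survival data into a discrete-time
    person-period dataset: one row per interval the subject is at risk."""
    rows = []
    for _, subj in df.iterrows():
        n_periods = int(-(-subj["time"] // interval))  # ceiling division
        for k in range(1, n_periods + 1):
            rows.append({
                "id": subj["id"],
                "area": subj["area"],  # higher-level (clustering) unit
                "period": k,
                # the event indicator fires only in the final interval,
                # and only if the subject was not censored
                "event": int(k == n_periods and subj["event"] == 1),
            })
    return pd.DataFrame(rows)

subjects = pd.DataFrame({
    "id": [1, 2],
    "area": ["A", "B"],
    "time": [2.5, 1.0],   # follow-up until event or censoring
    "event": [1, 0],      # 1 = event observed, 0 = censored
})
print(expand_person_period(subjects))  # 3 rows for subject 1, 1 for subject 2
```

A multilevel logistic regression of `event` on `period` and covariates, with a random effect for `area`, then gives the discrete-time hazard model; with wide intervals the expanded dataset stays far smaller than its continuous-time counterpart, which is the reduction the thesis reports.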
172

Optimum experimental designs for models with a skewed error distribution : with an application to stochastic frontier models

Thompson, Mery Helena January 2008 (has links)
In this thesis, optimum experimental designs for a statistical model possessing a skewed error distribution are considered, with particular interest in investigating possible parameter dependence of the optimum designs. The skewness in the distribution of the error arises from its assumed structure. The error consists of two components: (i) random error, say V, which is symmetrically distributed with zero expectation, and (ii) some type of systematic error, say U, which is asymmetrically distributed with nonzero expectation. Error of this type is sometimes called 'composed' error. A stochastic frontier model is an example of a model possessing such an error structure; the systematic error, U, in a stochastic frontier model represents the economic efficiency of an organisation.

Three methods for approximating information matrices are presented. An approximation is required since the information matrix contains complicated expressions that are difficult to evaluate. However, only one method, 'Method 1', is recommended, because it guarantees nonnegative definiteness of the information matrix. It is suggested that the optimum design is likely to be sensitive to the approximation. For models that are linearly dependent on the model parameters, the information matrix is independent of the model parameters but depends on the variance parameters of the random and systematic error components. Consequently, the optimum design is independent of the model parameters but may depend on the variance parameters; designs for linear models with skewed error may thus be parameter dependent. For nonlinear models, the optimum design may be parameter dependent with respect to both the variance and the model parameters.

The information matrix is rank deficient. As a result, only subsets or linear combinations of the parameters are estimable. The rank of the partitioned information matrix is such that designs are only admissible for optimal estimation of the model parameters, excluding any intercept term, plus one linear combination of the variance parameters and the intercept. The linear model is shown to be equivalent to the usual linear regression model, but with a shifted intercept. This suggests that the admissible designs should be optimal for estimation of the slope parameters plus the shifted intercept. The shifted intercept can be viewed as a transformation of the intercept in the usual linear regression model. Since D_A-optimum designs are invariant to linear transformations of the parameters, the D_A-optimum design for the asymmetrically distributed linear model is just the linear, parameter-independent, D_A-optimum design for the usual linear regression model with nonzero intercept. C-optimum designs are not invariant to linear transformations. However, if interest is in optimally estimating the slope parameters, the linear transformation of the intercept to the shifted intercept is no longer a consideration, and the C-optimum design is just the linear, parameter-independent, C-optimum design for the usual linear regression model with nonzero intercept. If interest is in estimating the slope parameters and the shifted intercept, the C-optimum design will depend on: (i) the design region; (ii) the distributional assumption on U; (iii) the matrix used to define admissible linear combinations of parameters; (iv) the variance parameters of U and V; and (v) the method used to approximate the information matrix.
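As a sketch of the composed-error structure just described, consider a production frontier with a half-normal assumption on U, one common choice in the stochastic frontier literature (the thesis treats the distributional assumption on U as a design factor, so this particular choice is illustrative only):

```latex
% Composed error: symmetric noise V minus a nonnegative inefficiency term U.
% The half-normal form for U is one common choice, assumed for illustration.
\begin{align*}
  y_i &= \beta_0 + x_i^{\top}\beta + \varepsilon_i,
      & \varepsilon_i &= V_i - U_i,\\
  V_i &\sim N(0, \sigma_v^2),
      & U_i &\sim \lvert N(0, \sigma_u^2)\rvert,
\end{align*}
so that $E[\varepsilon_i] = -E[U_i] = -\sigma_u\sqrt{2/\pi} \neq 0$
and the distribution of $\varepsilon_i$ is negatively skewed.
```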
Some numerical examples of designs for a cross-sectional log-linear Cobb-Douglas stochastic production frontier model are presented to demonstrate the nonlinearity of designs for models with a skewed error distribution. Torsney's (1977) multiplicative algorithm was implemented to find the optimum designs.
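The following is a minimal sketch, not the thesis's implementation, of a multiplicative algorithm of the kind attributed to Torsney (1977), shown here on the textbook problem of a D-optimum design for quadratic regression on [-1, 1] rather than on a frontier model:

```python
import numpy as np

def multiplicative_d_optimum(X, n_iter=2000):
    """X is the (n_points x p) model matrix with rows f(x_i)^T. Returns
    design weights w maximising log det of the information matrix M(w)."""
    n, p = X.shape
    w = np.full(n, 1.0 / n)                       # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                # information matrix M(w)
        d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)  # variances d(x_i, w)
        w *= d / p    # multiplicative update; sum(w * d) = p keeps w normalised
    return w

# Quadratic regression on [-1, 1]: the D-optimum design is known to put
# weight 1/3 on each of -1, 0 and 1.
x = np.linspace(-1.0, 1.0, 21)
X = np.column_stack([np.ones_like(x), x, x ** 2])
w = multiplicative_d_optimum(X)
for xi, wi in zip(x[w > 0.01], w[w > 0.01]):
    print(f"x = {xi:+.1f}, weight = {wi:.3f}")
```

The update exploits the identity sum(w_i d_i) = trace(M^-1 M) = p, so the weights remain a probability distribution at every iteration and concentrate on the support points of the optimum design.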
173

Empirical essays in macroeconomics and finance

Modena, Matteo January 2010 (has links)
This work provides an empirical examination of the relationship between macroeconomics and finance. In particular, we exploit nonlinear econometric methods to analyse the information content of the term structure of interest rates. We find that both monetary and financial variables are useful for predicting the future evolution of economic activity.
174

Handling sparse spatial data in ecological applications

Embleton, Nina Lois January 2015 (has links)
Estimating the size of an insect pest population in an agricultural field is an integral part of insect pest monitoring. An abundance estimate can be used to decide whether action is needed to bring the population size under control, and accuracy is important in ensuring that the correct decision is made. Conventionally, statistical techniques are used to formulate an estimate from population density data obtained via sampling. This thesis thoroughly investigates an alternative approach based on numerical integration techniques. We show that when the pest population is spread over the entire field, numerical integration methods provide more accurate results than their statistical counterpart. When the spatial distribution is more aggregated, however, the error behaves as a random variable and the conventional error estimates do not hold. We therefore present a new probabilistic approach to assessing integration accuracy for such functions, and formulate a mathematically rigorous estimate of the minimum number of sample units required for accurate abundance evaluation in terms of the species' diffusion rate. We show that the integration error dominates the error introduced by noise in the density data, demonstrating the importance of formulating numerical integration techniques that provide accurate results for sparse spatial data.
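As a toy illustration of the two approaches compared here (a synthetic one-dimensional density; the field size and sample layout are assumptions), the statistical estimate scales the mean sampled density by the field size, while the numerical-integration estimate applies the composite trapezoidal rule to the same samples:

```python
import numpy as np

field_length = 100.0                              # 1-D field, arbitrary units
x = np.linspace(0.0, field_length, 9)             # 9 evenly spaced sample units
density = np.exp(-(((x - 40.0) / 15.0) ** 2))     # hypothetical pest density f(x)

# Statistical estimate: sample mean density times the size of the field.
stat_estimate = density.mean() * field_length

# Numerical integration: composite trapezoidal rule over the same samples.
trap_estimate = np.sum((density[:-1] + density[1:]) / 2.0 * np.diff(x))

true_abundance = 15.0 * np.sqrt(np.pi)            # exact Gaussian integral
print(f"statistical : {stat_estimate:.2f}")
print(f"trapezoidal : {trap_estimate:.2f}")
print(f"(true value is close to {true_abundance:.2f})")
```

On this smooth, field-wide density the trapezoidal estimate lands much closer to the true value than the mean-based one, in line with the thesis's finding for populations spread over the entire field.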
175

The choice of terrorism in conflict and the outcomes of mixed methods of dissent

Belgioioso, Margherita January 2018 (has links)
This thesis aims to understand the choice of terrorism in mass dissident movements and the outcomes of civil resistance campaigns that coexist with the use of terrorist tactics by radicals. Towards this end, it focuses on dissident organizations and conflict dynamics, and therefore contributes to the existing literature on terrorism and conflict both methodologically and theoretically. Study one investigates the conditions under which groups that participate in mass dissent choose to initiate terrorist campaigns. I find that groups involved in either civil war or mass civil resistance may face strategic constraints that encourage them to resort to terrorism, owing to its perceived lower costs and higher tactical effectiveness; these constraints are higher repression and longer duration of mass dissent. Study two contributes to the literature on the 'radical flank effect'. I find that terrorism generates incentives for the state to accommodate civil resistance movements, especially when nonviolent movements have a centralized leadership and hierarchical structure and can thereby credibly commit to nonviolent discipline and to avoiding escalation of the conflict into large-scale violence. Study three focuses on international support to rebel groups as a determinant of variation in the portfolio of killings across rebel groups. I find that rebels that receive financial support from external non-state actors are less likely to target civilians than combatants: investing financial support domestically is more economically efficient and increases rebel dependency on the local population, generating incentives to restrain the use of terrorism. In turn, rebels that receive military support from external non-state actors are more likely to target civilians than combatants, since military resources can be invested efficiently in warfare activities without any need to increase reliance on the population. To test these mechanisms empirically, I model rebel groups' portfolios of killings as the proportion of terrorism-related deaths relative to battle-related deaths.
176

Three essays in applied microeconomics

Rialland, P. C. R. P. January 2018 (has links)
This thesis focuses on three vulnerable groups in Europe that have recently been highlighted both in the media and in the economics literature, and that are policy priorities. Chapter 1 is joint work with Giovanni Mastrobuoni and focuses on prisoners and peer effects in prison. Studies that estimate criminal peer effects need to define the reference group. Researchers usually use the amount of time inmates overlap in prison, sometimes in combination with nationality, to define such groups. Yet there is often little discussion of these assumptions, which can have important effects on the estimates of peer effects. We show that the date of rearrest of inmates who spend time together in prison signals co-offending, with some error, and can thus be used to measure reference groups. Exploiting recidivism data on inmates released after a mass pardon, with a simple econometric model that adjusts the estimates for misclassification errors, we document homophily in peer group formation with regard to age, nationality, and degree of deterrence. There is no evidence of homophily with respect to education and employment status. Chapter 2 evaluates a policy in the English county of Essex that aims to reduce domestic abuse by informing high-risk suspects that they will be put under higher surveillance, thereby increasing their probability of being caught in case of recidivism, and by encouraging their victims to report. Using a Regression Discontinuity Design (RDD), as sketched below, it shows that suspects targeted by the policy are 9% more likely to be reported again for domestic abuse. Although increased reporting is widely seen as essential to identifying and protecting victims, this paper shows that policies to increase reporting will deter crime only if they give rise to a legal response. Moreover, the results highlight that increasing the reporting of events that do not lead to criminal charges may create escalation and be more detrimental to the victim in the long run. Chapter 3 investigates how migrants in the United Kingdom respond to natural disasters in their home countries. Combining a household panel survey of migrants in the United Kingdom with natural disasters data, this paper first shows, in the UK context, that male migrants are more likely to remit in the wake of natural disasters. It then shows that, to fund remittances, male migrants also increase their labour supply and decrease their monthly savings and leisure. By showing how migrants in the UK adjust their economic behaviour in response to unexpected shocks, i.e. natural disasters, this paper demonstrates both how UK migrants may fund remittances and that they have the capacity to adjust their economic behaviour to increase remittances.
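A minimal sketch of the sharp regression-discontinuity logic used in Chapter 2 (entirely synthetic data and hypothetical variable names; the true jump is set to the 9% reported above): the treatment effect is estimated as the jump in the outcome at the risk-score cutoff, using local linear fits on each side.

```python
import numpy as np

rng = np.random.default_rng(2)
n, cutoff, bandwidth = 5000, 0.0, 0.25
score = rng.uniform(-1, 1, n)                 # running variable (risk score)
treated = score >= cutoff                     # sharp assignment rule
# Latent reporting probability jumps by 0.09 at the cutoff (the "true" effect).
p = 0.2 + 0.1 * score + 0.09 * treated
reported = rng.random(n) < p

def local_linear_mean(y, x, at, side):
    """Fit y ~ x within the bandwidth on one side of the cutoff and
    predict the regression value at the cutoff itself."""
    mask = (np.abs(x - at) < bandwidth) & ((x >= at) if side == "right" else (x < at))
    b1, b0 = np.polyfit(x[mask], y[mask].astype(float), 1)
    return b0 + b1 * at

effect = (local_linear_mean(reported, score, cutoff, "right")
          - local_linear_mean(reported, score, cutoff, "left"))
print(f"estimated jump at cutoff: {effect:.3f}")   # should be near 0.09
```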
177

Socio-economic disparities in science knowledge, biomedical self-efficacy, and public participation in medical decision-making

Moldovan, Andreea-Loredana January 2018 (has links)
The thesis consists of three self-contained articles that empirically investigate socio-economic differences in, and interrelationships amongst, science knowledge, biomedical self-efficacy, and participation in medical decision-making. Chapter 2 investigates age-related bias in the science knowledge questions in Waves I and II of the Wellcome Trust Monitor Survey, and examines what evidence there is for three dimensions of knowledge. Chapter 3, which uses Wave II of the survey, studies the influence of Internet use and of paying attention to medical stories online in reducing the science knowledge and biomedical self-efficacy gaps between low and high educational groups. Chapter 4, which uses Wave III, scrutinises the influence of various socio-economic factors, biomedical self-efficacy, and trust in physicians and other medical practitioners on the public's willingness and confidence to take part in medical decision-making. Chapter 2 finds evidence of age-related bias in the science knowledge battery of questions. No evidence of a misinformed group of respondents was found, although there was a group who consistently said they did not know rather than giving a wrong answer; a sensitivity analysis showed that the summed-score approach leads to the same substantive conclusions as a model that takes age-related non-invariance into account. Chapter 3 finds evidence of education-based knowledge and efficacy gaps, along with some evidence that the Internet can help reduce this democratic deficit in information. Chapter 4 finds that people are generally confident to participate: those who are more self-efficacious are more confident to participate in medical decisions, while the opposite holds for those who place high trust in doctors. Women were found to be more confident than men.
178

Currency crash risk in the carry trade

Li, Yating January 2017 (has links)
This thesis provides a systematic study of currency crash risk and funding liquidity risk in the carry trade strategy in the foreign exchange (FX) market. The carry trade, which involves going long currencies with high interest rates and short currencies with low interest rates, is a popular currency trading strategy that has delivered annualized excess returns as high as 12%. The thesis studies the exchange rates of 9 currencies over 13 years from a microstructure perspective. We identify a global skewness factor and use it to measure currency crash risk. Applying a portfolio approach in cross-sectional asset pricing, we find that the global skewness factor explains more than 80% of carry trade excess returns. Funding liquidity, in turn, is effective in predicting future currency crash risk and explains more than 70% of carry trade excess returns. We also use the coefficient of price impact from customer order flows to measure liquidity, which reveals the heterogeneous information content possessed by different types of customers. We find that the order-flow-implied liquidity risk factor can explain a fraction of carry trade excess returns, but with a small risk premium on a quarterly basis. We provide empirical evidence that the excess return and crash risk in the carry trade are endogenous; that is, the crash risk premium is inherent in the carry trade process. Since funding conditions affect all investors, we argue that funding constraints are effective in explaining the excess returns of the carry trade. When capital moves smoothly in liquid conditions and investors have a sufficient funding supply, the carry trade prospers in the FX market. When investors hit their funding constraints, market-wide liquidity drops, forcing carry trade positions to unwind. Exchange rates respond as low interest rate currencies appreciate and high interest rate currencies depreciate, which exacerbates currency crash risk and inflicts large losses on carry traders. Our cross-sectional analysis provides empirical evidence that funding constraints help to explain the forward premium puzzle and push exchange rates back in the direction that UIP predicts.
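As a hedged illustration of the portfolio approach described here (synthetic returns and assumed interest differentials, not the thesis's data), a carry portfolio goes long the highest-rate currencies and short the lowest-rate ones, and its return series exhibits the negative skewness that a crash-risk factor is built to capture:

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

rng = np.random.default_rng(0)
currencies = ["AUD", "NZD", "GBP", "EUR", "CHF", "JPY"]
rate_diff = pd.Series([0.045, 0.040, 0.020, 0.010, -0.005, -0.010],
                      index=currencies)      # assumed interest differentials vs USD

# Synthetic monthly excess returns: carry currencies earn the differential but
# suffer occasional crashes, the stylised fact behind the crash-risk premium.
T = 120
returns = pd.DataFrame(
    {c: rate_diff[c] / 12 + 0.02 * rng.standard_normal(T)
        - 0.05 * (rng.random(T) < 0.03 * (rate_diff[c] > 0))
     for c in currencies})

longs = rate_diff.nlargest(2).index          # highest-rate currencies
shorts = rate_diff.nsmallest(2).index        # lowest-rate currencies
carry = returns[longs].mean(axis=1) - returns[shorts].mean(axis=1)

print(f"annualised carry return : {12 * carry.mean():.2%}")
print(f"skewness of carry return: {skew(carry):.2f}")   # negative: crash risk
```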
179

The microstructure of bank lending to SMEs : evidence from a survey of loan officers in Nigerian banks

Ekpu, Victor Uche January 2015 (has links)
The opacity and riskiness of small and medium-sized enterprises (SMEs) make them an interesting setting for studying banks' lending practices and procedures. SMEs in Nigeria, as in many low- and middle-income economies, face financing difficulties because they are relatively young, inexperienced, and informationally opaque. Since the consolidation of the Nigerian banking industry in 2006, the share of commercial bank loans going to SMEs has declined markedly, despite the fact that Nigerian banks are well capitalized and are among the largest players in Sub-Saharan Africa. The researcher conducted a questionnaire survey to investigate the microstructure of SME lending decisions, policies, and practices in Nigerian banks. Using a sample of 121 Nigerian bank lending officers, the study specifically investigates three research questions: (1) the demand- and supply-side constraints on bank involvement with SMEs; (2) the determinants of loan contract terms (i.e. risk premium and collateralisation); and (3) the economic value to banks of investing in customer relationships. Analysis of the survey responses reveals that the high incidence of loan diversion, weak management capacity, and the inability of SMEs to service debts are the chief contributory factors to the riskiness of SME loans in Nigeria. On the supply side, the high transaction costs associated with processing and monitoring small loans impact negatively on lending profitability. There are also constraints posed by regulation and the business environment. Most notably, the recent rise in yields on competing assets, such as government treasury bills, has crowded out private sector lending, as Nigerian banks hold a sizeable proportion of their assets in relatively safer government securities, which lowers their appetite for lending to SMEs. The risk profile of the SME sector is further heightened by poor information economics, infrastructural deficiencies, inefficient credit referencing on business loans, and the inability to enforce loan contracts due to legal and judicial constraints. The econometric results show that the determinants of the risk premium on SME loans are largely connected with the factors that underlie the opacity and riskiness of SMEs in Nigeria. Customers with longer relationships with their bank tend to benefit from lower interest rates. What determines the likelihood of requesting collateral from SMEs varies significantly from bank to bank and is likely connected to the lender's specialization as well as to differences in business models and lending technologies. Loan size, the borrowing firm's age, and its credit rating also determine the amount of collateral pledged. There is also evidence to suggest that the predominantly centralised lending strategy in Nigerian banks impedes the accumulation of soft information by loan officers, implying that not all information collected by loan officers is utilised in lending decisions. However, the proprietary information (or knowledge) that loan officers gather through frequent communication and interaction with their customers is likely to yield some potential benefits for Nigerian banks, the most dominant being the high probability that customer satisfaction from bank relationships will generate repeat business.
180

Multiple Frame Sampling Theory And Applications

Dalcik, Aylin 01 February 2010 (has links)
One of the most important practical problems in conducting sample surveys is that the list that can be used for selecting the sample is generally incomplete or out of date, so sample surveys can produce seriously biased estimates of the population parameters. On the other hand, updating a list is a difficult and very expensive operation. Multiple-frame sampling refers to surveys in which two or more frames are used and independent samples are taken from each of the frames, under the assumption that the union of the different frames covers the whole population. There are two major reasons for using the multiple-frame sampling method. One is that using two or more frames can cover most of the target population and therefore reduces biases due to coverage error. The second is that a multiple-frame sampling design may result in considerable cost savings over a single-frame design.
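A minimal sketch of a classical dual-frame estimator in the spirit of this literature (Hartley's estimator; the synthetic frames, inclusion probabilities, and mixing weight theta are all assumptions) shows how estimates from two overlapping frames are combined:

```python
import numpy as np

def hartley_dual_frame(yA, abA, piA, yB, abB, piB, theta=0.5):
    """Hartley's dual-frame estimator of a population total.
    yA, yB  : sample values from frames A and B
    abA, abB: booleans marking units in the overlap domain of both frames
    piA, piB: inclusion probabilities; theta: weight on frame A's overlap."""
    ht = lambda y, pi: np.sum(y / pi)          # Horvitz-Thompson total
    Y_a = ht(yA[~abA], piA[~abA])              # domain covered by frame A only
    Y_b = ht(yB[~abB], piB[~abB])              # domain covered by frame B only
    Y_ab = theta * ht(yA[abA], piA[abA]) + (1 - theta) * ht(yB[abB], piB[abB])
    return Y_a + Y_b + Y_ab

# Toy population: frame A lists 2000 units, frame B lists 1500, overlap 800,
# so the union holds 2700 units with mean value 50 (true total 135,000).
rng = np.random.default_rng(1)
yA = rng.normal(50, 10, 200); abA = rng.random(200) < 800 / 2000
yB = rng.normal(50, 10, 150); abB = rng.random(150) < 800 / 1500
piA = np.full(200, 200 / 2000); piB = np.full(150, 150 / 1500)
print(f"estimated total: {hartley_dual_frame(yA, abA, piA, yB, abB, piB):,.0f}")
```

Blending the two overlap-domain estimates avoids double-counting units that appear on both lists, which is the central accounting problem multiple-frame designs must solve.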
