161. An investigation of factors associated with traffic accident and casualty risk in Scotland. White, David Ian, January 2002.
An investigation was conducted to identify factors associated with traffic accident involvement and traffic casualty involvement of road users in Scotland. This was done to determine to what extent accident and casualty involvement are related, and so assist policy-makers in the allocation of scarce resources. Traffic accident involvement was identified for Scottish-resident vehicle drivers. Traffic casualty involvement was identified for vulnerable road users, particularly child pedestrians. Traffic accident rates were determined from information provided by approximately one thousand Scottish-resident drivers who completed an extensive questionnaire on driving behaviours; their personal characteristics, socio-demographic data and information on attitudes to road safety issues were also provided. This broad investigation revealed that traffic accident involvement is associated with personal characteristics, driving behaviour and attitudes to road safety issues. There was no evidence of any area effect on the accident involvement of Scottish drivers, in terms of the administrative area in which they live, the relative affluence or deprivation of the area, or the population density of the area. A detailed statistical analysis of STATS19 traffic accident data was conducted to determine casualty rates for different groups of road users in Lothian, Scotland, for the years 1991-97. This involved the development of a unique index of multiple deprivation suitable for both urban and rural areas. Traffic casualty rates were found to be positively associated with the level of deprivation and the population density at postcode-sector level. Analysis of injury-accident data identified that personal characteristics are also associated with casualty involvement for children aged 0-15 years. As with accident involvement, the influence of behavioural and attitudinal factors on casualty involvement needs to be examined. A significant finding of this study is that traffic accident risk and traffic casualty risk are not associated with the same factors: place of residence is significant in determining casualty risk, but has no significant effect on accident risk. Implications of this research are discussed and suitable recommendations are made.
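As a sketch of the kind of area-level casualty-rate analysis described in this abstract, the following Python snippet computes casualty rates per 1,000 population by deprivation quintile at postcode-sector level. All data are simulated and all names are hypothetical; the thesis constructs its own urban/rural index of multiple deprivation, approximated here by a single standardised score.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_sectors = 300
sectors = pd.DataFrame({
    "population": rng.integers(2_000, 12_000, n_sectors),
    "deprivation": rng.standard_normal(n_sectors),   # higher = more deprived
})

# Assumed positive association: expected casualties rise with deprivation.
expected = 2.0 * sectors["population"] / 1_000 * np.exp(0.3 * sectors["deprivation"])
sectors["casualties"] = rng.poisson(expected)

sectors["rate_per_1000"] = 1_000 * sectors["casualties"] / sectors["population"]
sectors["quintile"] = pd.qcut(sectors["deprivation"], 5, labels=range(1, 6))
print(sectors.groupby("quintile", observed=True)["rate_per_1000"].mean())
```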
162. Four essays in international macroeconomics. Jiang, Shifu, January 2018.
Chapter 1: We propose an integral correction mechanism to model real exchange rate dynamics. In estimation, we also allow for a Harrod-Balassa-Samuelson effect on the long-run equilibrium of the real exchange rate. Using data from 19 OECD countries, we find that the integral correction mechanism fits the in-sample data significantly better than the popular smooth transition autoregression model. The special dynamics of the integral correction mechanism help explain the PPP puzzle by distinguishing mean-reversion speeds in the long and short run. However, the integral correction mechanism shows a significant out-of-sample forecast gain over the random walk in only a few cases, though the gain is robust across forecast horizons and quite large at long horizons. Chapter 2: This chapter evaluates the ability of a standard IRBC model, augmented with an input adjustment cost on imported goods, to explain different aspects of the real exchange rate: the standard deviation, the autocorrelation function, the spectrum and the integral correction mechanism. I find that the simple IRBC model with an appropriate calibration captures all of these features well. The input adjustment cost plays the key role: compared with the standard model, it implies a reversed impulse response of the real exchange rate that returns quickly to steady state, and it introduces a long-run cyclical movement in most macroeconomic variables. I find that this particular impulse response helps explain the PPP puzzle. Chapter 3: I study optimal unconventional monetary policy under commitment in a two-country model where domestic policy entails large spillovers to foreign countries. Equity injections into financial intermediaries turn out to be more efficient than discount window lending and the large-scale asset purchases that have been employed in many countries. Owing to precautionary effects of future crises, a central bank should exit from its policy more slowly than the speed of deleveraging in the financial sector. The optimal policy can change considerably if cross-country policy cooperation is not imposed: in this case, interventions tend to be too strong in one country but too weak in the other. Gains from cooperation become positive once using unconventional monetary policy is costly enough, and thereafter correlate positively with the cost. Chapter 4: I consider the implementation of the optimal unconventional monetary policy outlined in Chapter 3. I find that the Ramsey policy is characterised by a simple rule responding to gaps in asset prices. However, this rule requires knowledge of the asset prices that would be realised in a world free of financial frictions, so it cannot be used to guide unconventional monetary policy in practice. The best practical simple rule responds to the credit spread, with inertia.
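The integral correction idea in Chapter 1 can be illustrated with a small simulation. The specification below is a sketch under assumed dynamics, not the estimated model: alongside the usual proportional error-correction term, the change in the (log) real exchange rate also responds to the accumulated past deviations from long-run equilibrium. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 500          # sample length
mu = 0.0         # long-run equilibrium (could embed a Harrod-Balassa-Samuelson trend)
alpha = 0.05     # proportional correction: slow mean reversion (the PPP puzzle)
gamma = 0.02     # integral correction: responds to cumulated misalignment
sigma = 0.02     # shock volatility

q = np.zeros(T)
integral = 0.0   # running sum of past deviations from mu
for t in range(1, T):
    dev = q[t - 1] - mu
    integral += dev
    q[t] = q[t - 1] - alpha * dev - gamma * integral + sigma * rng.standard_normal()

# The integral term corrects persistent misalignments quickly and induces
# cyclical overshooting, separating short- and long-run reversion speeds.
half_life = np.log(0.5) / np.log(1 - alpha)  # half-life of the proportional term alone
print(f"Proportional-term half-life: {half_life:.1f} periods")
```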
163. Time and frequency domain statistical methods for high-frequency time series. Elayouty, Amira Sherif Mohamed, January 2017.
Advances in sensor technology enable environmental monitoring programmes to record and store measurements at high temporal resolution over long time periods. These large volumes of high-frequency data provide an increasingly comprehensive picture of many environmental processes that would not have been accessible in the past with monthly, fortnightly or even daily sampling. However, benefiting from these increasing amounts of high-frequency data presents various challenges in terms of data processing and statistical modelling using standard methods and software tools. These challenges are attributed to the large volumes of data, the persistent and long-memory serial correlation in the data, the signal-to-noise ratio, and the complex and time-varying dynamics and inter-relationships between the different drivers of the process at different timescales. This thesis aims to use and develop a variety of statistical methods in both the time and frequency domains to effectively explore and analyse high-frequency time series data, as well as to reduce their dimensionality, with specific application to a three-year hydrological time series. Firstly, the thesis investigates the statistical challenges of exploring, modelling and analysing these large volumes of high-frequency time series. Thereafter, it uses and develops more advanced statistical techniques to: (i) better visualise and identify the different modes of variability and common patterns in such data, and (ii) provide a more adequate dimension-reduction representation of the data, which takes into account the persistent serial dependence structure and non-stationarity in the series. Throughout the thesis, a 15-minute resolution time series of excess partial pressure of carbon dioxide (EpCO2) obtained for a small catchment of the River Dee in Scotland has been used as an illustrative data set. Understanding the bio-geochemical and hydrological drivers of EpCO2 is very important to the assessment of the global carbon budget. Specifically, Chapters 1 and 2 present a range of advanced statistical approaches in both the time and frequency domains, including wavelet analysis and additive models, to visualise and explore temporal variations and relationships between variables for the River Dee data across different timescales, and to investigate the statistical challenges posed by such data. In Chapter 3, a functional data analysis approach is employed to identify the common daily patterns of EpCO2 by means of functional principal component analysis and functional cluster analysis. The techniques used in this chapter assume independent functional data. However, in numerous applications, functional observations are serially correlated over time, e.g. where each curve represents a segment of the whole time interval. In this situation, ignoring the temporal dependence may result in an inappropriate dimension reduction of the data and inefficient inference procedures. Subsequently, the dynamic functional principal components, recently developed by Hörmann et al. (2014), are considered in Chapter 4 to account for the temporal correlation using a frequency domain approach. A specific contribution of this thesis is the extension of the methodology of dynamic functional principal components to temporally dependent functional data estimated using any type of basis functions, not only orthogonal basis functions.
Based on the scores of the proposed general version of dynamic functional principal components, a novel clustering approach is proposed and used to cluster the daily curves of EpCO2, taking into account the dependence structure in the data. The dynamic functional principal components depend in their construction on the assumption of second-order stationarity, which is not a realistic assumption in most environmental applications. Therefore, in Chapter 5, a second specific contribution of this thesis is the development of time-varying dynamic functional principal components, which allow the components to vary smoothly over time. The performance of these smooth dynamic functional principal components is evaluated empirically using the EpCO2 data and a simulation study. The simulation study compares the performance of smooth and original dynamic functional principal components under both stationary and non-stationary conditions. The smooth dynamic functional principal components show considerable improvement in representing non-stationary dependent functional data in smaller dimensions. Using a bootstrap inference procedure, the smooth dynamic functional principal components are subsequently employed to investigate whether or not the spectral density and covariance structure of the functional time series under study change over time. To account for possible changes in the covariance structure, a clustering approach based on the proposed smooth dynamic functional principal components is suggested and the results of its application are discussed. Finally, Chapter 6 provides a summary of the work presented within this thesis, discusses the limitations and implications and proposes areas for future research.
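As context for the functional approaches above, the following is a minimal Python sketch of static functional PCA applied to daily curves formed from a 15-minute series. The toy data and names are assumptions; the thesis's dynamic and smooth dynamic variants extend this by filtering the scores across neighbouring days and allowing the components to evolve over time.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, pts_per_day = 200, 96          # 96 fifteen-minute intervals per day

# Toy EpCO2-like data: a common diurnal cycle plus day-specific noise.
grid = np.linspace(0, 1, pts_per_day)
curves = (np.sin(2 * np.pi * grid)[None, :]
          * (1 + 0.3 * rng.standard_normal((n_days, 1)))
          + 0.2 * rng.standard_normal((n_days, pts_per_day)))

mean_curve = curves.mean(axis=0)
centred = curves - mean_curve

# Eigendecomposition of the empirical covariance via SVD: rows of Vt are the
# functional principal components; scores give each day's coordinates.
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
scores = centred @ Vt.T                # one row of scores per daily curve

print(f"Variance explained by first 2 FPCs: {explained[:2].sum():.2%}")
# Static FPCA treats days as independent; dynamic FPCA instead maximises
# variance explained using lagged linear combinations of the curves.
```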
164. Modelling statistical variability within circuits using nano-CMOS technologies. Merrett, Michael, January 2012.
Systems have been designed and synthesised using CMOS technology for many years, with improvements in the fabrication process allowing designs to be scaled onto smaller areas with relative ease. The introduction of nano-scale CMOS technologies has ended this era of simple scaling, as variations within the silicon now dramatically affect circuit performance and manufacturing yield. These random physical variations cannot be removed from the manufacturing process, so their effects must be modelled, predicted and accommodated within the design process. This thesis presents an investigation into the challenges of including these effects within the design process, with a review of recent research on incorporating variability within timing analysis tools. The conclusion from the literature review is that an accurate, efficient and transparent method of predicting the impact of statistical process variations on the performance of a circuit has not yet been created and adopted by the IC design industry. The investigation begins with the modelling of transistor-based statistical process variations at the standard cell level, where it is determined that simple statistical models do not accurately reflect the extremes in performance and can provide overly pessimistic predictions. The techniques of Monte Carlo Cell Characterisation (MCCC) and Monte Carlo Static Timing Analysis (MCSTA) are introduced as more suitable approaches, which accurately reflect the performance of circuits as modelled by Monte Carlo SPICE simulations, with far less pessimism than the traditional method of corner analysis or even modern statistical static timing analysis. The final section of this thesis focuses on practical implementations of MCSTA, where the sample sizes required to accurately predict circuit behaviour (to within 1% of SPICE) can be reduced to as few as ten using simple statistical sampling techniques.
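The following is a minimal sketch of the Monte Carlo STA idea on a toy combinational circuit: each sample draws per-cell delays and propagates worst-case arrival times through the DAG. The netlist and independent Gaussian delay models are illustrative assumptions, not the thesis's cell libraries.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy netlist in topological order: gate -> (fan-in gates, mean delay ps, sigma ps)
netlist = {
    "g1": ([],           50.0, 5.0),
    "g2": ([],           40.0, 4.0),
    "g3": (["g1", "g2"], 60.0, 8.0),
    "g4": (["g2"],       30.0, 3.0),
    "g5": (["g3", "g4"], 70.0, 9.0),   # output gate
}

n_samples = 10_000
arrival = {}
for gate, (fanin, mu, sigma) in netlist.items():
    delay = rng.normal(mu, sigma, n_samples)       # per-sample cell delay
    if fanin:
        inputs = np.max([arrival[g] for g in fanin], axis=0)
    else:
        inputs = np.zeros(n_samples)
    arrival[gate] = inputs + delay                 # latest arrival at gate output

crit = arrival["g5"]
print(f"mean={crit.mean():.1f}ps  3-sigma={crit.mean() + 3 * crit.std():.1f}ps")
# Corner analysis would stack every gate's worst case; the Monte Carlo
# distribution shows how pessimistic that can be.
```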
165. Informative censoring in transplantation statistics. Staplin, Natalie, January 2012.
Observations are informatively censored when there is dependence between the time to the event of interest and the time to censoring. When considering the time to death of patients on the waiting list for a transplant, particularly a liver transplant, patients who are removed for transplantation are potentially informatively censored, as generally the most ill patients are transplanted. If this censoring is assumed to be non-informative then any inferences may be misleading. The existing methods in the literature that account for informative censoring are applied to data to assess their suitability for the liver transplantation setting. As the amount of dependence between the time-to-failure and time-to-censoring variables cannot be identified from the observed data, estimators that give bounds on the marginal survival function for a given range of dependence values are considered. However, the bounds are too wide to be of use in practice. Sensitivity analyses are also reviewed, as these allow us to assess how inferences are affected by assuming differing amounts of dependence, and whether methods that account for informative censoring are necessary. Of the other methods considered, inverse probability of censoring weighted (IPCW) estimators were found to be the most useful in practice. Sensitivity analyses for parametric models are less computationally intensive than those for Cox models, although they are not suitable for all data sets. Therefore, we develop a sensitivity analysis for piecewise exponential models that is still quick to apply; these models are flexible enough to be suitable for a wide range of baseline hazards. The sensitivity analysis suggests that, in the liver transplantation setting, inferences about time to failure are sensitive to informative censoring. A simulation study shows that the sensitivity analysis is accurate in many situations, although not when there is a large proportion of censoring in the data set. Finally, a method to calculate the survival benefit of liver transplantation is adapted to make it more suitable for UK data. This method calculates the expected change in post-transplant mortality relative to waiting-list mortality, and uses IPCW methods to account for the informative censoring encountered when estimating waiting-list mortality, to ensure the estimated survival benefit is as accurate as possible.
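To make the IPCW mechanics concrete, the sketch below estimates a failure-time distribution by up-weighting each observed death by the inverse of an estimated censoring survivor function. The simulated data and all names are illustrative assumptions; in the transplant setting the weights would come from a model of removal for transplantation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
failure = rng.exponential(10.0, n)       # latent time to death on the list
censor = rng.exponential(15.0, n)        # latent time to transplant/censoring
time = np.minimum(failure, censor)
event = (failure <= censor).astype(float)

# Kaplan-Meier estimate of the censoring survivor function G(t),
# treating censoring as the "event".
order = np.argsort(time)
t_sorted, e_sorted = time[order], event[order]
at_risk = np.arange(n, 0, -1)
G = np.cumprod(1.0 - (1.0 - e_sorted) / at_risk)

# IPCW estimate of the cumulative incidence of death: each observed death is
# up-weighted by 1/G(t-) to stand in for comparably censored patients.
G_minus = np.concatenate(([1.0], G[:-1]))
weights = e_sorted / np.clip(G_minus, 1e-10, None)
F_hat = np.cumsum(weights) / n

idx = np.searchsorted(t_sorted, 5.0, side="right") - 1
print(f"IPCW P(death by t=5): {F_hat[idx]:.3f}")
print(f"True  P(death by t=5): {1 - np.exp(-5 / 10):.3f}")
```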
166. Leaving lone parenthood: analysis of the repartnering patterns of lone mothers in the U.K. Skew, Alexandra Jane, January 2009.
Despite a wealth of research in the U.K. on the stock of lone parents, in recent years there has been a lack of research on the dynamics of lone parenthood, particularly leaving lone parenthood. In an attempt to fill this gap, this thesis provides a detailed study of the repartnering patterns of lone mothers in the U.K. The study uses the first 14 waves of the British Household Panel Survey (BHPS), a nationally representative survey conducted annually which interviews every adult member of a sample of around 5,000 households, amounting to around 10,000 individual interviews. These data are particularly advantageous because of their prospective longitudinal nature, allowing lone mothers to be captured at the point of entry into lone motherhood and their repartnering patterns to be analysed over subsequent waves. In addition, the data enabled the construction of marital and cohabitation histories for lone mothers in order to control for any effect of prior union history on the probability of repartnering. Employing discrete-time event history analysis techniques, the first part of this research examines repartnering among two distinct groups of lone mothers: those entering through the breakdown of a cohabiting or marital union, and those entering through the birth of a child whilst single and never married. Of particular interest is the effect of these different routes of entry into lone motherhood on the timing and determinants of repartnering and the types of new unions formed. The second part of the study seeks to identify whether repartnering is associated with improved well-being for lone mothers. Using a series of pooled logistic regression models, the thesis explores the association of repartnering with transitions in three domains: economic, demographic and health. Amongst those entering lone motherhood through the breakdown of a previous partnership, the most important determinant of repartnering is found to be age at entry into lone motherhood. However, the economic situation of a lone mother, in particular whether or not she was receiving Income Support, has a much stronger influence on repartnering among single never-married lone mothers than age. The duration of lone motherhood is found to be similar for both types of lone mother, estimated at around five years; however, controlling for a number of demographic and socio-economic factors suggests that the probability of repartnering is lower for those entering through the breakdown of a cohabitation compared with those entering through the dissolution of a marriage. There appears to be a preference for cohabitation over marriage, with nearly three quarters of those who repartnered moving into a cohabiting union, although the higher chance of moving into a marriage for those who were previously married appears to result from a high proportion reconciling with a former partner. Examining the relationship between repartnering and other transitions occurring in the three domains reveals that repartnering is likely to occur against a backdrop of other changes. Repartnering is strongly associated with an improvement in financial situation, residential mobility and an increase in the number of resident dependent children. Although no direct link is found between repartnering and improved mental health outcomes, the strong association between improved financial well-being and an improvement in mental health indicates that repartnering may be indirectly related to better mental health. However, the finding of a direct association between poorer mental health and repartnering warrants further investigation.
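Discrete-time event history analysis of this kind reduces to a pooled logistic regression on a person-period file. The sketch below illustrates that mechanic on simulated data; the covariates, effect sizes and sample are hypothetical stand-ins for the BHPS variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
rows = []
for i in range(500):
    age_entry = rng.uniform(18, 45)
    income_support = rng.integers(0, 2)
    for wave in range(1, 15):                       # up to 14 BHPS waves
        # Hypothetical true model: hazard falls with age and benefit receipt.
        logit = -1.5 - 0.05 * (age_entry - 30) - 0.8 * income_support
        p = 1 / (1 + np.exp(-logit))
        event = rng.random() < p
        rows.append((i, wave, age_entry, income_support, int(event)))
        if event:
            break                                   # exit risk set on repartnering

pp = pd.DataFrame(rows, columns=["id", "wave", "age_entry", "is_recipient", "repartnered"])

# Pooled logistic regression on person-periods approximates the
# discrete-time hazard model; "wave" captures duration dependence.
X = sm.add_constant(pp[["wave", "age_entry", "is_recipient"]])
model = sm.Logit(pp["repartnered"], X).fit(disp=0)
print(model.params)
```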
167. Spatiotemporal population modelling to assess exposure to flood risk. Smith, Alan D., January 2015.
No description available.
168. An investigation into establishing a generalised approach for defining similarity metrics between 3D shapes for the casting design problem in case-based reasoning (CBR). Saeed, Soran, January 2006.
This thesis investigates the feasibility of establishing a generalised approach for defining similarity metrics between 3D shapes for the casting design problem in Case-Based Reasoning (CBR). The research investigates a new approach for improving the quality of casting design advice obtained from a CBR system using casting design knowledge associated with past cases. The new approach uses similarity metrics enhanced over those used in previous research in this area to improve the advice given. The new similarity metrics proposed here are based on the decomposition of casting shape cases into sets of components. The research defines and uses the Component Type Similarity Metric (CTM) and a Maximum Common Subgraph (MCS) metric between graph representations of the case shapes, and focuses on the definition of partial similarity between components of the same type, taking into account the geometrical features and proportions of each shape component. The investigation also extends the scope of the research to 3D shapes by defining and evaluating a new metric for the overall similarity between 3D shapes, and examines a methodology for integrating the CBR cycle with automated feature extraction from target and source case shapes. The ShapeCBR system has been developed to demonstrate the feasibility of integrating the CBR approach for retrieving and reusing casting design advice. ShapeCBR automates the decomposition, classification and shape-matching processes, and is used to evaluate the new similarity metrics proposed in this research and the extension of the approach to 3D shapes. Evaluation shows that the efficiency of the system is enhanced by the new similarity metrics and that the new approach provides useful casting design information for 3D casting shapes. ShapeCBR also shows that it is possible to automate the decomposition and classification of components, allowing a case shape to be represented in graph form and thus providing the basis for automating the overall CBR cycle. The thesis concludes with new research questions that emerge from this work and an agenda for further research in the area.
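A minimal sketch of a component-type similarity computation in this spirit is given below. The component types, attribute names and greedy matching are illustrative assumptions rather than the CTM definition used in the thesis: each shape is decomposed into typed components, same-type components are compared on their proportions, and the scores are aggregated to a shape-level similarity.

```python
def component_similarity(a: dict, b: dict) -> float:
    """Partial similarity of two same-type components from their proportions."""
    ratios = [min(a[k], b[k]) / max(a[k], b[k]) for k in ("length", "width", "height")]
    return sum(ratios) / len(ratios)

def shape_similarity(shape_a: list, shape_b: list) -> float:
    """Greedy best-match of same-type components; unmatched components score 0."""
    unmatched = list(shape_b)
    total = 0.0
    for comp in shape_a:
        candidates = [c for c in unmatched if c["type"] == comp["type"]]
        if candidates:
            best = max(candidates, key=lambda c: component_similarity(comp, c))
            total += component_similarity(comp, best)
            unmatched.remove(best)
    return total / max(len(shape_a), len(shape_b))

target = [{"type": "cylinder", "length": 10, "width": 4, "height": 4},
          {"type": "box", "length": 8, "width": 6, "height": 2}]
source = [{"type": "cylinder", "length": 12, "width": 4, "height": 4},
          {"type": "box", "length": 8, "width": 5, "height": 2}]
print(f"shape similarity: {shape_similarity(target, source):.3f}")
```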
169. Contesting identity and preventing belonging?: an analysis of British counter-terrorism policy since the Terrorism Act 2000 and the selective use of the terrorism label by the British Government. Norris, Maria, January 2015.
In 2013, Lee Rigby was murdered in Woolwich. In retaliation, there were several attacks on the Muslim community. Both series of events fall under the Terrorism Act 2000 legal definition of terrorism; nonetheless, only Rigby's murder was treated as an act of terror by the government. This raises the question: if terrorism is defined in a broad and neutral way legally, what explains the selective use of the label of terrorism by the UK government? Answering this question begins by looking at terrorism from the perspective of Critical Terrorism Studies, approaching the label of terrorism as an act of securitization. As such, the thesis goes beyond the legal definition of terrorism, seeking to unearth the official policy narrative of terrorism in the UK. In order to do this, it analyses the three versions of Contest: The United Kingdom's Strategy for Countering Terrorism, the government's official terrorism policy papers. The analysis reveals an official policy narrative of terrorism which securitizes Islam, Muslims and Muslim identity by constructing a causal story that places ideology and identity at the heart of the explanation for terrorism. Moreover, the concern with identity gives the narrative a strong nationalist character. This is further deconstructed using the boundary-security nexus, which incorporates boundary and nationalism theory into securitization and helps to explain how discursive constructions of security and identity work in a dialectical relationship. Once the nexus is introduced, it becomes clear how the selective use of the terrorism label by the government may not only further securitize Islam and the Muslim community, but also act as a way of protecting and reinforcing the bounded community of the nation state.
170. Applications of random matrix theory to portfolio management and financial networks. Eterovic, Nicolas, January 2016.
This thesis is an application of Random Matrix Theory (RMT) to portfolio management and financial networks. From a portfolio management perspective, we apply the RMT approach to clean measurement noise from correlation matrices constructed for large portfolios of FTSE 100 stocks. We apply this methodology to a number of correlation estimators: the sample correlation matrix, the Constant Conditional Correlation (CCC) model of Bollerslev (1990), the Dynamic Conditional Correlation (DCC) model of Engle (2002) and the regime-switching beta CAPM correlation model based on Ang and Bekaert (2004). For these estimators, we find that RMT-filtering delivers portfolios with the lowest realised risk, the best prediction accuracy and the highest cumulated returns and Sharpe ratios. The gains from RMT-filtering, in terms of cumulated wealth, range from 65% for the sample correlation matrix to 30% for the regime-dependent correlation estimator. In the case of regime-switching CAPM models, the regime-switching correlation matrices in the high-volatility regime are found to be a good filter in themselves, making further RMT-filtering redundant. This establishes the validity of using regime-sensitive portfolio management to deal with asymmetric asset correlations during high- and low-volatility regimes. From a financial network perspective, we assess the stability of a global banking network built from bilateral exposures of 18 BIS reporting banking systems to net debtor countries. For this, we applied the eigen-pair method of Markose (2012), which is based on the work of May (1972, 1974) for random networks. We use a stability condition based on the maximum eigenvalue (λmax) of a matrix of net bilateral exposures relative to equity capital as a systemic risk index (SRI). We provide evidence of the early-warning capabilities of λmax when it surpasses a prespecified threshold, and use the right and left associated eigenvectors as gauges of systemic importance and systemic vulnerability, respectively. The λmax SRI was found to be superior in terms of early warning when compared with standard SRIs based on market price data, viz. the DCC-MES of Acharya et al. (2010), the SRISK of Acharya et al. (2012) and the DCC-ΔCoVaR of Adrian and Brunnermeier (2011).
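The RMT noise-cleaning step can be sketched in a few lines. The example below applies Marchenko-Pastur filtering to the sample correlation matrix of simulated returns; the dimensions and the pure-noise return panel are assumptions for illustration, standing in for the FTSE 100 data.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 100, 500                              # assets, observations
returns = rng.standard_normal((T, N))        # pure-noise returns for illustration

C = np.corrcoef(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)

# Marchenko-Pastur upper edge for a pure-noise correlation matrix:
# eigenvalues below lambda_plus are indistinguishable from noise.
q = N / T
lambda_plus = (1 + np.sqrt(q)) ** 2

noise = eigval < lambda_plus
cleaned = eigval.copy()
# Replace noisy eigenvalues by their average so the trace (total variance)
# is preserved, then rebuild the correlation matrix.
cleaned[noise] = eigval[noise].mean()
C_clean = eigvec @ np.diag(cleaned) @ eigvec.T
np.fill_diagonal(C_clean, 1.0)

print(f"lambda_plus = {lambda_plus:.2f}, "
      f"{noise.sum()} of {N} eigenvalues treated as noise")
```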