61

On variable selection in high dimensions, segmentation and multiscale time series

Baranowski, Rafal January 2016 (has links)
In this dissertation, we study the following three statistical problems. First, we consider a high-dimensional data framework, where the number of covariates potentially affecting the response is large relative to the sample size. In this setting, some of the covariates are observed to exhibit an impact on the response spuriously. Addressing this issue, we rank the covariates according to their impact on the response and use a certain subsampling scheme to identify the covariates which non-spuriously appear at the top of the ranking. We study the conditions under which such a set is unique and show that, with high probability, it can be recovered from the data by our procedure, for rankings based on measures commonly used in statistics. We illustrate its good practical performance in an extensive comparative simulation study and on microarray data. Second, we propose a generic approach to the problem of detecting the unknown number of features in the time series of interest, such as changes in trend or jumps in the mean, occurring at unknown locations in time. Those locations naturally imply a decomposition of the data into segments of homogeneity, knowledge of which is useful in e.g. estimation of the mean of the series. We provide a precise description of the type of features we are interested in and, in two important scenarios, demonstrate that our methodology enjoys appealing theoretical properties. We show that the performance of our proposal matches or surpasses the state of the art in the scenarios tested and present its applications on three real datasets: oil price log-returns, temperature anomalies data and the UK House Price Index. Finally, we introduce a class of univariate multiscale time series models and propose an estimation procedure to fit those models to the data. We demonstrate that our proposal, with large probability, correctly identifies important timescales, under a framework in which the largest timescale in the model diverges with the sample size. The good empirical performance of the method is illustrated in an application to high-frequency financial returns for stocks listed on the New York Stock Exchange. For all proposed methods, we provide efficient and publicly available computer implementations.
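As a rough illustration of the first problem only (not the procedure developed in the thesis), the idea of ranking covariates and keeping those that appear at the top consistently across subsamples can be sketched as follows; the ranking measure (absolute marginal correlation) and all thresholds here are illustrative assumptions.

```python
import numpy as np

def top_ranked_stable(X, y, k=10, n_subsamples=100, keep_frac=0.5, min_freq=0.9, seed=0):
    """Rank covariates by absolute marginal correlation with y on random
    subsamples and keep those that land in the top k sufficiently often.
    Illustrative sketch only; the thesis uses its own measures and scheme."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = int(keep_frac * n)
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)
        Xs, ys = X[idx], y[idx]
        # absolute marginal correlation of each covariate with the response
        Xc = Xs - Xs.mean(axis=0)
        yc = ys - ys.mean()
        denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
        score = np.abs(Xc.T @ yc) / np.maximum(denom, 1e-12)
        counts[np.argsort(score)[::-1][:k]] += 1
    return np.where(counts / n_subsamples >= min_freq)[0]
```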
62

Improving predictability of the future by grasping probability less tightly

Wheatcroft, Edward January 2015 (has links)
In the last 30 years, whilst there has been an explosion in our ability to make quantitative predictions, less progress has been made in terms of building useful forecasts to aid decision support. In most real-world systems, single point forecasts are fundamentally limited because they only simulate a single scenario and thus do not account for observational uncertainty. Ensemble forecasts aim to account for this uncertainty but are of limited use since it is unclear how they should be interpreted. Building probabilistic forecast densities is a theoretically sound approach with an end result that is easy to interpret for decision makers; it is not clear, however, how to implement this approach given finite ensemble sizes and structurally imperfect models. This thesis explores methods that aid the interpretation of model simulations as predictions of the real world. This includes evaluation of forecasts, evaluation of the models used to make forecasts and evaluation of the techniques used to interpret ensembles of simulations as forecasts. Bayes' theorem is a fundamental relationship used to update a prior probability of the occurrence of some event given new information. Under the assumption that each of the probabilities in Bayes' theorem is perfect, it can be shown to make optimal use of the information available. Bayes' theorem can also be applied to probability density functions, and thus updating some previously constructed forecast density with a new one can be expected to improve forecast skill, as long as each forecast density gives a good representation of the uncertainty at that point in time. The relevance of the probability calculus, however, is in doubt when the forecasting system is imperfect, as is always the case in real-world systems. Taking the view that we wish to maximise the logarithm of the probability density placed on the outcome, two new approaches to the combination of forecast densities formed at different lead times are introduced and shown to be informative even in the imperfect model scenario, that is, a case where the Bayesian approach is shown to fail.
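For reference, the two standard quantities the abstract invokes can be written down directly; these are textbook forms, not anything specific to the thesis's new combination methods.

```latex
% Bayes' theorem for forecast densities: updating a prior density p(x)
% with new information y via the likelihood p(y | x).
p(x \mid y) = \frac{p(y \mid x)\, p(x)}{\int p(y \mid x')\, p(x')\, \mathrm{d}x'}

% Logarithmic score of a forecast density p evaluated at the outcome X:
% larger values mean more probability density was placed on what happened.
S(p, X) = \log p(X)
```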
63

In-sample forecasting : structured models and reserving

Hiabu, Munir January 2016 (has links)
In most developed countries, the insurance sector accounts for around eight percent of GDP. In Europe alone, insurers' liabilities are estimated at around €900 billion. Every insurance company regularly estimates its liabilities and reports them, in conjunction with statements about capital and assets, to the regulators. The liabilities determine the insurer's solvency and also its pricing and investment strategy. The new EU directive, Solvency II, which came into effect at the beginning of 2016, states that those liabilities should be estimated with ‘realistic assumptions’ using ‘relevant actuarial and statistical methods’. However, modern statistics has not found its way into the reserving departments of today's insurance companies. This thesis attempts to contribute to the connection between the world of mathematical statistics and reserving practice in general insurance. In particular, it is shown that today's reserving practice can be understood as a non-parametric estimation approach in a structured model setting. The forecast of future claims is done without the use of exposure information, i.e., without knowledge of the number of underwritten policies. New statistical estimation techniques and properties are derived which are built on this motivating application.
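The reserving practice referred to here is, in essence, the classical chain-ladder calculation on a run-off triangle of aggregated claims. A minimal sketch of that classical calculation, purely as background and not the structured non-parametric estimators developed in the thesis, might look like this (the function name and triangle layout are illustrative assumptions):

```python
import numpy as np

def chain_ladder_reserve(triangle):
    """Classical chain-ladder on a cumulative run-off triangle.
    triangle[i, j] = cumulative claims for accident year i after j
    development years; np.nan marks cells not yet observed."""
    T = triangle.astype(float).copy()
    n = T.shape[0]
    # development factors f_j estimated from observed adjacent columns
    factors = []
    for j in range(n - 1):
        rows = ~np.isnan(T[:, j]) & ~np.isnan(T[:, j + 1])
        factors.append(T[rows, j + 1].sum() / T[rows, j].sum())
    # project each accident year to ultimate using the factors
    for i in range(n):
        for j in range(n - 1):
            if np.isnan(T[i, j + 1]):
                T[i, j + 1] = T[i, j] * factors[j]
    ultimate = T[:, -1]
    latest = np.array([row[~np.isnan(row)][-1] for row in triangle])
    return ultimate - latest  # outstanding reserve per accident year
```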
64

The educational and labour market expectations of adolescents and young adults

Jerrim, John January 2011 (has links)
Understanding why some suitably qualified young adults go on to enter higher education and others do not has been the subject of extensive research by social scientists from a range of disciplines. Economists suggest that young adults' willingness to invest in a tertiary qualification depends upon what they believe the costs and benefits of this investment will be. On the other hand, sociologists stress that an early expectation of completing university is a key driver of later participation in higher education. Children's subjective beliefs about the future (their "expectations") are a consistent theme within these distinctively different approaches. Researchers from both disciplines might argue that children's low or mistaken expectations (of future income, financial returns, or their ability to complete university) might lead them into making inappropriate educational choices. For instance, young adults who do not have a proper understanding of the graduate labour market may mistakenly invest (or not invest) in tertiary education. Alternatively, some academically talented children may not enter university if they do not see it as a realistic possibility, or feel that it is 'not for the likes of them'. I take an interdisciplinary approach within this thesis to tackle both of these issues. Specifically, I investigate whether young adults have realistic expectations about their future in the labour market, and whether disadvantaged children scoring high marks on a maths assessment at age 15 believe they can complete university.
65

Model-based approaches to the estimation of poverty measures in small areas

Donbavand, Steven January 2015 (has links)
No description available.
66

Bias corrections in multilevel modelling of survey data with applications to small area estimation

Correa, Solange Trinidade January 2008 (has links)
In this thesis, a general approach for correcting the bias of an estimator and for obtaining estimates of the accuracy of the bias-corrected estimator is proposed. The method, termed extended bootstrap bias correction (EBS), is based on the bootstrap resampling technique and attempts to identify the functional relationship between the estimates obtained from the original and bootstrap samples and the true parameter values, drawn from a plausible parameter space. The bootstrap samples are used for studying the behaviour of the bias and, consequently, for the bias correction itself. The EBS approach is assessed by extensive Monte Carlo studies in three different applications of multilevel analysis of survey data. First, the proposed EBS method is applied to bias adjustment of unweighted and probability-weighted estimators of two-level model parameters under informative sampling designs with small sample sizes. Second, the EBS approach is considered for estimating the mean squared error (MSE) of predictors of small area means under the area-level Fay-Herriot model for different distributions of the model error terms. Finally, the EBS procedure is applied to MSE estimation of predictors of small area proportions under a unit-level generalized linear mixed model. The general conclusion emerging from this thesis is that the EBS approach is effective in providing bias-corrected estimators in all three cases considered.
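As background, the classical bootstrap bias correction that the EBS method extends can be sketched in a few lines; the EBS step of modelling how the bias varies over a plausible parameter space is not reproduced here, and the function below is only an illustrative assumption.

```python
import numpy as np

def bootstrap_bias_corrected(estimator, sample, n_boot=1000, seed=0):
    """Classical bootstrap bias correction: estimate the bias of `estimator`
    by resampling with replacement, then subtract it from the estimate."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    theta_hat = estimator(sample)
    boot = np.array([estimator(sample[rng.integers(0, len(sample), len(sample))])
                     for _ in range(n_boot)])
    bias = boot.mean() - theta_hat   # bootstrap estimate of the bias
    return theta_hat - bias          # bias-corrected estimate

# Example: correcting the (biased) plug-in variance estimator
data = np.random.default_rng(1).normal(size=30)
corrected = bootstrap_bias_corrected(lambda x: x.var(), data)
```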
67

Estimation of population totals from imperfect census, survey and administrative records

Baffour-Awuah, Bernard January 2009 (has links)
The theoretical framework for estimating population totals from the Census, Survey and an Administrative Records List is based on capture-recapture methodology, which has traditionally been employed for measuring the abundance of biological populations. Under this framework, in order to estimate the unknown population total, N, an initial set of individuals is captured. Further subsequent captures are taken at later periods. The possible capture histories can be represented by the cells of a 2^r contingency table, where r is the number of captures. This contingency table will have one cell missing, corresponding to the population missed in all r captures. If this cell count can be estimated, adding it to the sum of the observed cells will yield the population size of interest. There are a number of models that may be specified based on the incomplete 2^r contingency table.
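In the simplest case of r = 2 captures (say, census and survey), the observed counts form a 2 x 2 table with one missing cell, and assuming independence between the two captures gives the classical dual-system (Lincoln-Petersen) estimator. This is standard background rather than the models developed in the thesis.

```latex
% n_{11}: caught in both captures; n_{10}: first only; n_{01}: second only;
% n_{00}: missed by both (unobserved).  Under independence of captures,
\hat{n}_{00} = \frac{n_{10}\, n_{01}}{n_{11}},
\qquad
\hat{N} = n_{11} + n_{10} + n_{01} + \hat{n}_{00}
        = \frac{(n_{11} + n_{10})(n_{11} + n_{01})}{n_{11}}.
```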
68

On variance estimation under complex sampling designs

Lopez Escobar, Emilio January 2013 (has links)
This thesis is formed of three manuscripts (chapters) about variance estimation. Each chapter focuses on developing new variance estimators. Chapter 1 proposes a novel jackknife variance estimator for self-weighted two-stage sampling. Customary jackknives for these designs rely only on the first sampling stage. This omission may induce a bias in the variance estimation when cluster sizes vary, second-stage sampling fractions are small, or there is low variability between clusters. The proposed jackknife accounts for all sampling stages via deletion of clusters and of observations within clusters. It does not need joint-inclusion probabilities and naturally includes finite population corrections. Its asymptotic design-consistency is shown. A simulation study shows that it can be more accurate than the customary jackknife used for this kind of sampling design (Rao, Wu and Yue, 1992). Chapter 2 proposes a new replication variance estimator for any unequal-probability without-replacement sampling design. The proposed replication estimator is approximately equal to the linearisation variance estimators obtained by the Demnati and Rao (2004) approach. It is more general than the Campbell (1980) and Berger and Skinner (2005) generalised jackknife. Its asymptotic design-consistency is shown. A simulation study shows it is more stable than standard jackknives (Quenouille, 1956; Tukey, 1958) with ad hoc finite population corrections and than the generalised jackknife (Campbell, 1980; Berger and Skinner, 2005). Chapter 3 proposes a new variance estimator which accounts for item non-response under unequal-probability without-replacement sampling when estimating a change from rotating (overlapping) repeated surveys. The proposed estimator combines the original approach by Berger and Priam (2010, 2012) and the non-response reverse approach for variance estimation (Fay, 1991; Shao and Steel, 1999). It gives design-consistent estimation of the variance of change when the sampling fraction is small. The proposed estimator uses random hot-deck imputation, but it can be implemented with other imputation techniques. There are two further complementary chapters. One introduces the R package samplingVarEst, which implements some of the variance estimation methods used in the simulations. Finally, a brief chapter discusses future research work.
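For orientation, the textbook delete-one jackknife that these proposals generalise (to two-stage, unequal-probability and rotating designs) is sketched below; it is not one of the estimators developed in the chapters, and the example is an illustrative assumption.

```python
import numpy as np

def jackknife_variance(estimator, sample):
    """Textbook delete-one jackknife variance of `estimator` under simple
    random sampling: recompute the estimate with each observation removed,
    then scale the spread of the leave-one-out estimates."""
    sample = np.asarray(sample)
    n = len(sample)
    loo = np.array([estimator(np.delete(sample, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

# Example: jackknife variance of the sample mean (equals s^2 / n)
var_mean = jackknife_variance(np.mean, np.random.default_rng(0).normal(size=50))
```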
69

Exploring political non-participation : conceptualising, distinguishing and explaining political apathy

Thompson, Emma January 2015 (has links)
No description available.
70

Applications of the dependence ratio association measure for multivariate categorical data

North, Robert January 2015 (has links)
No description available.
