71

Multiple Time Scales and Longitudinal Measurements in Event History Analysis

Danardono, January 2005 (has links)
A general time-to-event data analysis known as event history analysis is considered. The focus is on the analysis of time-to-event data using Cox's regression model when the time to the event may be measured from different origins, giving several observable time scales, and when longitudinal measurements are involved. For the multiple time scales problem, procedures to choose a basic time scale in Cox's regression model are proposed. The connections between piecewise constant hazards, time-dependent covariates and time-dependent strata in the dual time scales are discussed. For the longitudinal measurements problem, four methods known in the literature together with two proposed methods are compared. All quantitative comparisons are performed by means of simulations. Applications to the analysis of infant mortality, morbidity, and growth are provided.
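
The abstract above turns on the choice of a basic time scale for Cox regression. As a purely illustrative sketch (not code from the thesis; the data are simulated and the use of the lifelines package is an assumption), the snippet below fits the same covariate on two candidate scales: age, with late entry handled as left truncation, and time on study.

```python
# Illustrative only: simulated data, lifelines assumed available.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
sex = rng.integers(0, 2, n)
age_entry = rng.uniform(0.0, 0.5, n)                            # age when follow-up starts
time_to_event = rng.exponential(np.where(sex == 1, 0.8, 1.2))   # time from entry to event
time_to_censor = rng.uniform(0.2, 1.5, n)                       # time from entry to censoring
followup = np.minimum(time_to_event, time_to_censor)
event = (time_to_event <= time_to_censor).astype(int)

df = pd.DataFrame({"age_entry": age_entry, "age_exit": age_entry + followup,
                   "followup": followup, "event": event, "sex": sex})

# Basic time scale 1: age, with late entry treated as left truncation.
cox_age = CoxPHFitter().fit(df[["age_entry", "age_exit", "event", "sex"]],
                            duration_col="age_exit", event_col="event",
                            entry_col="age_entry")

# Basic time scale 2: time on study (follow-up time).
cox_fu = CoxPHFitter().fit(df[["followup", "event", "sex"]],
                           duration_col="followup", event_col="event")

print(cox_age.summary[["coef", "se(coef)"]])
print(cox_fu.summary[["coef", "se(coef)"]])
```

Comparing the two fits in this way mirrors the question of which origin should define the baseline hazard.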
72

Efficient strategies for collecting posture data using observation and direct measurement

Liv, Per January 2012 (has links)
Relationships between occupational physical exposures and risks of contracting musculoskeletal disorders are still not well understood; exposure-response relationships are scarce in the musculoskeletal epidemiology literature, and many epidemiological studies, including intervention studies, fail to reach conclusive results. Insufficient exposure assessment has been pointed out as a possible explanation for this deficiency. One important aspect of exposure assessment is the measurement strategy, which includes how many measurements are needed to provide sufficient information and how measurement effort should be allocated, both over time and between subjects, in order to achieve precise and accurate exposure estimates. These issues have been discussed mainly in the occupational hygiene literature on chemical exposures, while the corresponding literature on biomechanical exposure is sparse. The overall aim of the present thesis was to increase knowledge on the relationship between data collection design and the resulting precision and accuracy of biomechanical exposure assessments, represented in this thesis by upper-arm postures during work, an exposure which has been shown to be relevant to disorder risk. Four papers are included in the thesis. In papers I and II, non-parametric bootstrapping was used to investigate the statistical efficiency of different strategies for distributing upper-arm elevation measurements within and between working days into different numbers of measurement periods of differing durations. Paper I compared the different measurement strategies with respect to the eventual precision of the estimated mean exposure level. The results showed that it was more efficient to use a larger number of shorter measurement periods spread across a working day than a smaller number of longer, uninterrupted measurement periods, in particular if the total sample covered only a small part of the working day. Paper II evaluated sampling strategies for determining posture variance components, with respect to the accuracy and precision of the eventual variance component estimators. The paper showed that variance component estimators may be both biased and imprecise when based on sampling from small parts of working days, and that errors were larger with continuous sampling periods. The results suggest that larger posture samples than are conventionally used in ergonomics research and practice may be needed to achieve trustworthy estimates of variance components. Papers III and IV focused on method development. Paper III examined procedures for estimating statistical power when testing for a group difference in postures assessed by observation. Power determination was based either on a traditional analytical power analysis or on parametric bootstrapping, both of which accounted for the methodological variance introduced by the observers to the exposure data. The study showed that repeated observations of the same video recordings may be an efficient way of increasing the power of an observation-based study, and that observations can be distributed between several observers without loss of power, provided that all observers contribute data to both of the compared groups and that the statistical analysis model acknowledges observer variability.
Paper IV discussed calibration of an inferior exposure assessment method against a superior "gold standard" method, with particular emphasis on calibration of observed posture data against postures determined by inclinometry. The paper developed equations for bias correction of results obtained with the inferior instrument through calibration, as well as for determining the additional uncertainty in the eventual exposure value introduced by the calibration. In conclusion, the results of the present thesis emphasize the importance of carefully selecting a measurement strategy on the basis of statistically well-informed decisions. It is common in the literature for postural exposure to be assessed from one continuous measurement collected over only a small part of a working day. In paper I, this was shown to be highly inefficient compared to spreading the corresponding sample time across the entire working day, and the inefficiency was also obvious when assessing variance components, as shown in paper II. The thesis also shows how a well thought-out strategy for observation-based exposure assessment can reduce the effects of measurement error, both random methodological variance (paper III) and systematic observation errors, i.e. bias (paper IV).
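
As a rough illustration of the bootstrap comparison described for paper I (the exposure trace below is simulated and the function names are made up; this is not the thesis code), the sketch estimates how precisely a day's mean upper-arm elevation is recovered when the same total sample time is taken as one long period versus several short periods spread over the day.

```python
# Illustrative only: a fake minute-by-minute elevation trace stands in for real data.
import numpy as np

rng = np.random.default_rng(0)
day = np.abs(np.cumsum(rng.normal(0, 2, 480))) % 90   # 480 min of "upper-arm elevation" (degrees)

def sampled_mean(trace, n_periods, period_len, rng):
    """Mean exposure estimated from n_periods randomly placed periods of period_len minutes."""
    starts = rng.integers(0, len(trace) - period_len, n_periods)
    return np.concatenate([trace[s:s + period_len] for s in starts]).mean()

def bootstrap_se(trace, n_periods, period_len, reps=2000, rng=None):
    """Spread (standard deviation) of the estimated daily mean over many resamples."""
    if rng is None:
        rng = np.random.default_rng()
    return np.std([sampled_mean(trace, n_periods, period_len, rng) for _ in range(reps)])

# Same total sample time (60 min), allocated differently across the day.
print("1 x 60 min:", bootstrap_se(day, 1, 60, rng=rng))
print("6 x 10 min:", bootstrap_se(day, 6, 10, rng=rng))
```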
73

Bayesian time series and panel models : unit roots, dynamics and random effects

Salabasis, Mickael January 2004 (has links)
This thesis consists of four papers, and the main theme is dependence: through time, as in serial correlation, and across individuals, as in random effects. The individual papers may be grouped in many different ways. As presented, the first two are concerned with autoregressive dynamics, in a single time series and then in a panel context, while the subject of the final two papers is parametric covariance modeling. Though set in a panel context, the results in the latter are generally applicable. The approach taken is Bayesian. This choice is prompted by the coherent framework that the Bayesian principle offers for quantifying uncertainty and subsequently making inference in its presence. Recent advances in numerical methods have also made the Bayesian choice simpler. In the first paper, an existing model for conducting inference directly on the roots of the autoregressive polynomial is extended to include seasonal components and to allow for a polynomial trend of arbitrary degree. The resulting highly flexible model robustifies against misspecification by implicitly averaging over different lag lengths, numbers of unit roots and specifications of the deterministic trend. An application to Swedish real GDP illustrates the rich set of information about the dynamics of a time series that can be extracted using this modeling framework. The second paper offers an extension to a panel of time series. Limiting the scope, but at the same time simplifying matters considerably, the mean model is dropped, restricting the applicability to non-trending panels. The main motivation of the extension is the construction of a flexible panel unit root test. The proposed approach circumvents the classically confusing problem of stating a relevant null hypothesis. It offers the possibility of more distinct inference with respect to unit root composition in the collection of time series. It also addresses the two important issues of model uncertainty and cross-section correlation. The model is illustrated using a panel of real exchange rates to investigate the purchasing power parity hypothesis. Many interesting panel models imply a structure on the covariance matrix in terms of a small number of parameters. The third paper demonstrates how, by exploiting this structure, common panel data models lend themselves to direct sampling of the variance parameters. Though not always practical, the implementation can be described by a simple and generally applicable template. For the method to be practical, simple to program and quick to execute, it is essential that the inverse of the covariance matrix can be written as a reasonably simple function of the parameters of interest. Also preferable, but in no way necessary, are a computationally convenient expression for the determinant of the covariance matrix and a bounded support for the parameters. Using the template, the computations involved in direct sampling and effect selection are illustrated in the context of one-way and two-way random effects models, respectively. Having established direct sampling as a viable alternative in the previous paper, the generic template is applied to panel models with serial correlation in the fourth paper. In the case of pure serial correlation, with no random effects present, applying the template and using a Jeffreys-type prior leads to very simple computations.
In the very general setting of a mixed effects model with autocorrelated errors, direct sampling of all variance parameters does not appear to be possible, or at least not obviously practical. One important special case is identified: the model with random individual effects and autocorrelated errors. / Diss. Stockholm: Handelshögskolan i Stockholm, 2004. viii pp., pp. 1-9: summary, pp. 10-116: 4 papers.
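
The template in the third paper hinges on covariance matrices whose inverse and determinant are simple functions of the variance parameters. As a minimal illustration (not the thesis implementation), the sketch below uses the closed forms for a balanced one-way random effects block, Sigma = sig2_e*I + sig2_u*J, and checks them against brute-force linear algebra.

```python
# Illustrative only: closed-form inverse/determinant for a one-way random effects block.
import numpy as np

def inv_and_logdet(m, sig2_e, sig2_u):
    """Inverse and log-determinant of sig2_e * I_m + sig2_u * J_m (J_m = all-ones matrix)."""
    J = np.ones((m, m))
    inv = (np.eye(m) - (sig2_u / (sig2_e + m * sig2_u)) * J) / sig2_e   # Sherman-Morrison
    logdet = (m - 1) * np.log(sig2_e) + np.log(sig2_e + m * sig2_u)
    return inv, logdet

m, s2e, s2u = 5, 1.3, 0.7
Sigma = s2e * np.eye(m) + s2u * np.ones((m, m))
inv, logdet = inv_and_logdet(m, s2e, s2u)
print(np.allclose(inv, np.linalg.inv(Sigma)))             # True
print(np.isclose(logdet, np.linalg.slogdet(Sigma)[1]))    # True
```

With such closed forms, the log-posterior of the variance parameters can be evaluated on a grid or sampled cheaply without repeated numerical matrix inversion.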
75

The statistical theory underlying human genetic linkage analysis based on quantitative data from extended families

Galal, Ushma January 2010 (has links)
Traditionally in human genetic linkage analysis, extended families were only used in the analysis of dichotomous traits, such as Disease/No Disease. For quantitative traits, analyses initially focused on data from family trios (for example, mother, father, and child) or sib-pairs. Recently, however, there have been two very important developments in genetics: It became clear that if the disease status of several generations of a family is known and their genetic information is obtained, researchers can pinpoint which pieces of genetic material are linked to the disease or trait. It also became evident that if a trait is quantitative (numerical), as blood pressure or viral loads are, rather than dichotomous, one has much more power for the same sample size. This led to the development of statistical mixed models which could incorporate all the features of the data, including the degree of relationship between each pair of family members. This is necessary because a parent-child pair definitely shares half their genetic material, whereas a pair of cousins share, on average, only an eighth. The statistical methods involved here have, however, been developed by geneticists for their specific studies, so there does not seem to be a unified and general description of the theory underlying the methods. The aim of this dissertation is to explain, in a unified and statistically comprehensive manner, the theory involved in the analysis of quantitative trait genetic data from extended families. The focus is on linkage analysis: what it is and what it aims to do. There is a step-by-step build-up to it, starting with an introduction to genetic epidemiology. This includes an explanation of the relevant genetic terminology. There is also an application section where an appropriate human genetic family dataset is analysed, illustrating the methods explained in the theory sections.
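
To make the relatedness figures quoted above concrete, the sketch below builds a small hypothetical relationship matrix (entries are twice the kinship coefficient: 1/2 for parent-offspring, 1/8 for first cousins, 1/4 for an aunt and her sibling's child) and forms the implied trait covariance V = 2*Phi*sigma_g^2 + I*sigma_e^2 used in such mixed models. The pedigree and variance values are invented for illustration.

```python
# Illustrative only: a hypothetical four-member pedigree and its trait covariance.
import numpy as np

# Rows/columns: mother, father, child, and the child's first cousin on the mother's side.
# Entries are 2*Phi (expected proportion of shared genetic material).
two_phi = np.array([
    [1.0,  0.0,  0.5,   0.25 ],   # mother (aunt of the cousin)
    [0.0,  1.0,  0.5,   0.0  ],   # father (unrelated to the cousin)
    [0.5,  0.5,  1.0,   0.125],   # child
    [0.25, 0.0,  0.125, 1.0  ],   # first cousin
])

sigma_g2, sigma_e2 = 0.6, 0.4     # hypothetical additive genetic and residual variances
V = sigma_g2 * two_phi + sigma_e2 * np.eye(4)
print(V)                          # covariance of the quantitative trait across the family
```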
76

Prediction of recurrent events

Fredette, Marc January 2004 (has links)
In this thesis, we study issues related to prediction problems, with an emphasis on those arising when recurrent events are involved. The first chapter defines the basic concepts of frequentist and Bayesian statistical prediction. In the second chapter, we study frequentist prediction intervals and their associated predictive distributions, and present an approach based on asymptotically uniform pivotals that is shown to dominate the plug-in approach under certain conditions. The following three chapters consider the prediction of recurrent events. The third chapter presents different prediction models for events that can be modeled using homogeneous Poisson processes. Amongst these models, those using random effects are shown to possess interesting features. In the fourth chapter, the time homogeneity assumption is relaxed and we present prediction models for non-homogeneous Poisson processes. The behavior of these models is then studied for prediction problems with a finite horizon. In the fifth chapter, we apply the concepts discussed previously to a warranty dataset from the automobile industry. Because the number of processes in this dataset is very large, we focus on methods providing computationally rapid prediction intervals. Finally, we discuss possibilities for future research in the last chapter.
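
As a point of reference for the plug-in approach mentioned above (which the thesis shows can be dominated by a pivotal-based approach), the sketch below forms a naive plug-in prediction interval for the number of future events of a homogeneous Poisson process; the counts are made up.

```python
# Illustrative only: made-up warranty-style counts for a single homogeneous Poisson process.
from scipy.stats import poisson

n_events, t_observed = 23, 12.0        # 23 recurrences observed over 12 months
rate_hat = n_events / t_observed       # maximum likelihood estimate of the intensity

t_future = 6.0                         # predict the next 6 months
mu = rate_hat * t_future
lower, upper = poisson.ppf([0.025, 0.975], mu)
print(f"point prediction {mu:.1f} events, plug-in 95% interval [{lower:.0f}, {upper:.0f}]")
```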
78

Assessing Binary Measurement Systems

Danila, Oana Mihaela January 2012 (has links)
Binary measurement systems (BMS) are widely used in both the manufacturing industry and medicine. In industry, a BMS is often used to measure various characteristics of parts and then classify them as pass or fail, according to some quality standards. Good measurement systems are essential both for problem solving (i.e., reducing the rate of defectives) and for protecting customers from receiving defective products. As a result, it is desirable to assess the performance of the BMS as well as to separate the effects of the measurement system and the production process on the observed classifications. In medicine, BMSs are known as diagnostic or screening tests, and are used to detect a target condition in subjects, thus classifying them as positive or negative. Assessing the performance of a medical test is essential in quantifying the costs due to misclassification of patients, and in the future prevention of these errors. In both industry and medicine, the most commonly used characteristics to quantify the performance of a BMS are the two misclassification rates, defined as the chance of passing a nonconforming (non-diseased) unit, called the consumer's risk (false positive), and the chance of failing a conforming (diseased) unit, called the producer's risk (false negative). In most assessment studies, it is also of interest to estimate the conforming (prevalence) rate, i.e. the probability that a randomly selected unit is conforming (diseased). There are two main approaches for assessing the performance of a BMS. Both approaches involve measuring a number of units one or more times with the BMS. The first one, called the "gold standard" approach, requires the use of a gold-standard measurement system that can determine the state of units with no classification errors. When a gold standard does not exist, or is too expensive or time-consuming, another option is to repeatedly measure units with the BMS, and then use a latent class approach to estimate the parameters of interest. In industry, for both approaches, the standard sampling plan involves randomly selecting parts from the population of manufactured parts. In this thesis, we focus on a specific context commonly found in the manufacturing industry. First, the BMS under study is nondestructive. Second, the BMS is used for 100% inspection or any kind of systematic inspection of the production yield. In this context, we are likely to have available a large number of previously passed and failed parts. Furthermore, the inspection system typically tracks the number of parts passed and failed; that is, we often have baseline data about the current pass rate, separate from the assessment study. Finally, we assume that during the time of the evaluation, the process is under statistical control and the BMS is stable. Our main goal is to investigate the effect of using sampling plans that involve random selection of parts from the available populations of previously passed and failed parts, i.e. conditional selection, on the estimation procedure and the main characteristics of the estimators. We also demonstrate the value of combining the additional information provided by the baseline data with the data collected in the assessment study to improve the overall estimation procedure. Finally, we examine how the availability of baseline data and the use of a conditional selection sampling plan affect recommendations on the design of the assessment study.
In Chapter 2, we give a summary of the existing estimation methods and sampling plans for a BMS assessment study, in both industrial and medical settings, that are relevant in our context. In Chapters 3 and 4, we investigate the assessment of a BMS in the case where we assume that the misclassification rates are common for all conforming/nonconforming parts and that repeated measurements on the same part are independent, conditional on the true state of the part, i.e. conditional independence. We call models using these assumptions fixed-effects models. In Chapter 3, we look at the case where a gold standard is available, whereas in Chapter 4, we investigate the "no gold standard" case. In both cases, we show that using a conditional selection plan, along with the baseline information, substantially improves the accuracy and precision of the estimators, compared to the standard sampling plan. In Chapters 5 and 6, we investigate the case where we allow for possible variation in the misclassification rates within conforming and nonconforming parts, by proposing new random-effects models. These models relax the fixed-effects assumptions of constant misclassification rates and conditional independence. As in the previous chapters, we focus on investigating the effect of using conditional selection and baseline information on the properties of the estimators, and give study design recommendations based on our findings. In Chapter 7, we discuss other potential applications of the conditional selection plan, where the study data are augmented with baseline information on the pass rate, especially in the context where multiple BMSs are under investigation.
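
For orientation, here is a minimal sketch of the fixed-effects "gold standard" estimates discussed in Chapter 3: each sampled part is classified both by the BMS and by an error-free reference, and the two misclassification rates are estimated by simple proportions. The counts are hypothetical, and the sketch ignores the conditional-selection and baseline refinements that the thesis develops.

```python
# Illustrative only: hypothetical counts from a gold-standard assessment study.
n_conforming, n_nonconforming = 180, 120   # true states determined by the gold standard
failed_conforming = 9                      # conforming parts that the BMS failed
passed_nonconforming = 6                   # nonconforming parts that the BMS passed

producers_risk = failed_conforming / n_conforming        # chance of failing a conforming part
consumers_risk = passed_nonconforming / n_nonconforming  # chance of passing a nonconforming part
print(f"producer's risk ~ {producers_risk:.3f}, consumer's risk ~ {consumers_risk:.3f}")
```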
79

Carry-over and interaction effects of different hand-milking techniques and milkers on milk

HE, Ran January 1986 (has links)
The main aim of this thesis is to study the importance of carry-over effects and interaction effects in statistical models. To investigate this, a hand-milking experiment in Burkina Faso was studied. In many countries without access to electricity, such as Burkina Faso, the amount of milk and its composition still depend heavily on hand-milking techniques and on the milkers themselves. Moreover, time effects also play an important role in the stockbreeding system. Therefore, by fitting all effects, including carry-over effects and interaction effects, in a linear mixed effects model, it is concluded that the carry-over effects of milker and hand-milking technique cannot be neglected, and that the interaction effects among hand-milking techniques, milkers, days and periods can be substantial.
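
A sketch of the kind of linear mixed-effects model described above, written with the statsmodels formula interface; the data file and column names (yield, current and previous technique, milker, period, cow) are hypothetical, not taken from the thesis.

```python
# Illustrative only: column names and the CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hand_milking.csv")   # one row per milking occasion

model = smf.mixedlm(
    "milk_yield ~ C(technique) + C(prev_technique)"       # prev_technique = carry-over term
    " + C(milker) + C(period) + C(technique):C(milker)",  # plus an interaction term
    data=df,
    groups=df["cow"],                  # random intercept for each cow
)
print(model.fit().summary())
```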
80

Looking in the Crystal Ball: Determinants of Excess Return

Akolly, Kokou S 18 August 2010 (has links)
This paper investigates the determinants of excess returns using dividend yields as a proxy in a cross-sectional setting. First, we find that industry type and the current business cycle are determining factors of returns. Second, our results suggest that dividend yield serves as a signaling mechanism indicating the "healthiness" of a firm among prospective investors. Third, we see that there is a positive relationship between dividend yield and risk, especially in the utility and financial sectors. Finally, using actual excess returns instead of dividend yield in our model shows that all predictors of dividend yield were also significant predictors of excess returns. This connection between dividend yield and excess returns supports our use of dividend yield as a proxy for excess returns.
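
A sketch of the kind of cross-sectional regression implied above, with industry dummies and a business-cycle indicator; the data file, column names and the use of beta as the risk measure are assumptions, not taken from the paper.

```python
# Illustrative only: file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.read_csv("firm_year_data.csv")
fit = smf.ols("div_yield ~ C(industry) + recession + beta", data=firms).fit()
print(fit.summary())
```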
