201

Acceptability of healthcare interventions

Sekhon, Mandeep January 2017 (has links)
Background: Problems with the acceptability of healthcare interventions can undermine the validity of randomised evaluation studies, so assessing acceptability is an important methodological issue. However, the research literature provides little guidance on how to define and assess acceptability. The acceptability of a healthcare intervention can differ depending on the perspective taken: patients and healthcare professionals may have different views. Perceptions of acceptability may also change according to when acceptability is assessed, relative to a person's engagement with the intervention: prospective acceptability (i.e. prior to taking part in the intervention), concurrent acceptability (i.e. whilst taking part in the intervention) and retrospective acceptability (i.e. after participating in the intervention).

Objectives: The overall aim of this programme of research was to define acceptability in the context of healthcare interventions and to develop a Theoretical Framework of Acceptability (TFA) that can be applied to assess acceptability from two stakeholder perspectives: healthcare professionals and patients. The specific objectives were to: 1) identify, from the published literature, how the acceptability of healthcare interventions has been defined, operationalised and theorised; 2) theorise the concept of acceptability, develop the TFA to guide assessment and develop preliminary assessment tools; 3) use the tools to apply the TFA to assess intervention acceptability qualitatively; and 4) apply pre-validation methods to develop preliminary versions of two TFA-based questionnaires.

Methods: Six studies were conducted:
1. A systematic overview of reviews of published studies to investigate how the acceptability of healthcare interventions has been defined, theorised and assessed; the results of this study formed the basis for Study 2.
2. Inductive and deductive methods of reasoning applied to theorise acceptability and to develop the Theoretical Framework of Acceptability (TFA).
3. Semi-structured interviews with eligible participants who declined to participate in a randomised controlled trial (RCT) comparing a new patient-led model of care with standard care for managing blepharospasm and hemifacial spasm; the TFA was applied to identify whether participants' reasons for refusal were associated with prospective acceptability of the intervention or with other factors.
4. Application of the TFA to analyse semi-structured interviews assessing healthcare professionals' retrospective acceptability of two feedback interventions delivered in a research programme aimed at developing and evaluating audit and feedback interventions to increase evidence-based transfusion practice.
5. An extension of Study 3: semi-structured interviews at three-month follow-up with patients who agreed to participate in the RCT, to assess their concurrent acceptability of the standard and patient-led models of care for managing blepharospasm and hemifacial spasm.
6. Pre-validation methods applied to develop two TFA-based questionnaires applicable to the RCTs described in Studies 3, 4 and 5.

Results: Study 1: acceptability had not been theorised and there was no standard definition in the literature; operational definitions of acceptability were often reported and often reflected measures of observed behaviour.
Study 2: the proposed definition: acceptability is a multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention. The TFA was proposed as a multi-component framework that can be applied to assess intervention acceptability across three temporal perspectives: prospective, concurrent and retrospective. The TFA consists of seven component constructs: Affective attitude, Burden, Ethicality, Intervention Coherence, Opportunity Costs, Perceived Effectiveness and Self-efficacy.
Studies 3-5: it was feasible to apply the TFA in these empirical studies.
Study 6: two acceptability questionnaires were developed; the TFA informed the development of items reflecting its seven component constructs.

Conclusion: Despite frequent claims that the acceptability of healthcare interventions has been assessed, acceptability research could be more robust. Investigating acceptability as a multi-component construct yielded richer information about the acceptability of each intervention, and suggestions for enhancing intervention acceptability across the three temporal perspectives. The TFA offers the research community a systematic and theoretical approach to advance the science and practice of acceptability assessment for healthcare interventions.
202

Computational econometrics with applications to housing markets

Rahal, Charles January 2016 (has links)
This thesis develops a variety of econometric approaches to examine model specification and weighting choices in an era when computational power and efficiency are becoming less of a binding constraint and 'big data' is widely available. We apply our methodological contributions to housing markets, given their recently verified importance for international financial stability. In particular, we develop two original forecasting routines based on high-dimensional datasets, take simulated and empirical approaches to specifying large sets of spatial weighting matrices, and contrast structural (panel and single-country) vector autoregressions identified and analyzed in a number of ways. The appendices introduce published econometric software and replication code developed as a by-product of the main body of work.
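As a hedged illustration of the kind of weighting choice the abstract mentions, the base-R sketch below builds a row-normalised inverse-distance spatial weighting matrix; the coordinates are simulated and the inverse-distance form is an assumption for illustration, since the thesis's actual specifications are not stated here.

```r
# A minimal sketch, assuming inverse-distance weights; the thesis's actual
# spatial weighting specifications are not given in the abstract.
set.seed(1)
n      <- 10
coords <- cbind(runif(n), runif(n))   # hypothetical regional locations
d      <- as.matrix(dist(coords))     # pairwise Euclidean distances

W <- 1 / d            # inverse-distance weights (Inf on the diagonal)
diag(W) <- 0          # a region is not its own neighbour
W <- W / rowSums(W)   # row-normalise so each row sums to one
round(W, 3)
```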
203

The Turkish Catastrophe Insurance Pool Claims Modeling: 2000-2008 Data

Saribekir, Gozde 01 March 2013 (has links)
After the 1999 Marmara Earthquake, social, economic and engineering studies of earthquakes intensified. The Turkish Catastrophe Insurance Pool (TCIP) was established after the Marmara Earthquake to share the deficit that earthquake losses create in the government budget. The TCIP has become a data source for researchers, containing variables such as the number of claims, claim amounts and earthquake magnitude. In this thesis, the TCIP earthquake claims collected between 2000 and 2008 are studied. The number of claims and the claim payments (aggregate claim amount) are modeled using generalized linear models (GLM). Observed sudden jumps in the claims data are represented using the exponential kernel function. Model parameters are estimated by maximum likelihood estimation (MLE). The results can serve as recommendations for computing the expected value of aggregate claim amounts and the premiums of the TCIP.
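As a sketch of the kind of GLM fits the abstract describes, the base-R code below fits a Poisson model for claim counts and a Gamma model for claim payments on simulated data; the variable names and data-generating process are illustrative assumptions, not the TCIP's actual fields.

```r
# A minimal sketch, assuming simulated data; not the TCIP's actual variables.
set.seed(42)
n         <- 200
magnitude <- runif(n, 4, 7)                                  # quake magnitude
claims    <- rpois(n, lambda = exp(-3 + 0.8 * magnitude))    # claim counts
payment   <- rgamma(n, shape = 2,
                    rate = 2 / exp(1 + 0.5 * magnitude))     # claim payments

# Claim frequency: Poisson GLM with log link, fitted by ML (glm's default)
freq_fit <- glm(claims ~ magnitude, family = poisson(link = "log"))

# Claim severity: Gamma GLM with log link on the positive payments
sev_fit <- glm(payment ~ magnitude, family = Gamma(link = "log"))

summary(freq_fit)
summary(sev_fit)
```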
204

Pairwise Multiple Comparisons Under Short-tailed Symmetric Distribution

Balci, Sibel 01 May 2007 (has links)
In this thesis, pairwise multiple comparisons and multiple comparisons with a control are studied when the observations have short-tailed symmetric distributions. The testing procedure under non-normality is given, and the estimators used in it are reviewed: Huber's estimators, the trimmed mean with winsorized standard deviation, modified maximum likelihood estimators, and the ordinary sample mean and sample variance. Finally, the robustness properties of these estimators are compared, and it is shown that the test based on the modified maximum likelihood estimators has the best robustness properties under short-tailed symmetric distributions.
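One of the estimators reviewed above, the trimmed mean paired with a winsorized standard deviation, can be sketched in a few lines of base R; the 10% trimming proportion is an illustrative assumption.

```r
# A minimal sketch, assuming 10% trimming on each side.
trimmed_with_winsor <- function(x, trim = 0.10) {
  n  <- length(x)
  k  <- floor(trim * n)
  xs <- sort(x)
  # winsorize: pull the k smallest/largest observations in to the cut points
  xw <- c(rep(xs[k + 1], k), xs[(k + 1):(n - k)], rep(xs[n - k], k))
  list(mean = mean(x, trim = trim),   # trimmed mean
       sd   = sd(xw))                 # winsorized standard deviation
}

set.seed(7)
trimmed_with_winsor(rnorm(50))
```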
205

Effect Of Estimation In Goodness-of-fit Tests

Eren, Emrah 01 September 2009 (has links)
In statistical analysis, distributional assumptions are needed to apply parametric procedures, and the assumptions about the underlying distribution must hold for statistical inferences to be accurate. Goodness-of-fit tests are used to check the validity of distributional assumptions. To apply some goodness-of-fit tests, the unknown population parameters must be estimated; when the population parameters are replaced by their estimators, the null distributions of the test statistics become complicated or depend on the unknown parameters, which restricts the use of the tests. Goodness-of-fit statistics that are invariant to the parameters can be used when the distribution under the null hypothesis is a location-scale distribution: for location- and scale-invariant goodness-of-fit tests there is no need to estimate the unknown population parameters, although some of these tests rely on approximations. In this study, different estimation and approximation techniques are used to compute goodness-of-fit statistics for complete and censored samples from univariate distributions, as well as complete samples from the bivariate normal distribution. Simulated power properties of the goodness-of-fit tests against a broad range of skew and symmetric alternative distributions are examined to identify the effects of estimation. The main aim of this thesis is to modify goodness-of-fit tests by using different estimators or approximation techniques, and to see the effect of estimation on the power of these tests.
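The estimation effect the abstract refers to is easy to demonstrate: a minimal base-R simulation, under the assumption of a normal null with both parameters estimated from the sample, shows that plugging the estimates into the Kolmogorov-Smirnov test invalidates its nominal level.

```r
# A minimal sketch: KS test with estimated parameters loses its nominal level.
set.seed(123)
pvals <- replicate(2000, {
  x <- rnorm(30)                                 # data truly normal
  ks.test(x, "pnorm", mean(x), sd(x))$p.value    # parameters estimated
})
mean(pvals < 0.05)   # far below 0.05: the naive test is badly conservative
```

This is exactly why corrected tables (e.g. Lilliefors-type corrections) or invariant statistics are needed once parameters are estimated.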
206

Robust Estimation And Hypothesis Testing In Microarray Analysis

Ulgen, Burcin Emre 01 August 2010 (has links)
Microarray technology allows the simultaneous measurement of thousands of gene expressions. As a result, many statistical methods have emerged for identifying differentially expressed genes. Kerr et al. (2001) proposed an analysis of variance (ANOVA) procedure for the analysis of gene expression data. Their estimators are based on the assumption of normality; however, as they themselves noted, the parameter estimates and residuals from this analysis are notably heavier-tailed than normal. Since non-normality complicates the data analysis and results in inefficient estimators, it is very important to develop statistical procedures that are both efficient and robust. For this reason, in this work we use the Modified Maximum Likelihood (MML) and Adaptive Modified Maximum Likelihood (AMML) estimation methods (Tiku and Suresh, 1992) and show that the MML and AMML estimators are more efficient and robust. In our study we compare the MML and AMML methods with widely used statistical analysis methods via simulations and real microarray data sets.
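MML and AMML estimators are not implemented in base R, so as a stand-in the sketch below uses Huber's M-estimator from the MASS package to illustrate the underlying robustness point: under a heavier-tailed error distribution, a robust location estimate remains stable where the sample mean loses efficiency.

```r
# A minimal sketch; Huber's M-estimator stands in for MML/AMML, which the
# thesis uses but which are not available in base R.
library(MASS)   # for huber()
set.seed(11)
x_heavy <- rt(200, df = 3)   # heavier-tailed than the normal

mean(x_heavy)       # sample mean: inefficient under heavy tails
median(x_heavy)     # robust, but inefficient if data were in fact normal
huber(x_heavy)$mu   # Huber M-estimate: robust and reasonably efficient
```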
207

The Application Of Disaggregation Methods To The Unemployment Rate Of Turkey

Tuker, Utku Goksel 01 September 2010 (has links)
Modeling and forecasting the unemployment rate of a country is very important for taking precautionary measures in government policy. The available unemployment rate data for Turkey, provided by the Turkish Statistical Institute (TURKSTAT), are not in a format suitable for time series modeling. The data between 1988 and 2009 make it difficult to build a reliable time series model owing to the insufficient number and irregular form of the observations. Applying disaggregation methods to some parts of the unemployment rate data enables us to fit an appropriate time series model and to obtain forecasts from the suggested model.
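The abstract does not specify which disaggregation methods are used, so the base-R sketch below shows only one simple possibility, as an assumption: spreading a hypothetical annual rate to quarterly values by cubic-spline interpolation.

```r
# A minimal sketch of one simple disaggregation idea (spline interpolation);
# the rates below are hypothetical, not TURKSTAT figures.
annual    <- c(8.4, 10.3, 10.8, 9.7)            # hypothetical annual rates
years     <- 1:4
quarters  <- seq(1, 4, by = 0.25)
quarterly <- spline(years, annual, xout = quarters)$y   # interpolated series
quarterly
```

Interpolation is defensible for a rate (a level-type series); flow variables would instead require a disaggregation that preserves annual sums.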
208

The Effect Of Temporal Aggregation On Univariate Time Series Analysis

Sariaslan, Nazli 01 September 2010 (has links)
Most time series are constructed by some kind of aggregation; temporal aggregation can be defined as aggregation over consecutive time periods. Temporal aggregation plays an important role in time series analysis since the choice of time unit clearly influences the type of model and the forecast results: a totally different time series model may be fitted to the same variable over different time periods. In this thesis, the effect of temporal aggregation on univariate time series models is studied through the modeling and forecasting procedure, via a simulation study and an application based on a Southern Oscillation data set. The simulation study shows how the model, the mean square forecast error and the estimated parameters change when temporally aggregated data are used, for different orders of aggregation and sample sizes. Furthermore, the effect of temporal aggregation is demonstrated on the Southern Oscillation data set for different orders of aggregation. It is observed that the effect of temporal aggregation should be taken into account in data analysis, since temporal aggregation can give rise to misleading results and inferences.
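The core effect is easy to reproduce in base R: simulate an AR(1) series, aggregate it over non-overlapping blocks, and fit the same model at both frequencies. Theoretically, a summed AR(1) becomes an ARMA(1,1) with a smaller AR coefficient, so the fine-frequency model no longer fits. The block size of three is an illustrative assumption.

```r
# A minimal sketch of the aggregation effect on an AR(1) series.
set.seed(5)
y     <- arima.sim(list(ar = 0.8), n = 600)   # fine-frequency series
y_agg <- colSums(matrix(y, nrow = 3))         # non-overlapping sums of 3

arima(y,     order = c(1, 0, 0))   # AR coefficient near the true 0.8
arima(y_agg, order = c(1, 0, 0))   # AR coefficient shrinks (near 0.8^3)
```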
209

Which Method Gives The Best Forecast For Longitudinal Binary Response Data?: A Simulation Study

Aslan, Yasemin 01 October 2010 (has links)
Panel data, also known as longitudinal data, are composed of repeated measurements taken from the same subjects over different time points. Although forecasting is generally used in time series applications, it can also be applied to panel data, which have a time dimension; however, there are only a limited number of studies in this area in the literature. In this thesis, forecasting is studied for panel data with a binary response, given its increasing importance and fundamental role. A simulation study is conducted to compare the efficiency of different methods and to find the one that gives the optimal forecast values. In this simulation, 21 different methods, ranging from naïve to complex, are compared with the help of the R software. It is concluded that transition models and random effects models with no lag of the response can be chosen to obtain the most accurate forecasts, especially for the first two years of forecasting.
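As a sketch of one of the method families compared above, the base-R code below fits a first-order transition model, i.e. a logistic regression of the binary response on its own lag, and produces a one-step-ahead forecast; the data and variable names are simulated assumptions, not the thesis's design.

```r
# A minimal sketch of a first-order transition model on simulated panel data.
set.seed(9)
n_subj <- 100; n_time <- 6
d <- data.frame(id   = rep(1:n_subj, each = n_time),
                time = rep(1:n_time, n_subj))
d$y <- rbinom(nrow(d), 1, 0.4)   # placeholder binary response

# lag the response within each subject (first observation has no lag)
d$y_lag <- ave(d$y, d$id, FUN = function(v) c(NA, head(v, -1)))

fit <- glm(y ~ y_lag + time, family = binomial, data = d)

# one-step-ahead forecast: P(y = 1 at time 7 | previous response was 1)
predict(fit, newdata = data.frame(y_lag = 1, time = 7), type = "response")
```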
210

On Multivariate Longitudinal Binary Data Models And Their Applications In Forecasting

Asar, Ozgur 01 July 2012 (has links)
Longitudinal data arise when subjects are followed over time. Such data are typically dependent, since they include repeated observations; this type of dependence is termed within-subject dependence. Often the scientific interest lies in multiple longitudinal measurements, which introduce two additional types of association: between-response and cross-response temporal dependencies. Only statistical methods that take these association structures into account can yield reliable and valid statistical inferences. Although methods for univariate longitudinal data have been studied extensively, multivariate longitudinal data still need more work. In this thesis, although we mainly focus on multivariate longitudinal binary data models, we also consider other types of response families where necessary. We extend a previous work on multivariate marginal models, namely multivariate marginal models with response-specific parameters (MMM1), and propose multivariate marginal models with shared regression parameters (MMM2). Both of these models are based on generalized estimating equations (GEE) and are valid for several response families, such as binomial, Gaussian, Poisson and gamma. Two R packages, mmm and mmm2, are proposed to fit them, respectively. We further develop a marginalized multilevel model, namely the probit normal marginalized transition random effects model (PNMTREM), for multivariate longitudinal binary responses. In this model, the implicit function theorem is introduced to explicitly link the levels of marginalized multilevel models with transition structures for the first time. An R package, pnmtrem, is proposed to fit the model. PNMTREM is applied to data collected through the Iowa Youth and Families Project (IYFP). Five different models, including univariate and multivariate ones, are considered for forecasting multivariate longitudinal binary data. A comparative simulation study, which includes a model-independent data simulation process, is conducted for this purpose; forecasting of the independent variables is taken into account as well. To assess the forecasts, several accuracy measures are considered, such as the expected proportion of correct predictions (ePCP), the area under the receiver operating characteristic (AUROC) curve and the mean absolute scaled error (MASE). The Mother's Stress and Children's Morbidity (MSCM) data are used to illustrate this comparison in real life. Results show that marginalized models yield better forecasting results than marginal models, and the simulation results are in agreement with these findings.
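The thesis's own packages (mmm, mmm2, pnmtrem) implement the models above; as a generic stand-in, the sketch below fits a GEE-based marginal logistic model with the geepack package on simulated data, under the assumption of an exchangeable working correlation. The data and variable names are illustrative.

```r
# A minimal sketch of a GEE-based marginal model; geepack stands in for the
# thesis's own packages, and the data are simulated.
library(geepack)
set.seed(3)
n_subj <- 80; n_time <- 4
d <- data.frame(id   = rep(1:n_subj, each = n_time),
                time = rep(1:n_time, n_subj),
                x    = rnorm(n_subj * n_time))
d$y <- rbinom(nrow(d), 1, plogis(-0.5 + 0.7 * d$x))   # binary response

fit <- geeglm(y ~ x + time, id = id, data = d,
              family = binomial, corstr = "exchangeable")
summary(fit)   # marginal (population-averaged) effects with robust SEs
```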
