531

Controlling High-Quality Manufacturing Processes: A Robustness Study of the Lower-Sided TBE EWMA Procedure

Pehlivan, Canan 01 September 2008
In quality control applications, Time-Between-Events (TBE) observations may be monitored with Exponentially Weighted Moving Average (EWMA) control charts. A widely accepted model for TBE processes is the exponential distribution, and TBE EWMA charts are therefore designed under this assumption. Practical applications, however, do not always conform to the theory, and it is common for the observations not to fit the exponential model. Control charts that are robust to departures from the assumed distribution are therefore desirable in practice. In this thesis, the robustness of lower-sided TBE EWMA charts to the assumption of exponentially distributed observations is investigated. Weibull and lognormal distributions are considered to represent departures from the assumed exponential model, and a Markov chain approach is used to evaluate the performance of the chart. Based on the performance results, design settings are suggested for achieving robust lower-sided TBE EWMA charts.
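The monitoring scheme described above can be sketched in a few lines. This is an illustrative simulation only, with an arbitrary smoothing constant and lower control limit (a real design would set the limit via the Markov chain ARL calculations the thesis uses); it shows the lower-sided chart signalling much sooner when the event rate doubles:

```python
import random

def tbe_ewma_run_length(lam, lcl, z0, sample, max_n=100_000):
    """Lower-sided TBE EWMA chart: Z_i = (1 - lam) * Z_{i-1} + lam * X_i,
    signalling a rate increase when Z_i falls below the lower limit lcl."""
    z = z0
    for n in range(1, max_n + 1):
        z = (1 - lam) * z + lam * sample()
        if z < lcl:
            return n
    return max_n

random.seed(42)
lam = 0.1
lcl = 0.5    # illustrative limit; a real design picks it for a target in-control ARL
z0 = 1.0     # start at the in-control mean time between events

# In control: exponential TBE with mean 1; out of control: rate doubled (mean 0.5)
arl_in = sum(tbe_ewma_run_length(lam, lcl, z0, lambda: random.expovariate(1.0))
             for _ in range(200)) / 200
arl_out = sum(tbe_ewma_run_length(lam, lcl, z0, lambda: random.expovariate(2.0))
              for _ in range(200)) / 200
print(arl_in, arl_out)   # the out-of-control ARL is far smaller
```

The same harness can be pointed at Weibull or lognormal samplers to probe the robustness question empirically.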
532

Statistical Inference From Complete And Incomplete Data

Can Mutan, Oya 01 January 2010
Let X and Y be two random variables such that Y depends on X=x. This is a very common situation in many real-life applications. The problem is to estimate the location and scale parameters in the marginal distributions of X and Y and in the conditional distribution of Y given X=x. We are also interested in estimating the regression coefficient and the correlation coefficient. There is a cost constraint for observing X=x: the larger x is, the more expensive it becomes. The allowable sample size n is governed by a predetermined total cost. This can lead to a situation where some of the largest X=x observations cannot be observed (Type II censoring). Two general methods of estimation are available, the method of least squares and the method of maximum likelihood. For most non-normal distributions, however, the latter is analytically and computationally problematic. Instead, we use the method of modified maximum likelihood estimation, which is known to be essentially as efficient as maximum likelihood estimation. The method has a distinct advantage: it yields estimators which are explicit functions of the sample observations and are, therefore, analytically and computationally straightforward. Specifically, the problem in this thesis is to evaluate the effect of the largest order statistics x(i) (i > n−r) in a random sample of size n (i) on the mean E(X) and variance V(X) of X, (ii) on the cost of observing the x-observations, (iii) on the conditional mean E(Y|X=x) and variance V(Y|X=x) and (iv) on the regression coefficient. It is shown that unduly large x-observations have a detrimental effect on the allowable sample size and on the estimators, both least squares and modified maximum likelihood. The advantages of not observing a few of the largest observations are evaluated. The distributions considered are the Weibull, the Generalized Logistic and the scaled Student's t.
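The cost argument can be made concrete with a toy computation. The linear observation cost and the Weibull parameters below are hypothetical, not taken from the thesis; the point is that the r largest order statistics necessarily account for a disproportionate share of the total observation cost, which is what Type II censoring of the top r observations saves:

```python
import math, random

random.seed(7)

def weibull(shape, scale):
    """Inverse-CDF sampling of a Weibull variate."""
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

n, r = 100, 5                        # sample size; number of largest x censored
shape, scale = 0.8, 1.0              # heavy right tail when shape < 1
cost = lambda x: 1.0 + 2.0 * x       # hypothetical cost of observing X = x

xs = sorted(weibull(shape, scale) for _ in range(n))
full_cost = sum(cost(x) for x in xs)
censored_cost = sum(cost(x) for x in xs[:n - r])   # Type II: drop the r largest
share_of_top_r = (full_cost - censored_cost) / full_cost

print(full_cost, censored_cost, share_of_top_r)
```

Because the censored observations are the largest in the sample and the cost is increasing in x, their cost share always exceeds their count share r/n.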
533

A Probabilistic Conceptual Design And Sizing Approach For A Helicopter

Selvi, Selim 01 September 2010
Due to its complex and multidisciplinary nature, the conceptual design phase of a helicopter is critical to achieving customer satisfaction. Statistical (probabilistic) design methods can be employed to understand the design better and to target a design with lower variability. In this thesis, a conceptual design and helicopter sizing methodology is developed and demonstrated on a helicopter design for Turkey.
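As a toy illustration of the probabilistic design idea (the sizing relation and the weight-fraction distributions below are assumed for the sketch, not taken from the thesis), Monte Carlo propagation of input variability yields a distribution, rather than a point value, for the sized gross weight:

```python
import random

random.seed(9)

def gross_weight(payload, f_empty, f_fuel):
    """Fraction-based sizing: W_gross = W_payload / (1 - f_empty - f_fuel)."""
    return payload / (1.0 - f_empty - f_fuel)

# Assumed input uncertainty (hypothetical values, not from the thesis)
N = 10_000
payload = 1200.0                                   # kg
samples = [gross_weight(payload,
                        random.gauss(0.52, 0.02),   # empty-weight fraction
                        random.gauss(0.18, 0.015))  # fuel-weight fraction
           for _ in range(N)]

mean_w = sum(samples) / N
sd_w = (sum((w - mean_w) ** 2 for w in samples) / (N - 1)) ** 0.5
print(mean_w, sd_w)   # distribution of sized gross weight, not a point value
```

The spread sd_w is what a probabilistic design targets for reduction, e.g. by tightening the input fractions with the largest sensitivity.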
534

Bayesian Semiparametric Models for Nonignorable Missing Data Mechanisms in Logistic Regression

Ozturk, Olcay 01 May 2011
In this thesis, Bayesian semiparametric models are developed for the missing data mechanisms of nonignorably missing covariates in logistic regression. In the missing data literature, a fully parametric approach is used to model nonignorable missing data mechanisms: a probit or logit link of the conditional probability of the covariate being missing is modeled as a linear combination of all variables, including the missing covariate itself. However, nonignorably missing covariates may not be linearly related to the probit (or logit) of this conditional probability. In our study, the relationship between the probit of the probability of the covariate being missing and the missing covariate itself is modeled using a semiparametric approach based on penalized spline regression. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm to estimate the parameters is established. A WinBUGS code is constructed to sample from the full conditional posterior distributions of the parameters using Gibbs sampling. Monte Carlo simulation experiments under different true missing data mechanisms are carried out to compare the bias and efficiency of the resulting estimators with those from the fully parametric approach. These simulations show that estimators for logistic regression using semiparametric missing data models have better bias and efficiency properties than those using fully parametric missing data models when the true relationship between the missingness and the missing covariate is nonlinear, and the two are comparable when this relationship is linear.
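The motivation for the semiparametric model can be seen in a small simulation. The U-shaped mechanism below is hypothetical, chosen only to show that the probability of being missing can depend on the covariate non-monotonically, which a probit linear in the covariate cannot capture:

```python
import math, random

random.seed(3)

def Phi(t):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical nonignorable, nonlinear mechanism:
#   probit(P(x missing)) = -1 + 0.8 * x**2   (U-shaped in x)
xs = [random.gauss(0, 1) for _ in range(20000)]
missing = [random.random() < Phi(-1.0 + 0.8 * x * x) for x in xs]

def missing_rate(lo, hi):
    idx = [i for i, x in enumerate(xs) if lo <= x < hi]
    return sum(missing[i] for i in idx) / len(idx)

low, mid, high = missing_rate(-3, -1), missing_rate(-1, 1), missing_rate(1, 3)
print(low, mid, high)   # non-monotone pattern a linear probit cannot reproduce
```

A penalized spline in x, as in the thesis, can follow such a shape while still shrinking toward a linear fit when the truth is linear.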
535

Essays in economics dynamics and uncertainty

Dumav, Martin 10 October 2012
This work presents a systematic investigation of two topics. The first is in stochastic dynamic general equilibrium: private information is incorporated into a dynamic general equilibrium framework, the existence of a competitive equilibrium is established, and a quantitative analysis is provided for a health insurance problem. The second topic concerns decision problems under ambiguity, where a lack of precise information regarding a decision problem is represented by a set of probabilities. Descriptive richness of the set of probabilities is defined and used to generalize Skorohod's theorem to sets of probabilities, which in turn is used to show the constancy of the coefficient in alpha-maximin multiple-prior preferences. Examples illustrate the implications of this representation and the restrictions arising from the failure of descriptive richness.
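The alpha-maximin criterion mentioned above evaluates an act by blending the worst-case and best-case expected utilities over the set of priors. A minimal numerical sketch with a toy two-state act and a hypothetical prior set:

```python
def alpha_maximin_value(alpha, utilities, prior_set):
    """alpha-maximin multiple-prior evaluation:
       V = alpha * (worst-case expected utility over the prior set)
         + (1 - alpha) * (best-case expected utility)."""
    evs = [sum(p * u for p, u in zip(prior, utilities)) for prior in prior_set]
    return alpha * min(evs) + (1 - alpha) * max(evs)

# Toy act paying 100 in state 1 and 0 in state 2; ambiguity about p1
utilities = [100.0, 0.0]
prior_set = [[0.3, 0.7], [0.5, 0.5], [0.7, 0.3]]

v_pess = alpha_maximin_value(1.0, utilities, prior_set)   # pure maximin (~30)
v_opt = alpha_maximin_value(0.0, utilities, prior_set)    # pure maximax (~70)
v_mid = alpha_maximin_value(0.5, utilities, prior_set)    # even blend (~50)
print(v_pess, v_opt, v_mid)
```

The constancy result in the dissertation concerns whether the single coefficient alpha can represent the decision maker's ambiguity attitude across acts; the sketch only shows the evaluation rule itself.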
536

Generalized Fibonacci polynomials and probability distributions

Φιλίππου, Γιώργος 06 May 2015
The fact that Fibonacci sequences appear so frequently in nature, together with their interrelationship with almost every branch of mathematics, has resulted in intensive research in this area, particularly during the last two decades. One of the widest extensions of the Fibonacci sequence is provided by the Fibonacci polynomials of order k. The study of these polynomials and their relation to probability is the main subject of this dissertation. The probability distribution of the r.v. Xk, where Xk denotes the number of trials until the occurrence of the kth consecutive success in independent Bernoulli trials, has been called the "Fibonacci probability distribution". The relation between the Fibonacci distribution and the Fibonacci polynomials led to generalized probability distributions (geometric, negative binomial, Poisson and compound Poisson), which constitute the second major part of this study.
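The distribution of Xk can be computed directly from the Markov chain on the current run of successes; for k = 2 and p = 1/2 the probabilities P(X2 = n) = F(n−1)/2^n involve the ordinary Fibonacci numbers, which is the connection the dissertation generalizes. A short sketch:

```python
def order_k_pmf(k, p, n_max):
    """P(X_k = n), the number of Bernoulli(p) trials until the first run of
    k consecutive successes, via a Markov chain on the current run length."""
    q = 1.0 - p
    state = [0.0] * k          # state[j] = P(current run has length j, no signal yet)
    state[0] = 1.0
    pmf = {}
    for n in range(1, n_max + 1):
        pmf[n] = state[k - 1] * p          # run of k completed at trial n
        new = [0.0] * k
        new[0] = sum(state) * q            # a failure resets the run
        for j in range(k - 1):
            new[j + 1] = state[j] * p      # a success extends the run
        state = new
    return pmf

pmf = order_k_pmf(2, 0.5, 60)
print(pmf[2], pmf[3], pmf[4])      # 1/4, 1/8, 2/16: Fibonacci numerators 1, 1, 2
print(sum(pmf.values()))           # ~1; the mean of X_2 is 1/p + 1/p^2 = 6 here
```

Replacing the transition probabilities p and q by trial-dependent values is essentially where the Fibonacci polynomials of order k enter.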
537

The role of immune-genetic factors in modelling longitudinally measured HIV bio-markers including the handling of missing data.

Odhiambo, Nancy. 20 December 2013
Since its discovery among gay men in the United States of America in 1981, AIDS has become a major world pandemic, with over 40 million individuals infected worldwide. According to the Joint United Nations Programme on HIV/AIDS epidemic update in 2012, 28.3 million individuals are living with HIV worldwide, 23.5 million of them in sub-Saharan Africa and 4.8 million in Asia. The report showed that approximately 1.7 million individuals have died from AIDS-related causes, about 50% of the 34 million people living with HIV know their HIV status, 2.5 million individuals are newly infected, 14.8 million individuals are eligible for HIV treatment, and only 8 million are on HIV treatment (Joint United Nations Programme on HIV/AIDS and health sector progress towards universal access: progress report, 2011). Numerous studies have been carried out to understand the pathogenesis and dynamics of AIDS, but its pathogenesis is still poorly understood. More understanding of the disease is needed in order to reduce the rate of its acquisition. Researchers have developed statistical and mathematical models that help in understanding and predicting the progression of the disease, so that ways of preventing and controlling its acquisition can be found. Previous studies on HIV/AIDS have shown that inter-individual variability plays an important role in susceptibility to HIV-1 infection, its transmission, disease progression, and even response to antiviral therapy. Certain immuno-genetic factors (human leukocyte antigen (HLA), interleukin-10 (IL-10) and single nucleotide polymorphisms (SNPs)) have been associated with this variability among individuals. In this dissertation we reaffirm, through statistical modelling and analysis, previous findings that immuno-genetic factors may play a role in susceptibility, transmission, progression, and response to antiviral therapy.
This is done using the Sinikithemba study data from the HIV Pathogenesis Programme (HPP) at the Nelson Mandela School of Medicine, University of KwaZulu-Natal, consisting of 451 HIV-positive and treatment-naive individuals, to model how the HIV bio-markers (viral load and CD4 count) are associated with the immuno-genetic factors using linear mixed models. We finalize the dissertation by dealing with drop-out, which is a pervasive problem in longitudinal studies regardless of how well they are designed and executed. We demonstrate the application and performance of multiple imputation (MI) in handling drop-out using longitudinal count data from the Sinikithemba study with log viral load as the response. Our aim is to investigate the influence of drop-out on the evolution of the HIV bio-markers in a model including selected genetic factors as covariates, assuming the missingness mechanism is missing at random (MAR). We then compare the results obtained from the MI method to those obtained from the incomplete dataset. The results show a clear difference between the findings of the two analyses; drop-out therefore needs to be accounted for, since ignoring it can lead to biased results. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
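The logic of the drop-out comparison can be sketched with synthetic data. This is a deliberately simplified version of MI (Gaussian responses and a single regression imputation model, whereas the dissertation handles longitudinal counts, and proper MI would also draw the imputation-model parameters from their posterior): drop-out that depends on the observed baseline biases the complete-case mean, and imputation using the baseline largely removes that bias:

```python
import math, random

random.seed(11)

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

N, M = 4000, 10
y1 = [random.gauss(5, 1) for _ in range(N)]            # baseline response
y2 = [a - 1.0 + random.gauss(0, 0.5) for a in y1]      # true follow-up value
seen = [random.random() > Phi(a - 5.0) for a in y1]    # MAR drop-out driven by y1

completers = [(y1[i], y2[i]) for i in range(N) if seen[i]]
cc_mean = sum(b for _, b in completers) / len(completers)   # complete-case mean

# Imputation model fitted on completers: y2 ~ b0 + b1*y1 + e
mx = sum(a for a, _ in completers) / len(completers)
b1 = (sum((a - mx) * (b - cc_mean) for a, b in completers)
      / sum((a - mx) ** 2 for a, _ in completers))
b0 = cc_mean - b1 * mx
sd = (sum((b - (b0 + b1 * a)) ** 2 for a, b in completers)
      / (len(completers) - 2)) ** 0.5

mi_means = []
for _ in range(M):      # M imputed datasets; the pooled estimate is their mean
    ys = [y2[i] if seen[i] else b0 + b1 * y1[i] + random.gauss(0, sd)
          for i in range(N)]
    mi_means.append(sum(ys) / N)
mi_mean = sum(mi_means) / M

true_mean = sum(y2) / N
print(cc_mean, mi_mean, true_mean)   # MI recovers the mean; complete-case is biased
```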
538

Statistical and mathematical modelling of HIV and AIDS, effect of reverse transcriptase inhibitors and causal inference for HIV mortality.

Ngwenya, Olina. 29 January 2014
The HIV and AIDS epidemic has remained one of the leading causes of death in the world and has been especially destructive in Africa, with sub-Saharan Africa remaining the epidemiological locus of the epidemic. HIV and AIDS hinder development by erasing decades of health, economic and social progress, reducing life expectancy by years and deepening poverty [57]. The most urgent public-health problem globally is to devise effective strategies to minimize the destruction caused by the HIV and AIDS epidemic. Because of the problems caused by HIV and AIDS, well-defined endpoints for evaluating treatment benefits are needed, and the surrogate and true endpoints for the disease need to be specified. The purpose of a surrogate endpoint is to allow conclusions about the effect of an intervention on the true endpoint without having to observe the true endpoint, so it is important to understand surrogate validation methods. At present it remains an open question whether CD4 count and viral load are good surrogate markers for death in HIV, or whether there are better surrogate markers. This dissertation was undertaken to obtain some clarity on this question by adopting a mathematical model of HIV at the immune-system level and of the impact of treatment in the form of reverse transcriptase inhibitors (RTIs). For an understanding of HIV, the dissertation begins with a description of the human immune system, the HIV virion structure, HIV disease progression and HIV drugs. A review of an existing mathematical model follows, and analyses and simulations of this model are performed; these give insight into the dynamics of CD4 count, viral load and HIV therapy. Surrogate marker validation methods follow. Finally, a generalized estimating equations (GEE) approach is used to analyse real data for HIV-positive individuals from the Centre for the AIDS Programme of Research in South Africa (CAPRISA).
Numerical simulations for the HIV dynamic model with treatment suggest that the higher the treatment efficacy, the fewer infected cells are left in the body. The infected cells are suppressed to a lower threshold value, but they do not completely disappear as long as the treatment is not 100% efficacious. Further numerical simulations suggest that it is advantageous to have a low proportion of infectious virions (ω) at the individual level, because such an individual produces fewer infectious virions to infect healthy cells. The statistical analysis using GEEs suggests that CD4 count < 200 and viral load are highly associated with death, meaning that they are good surrogate markers for death. An interesting finding from the analysis of this particular CAPRISA dataset was that low CD4 count and high viral load, as surrogates for HIV survival, act independently/additively; the interaction effect was found to be insignificant. Individual characteristics found to be significantly associated with HIV-related death are weight, CD4 count < 200 and viral load. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2010.
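The kind of treatment simulation described above can be sketched with a standard target-cell model; the equations are the usual three-compartment system and the parameter values are illustrative, not fitted to the CAPRISA data. Higher RTI efficacy suppresses infected cells to a lower level without eliminating them:

```python
def simulate_hiv(eps, days=300, dt=0.01):
    """Target-cell / infected-cell / virus model with an RTI of efficacy eps in [0, 1]:
        T' = s - d*T - (1 - eps)*beta*T*V
        I' = (1 - eps)*beta*T*V - delta*I
        V' = p*I - c*V
    Illustrative parameter values. Returns (mean T, mean I) over the last
    100 days, integrated by forward Euler."""
    s, d, beta, delta, p, c = 10.0, 0.01, 5e-5, 0.4, 300.0, 3.0
    T, I, V = 1000.0, 10.0, 1000.0
    steps = int(days / dt)
    tail_T = tail_I = 0.0
    n_tail = 0
    for step in range(steps):
        infection = (1 - eps) * beta * T * V   # RTI blocks new infections
        T += dt * (s - d * T - infection)
        I += dt * (infection - delta * I)
        V += dt * (p * I - c * V)
        if step * dt >= days - 100:
            tail_T += T
            tail_I += I
            n_tail += 1
    return tail_T / n_tail, tail_I / n_tail

T_no, I_no = simulate_hiv(0.0)    # no treatment
T_rx, I_rx = simulate_hiv(0.9)    # 90% effective RTI
print(I_no, I_rx)   # infected cells persist at a lower level but do not vanish
```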
539

Estimating risk determinants of HIV and TB in South Africa.

Mzolo, Thembile. January 2009
HIV/AIDS has had its greatest adverse impact on TB. People with TB who are also infected with HIV are at greater risk of dying from TB than from HIV; TB is the leading cause of death among HIV-positive individuals in South Africa, and HIV is the driving factor that increases the risk of progression from latent TB to active TB. In South Africa no coherent analysis of the risk determinants of HIV and TB has been done at the national level; this study seeks to fill that gap. This study is about estimating the risk determinants of HIV and TB, using the national household survey conducted by the Human Sciences Research Council in 2005. Since individuals from the same household and enumerator area are likely to be more alike in terms of risk of disease, i.e. correlated with each other, GEEs are used to correct for this potential intraclass correlation. Disease occurrence and distribution are highly heterogeneous at the population, household and individual levels. In recognition of this fact, we propose to model this heterogeneity at the community level through GLMMs and Bayesian hierarchical modelling approaches, with the enumerator area indicating the community effect. The results showed that HIV is driven by sex, age, race, education, health and condom use at sexual debut. Factors associated with TB are HIV status, sex, education, income and health. Factors common to both diseases are sex, education and health. The results also showed that ignoring the intraclass correlation can result in biased estimates. Inference drawn from the GLMM and Bayesian approaches provides some degree of confidence in the results. The positive correlation found at the enumerator-area level for both HIV and TB indicates that interventions should be aimed at the area level rather than at the individual level. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2009
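The intraclass-correlation point can be demonstrated directly (arbitrary variance components, not estimates from the survey): when observations share a cluster effect, the naive i.i.d. variance formula for the sample mean understates the true sampling variance by roughly the design effect 1 + (m − 1)ρ:

```python
import random

random.seed(5)

def clustered_sample(n_clusters=50, m=20, sigma_u=1.0, sigma_e=1.0):
    """Draw n_clusters clusters of m observations sharing a cluster effect u."""
    data = []
    for _ in range(n_clusters):
        u = random.gauss(0, sigma_u)           # shared enumerator-area effect
        data.extend(u + random.gauss(0, sigma_e) for _ in range(m))
    return data

# True sampling variance of the mean, estimated empirically over replicates
reps = 400
means = []
for _ in range(reps):
    d = clustered_sample()
    means.append(sum(d) / len(d))
mbar = sum(means) / reps
var_emp = sum((x - mbar) ** 2 for x in means) / (reps - 1)

# Naive i.i.d. formula s^2/n applied to one sample ignores the correlation
d = clustered_sample()
n = len(d)
dm = sum(d) / n
var_naive = sum((x - dm) ** 2 for x in d) / (n - 1) / n

print(var_emp, var_naive)   # var_emp exceeds var_naive by about the design effect
```

This understatement of variance is exactly why naive standard errors, and hence significance claims, are too optimistic when clustering is ignored.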
540

Development of a framework for an integrated time-varying agrohydrological forecast system for southern Africa.

Ghile, Yonas Beyene. January 2007
Policy makers, water managers, farmers and many other sectors of the society in southern Africa are confronting increasingly complex decisions as a result of the marked day-to-day, intra-seasonal and inter-annual variability of climate. Hence, forecasts of hydro-climatic variables with lead times of days to seasons ahead are becoming increasingly important to them in making more informed risk-based management decisions. With improved representations of atmospheric processes and advances in computer technology, a major improvement has been made by institutions such as the South African Weather Service, the University of Pretoria and the University of Cape Town in forecasting southern Africa’s weather at short lead times and its various climatic statistics for longer time ranges. In spite of these improvements, the operational utility of weather and climate forecasts, especially in agricultural and water management decision making, is still limited. This is so mainly because of a lack of reliability in their accuracy and the fact that they are not suited directly to the requirements of agrohydrological models with respect to their spatial and temporal scales and formats. As a result, the need has arisen to develop a GIS based framework in which the “translation” of weather and climate forecasts into more tangible agrohydrological forecasts such as streamflows, reservoir levels or crop yields is facilitated for enhanced economic, environmental and societal decision making over southern Africa in general, and in selected catchments in particular. This study focuses on the development of such a framework. As a precursor to describing and evaluating this framework, however, one important objective was to review the potential impacts of climate variability on water resources and agriculture, as well as assessing current approaches to managing climate variability and minimising risks from a hydrological perspective. 
With the aim of understanding the broad range of forecasting systems, the review was extended to the current state of hydro-climatic forecasting techniques and their potential applications in order to reduce vulnerability in the management of water resources and agricultural systems. This was followed by a brief review of some challenges and approaches to maximising benefits from these hydro-climatic forecasts. A GIS based framework has been developed to serve as an aid to process all the computations required to translate near real time rainfall fields estimated by remotely sensed tools, as well as daily rainfall forecasts with a range of lead times provided by Numerical Weather Prediction (NWP) models into daily quantitative values which are suitable for application with hydrological or crop models. Another major component of the framework was the development of two methodologies, viz. the Historical Sequence Method and the Ensemble Re-ordering Based Method for the translation of a triplet of categorical monthly and seasonal rainfall forecasts (i.e. Above, Near and Below Normal) into daily quantitative values, as such a triplet of probabilities cannot be applied in its original published form into hydrological/crop models which operate on a daily time step. The outputs of various near real time observations, of weather and climate models, as well as of downscaling methodologies were evaluated against observations in the Mgeni catchment in KwaZulu-Natal, South Africa, both in terms of rainfall characteristics as well as of streamflows simulated with the daily time step ACRU model. A comparative study of rainfall derived from daily reporting raingauges, ground based radars, satellites and merged fields indicated that the raingauge and merged rainfall fields displayed relatively realistic results and they may be used to simulate the “now state” of a catchment at the beginning of a forecast period. The performance of three NWP models, viz. 
the C-CAM, UM and NCEP-MRF, was found to vary from one event to another. However, the C-CAM model showed a general tendency of under-estimation whereas the UM and NCEP-MRF models suffered from significant over-estimation of the summer rainfall over the Mgeni catchment. Ensembles of simulated streamflows with the ACRU model using ensembles of rainfalls derived from both the Historical Sequence Method and the Ensemble Re-ordering Based Method showed reasonably good results for most of the selected months and seasons for which they were tested, which indicates that the two methods of transforming categorical seasonal forecasts into ensembles of daily quantitative rainfall values are useful for various agrohydrological applications in South Africa and possibly elsewhere. The use of the Ensemble Re-ordering Based Method was also found to be quite effective in generating the transitional probabilities of rain days and dry days as well as the persistence of dry and wet spells within forecast cycles, all of which are important in the evaluation and forecasting of streamflows and crop yields, as well as droughts and floods. Finally, future areas of research which could facilitate the practical implementation of the framework were identified. / Thesis (Ph.D.)-University of KwaZulu-Natal, Pietermaritzburg, 2007.
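The Historical Sequence Method can be sketched with synthetic data (the operational version described in the thesis also honours the forecast probability triplet; here the ensemble simply pools the daily sequences of all historical years whose totals fall in the forecast tercile category):

```python
import random

random.seed(2)

# Hypothetical 30-year record of daily rainfall (mm) for one month at one site
years = {y: [max(0.0, random.gauss(3, 4)) for _ in range(30)]
         for y in range(1980, 2010)}
totals = {y: sum(days) for y, days in years.items()}

# Tercile categories defined from the historical monthly totals
ranked = sorted(totals, key=totals.get)
terciles = {"Below": ranked[:10], "Near": ranked[10:20], "Above": ranked[20:]}

def historical_sequence_ensemble(category):
    """Historical Sequence Method (sketch): the ensemble for a categorical
    forecast is the set of daily sequences from all years in that category."""
    return [years[y] for y in terciles[category]]

ens_above = historical_sequence_ensemble("Above")
ens_below = historical_sequence_ensemble("Below")
mean_above = sum(sum(seq) for seq in ens_above) / len(ens_above)
mean_below = sum(sum(seq) for seq in ens_below) / len(ens_below)
print(len(ens_above), mean_above, mean_below)
```

Each member sequence keeps realistic daily wet/dry structure, which is what makes the ensemble usable as input to a daily time-step model such as ACRU.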
