81

Development of scientific software for estimating parameters of population-genetic interest and testing genetic hypotheses

Santos, Fernando Azenha Bautzer 22 November 2006 (has links)
The dissertation presents the results obtained in the development of a comprehensive computer program, running under the Windows graphical interface, with the aim of (a) estimating parameters of population-genetic interest (such as allelic frequencies with their corresponding standard errors and 95% confidence intervals) and (b) testing genetic hypotheses (Hardy-Weinberg equilibrium and analysis of hierarchical population structure) by means of traditional methods as well as exact tests obtained with computer simulation procedures (the bootstrap and jackknife methods).
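
The bootstrap intervals this abstract mentions are easy to sketch. The following is a minimal illustration, not the dissertation's software: it resamples a made-up sample of diploid genotypes to get a percentile 95% CI for an allele frequency; the genotype coding and counts are invented for the example.

```python
# A minimal sketch (not the thesis software) of a percentile bootstrap
# 95% CI for an allele frequency. The sample below is hypothetical.
import random

def allele_freq(genotypes):
    """Frequency of allele 'A' in diploid genotypes coded 'AA', 'Aa', 'aa'."""
    count = sum(g.count('A') for g in genotypes)
    return count / (2 * len(genotypes))

def bootstrap_ci(genotypes, n_boot=10000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(genotypes) for _ in genotypes]
        estimates.append(allele_freq(resample))
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = ['AA'] * 30 + ['Aa'] * 50 + ['aa'] * 20   # hypothetical sample of 100
print(allele_freq(sample), bootstrap_ci(sample))
```

The jackknife variant the abstract also names would instead recompute the frequency with each individual left out in turn and derive the standard error from those leave-one-out estimates.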
82

Ultrasound image segmentation using local statistics with an adaptive scale selection

Yang, Qing 15 March 2013 (has links)
Image segmentation is an important research area in image processing and a large number of different approaches have been developed over the last few decades. The active contour approach is one of the most popular among them. Within this framework, this thesis aims at developing robust algorithms that can segment images with intensity inhomogeneities. We focus on the study of region-based external energies within the level set framework, and in particular on the use of local image statistics in the design of external energies. Precisely, we address the difficulty of choosing the scale of the spatial window that defines locality. Our main contribution is an adaptive scale for local region-based segmentation methods. We use the Intersection of Confidence Intervals approach to define this pixel-dependent scale for the estimation of local image statistics. The scale is optimal in the sense that it gives the best trade-off between the bias and the variance of a Local Polynomial Approximation of the observed image conditional on the current segmentation. Additionally, for the segmentation model based on a Bayesian interpretation with two local kernels, we suggest considering their values separately. Our proposition gives a smoother segmentation with fewer mis-localisations than the original method. Comparative experiments of our method against other local region-based segmentation methods are carried out. The quantitative results, on simulated ultrasound B-mode images, show that the proposed scale selection strategy gives a robust solution to the intensity inhomogeneity artifact of this imaging modality. More general experiments on real images also demonstrate the usefulness of our approach.
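
The Intersection of Confidence Intervals rule at the heart of this contribution is simple to state in one dimension. The sketch below is a generic 1-D illustration, not the thesis code: for each pixel it grows the analysis window until the confidence intervals of the local estimates stop overlapping. The local-mean estimator, noise level, and gamma threshold are illustrative choices; the thesis works with local polynomial approximations of 2-D images.

```python
# A generic 1-D sketch (not the thesis implementation) of the Intersection
# of Confidence Intervals (ICI) rule for pixel-wise scale selection.
import numpy as np

def ici_scale(signal, idx, scales=(1, 2, 4, 8, 16), sigma_noise=0.1, gamma=2.0):
    """Pick the largest window half-width whose CI still intersects all smaller ones."""
    lo, hi = -np.inf, np.inf
    best = scales[0]
    for h in scales:
        window = signal[max(0, idx - h): idx + h + 1]
        est = window.mean()
        sd = sigma_noise / np.sqrt(len(window))    # std of the local-mean estimate
        lo, hi = max(lo, est - gamma * sd), min(hi, est + gamma * sd)
        if lo > hi:                                # intersection became empty: stop
            break
        best = h
    return best

rng = np.random.default_rng(0)
x = np.r_[np.zeros(50), np.ones(50)] + rng.normal(0, 0.1, 100)  # noisy step edge
print([ici_scale(x, i) for i in (10, 49, 51, 90)])  # small scales near the edge
```

Far from the step the intervals keep overlapping and a large window wins; near the step the biased large-window estimates break the intersection early, which is exactly the bias-variance trade-off the abstract describes.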
83

Parameter Estimation and Optimal Design Techniques to Analyze a Mathematical Model in Wound Healing

Karimli, Nigar 01 April 2019 (has links)
For this project, we use a modified version of a previously developed mathematical model, which describes the relationships among matrix metalloproteinases (MMPs), their tissue inhibitors (TIMPs), and extracellular matrix (ECM). Our ultimate goal is to quantify and understand differences in parameter estimates between patients in order to predict future responses and individualize treatment for each patient. By analyzing parameter confidence intervals and confidence and prediction intervals for the state variables, we develop a parameter space reduction algorithm that results in better future response predictions for each individual patient. Moreover, another subset selection method, Structured Covariance Analysis, which takes the identifiability of parameters into account, is included in this work. Furthermore, to estimate parameters more efficiently and accurately, the standard-error (SE) optimal design method is employed, which calculates optimal observation times at which to collect clinical data. Finally, by combining different parameter subset selection methods with the optimal design problem, different cases for finding both optimal time points and optimal intervals are investigated.
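
A hedged sketch of the asymptotic machinery behind parameter confidence intervals and SE-optimal design, not the project's code: it builds a numerical sensitivity matrix for a toy exponential-decay model (invented here; the actual model couples MMPs, TIMPs, and ECM) and converts it to standard errors and 95% CIs. An SE-optimal design would then choose the observation times that minimize those standard errors.

```python
# A generic sketch (not the project's code) of asymptotic parameter CIs from
# a sensitivity matrix, the ingredient behind SE-optimal design. The model,
# parameter values, and noise level are made up for illustration.
import numpy as np

def sensitivities(f, t, theta, eps=1e-6):
    """Numerical d f / d theta_j at each observation time (rows: times)."""
    base = f(t, theta)
    cols = []
    for j in range(len(theta)):
        th = theta.copy()
        th[j] += eps
        cols.append((f(t, th) - base) / eps)
    return np.column_stack(cols)

def param_cis(f, t, theta, sigma):
    S = sensitivities(f, t, theta)
    cov = sigma**2 * np.linalg.inv(S.T @ S)   # asymptotic covariance of the estimator
    se = np.sqrt(np.diag(cov))
    return np.column_stack([theta - 1.96 * se, theta + 1.96 * se]), se

model = lambda t, th: th[0] * np.exp(-th[1] * t)   # toy stand-in for the ODE model
t_obs = np.linspace(0, 5, 10)                      # candidate observation times
ci, se = param_cis(model, t_obs, np.array([2.0, 0.8]), sigma=0.05)
print(ci)   # SE-optimal design would pick the t_obs minimizing (a norm of) se
```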
84

Process capability indices in industrial production

Παπανικολάου, Μαρία 29 July 2008 (has links)
Process capability indices are intended to provide a single-number assessment of the ability of a process in statistical control to produce items that meet the customer's specifications. We present the definitions of various such indices that have been proposed for univariate and bivariate normal distributions, describe their properties and the relations among them, and discuss the weaknesses and benefits of their use. Estimators of the indices are considered and their properties, such as expected value, variance, and probability density function, are derived. Confidence intervals and hypothesis tests are constructed for these estimators. Finally, numerical examples and applications of process capability indices in industry are presented.
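
For the univariate normal case, the two most common capability indices and the exact confidence interval for Cp can be written down directly. A minimal sketch, assuming a normal sample and invented specification limits (not taken from the thesis):

```python
# A sketch (not from the thesis) of Cp and Cpk with the exact chi-square
# confidence interval for Cp. Specification limits and data are illustrative.
import numpy as np
from scipy import stats

def capability(x, lsl, usl, alpha=0.05):
    n, mean, s = len(x), x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    # (n-1) s^2 / sigma^2 ~ chi-square(n-1)  =>  exact CI for Cp
    lo = cp * np.sqrt(stats.chi2.ppf(alpha / 2, n - 1) / (n - 1))
    hi = cp * np.sqrt(stats.chi2.ppf(1 - alpha / 2, n - 1) / (n - 1))
    return cp, cpk, (lo, hi)

rng = np.random.default_rng(0)
x = rng.normal(10.1, 0.2, size=50)       # process slightly off a target of 10
print(capability(x, lsl=9.4, usl=10.6))  # Cpk < Cp reflects the off-center mean
```

The Cp interval is exact because the scaled sample variance is chi-square distributed; intervals for Cpk and the bivariate indices the thesis covers require approximations.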
85

Simultaneous Confidence Intervals for Non-parametric Relative Contrast Effects

Konietschke, Frank 20 July 2009 (has links)
No description available.
86

Robust Algorithms for Optimization of Chemical Processes in the Presence of Model-Plant Mismatch

Mandur, Jasdeep Singh 12 June 2014 (has links)
Process models are always associated with uncertainty, due to either inaccurate model structure or inaccurate identification. If left unaccounted for, these uncertainties can significantly affect model-based decision-making. This thesis addresses the problem of model-based optimization in the presence of uncertainties, especially those due to model structure error. The optimal solution from standard optimization techniques is often associated with a certain degree of uncertainty, and if the model-plant mismatch is very significant, this solution may have a significant bias with respect to the actual process optimum. Accordingly, in this thesis, we develop new strategies to reduce (1) the variability in the optimal solution and (2) the bias between the predicted and the true process optima.

Robust optimization is a well-established methodology where the variability in the optimization objective is considered explicitly in the cost function, leading to a solution that is robust to model uncertainties. However, the reported robust formulations have a few limitations, especially in the context of nonlinear models. The standard technique to quantify the effect of model uncertainties is based on linearization of the underlying model, which may not be valid if the noise in the measurements is quite high. To address this limitation, uncertainty descriptions based on Bayes' theorem are implemented in this work. Since for nonlinear models the resulting Bayesian uncertainty may have a non-standard form with no analytical solution, propagating this uncertainty onto the optimum may become computationally challenging using conventional Monte Carlo techniques. To this end, an approach based on Polynomial Chaos (PC) expansions is developed. A simulated case study shows that this approach results in drastic reductions in computational time compared to a standard Monte Carlo sampling technique. The key advantage of PC expansions is that they provide analytical expressions for statistical moments even if the uncertainty in variables is non-standard. These expansions are also used to speed up the calculation of the likelihood function within the Bayesian framework. Here, a methodology based on multi-resolution analysis is proposed to formulate the PC-based approximate model with higher accuracy over the region of parameter space that is most likely given the measurements.

For the second objective, i.e. reducing the bias between the predicted and true process optima, an iterative optimization algorithm is developed which progressively corrects the model for structural error as the algorithm proceeds towards the true process optimum. The standard technique is to calibrate the model at some initial operating conditions and then use this model to search for an optimal solution. Since the identification and optimization objectives are solved independently, when there is a mismatch between the process and the model the parameter estimates cannot satisfy these two objectives simultaneously. To this end, in the proposed methodology, corrections are added to the model in such a way that the updated parameter estimates reduce the conflict between the identification and optimization objectives. Unlike the standard estimation technique, which minimizes only the prediction error at a given set of operating conditions, the proposed algorithm also includes in the estimation the differences between the predicted and measured gradients of the optimization objective and/or constraints.
In the initial version of the algorithm, the proposed correction is based on the linearization of model outputs. Then, in the second part, the correction is extended by using a quadratic approximation of the model, which, for the given case study, resulted in much faster convergence as compared to the earlier version. Finally, the methodologies mentioned above were combined to formulate a robust iterative optimization strategy that converges to the true process optimum with minimum variability in the search path. One of the major findings of this thesis is that the robust optimal solutions based on the Bayesian parametric uncertainty are much less conservative than their counterparts based on normally distributed parameters.
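
The computational trick the abstract credits for the speed-up — replacing Monte Carlo sampling with analytical moments of a polynomial chaos expansion — can be shown in one dimension. The sketch below is generic, not the thesis implementation: a toy exponential response to a single standard-normal parameter stands in for the process model.

```python
# A 1-D sketch (not the thesis code) of non-intrusive polynomial chaos:
# project a toy model output onto probabilists' Hermite polynomials and
# read the mean and variance off the coefficients analytically.
import math
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite basis

def pc_coeffs(g, order, n_quad=20):
    """Project g(xi), xi ~ N(0,1), onto He_0..He_order by Gauss-Hermite quadrature."""
    x, w = He.hermegauss(n_quad)               # nodes/weights for weight e^{-x^2/2}
    w = w / w.sum()                            # normalize to a probability measure
    return np.array([np.sum(w * g(x) * He.hermeval(x, [0] * k + [1]))
                     / math.factorial(k) for k in range(order + 1)])

model = lambda xi: np.exp(0.3 * xi)            # toy response to one parameter
c = pc_coeffs(model, order=6)
mean = c[0]                                    # moments come from coefficients:
var = sum(c[k]**2 * math.factorial(k) for k in range(1, len(c)))
print(mean, var)                               # exact mean is exp(0.045) ~ 1.046
```

Once the coefficients are known, every new uncertainty-propagation query is an algebraic evaluation rather than a fresh Monte Carlo run, which is the source of the reported drastic time reduction.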
87

The Evaluation of Well-known Effort Estimation Models based on Predictive Accuracy Indicators

Khan, Khalid January 2010 (has links)
Accurate and reliable effort estimation is still one of the most challenging processes in software engineering. There have been a number of attempts to develop cost estimation models, and the evaluation of the accuracy and reliability of those models has gained interest in the last decade. A model can be finely tuned to specific data, but the issue of selecting the most appropriate model remains. A model's predictive accuracy is determined by comparing various accuracy measures; the one with the minimum relative error is considered the best fit, and the difference needs to be statistically significant before the model is declared the best fit. This practice has evolved into model evaluation: a model's predictive accuracy indicators need to be statistically tested before deciding to use that model for estimation. The aim of this thesis is to statistically evaluate well-known effort estimation models according to their predictive accuracy indicators using two approaches: bootstrap confidence intervals and permutation tests. The significance of the differences between various accuracy indicators was empirically tested on projects obtained from the International Software Benchmarking Standards Group (ISBSG) data set. We selected projects measured in Un-Adjusted Function Points (UFP) of quality A. Analysis of Variance (ANOVA) and regression were used to form the Least Squares (LS) set, and Estimation by Analogy (EbA) was used to form the EbA set; stepwise ANOVA was used to build the parametric model, and the k-NN algorithm was employed to obtain analogous projects for effort estimation in EbA. It was found that estimation reliability increased with statistical pre-processing of the data; moreover, the significance of the accuracy indicators was tested not only with standard statistics but also with more complex inferential statistical methods. The choice of the non-parametric methodology (EbA) for generating project estimates is thus not arbitrary but statistically supported.
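
Of the two approaches named above, the permutation test is the simpler to sketch. The following generic example, with fabricated residuals rather than ISBSG data, tests whether the mean absolute residual differs significantly between two paired sets of project estimates (e.g., LS vs. EbA):

```python
# A minimal paired permutation test (not the thesis code) for the difference
# in mean absolute residual between two effort estimation models. Data are
# fabricated for illustration.
import numpy as np

def perm_test(errors_a, errors_b, n_perm=10000, seed=0):
    """Randomly swap each project's two absolute errors to build the null."""
    rng = np.random.default_rng(seed)
    a, b = np.abs(errors_a), np.abs(errors_b)
    observed = a.mean() - b.mean()
    diffs = a - b
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1, 1], size=len(diffs))   # swapping a/b flips the sign
        if abs((signs * diffs).mean()) >= abs(observed):
            count += 1
    return observed, count / n_perm                    # two-sided p-value

res_ls = np.array([120, -80, 45, 200, -150, 60, -90, 30.0])   # hypothetical LS residuals
res_eba = np.array([100, -60, 50, 180, -120, 40, -70, 25.0])  # hypothetical EbA residuals
print(perm_test(res_ls, res_eba))
```

A bootstrap confidence interval for the same difference would instead resample the paired residuals with replacement and report the percentile interval, the other approach the thesis applies.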
89

Statistical Inferences on Inflated Data Based on Modified Empirical Likelihood

Stewart, Patrick 06 August 2020 (has links)
No description available.
90

General-Purpose Statistical Inference with Differential Privacy Guarantees

Zhanyu Wang 06 December 2023 (has links)
Differential privacy (DP) uses a probabilistic framework to measure the level of privacy protection of a mechanism that releases data analysis results to the public. Although DP is widely used by both government and industry, there is still a lack of research on statistical inference under DP guarantees. On the one hand, existing DP mechanisms mainly aim to extract dataset-level information instead of population-level information. On the other hand, DP mechanisms introduce calibrated noises into the released statistics, which often results in sampling distributions more complex and intractable than the non-private ones. This dissertation aims to provide general-purpose methods for statistical inference, such as confidence intervals (CIs) and hypothesis tests (HTs), that satisfy the DP guarantees.

In the first part of the dissertation, we examine a DP bootstrap procedure that releases multiple private bootstrap estimates to construct DP CIs. We present new DP guarantees for this procedure and propose to use deconvolution with DP bootstrap estimates to derive CIs for inference tasks such as population mean, logistic regression, and quantile regression. Our method achieves the nominal coverage level in both simulations and real-world experiments and offers the first approach to private inference for quantile regression.

In the second part of the dissertation, we propose to use the simulation-based "repro sample" approach to produce CIs and HTs based on DP statistics. Our methodology has finite-sample guarantees and can be applied to a wide variety of private inference problems. It appropriately accounts for biases introduced by DP mechanisms (such as by clamping) and improves over other state-of-the-art inference methods in terms of the coverage and type I error of the private inference.

In the third part of the dissertation, we design a debiased parametric bootstrap framework for DP statistical inference. We propose the adaptive indirect estimator, a novel simulation-based estimator that is consistent and corrects the clamping bias in the DP mechanisms. We also prove that our estimator has the optimal asymptotic variance among all well-behaved consistent estimators, and the parametric bootstrap results based on our estimator are consistent. Simulation studies show that our framework produces valid DP CIs and HTs in finite-sample settings, and it is more efficient than other state-of-the-art methods.
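
The inference problem the dissertation tackles can be seen in miniature with the Laplace mechanism. The toy sketch below is not any of the dissertation's three methods: it releases a clamped DP mean and wraps it in a naive parametric bootstrap that re-simulates both sampling and privacy noise. The bounds, epsilon, and assumed data model are invented, and the sketch ignores the clamping bias that the dissertation's debiased estimators correct.

```python
# A toy illustration (not the dissertation's method) of DP release plus a
# naive parametric-bootstrap CI accounting for sampling and privacy noise.
import numpy as np

def dp_mean(x, lo, hi, eps, rng):
    """epsilon-DP mean of data clamped to [lo, hi]; sensitivity is (hi-lo)/n."""
    x = np.clip(x, lo, hi)
    return x.mean() + rng.laplace(scale=(hi - lo) / (len(x) * eps))

def dp_bootstrap_ci(m_dp, n, lo, hi, eps, sigma_guess, n_boot=5000, alpha=0.05, seed=1):
    """Re-simulate sampling noise + Laplace noise around the released value."""
    rng = np.random.default_rng(seed)
    sims = np.array([dp_mean(rng.normal(m_dp, sigma_guess, n), lo, hi, eps, rng)
                     for _ in range(n_boot)])
    spread = sims - m_dp                       # pivot-style bootstrap interval
    return (m_dp - np.quantile(spread, 1 - alpha / 2),
            m_dp - np.quantile(spread, alpha / 2))

rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=500)          # hypothetical sensitive data
released = dp_mean(data, lo=-3, hi=3, eps=1.0, rng=rng)
print(released, dp_bootstrap_ci(released, n=500, lo=-3, hi=3, eps=1.0, sigma_guess=1.0))
```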
