11

Designs for nonlinear regression with a prior on the parameters

Karami, Jamil Unknown Date
No description available.
12

Evaluation of the predictive ability of multivariate GARCH models: an analysis based on the Model Confidence Set criterion / Avaliação da habilidade preditiva entre modelos GARCH multivariados: uma análise baseada no critério Model Confidence Set

Borges, Bruna Kasprzak January 2012 (has links)
This dissertation considers the selection of multivariate GARCH models in terms of conditional covariance matrix forecasting. The empirical application uses seven series of stock-index returns and compares a set of 34 model specifications, computing one-step-ahead conditional covariance forecasts over an evaluation sample of 60 observations for each specification. The models are compared with the Model Confidence Set (MCS) procedure, evaluated with two loss functions that are robust to imperfect volatility proxies. The MCS is a procedure that compares several models simultaneously in terms of predictive ability and determines, at a given confidence level, a set of models that are statistically equivalent in forecasting performance.
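The abstract does not name the two robust loss functions used in the MCS comparison; a common pair in this literature is the matrix MSE (Frobenius) loss and the QLIKE loss. The sketch below is an illustration under that assumption, with stand-in proxy and forecast matrices, not the dissertation's implementation.

```python
# Illustrative per-period loss functions for evaluating a covariance forecast H
# against a (possibly noisy) realized-covariance proxy S. The choice of MSE and
# QLIKE is an assumption; the abstract only says the losses are robust to
# imperfect volatility proxies.
import numpy as np

def mse_loss(S, H):
    """Frobenius (matrix MSE) loss between proxy S and forecast H."""
    return np.sum((S - H) ** 2)

def qlike_loss(S, H):
    """QLIKE-type loss: log|H| + tr(H^{-1} S), up to additive constants."""
    _, logdet = np.linalg.slogdet(H)
    return logdet + np.trace(np.linalg.inv(H) @ S)

# Toy evaluation over 60 periods and 7 assets, mirroring the sample sizes above.
rng = np.random.default_rng(0)
T, k = 60, 7
losses = []
for _ in range(T):
    A = rng.standard_normal((k, k))
    S = A @ A.T + np.eye(k)        # stand-in realized covariance proxy
    H = S + 0.1 * np.eye(k)        # stand-in one-step-ahead model forecast
    losses.append(qlike_loss(S, H))
print(np.mean(losses))
```

In an MCS application, such per-period losses would be computed for each of the competing specifications and passed to the MCS elimination procedure, which trims models until the surviving set is statistically indistinguishable at the chosen confidence level.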
15

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

January 2020 (has links)
abstract: Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, average-case information-theoretic measures can be more relevant when utility matters. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge. This work introduces a tunable leakage measure called maximal α-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely α-loss. The choice of α determines specific adversarial actions, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these two values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. Maximal α-leakage is proved to have a composition property and to be robust to side information. There is a fundamental disconnect between theoretical measures of information leakage and their applications in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. This framework is formulated as a constrained minimax optimization of the expected α-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with α = 1 is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks. Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, (ε, δ)-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget this method allows about 100 more SGD rounds for training deep learning models. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
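For readers unfamiliar with the α-loss family referenced in this abstract, a minimal sketch follows. The normalization used here, l_α(p) = (α/(α-1))(1 - p^(1-1/α)), is the commonly cited form and should be read as an assumption rather than the dissertation's exact convention.

```python
# Minimal sketch of the alpha-loss family: the loss incurred when probability p
# is assigned to the true outcome. alpha -> 1 recovers log-loss (belief
# refinement); alpha -> infinity recovers 1 - p (MAP-style guessing).
import numpy as np

def alpha_loss(p, alpha):
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.log(p)                 # log-loss limit
    if np.isinf(alpha):
        return 1.0 - p                    # probability-of-error style limit
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

for a in (1.0, 2.0, np.inf):
    print(a, alpha_loss(0.8, a))
```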
16

Comparing Logit and Hinge Surrogate Loss Functions in Outcome Weighted Learning

Eisner, Mariah Claire 01 October 2020 (has links)
No description available.
17

A Segmentation Network with a Class-Agnostic Loss Function for Training on Incomplete Data / Ett segmenteringsnätverk med en klass-agnostisk förlustfunktion för att träna på inkomplett data

Norman, Gabriella January 2020 (has links)
The use of deep learning methods is increasing in medical image analysis, for example for the segmentation of organs in medical images. Deep learning methods depend heavily on large amounts of training data, a common obstacle in medical image analysis. This master thesis proposes a class-agnostic loss function as a method for training on incomplete data. The project used CT images from 1587 breast cancer patients, with a varying set of available segmentation masks for each patient. The class-agnostic loss function is given a label for each class of each sample, in this project for each segmentation mask of each CT slice. The label tells the loss function whether the mask is an actual annotation or just a placeholder. If it is a placeholder, the comparison between the predicted mask and the placeholder does not contribute to the loss value. The results show that the class-agnostic loss function made it possible to train a segmentation model with eight output masks on data in which all eight masks were never present at the same time, and to achieve approximately the same performance as single-mask models.
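A rough sketch of a label-gated loss of the kind described above, in which placeholder masks contribute nothing to the loss. The tensor shapes, names, and the binary cross-entropy base loss are illustrative assumptions, not the thesis implementation.

```python
# Sketch: per-mask availability flags gate the loss so that unannotated
# (placeholder) masks are ignored during training.
import torch
import torch.nn.functional as F

def class_agnostic_loss(logits, targets, available):
    # logits:    (B, C, H, W) raw network outputs, one channel per structure
    # targets:   (B, C, H, W) ground-truth masks (zeros where only a placeholder exists)
    # available: (B, C) 1.0 if the mask was actually annotated, 0.0 for placeholders
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_mask = per_pixel.mean(dim=(2, 3))           # (B, C) mean loss per mask
    gated = per_mask * available                    # zero out placeholder masks
    return gated.sum() / available.sum().clamp(min=1.0)

# Example: a batch of 2 CT slices, 8 possible masks, only some annotated.
logits = torch.randn(2, 8, 64, 64)
targets = torch.randint(0, 2, (2, 8, 64, 64)).float()
available = torch.tensor([[1, 1, 0, 0, 1, 0, 0, 0],
                          [0, 1, 1, 0, 0, 0, 1, 0]]).float()
print(class_agnostic_loss(logits, targets, available))
```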
18

Confidence Distillation for Efficient Action Recognition

Manzuri Shalmani, Shervin January 2020 (has links)
Modern neural networks are powerful predictive models. However, they perform poorly when it comes to recognizing that they may be wrong about their predictions and measuring the certainty of their beliefs. For one of the most common activation functions, the ReLU and its variants, even a well-calibrated model can produce incorrect but high-confidence predictions. In the related task of action recognition, most current classification methods are based on clip-level classifiers that densely sample a given video into non-overlapping, same-sized clips and aggregate the results with an aggregation function, typically averaging, to obtain video-level predictions. While this approach has been shown to be effective, it is sub-optimal in recognition accuracy and carries a high computational overhead. To mitigate both issues, we propose the confidence distillation framework, which first teaches the student a representation of the teacher's uncertainty and then divides the task of full-video prediction between the student and teacher models. We conduct extensive experiments on three action recognition datasets and demonstrate that our framework achieves state-of-the-art results in action recognition accuracy and computational efficiency. / Thesis / Master of Science (MSc) / We devise a distillation loss function to train an efficient sampler/classifier for video-based action recognition tasks.
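The abstract does not spell out the confidence distillation loss itself; a standard temperature-scaled KL distillation term, sketched below, conveys the general idea of a student learning the teacher's softened (confidence-carrying) predictive distribution. It is an assumption, not the thesis's exact formulation.

```python
# Sketch of a temperature-scaled distillation loss: the student is trained to
# match the teacher's softened class distribution over action labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    t_prob = F.softmax(teacher_logits / temperature, dim=-1)
    s_logprob = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s_logprob, t_prob, reduction="batchmean") * temperature ** 2

# Example with a batch of 4 clips and a hypothetical 101-class action label set.
student = torch.randn(4, 101)
teacher = torch.randn(4, 101)
print(distillation_loss(student, teacher))
```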
19

Flood Damage and Vulnerability Assessment for Hurricane Sandy in New York City

Zhang, Fang 02 October 2013 (has links)
No description available.
20

A BAYESIAN DECISION THEORETIC APPROACH TO FIXED SAMPLE SIZE DETERMINATION AND BLINDED SAMPLE SIZE RE-ESTIMATION FOR HYPOTHESIS TESTING

Banton, Dwaine Stephen January 2016 (has links)
This thesis considers two related problems that have applications in the field of experimental design for clinical trials:
• fixed sample size determination for parallel-arm, double-blind survival data analysis to test the hypothesis of no difference in survival functions, and
• blinded sample size re-estimation for the same.
For the first problem, fixed sample size determination, a method is developed for hypothesis testing in general and then applied to survival analysis in particular; for the second problem, blinded sample size re-estimation, a method is developed specifically for survival analysis. In both problems, the exponential survival model is assumed. The approach we propose for sample size determination is Bayesian decision-theoretic, using an explicit loss function and prior distribution. The loss function used is the intrinsic discrepancy loss function introduced by Bernardo and Rueda (2002) and further expounded upon in Bernardo (2011). We use a conjugate prior and investigate the sensitivity of the calculated sample sizes to the specification of the hyper-parameters. For the second problem, blinded sample size re-estimation, we use prior predictive distributions to calculate the interim test statistic in a blinded manner while controlling the Type I error. Determining the test statistic in a blinded manner continues to be a nettlesome problem for researchers. The first problem is typical of traditional experimental designs, while the second extends into the realm of adaptive designs. To the best of our knowledge, the approaches we suggest for both problems are novel and extend the current research on both topics. The advantages of our approach, as we see it, are the unity and coherence of the statistical procedures, the systematic and methodical incorporation of prior knowledge, and ease of calculation and interpretation. / Statistics
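As a worked illustration of the loss this thesis builds on, the intrinsic discrepancy between two models is the smaller of the two directed Kullback-Leibler divergences, and for the exponential survival model assumed above it has a closed form. The example below compares two hypothetical hazard rates and is only a sketch of the loss, not of the full sample size procedure.

```python
# Intrinsic discrepancy (Bernardo and Rueda, 2002) between two exponential
# survival models with hazard rates lam1 and lam2:
#   delta = min{ KL(Exp(lam1) || Exp(lam2)), KL(Exp(lam2) || Exp(lam1)) }
import math

def kl_exponential(lam_p, lam_q):
    # KL(Exp(lam_p) || Exp(lam_q)) = log(lam_p/lam_q) + lam_q/lam_p - 1
    return math.log(lam_p / lam_q) + lam_q / lam_p - 1.0

def intrinsic_discrepancy(lam1, lam2):
    return min(kl_exponential(lam1, lam2), kl_exponential(lam2, lam1))

# Example: discrepancy between hypothetical hazard rates 0.5 and 0.8.
print(intrinsic_discrepancy(0.5, 0.8))
```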
