11

Combinação de classificadores para inferência dos rejeitados / Combination of classifiers for reject inference

Rocha, Ricardo Ferreira da 16 March 2012
Made available in DSpace on 2016-06-02 (GMT). Previous issue date: 2012-03-16 / Financiadora de Estudos e Projetos / In credit scoring problems, the aim is to associate with each credit applicant a probability of default. Traditional models, however, are built on biased samples, since the data available to lenders cover only clients whose previous credit applications were approved. To reduce this sample bias, we use strategies that extract information about rejected applicants so that a good/bad payer response can be inferred for them; this is what we call reject inference. Together with these strategies, we apply the bagging technique (bootstrap aggregating), which builds several models on bootstrap replicates of the training data and combines them into a new predictor. In this work we discuss some of the combination methods in the literature, in particular combination via logistic regression, a method still little used but with interesting results. We also discuss the main strategies for reject inference. The analyses are carried out through a simulation study, on both generated data sets and publicly available real data sets.
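The bagging-plus-combination idea in the abstract can be sketched as follows. This is a minimal illustration under assumptions of our own, not the thesis's implementation: the base scorers are plain logistic regressions fitted by gradient descent on bootstrap replicates, and the combiner is another logistic regression over the base models' predicted probabilities.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=500):
    # Plain logistic regression by batch gradient descent (intercept included).
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        z = np.clip(Xb @ w, -30, 30)          # guard against exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    z = np.clip(Xb @ w, -30, 30)
    return 1.0 / (1.0 + np.exp(-z))

def bagging_logistic_combination(X, y, X_new, n_boot=25, seed=0):
    """Fit base scorers on bootstrap replicates of (X, y), then combine
    their probabilities with a logistic-regression meta-model."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # bootstrap resample
        models.append(fit_logistic(X[idx], y[idx]))
    # Base predictions on the training data become the combiner's features.
    Z_train = np.column_stack([predict_proba(w, X) for w in models])
    w_comb = fit_logistic(Z_train, y)
    Z_new = np.column_stack([predict_proba(w, X_new) for w in models])
    return predict_proba(w_comb, Z_new)
```

In a reject-inference setting, `X` and `y` would come from the accepted applicants, and `X_new` would hold the rejected applicants whose good/bad payer response is to be inferred.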
12

[pt] ENSAIOS EM GESTÃO DE CARTEIRAS E PREVISÃO DE RETORNOS DE AÇÕES / [en] ESSAYS IN PORTFOLIO MANAGEMENT AND STOCKS RETURN FORECASTING

ARTUR MANOEL PASSOS 29 November 2021
[en] The dissertation consists of three empirical essays using historical data on stocks listed on the NYSE. The first essay evaluates a portfolio selection approach based on Markowitz optimization. Results show that the resulting portfolios have positive economic value, even after transaction costs. The second essay compares the technique proposed in the first essay with the parametric approach and evaluates a combination of the two. Results show that the parametric approach performs worse than the modified Markowitz approach and only slightly better than the aggregate market. This suggests that exploiting the covariance structure of stocks provides more economic value than overweighting stocks whose characteristics were associated with better risk-return ratios in the past. The third essay evaluates models that forecast the cross-sectional variation in stock returns. By the statistics used, the benchmark models show no greater forecasting power than models that assume no variation or that use the historical mean. Using combinations of linear models and restricted (lasso) estimation of models with many factors, I show that slightly better results can be obtained.
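The Markowitz optimization underlying the first essay can be sketched roughly as follows. This is a hypothetical plug-in version using sample moments from a historical return matrix; the dissertation's modified approach and its transaction-cost treatment are not reproduced here.

```python
import numpy as np

def markowitz_weights(returns):
    """Mean-variance weights w proportional to inv(Sigma) @ mu,
    rescaled to sum to one (fully invested, unconstrained).

    returns: (T, N) matrix of T historical return observations on N assets.
    """
    mu = returns.mean(axis=0)                  # sample mean returns
    sigma = np.cov(returns, rowvar=False)      # sample covariance matrix
    raw = np.linalg.solve(sigma, mu)           # inv(Sigma) @ mu without explicit inverse
    return raw / raw.sum()
```

Assets with high expected return and low variance receive the largest weights, which is the covariance-structure effect the second essay finds more valuable than characteristic-based overweighting.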
13

Fusion pour la séparation de sources audio / Fusion for audio source separation

Jaureguiberry, Xabier 16 June 2015
Underdetermined blind source separation is a complex mathematical problem that can now be solved satisfactorily in practical applications, provided that the method best suited to the problem has been selected and carefully tuned. In order to automate this decisive selection step, we propose in this thesis to resort to the principle of fusion, which has been widely used in the related field of classification yet is still marginally exploited in source separation. Fusion consists in combining several methods to solve a given problem instead of selecting a single one. To do so, we introduce a general fusion framework in which a source estimate is expressed as a linear combination of estimates of the same source given by different separation algorithms, each estimate being weighted by a fusion coefficient. For a given task, the fusion coefficients can be learned on a representative training dataset by minimizing a cost function related to the separation objective. To go further, we also propose two ways to adapt the fusion coefficients to the mixture to be separated. The first expresses the fusion of several non-negative matrix factorization (NMF) models in a Bayesian fashion, similar to Bayesian model averaging. The second learns time-varying fusion coefficients with deep neural networks. All proposed methods have been evaluated on two distinct corpora, one dedicated to speech enhancement and the other to singing voice extraction. Experimental results show that fusion always outperforms simple selection in all considered cases, with the best results obtained by adaptive time-varying fusion with neural networks.
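The general fusion framework described above, a source estimate expressed as a linear combination of per-algorithm estimates weighted by learned coefficients, can be sketched as follows. This is a minimal least-squares version of the static case only; the Bayesian and time-varying neural variants of the thesis are not shown.

```python
import numpy as np

def learn_fusion_coefficients(estimates, reference):
    """Learn static fusion coefficients by least squares.

    estimates: (M, T) array, M separation algorithms' estimates of one source.
    reference: (T,) true source signal from the training set.
    Minimizes || reference - estimates.T @ c ||^2 over c.
    """
    c, *_ = np.linalg.lstsq(estimates.T, reference, rcond=None)
    return c

def fuse(estimates, c):
    # Weighted linear combination of the individual source estimates.
    return estimates.T @ c
```

Because choosing a single algorithm corresponds to a coefficient vector with a one in that position and zeros elsewhere, the least-squares fused estimate can never do worse on the training data than the best individual estimate, which mirrors the fusion-versus-selection comparison in the abstract.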
