31 |
Método do gradiente para funções convexas generalizadas / Gradient method for generalized convex functions. COUTO, Kelvin Rodrigues. 16 December 2009
The convergence theory of the gradient method and the gradient projection method for minimizing continuously differentiable generalized convex functions, that is, pseudoconvex and quasiconvex functions, is studied in this work. We show that, under certain conditions, the gradient method, as well as the gradient projection method, generates a convergent sequence whose limit point is a minimizer, provided the objective function has a minimizer and is pseudoconvex. If the objective function is quasiconvex, the generated sequence converges to a stationary point of the problem whenever such a point exists.
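The projected gradient iteration with an Armijo backtracking search analyzed in this abstract can be sketched as follows; the parameter values, the stopping rule, and the test problem in the usage note are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def projected_gradient(f, grad, project, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=1000):
    """Projected gradient method with an Armijo backtracking line search."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        d = project(x - g) - x          # feasible descent direction
        if np.linalg.norm(d) < tol:     # stationary point of the constrained problem
            return x
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):  # Armijo condition
            t *= beta
        x = x + t * d
    return x
```

For instance, minimizing the convex function f(x) = ||x||^2 over the box [1, 2]^2, with `project` given by `np.clip`, converges to the minimizer (1, 1).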
|
32 |
Weighted Consensus Segmentations. Saker, Halima; Machné, Rainer; Fallmann, Jörg; Murray, Douglas B.; Shahin, Ahmad M.; Stadler, Peter F. 03 May 2023
The problem of segmenting linearly ordered data is frequently encountered in time-series analysis, computational biology, and natural language processing. Segmentations obtained independently from replicate data sets, or from the same data with different methods or parameter settings, pose the problem of computing an aggregate or consensus segmentation. This Segmentation Aggregation problem amounts to finding a segmentation that minimizes the sum of distances to the input segmentations. It is again a segmentation problem and can be solved by dynamic programming. The aim of this contribution is (1) to gain a better mathematical understanding of the Segmentation Aggregation problem and its solutions and (2) to demonstrate that consensus segmentations have useful applications. Extending previously known results, we show that for a large class of distance functions only breakpoints present in at least one input segmentation appear in the consensus segmentation. Furthermore, we derive a bound on the size of consensus segments. As showcase applications, we investigate a yeast transcriptome and show that consensus segments provide a robust means of identifying transcriptomic units. This approach is particularly suited for dense transcriptomes with polycistronic transcripts, operons, or a lack of separation between transcripts. As a second application, we demonstrate that consensus segmentations can be used to robustly identify growth regimes from sets of replicate growth curves.
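For intuition, under one simple choice of distance, the symmetric difference of breakpoint sets, the aggregation problem has a closed-form solution consistent with the result quoted above (only breakpoints present in some input can appear in the consensus): a breakpoint enters the consensus exactly when it occurs in a majority of the inputs. A toy sketch of this special case; the general dynamic-programming solution for other distance functions is not shown.

```python
from collections import Counter

def consensus_breakpoints(segmentations):
    """Majority-vote consensus of breakpoint sets.

    Under the symmetric-difference distance d(A, B) = |A ^ B|, including a
    breakpoint b in the consensus C costs (number of inputs missing b), and
    excluding it costs (number of inputs containing b); so the minimizer of
    sum_i d(C, B_i) keeps exactly the breakpoints occurring in more than
    half of the inputs.
    """
    k = len(segmentations)
    counts = Counter(b for seg in segmentations for b in seg)
    return sorted(b for b, c in counts.items() if 2 * c > k)
```

For example, with inputs {3, 7}, {3, 9}, {3, 7, 9} every breakpoint occurs in a majority, so the consensus is [3, 7, 9].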
|
33 |
Space-time constellation and precoder design under channel estimation errors. Yadav, A. (Animesh). 08 October 2013
Abstract
The design of multiple-input multiple-output (MIMO) transmitted signals for partially coherent Rayleigh fading channels with discrete inputs, under a given average transmit power constraint, is considered in this thesis. The objective is to design space-time constellations and linear precoders that adapt to the degradation caused by imperfect channel estimation at the receiver and by transmit-receive antenna correlation. The system is partially coherent in the sense that the MIMO channel coefficients are estimated at the receiver and their error covariance matrix is fed back to the transmitter.
Two constellation design criteria are proposed, one for a single transmit antenna and another for multiple transmit antennas. An upper bound on the average bit error probability is derived for the single transmit antenna, and the cutoff rate, i.e., a lower bound on the mutual information, is derived for multiple transmit antennas. Both criteria are functions of the channel estimation error covariance matrix, and the resulting designs are called partially coherent constellations. Using these constellations together with forward error control codes requires efficient bit mapping schemes. Because the constellations generally lack geometric symmetry, Gray mapping is often impossible.
Moreover, different mapping schemes can lead to markedly different bit error rate performance. Thus, an efficient bit mapping algorithm, the modified binary switching algorithm, is proposed; it minimizes an upper bound on the average bit error probability. Computer simulations show that the designed partially coherent constellations and their optimized bit mappings, combined with turbo codes, outperform conventional constellations.
Linear precoder design is also considered as a simpler, suboptimal alternative. The cutoff rate expression is again used as the design criterion: a linear precoder is obtained by numerically maximizing the cutoff rate with respect to the precoder matrix under a given average transmit power constraint. The precoder matrix is decomposed via the singular value decomposition into input shaping, power loading, and beamforming matrices. The beamforming matrix is found to coincide with the eigenvectors of the transmit correlation matrix. The power loading and input shaping matrices are solved numerically, using the difference of convex functions programming algorithm and optimization under a unitary constraint, respectively. Computer simulations show that the designed precoders achieve significant performance gains over cutoff rate optimized partially coherent constellations without precoding.
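The singular value decomposition of the precoder described above can be illustrated numerically. In this sketch the assignment of names to factors (beamforming U, power loading diag(s), input shaping Vh) is an interpretive assumption based on the abstract's wording, and the matrix in the usage note is a random stand-in rather than an optimized precoder.

```python
import numpy as np

def decompose_precoder(F):
    """Decompose a precoder F = U @ np.diag(s) @ Vh via the SVD.

    Interpreted here as beamforming (U, orthonormal columns), power loading
    (diag(s), nonnegative diagonal) and input shaping (Vh, unitary) factors;
    the naming of the factors is an assumption for illustration.
    """
    U, s, Vh = np.linalg.svd(F, full_matrices=False)
    return U, np.diag(s), Vh
```

Multiplying the three factors back together recovers the original precoder, and both U and Vh have orthonormal rows/columns.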
|
34 |
Algorithmes basés sur la programmation DC et DCA pour l'apprentissage avec la parcimonie et l'apprentissage stochastique en grande dimension / DCA based algorithms for learning with sparsity in the high dimensional setting and stochastic learning. Phan, Duy Nhat. 15 December 2016
These days, with the increasing abundance of very high dimensional data, high dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful tools in nonconvex optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part concerns stochastic learning.
In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), two different approaches to supervised classification in the high dimensional setting, where the number of features is much larger than the number of observations. Continuing this study, we examine the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA based algorithms (Chapter 4). Two applications, in finance and in classification, illustrate the efficiency of our methods. The second part studies the L_{p,0} regularization for group variable selection (Chapter 5). Using a DC approximation of the L_{p,0} norm, we show that the approximate problem is, with suitable parameters, equivalent to the original one. Considering two equivalent reformulations of the approximate problem, we develop DCA based algorithms to solve them. As applications, we apply the proposed algorithms to group feature selection in the optimal scoring problem and in the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As a case study, we propose a special stochastic DCA scheme for the log-linear model incorporating latent variables.
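The DCA iteration underlying these algorithms linearizes the concave part of a DC decomposition f = g - h (g, h convex) and solves a convex subproblem at each step. A minimal one-dimensional sketch, assuming the convex subproblem argmin_x g(x) - y*x is available in closed form; the example decomposition f(x) = x^2 - 2|x| in the usage note is illustrative and not taken from the thesis.

```python
def dca(grad_h, argmin_g_linear, x0, max_iter=100, tol=1e-10):
    """DCA for minimizing f = g - h with g and h convex.

    Each iteration picks y_k in the subdifferential of h at x_k and
    minimizes the resulting convex majorant of f:
        x_{k+1} = argmin_x g(x) - y_k * x.
    """
    x = x0
    for _ in range(max_iter):
        y = grad_h(x)                 # (sub)gradient of the concave part
        x_new = argmin_g_linear(y)    # convex subproblem, assumed solvable in closed form
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```

For f(x) = x^2 - 2|x| with g(x) = x^2 and h(x) = 2|x|, one has grad_h(x) = 2*sign(x) and argmin_x x^2 - y*x = y/2; starting from any positive point, DCA reaches the local minimizer x = 1 in one step.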
|
35 |
Teoria dos Pontos Críticos e Sistemas Hamiltonianos. / Critical Point Theory and Hamiltonian Systems. BARBOSA, Leopoldo Maurício Tavares. 17 July 2018
Previous issue date: 2007-10 / CNPq / Capes / In this work we use variational methods to show the existence of weak solutions for two types of problems. The first concerns an ordinary differential equation; the second concerns Hamiltonian systems.
*To see the equations or formulas originally written in this abstract, we recommend downloading the complete file.
|
36 |
Problemas de Otimização Quase Convexos: Método do Gradiente para Funções Escalares e Vetoriais / Quasi-convex Optimization Problems: Gradient Method for Scalar and Vector Functions. SANTOS, Milton Gabriel Garcia dos. 27 October 2011
In this work we study the convergence properties of the projected gradient method and of a descent method for multiobjective optimization. In the first part, the optimization problem is to minimize a continuously differentiable real function of n variables restricted to a set of simple structure, with the objective function assumed pseudoconvex or quasiconvex. We then consider the unconstrained multiobjective optimization problem and add hypotheses on the vector function, such as convexity or quasiconvexity, besides continuous differentiability. In both problems, the inexact Armijo search along feasible directions is used.
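For the bi-objective case, the steepest-descent direction used by descent methods of this kind is the negative of the minimum-norm point in the convex hull of the two gradients, which has a closed form; the sketch below assumes that formulation and omits the Armijo search along the resulting direction.

```python
import numpy as np

def biobjective_descent_direction(g1, g2):
    """Steepest-descent direction for two objectives with gradients g1, g2.

    Returns -(lam * g1 + (1 - lam) * g2), where lam in [0, 1] minimizes
    ||lam * g1 + (1 - lam) * g2||; a zero vector signals Pareto stationarity.
    """
    diff = g1 - g2
    denom = diff.dot(diff)
    # Unconstrained minimizer of ||g2 + lam * diff||^2, clipped to [0, 1].
    lam = 0.0 if denom == 0.0 else float(np.clip(-g2.dot(diff) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)
```

With orthogonal gradients (1, 0) and (0, 1) the direction is (-0.5, -0.5), a common descent direction for both objectives; with opposed gradients (1, 0) and (-1, 0) the direction is zero, indicating a Pareto stationary point.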
|