About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
91

Iterated Grid Search Algorithm on Unimodal Criteria

Kim, Jinhyo 02 June 1997 (has links)
The unimodality of a function seems a simple concept, but in the Euclidean space R^m, m = 3, 4, ..., it is not easy to define. For a unimodal function, however, there is a simple tool for finding its minimum point. The goal of this project is to formalize and support distinctive strategies that typically guarantee convergence. Support is given both by analytic arguments and by a simulation study. Application is envisioned in low-dimensional but non-trivial problems. The convergence of the proposed iterated grid search algorithm is presented along with the results of particular application studies. It has been recognized that derivative-based methods, such as Newton-type methods, are not entirely satisfactory, so a variety of other tools have been considered as alternatives; many have been rejected because of apparent manipulative difficulties. In the current research, we therefore focus on a simple algorithm with guaranteed convergence for unimodal functions, avoiding possible chaotic behavior. Furthermore, in case the loss function to be optimized is not unimodal, we suggest a weaker condition, almost (noisy) unimodality, under which the iterated grid search still finds an estimated optimum point. / Ph. D.
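The idea of an iterated grid search can be sketched as follows. This is a minimal illustration, not the exact algorithm analyzed in the thesis: the function is evaluated on a regular grid, the search box is recentered on the best grid point, shrunk, and the process repeats.

```python
import numpy as np

def iterated_grid_search(f, lower, upper, points_per_dim=5, shrink=0.5, iters=20):
    """Minimize f over the box [lower, upper] by repeated grid refinement.

    At each iteration the function is evaluated on a regular grid, the box
    is recentered on the best grid point, and its width is multiplied by
    `shrink`. For a unimodal f the iterates converge to a neighborhood
    of the minimizer.
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    best = None
    for _ in range(iters):
        axes = [np.linspace(lo, hi, points_per_dim) for lo, hi in zip(lower, upper)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, len(lower))
        values = np.array([f(x) for x in grid])
        best = grid[np.argmin(values)]
        half = shrink * (upper - lower) / 2.0
        lower, upper = best - half, best + half  # recenter and shrink the box
    return best

# Example: a unimodal quadratic in R^3 with minimizer (1, 1, 1)
x_star = iterated_grid_search(lambda x: np.sum((x - 1.0) ** 2), [-5, -5, -5], [5, 5, 5])
```

Because the best grid point is never farther from the true minimizer than half a grid spacing, the shrunken box always retains the minimizer, which is the heart of the convergence argument for unimodal criteria.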
92

Investing in the Future: The Performance of Green Bonds Compared to Conventional Bonds and Stocks

Söderman, Mats, Haglund, Markus January 2024 (has links)
As the world faces unprecedented environmental challenges, there is an urgent need for large-scale investments in green infrastructure and technologies. If we are to achieve carbon neutrality, significant investments are necessary, and the entire financial system must therefore unite behind sustainable investment activities in a market-oriented manner. A green bond is a relatively new type of bond, first introduced in 2007 by the European Investment Bank (EIB). This was followed by a collaboration between Skandinaviska Enskilda Banken (SEB) and the World Bank, together with a group of Swedish investors, pension funds, and SRI-focused investors, who issued their first green bond in 2008 with the intention of attracting more investors. However, this attempt did not raise interest, and green bonds remained almost nonexistent until 2013. One explanation for the slow development of the green bond market is the 2008 financial crisis; another is that traditional investors deemed green bonds risky and unprofitable during this period. Using a deductive approach, this thesis investigates how green bonds perform compared to conventional bonds and stocks from the issuing company. The authors sampled green and conventional bonds from 33 companies that matured between 2018 and 2023. The sample contains bonds from Asia, Europe, South America, North America, and Australia, and the data were tested using multiple hypotheses. The thesis sets out to answer the research question: How do green bonds perform compared to conventional bonds and stocks? The results indicated a significant difference between the three asset types. First, stocks yield higher returns and higher standard deviations than green and conventional bonds. Second, the authors found no evidence of a difference in return between green and conventional bonds, but did find a significant difference in standard deviation.
The results also suggest differences in modified duration, convexity, maturity, and yield to maturity. These findings indicate that green bonds performed better than conventional bonds, especially regarding risk and volatility; green bonds could therefore be useful when diversifying a portfolio. The findings suggest that a portfolio combining the three assets could be in line with both shareholder theory and stakeholder theory. Portfolio theory also provides interesting insights into potential portfolio optimizations, since there are differences between green and conventional bonds. Because no difference in return was found between green and conventional bonds, the authors find no reason to support the idea of herding behavior in the trading of green bonds. However, the difference in standard deviation is interesting from a behavioral perspective: a lower standard deviation indicates that green bonds experience lower volatility than conventional bonds.
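The fixed-income risk measures compared here (modified duration and convexity) can be illustrated with a small calculation. This is a generic textbook sketch for a fixed-coupon bond, not the authors' actual methodology or data.

```python
def bond_risk_measures(face, coupon_rate, ytm, years, freq=2):
    """Price, modified duration (years), and convexity of a fixed-coupon bond.

    Cash flows are discounted at the per-period yield ytm/freq; duration
    and convexity are the standard closed-form sums over discounted flows.
    """
    y = ytm / freq
    c = face * coupon_rate / freq
    n = int(years * freq)
    cash_flows = [c] * (n - 1) + [c + face]
    pv = [cf / (1 + y) ** (t + 1) for t, cf in enumerate(cash_flows)]
    price = sum(pv)
    # Macaulay duration in years, then modified duration
    macaulay = sum((t + 1) * v for t, v in enumerate(pv)) / price / freq
    modified = macaulay / (1 + y)
    convexity = sum((t + 1) * (t + 2) * v for t, v in enumerate(pv)) / (
        price * (1 + y) ** 2 * freq ** 2
    )
    return price, modified, convexity

# Hypothetical bond: 5 years, 4% semiannual coupon, priced at a 5% yield
price, dur, conv = bond_risk_measures(face=100, coupon_rate=0.04, ytm=0.05, years=5)
```

A bond with lower duration and convexity reacts less to yield moves, which is the sense in which the thesis compares the risk profiles of green and conventional bonds.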
93

Optimal Transport: Regularity and Applications

Gallouët, Thomas 10 December 2012 (has links)
This thesis consists of two distinct parts, both related to optimal transport theory. In the first part, we consider a Riemannian manifold, two measures with smooth densities, and a transport cost, typically the quadratic geodesic distance, and we study the regularity of the optimal transport map. The key tool is the Ma-Trudinger-Wang (MTW) tensor, and especially its positivity. We first review the known results about the MTW tensor. We then explore the geometric consequences of the MTW tensor on the injectivity domains, and prove that in many cases the positivity of the MTW tensor implies their convexity. The second part is devoted to the behaviour of solutions of the Keller-Segel equation in dimension 2, which can be seen as a gradient flow in the Wasserstein space W2, in the supercritical case where the initial mass exceeds a critical threshold Mc and blow-up occurs. In particular we are interested in the mass quantization problem: we wish to show that the blow-up consists in the formation of a Dirac mass Mc. To study the behaviour of the solution we consider a particle approximation of a Keller-Segel type equation in dimension 1, defined using the gradient flow interpretation of the Keller-Segel equation and the particular structure of the Wasserstein space in dimension 1. We show two kinds of results. We first prove a stability theorem for the blow-up mechanism: we exhibit basins of attraction in which the solution blows up with only the critical number of particles. We then prove a rigidity theorem for the blow-up mechanism: thanks to a parabolic rescaling we show that the structure of the blow-up is given by the critical points of a certain functional.
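For orientation, the gradient-flow structure mentioned above can be written out in its standard form; the normalization below is the usual textbook one and may differ from the thesis'.

```latex
% Parabolic-elliptic Keller-Segel model in dimension 2 (standard form).
% The cell density \rho evolves as
\[
\partial_t \rho = \Delta \rho - \chi \, \nabla \cdot (\rho \nabla c),
\qquad -\Delta c = \rho ,
\]
% which is the gradient flow in the Wasserstein space W_2 of the free energy
\[
\mathcal{F}[\rho] = \int_{\mathbb{R}^2} \rho \log \rho \, dx
  + \frac{\chi}{4\pi} \iint_{\mathbb{R}^2 \times \mathbb{R}^2}
    \rho(x) \log |x - y| \, \rho(y) \, dx \, dy ,
\]
% with blow-up exactly when the total mass exceeds the critical value
% M_c = 8\pi / \chi.
```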
94

Valuation and Optimal Strategies in Markets Experiencing Shocks

Dyrssen, Hannah January 2017 (has links)
This thesis treats a range of stochastic methods with various applications, most notably in finance. It comprises five articles and a summary of the key concepts and results they are built on. The first two papers consider a jump-to-default model, in which some quantity, e.g. the price of a financial asset, is represented by a stochastic process that has continuous sample paths except for the possibility of a sudden drop to zero. In Paper I, prices of European-type options in this model are studied together with the partial integro-differential equation that characterizes the price. In Paper II, the price of a perpetual American put option in the same model is found in terms of explicit formulas. Both papers also study parameter monotonicity and convexity properties of the option prices. The third and fourth articles deal with valuation problems in a jump-diffusion model. Paper III concerns the optimal level at which to exercise an American put option with finite time horizon; more specifically, the integral equation that characterizes the optimal boundary is studied. In Paper IV we consider a stochastic game between two players and determine the optimal value and exercise strategy using an iterative technique. Paper V employs a similar iterative method to solve the statistical problem of determining the unknown drift of a stochastic process, where not only running time but also each observation of the process is costly.
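A jump-to-default price path of the kind described above can be simulated as follows. The concrete dynamics (geometric Brownian motion killed at an exponential default time with constant intensity) are an illustrative assumption, not the exact model of Papers I-II.

```python
import numpy as np

def jump_to_default_path(s0=1.0, mu=0.05, sigma=0.2, intensity=0.1,
                         horizon=1.0, steps=252, rng=None):
    """Simulate one path of a price with continuous (GBM) dynamics that
    may suddenly jump to zero at an exponentially distributed default time.
    After default the path is absorbed at zero."""
    rng = np.random.default_rng(rng)
    dt = horizon / steps
    default_time = rng.exponential(1.0 / intensity)  # first jump of a Poisson clock
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
    path = s0 * np.exp(np.cumsum(increments))        # continuous part of the path
    t = np.linspace(dt, horizon, steps)
    path[t >= default_time] = 0.0                    # sudden drop to zero at default
    return t, path

t, s = jump_to_default_path(rng=0)
```

Averaging a payoff over many such paths gives a Monte Carlo check of option prices in this kind of model.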
95

Group Actions in Symplectic Geometry and the Moment Map

Payette, Jordan 11 1900 (has links)
This Master's thesis is concerned with some natural notions of group actions on symplectic manifolds, namely, in decreasing order of generality: symplectic actions, weakly Hamiltonian actions, and Hamiltonian actions. Since knowledge of group actions and of symplectic geometry is a prerequisite, two chapters are devoted to elementary presentations of these subjects. The case of Hamiltonian actions is studied in detail in the fourth chapter: the important moment map is introduced and several results on the orbits of the coadjoint representation are proved, such as Kirillov's and Kostant-Souriau's theorems. The last chapter concentrates on Hamiltonian actions of tori, the main result being a proof of the Atiyah-Guillemin-Sternberg convexity theorem. A classification theorem of Delzant and Laudenbach is also discussed. The presentation is intended as a rather exhaustive introduction to the theory of Hamiltonian actions, with complete proofs of almost all the results stated. Various examples are studied to help clarify the subtler aspects under consideration. Several related topics are addressed, including geometric prequantization and Marsden-Weinstein reduction.
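For reference, the moment map at the heart of this theory has the following standard definition (notation may differ from the memoir's).

```latex
% For a symplectic manifold (M, \omega) with a Hamiltonian action of a
% Lie group G with Lie algebra \mathfrak{g}, the moment map
\[
\mu : M \to \mathfrak{g}^*
\]
% is characterized by the condition that, for every X \in \mathfrak{g}
% with induced vector field X_M on M,
\[
d\langle \mu, X \rangle = \iota_{X_M}\, \omega ,
\]
% together with equivariance of \mu with respect to the coadjoint action.
```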
96

Modeling the CD8 T-cell Immune Response: Mathematical Analysis and Multiscale Models

Girel, Simon 13 November 2018 (has links)
Infection of an organism by a pathogen triggers the activation of CD8 T-cells and the initiation of the immune response. There follows a complex program of proliferation and differentiation of the CD8 T-cells, controlled by the evolution of their molecular content. In this manuscript, we present two mathematical models of the CD8 T-cell response. The first is an impulsive differential equation with which we study the effect of unequal molecular partitioning at cell division on the regulation of molecular heterogeneity. The second is an agent-based model coupling the description of a discrete population of CD8 T-cells with that of their molecular content. This model can reproduce the different characteristic phases of the CD8 T-cell response at both the cellular and the molecular scale. Both studies support the hypothesis that the cell dynamics observed in vivo reflect the molecular heterogeneity structuring the CD8 T-cell population.
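An impulsive model with unequal partitioning can be sketched as follows. The exponential intracellular dynamics, the fixed division times, and the uniform partitioning fraction are all illustrative assumptions, not the thesis' actual equations.

```python
import random

def simulate_protein_content(x0=1.0, growth=0.3, division_interval=1.0,
                             divisions=5, dt=0.01, seed=0):
    """Protein content along one cell lineage: continuous exponential
    production between divisions (dx/dt = growth * x), and an impulsive
    unequal split at each division, where the tracked daughter keeps a
    random fraction f drawn uniformly around the even split f = 0.5.
    Returns the (time, content) pairs recorded just after each division.
    """
    rng = random.Random(seed)
    x, t, history = x0, 0.0, [(0.0, x0)]
    for _ in range(divisions):
        # continuous phase between two divisions
        for _ in range(int(division_interval / dt)):
            x += growth * x * dt
            t += dt
        # impulse: unequal partitioning between the two daughters
        f = rng.uniform(0.4, 0.6)
        x *= f
        history.append((t, x))
    return history

hist = simulate_protein_content()
```

Iterating this over many lineages produces a distribution of molecular contents, which is the kind of heterogeneity the impulsive model is used to study.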
97

Image Segmentation by the Image Foresting Transform with Geodesic Band Constraints

Braz, Caio de Moraes 24 February 2016 (has links)
Several traditional image segmentation methods, such as the watershed transform from markers and fuzzy connectedness methods (Relative Fuzzy Connectedness, RFC; Iterative Relative Fuzzy Connectedness, IRFC), can be implemented efficiently with the graph-based Image Foresting Transform (IFT). However, the lack of boundary regularization terms in their formulation means that the boundary of the segmented object can be highly irregular. One way to work around this is to use object shape constraints that favor more regular shapes, as in the recent Geodesic Star Convexity (GSC) constraint. In this work, we present a novel shape constraint, which we denote the Geodesic Band Constraint (GBC), and we show how it can be efficiently incorporated into a subclass of the Generalized Graph Cut (GGC) framework that includes IFT-based methods. We include a proof of the optimality of the new algorithm in terms of a global minimum of an energy function subject to the new boundary constraints. The Geodesic Band Constraint helps regularize the object boundary, and consequently improves the segmentation of objects with more regular shapes, while keeping the low computational cost of the IFT. The GBC can also be used together with a pre-established cost map based on a shape model, so as to steer the segmentation toward a desired shape, with scale and other deformations controlled by a single parameter. The new constraint can further be combined with the GSC prior and with boundary polarity constraints at no additional cost. The method is demonstrated on natural, synthetic, and medical images, the latter from computed tomography and magnetic resonance scans.
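The Image Foresting Transform underlying these methods is essentially a Dijkstra-like propagation of optimal path costs from seed pixels. The sketch below uses the max-arc path cost on a 4-neighbor grid with a hypothetical toy image, only to illustrate the mechanism; it includes none of the shape constraints introduced in the thesis.

```python
import heapq

def ift_segmentation(image, seeds):
    """Minimal Image Foresting Transform on a 2D grid.

    image: 2D list of intensities; seeds: {(row, col): label}.
    Arc weight between neighbors is their absolute intensity difference;
    a path's cost is its maximum arc weight. Each pixel receives the
    label of the seed reaching it along the cheapest path.
    """
    rows, cols = len(image), len(image[0])
    cost = {p: 0 for p in seeds}
    label = dict(seeds)
    heap = [(0, p) for p in seeds]
    heapq.heapify(heap)
    while heap:
        c, (r, col) = heapq.heappop(heap)
        if c > cost.get((r, col), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (r + dr, col + dc)
            if 0 <= q[0] < rows and 0 <= q[1] < cols:
                arc = abs(image[q[0]][q[1]] - image[r][col])
                new_cost = max(c, arc)  # max-arc path cost
                if new_cost < cost.get(q, float("inf")):
                    cost[q], label[q] = new_cost, label[(r, col)]
                    heapq.heappush(heap, (new_cost, q))
    return label

# Toy image: a bright object (9s) on a dark background (1s)
img = [[1, 1, 1, 1], [1, 9, 9, 1], [1, 9, 9, 1], [1, 1, 1, 1]]
labels = ift_segmentation(img, {(0, 0): "bg", (1, 1): "obj"})
```

The GBC of the thesis restricts which predecessor arcs are admissible during exactly this kind of propagation, which is why it adds no asymptotic cost.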
98

Optimality Conditions in Calculus of Variations in the Non-Smooth Context

Signorini, Caroline de Arruda [UNESP] 07 March 2017 (has links)
Our main purpose in this work is the study of necessary and sufficient optimality conditions for Calculus of Variations problems in the non-smooth context. The study proceeds from the smooth basic formulation, through problems with Lagrangian constraints, to the case of non-smooth Lagrangians and absolutely continuous solutions. Along the way, we address an important advance in the theory of the Calculus of Variations: results on the existence and regularity of solutions. In addition to necessary conditions, we analyze sufficient conditions through a generalized convexity concept, which we call E-pseudoinvexity.
/ FAPESP: 2014/24271-6
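For orientation, the first-order necessary condition of the smooth basic formulation referred to above is the classical Euler-Lagrange equation (standard notation, which may differ from the dissertation's).

```latex
% Basic problem of the calculus of variations: minimize
\[
J(x) = \int_a^b L\big(t, x(t), \dot{x}(t)\big)\, dt ,
\qquad x(a) = x_0, \quad x(b) = x_1 .
\]
% In the smooth setting, a minimizer satisfies the Euler-Lagrange equation
\[
\frac{d}{dt}\, \nabla_{\dot{x}} L\big(t, x(t), \dot{x}(t)\big)
= \nabla_{x} L\big(t, x(t), \dot{x}(t)\big) ,
\]
% which the non-smooth theory generalizes by replacing gradients with
% suitable subdifferentials.
```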
99

An Empirical Study of the Effect of the U.S. Fed's Two-Stage Rate Hikes on Convexity Bias in Interest Rate Swaps

王建華 Unknown Date (has links)
Convexity bias, as distinct from a bond's convexity, arises from the effect of non-parallel interest rate movements on bond prices, and it has a special significance for interest rate swaps. When a strip of consecutive futures contracts is used as a model for pricing an interest rate swap, the rate implied by a futures contract before its expiry is not equal to the corresponding forward rate, so an uncorrected model misprices the swap. Because this error grows with time to maturity and with interest rate volatility, tracing out a curve, it is called the convexity bias. Since complete data are difficult to collect, this thesis concentrates on the period from 1994 to 1996, during which the U.S. Federal Reserve Board (FED) carried out its first stage of large interest rate changes, examining the effect of rate movements on convexity bias and forecasting the effect of subsequent rate changes on swap prices. The aim is a full empirical analysis of whether different magnitudes of interest rate change produce different degrees of convexity bias, and hence different effects on swap prices. A simple model is then used to derive accurate forward rates as a benchmark for valuing interest rate swaps, so that when rates change, both counterparties can obtain correct swap prices for trading or hedging, reducing the losses that interest rate risk may cause.
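A common way to correct the futures-forward discrepancy at issue here is the standard convexity adjustment, given below in its textbook form with continuous compounding; the thesis' own model may differ, and the numbers are purely illustrative.

```python
def forward_from_futures(futures_rate, sigma, t1, t2):
    """Convert a futures-implied rate into a forward rate using the
    standard convexity adjustment:

        forward = futures - 0.5 * sigma**2 * t1 * t2

    where t1 is the futures expiry, t2 the maturity of the underlying
    rate, and sigma the short-rate volatility (rates continuously
    compounded). The adjustment grows with maturity and volatility,
    which is exactly the 'convexity bias' pattern described above."""
    return futures_rate - 0.5 * sigma**2 * t1 * t2

# Example: futures rate 6% on a rate covering years [8, 8.25], sigma = 1.2%
fwd = forward_from_futures(0.06, 0.012, 8.0, 8.25)
adjustment_bp = (0.06 - fwd) * 1e4  # bias in basis points
```

Even at modest volatility the bias reaches tens of basis points at long maturities, which is why uncorrected futures strips misprice long-dated swaps.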
100

Superiority or Non-Inferiority Testing Procedures for Two Independent Poisson Samples

劉明得, Liu, Mingte Unknown Date (has links)
The Poisson distribution is a well-known model for rare events in a variety of fields such as biology, commerce, and quality control. Many applications involve comparisons of two treatment groups and focus on showing the superiority of a new treatment over the conventional one, or the non-inferiority of an experimental implementation relative to the standard one on cost grounds. We aim to develop statistical tests of superiority and non-inferiority based on two independent random samples from Poisson distributions. In developing these tests, both computational and theoretical difficulties arise from the presence of nuisance parameters. In this study, we first consider, for simplicity, problems with the null hypothesis of equality.
The problems are then extended to a regular null hypothesis of non-superiority, and the proposed methods are subsequently investigated in establishing non-inferiority. Two types of Wald test statistics are of main research interest. The corresponding asymptotic testing procedures are developed using the normal limiting distribution. In our study, the asymptotic distributions of the test statistics are derived, and the asymptotic power functions and sample size formulas are obtained. Given the power functions, we justify the validity and unbiasedness of the tests. An adequate continuity correction term for these tests is also found, to reduce inflation of the type I error rate. On the other hand, exact testing procedures based on two exact p-values, the confidence-interval p-value (Berger and Boos, 1994) and the estimated p-value (Krishnamoorthy and Thomson, 2004), are also applied in our study. It is known that an exact testing procedure tends to involve complex computations, and in this thesis several strategies are proposed to lessen the computational burden. For the confidence-interval p-value, a truncated confidence set is used to narrow the search area, and the test statistic is verified to fulfill Barnard's convexity property; under convexity, for the Poisson problem, the exact p-value occurs somewhere on the boundary of the null parameter space. For the estimated p-value, a simpler point estimate is applied in place of the restricted maximum likelihood estimators of the Poisson means, which are less straightforward in this problem; the estimated p-value is shown to yield a conservative conclusion. The calculation of the sample sizes required by the two exact tests is also discussed.
Intensive numerical studies show that the performance of the asymptotic tests depends on the ratio of the two sample sizes, and that the continuity correction can be useful in some cases to reduce inflation of the type I error rate. With small samples, however, the two exact tests are more adequate in the sense of having a well-controlled type I error rate. A data set of breast cancer patients is analyzed with the proposed methods for illustration.
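One of the Wald-type procedures discussed here can be sketched as follows. This is the textbook Wald test for comparing two Poisson rates, offered only as an illustration; the thesis' exact statistics, hypotheses, and continuity correction may differ.

```python
import math

def wald_test_poisson(x1, n1, x2, n2):
    """One-sided Wald test of H0: lambda1 <= lambda2 vs H1: lambda1 > lambda2
    for total counts x1, x2 from Poisson samples of sizes n1, n2.

    The rates are estimated by the unrestricted MLEs lambda_i = x_i / n_i,
    and the statistic is referred to the standard normal distribution.
    """
    lam1, lam2 = x1 / n1, x2 / n2
    se = math.sqrt(lam1 / n1 + lam2 / n2)   # estimated std. error of the difference
    z = (lam1 - lam2) / se
    # one-sided p-value from the standard normal upper tail
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Example: 30 events over 10 units vs 15 events over 10 units
z, p = wald_test_poisson(x1=30, n1=10, x2=15, n2=10)
```

The nuisance-parameter problem the thesis tackles is visible here: the null distribution of z depends on the unknown common rate, which is what the exact confidence-interval and estimated p-value approaches are designed to handle.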
