21

Modelos de aprendizado supervisionado usando métodos kernel, conjuntos fuzzy e medidas de probabilidade / Supervised machine learning models using kernel methods, probability measures and fuzzy sets

Guevara Díaz, Jorge Luis 04 May 2015 (has links)
This thesis proposes a methodology based on kernel methods, probability measures and fuzzy sets to analyze datasets whose individual observations are themselves sets of points rather than single points. Fuzzy sets and probability measures are used to model the observations, and kernel methods to analyze the data. Fuzzy sets are used when an observation contains imprecise, vague or linguistic values, whereas probability measures are used when an observation is given as a set of points in a D-dimensional Euclidean space. Using this methodology, it is possible to address a wide range of machine learning problems for such datasets. In particular, this work presents data description models for observations modeled by probability measures, obtained by embedding the measures in a reproducing kernel Hilbert space and constructing minimum enclosing balls there; these description models are applied as one-class classifiers to the group anomaly detection task. The work also proposes a new class of kernels, the kernels on fuzzy sets, which are reproducing kernels that map fuzzy sets into a geometric feature space and act as similarity measures between fuzzy sets. It covers these kernels from basic definitions to applications in machine learning problems such as supervised classification over interval-valued data and a kernel two-sample test for data with imprecise attributes. Potential applications include machine learning and pattern recognition tasks over fuzzy data, as well as computational tasks requiring an estimate of the similarity between fuzzy sets.
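A minimal sketch of the kind of construction the abstract describes: each observed point set is treated as a sample from an unknown probability measure and embedded in the RKHS of a Gaussian kernel via its empirical kernel mean, so that sets can be compared through the induced inner product (and the maximum mean discrepancy). The Gaussian kernel, the bandwidth gamma, and the synthetic groups are illustrative assumptions, not the specific kernels or data used in the thesis.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between point sets X (n x d) and Y (m x d)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def set_kernel(X, Y, gamma=1.0):
    """Kernel between two point sets: inner product of their empirical kernel
    mean embeddings, K(P, Q) ~ <mu_P, mu_Q> in the RKHS of the Gaussian kernel."""
    return gaussian_kernel(X, Y, gamma).mean()

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between the two empirical measures."""
    return set_kernel(X, X, gamma) + set_kernel(Y, Y, gamma) - 2.0 * set_kernel(X, Y, gamma)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = [rng.normal(0.0, 1.0, size=(50, 2)) for _ in range(5)]   # "normal" groups
    anomaly = rng.normal(3.0, 1.0, size=(50, 2))                      # a shifted group
    # Large embedding distances flag the shifted group as dissimilar to the others.
    print([round(mmd2(g, anomaly), 3) for g in groups])
```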
22

還原風險中立機率測度的雙目標規劃模型 / Recovering Risk-Neutral Probability via Biobjective Programming Model

廖彥茹 Unknown Date (has links)
This thesis proposes a biobjective nonlinear programming model that uses the martingale property of probabilities to recover the risk-neutral probability measure of the underlying asset from observed option market prices. The options are assumed to be European, written on the same underlying asset with different strike prices, and the terminal states of the underlying are assumed to be finitely many discrete points. When the market admits no arbitrage, the model chooses probabilities that minimize the total deviation between observed and theoretical prices while maximizing the smoothness of the resulting probabilities. A weighting method converts the biobjective model into a single-objective nonlinear program, from which the risk-neutral probability measure is recovered and then used to evaluate fair option prices. Finally, an empirical study on Taiwan index options (TXO) is used to verify the pricing ability of the model.
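A hedged sketch of the weighted-sum scalarization the abstract describes: choose probabilities over a finite set of terminal states that trade off the squared deviation from observed option prices against a roughness penalty (squared second differences), subject to the probabilities summing to one. The states, observed prices, discount factor, weight w, and the use of scipy.optimize.minimize are illustrative assumptions, not the thesis's exact formulation or data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem data: discrete terminal states and observed call prices.
states = np.linspace(50, 150, 41)                    # terminal prices of the underlying
strikes = np.array([80, 90, 100, 110, 120])
observed = np.array([21.5, 13.2, 7.1, 3.3, 1.2])     # illustrative market prices
disc = 0.99                                          # one-period discount factor
w = 0.8                                              # weight on pricing error vs. smoothness

payoff = np.maximum(states[None, :] - strikes[:, None], 0.0)   # call payoffs per state

def objective(q):
    pricing_err = np.sum((disc * payoff @ q - observed) ** 2)   # deviation from observed prices
    roughness = np.sum(np.diff(q, 2) ** 2)                      # second differences penalize non-smoothness
    return w * pricing_err + (1.0 - w) * roughness

cons = [{"type": "eq", "fun": lambda q: q.sum() - 1.0}]         # probabilities sum to one
res = minimize(objective, x0=np.full(states.size, 1.0 / states.size),
               method="SLSQP", bounds=[(0.0, 1.0)] * states.size, constraints=cons)
q_star = res.x   # recovered risk-neutral probabilities over the terminal states
```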
23

由選擇權市場價格建構具一致性之評價模型 / Building a Consistent Pricing Model from Observed Option Prices via Linear Programming

劉桂芳, Liu, Kuei-fang Unknown Date (has links)
This thesis investigates how to recover the risk-neutral probability measure (equivalent martingale measure) from observed market prices of options. It starts by building an arbitrage model of an option portfolio, in which the options are assumed to be single-period, to have finitely many discrete states at maturity, and to be written on the same underlying asset with different strike prices. If there is no arbitrage opportunity in the market, the Lagrange multiplier method reduces the arbitrage model to a feasibility problem in the Lagrange multipliers. This feasibility problem is used as the constraint set of a linear programming model that recovers the risk-neutral probability measure, which is then used to evaluate fair option prices. Finally, we take Taiwan index options (TXO) as an example to verify the pricing ability of this model.
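A minimal sketch of a linear program that recovers a risk-neutral probability from observed prices in a single-period, finite-state market, in the spirit of the abstract: absolute pricing errors are minimized subject to the probabilities summing to one. The states, prices, and discount factor are hypothetical, and the formulation below is a generic LP rather than the thesis's Lagrange-multiplier feasibility constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical one-period market: discrete terminal states and observed call prices.
states = np.linspace(50, 150, 41)
strikes = np.array([80, 90, 100, 110, 120])
observed = np.array([21.5, 13.2, 7.1, 3.3, 1.2])     # illustrative prices
disc = 0.99
n, m = states.size, strikes.size
payoff = np.maximum(states[None, :] - strikes[:, None], 0.0)

# Decision variables: [q (n), e+ (m), e- (m)]; minimize total absolute pricing error.
c = np.concatenate([np.zeros(n), np.ones(2 * m)])
A_eq = np.zeros((m + 1, n + 2 * m))
A_eq[:m, :n] = disc * payoff
A_eq[:m, n:n + m] = -np.eye(m)       # e+ absorbs positive deviations
A_eq[:m, n + m:] = np.eye(m)         # e- absorbs negative deviations
A_eq[m, :n] = 1.0                    # probabilities sum to one
b_eq = np.concatenate([observed, [1.0]])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 2 * m), method="highs")
q_star = res.x[:n]                   # recovered risk-neutral probabilities
```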
24

Estimation de régularité locale / Local regularity estimation

Servien, Rémi 12 March 2010 (has links)
The goal of this thesis is to study the local behavior of a probability measure by means of a local regularity index. In the first part, we establish the asymptotic normality of the k_n-nearest-neighbor density estimator. In the second, we define a mode estimator under weakened hypotheses. We show that the regularity index plays a role in both problems. Finally, in a third part we construct several estimators of the regularity index from estimators of the distribution function, of which we also provide a bibliographical review.
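For concreteness, a small sketch of the k_n-nearest-neighbor density estimator studied in the first part, in one dimension: the estimate at x is k divided by the length of the smallest interval centred at x that contains k sample points. The choice k_n = sqrt(n) and the Gaussian sample are illustrative assumptions.

```python
import numpy as np

def knn_density(x, sample, k):
    """k-nearest-neighbor density estimate at points x from a 1-D sample.

    f_hat(x) = k / (2 * n * R_k(x)), where R_k(x) is the distance from x to
    its k-th nearest sample point, so 2 * R_k(x) is the length of the smallest
    interval centred at x containing k observations."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    dists = np.abs(np.asarray(x, dtype=float)[:, None] - sample[None, :])
    r_k = np.sort(dists, axis=1)[:, k - 1]
    return k / (2.0 * n * r_k)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=2000)
    grid = np.linspace(-3, 3, 7)
    k = int(np.sqrt(data.size))          # a common rule of thumb for k_n
    print(np.round(knn_density(grid, data, k), 3))   # roughly the standard normal density
```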
26

隨機利率下之資產交換-跨通貨股酬交換與利率交換的評價與避險 / Asset Swap Under Stochastic Interest Rate: The Pricing and Hedging of Cross-Currency Equity Swap and Interest Rate Swap

姜碧嘉, Chiang, Bi-Chia Unknown Date (has links)
Although cross-currency equity swaps play an important role in international investment markets, the literature on equity swap pricing models is limited, and most of it focuses on domestic markets or on equity swaps whose payments are denominated in the domestic currency. Pricing a cross-currency equity swap is considerably more complex than pricing a domestic one; the key issue is how to account jointly for the interdependence among the three main factors that drive its value: the equity index, the exchange rate, and interest rates. Under the assumption of complete markets, and relaxing the conventional assumption that the correlations among these variables are constant, this thesis proposes a new pricing approach for equity swaps: a "two-stage, two-step" replication scheme with a clear economic interpretation, from which a generalized pricing formula for equity swaps is derived. This replication makes the value dynamics of the swap over its life transparent and further yields hedging strategies, giving equity swap dealers a way to hedge mismatch risk. A second contribution is to apply the same "two-stage, two-step" replication to the pricing of interest rate swaps, deriving a generalized pricing model for cross-currency interest rate swaps, so that the two products can be compared and a common market misconception about the pricing of cross-currency equity swaps can be clarified. The main difference from the traditional pricing formula is an additional correction term through which the replicating portfolio can be adjusted as the parameters evolve, so that it fully replicates the value of the cross-currency equity swap. We find that, for a cross-currency equity swap in which a domestic investor pays a fixed rate in exchange for the return on market B's equity index, with payments denominated in country C's currency, the value depends on the current term structure of interest rates; at inception and immediately after each exchange date the value is not directly related to the equity index, while between payment dates it depends on the ratio of the current index level to the index level at the previous reset. Moreover, the future exchange rate between country C's currency and the domestic currency does not directly affect the swap's value. If the volatilities of the forward rates in each country are assumed to be zero, the swap is worth more when market B's equity index is positively correlated with the exchange rate of country C against the domestic currency, or negatively correlated with the exchange rate of country B against the domestic currency. Finally, investors often mistakenly assume that the value of an equity swap equals that of an interest rate swap; comparing the two, we find that in most cases their values are not equal.
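A small illustrative sketch of the cash-flow structure described above: on each reset date the equity leg pays the index return S_t / S_{t-1} - 1 on the notional, the fixed leg pays a fixed rate, and the net amount is settled in country C's currency at the prevailing exchange rate. All numbers are hypothetical, and this only illustrates the payoffs; it is not the thesis's two-stage, two-step replication pricing model.

```python
import numpy as np

def equity_swap_cash_flows(index_levels, fx_to_payment_ccy, fixed_rate, notional=1.0):
    """Per-period net cash flows, to the equity-return receiver, of a cross-currency
    equity swap paying fixed against the index return, settled in the payment currency.

    index_levels: index fixings S_0, S_1, ..., S_T at the reset dates
    fx_to_payment_ccy: exchange rates used to convert each payment
    fixed_rate: per-period fixed rate paid by the equity-return receiver"""
    S = np.asarray(index_levels, dtype=float)
    fx = np.asarray(fx_to_payment_ccy, dtype=float)
    equity_leg = S[1:] / S[:-1] - 1.0              # index return over each period
    net = (equity_leg - fixed_rate) * notional     # net of the fixed payment
    return net * fx[1:]                            # converted at the payment-date FX rate

if __name__ == "__main__":
    S = [100.0, 104.0, 101.0, 107.0]               # illustrative index fixings
    fx = [30.0, 30.5, 29.8, 31.2]                  # illustrative FX rates to the payment currency
    print(np.round(equity_swap_cash_flows(S, fx, fixed_rate=0.01, notional=1_000_000), 0))
```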
27

On the contamination of confidence

Coimbra-Lisboa, Paulo César 30 November 2009 (has links)
Contamination of confidence is a special case of Knightian uncertainty, or ambiguity, in which the decision maker faces not a single probability measure but a set of probability measures. The first part of this thesis provides a characterization of the contamination of confidence and then presents a simple set of behavioral axioms under which the decision maker's preferences are represented by Choquet expected utility with contamination of confidence. The second part presents two economic applications: the first generalizes Dow and Werlang's existence theorem for Nash equilibrium under uncertainty (which makes it possible to give an explicit solution to the paradox that players in a finitely repeated prisoners' dilemma do not behave as backward induction predicts), and the second studies the impact of the contamination of confidence on portfolio choice.
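A minimal numerical sketch of the ε-contamination model behind the abstract: the set of priors is {(1 - ε)P + εQ : Q any probability measure}, and on a finite state space the Choquet expected utility with respect to the associated capacity reduces to (1 - ε)·E_P[u] + ε·min u, the worst case over that set. The utilities, the prior, and the two illustrative "assets" are assumptions made up for the example.

```python
import numpy as np

def contaminated_choquet_eu(utilities, probs, eps):
    """Choquet expected utility of an act under the epsilon-contamination of a prior P.

    The set of priors is {(1 - eps) * P + eps * Q : Q any probability measure};
    on a finite state space the Choquet integral with respect to the associated
    capacity equals (1 - eps) * E_P[u] + eps * min(u)."""
    u = np.asarray(utilities, dtype=float)
    p = np.asarray(probs, dtype=float)
    return (1.0 - eps) * float(u @ p) + eps * float(u.min())

if __name__ == "__main__":
    # Illustrative portfolio choice: utilities of two assets across three states.
    p = [0.5, 0.3, 0.2]
    safe, risky = [1.0, 1.0, 1.0], [2.0, 1.0, -0.5]
    for eps in (0.0, 0.2, 0.5):
        # As eps grows, ambiguity aversion tilts the ranking toward the safe asset.
        print(eps, contaminated_choquet_eu(safe, p, eps), contaminated_choquet_eu(risky, p, eps))
```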
28

Highway Development Decision-Making Under Uncertainty: Analysis, Critique and Advancement

El-Khatib, Mayar January 2010 (has links)
While decision-making under uncertainty is a major universal problem, its implications in the field of transportation systems are especially large: where the benefits of right decisions are tremendous, the consequences of wrong ones are potentially disastrous. In the realm of highway systems, decisions related to the highway configuration (number of lanes, right of way, etc.) need to incorporate both traffic demand and land price uncertainties. In the literature, these uncertainties have generally been modeled using the Geometric Brownian Motion (GBM) process, which has been used extensively to model many other real-life phenomena. But few scholars, including those who used the GBM in highway configuration decisions, have offered any rigorous justification for the use of this model. This thesis offers a detailed analysis of various aspects of transportation systems in relation to decision-making. It reveals some general insights as well as a new concept that extends the notion of opportunity cost to situations where wrong decisions could be made. Arguing that the GBM model is deficient, it also introduces a new formulation that utilizes a large and flexible parametric family of jump models (Lévy processes). To support this claim, data on traffic demand and land prices were collected and analyzed, revealing heavy-tailed, asymmetric distributions that do not match the GBM model well. As a remedy, this research used the Merton, Kou, and normal inverse Gaussian Lévy processes as possible alternatives. Though the results show that the final decisions are largely insensitive to the choice among these models, mathematically they improve the precision of the uncertainty models and of the decision-making process. This furthers the quest for optimality in highway projects and beyond.
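A hedged sketch contrasting GBM with one of the jump alternatives mentioned (the Merton jump-diffusion): simulating log-returns from both shows the heavier tails and asymmetry that the abstract says GBM cannot capture. All parameter values are illustrative, not calibrated to the traffic-demand or land-price data used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def gbm_log_returns(mu, sigma, dt, n):
    """Log-returns of geometric Brownian motion over n steps of length dt."""
    return (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

def merton_log_returns(mu, sigma, lam, jump_mu, jump_sigma, dt, n):
    """Merton jump-diffusion log-returns: GBM plus compound-Poisson normal jumps."""
    jumps = rng.poisson(lam * dt, n)                       # number of jumps in each step
    jump_part = jump_mu * jumps + jump_sigma * np.sqrt(jumps) * rng.standard_normal(n)
    return gbm_log_returns(mu, sigma, dt, n) + jump_part

if __name__ == "__main__":
    # Illustrative parameters only; the thesis calibrates to traffic and land-price data.
    g = gbm_log_returns(mu=0.03, sigma=0.2, dt=1/12, n=100_000)
    m = merton_log_returns(mu=0.03, sigma=0.2, lam=0.5, jump_mu=-0.1, jump_sigma=0.15,
                           dt=1/12, n=100_000)
    for name, x in (("GBM", g), ("Merton", m)):
        # Sample skewness and kurtosis: the jump model is left-skewed and heavy-tailed.
        print(name, "skew~", round(float(((x - x.mean())**3).mean() / x.std()**3), 2),
              "kurt~", round(float(((x - x.mean())**4).mean() / x.std()**4), 2))
```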
