1

Detecting Botnet-based Joint Attacks by Hidden Markov Model

Yu Yang, Peng 06 September 2012 (has links)
We present a new detection model that combines network-perimeter monitoring with host logs to counter a new attack method in which different source hosts participate in a single attack sequence. We call this attack sequence "Scout and Intruder"; it involves two separate hosts. The scout scans and evaluates the target area to find possible victims and their vulnerabilities, and the intruder launches a precision strike whose login activity looks the same as that of authorized users. By splitting the attack between scout and intruder, an attacker can access the system without being detected by network or host intrusion detection systems. To detect the Scout-and-Intruder attack, we correlate NetFlow connection records, system logs, and network packet dumps; by identifying the states of the attack and their corresponding features, we build a detection model using a hidden Markov model. With this model, we can flag a potential Scout-and-Intruder attack in its initial state, which gives the network or system administrator more time to stop the attack.
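The state-inference step described above can be sketched with a standard HMM forward recursion. The states, event symbols, and all probabilities below are invented for illustration; they are not the thesis's fitted values.

```python
# Illustrative forward algorithm for a 3-state HMM over coarse network events.
# All parameters are hypothetical, chosen only to show the mechanics.

STATES = ["benign", "scout", "intruder"]
OBS = ["normal", "scan", "login"]  # coarse event symbols from correlated logs

# Transition matrix A[i][j] = P(next state j | current state i)
A = [
    [0.90, 0.08, 0.02],   # benign   -> ...
    [0.30, 0.50, 0.20],   # scout    -> ...
    [0.10, 0.10, 0.80],   # intruder -> ...
]
# Emission matrix B[i][k] = P(observation k | state i)
B = [
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.20, 0.10, 0.70],
]
pi = [0.95, 0.04, 0.01]   # initial state distribution

def forward_posterior(events):
    """Return P(state_T | events) via the normalized forward recursion."""
    alpha = [pi[i] * B[i][OBS.index(events[0])] for i in range(3)]
    s = sum(alpha); alpha = [a / s for a in alpha]
    for e in events[1:]:
        k = OBS.index(e)
        alpha = [B[i][k] * sum(alpha[j] * A[j][i] for j in range(3))
                 for i in range(3)]
        s = sum(alpha); alpha = [a / s for a in alpha]
    return dict(zip(STATES, alpha))

post = forward_posterior(["normal", "scan", "scan", "login"])
print(post)  # posterior mass shifts toward "intruder" after scans then a login
```

Running the recursion on a scan-then-login sequence raises the posterior of the "intruder" state early, which is the property the abstract relies on for early warning.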
2

Comparison of algorithms used in the construction of genetic linkage maps

Mollinari, Marcelo 23 January 2008 (has links)
Genetic linkage maps are linear arrangements showing the order of and distance between loci on the chromosomes of a given species. Recently, the wide availability of molecular markers has made such maps increasingly saturated, and efficient methods are needed for their construction. One step that deserves particular attention in the construction of linkage maps is the ordering of genetic markers within each linkage group. This ordering is a special case of the classic traveling salesman problem (TSP): choosing the best order among all possible ones. However, exhaustive search becomes unfeasible when the number of markers is large, so a viable alternative is to use algorithms that provide approximate solutions. The aim of this work was to evaluate the efficiency of the algorithms Try (TRY), Seriation (SER), Rapid Chain Delineation (RCD), Recombination Counting and Ordering (RECORD) and Unidirectional Growth (UG), as well as the criteria PARF (minimum product of adjacent recombination fractions), SARF (minimum sum of adjacent recombination fractions), SALOD (maximum sum of adjacent LOD scores) and LMHC (likelihood via hidden Markov chains), used together with the error-checking algorithm RIPPLE, for the construction of genetic linkage maps. To this end, a linkage map of a hypothetical diploid, monoecious plant species was simulated, containing 21 markers at a fixed distance of 3 centimorgans from each other. Using Monte Carlo methods, 550 F2 populations with 100 and 400 individuals were randomly generated, with different combinations of dominant and codominant markers; 10% and 20% missing data were also simulated. The results showed that the algorithms TRY and SER performed well in all simulated situations, even with a large amount of missing data and dominant markers linked in repulsion, and can therefore be recommended for practical use. The algorithms RECORD and UG performed well in the absence of dominant markers linked in repulsion and can be recommended when few dominant markers are present. Among all the algorithms, RCD proved the least efficient. The criterion LMHC, applied with the RIPPLE algorithm, gave the best results for checking ordering errors.
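The SARF criterion mentioned above can be illustrated on a toy problem small enough for exhaustive search (which the abstract notes is infeasible at realistic scale, hence algorithms like TRY and RECORD). Marker names, positions, and the use of the Haldane map function are assumptions of this sketch.

```python
# Toy illustration of the SARF criterion: pick the marker order that
# minimizes the sum of adjacent recombination fractions. With 5 markers,
# all 120 orders can be enumerated; real maps need approximate algorithms.
import math
from itertools import permutations

markers = ["M1", "M2", "M3", "M4", "M5"]
pos = {"M1": 0.0, "M2": 3.0, "M3": 6.0, "M4": 9.0, "M5": 12.0}  # cM, true order

def rec_frac(a, b):
    """Recombination fraction from map distance via Haldane: r = (1 - e^(-2d)) / 2."""
    d = abs(pos[a] - pos[b]) / 100.0   # centimorgans -> Morgans
    return 0.5 * (1.0 - math.exp(-2.0 * d))

def sarf(order):
    """Sum of adjacent recombination fractions for a candidate order."""
    return sum(rec_frac(order[i], order[i + 1]) for i in range(len(order) - 1))

best = min(permutations(markers), key=sarf)
print(best)
```

The minimizing order recovers the true marker arrangement (or its mirror image, which encodes the same map), which is why SARF-style criteria are useful objectives for RIPPLE-style local error checking.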
4

A Switching Black-Scholes Model and Option Pricing

Webb, Melanie Ann January 2003 (has links)
Derivative pricing, and in particular the pricing of options, is an important area of current research in financial mathematics. Experts debate the best method of pricing and the most appropriate model of a price process to use. In this thesis, a "Switching Black-Scholes" model of a price process is proposed. This model is based on the standard geometric Brownian motion (or Black-Scholes) model of a price process; however, the drift and volatility parameters are permitted to vary among a finite number of possible values at known times, according to the state of a hidden Markov chain. This type of model has been found to replicate the Black-Scholes implied volatility smiles observed in the market and to produce option prices closer to market values than those obtained from the traditional Black-Scholes formula. Because the Markov chain incorporates a second source of uncertainty into the Black-Scholes model, the Switching Black-Scholes market is incomplete, and no unique option pricing methodology exists. In this thesis, we apply the methods of mean-variance hedging, Esscher transforms and minimum entropy to price options on assets which evolve according to the Switching Black-Scholes model. C programs to compute these prices are given, and some particular numerical examples are examined. Finally, filtering techniques and reference probability methods are applied to estimate the model parameters and the state of the hidden Markov chain. / Thesis (Ph.D.)--Applied Mathematics, 2003.
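A minimal simulation of the regime-switching dynamics can be sketched as below. All parameters are invented, and the naive risk-neutral Monte Carlo used here is only one possible pricing choice; the thesis's mean-variance hedging, Esscher-transform and minimum-entropy prices would generally differ, precisely because the market is incomplete.

```python
# Sketch: geometric Brownian motion whose volatility switches with a
# two-state Markov chain, plus a naive Monte Carlo European call price.
# Parameters and the choice of pricing measure are assumptions.
import math, random

random.seed(0)
r = 0.03                            # risk-free rate, used as the pricing drift
sigma = [0.15, 0.35]                # volatility in regimes 0 (calm) and 1 (turbulent)
P = [[0.98, 0.02], [0.05, 0.95]]    # per-step regime transition matrix
S0, K, T, n_steps = 100.0, 100.0, 1.0, 250
dt = T / n_steps

def one_path():
    """Simulate one terminal price under the switching dynamics."""
    s, state = S0, 0
    for _ in range(n_steps):
        z = random.gauss(0.0, 1.0)
        v = sigma[state]
        s *= math.exp((r - 0.5 * v * v) * dt + v * math.sqrt(dt) * z)
        state = 0 if random.random() < P[state][0] else 1  # regime switch
    return s

n_paths = 20000
payoff = sum(max(one_path() - K, 0.0) for _ in range(n_paths)) / n_paths
price = math.exp(-r * T) * payoff
print(f"Monte Carlo call price: {price:.2f}")
```

Because the path mixes calm and turbulent regimes, the implied volatility backed out of such prices varies with strike, which is the smile-replication property the abstract describes.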
6

Stochastic Optimization Methods for Infrastructure Management with Incomplete Monitoring Data

Nam, Le Thanh 24 September 2009 (has links)
Kyoto University / Doctor of Engineering, Kyoto University, dissertation No. 14919 / Graduate School of Engineering, Department of Urban Management / Examiners: Prof. Kiyoshi Kobayashi, Prof. Hiroyasu Ohtsu, Prof. Hirotaka Kawano
7

Smoothing and estimation methods in hidden variable models through sequential Monte-Carlo methods

Dubarry, Cyrille 09 October 2012 (has links)
Hidden Markov chain models, and more generally Feynman-Kac models, are now widely used. They allow the modelling of a great variety of time series (in finance, biology, signal processing, ...). Their increasing complexity has driven the development of approximations via various Monte Carlo methods, among them Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). This thesis deals with SMC methods applied to particle filtering and smoothing. These methods approximate the law of interest with a population of particles defined sequentially. Various algorithms have already been developed and studied in the literature. We refine some of these results in the particular cases of Forward Filtering Backward Smoothing and Forward Filtering Backward Simulation, by proving exponential deviation inequalities and non-asymptotic bounds on the mean error. We also introduce a new smoothing algorithm that improves a particle population through MCMC iterations and allows the estimator's variance to be estimated without any further simulation. Part of the work presented in this thesis also concerns the parallel computation of particle estimators; we propose several interaction schemes between particle populations. Finally, we illustrate the use of hidden Markov chains in modelling financial data by developing an Expectation-Maximization algorithm to calibrate the parameters of the multiscale exponential Ornstein-Uhlenbeck model.
8

Actuarial applications of multivariate phase-type distributions : model calibration and credibility

Hassan Zadeh, Amin January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
10

Hidden Markov chains and unsupervised source separation

Rafi, Selwa 11 June 2012 (has links)
The restoration problem is encountered in a wide range of domains, in particular in signal and image processing. It consists in retrieving original data from observed data. For multidimensional data, the problem can be solved by different approaches depending on the data structure, the transformation operator, and the presence or absence of noise. In this work we first tackle the problem for discrete data under noise, where restoration is analogous to segmentation. We exploit pairwise and triplet Markov chain models, which generalize hidden Markov chain models. The interest of these models lies in the possibility of generalizing the computation of the posterior probability, which allows Bayesian segmentation. We consider these methods for two-dimensional observations and apply the algorithms to separate the two sides of scanned manuscript documents affected by show-through, where the text of each face of a sheet bleeds into the other. In the second part of this work, we treat restoration as a blind source separation problem. The classical method, known as Independent Component Analysis (ICA), requires the assumption that the sources are statistically independent. In real situations this assumption is not always satisfied, so we study an extension of the ICA model to sources that may be statistically dependent. To do so, we introduce a latent process that governs the dependence and/or independence of the sources. The proposed model combines a linear instantaneous mixing model, as in ICA, with a probabilistic model on the sources with hidden variables. In this framework, we show how the technique of Iterative Conditional Estimation allows the usual independence assumption to be weakened to a conditional independence assumption.
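The Bayesian segmentation step described above can be sketched for a standard hidden Markov chain, which the pairwise and triplet models generalize: compute the marginal posteriors P(x_t | y_1..T) by forward-backward, then take the maximum-posterior-mode (MPM) class at each site. All parameters here are invented for illustration.

```python
# MPM segmentation with a 2-class hidden Markov chain and Gaussian noise:
# forward-backward marginals, then an argmax at each site. Parameters and
# the toy signal are assumptions of this sketch.
import math, random

random.seed(2)
A = [[0.95, 0.05], [0.05, 0.95]]   # sticky transition matrix
means, std = [0.0, 2.0], 0.7       # per-class emission mean, shared noise std
pi = [0.5, 0.5]

def lik(y, i):
    """Unnormalized Gaussian likelihood of observation y under class i."""
    return math.exp(-0.5 * ((y - means[i]) / std) ** 2)

# simulate a hidden class sequence and noisy observations
T = 60
state = 0
xs, ys = [], []
for _ in range(T):
    state = state if random.random() < A[state][state] else 1 - state
    xs.append(state)
    ys.append(means[state] + random.gauss(0.0, std))

# forward pass (normalized at each step for stability)
a = [pi[i] * lik(ys[0], i) for i in range(2)]
s = sum(a); alpha = [[v / s for v in a]]
for y in ys[1:]:
    a = [lik(y, i) * sum(alpha[-1][j] * A[j][i] for j in range(2)) for i in range(2)]
    s = sum(a); alpha.append([v / s for v in a])

# backward pass (normalized)
beta = [[1.0, 1.0] for _ in range(T)]
for t in range(T - 2, -1, -1):
    b = [sum(A[i][j] * lik(ys[t + 1], j) * beta[t + 1][j] for j in range(2))
         for i in range(2)]
    s = sum(b); beta[t] = [v / s for v in b]

# MPM restoration: mode of the marginal posterior at each site
seg = [0 if alpha[t][0] * beta[t][0] >= alpha[t][1] * beta[t][1] else 1
       for t in range(T)]
errors = sum(a != b for a, b in zip(seg, xs))
print(f"misclassified {errors} of {T} sites")
```

The pairwise and triplet extensions keep this same forward-backward structure but define the recursions on the joint (or triplet-augmented) process, which is what makes Bayesian restoration still tractable when the sources are dependent.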
