1

Local risk-minimization for defaultable markets

Cretarola, Alessandra <1978> 21 June 2007 (has links)
No description available.
2

A moment symbolic representation of Lévy processes with applications

Oliva, Immacolata <1982> 06 June 2012 (has links)
By using a symbolic method, known in the literature as the classical umbral calculus, a symbolic representation of Lévy processes is given, and a new family of time-space harmonic polynomials with respect to such processes, which includes and generalizes the exponential complete Bell polynomials, is introduced. The usefulness of time-space harmonic polynomials with respect to Lévy processes lies in the fact that the stochastic process obtained by replacing the indeterminate x of the polynomials with a Lévy process is a martingale, whereas the Lévy process itself does not necessarily have this property. Finding such polynomials can therefore be particularly meaningful for applications. This new family includes Hermite polynomials, which are time-space harmonic with respect to Brownian motion; Poisson-Charlier polynomials with respect to Poisson processes; Laguerre and actuarial polynomials with respect to Gamma processes; Meixner polynomials of the first kind with respect to Pascal processes; and Euler, Bernoulli, Krawtchuk, and pseudo-Narumi polynomials with respect to suitable random walks. The role played by cumulants is stressed and brought to light, both in the symbolic representation of Lévy processes and their infinite divisibility property, and in the generalization, via the umbral Kailath-Segall formula, of the well-known formulae giving elementary symmetric polynomials in terms of power sum symmetric polynomials. The expression of the family of time-space harmonic polynomials introduced here has some connections with the so-called moment representation of various families of multivariate polynomials. Such a moment representation is studied here for the first time in connection with the time-space harmonic property with respect to suitable symbolic multivariate Lévy processes. In particular, multivariate Hermite polynomials and their properties are studied in connection with a symbolic version of the multivariate Brownian motion, while multivariate Bernoulli and Euler polynomials are represented as powers of multivariate polynomials which are time-space harmonic with respect to suitable multivariate Lévy processes.
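For illustration, the best-known instance of this property is standard material rather than specific to this thesis: the Hermite polynomials arise as coefficients of the exponential martingale of Brownian motion, so each of them inherits the martingale property.

```latex
% Generating function and time-space harmonicity of Hermite polynomials
% with respect to a standard Brownian motion (B_t):
\[
  e^{\theta x - \frac{t\theta^{2}}{2}}
    \;=\; \sum_{n \ge 0} H_n(x,t)\,\frac{\theta^{n}}{n!},
  \qquad
  \mathbb{E}\bigl[\,H_n(B_t,t)\mid\mathcal{F}_s\,\bigr] \;=\; H_n(B_s,s),
  \quad 0 \le s \le t .
\]
```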
3

Optimal choices: mean field games with controlled jumps and optimality in a stochastic volatility model

Benazzoli, Chiara January 2018 (has links)
Decision making in continuous time under random influences is the leitmotif of this work. In the first part, a family of mean field games with a state variable evolving as a jump-diffusion process is studied. Under fairly general conditions, the existence of a solution in a relaxed version of these games is established, and conditions under which the optimal strategies are in fact Markovian are given. The proofs rely upon the notions of relaxed controls and martingale problems. Mean field games represent the limit, as the number of players tends to infinity, of nonzero-sum stochastic differential games. Under the assumption that the former admit a regular Markovian solution, an approximate Nash equilibrium for the corresponding n-player games is constructed, and the rate of convergence is provided. Finally, the general theory is applied to a simple illiquid inter-bank market model, where the banks can adjust their reserves only at the jump times of some given Poisson processes with a common constant intensity, and some numerical results are provided. In the second part, a stochastic optimization problem is presented. Here the evolution of the state is modeled as in the Heston model, but with a further multiplicative control input in the volatility term. The main objective is to consider the possible role of an external actor, whose exogenous contribution is summarised in the control itself. The solvability of the Hamilton-Jacobi-Bellman equation associated with this optimal control problem is discussed.
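A sketch of what such controlled dynamics could look like (the precise specification is given in the thesis; the symbols u_t, kappa, v-bar, sigma and rho below are illustrative):

```latex
% Heston-type dynamics with a multiplicative control u_t in the volatility term
\[
  dS_t = \mu S_t\,dt + \sqrt{V_t}\,S_t\,dW^{1}_t,
  \qquad
  dV_t = \kappa(\bar{v} - V_t)\,dt + u_t\,\sigma\sqrt{V_t}\,dW^{2}_t,
\]
% with correlated Brownian motions, d\langle W^1, W^2\rangle_t = \rho\,dt.
```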
4

Delayed Forward-Backward stochastic PDE’s driven by non Gaussian Lévy noise with application in finance

Cordoni, Francesco Giuseppe January 2016 (has links)
From the very first results, the mathematical theory of financial markets has undergone several changes, mostly due to financial crises which forced the mathematical-economical community to change the basic assumptions on which the whole theory is founded, so that a new mathematical foundation was needed. In particular, the 2007/2008 credit crunch showed the world that a new theoretical framework for finance was necessary, since considerable empirical evidence emerged that aspects neglected before those years were in fact fundamental for dealing with financial markets. The goal of the present thesis goes in this direction: we aim at developing rigorous mathematical instruments that allow one to treat fundamental problems in modern financial mathematics. The thesis is thus divided into three main parts, which focus on three different topics of modern financial mathematics. The first part is concerned with delay equations. In particular, we prove a Feynman-Kac type result for BSDEs with time-delayed generator, as well as an ad hoc Itô formula for delay equations with jumps. The second part deals with infinite-dimensional analysis and network models, focusing in particular on existence and uniqueness results for infinite-dimensional SPDEs on networks with general non-local boundary conditions. The last part treats the topic of rigorous asymptotic expansions, providing a small-noise asymptotic expansion for SDEs with Lévy noise, with several concrete applications to financial models.
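For orientation, the classical (non-delayed, Brownian, Markovian) Feynman-Kac correspondence that results of this type generalize can be stated as follows:

```latex
% Classical Feynman-Kac: the conditional expectation of a terminal payoff
\[
  dX_s = b(X_s)\,ds + \sigma(X_s)\,dW_s,
  \qquad
  u(t,x) = \mathbb{E}\bigl[\,g(X_T)\,\big|\,X_t = x\,\bigr]
\]
% solves the backward PDE
\[
  \partial_t u + b\,\partial_x u + \tfrac{1}{2}\sigma^{2}\,\partial_{xx} u = 0,
  \qquad u(T,\cdot) = g .
\]
```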
5

Brain Decoding for Brain Mapping: Definition, Heuristic Quantification, and Improvement of Interpretability in Group MEG Decoding

Kia, Seyed Mostafa January 2017 (has links)
In the last century, a huge multi-disciplinary scientific endeavor has been devoted to answering long-standing questions about brain function. Among the statistical methods used for this purpose, brain decoding provides a tool to predict the mental state of a human subject based on the recorded brain signal. Brain decoding is widely applied in the contexts of brain-computer interfacing, medical diagnosis, and multivariate hypothesis testing on neuroimaging data. In the last case, linear classifiers are generally employed to discriminate between experimental conditions, and the derived weights are then visualized in the form of brain maps to further study the spatio-temporal patterns of the underlying neurophysiological activity. It is well known that the brain maps derived from the weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal-to-noise ratio, across-subject variability, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present there is no formal definition of the interpretability of multivariate brain maps and, as a consequence, no quantitative measure for evaluating the interpretability of different brain decoding methods. In this thesis, as the primary contribution, we propose a theoretical definition of interpretability in linear brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. As an application of the proposed definition, we present a heuristic for approximating interpretability in the multivariate analysis of evoked magnetoencephalography (MEG) responses. We propose to combine the approximated interpretability and the generalization performance of the model into a new multi-objective criterion for model selection. Our results on simulated and real MEG data show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion yields more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for the quantitative evaluation of interpretability, and hence facilitates the development of more effective brain decoding algorithms in the future. As the secondary contribution, we present an application of multi-task joint feature learning for group-level multivariate pattern recovery in single-trial MEG decoding. The proposed method allows for recovering sparse yet consistent patterns across different subjects, and therefore enhances the interpretability of the decoding model. We evaluated the performance of multi-task joint feature learning in terms of generalization, reproducibility, and quality of pattern recovery against traditional single-subject and pooling approaches on both simulated and real MEG datasets. Our experimental results demonstrate that the multi-task joint feature learning framework is capable of recovering meaningful patterns of spatio-temporally distributed brain activity across individuals while still maintaining excellent generalization performance. The presented methodology facilitates the application of brain decoding to characterizing fine-grained distinctive patterns of brain activity in group-level inference on neuroimaging data.
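A minimal sketch of such a multi-objective model selection, assuming scikit-learn; the cosine-similarity reproducibility proxy, the linear trade-off, and all names below are illustrative assumptions, not the thesis's exact criterion:

```python
# Sketch: hyper-parameter selection trading off decoding accuracy against
# the reproducibility of the weight maps across CV folds (illustrative).
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import StratifiedKFold

def reproducibility(weight_maps):
    """Mean pairwise cosine similarity between per-fold weight maps."""
    W = np.array(weight_maps)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    sims = W @ W.T
    iu = np.triu_indices_from(sims, k=1)
    return sims[iu].mean()

def select_alpha(X, y, alphas, trade_off=0.5):
    """Pick alpha maximizing trade_off*accuracy + (1-trade_off)*reproducibility."""
    best_alpha, best_score = None, -np.inf
    for alpha in alphas:
        accs, maps = [], []
        for tr, te in StratifiedKFold(n_splits=5).split(X, y):
            clf = RidgeClassifier(alpha=alpha).fit(X[tr], y[tr])
            accs.append(clf.score(X[te], y[te]))
            maps.append(clf.coef_.ravel())
        score = trade_off * np.mean(accs) + (1 - trade_off) * reproducibility(maps)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```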
6

Predictive networks for multi meta-omics data integration

Zandonà, Alessandro January 2017 (has links)
The role of the microbiome in disease onset and in equilibrium is being exposed by a wealth of high-throughput omics methods. All key research directions, e.g., the study of gut microbiome dysbiosis in IBD/IBS, indicate the need for bioinformatics methods that can model the complexity of microbial community ecology and unravel its disease-associated perturbations. A most promising direction is the "meta-omics" approach, which allows profiling based on various biological molecules at the metagenomic scale (e.g., metaproteomics, metametabolomics) as well as on different "microbial" omes (eukaryotes and viruses), within a systems biology approach. This thesis introduces a bioinformatic framework for microbiota datasets that combines predictive profiling, differential network analysis and meta-omics integration. In detail, the framework identifies biomarkers discriminating amongst clinical phenotypes through machine learning techniques (Random Forest or SVM), based on a complete Data Analysis Protocol derived from two initiatives funded by the FDA: the MicroArray Quality Control-II and Sequencing Quality Control projects. The biomarkers are interpreted in terms of biological networks: the framework provides a setup for network inference, quantification of network differences based on the glocal Hamming and Ipsen-Mikhailov (HIM) distance, and detection of network communities. The differential analysis of networks allows the study of microbiota structural organization as well as of the evolving trajectories of microbial communities associated with the dynamics of the target phenotypes. Moreover, the framework combines a novel similarity network fusion method and machine learning to identify biomarkers from the integration of multiple meta-omics data. The framework implementation requires only standard open source computational biology tools, as a combination of R/Bioconductor and Python functions. In particular, full scripts for meta-omics integration are available in a GitHub repository to ease reuse (https://github.com/AleZandona/INF). The pipeline has been validated on original data from three different clinical datasets. First, the predictive profiling and the differential network analysis were applied to a pediatric Inflammatory Bowel Disease (IBD) cohort (faecal vs biopsy environments) and controls, in collaboration with a multidisciplinary team at the Ospedale Pediatrico Bambino Gesù (Rome, I). Then, the meta-omics integration was tested on paired bacterial and fungal gut microbiota datasets from human IBD patients at the Gastroenterology Department of the Saint Antoine Hospital (Paris, F), thanks to the collaboration with the "Commensals and Probiotics-Host Interactions" team at INRA (Jouy-en-Josas, F). Finally, the framework was validated on a bacterial-fungal gut microbiota dataset from children affected by Rett syndrome. The different nature of the datasets used for validation naturally supports the extension of the framework to different omics datasets. Besides, clinical practice can take advantage of our framework, given the reproducibility and robustness of the results, ensured by the adopted Data Analysis Protocol, as well as the biological relevance of the findings, confirmed by the clinical collaborators. Specifically, the omics-based dysbiosis profiles and the inferred biological networks can support current diagnostic tools in revealing disease-associated perturbations at a much earlier, prodromal stage of disease, and may be used for disease prevention, diagnosis and prognosis.
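A minimal sketch of the predictive-profiling step, assuming scikit-learn; the cross-validated Random Forest importance ranking below is an illustrative simplification of the full Data Analysis Protocol, and all names are hypothetical:

```python
# Sketch: ranking candidate taxa as phenotype biomarkers with a Random
# Forest, averaging feature importances over CV folds (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def rank_biomarkers(X, y, feature_names, n_splits=5, seed=0):
    """Return (taxon, importance) pairs sorted by decreasing importance."""
    importances = np.zeros(X.shape[1])
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr, _ in cv.split(X, y):
        rf = RandomForestClassifier(n_estimators=500, random_state=seed)
        rf.fit(X[tr], y[tr])
        importances += rf.feature_importances_
    importances /= n_splits
    order = np.argsort(importances)[::-1]
    return [(feature_names[i], importances[i]) for i in order]
```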
7

Stochastic Analysis of a resource reservation system

Manica, Nicola January 2013 (has links)
An unmistakable trend in embedded systems is the growth of soft real-time computing. A soft real-time application is one for which deadlines can occasionally be missed, but the probability of this event has to be controllable and predictable. This work aims to close the gap in the research on stochastic real-time analysis related to resource reservation scheduling algorithms. This dissertation attempts to:
1. give a quick overview of classic real-time analysis;
2. analyze the problems that arise when the well-known techniques are used in the context of soft real-time applications: the overestimation of parameter assignments, as in hard real-time systems based on worst-case execution times, and the time and memory complexity of the known theoretical stochastic analysis;
3. propose solutions able to overcome the limitations shown in point 2;
4. show some specific examples (theoretical and practical) in which resource reservation leads to advantages.
The novel contributions of this thesis are:
• a new bound to predict the probability of a deadline miss in resource reservation systems;
• a very efficient numerical solution for matrices generated by well-known abstraction models of reservations based on Quasi-Birth-Death Markov processes;
• an analytical solution, with some conservative approximations, for the same models;
• a new model for specific applications, such as interrupts;
• experiments using resource reservation in different contexts.
The thesis evolves along two different approaches. The first is based on the exact model of reservations, and its contribution is to define a new pessimistic bound, efficient in terms of computation, able to overcome the problem of requiring complete knowledge of the computation times; this solution is an approximation of the exact solution of the model. The second is based on an approximate model, for which the novel contributions are an exact and numerically efficient solution for the model based on Quasi-Birth-Death Markov processes, and an approximate analytical solution which can be computed with negligible complexity and which is reversible. These techniques are applicable as long as the minimum interarrival time of a request is greater than a server period. Unfortunately, there exist situations in which this assumption is not feasible; an important example is the use of resource reservations to schedule interrupts. In order to cover this situation as well, another important novel result of this thesis is the introduction of a new model for scheduling interrupts. In addition, some practical examples of the use of resource reservations are presented.
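A minimal Monte Carlo sketch of the quantity at the core of this analysis, the deadline-miss probability of a task served by a reservation with budget Q every period P; this is an illustrative simplification, not the thesis's bound or its QBD solution:

```python
# Sketch: estimating the deadline-miss probability of a periodic task under
# a reservation (Q execution units per server period P), by simulation.
import random

def miss_probability(exec_time_dist, job_period, deadline, Q, P,
                     n_jobs=100_000, seed=1):
    """Jobs arrive every `job_period`; pending work drains at Q per P.
    A job misses if its (approximate) finishing time exceeds `deadline`."""
    rng = random.Random(seed)
    backlog = 0.0  # unfinished work carried over, in execution-time units
    misses = 0
    for _ in range(n_jobs):
        backlog += exec_time_dist(rng)
        periods_needed = -(-backlog // Q)      # ceil(backlog / Q)
        if periods_needed * P > deadline:
            misses += 1
        served = (job_period // P) * Q         # work served before next arrival
        backlog = max(0.0, backlog - served)
    return misses / n_jobs

# Example: exponential execution times with mean 2, reservation (Q=3, P=4):
# p = miss_probability(lambda r: r.expovariate(0.5),
#                      job_period=8, deadline=8, Q=3, P=4)
```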
8

Modeling and Querying Data Series and Data Streams with Uncertainty

Dallachiesa, Michele January 2014 (has links)
Many real applications consume data that is intrinsically uncertain and error-prone. An uncertain data series is a series whose point values are uncertain. An uncertain data stream is a data stream whose tuples are existentially uncertain and/or have an uncertain value. Typical sources of uncertainty in data series and data streams include sensor data, data synopses, privacy-preserving transformations and forecasting models. In this thesis, we focus on the following three problems: (1) the formulation and the evaluation of similarity search queries on uncertain data series; (2) the evaluation of nearest neighbor search queries on uncertain data series; (3) the adaptation of sliding windows in uncertain data stream processing to accommodate existential and value uncertainty. We demonstrate experimentally that the correlation among neighboring timestamps in data series can be leveraged to increase the accuracy of the results. We further show that the "possible worlds" semantics can be used as the underlying uncertainty model to formulate nearest neighbor queries that can be evaluated efficiently. Finally, we discuss the relation between existential and value uncertainty in data stream applications, and verify experimentally our proposal of uncertain sliding windows.
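A minimal sketch of a probabilistic similarity query under possible-worlds semantics via naive Monte Carlo sampling; the thesis develops far more efficient evaluations, and the independent Gaussian per-point model below is an assumption:

```python
# Sketch: P[ distance(uncertain series, query) <= tau ] by sampling
# possible worlds of an uncertain data series (illustrative, brute force).
import math
import random

def prob_within(series_mean, series_std, query, tau, n_worlds=10_000, seed=0):
    """Each point i is modeled as an independent N(series_mean[i], series_std[i]);
    return the fraction of sampled worlds within Euclidean distance tau."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_worlds):
        world = [rng.gauss(m, s) for m, s in zip(series_mean, series_std)]
        dist = math.sqrt(sum((w - q) ** 2 for w, q in zip(world, query)))
        hits += dist <= tau
    return hits / n_worlds
```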
9

Automatic Speech Recognition Quality Estimation

Jalalvand, Shahab January 2017 (has links)
Evaluation of automatic speech recognition (ASR) systems is difficult and costly, since it requires manual transcriptions. This evaluation is usually done by computing the word error rate (WER), the most popular metric in the ASR community. Such computation is possible only if manual references are available, which is too rigid a condition for real-life applications. A reference-free metric for evaluating ASR performance is the confidence measure provided by the ASR decoder. However, the confidence measure is not always available, especially in commercial ASR usage. Even when available, this measure is usually biased towards the decoder; from this perspective, the confidence measure is not suitable for comparison purposes, for example between two ASR systems. These issues motivate the need for an automatic quality estimation system for ASR outputs. This thesis explores ASR quality estimation (ASR QE) from different perspectives, including feature engineering, learning algorithms and applications. From the feature engineering perspective, a wide range of features extractable from the input signal and the output transcription are studied. These features represent the quality of the recognition from different aspects, and they are divided into four groups: signal, textual, hybrid and word-based features. From the learning point of view, we address two main approaches: i) QE via regression, suitable for the single-hypothesis scenario; ii) QE via machine-learned ranking (MLR), suitable for the multiple-hypotheses scenario. In the former, a regression model is used to predict the WER score of each single hypothesis created through a single automatic transcription channel. In the latter, a ranking model is used to predict the order of multiple hypotheses with respect to their quality. Multiple hypotheses are mainly generated by several ASR systems or several recording microphones. From the application point of view, we introduce two applications in which ASR QE yields salient improvements in terms of WER: i) QE-informed data selection for acoustic model adaptation; ii) QE-informed system combination. In the former, we exploit single-hypothesis ASR QE methods in order to select the best adaptation data for upgrading the acoustic model. In the latter, we exploit multiple-hypotheses ASR QE methods to rank and combine the automatic transcriptions in a supervised manner. The experiments are mostly conducted on the CHiME-3 English dataset, which consists of Wall Street Journal utterances recorded by multiple distant microphones in noisy environments. The results show that QE-informed acoustic model adaptation leads to 1.8% absolute WER reduction and QE-informed system combination leads to 1.7% absolute WER reduction on the CHiME-3 task. The outcomes of this thesis are packed in the frame of an open source toolkit named TranscRater (transcription rating toolkit, https://github.com/hlt-mt/TranscRater), which has been developed based on the aforementioned studies. TranscRater can be used to extract informative features, train QE models and predict the quality of reference-less recognitions in a variety of ASR tasks.
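For reference, WER is the word-level edit distance between reference and hypothesis, normalized by the reference length; a minimal sketch of the standard definition (not TranscRater code):

```python
# Sketch: word error rate via dynamic-programming edit distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# wer("the cat sat", "the cat sat down")  ->  1/3 (one insertion)
```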
10

From data to mathematical analysis and simulation in models in epidemiology and ecology

Clamer, Valentina January 2016 (has links)
This dissertation is divided into three parts. In the first part we analyse data on the occurrence of influenza-like illness (ILI) symptoms during the 2009 influenza A/H1N1 pandemic, collected in two primary schools of Trento, Italy. These data were used to calibrate a discrete-time SIR model, designed to estimate the probabilities of influenza transmission within classes, grades and schools using Markov Chain Monte Carlo (MCMC) methods. We found that the virus was mainly transmitted within class, with lower levels of transmission between students in the same grade and even lower, though not significantly so, among different grades within the schools. We estimated median values of R0 from the epidemic curves in the two schools of 1.16 and 1.40; on the other hand, we estimated the average number of students infected by the first school case to be 0.85 and 1.09 in the two schools. This discrepancy suggests that household and community transmission played an important role in sustaining the school epidemics. The high probability of infection between students in the same class confirms that targeting within-class transmission is key to controlling the spread of influenza in school settings and, as a consequence, in the general population. In the second part, starting from a basic host-parasitoid model, we study the dynamics of a 2 hosts-1 parasitoid model, assuming for the sake of simplicity that larval stages have a fixed duration. If each host is subject to density-dependent mortality in its larval stage, we obtain explicit conditions for the coexistence of both hosts, provided that each 1 host-1 parasitoid system tends to an equilibrium point. If instead mortality is density-independent, host coexistence is impossible under the same conditions. On the other hand, if at least one of the 1 host-1 parasitoid systems has oscillatory dynamics (which happens for some parameter values), we found through numerical bifurcation analysis that coexistence is favoured, and coexistence between the two hosts may occur even in the case without density dependence. The analysis of this case is based on methods for approximating the dominant characteristic multipliers of the monodromy operator, using a recent method introduced by Breda et al. Models of this type may be relevant for designing control strategies, based on native parasitoids of indigenous fruit flies, against Drosophila suzukii, a recently introduced fruit fly that has caused severe production losses. In the third part, we present a starting point for analysing raw data collected by Stacconi et al. in the province of Trento, Italy. We present an extension of the model of Part 2 with two hosts and two parasitoids. Since its analysis is complicated, we begin with a simpler one host-one parasitoid model to better understand the possible impact of parasitoids on a host population. We first assume that the host population is at its parasitoid-free equilibrium, and parasitoids are then introduced as different percentages of the initial adult hosts. Comparing the times needed by parasitoids to halve the host pupae, we found that the best percentage choice is 10%. We therefore fix this percentage of parasitoid introduction and analyse what happens if parasitoids are introduced when the host population is not at equilibrium, either by always introducing the same percentage or the same absolute number of parasitoids. In this case, even if the attack rate is at 1/10 of its maximum value, parasitoids have a strong effect on the host population, shifting it to an oscillatory regime. This effect, however, would require more than 100 days, although it can be faster if parasitoids are introduced before the host population has reached its parasitoid-free equilibrium; releases could thus be made when the host population is low. Lastly, we also investigate what happens if the mortality rates of these species are higher in nature, and we found no big difference with respect to the results obtained using laboratory data.
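A minimal sketch of a discrete-time, chain-binomial (Reed-Frost style) within-class transmission step of the kind calibrated here with MCMC; the parameter names and values are illustrative, not the thesis's:

```python
# Sketch: chain-binomial SIR step for within-class influenza transmission.
import random

def sir_step(S, I, p_class, rng):
    """One day: each susceptible escapes infection from each of the I
    infectious classmates independently with probability 1 - p_class."""
    p_inf = 1.0 - (1.0 - p_class) ** I
    new_cases = sum(rng.random() < p_inf for _ in range(S))
    return S - new_cases, new_cases

def epidemic_curve(S0=25, I0=1, p_class=0.02, days=30, seed=0):
    """Daily new cases in one class, with a one-day infectious period."""
    rng = random.Random(seed)
    S, curve = S0, [I0]
    for _ in range(days):
        S, new = sir_step(S, curve[-1], p_class, rng)
        curve.append(new)
    return curve
```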
