411 |
A Novel Approach For Cancer Characterization Using Latent Dirichlet Allocation and Disease-Specific Genomic Analysis. Yalamanchili, Hima Bindu, 05 June 2018 (has links)
No description available.
|
412 |
Semi-parametric Bayesian Inference of Accelerated Life Test Using Dirichlet Process Mixture Model. Liu, Xi, January 2015 (has links)
No description available.
|
413 |
Bayesian Sparse Regression with Application to Data-driven Understanding of Climate. Das, Debasish, January 2015 (has links)
Sparse regressions that constrain the L1-norm of the coefficients became popular because, unlike ordinary regressions, they can handle high-dimensional data without the overfitting and model-identifiability issues that arise when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that generalize well and are easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations, among others. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate these constraints. We applied sparse regression to the feature-selection problem of statistical downscaling of climate variables, with particular focus on their extremes. This is important for many impact studies in which climate-change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help identify plausible causal drivers and inform extremes downscaling. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. Using a variational Bayes approximation, we obtain posteriors over the regression coefficients, which indicate the dependence of extremes on the corresponding covariates and provide uncertainty estimates.
The method is applied to selecting informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations expected from known precipitation physics and generate novel insights that can inform physical understanding. We plan to extend the model to discover covariates for extreme intensity in the future. We further extend our framework to handle the dynamic relationships among climate variables using a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet process (DP). The extended model achieves simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about the association between pairs of data points is incorporated into the model through must-link constraints on a Markov random field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on the regression coefficients and cluster variables. / Computer and Information Science
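The L1-penalized point estimate at the core of this model family can be sketched in a few lines of cyclic coordinate descent. This is a minimal illustration of the sparsity mechanism only, not the hierarchical Bayesian model of the thesis; the function name and the synthetic data are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent:
    minimizes 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove every contribution except feature j.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            # Soft-thresholding update for coordinate j.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b
```

On well-separated synthetic data, the soft-thresholding step zeroes out irrelevant coefficients while only mildly shrinking the true signals, which is the covariate-selection behavior the abstract describes.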
|
414 |
Bayesian Approach Dealing with Mixture Model Problems. Zhang, Huaiye, 05 June 2012 (has links)
In this dissertation, we focus on two research topics related to mixture models. The first topic is Adaptive Rejection Metropolis Simulated Annealing for Detecting Global Maximum Regions, and the second topic is Bayesian Model Selection for Nonlinear Mixed Effects Model.
In the first topic, we consider a finite mixture model, which is used in many applications to fit data from heterogeneous populations. The expectation-maximization (EM) algorithm and Markov chain Monte Carlo (MCMC) are two popular methods for estimating the parameters of a finite mixture model. However, both methods may converge to local maximum regions rather than the global maximum when multiple local maxima exist. In this dissertation, we propose a new approach, Adaptive Rejection Metropolis Simulated Annealing (ARMS annealing), to improve the EM algorithm and MCMC methods. Combining simulated annealing (SA) with adaptive rejection Metropolis sampling (ARMS), which uses a piecewise-linear envelope function as its proposal distribution, ARMS annealing generates a set of proper starting points that help to reach all possible modes. By combining ARMS annealing with the EM algorithm and with the Bayesian approach, respectively, we propose two approaches: an EM ARMS annealing algorithm and a Bayesian ARMS annealing approach. EM ARMS annealing implements the EM algorithm using the starting points proposed by ARMS annealing; ARMS annealing likewise helps MCMC approaches determine starting points. Both approaches capture the global maximum region and estimate the parameters accurately. An illustrative example uses survey data on the number of charitable donations.
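The value of good starting points for EM can be illustrated with plain random restarts, a much simpler cousin of the ARMS annealing idea, on a two-component one-dimensional Gaussian mixture. Everything below (names, data) is an illustrative sketch, not the dissertation's algorithm.

```python
import numpy as np

def em_gmm_1d(x, n_starts=5, n_iter=100, seed=0):
    """EM for a two-component 1-D Gaussian mixture, restarted from
    several starting points; the run with the best log-likelihood wins."""
    rng = np.random.default_rng(seed)
    best_ll, best_params = -np.inf, None
    for _ in range(n_starts):
        mu = rng.choice(x, size=2, replace=False)       # random starting means
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: responsibilities of each component for each point.
            dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
                   / (sigma * np.sqrt(2 * np.pi))
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, standard deviations.
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        ll = np.log(dens.sum(axis=1)).sum()  # log-likelihood at the last E-step
        if ll > best_ll:
            best_ll, best_params = ll, (pi, mu, sigma)
    return best_params
```

Keeping only the best-likelihood run across restarts is the crude analogue of the dissertation's point that well-chosen starting points let EM find the global maximum region.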
The second topic concerns the nonlinear mixed-effects (NLME) model. A parametric NLME model typically requires strong assumptions that make the model less flexible and that are often not satisfied in real applications. To allow the NLME model more flexible assumptions, we present three semiparametric Bayesian NLME models constructed with Dirichlet process (DP) priors; a DP model can be viewed as an infinite mixture model. We propose a unified approach, the penalized posterior Bayes factor, for model comparison. Using simulation studies, we compare the performance of two of the three semiparametric hierarchical Bayesian approaches with that of the parametric Bayesian approach. The simulation results suggest that the penalized posterior Bayes factor is a robust method for comparing hierarchical parametric and semiparametric models. An application to gastric-emptying studies demonstrates the advantages of our estimation and evaluation approaches. / Ph. D.
|
415 |
Structural Shape Optimization Based On The Use Of Cartesian Grids. Marco Alacid, Onofre, 06 July 2018 (has links)
Tesis por compendio / As ever more challenging designs are required in present-day industries, the traditional trial-and-error procedure frequently used for designing mechanical parts slows down the design process and yields suboptimal designs, so that new approaches are needed to obtain a competitive advantage. With the ascent of the Finite Element Method (FEM) in the engineering community in the 1970s, structural shape optimization arose as a promising area of application.
However, due to the iterative nature of shape optimization processes, handling large numbers of numerical models, together with the approximate character of numerical methods, may even dissuade practitioners from using these techniques (or prevent them from exploiting their full potential), because the development time of new products is becoming ever shorter.
This Thesis is concerned with the formulation of a 3D methodology based on the Cartesian-grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. This methodology belongs to the category of embedded (or fictitious) domain discretization techniques in which the key concept is to extend the structural analysis problem to an easy-to-mesh approximation domain that encloses the physical domain boundary.
The use of Cartesian grids provides a natural platform for structural shape optimization because the numerical domain is separated from a physical model, which can easily be changed during the optimization procedure without altering the background discretization. Another advantage is the fact that mesh generation becomes a trivial task since the discretization of the numerical domain and its manipulation, in combination with an efficient hierarchical data structure, can be exploited to save computational effort.
However, these advantages are challenged by several numerical issues. Essentially, the computational effort has moved from expensive meshing algorithms towards, for example, elaborate numerical integration schemes designed to capture the mismatch between the geometrical domain boundary and the embedding finite element mesh. To address this, we used a stabilized formulation to impose boundary conditions and developed novel techniques to capture the exact boundary representation of the models.
To complete the implementation of a structural shape optimization method, an adjoint formulation is used to derive the design sensitivities required by gradient-based algorithms. The derivatives are not only variables required by the process but also constitute a powerful tool for projecting information between different designs, or even for creating h-adapted meshes without going through a full h-adaptive refinement process.
The proposed improvements are reflected in the numerical examples included in this Thesis. These analyses clearly show the improved behavior of the cgFEM technology as regards numerical accuracy and computational efficiency, and consequently the suitability of the cgFEM approach for shape optimization or contact problems. / Marco Alacid, O. (2017). Structural Shape Optimization Based On The Use Of Cartesian Grids [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86195 / Compendio
|
416 |
Caracterización diferenciable y holomorfa de superficies topológicamente planas. Llanos Valencia, Héctor Aquiles, 16 January 2020 (has links)
Surfaces (connected 2-manifolds) that are homeomorphic to an open subset of the sphere S2 are called topologically planar surfaces. In this thesis we characterize these surfaces and study the connections among these characterizations.
It is clear that the plane and the sphere are planar. Note that one property these two surfaces share is that both satisfy the famous Jordan Curve Theorem, i.e., the complement of any simple closed curve in the plane or in the sphere has exactly two connected components. Another property exhibited by these two surfaces is that every closed differential 1-form of class C1 with compact support is necessarily exact.
Finally, we describe the relationships among these properties and obtain a rigidity result, namely: a Riemann surface homeomorphic to an open subset of S2 is biholomorphic to an open subset of the Riemann sphere. / Tesis
|
417 |
Sur la répartition des unités dans les corps quadratiques réels. Lacasse, Marc-André, 12 1900 (has links)
This master's thesis studies real quadratic fields and a particular element of such fields: the fundamental unit. The thesis begins by presenting, as clearly as possible, the background on the various subjects that are essential to understanding the computations and results of the research. We first introduce quadratic fields and their rings of algebraic integers, and we describe their units. We then discuss continued fractions, since they appear in an algorithm for computing the fundamental unit. Afterwards, we treat binary quadratic forms and Dirichlet's class number formula, which expresses the fundamental unit in terms of other quantities. Once this is done, we present our calculations and results.
Our research concerns the distribution of fundamental units in real quadratic fields, the distribution of units in real quadratic fields, and the moments of the logarithm of the fundamental unit. (The logarithm of the fundamental unit is called the regulator.)
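The continued-fraction algorithm for the fundamental unit mentioned above can be sketched through the closely related Pell equation x^2 - d*y^2 = 1: for squarefree d with d not congruent to 1 mod 4, the fundamental solution yields the fundamental unit x + y*sqrt(d) of Z[sqrt(d)] (when x^2 - d*y^2 = -1 is solvable, the fundamental unit instead comes from that equation and the solution below is its square). The function name is illustrative.

```python
import math

def pell_fundamental_solution(d):
    """Smallest positive (x, y) with x**2 - d*y**2 == 1, found from the
    continued-fraction expansion of sqrt(d); d must be a non-square."""
    a0 = math.isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, den, a = 0, 1, a0           # state of the CF expansion of sqrt(d)
    p_prev, p = 1, a0              # convergent numerators p_{k-1}, p_k
    q_prev, q = 0, 1               # convergent denominators q_{k-1}, q_k
    while p * p - d * q * q != 1:
        # Next partial quotient of sqrt(d).
        m = den * a - m
        den = (d - m * m) // den
        a = (a0 + m) // den
        # Next convergent p/q.
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return p, q
```

For example, d = 13 gives (649, 180), so 649 + 180*sqrt(13) is a unit of Z[sqrt(13)] (here 13 is congruent to 1 mod 4, so the fundamental unit of the full ring of integers is a half-integer unit; the sketch ignores that refinement).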
|
419 |
A framework for exploiting electronic documentation in support of innovation processes. Uys, J. W., 03 1900 (has links)
Thesis (PhD (Industrial Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The crucial role of innovation in creating sustainable competitive advantage is widely recognised in industry today. Likewise, the importance of having the required information accessible to the right employees at the right time is well-appreciated. More specifically, the dependency of effective, efficient innovation processes on the availability of information has been pointed out in literature.
A great challenge is countering the effects of the information overload phenomenon in organisations in order for employees to find the information appropriate to their needs without having to wade through excessively large quantities of information to do so. The initial stages of the innovation process, which are characterised by free association, semi-formal activities, conceptualisation, and experimentation, have already been identified as a key focus area for improving the effectiveness of the entire innovation process. The dependency on information during these early stages of the innovation process is especially high.
Any organisation requires a strategy for innovation, together with a number of well-defined, implemented processes and measures, to be able to innovate effectively and efficiently and to drive its innovation endeavours. In addition, the organisation requires certain enablers to support its innovation efforts, including certain core competencies, technologies and knowledge. Most importantly for this research, enablers are required to manage and utilise innovation-related information more effectively. Information residing both inside and outside the boundaries of the organisation is required to feed the innovation process, and the specific sources of such information are numerous. Such information may be structured or unstructured in nature; however, an ever-increasing proportion of the available innovation-related information is of the unstructured type. Examples include the textual content of reports, books, e-mail messages and web pages. This research explores the innovation landscape and typical sources of innovation-related information. In addition, it explores the landscape of text-analytical approaches and techniques in search of ways to deal more effectively and efficiently with unstructured, textual information.
A framework that can be used to provide a unified, dynamic view of an organisation's innovation-related information, both structured and unstructured, is presented. Once implemented, this framework will constitute an innovation-focused knowledge base that will organise such innovation-related information and make it accessible to the stakeholders of the innovation process. Two novel, complementary text-analytical techniques, Latent Dirichlet Allocation and the Concept-Topic Model, were identified for application within the framework. The potential value of these techniques as part of the information systems that would embody the framework is illustrated. The resulting knowledge base would cause a quantum leap in the accessibility of information and may significantly improve the way innovation is done and managed in the target organisation.
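As a sketch of the first of the two techniques named above, here is a minimal collapsed Gibbs sampler for Latent Dirichlet Allocation over integer-coded toy documents. All names, hyperparameters and data are illustrative, not the implementation used in the thesis.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA; docs are lists of word ids.
    Returns the topic-word count matrix after the final sweep."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))  # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))    # topic-word counts
    nk = np.zeros(n_topics)                # topic totals
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):         # initialize counts from random z
        for w, k in zip(doc, z[d]):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                # remove current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Collapsed conditional p(z = k | rest).
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = int(rng.choice(n_topics, p=p / p.sum()))
                z[d][i] = k                # add the new assignment back
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return nkw
```

On a toy corpus whose documents draw from two disjoint vocabulary halves, the returned topic-word counts typically concentrate each half in a different topic, which is the unsupervised grouping behavior that makes LDA useful for organising unstructured text.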
|
420 |
Calcul stochastique via régularisation en dimension infinie avec perspectives financières. Di Girolami, Cristina, 05 July 2010 (has links) (PDF)
This thesis develops certain aspects of stochastic calculus via regularization for processes X with values in a general Banach space B. It introduces an original concept of Chi-quadratic variation, where Chi is a subspace of the dual of the tensor product B⊗B equipped with the projective topology. Particular attention is devoted to the case where B is the space of continuous functions on [-τ,0], τ>0. A class of C^1 stability results for processes admitting a Chi-quadratic variation is established, together with Itô formulas for such processes. A significant role is played by real-valued processes X with finite quadratic variation (for example, Dirichlet and weak Dirichlet processes). The natural C[-τ,0]-valued process is the so-called window process X_t(•), where X_t(y) = X_{t+y}, y ∈ [-τ,0]. Let T>0. If X is a process whose quadratic variation is [X]_t = t and h = H(X_T(•)), where H : C([-T,0]) → R is of class C^3 in the Fréchet sense with respect to L^2([-T,0]), or H depends on a finite number of Wiener integrals, then h can be represented as a real number H_0 plus a forward integral of the type \int_0^T \xi d^-X, where \xi is an explicitly given process. This representation result for the random variable h is closely linked to a function u : [0,T] x C([-T,0]) → R which, in general, solves an infinite-dimensional partial differential equation and satisfies H_0 = u(0, X_0(•)), \xi_t = Du(t, X_t(•))({0}). In some respects this generalizes the Clark-Ocone formula, which holds when X is a standard Brownian motion W. One motivation comes from the theory of option hedging when the price of the underlying asset is not a semimartingale.
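The assumption [X]_t = t for standard Brownian motion can be checked numerically: the sum of squared increments over a fine partition of [0, T] approximates T. A minimal sketch with illustrative names:

```python
import numpy as np

def discrete_quadratic_variation(T=1.0, n=200_000, seed=0):
    """Sum of squared increments of a simulated Brownian path on [0, T],
    i.e. the discretized quadratic variation [W]_T, which tends to T."""
    rng = np.random.default_rng(seed)
    # Independent Gaussian increments with variance T/n each.
    dW = rng.normal(0.0, np.sqrt(T / n), size=n)
    return float(np.sum(dW ** 2))
```

The estimator has variance 2*T^2/n, so with n = 200,000 steps the result sits within a few thousandths of T = 1.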
|