291 |
A contribution to the analysis of spectrum sensing techniques for PLC systems. Amado, Laryssa Ramos, 29 August 2011
This master's thesis discusses and analyzes the use of spectrum sensing techniques applied to PLC systems, in order to make the occupation of the spectrum explicit. Several signal processing and computational intelligence techniques are used to extract and select the smallest set of the most representative signal features for detection, so as to design the best and least complex signal detector, initially for the frequency band between 1.705 and 100 MHz, while allowing future modifications for applications in the band between 1.705 and 250 MHz. In addition, the spectrum sensing problem for PLC systems is formalized, and several research questions are analyzed for both data simulated in MATLAB and data measured in the field; the measurement process and the characteristics of these data are described. Although the analysis of the results indicates that the applied techniques are suitable for the problem at hand, further investigation is needed to better understand the PLC environment and its spectrum sensing issues. This work is therefore an initial study of these matters. Funded by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico).
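To make the detection task concrete, here is a minimal sketch of a generic energy detector, one of the simplest spectrum sensing techniques. The sampling rate, band edges, test tone and threshold factor are assumptions for illustration; this is not the feature-based detector designed in the thesis.

```python
# Generic energy detector (illustrative assumptions, not the thesis's detector):
# flag a band as occupied when its energy exceeds a noise-calibrated threshold.
import numpy as np

def band_energy(x, fs, f_lo, f_hi):
    """Energy of signal x (sampled at fs Hz) inside the band [f_lo, f_hi]."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(np.abs(spec[band]) ** 2) / len(x)

def is_occupied(x, fs, f_lo, f_hi, noise_energy, factor=2.0):
    """Decide occupancy by comparing in-band energy with factor * noise floor."""
    return band_energy(x, fs, f_lo, f_hi) > factor * noise_energy

# Synthetic check: noise only vs. noise plus a 30 MHz tone, fs = 200 MHz.
rng = np.random.default_rng(0)
fs, n = 200e6, 4096
t = np.arange(n) / fs
noise = rng.standard_normal(n)
signal = noise + 2.0 * np.sin(2 * np.pi * 30e6 * t)
floor = band_energy(noise, fs, 1.705e6, 100e6)
print(is_occupied(noise, fs, 1.705e6, 100e6, floor),   # False
      is_occupied(signal, fs, 1.705e6, 100e6, floor))  # True
```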
|
292 |
Essays on imperfect common knowledge in macroeconomics. Ribeiro, Marcel Bertini, 22 May 2018
This dissertation studies the implications of strategic uncertainty induced by imperfect common knowledge for macroeconomic models and economic policy. In the first chapter, I evaluate whether central bank transparency enhances the effectiveness of monetary policy. I study this question using a New Keynesian model in which firms observe neither the time-varying inflation target nor monetary policy shocks. Two informational assumptions are considered: (i) firms observe the interest rate decisions only (the standard assumption) and (ii) firms observe the interest rate and an idiosyncratic signal about the inflation target. Under the standard assumption, agents infer output and inflation fluctuations by realizing that other agents are acting exactly like them. That ceases to be true when agents face strategic uncertainty induced by the idiosyncratic signal. One key implication is that, in the case of a monetary contraction, greater transparency improves the inflation-output trade-off only under the second assumption. In the second chapter, for a general class of DSGE models, I show that whenever agents extract information from endogenous variables that depend directly on the underlying unobserved shock, there is a qualitative difference between signal extraction from those variables under imperfect information and under imperfect common knowledge. This difference in learning about unobserved shocks does not vanish even in the limiting case where the variance of the private signal goes to infinity. Intuitively, strategic uncertainty prevents agents from knowing other agents' decisions, even though those actions are the same in equilibrium. This discontinuity challenges the benchmark assumption by exposing the substantial knowledge about endogenous variables implicitly assumed available to agents under imperfect information. The third chapter develops a novel solution method for a general class of DSGE models with imperfect common knowledge. The main contribution is that the method allows endogenous state variables to be included in the system of linear rational-expectations equations under imperfect common knowledge. One key implication is that the endogenous persistence of state variables is the same under full information and under imperfect common knowledge. A preliminary empirical evaluation of the informational frictions suggests that the model under imperfect common knowledge better explains the expectations data but is relatively worse at explaining the macroeconomic aggregates.
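The limiting argument about private-signal noise can be seen in a textbook Gaussian signal-extraction example (an assumed illustration, not the thesis's DSGE setting): the weight an agent puts on a private signal vanishes as the signal's noise variance grows. The chapter's point is that learning from endogenous variables under imperfect common knowledge does not collapse to the imperfect-information benchmark even in this limit.

```python
# Textbook signal extraction (assumed parameters): theta ~ N(mu, s2_th),
# private signal x_i = theta + e_i with e_i ~ N(0, s2_e). The posterior mean
# weights the signal by lam = s2_th / (s2_th + s2_e), which goes to 0 as the
# private noise variance s2_e goes to infinity.
def posterior_mean(x_i, mu=2.0, s2_th=1.0, s2_e=1.0):
    lam = s2_th / (s2_th + s2_e)
    return lam * x_i + (1 - lam) * mu

for s2_e in (0.1, 1.0, 100.0, 1e6):
    print(f"noise variance {s2_e:>9}: E[theta | x_i = 3] = "
          f"{posterior_mean(3.0, s2_e=s2_e):.4f}")  # tends to the prior mean 2.0
```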
|
293 |
New PDE models for imaging problems and applications. Calatroni, Luca, January 2016
Variational methods and Partial Differential Equations (PDEs) have been extensively employed for the mathematical formulation of a myriad of problems describing physical phenomena such as heat propagation, thermodynamic transformations and many more. In imaging, PDEs following variational principles are often considered. In their general form these models combine a regularisation term and a data-fitting term, balancing the one against the other appropriately. Total variation (TV) regularisation is often used due to its edge-preserving and smoothing properties. In this thesis, we focus on the design of TV-based models for several different applications. We start by considering PDE models encoding higher-order derivatives to overcome well-known TV reconstruction drawbacks. Due to their high differential order and nonlinear nature, the computation of the numerical solution of these equations is often challenging. In this thesis, we propose directional splitting techniques and use Newton-type methods that, despite these numerical hurdles, yield reliable and efficient computational schemes. Next, we discuss the problem of choosing the appropriate data-fitting term in the case when multiple noise statistics are present in the data due, for instance, to different acquisition and transmission problems. We propose a novel variational model which encodes the different noise distributions appropriately and consistently in this case. Balancing the effect of the regularisation against the data fitting is also crucial. To this end, we consider a learning approach which estimates the optimal ratio between the two by using training sets of examples via bilevel optimisation. Numerically, we use a combination of semismooth Newton (SSN) and quasi-Newton methods to solve the problem efficiently. Finally, we consider TV-based models in the framework of graphs for image segmentation problems. Here, spectral properties combined with matrix completion techniques are needed to overcome the computational limitations due to the large amount of image data. Further, a semi-supervised technique for the measurement of the segmented region by means of the Hough transform is proposed.
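As a baseline for the TV-based models above, the following sketch denoises an image by gradient descent on a smoothed ROF energy E(u) = sum |grad u|_eps + (lam/2)||u - f||^2. The parameters are assumed for illustration; the thesis's higher-order and bilevel models go well beyond this.

```python
# Smoothed TV (ROF) denoising by explicit gradient descent; a minimal sketch
# with assumed parameters, not one of the thesis's models.
import numpy as np

def tv_denoise(f, lam=8.0, eps=1e-3, tau=1e-3, iters=500):
    u = f.copy()
    for _ in range(iters):
        # forward differences (edge replication at the boundary)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)      # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # backward differences approximate the divergence of (px, py)
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:, :1]))
        u -= tau * (lam * (u - f) - div)           # descend the ROF gradient
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # piecewise-constant image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print(np.abs(noisy - clean).mean(),                      # noisy error
      np.abs(tv_denoise(noisy) - clean).mean())          # denoised error
```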
|
294 |
Functional Genetic Analysis Reveals Intricate Roles of Conserved X-box Elements in Yeast Transcriptional Regulation. Voll, Sarah, January 2013
Understanding the functional impact of physical interactions between proteins and DNA on gene expression is important for developing approaches to correct disease-associated gene dysregulation. I conducted a systematic, functional genetic analysis of protein-DNA interactions in the promoter region of the yeast ribonucleotide reductase subunit gene RNR3. I measured the transcriptional impact of systematically perturbing the major transcriptional regulator, Crt1, and three X-box sites on the DNA known to physically bind Crt1. This analysis revealed interactions between two of the three X-boxes in the presence of Crt1, and unexpectedly, a significant functional role of the X-boxes in the absence of Crt1. Further analysis revealed Crt1-independent regulators of RNR3 that were impacted by X-box perturbation. Taken together, these results support the notion that higher-order X-box-mediated interactions are important for RNR3 transcription, and that the X-boxes have unexpected roles in the regulation of RNR3 transcription that extend beyond their interaction with Crt1.
|
295 |
Market sentiment: measure and importance for asset management. Frugier, Alain, 30 September 2011
The perfect rationality of investors, one of the foundations of the efficient market hypothesis, is increasingly being questioned. This has led to the development of behavioral finance. Market sentiment, which stems from it, is the focus of this study. Having first linked this concept to rationality and defined it, the study presents the most common ways of measuring market sentiment and assesses their ability to anticipate market returns. Then, through two largely independent studies, we do two things: (1) using mainly multi-agent models and a model of the impact of information shocks on the distribution of returns, we empirically show how the skewness and kurtosis of the distribution of returns can be used as market sentiment indicators; (2) we demonstrate that many standard sentiment indicators are processes affected by long- or short-term memory, which invalidates the way they are typically used in contrarian strategies.
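A small sketch of the idea in (1): rolling skewness and excess kurtosis of returns as crude sentiment proxies. The window length and the synthetic fat-tailed returns are assumptions for illustration, not the thesis's data or estimation procedure.

```python
# Rolling skewness / excess kurtosis as sentiment proxies (assumed window and
# synthetic data; illustrative only).
import numpy as np
from scipy import stats

def rolling_sentiment(returns, window=60):
    n = len(returns)
    skew = np.full(n, np.nan)
    kurt = np.full(n, np.nan)
    for t in range(window, n):
        w = returns[t - window:t]
        skew[t] = stats.skew(w)        # asymmetry of recent returns
        kurt[t] = stats.kurtosis(w)    # fat tails of recent returns
    return skew, kurt

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_t(df=4, size=1000)   # fat-tailed synthetic daily returns
s, k = rolling_sentiment(r)
print(np.nanmean(s), np.nanmean(k))
```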
|
296 |
Expressive power of circular proofs. Fortier, Jerome, 19 December 2014
This research aims at establishing the fundamental properties of a formal system with circular proofs introduced by Santocanale, to which we added the cut rule. We first show that there is a full correspondence between circular proofs and arrows from the so-called µ-bicomplete categories. These arrows are those that can be defined purely from the following tools: finite products and coproducts, initial algebras and final coalgebras. In the category of sets, circular proofs thus denote the functions that one can define by using finite cartesian products, finite disjoint unions, induction and coinduction. We also describe a cut-elimination procedure that produces, from a given finite circular proof, a proof without cycles and cuts, but which may be infinite. We prove that cut elimination gives an operational semantics to circular proofs; that is, it allows the functions they denote to be computed by means of a kind of automaton with memory. Finally, we are interested in the expressive power of this cut-eliminating automaton, that is, in characterizing the class of functions it can compute. We show, through a simulation, that the cut-eliminating automaton is strictly more expressive than higher-order pushdown automata.
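The ingredients named above can be illustrated outside the proof system itself. Below is a small, assumed illustration in which a fold consumes a finite list by induction (initial algebra) and an unfold produces a possibly infinite stream by coinduction (final coalgebra).

```python
# Induction as fold (catamorphism) and coinduction as unfold (anamorphism);
# an illustrative sketch, not the thesis's circular-proof calculus.
from typing import Callable, Iterator, List, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def fold(step: Callable[[A, B], B], nil: B, xs: List[A]) -> B:
    """Consume a finite list by structural induction."""
    acc = nil
    for x in reversed(xs):
        acc = step(x, acc)
    return acc

def unfold(next_state: Callable[[B], Tuple[A, B]], seed: B) -> Iterator[A]:
    """Produce a (possibly infinite) stream by coinduction."""
    while True:
        out, seed = next_state(seed)
        yield out

total = fold(lambda x, acc: x + acc, 0, [1, 2, 3, 4])   # inductive: 10
naturals = unfold(lambda n: (n, n + 1), 0)              # coinductive stream
print(total, [next(naturals) for _ in range(5)])        # 10 [0, 1, 2, 3, 4]
```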
|
297 |
Nonlinear constitutive models for lattice materials by discrete homogenization methods at large strains: application to biomembranes and textiles. ElNady, Khaled, 18 February 2015
The present thesis deals with the development of micromechanical schemes for the computation of the homogenized response of architectured materials, focusing on periodic lattice materials. Architectured and micro-architectured materials cover a wide range of mechanical properties according to the nodal connectivity, the geometrical arrangement of the structural elements, their moduli, and a possible structural hierarchy. The principal objective of the thesis is the consideration of geometrical nonlinearities accounting for the large changes of the initial lattice geometry, due to the small bending stiffness of the structural elements in comparison to their tensile rigidity. The so-called discrete homogenization method is extended to the geometrically nonlinear setting for periodic lattices; incremental schemes are constructed, based on a staggered localization-homogenization computation of the lattice response over a repetitive unit cell submitted to controlled deformation loading. The resulting effective medium is a micropolar anisotropic continuum whose effective properties account for the geometrical arrangement of the structural elements within the lattice and their mechanical properties. The non-affine response of the lattice leads to possible size effects, which can be captured by enriching the classical Cauchy continuum either with rotational degrees of freedom, as in the micropolar effective continuum, or with second-order gradients of the displacement field. Both strategies are followed in this work, the construction of second-order grade continua by discrete homogenization being done in a small-perturbations framework. We show that the two enrichment strategies are complementary, owing to the analogy between the constructions of micropolar and second-order grade continua by homogenization. Their combination further delivers tension, bending and torsion internal lengths, which reflect the lattice topology and the mechanical properties of its structural elements. Applications to textiles and to biological membranes described as quasi-periodic networks of filaments are considered. The computed effective response is validated by comparison with FE simulations performed over a representative unit cell of the lattice. The homogenization schemes have been implemented in a dedicated code written in a combined symbolic and numerical language, taking as input the lattice geometry and the microstructural mechanical properties. The developed predictive micromechanical schemes offer a design tool for conceiving new architectured materials that expand the boundaries of the 'material-property' space.
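A much-simplified, assumed illustration of the staggered localization-homogenization idea: a 1D unit cell of two nonlinear springs in series, where each macroscopic strain increment triggers a Newton solve of the local equilibrium before the homogenized stress is read off. The spring law and parameters are hypothetical; the thesis's micropolar and second-gradient schemes are far richer.

```python
# Incremental localization-homogenization for a toy 1D unit cell: two
# nonlinear springs in series (hypothetical hardening law f = k*e*(1+b*e^2)).
import numpy as np

def force(e, k, b):
    return k * e * (1.0 + b * e**2)

def dforce(e, k, b):
    return k * (1.0 + 3.0 * b * e**2)

def homogenize(macro_strains, k1=1.0, b1=2.0, k2=3.0, b2=0.5):
    e1 = 0.0                          # local strain in spring 1 (state variable)
    for e in macro_strains:           # spring 2 carries strain 2*e - e1
        for _ in range(20):           # Newton: enforce equal force in both springs
            r = force(e1, k1, b1) - force(2*e - e1, k2, b2)
            dr = dforce(e1, k1, b1) + dforce(2*e - e1, k2, b2)
            e1 -= r / dr
        yield e, force(e1, k1, b1)    # homogenized stress = common force

for e, s in homogenize(np.linspace(0.0, 0.3, 4)):
    print(f"macro strain {e:.2f} -> homogenized stress {s:.4f}")
```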
|
298 |
The development of a framework for evaluating e-assessment systems. Singh, Upasana Gitanjali, 11 1900
Academics encounter problems with the selection, evaluation, testing and implementation of e-assessment software tools. The researcher experienced these problems while adopting e-assessment at the university where she is employed. Hence she undertook this study, which is situated in schools and departments in Computing-related disciplines, namely Computer Science, Information Systems and Information Technology at South African Higher Education Institutions. The literature suggests that further research is required in this domain. Furthermore, preliminary empirical studies indicated similar disabling factors at other South African tertiary institutions, which were barriers to long-term implementation of e-assessment. Despite this, academics who are adopters of e-assessment indicate satisfaction, particularly when conducting assessments with large classes. Questions of the multiple choice genre can be assessed automatically, leading to increased productivity and more frequent assessments. The purpose of this research is to develop an evaluation framework to assist academics in determining which e-assessment tool to adopt, enabling them to make more informed decisions. Such a framework would also support evaluation of existing e-assessment systems.
The underlying research design is action research, which supported an iterative series of studies for developing, evaluating, applying, refining, and validating the SEAT (Selecting and Evaluating an e-Assessment Tool) Evaluation Framework and subsequently an interactive electronic version, e-SEAT. Phase 1 of the action research comprised Studies 1 to 3, which established the nature, context and extent of adoption of e-assessment. This set the foundation for development of SEAT in Phase 2. During Studies 4 to 6 in Phase 2, a rigorous sequence of evaluation and application facilitated the transition from the manual SEAT Framework to the electronic evaluation instrument, e-SEAT, and its further evolution.
This research resulted in both a theoretical contribution (SEAT) and a practical contribution (e-SEAT). The findings of the action research contributed, along with the literature, to the categories and criteria in the framework, which, in turn, contributed to the bodies of knowledge on MCQs and e-assessment.
The final e-SEAT version, the ultimate product of this action research, is presented in Appendix J1. For easier reference, the appendices are included on a CD attached to the back cover of this thesis. / Computing / PhD (Information Systems)
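The evaluation mechanism that an instrument like e-SEAT automates can be sketched as a weighted-criteria score; the criteria, weights and ratings below are hypothetical placeholders, not the actual SEAT categories.

```python
# Weighted-criteria scoring of candidate e-assessment tools (all names and
# numbers are hypothetical, for illustration only).
weights = {"question_types": 0.30, "security": 0.25,
           "reporting": 0.25, "lms_integration": 0.20}

tools = {
    "Tool A": {"question_types": 4, "security": 3, "reporting": 5, "lms_integration": 2},
    "Tool B": {"question_types": 3, "security": 5, "reporting": 3, "lms_integration": 4},
}

def score(ratings):
    """Weighted sum of 1-5 criterion ratings."""
    return sum(weights[c] * r for c, r in ratings.items())

for name, ratings in sorted(tools.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```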
|
299 |
Monte Carlo EM methods and particle approximations: application to the calibration of a stochastic volatility model. Allaya, Mouhamad M., 09 December 2013
This thesis pursues a double perspective in the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximization (EM) algorithm in hidden Markov models whose unobserved component has a Markov dependence structure of order greater than one. We begin with a brief description of the theoretical basis of the two statistical concepts in Chapters 1 and 2, which are devoted to them. We then focus on the simultaneous implementation of both concepts in Chapter 3, in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to efficiently approximate any bounded conditional functional, in particular the filtering and smoothing quantities, in non-linear and non-Gaussian settings. The EM algorithm is itself motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and particularly in the stochastic volatility models under study. Having presented the EM algorithm and the SMC methods, together with some of their properties, in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is done for exchange rates and for some stock indexes in Chapter 3. We conclude that chapter with a slight departure from the canonical stochastic volatility model, together with Monte Carlo simulations on the resulting model. Finally, in Chapters 4 and 5 we strive to provide the theoretical and practical foundations for extending sequential Monte Carlo methods, including particle filtering and smoothing, when the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model whose approximation has such a dependence property.
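A minimal sketch of the SMC side of this machinery: a bootstrap particle filter for the canonical stochastic volatility model mentioned above. The parameter values are assumed for illustration; in the thesis they would be estimated via the EM iterations.

```python
# Bootstrap particle filter for the canonical SV model (assumed parameters):
#   x_t = mu + phi*(x_{t-1} - mu) + sigma*eta_t,   y_t = exp(x_t / 2) * eps_t.
import numpy as np

def bootstrap_filter(y, mu=-1.0, phi=0.95, sigma=0.2, n_part=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal(n_part)  # stationary init
    means = []
    for yt in y:
        x = mu + phi * (x - mu) + sigma * rng.standard_normal(n_part)  # propagate
        logw = -0.5 * (x + yt**2 * np.exp(-x))       # y_t | x_t ~ N(0, exp(x_t))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                  # filtered mean of log-volatility
        x = x[rng.choice(n_part, n_part, p=w)]       # multinomial resampling
    return np.array(means)

# Synthetic data from the same model, then filter it.
rng = np.random.default_rng(1)
T, mu, phi, sigma = 200, -1.0, 0.95, 0.2
x_true = np.empty(T); x_true[0] = mu
for t in range(1, T):
    x_true[t] = mu + phi * (x_true[t - 1] - mu) + sigma * rng.standard_normal()
y = np.exp(x_true / 2) * rng.standard_normal(T)
print(np.abs(bootstrap_filter(y) - x_true).mean())   # mean absolute filtering error
```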
|
300 |
Simple optimizing JIT compilation of higher-order dynamic programming languages. Saleil, Baptiste, 05 1900
No description available.
|