81 |
Distribuição preditiva do preço de um ativo financeiro: abordagens via modelo de série de tempo Bayesiano e densidade implícita de Black & Scholes / Predictive distribution of a stock price: Bayesian time series model and Black & Scholes implied density approaches. Natália Lombardi de Oliveira, 01 June 2017 (has links)
Apresentamos duas abordagens para obter uma densidade de probabilidades para o preço futuro de um ativo: uma densidade preditiva, baseada em um modelo Bayesiano para série de tempo e uma densidade implícita, baseada na fórmula de precificação de opções de Black & Scholes. Considerando o modelo de Black & Scholes, derivamos as condições necessárias para obter a densidade implícita do preço do ativo na data de vencimento. Baseando-se nas densidades de previsão, comparamos o modelo implícito com a abordagem histórica do modelo Bayesiano. A partir destas densidades, calculamos probabilidades de ordem e tomamos decisões de vender/comprar um ativo. Como exemplo, apresentamos como utilizar estas distribuições para construir uma fórmula de precificação. / We present two different approaches to obtain a probability density function for the stock's future price: a predictive distribution, based on a Bayesian time series model, and the implied distribution, based on the Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on predictive densities, we compare the market implied model (Black & Scholes) with a history-based approach (Bayesian time series model). After obtaining the density functions, it is simple to evaluate probabilities of one being bigger than the other and to make a decision of selling/buying a stock. Also, as an example, we present how to use these distributions to build an option pricing formula.
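The Black & Scholes implied-density step can be sketched numerically: under the model, the price at maturity is lognormal, and order probabilities such as P(S_T > K) follow by integrating that density. The parameter values below are illustrative, not taken from the thesis.

```python
import math

def bs_implied_density(s, s0, r, sigma, T):
    """Implied (risk-neutral) density of S_T under Black & Scholes:
    lognormal with log-mean ln(s0) + (r - sigma^2/2) T and log-variance sigma^2 T."""
    mu = math.log(s0) + (r - 0.5 * sigma ** 2) * T
    var = sigma ** 2 * T
    return math.exp(-(math.log(s) - mu) ** 2 / (2 * var)) / (s * math.sqrt(2 * math.pi * var))

def prob_above(K, s0, r, sigma, T, n=20000):
    """P(S_T > K) by trapezoidal integration of the implied density."""
    hi = 10.0 * s0                      # far enough into the lognormal tail
    h = (hi - K) / n
    vals = [bs_implied_density(K + i * h, s0, r, sigma, T) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

For S0 = K = 100, r = 0.05, sigma = 0.2 and T = 1, the integral reproduces the closed form P(S_T > K) = Phi(d2) with d2 = (ln(S0/K) + (r - sigma^2/2)T)/(sigma sqrt(T)) = 0.15.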
|
82 |
Probabilidade de ocorrência de excesso hídrico para a cultura da soja em planossolos da região central do Rio Grande do Sul / Probability of water excess occurrence in soybean crop at planosols in the central region of Rio Grande do Sul. Bortoluzzi, Mateus Possebon, 23 February 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The expansion of the soybean production area in Planosols is rather limited by the high frequency of occurrence of excess water, which reduces the availability of oxygen in the root zone, photosynthesis and productivity, depending on its duration and on the developmental phase of the plants in which it occurs. The aim of this study was to identify sowing dates with a smaller risk of excess water for the subperiods and for the crop cycle, taking into account three relative maturity groups of soybean cultivars and the water storage capacity of Planosols in the central region of Rio Grande do Sul State. The simulation of soybean development and the calculation of the crop's daily sequential water balance were performed for different sowing dates in each year from August 1968 to July 2012. Thus, the change in soil water storage and the water surpluses in the different soybean developmental phases were quantified for each sowing date. Data on days with excess water were submitted to analysis of variance and the Scott-Knott test at 5% probability, with sowing dates, soils and their interaction as sources of variation. These data were also submitted to probability distribution analysis, using the chi-square and Kolmogorov-Smirnov tests to verify the probability density function that best fit the data distribution. The greatest numbers of fits for the development cycle and for the subperiods were obtained with the Gamma and Weibull functions, respectively. Sowings in October carry the highest risk of excess water during the crop cycle. The sowing-emergence subperiod proves the most limiting for defining the sowing date. Due to the lower risk of excess water in this subperiod, sowings carried out after November 1st are the most favorable for soybean in Planosols.
/ A expansão da área de produção de soja em Planossolos é bastante limitada pela elevada frequência de ocorrência de excesso hídrico, ocasionando redução na disponibilidade de oxigênio na zona radicular, redução da fotossíntese, assim como da produtividade, dependendo da duração do excesso e do subperíodo de desenvolvimento das plantas em que ocorre. O objetivo deste trabalho foi identificar datas de semeadura com menor risco de ocorrência de excesso hídrico para os subperíodos e ciclo da cultura, considerando três grupos de maturidade relativa de cultivares de soja e a capacidade de armazenamento de água dos Planossolos da região central do Rio Grande do Sul. A simulação do desenvolvimento da soja e o cálculo do balanço hídrico sequencial diário da cultura foram realizados em diferentes datas de semeadura de cada ano do período de agosto de 1968 a julho de 2012. Assim, a variação do armazenamento hídrico no solo e a ocorrência de excedentes hídricos nos diferentes subperíodos de desenvolvimento da soja foram quantificadas para cada data de semeadura. Os dados de dias de excesso hídrico foram submetidos à análise de variância e teste de Scott-Knott, a 5% de probabilidade de erro, sendo que as fontes de variação constaram das datas de semeadura, os solos e a sua interação. Os dados também foram submetidos à análise de distribuição de probabilidades, utilizando-se os testes qui-quadrado e Kolmogorov-Smirnov para verificar a função densidade probabilidade que melhor se ajustou à distribuição dos dados. O maior número de ajustes para o ciclo de desenvolvimento e para os subperíodos foram obtidos para as funções gama e weibull, respectivamente. As semeaduras realizadas no mês de outubro são as de maior risco de ocorrência de excesso hídrico ao longo do ciclo da cultura. O subperíodo semeadura-emergência mostra-se como o mais limitante para a definição da data de semeadura. 
Devido ao menor risco de ocorrência de excesso hídrico neste subperíodo as semeaduras realizadas após o dia primeiro de novembro são as mais favoráveis para a semeadura da soja em Planossolos.
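The distribution-fitting step above can be sketched as follows; a synthetic Weibull sample stands in for the 1968-2012 water-excess series, and scipy is assumed available:

```python
from scipy import stats

# Synthetic stand-in for the series of days with water excess per sowing date
excess_days = stats.weibull_min.rvs(c=1.5, scale=10.0, size=300, random_state=42)

# Maximum-likelihood fits of the two candidate functions (location fixed at zero)
c_hat, loc_w, scale_w = stats.weibull_min.fit(excess_days, floc=0)
a_hat, loc_g, scale_g = stats.gamma.fit(excess_days, floc=0)

# Kolmogorov-Smirnov goodness of fit for each candidate density
ks_weibull = stats.kstest(excess_days, 'weibull_min', args=(c_hat, loc_w, scale_w))
ks_gamma = stats.kstest(excess_days, 'gamma', args=(a_hat, loc_g, scale_g))
```

The candidate with the larger p-value (equivalently, the smaller K-S statistic) is retained as the best-fitting probability density function for that subperiod.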
|
83 |
Mean square solutions of random linear models and computation of their probability density function. Jornet Sanz, Marc, 05 March 2020 (has links)
[EN] This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distributions. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process, whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function.
In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general.
A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated by their important role in Mathematical Physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Frobenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Frobenius method together with Monte Carlo simulation is used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme.
The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, when the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps that involves the delayed exponential function is a probabilistic solution in the Lebesgue sense.
Finally, the last chapter is devoted to the linear advection partial differential equation, subject to stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case. / [ES] Esta tesis trata el análisis de ecuaciones diferenciales con parámetros de entrada aleatorios, en la forma de variables aleatorias o procesos estocásticos con cualquier tipo de distribución de probabilidad. En modelización, los coeficientes de entrada se fijan a partir de datos experimentales, los cuales suelen acarrear incertidumbre por los errores de medición. Además, el comportamiento del fenómeno físico bajo estudio no sigue patrones estrictamente deterministas. Es por tanto más realista trabajar con modelos matemáticos con aleatoriedad en su formulación. La solución, considerada en el sentido de caminos aleatorios o en el sentido de media cuadrática, es un proceso estocástico suave, cuya incertidumbre se tiene que cuantificar. La cuantificación de la incertidumbre es a menudo llevada a cabo calculando los principales estadísticos (esperanza y varianza) y, si es posible, la función de densidad de probabilidad.
En este trabajo, estudiamos modelos aleatorios lineales, basados en ecuaciones diferenciales ordinarias con y sin retardo, y en ecuaciones en derivadas parciales. La estructura lineal de los modelos nos permite buscar ciertas soluciones probabilísticas e incluso aproximar su función de densidad de probabilidad, lo cual es un objetivo complicado en general.
Una parte muy importante de la disertación se dedica a las ecuaciones diferenciales lineales de segundo orden aleatorias, donde los coeficientes de la ecuación son procesos estocásticos y las condiciones iniciales son variables aleatorias. El estudio de esta clase de ecuaciones diferenciales en el contexto aleatorio está motivado principalmente por su importante papel en la Física Matemática. Empezamos resolviendo la ecuación diferencial de Legendre aleatorizada en el sentido de media cuadrática, lo que permite la aproximación de la esperanza y la varianza de la solución estocástica. La metodología se extiende al caso general de ecuaciones diferenciales lineales de segundo orden aleatorias con coeficientes analíticos (expresables como series de potencias), mediante el conocido método de Fröbenius. Se lleva a cabo un estudio comparativo con métodos espectrales basados en expansiones de caos polinomial. Por otro lado, el método de Fröbenius junto con la simulación de Monte Carlo se utilizan para aproximar la función de densidad de probabilidad de la solución. Para acelerar el procedimiento de Monte Carlo, se proponen varios métodos de reducción de la varianza basados en reglas de cuadratura y estrategias multinivel. La última parte sobre ecuaciones diferenciales lineales de segundo orden aleatorias estudia un problema aleatorio de tipo Poisson de difusión-reacción, en el que la función de densidad de probabilidad es aproximada mediante un esquema numérico de diferencias finitas.
En la tesis también se tratan ecuaciones diferenciales ordinarias aleatorias con retardo discreto y constante. Estudiamos el caso lineal y autónomo, cuando el coeficiente de la componente no retardada i el parámetro del término retardado son ambos variables aleatorias mientras que la condición inicial es un proceso estocástico. Se demuestra que la solución determinista construida con el método de los pasos y que involucra la función exponencial retardada es una solución probabilística en el sentido de Lebesgue.
Finalmente, el último capítulo lo dedicamos a la ecuación en derivadas parciales lineal de advección, sujeta a velocidad y condición inicial estocásticas. Resolvemos la ecuación en el sentido de media cuadrática y damos nuevas expresiones para la función de densidad de probabilidad de la solución, incluso en el caso de velocidad no Gaussiana. / [CA] Aquesta tesi tracta l'anàlisi d'equacions diferencials amb paràmetres d'entrada aleatoris, en la forma de variables aleatòries o processos estocàstics amb qualsevol mena de distribució de probabilitat. En modelització, els coeficients d'entrada són fixats a partir de dades experimentals, les quals solen comportar incertesa pels errors de mesurament. A més a més, el comportament del fenomen físic sota estudi no segueix patrons estrictament deterministes. És per tant més realista treballar amb models matemàtics amb aleatorietat en la seua formulació. La solució, considerada en el sentit de camins aleatoris o en el sentit de mitjana quadràtica, és un procés estocàstic suau, la incertesa del qual s'ha de quantificar. La quantificació de la incertesa és sovint duta a terme calculant els principals estadístics (esperança i variància) i, si es pot, la funció de densitat de probabilitat.
En aquest treball, estudiem models aleatoris lineals, basats en equacions diferencials ordinàries amb retard i sense, i en equacions en derivades parcials. L'estructura lineal dels models ens fa possible cercar certes solucions probabilístiques i inclús aproximar la seua funció de densitat de probabilitat, el qual és un objectiu complicat en general.
Una part molt important de la dissertació es dedica a les equacions diferencials lineals de segon ordre aleatòries, on els coeficients de l'equació són processos estocàstics i les condicions inicials són variables aleatòries. L'estudi d'aquesta classe d'equacions diferencials en el context aleatori està motivat principalment pel seu important paper en Física Matemàtica. Comencem resolent l'equació diferencial de Legendre aleatoritzada en el sentit de mitjana quadràtica, el que permet l'aproximació de l'esperança i la variància de la solució estocàstica. La metodologia s'estén al cas general d'equacions diferencials lineals de segon ordre aleatòries amb coeficients analítics (expressables com a sèries de potències), per mitjà del conegut mètode de Fröbenius. Es duu a terme un estudi comparatiu amb mètodes espectrals basats en expansions de caos polinomial. Per altra banda, el mètode de Fröbenius juntament amb la simulació de Monte Carlo són emprats per a aproximar la funció de densitat de probabilitat de la solució. Per a accelerar el procediment de Monte Carlo, es proposen diversos mètodes de reducció de la variància basats en regles de quadratura i estratègies multinivell. L'última part sobre equacions diferencials lineals de segon ordre aleatòries estudia un problema aleatori de tipus Poisson de difusió-reacció, en què la funció de densitat de probabilitat és aproximada mitjançant un esquema numèric de diferències finites.
En la tesi també es tracten equacions diferencials ordinàries aleatòries amb retard discret i constant. Estudiem el cas lineal i autònom, quan el coeficient del component no retardat i el paràmetre del terme retardat són ambdós variables aleatòries mentre que la condició inicial és un procés estocàstic. Es prova que la solució determinista construïda amb el mètode dels passos i que involucra la funció exponencial retardada és una solució probabilística en el sentit de Lebesgue.
Finalment, el darrer capítol el dediquem a l'equació en derivades parcials lineal d'advecció, subjecta a velocitat i condició inicial estocàstiques. Resolem l'equació en el sentit de mitjana quadràtica i donem noves expressions per a la funció de densitat de probabilitat de la solució, inclús en el cas de velocitat no Gaussiana. / This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID),
Universitat Politècnica de València. / Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
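A toy version of the Monte Carlo route to the statistics and the density of the solution (a closed-form random oscillator stands in for the equations treated in the thesis; all parameter values are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Random second-order linear ODE x'' + omega^2 x = 0, x(0) = X0, x'(0) = 0,
# whose sample-path solution is x(t) = X0 cos(omega t)
n = 100_000
omega = rng.uniform(0.9, 1.1, n)   # random coefficient
x0 = rng.normal(1.0, 0.1, n)       # random initial condition
t = 1.0
samples = x0 * np.cos(omega * t)   # Monte Carlo samples of the solution at time t

mean_mc = samples.mean()           # approximates E[x(t)]
var_mc = samples.var()             # approximates Var[x(t)]
density = gaussian_kde(samples)    # kernel estimate of the PDF of x(t)
```

Since X0 and omega are independent here, E[x(1)] factors as E[X0] E[cos(omega)], which gives an exact value to check the Monte Carlo estimate against.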
|
84 |
Approche stochastique de l'analyse du « residual moveout » pour la quantification de l'incertitude dans l'imagerie sismique / A stochastic approach to uncertainty quantification in residual moveout analysis. Tamatoro, Johng-Ay, 09 April 2014 (has links)
Le principal objectif de l'imagerie sismique pétrolière telle qu'elle est réalisée de nos jours est de fournir une image représentative des quelques premiers kilomètres du sous-sol. Cette image permettra la localisation des structures géologiques formant les réservoirs où sont piégées les ressources en hydrocarbures. Pour pouvoir caractériser ces réservoirs et permettre la production des hydrocarbures, le géophysicien utilise la migration-profondeur qui est un outil d'imagerie sismique qui sert à convertir des données-temps enregistrées lors des campagnes d'acquisition sismique en des images-profondeur qui seront exploitées par l'ingénieur-réservoir avec l'aide de l'interprète sismique et du géologue. Lors de la migration-profondeur, les évènements sismiques (réflecteurs,…) sont replacés à leurs positions spatiales correctes. Une migration-profondeur pertinente requiert une évaluation précise du modèle de vitesse. La précision du modèle de vitesse utilisé pour une migration est jugée au travers de l'alignement horizontal des évènements présents sur les Common Image Gathers (CIG). Les évènements non horizontaux (Residual Move Out) présents sur les CIG sont dus au ratio du modèle de vitesse de migration par la vitesse effective du milieu. L'analyse du Residual Move Out (RMO) a pour but d'évaluer ce ratio pour juger de la pertinence du modèle de vitesse et permettre sa mise à jour. Les CIG qui servent de données pour l'analyse du RMO sont solutions de problèmes inverses mal posés et sont corrompus par du bruit. Une analyse de l'incertitude s'avère nécessaire pour améliorer l'évaluation des résultats obtenus. Le manque d'outils d'analyse de l'incertitude dans l'analyse du RMO en fait sa faiblesse. L'analyse et la quantification de l'incertitude pourraient aider à la prise de décisions qui auront des impacts socio-économiques importants.
Ce travail de thèse a pour but de contribuer à l'analyse et à la quantification de l'incertitude dans l'analyse des paramètres calculés pendant le traitement des données sismiques, et particulièrement dans l'analyse du RMO. Pour atteindre ces objectifs, plusieurs étapes ont été nécessaires, entre autres :
- l'appropriation des différents concepts géophysiques nécessaires à la compréhension du problème (organisation des données de sismique réflexion, outils mathématiques et méthodologiques utilisés) ;
- la présentation des méthodes et outils pour l'analyse classique du RMO ;
- l'interprétation statistique de l'analyse classique ;
- la proposition d'une approche stochastique.
Cette approche stochastique consiste en un modèle statistique hiérarchique dont les paramètres sont :
- la variance traduisant le niveau de bruit dans les données, estimée par une méthode basée sur les ondelettes ;
- une fonction qui traduit la cohérence des amplitudes le long des évènements, estimée par des méthodes de lissage de données ;
- le ratio, qui est considéré comme une variable aléatoire et non comme un paramètre fixe inconnu comme c'est le cas dans l'approche classique de l'analyse du RMO ; il est estimé par des méthodes de simulation de Monte Carlo par Chaîne de Markov.
L'approche proposée dans cette thèse permet d'obtenir autant de cartes de valeurs du paramètre qu'on le désire par le biais des quantiles. La méthodologie proposée est validée par l'application à des données synthétiques et à des données réelles. Une étude de sensibilité de l'estimation du paramètre a été réalisée. L'utilisation de l'incertitude de ce paramètre pour quantifier l'incertitude des positions spatiales des réflecteurs est présentée dans ce travail de thèse. / The main goal of seismic imaging for oil exploration and production as it is done nowadays is to provide an image of the first kilometers of the subsurface to allow the localization and an accurate estimation of hydrocarbon resources.
The reservoirs where these hydrocarbons are trapped are structures which have a more or less complex geology. To characterize these reservoirs and allow the production of hydrocarbons, the geophysicist uses depth migration, a seismic imaging tool which serves to convert time data recorded during seismic surveys into depth images which will be exploited by the reservoir engineer with the help of the seismic interpreter and the geologist. During depth migration, seismic events (reflectors, diffractions, faults …) are moved to their correct locations in space. Relevant depth migration requires an accurate knowledge of vertical and horizontal seismic velocity variations (the velocity model). Usually the so-called Common Image Gathers (CIGs) serve as a tool to verify the correctness of the velocity model. Often the CIGs are computed in the surface offset (distance between shot point and receiver) domain, and their flatness serves as the criterion of velocity model correctness. Residual moveout (RMO) of the events on CIGs, due to the ratio of the migration velocity model to the effective velocity model, indicates incorrectness of the velocity model and is used for velocity model updating. The post-stack images forming the CIGs which are used as data for the RMO analysis are the results of an inverse problem and are corrupted by noise. An uncertainty analysis is necessary to improve the evaluation of the results. Dealing with this uncertainty is a major issue, as it supports decisions that have important social and commercial implications. The goal of this thesis is to contribute to uncertainty analysis and its quantification in the analysis of various parameters computed during seismic processing, and particularly in RMO analysis. To reach these goals several stages were necessary.
We began by appropriating the various geophysical concepts necessary for the understanding of the organization of the seismic data, the various processing steps, and the mathematical and methodological tools which are used (chapters 2 and 3). In chapter 4, we present the different tools used for the conventional RMO analysis. In the fifth chapter, we give a statistical interpretation of the conventional RMO analysis and propose a stochastic approach to this analysis. This approach consists in a hierarchical statistical model whose parameters are: the variance, which expresses the noise level in the data; a functional parameter, which expresses the coherency of the amplitudes along events; and the ratio, which is assumed to be a random variable and not an unknown fixed parameter as in the conventional approach. The adjustment of the data to the model, done by using data smoothing methods combined with wavelets for the estimation of the noise variance, allows computing the posterior distribution of the ratio given the data by empirical Bayes methods. An estimate of the ratio is obtained by using Markov Chain Monte Carlo simulations of its posterior distribution. The various quantiles of these simulations provide different estimates of this parameter. The proposed methodology is validated in the sixth chapter by its application to synthetic data and real data. A sensitivity analysis of the estimation of this parameter was carried out. The use of the uncertainty of this parameter to quantify the uncertainty of the spatial positions of reflectors is presented in this thesis.
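The ratio-as-random-variable idea can be caricatured in a few lines: sample the posterior of a ratio parameter by random-walk Metropolis-Hastings and read off quantile-based estimates. The linear-Gaussian toy model below is an illustrative assumption, not the hierarchical model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an observed residual that depends linearly on the unknown ratio
true_gamma, sigma = 1.05, 0.1
x = np.linspace(0.1, 1.0, 50)
y = true_gamma * x + rng.normal(0.0, sigma, size=x.size)

def log_post(g):
    # Gaussian likelihood plus a weak N(1, 0.5^2) prior on the ratio
    return (-0.5 * np.sum((y - g * x) ** 2) / sigma**2
            - 0.5 * (g - 1.0) ** 2 / 0.25)

# Random-walk Metropolis-Hastings sampling of the posterior of the ratio
chain, g = [], 1.0
lp = log_post(g)
for _ in range(20000):
    prop = g + 0.02 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        g, lp = prop, lp_prop
    chain.append(g)

# Quantile-based estimates of the ratio, discarding a burn-in period
q05, q50, q95 = np.quantile(chain[5000:], [0.05, 0.5, 0.95])
```

Each quantile of the posterior yields its own estimate of the parameter, which is how the approach produces as many maps of parameter values as desired.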
|
85 |
Empirical evaluation of a Markovian model in a limit order market. Trönnberg, Filip, January 2012 (has links)
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrival of limit, market and cancellation orders is described in terms of a Markovian queuing system with exponentially distributed occurrences. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, price volatility and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution poorly fits the occurrences of order book events, and further show that little resemblance exists between the analytical formulas of this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
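One of the conditionally computable quantities, the probability of an upward price move given the queue sizes, can be sketched as a race between two Markovian queues. This is a deliberate simplification of the evaluated model; pure depletion at constant exponential rates is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_up(n_ask, n_bid, mu_ask=1.0, mu_bid=1.0, n_sim=200_000):
    """Monte Carlo estimate of P(ask queue empties before bid queue).

    Each queue is drained by a Poisson stream of market/cancellation
    orders, so the time to deplete a queue holding n orders is
    Gamma(n, 1/mu) distributed; the price moves up when the ask side
    is exhausted first."""
    t_ask = rng.gamma(n_ask, 1.0 / mu_ask, size=n_sim)
    t_bid = rng.gamma(n_bid, 1.0 / mu_bid, size=n_sim)
    return np.mean(t_ask < t_bid)
```

By symmetry, equal queues give probability 1/2, while a much shorter ask queue makes an upward move almost certain.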
|
86 |
台灣選舉事件與台指選擇權的資訊效率 / Taiwan election events and the information efficiency of TAIEX options. 李明珏, Li, Ming-Chueh, Unknown Date (has links)
台灣特殊的兩黨對立政治環境及幾乎每年都會有的固定選舉,使得政治的不確定性深深的影響著國內的投資環境及投資人心態。本研究便是要探討,2002/1/1~2006/1/16 研究期間台灣的投資人在選舉前後的投資行為,是否真如大家所預期的,會受到台灣選舉事件的影響。
本研究首先利用適當的機率密度函數模型及選擇權市場資訊來導出隱含的風險中立密度值。再利用這些風險中立密度值,求出各個選舉事件相對應的機率分配圖形,並透過其機率分配圖形及波動率指數等統計值於投票日前後的變化來觀察某一選舉事件前後投資者的反應。
研究結果發現：1. 選舉事件的發生確實會影響投資者的心理，且投資者會透過選擇權市場有效率的反應預期的未來股價指數分佈情況。2. 越大型、越具爭議且全國性的選舉結果，其選舉期間機率分配圖形及波動率指數具有較高的波動性。3. 一般而言，選舉過後市場不確定因素降低，將使投資者對於股市的預期較為一致和樂觀。而若這個選舉結果使投資者感到意外，因而增加了市場的不確定性，則選後機率分配圖形及波動率指數的改變反而會更為明顯。4. 在此研究下對數常態混合法比傳統的 Black-Scholes 方法產生較低的誤差值，因此就實證的分析上能提供更好的配適。 / This research examines the behavior of investors during election periods from January 1st 2002 to January 16th 2006 in Taiwan. The research includes a few steps. First, we adopt a suitable probability density function model together with stock index options data to construct the implied risk-neutral distribution. Then, by tracking changes in the shape of the risk-neutral implied distribution, the volatility indexes, and the statistics of the implied distribution, we observe investors' responses around each specific election event.
According to the empirical results, we found that: 1. An election event influences investors' behavior, and investors tend to reflect their expectations of the future stock index in the options market in an efficient way. 2. A large-scale, more disputed, nationwide election causes higher fluctuation in both the implied distribution and the volatility index during the election period. 3. In general, market uncertainty tends to fall after an election, which makes investors' expectations of the stock market relatively unanimous and optimistic. However, if the election result surprises investors, their uncertainty about the market increases, and the post-election changes in the implied distribution and the volatility index become even more pronounced. 4. The in-sample performance of the lognormal mixtures method employed in the research is considerably better than that of the traditional Black-Scholes model, yielding a lower root mean squared error.
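The lognormal mixtures method mentioned in finding 4 models the implied risk-neutral density as a weighted sum of lognormal components; a minimal sketch with illustrative parameters, roughly on the scale of a stock index level:

```python
import numpy as np

def lognormal_pdf(s, mu, sig):
    """Density of a lognormal with log-mean mu and log-std sig."""
    return np.exp(-(np.log(s) - mu) ** 2 / (2 * sig ** 2)) / (s * sig * np.sqrt(2 * np.pi))

def mixture_pdf(s, w, mu1, sig1, mu2, sig2):
    """Two-component lognormal mixture used as an implied risk-neutral density."""
    return w * lognormal_pdf(s, mu1, sig1) + (1 - w) * lognormal_pdf(s, mu2, sig2)

# Sanity checks on a grid: total probability mass and the implied mean level
s = np.linspace(1.0, 20000.0, 400_000)
ds = s[1] - s[0]
f = mixture_pdf(s, 0.6, np.log(6500.0), 0.08, np.log(6200.0), 0.15)
total_mass = f.sum() * ds
implied_mean = (s * f).sum() * ds
```

In practice, the mixture weights and the component parameters are calibrated so that option prices computed under this density match observed market quotes.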
|
87 |
LES/PDF approach for turbulent reacting flows. Donde, Pratik Prakash, 15 February 2013 (has links)
The probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of turbulent reacting flows. In this approach, the joint-PDF of all reacting scalars is estimated by solving a PDF transport equation, thus providing detailed information about small-scale correlations between these quantities. The objective of this work is to further develop the LES/PDF approach for studying flame stabilization in supersonic combustors, and for soot modeling in turbulent flames.
Supersonic combustors are characterized by strong shock-turbulence interactions which preclude the application of conventional Lagrangian stochastic methods for solving the PDF transport equation. A viable alternative is provided by quadrature based methods which are deterministic and Eulerian. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a new approach called semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.
For soot modeling in turbulent flows, an LES/PDF approach is integrated with detailed models for soot formation and growth. The PDF approach directly evolves the joint statistics of the gas-phase scalars and a set of moments of the soot number density function. This LES/PDF approach is then used to simulate a turbulent natural gas flame. A Lagrangian method formulated in cylindrical coordinates solves the high dimensional PDF transport equation and is coupled to an Eulerian LES solver. The LES/PDF simulations show that soot formation is highly intermittent and is always restricted to the fuel-rich region of the flow. The PDF of soot moments has a wide spread leading to a large subfilter variance. Further, the conditional statistics of soot moments conditioned on mixture fraction and reaction progress variable show strong correlation between the gas phase composition and soot moments.
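The quadrature idea underlying DQMOM-type methods can be shown in isolation: represent a PDF by a few weighted delta functions whose weights and abscissas reproduce its lower-order moments. Below is a closed-form two-node version, offered as an illustration, not as the SeQMOM scheme developed in the thesis:

```python
import math

def two_node_qmom(m0, m1, m2, m3):
    """Recover a two-node quadrature (weight, abscissa) pairs that
    exactly match the first four moments m0..m3 of a distribution."""
    mu = m1 / m0
    c2 = m2 / m0 - mu ** 2                          # central 2nd moment
    c3 = m3 / m0 - 3 * mu * (m2 / m0) + 2 * mu ** 3  # central 3rd moment
    half = 0.5 * c3 / c2
    d = math.sqrt(half ** 2 + c2)
    x1, x2 = mu + half - d, mu + half + d           # quadrature abscissas
    w1 = m0 * (x2 - mu) / (x2 - x1)                 # weights from m0, m1
    w2 = m0 - w1
    return (w1, x1), (w2, x2)
```

For the moments 1, 1, 2, 6 of the unit exponential distribution, this recovers the two-point Gauss-Laguerre rule with abscissas 2 ∓ √2, and the rule reproduces the third moment exactly.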
|
88 |
Estimation du taux d'erreurs binaires pour n'importe quel système de communication numérique / Bit error rate estimation for any digital communication system. DONG, Jia, 18 December 2013 (has links) (PDF)
This thesis is related to Bit Error Rate (BER) estimation for any digital communication system. In many communication system designs, the BER is a Key Performance Indicator (KPI). The popular Monte-Carlo (MC) simulation technique is well suited to any system, but at the expense of long simulation times when dealing with very low error rates. In this thesis, we propose to estimate the BER by using the Probability Density Function (PDF) estimation of the soft observations of the received bits. First, we studied a non-parametric PDF estimation technique named the Kernel method. Simulation results in the context of several digital communication systems are presented. Compared with the conventional MC method, the proposed Kernel-based estimator provides good precision even for high SNR with a very limited number of data samples. Second, the Gaussian Mixture Model (GMM), which is a semi-parametric PDF estimation technique, is used to estimate the BER. Compared with the Kernel-based estimator, the GMM method provides better performance in the sense of minimum variance of the estimator. Finally, we investigated blind estimation of the BER, that is, estimation when the sent data are unknown. We denote this case as unsupervised BER estimation. The Stochastic Expectation-Maximization (SEM) algorithm combined with the Kernel or GMM PDF estimation methods is used to solve this issue. By analyzing the simulation results, we show that the obtained BER estimate can be very close to the real value. This is quite promising since it could enable real-time BER estimation on the receiver side without decreasing the user bit rate with pilot symbols, for example.
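A minimal sketch of the Kernel route for BPSK over AWGN, used here as a stand-in for "any digital communication system" with illustrative SNR and sample size: estimate the PDF of the soft observations for one transmitted symbol and integrate the mass on the wrong side of the decision threshold.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.special import erfc

rng = np.random.default_rng(7)
ebn0_db = 6.0
ebn0 = 10 ** (ebn0_db / 10)
sigma = np.sqrt(1.0 / (2.0 * ebn0))       # noise std for unit-energy BPSK

n = 5000                                   # far fewer samples than plain MC needs at low BER
bits = rng.integers(0, 2, n)
soft = (2 * bits - 1) + sigma * rng.standard_normal(n)   # soft received values

# Kernel density of soft values given bit = 1; BER = mass below threshold 0
kde = gaussian_kde(soft[bits == 1])
ber_kde = kde.integrate_box_1d(-np.inf, 0.0)
ber_theory = 0.5 * erfc(np.sqrt(ebn0))    # exact BPSK/AWGN reference
```

Unlike error counting, the kernel estimate extrapolates into the tail of the density, which is what makes it usable when only a limited number of samples is available.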
|
89 |
Modeling Simplified Reaction Mechanisms using Continuous Thermodynamics for Hydrocarbon FuelsFox, Clayton D.L. 25 April 2018 (has links)
Commercial fuels are mixtures with large numbers of components. Continuous thermodynamics is a technique for modelling fuel mixtures using a probability density function rather than dealing with each discrete component. The mean and standard deviation of the distribution are then used to model the chemical reactions of the mixture. This thesis develops the theory needed to apply continuous thermodynamics to the oxidation reactions of hydrocarbon fuels. The theory is applied to three simplified models of hydrocarbon oxidation: a global one-step reaction, a two-step reaction with CO as the intermediate product, and the four-step reaction of Müller et al. (1992), which contains a high- and a low-temperature branch. These are all greatly simplified models of the complex reaction kinetics of hydrocarbons, and in this thesis they are applied specifically to n-paraffin hydrocarbons in the range from n-heptane to n-hexadecane. The model is tested numerically on a simple constant-pressure homogeneous ignition problem in Cantera and compared to simplified and detailed mechanisms for n-heptane. The continuous thermodynamics models predict not only ignition delay times and the development of temperature and species concentrations with time, but also changes in the mixture composition as the reaction proceeds, as represented by the mean and standard deviation of the distribution function. Continuous thermodynamics is therefore shown to be a useful tool for reactions of multicomponent mixtures, and an alternative to the "surrogate fuel" approach often used at present.
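A minimal sketch of the continuous-thermodynamics idea, with illustrative numbers rather than values from the thesis: the fuel composition is represented by a gamma distribution over molar mass, parameterized by its mean and standard deviation, and any mixture property that varies smoothly with molar mass becomes an integral against that distribution.

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

# Assumed distribution parameters (illustrative, not from the thesis):
# molar mass I in g/mol, gamma distribution with a shifted origin.
mean, std, origin = 150.0, 25.0, 86.0
shape = ((mean - origin) / std) ** 2     # gamma shape from mean/std
scale = std ** 2 / (mean - origin)       # gamma scale from mean/std
dist = gamma(a=shape, scale=scale, loc=origin)

# A mixture property given by a smooth correlation h(I) = a + b*I
# (coefficients are illustrative) is an integral against the PDF.
a, b = -2e3, 50.0
h_mix, _ = quad(lambda I: (a + b * I) * dist.pdf(I), origin, np.inf)

# For a linear correlation this reduces to a + b*mean; compare:
print(h_mix, a + b * mean)
```

For a property linear in molar mass the integral collapses to the correlation evaluated at the distribution mean, which is why tracking only the mean and standard deviation can suffice for simple closures.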
|
90 |
Towards an end-to-end multiband OFDM system analysisSaleem, Rashid January 2012 (has links)
Ultra Wideband (UWB) communication has recently drawn considerable attention from academia and industry, mainly owing to the ultra-high speeds and cognitive features it could offer. The applicability of UWB in numerous areas, including but not limited to Wireless Personal Area Networks (WPANs), Body Area Networks (BANs), radar and medical imaging, has opened several avenues of research and development. However, there is still disagreement over the standardization of UWB. The two contending radio technologies for UWB are Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) and Direct Sequence Ultra Wideband (DS-UWB). As nearly all of the reported research on UWB has been about a very narrow or specific part of the communication system, this thesis looks at the end-to-end performance of an MB-OFDM approach. The overall aim of this project has been to first focus individually on three aspects of an MB-OFDM system, namely interference, antenna and propagation aspects, and then to present a holistic, end-to-end system analysis. In the first phase of the project the author investigated the performance of the MB-OFDM system under his proposed generic, i.e. technology non-specific, interference. Avoiding the conventional Gaussian approximation, the author employed an advanced stochastic method. Two approaches are presented in this phase of the project. The first is an indirect approach, which uses the Moment Generating Functions (MGFs) of the Signal-to-Interference-plus-Noise Ratio (SINR) and the Probability Density Function (pdf) of the SINR to calculate the average probabilities of error of an MB-OFDM system under the influence of the proposed generic interference. This approach assumed a specific two-dimensional Poisson spatial/geometric placement of interferers around the victim MB-OFDM receiver. The second is a direct approach and extends the first by employing a wider class of generic interference.
In the second phase of the work the author designed, simulated, prototyped and tested novel compact planar monopole antennas for the UWB application. These designs employ low-loss Rogers duroid substrates and are fed by Coplanar Waveguides. The antennas have a proposed transition region between the feed line and the main radiating element, formed by a special step-generating function set called the "Inverse Parabolic Step Sequence" (IPSS). These IPSS-based antennas were simulated, prototyped and then tested in the anechoic chamber. An empirical approach, aimed at further miniaturizing IPSS-based antennas, was also derived in this phase of the project and applied to produce the design of a further miniaturized antenna. Moreover, an electrical miniaturization limit was established for the IPSS-based antennas. The third phase of the project investigated the effect of indoor furnishing on the distribution of the elevation Angle-of-Arrival (AOA) of the rays at the receiver. Previously, constant distributions for the AOA of the rays in the elevation direction had been reported. This phase of the research proposes that the AOA distribution is not fixed: the author establishes that indoor elevation AOA distributions depend on the discrete levels of furnishing. A joint time-angle-furnishing channel model is presented in this research phase. In addition, this phase of the thesis proposes two vectorial, i.e. any-direction, AOA distributions for UWB indoor environments. Finally, the last phase of the thesis is presented. As stated earlier, the overall aim of the project has been to look first at three individual aspects of an MB-OFDM system and then at the holistic system; this final phase therefore presents an end-to-end MB-OFDM system analysis.
The interference analysis of the first phase of the project is revisited to re-calculate the probability of bit error with realistic, measured path-loss exponents reported in the existing literature. In this method, Gaussian Quadrature Rule based approximations of the average probability of bit error are computed. Last but not least, an end-to-end, comprehensive system equation/impulse response is presented. The proposed system equation covers more aspects of an indoor UWB system than reported in the existing literature.
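A hedged sketch of the Gaussian Quadrature Rule idea (not the thesis's exact computation): when the SNR in dB is Gaussian-distributed, e.g. under lognormal shadowing, the average probability of bit error reduces to a Gauss-Hermite sum. The shadowing parameters (`mu_db`, `sigma_db`) below are illustrative assumptions.

```python
import numpy as np
from scipy.special import erfc

# Assumed shadowing model: SNR in dB ~ Normal(mu_db, sigma_db^2).
mu_db, sigma_db = 6.0, 4.0

# Gauss-Hermite nodes and weights for the Gaussian average.
nodes, weights = np.polynomial.hermite.hermgauss(32)

# Substituting snr_db = mu_db + sqrt(2)*sigma_db*x turns the Gaussian
# average into (1/sqrt(pi)) * sum_i w_i * f(x_i). Conditional BPSK error
# probability: Q(sqrt(2*snr)) = 0.5 * erfc(sqrt(snr)).
snr = 10.0 ** ((mu_db + np.sqrt(2.0) * sigma_db * nodes) / 10.0)
ber_avg = np.sum(weights * 0.5 * erfc(np.sqrt(snr))) / np.sqrt(np.pi)
print(ber_avg)
```

With a few dozen nodes this replaces a long Monte-Carlo average by a short deterministic sum, which is the appeal of quadrature-based average-BER approximations.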
|