  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Modeling of vibrational properties of nanostructured carbon films on a metal substrate : master's thesis

Бокизода, Д. А., Boqizoda, D. A. January 2017 (has links)
The object of investigation is nanosized films of two-dimensionally ordered linear-chain carbon (LCC) on a metal substrate, with and without impurities. The aim of the work is a theoretical study of the structural, mechanical and vibrational properties of such films. Research methods: density functional theory (DFT), including a review of experimental and theoretical work on applying DFT to the calculation of structural, elastic and vibrational properties; the ABINIT and Quantum ESPRESSO packages in the DFT approximation; graphical presentation of results. Results: the Raman spectrum of LCC films was calculated; the experimental and calculated Raman spectra were matched for the linear structural model of a carbyne crystal; and the possibility of characterizing linear-chain structures by Raman spectroscopy was analyzed.
122

Mean square solutions of random linear models and computation of their probability density function

Jornet Sanz, Marc 05 March 2020 (has links)
This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distribution. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated by their important role in mathematical physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Frobenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. The Frobenius method together with Monte Carlo simulation is also used to approximate the probability density function of the solution, and several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, in which the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps, which involves the delayed exponential function, is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to a stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case. / This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València. / Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
123

Approche stochastique de l'analyse du « residual moveout » pour la quantification de l'incertitude dans l'imagerie sismique / A stochastic approach to uncertainty quantification in residual moveout analysis

Tamatoro, Johng-Ay 09 April 2014 (has links)
The main goal of seismic imaging for oil exploration and production, as it is done nowadays, is to provide an image of the first few kilometers of the subsurface that allows the localization and an accurate estimation of hydrocarbon resources. The reservoirs where these hydrocarbons are trapped are structures of more or less complex geology. To characterize these reservoirs and allow the production of hydrocarbons, the geophysicist uses depth migration, a seismic imaging tool that converts time data recorded during seismic surveys into depth images, which are then exploited by the reservoir engineer with the help of the seismic interpreter and the geologist. During depth migration, seismic events (reflectors, diffractions, faults, ...) are moved to their correct locations in space. Relevant depth migration requires an accurate knowledge of vertical and horizontal seismic velocity variations (the velocity model). Usually the so-called Common Image Gathers (CIGs) serve as a tool to verify the correctness of the velocity model. Often the CIGs are computed in the surface-offset (distance between shot point and receiver) domain, and their flatness serves as a criterion of velocity model correctness. Residual moveout (RMO) of the events on CIGs, caused by the ratio of the migration velocity model to the effective velocity of the medium, indicates incorrectness of the velocity model and is used for velocity model updating. The post-stack images forming the CIGs, which serve as data for the RMO analysis, are solutions of an ill-posed inverse problem and are corrupted by noise. An uncertainty analysis is therefore necessary to improve the evaluation of the results; the lack of uncertainty analysis tools is the main weakness of RMO analysis. Quantifying this uncertainty can support decisions that have important social and economic implications. The goal of this thesis is to contribute to the analysis and quantification of uncertainty in the parameters computed during seismic processing, and particularly in RMO analysis. Several stages were necessary to reach this goal. We began by reviewing the geophysical concepts needed to understand the problem: the organization of seismic reflection data, the various processing steps, and the mathematical and methodological tools used (chapters 2 and 3). In chapter 4, we present the methods and tools used for conventional RMO analysis. In chapter 5, we give a statistical interpretation of the conventional RMO analysis and propose a stochastic approach. This approach consists of a hierarchical statistical model whose parameters are: the variance, expressing the noise level in the data, estimated by a wavelet-based method; a functional parameter expressing the coherency of the amplitudes along events, estimated by data-smoothing methods; and the ratio, which is treated as a random variable rather than an unknown fixed parameter as in the conventional approach. The adjustment of the data to the model, using smoothing methods combined with wavelet-based estimation, allows the posterior distribution of the ratio given the data to be computed by empirical Bayes methods. An estimate of the ratio is obtained from Markov Chain Monte Carlo simulations of its posterior distribution, and the quantiles of these simulations provide as many maps of parameter values as desired. The proposed methodology is validated in chapter 6 by application to synthetic and real data. A sensitivity analysis of the parameter estimation was performed, and the use of the parameter's uncertainty to quantify the uncertainty of the spatial positions of reflectors is also presented in this thesis.
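The MCMC estimation idea described in this abstract can be illustrated with a toy random-walk Metropolis sampler for a single ratio-like parameter observed through noisy linear data. The model, noise level, proposal scale and quantiles below are invented for illustration and are far simpler than the thesis's hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = r_true * x + Gaussian noise (sigma known here for simplicity).
r_true, sigma = 1.2, 0.3
x = np.linspace(1.0, 5.0, 40)
y = r_true * x + rng.normal(0.0, sigma, size=x.size)

def log_post(r):
    # Flat prior on r, Gaussian likelihood -> log-posterior up to a constant.
    return -0.5 * np.sum((y - r * x) ** 2) / sigma**2

# Random-walk Metropolis sampling of the posterior of r.
r_cur, chain = 1.0, []
lp_cur = log_post(r_cur)
for _ in range(5000):
    r_prop = r_cur + rng.normal(0.0, 0.05)
    lp_prop = log_post(r_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:   # accept/reject step
        r_cur, lp_cur = r_prop, lp_prop
    chain.append(r_cur)

post = np.array(chain[1000:])                      # discard burn-in
q05, q50, q95 = np.quantile(post, [0.05, 0.5, 0.95])
```

As in the thesis, the posterior quantiles provide as many point estimates (and uncertainty bands) of the parameter as desired.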
124

Empirical evaluation of a Markovian model in a limit order market

Trönnberg, Filip January 2012 (has links)
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrivals of limit, market and cancellation orders are described in terms of a Markovian queuing system with exponentially distributed occurrences. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, the price volatility and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution fits the occurrences of order book events poorly, and further show that little resemblance exists between the analytical formulas in this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
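A minimal numerical illustration of the distributional comparison described above, using synthetic data rather than the order book events from the thesis: inter-arrival times drawn from a non-exponential Weibull are fit by an exponential maximum-likelihood estimate, whose log-likelihood falls short of that of the generating Weibull.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic inter-arrival times from a Weibull with shape k != 1
# (k = 1 would reduce to the exponential case); parameters are illustrative.
k, lam = 0.6, 1.0
t = lam * rng.weibull(k, size=5000)

# Exponential MLE: rate = 1 / sample mean; corresponding log-likelihood.
rate = 1.0 / t.mean()
ll_expon = np.sum(np.log(rate) - rate * t)

# Log-likelihood of the same data under the generating Weibull(k, lam).
ll_weib = np.sum(np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k)
```

A formal analysis would compare fitted (not true) Weibull or log-normal parameters via likelihood-ratio or goodness-of-fit tests, but the gap in log-likelihood already shows how badly a misspecified exponential can fit clustered event times.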
125

Mechanical optimization of vascular bypass grafts

Felden, Luc 14 April 2005 (has links)
Synthetic vascular grafts are used to bypass diseased arteries. The long-term failure of synthetic grafts is primarily due to intimal hyperplasia at the anastomotic sites. The accelerated intimal hyperplasia may stem from a compliance mismatch between the host artery and the graft, since commercially available synthetic conduits are much stiffer than an artery. The objective of this thesis is to design a method for fabricating a vascular graft that mechanically matches the patient's native artery over the expected physiologic range of pressures. The creation of a mechanically optimized graft will hopefully lead to an improvement in patency rates. The mechanical equivalency between the graft and the host artery is defined locally by several criteria, including the diameter upon inflation, the elasticity at mean pressure, and the axial force. A single-parameter mathematical model for a thin-walled tube is used to describe the final mechanical behavior of a synthetic graft. For the general problem, the objective would be to fabricate a mechanics-matching vascular graft for each host artery. Typically, fabrication parameters are set first and the properties of the fabricated graft are then measured. However, by modeling the entire fabrication process and the final mechanical properties, it is possible to invert the situation and let the desired output mechanical values define the fabrication parameters; the resulting fabricated graft will then be mechanically matching. As a proof of concept, several prototype synthetic grafts, characterized by a single invariant, were manufactured to match a canine artery. The resulting grafts matched the diameter upon inflation, the elasticity at mean pressure, and the axial force of the native canine artery to within 6%. An alternative to making an individual graft for each artery is also presented: a surgeon may choose the best graft from a set of pre-manufactured grafts, using a computer algorithm that finds the best fit over two parameters in a neighborhood. The design optimization problem was solved for both canine carotid and human coronary arteries. In conclusion, the overall process of design, fabrication and selection of a mechanics-matching synthetic vascular graft is shown to be reliable and robust.
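The graft-selection alternative can be sketched as a nearest-neighbour search over a two-parameter catalogue. The catalogue values, units and normalization below are hypothetical, not the thesis's fitted data.

```python
import numpy as np

# Hypothetical catalogue: each row is (inflated diameter [mm],
# elasticity at mean pressure [kPa]) for a pre-manufactured graft.
grafts = np.array([
    [4.0,  80.0],
    [4.5,  95.0],
    [5.0, 110.0],
    [5.5, 130.0],
])
target = np.array([4.8, 100.0])   # measured host-artery values (made up)

# Normalize each parameter by the catalogue spread so that neither
# dominates, then pick the graft closest to the target in that space.
scale = grafts.max(axis=0) - grafts.min(axis=0)
d = np.linalg.norm((grafts - target) / scale, axis=1)
best = int(np.argmin(d))
```

A clinical version would weight the parameters by their influence on patency and restrict the search to a physiologically acceptable neighbourhood, as the thesis's two-parameter best-fit algorithm does.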
126

Taiwan election events and the informational efficiency of TAIEX options

李明珏, Li, Ming-Chueh Unknown Date (has links)
This research examines the behavior of investors in Taiwan around election events during the period from January 1, 2002 to January 16, 2006; Taiwan's polarized two-party political environment and nearly annual elections mean that political uncertainty deeply affects the domestic investment climate. The research proceeds in a few steps. First, we adopt a suitable probability density function model together with stock index options data to construct the implied risk-neutral distribution. Then, by tracking changes around the polling day in the shape of the risk-neutral implied distribution, the volatility indexes, and the statistics of the implied distribution, we observe investors' responses to a specific election event. According to the empirical results, we find that: 1. An election event does influence investors' behavior, and investors efficiently reflect their expectations of the future stock index in the options market. 2. A larger-scale, more disputed, nationwide election causes higher fluctuation in both the implied distribution and the volatility index during the election period. 3. In general, market uncertainty declines after an election, making investors' expectations of the stock market relatively more unanimous and optimistic; however, if the election result surprises investors, their uncertainty about the market increases, and the post-election changes in the implied distribution and the volatility index become even more pronounced. 4. The in-sample performance of the lognormal mixtures method employed in this research is considerably better than that of the traditional Black-Scholes model, as it yields a lower root mean squared error and thus a better fit in the empirical analysis.
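A minimal sketch of the lognormal-mixture construction used in this line of work: a two-component mixture as a candidate risk-neutral density, checked numerically to integrate to one and to have a plausible implied mean. The weights and parameters are illustrative, not values fitted to TAIEX options.

```python
import numpy as np

# Two-component lognormal mixture as a candidate risk-neutral density
# (weights and log-price parameters are invented for illustration).
w = np.array([0.6, 0.4])        # mixture weights, sum to 1
mu = np.array([4.60, 4.65])     # means of log price
s = np.array([0.10, 0.25])      # std devs of log price

def mixture_pdf(x):
    """Weighted sum of lognormal densities, evaluated on an array of prices."""
    x = np.asarray(x, dtype=float)[:, None]
    comp = w / (x * s * np.sqrt(2.0 * np.pi)) * np.exp(
        -((np.log(x) - mu) ** 2) / (2.0 * s**2))
    return comp.sum(axis=1)

# Riemann-sum checks of total mass and implied mean on a wide price grid.
grid = np.linspace(40.0, 250.0, 20_000)
dx = grid[1] - grid[0]
pdf = mixture_pdf(grid)
mass = pdf.sum() * dx
mean = (grid * pdf).sum() * dx
```

In the empirical procedure, the mixture parameters would be calibrated so that discounted expectations of option payoffs under this density match observed option prices, with RMSE against market quotes as the fit criterion.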
127

Analysis of Hyperelastic Materials with Mechanica - Theory and Application Examples

Jakel, Roland 03 June 2010 (has links) (PDF)
Part 1: Theoretic background information - Review of Hooke's law for linear elastic materials - The strain energy density of linear elastic materials - Hyperelastic material - Material laws for hyperelastic materials - About selecting the material model and performing tests - Implementation of hyperelastic material laws in Mechanica - Defining hyperelastic material parameters in Mechanica - Test set-ups and specimen shapes of the supported material tests - The uniaxial compression test - Stress and strain definitions in the Mechanica LDA analysis. Part 2: Application examples - A test specimen subjected to uniaxial loading - A volumetric compression test - A planar test - Influence of the material law. Appendix - PTC Simulation Services introduction - Dictionary Technical English-German
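As a small companion to the hyperelastic theory outlined above, here is a sketch of the incompressible neo-Hookean uniaxial response, one standard hyperelastic material law (the presentation itself covers several): the nominal stress is P = mu*(lambda - lambda^-2), and its slope at lambda = 1 recovers Hooke's law with E = 3*mu, the linear-elastic limit reviewed in Part 1.

```python
import numpy as np

mu = 1.0  # shear modulus, illustrative units

def nominal_stress(stretch):
    """Nominal (1st Piola-Kirchhoff) uniaxial stress of an incompressible
    neo-Hookean solid: P = mu * (lambda - lambda**-2)."""
    lam = np.asarray(stretch, dtype=float)
    return mu * (lam - lam**-2)

# Small-strain check: the tangent stiffness at lambda = 1 should equal
# E = 3 * mu, i.e. the neo-Hookean law degenerates to Hooke's law.
eps = 1e-6
slope = (nominal_stress(1 + eps) - nominal_stress(1 - eps)) / (2 * eps)
```

Fitting mu (and, for other laws, additional constants) to uniaxial, volumetric and planar test data is exactly the parameter-definition step the document describes for Mechanica.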
128

Analysis of Hyperelastic Materials with Mechanica - Theory and Application Examples / Analyse hyperelastischer Materialien mit Mechanica - Theorie und Anwendungsbeispiele

Jakel, Roland 03 December 2010 (has links) (PDF)
Part 1: Theoretic background information - Review of Hooke's law for linear elastic materials - The strain energy density of linear elastic materials - Hyperelastic material - Material laws for hyperelastic materials - About selecting the material model and performing tests - Implementation of hyperelastic material laws in Mechanica - Defining hyperelastic material parameters in Mechanica - Test set-ups and specimen shapes of the supported material tests - The uniaxial compression test - Stress and strain definitions in the Mechanica LDA analysis. Part 2: Application examples - A test specimen subjected to uniaxial loading - A volumetric compression test - A planar test - Influence of the material law. Appendix - PTC Simulation Services introduction - Dictionary Technical English-German
129

LES/PDF approach for turbulent reacting flows

Donde, Pratik Prakash 15 February 2013 (has links)
The probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of turbulent reacting flows. In this approach, the joint-PDF of all reacting scalars is estimated by solving a PDF transport equation, thus providing detailed information about small-scale correlations between these quantities. The objective of this work is to further develop the LES/PDF approach for studying flame stabilization in supersonic combustors, and for soot modeling in turbulent flames. Supersonic combustors are characterized by strong shock-turbulence interactions which preclude the application of conventional Lagrangian stochastic methods for solving the PDF transport equation. A viable alternative is provided by quadrature based methods which are deterministic and Eulerian. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a new approach called semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach. For soot modeling in turbulent flows, an LES/PDF approach is integrated with detailed models for soot formation and growth. 
The PDF approach directly evolves the joint statistics of the gas-phase scalars and a set of moments of the soot number density function. This LES/PDF approach is then used to simulate a turbulent natural gas flame. A Lagrangian method formulated in cylindrical coordinates solves the high-dimensional PDF transport equation and is coupled to an Eulerian LES solver. The LES/PDF simulations show that soot formation is highly intermittent and is always restricted to the fuel-rich region of the flow. The PDF of soot moments has a wide spread, leading to a large subfilter variance. Further, the statistics of soot moments conditioned on mixture fraction and reaction progress variable show a strong correlation between the gas-phase composition and the soot moments.
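The quadrature-method-of-moments idea underlying DQMOM and SeQMOM can be illustrated in its simplest two-node form (a generic textbook construction, not the thesis's semi-discrete scheme): from moments m0..m3 of a number density, the abscissas are the roots of the degree-2 orthogonal polynomial of the measure, and the weights follow from a small Vandermonde solve; the resulting quadrature then reproduces m2 and m3 exactly. The gamma-distributed sample standing in for a soot number density function is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(4)

# Samples standing in for a (normalized) number density function.
s = rng.gamma(2.0, 1.5, size=100_000)
m = np.array([np.mean(s**k) for k in range(4)])   # raw moments m0..m3

# Degree-2 orthogonal polynomial x^2 + b*x + c of the measure:
# orthogonality to 1 and x gives the Hankel system below.
A = np.array([[m[0], m[1]],
              [m[1], m[2]]])
c, b = np.linalg.solve(A, -m[2:4])

# Abscissas are its roots; weights match m0 and m1 (Vandermonde solve).
x_nodes = np.real(np.roots([1.0, b, c]))
V = np.vander(x_nodes, 2, increasing=True).T      # [[1, 1], [x1, x2]]
w = np.linalg.solve(V, m[:2])
```

Transporting (w, x_nodes) or the moments themselves, with source terms closed through this quadrature, is the step that DQMOM-type methods perform inside each LES cell.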
130

Estimation du taux d'erreurs binaires pour n'importe quel système de communication numérique / Bit error rate estimation for any digital communication system

DONG, Jia 18 December 2013 (has links) (PDF)
This thesis is related to Bit Error Rate (BER) estimation for any digital communication system. In many communication system designs, the BER is a Key Performance Indicator (KPI). The popular Monte-Carlo (MC) simulation technique is well suited to any system, but at the expense of long simulation times when dealing with very low error rates. In this thesis, we propose to estimate the BER by using probability density function (PDF) estimation of the soft observations of the received bits. First, we studied a non-parametric PDF estimation technique named the Kernel method. Simulation results in the context of several digital communication systems are presented. Compared with the conventional MC method, the proposed Kernel-based estimator provides good precision even at high SNR with a very limited number of data samples. Second, the Gaussian Mixture Model (GMM), which is a semi-parametric PDF estimation technique, is used to estimate the BER. Compared with the Kernel-based estimator, the GMM method provides better performance in the sense of the minimum variance of the estimator. Finally, we investigated blind estimation of the BER, that is, estimation when the sent data are unknown; we denote this case unsupervised BER estimation. The Stochastic Expectation-Maximization (SEM) algorithm combined with the Kernel or GMM PDF estimation methods is used to solve this issue. By analyzing the simulation results, we show that the obtained BER estimate can be very close to the real values. This is quite promising, since it could enable real-time BER estimation on the receiver side without decreasing the user bit rate with pilot symbols, for example.
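The kernel-based BER idea can be sketched as follows (a generic Gaussian-kernel construction with an invented SNR and Silverman's bandwidth rule, not the thesis's exact estimator): the BER estimate is the average of kernel CDFs evaluated at the decision threshold, compared against hard-decision Monte Carlo counting and the theoretical BPSK value.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Soft observations of BPSK bits, all sent as +1, over an AWGN channel.
snr_db = 6.0
sigma = 1.0 / sqrt(2.0 * 10 ** (snr_db / 10))
x = 1.0 + rng.normal(0.0, sigma, size=200_000)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Conventional Monte Carlo BER: count soft values sliced to the wrong bit.
ber_mc = np.mean(x < 0)

# Kernel (Gaussian) estimate of P(X < 0): average of the kernel CDFs at
# the decision threshold 0, using a subset of samples for speed.
xs = x[:20_000]
h = 1.06 * xs.std() * xs.size ** (-1 / 5)     # Silverman's rule of thumb
ber_kde = float(np.mean([phi((0.0 - xi) / h) for xi in xs]))

ber_theory = phi(-1.0 / sigma)                # Q(1/sigma) for BPSK
```

The point of the kernel estimator is visible in the sample budget: the smoothed estimate uses a tenth of the samples yet still lands at the right order of magnitude, which is the regime where plain error counting becomes prohibitively slow.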
