About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysis and Application about the Valuation Model of Dynamic Capital Structure

Huang, Jui-Ching 20 August 2001 (has links)
Modern capital structure theory attempts to determine the capital structure that maximizes shareholder wealth. In practice, finding an optimal capital structure is very difficult; nevertheless, a dynamic capital structure model can simulate one using a Monte Carlo approach. This study therefore starts from sales revenue and modifies the model of Goldstein, Ju and Leland (1998). First, the model is used to simulate the optimal capital structures of Taiwan's traditional and high-tech industries. Second, the impacts of industry characteristics, firm size, growth, profitability, operating risk and dividend policy on the optimal capital structure are analyzed, along with the effect of the integration of individual and corporate taxes on firm value, dividend policy and capital structure. The simulation results indicate that the leverage ratio of the traditional industry is higher than that of the high-tech industry. Firm size and growth are positively associated with the debt ratio, while profitability and dividend yield are negatively associated with it; operating risk and the debt ratio show a curvilinear relation. The traditional industry tends toward a static optimal capital structure strategy, whereas the high-tech industry tends toward an upward dynamic strategy. The higher the corporate tax rate, the higher the firm value; the shareholder's individual tax rate and the tax rate on retained earnings are negatively related to firm value, so the extent to which tax reform affects firm value depends on the shareholder's individual tax rate. According to the model, the debt ratio should decrease, and the empirical results for the high-tech industry match this prediction. However, the increase in the debt ratios of traditional-industry firms indicates that the industry structure has shifted toward higher equity costs, so firms turn to debt financing. Dividend payments increase after the integration of individual and corporate taxes, and the more dividends are paid out, the lower the firm value, which is consistent with the tax-effect hypothesis. Finally, this study derives a valuation model of dynamic capital structure; the simulation results are consistent with the research hypotheses and can guide managers in revising capital structure and setting dividend policy.
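The record gives no equations, but the core mechanism it describes (Monte Carlo simulation of a revenue-driven state variable, with firm value compared across candidate leverage ratios) can be sketched as follows. The dynamics, tax rate, default rule and bankruptcy-cost parameters below are illustrative assumptions, not the thesis's calibration of the Goldstein, Ju and Leland model.

```python
import numpy as np

def simulate_firm_value(leverage, n_paths=20_000, n_steps=120, dt=1/12,
                        s0=100.0, mu=0.04, sigma=0.25, tax=0.25, r=0.05,
                        bankruptcy_cost=0.3, seed=0):
    """Monte Carlo value of a levered firm whose cash-flow state follows a
    geometric Brownian motion (a stylized stand-in for the sales-revenue
    process; all parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    debt = leverage * s0
    z = rng.standard_normal((n_paths, n_steps))
    paths = s0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
    defaulted = (paths < debt).any(axis=1)       # hypothetical default rule
    value = (1 - tax) * paths[:, -1] * np.exp(-r * n_steps * dt) + tax * debt
    value[defaulted] -= bankruptcy_cost * debt   # deadweight loss on default
    return value.mean()

# Scan candidate leverage ratios and keep the value-maximizing one.
best_value, best_lev = max((simulate_firm_value(l), l)
                           for l in np.linspace(0.0, 0.8, 9))
print(f"optimal leverage ~ {best_lev:.1f} (simulated firm value {best_value:.2f})")
```

The trade-off driving the optimum is the one the abstract names: a higher leverage ratio increases the tax shield but also the chance of incurring bankruptcy costs.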
2

Computing Agent Competency in First Order Markov Processes

Cao, Xuan 06 December 2021 (has links)
Artificial agents are usually designed to achieve specific goals. An agent's competency can be defined as its ability to accomplish its goals under different conditions. This thesis restricts attention to a specific type of goal, namely reaching a desired state without exceeding a tolerance threshold of undesirable events in a first-order Markov process. For such goals, the state-dependent competency of an agent can be defined as the probability of reaching the desired state, without exceeding the threshold and within a time limit, given an initial state. The thesis further defines total competency as the set of state-dependent competency relationships over all possible initial states. The thesis uses a Monte Carlo approach to establish a baseline for estimating state-dependent competency. The Monte Carlo approach (a) samples trajectories from an agent behaving in the environment, and then (b) fits a nonlinear regression over the trajectory samples to estimate the competency curve. The thesis further presents a recurrence relation for total competency and an algorithm based on that relation whose worst-case computation time grows quadratically with the size of the state space. Experiments on simple maze-based Markov chains show that the Monte Carlo estimates agree with the results computed by the proposed algorithm. Lastly, the thesis explores a special case where multiple sequential atomic goals make up a complex goal. It models a set of sequential goals as a Bayesian network and presents an equation based on the chain rule for deriving the competency for the complex goal from the competencies for the atomic goals. Experiments on the canonical taxi problem with sequential goals show the correctness of the Bayesian network-based decomposition approach.
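A minimal sketch of the Monte Carlo baseline described above, estimating state-dependent competency as the fraction of sampled trajectories that reach the goal within the time limit without exceeding the tolerance of undesirable events, might look like this; the transition matrix and the choice of goal and undesirable states are toy assumptions, not the thesis's maze or taxi domains.

```python
import numpy as np

def estimate_competency(P, start, goal, bad_states, tolerance, horizon,
                        n_trajectories=10_000, seed=0):
    """Monte Carlo estimate of P(reach `goal` within `horizon` steps while
    visiting `bad_states` at most `tolerance` times), starting from `start`."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    successes = 0
    for _ in range(n_trajectories):
        state, bad = start, 0
        for _ in range(horizon):
            state = rng.choice(n, p=P[state])   # first-order Markov step
            bad += state in bad_states
            if bad > tolerance:                 # tolerance exceeded: failure
                break
            if state == goal:                   # goal reached in time: success
                successes += 1
                break
    return successes / n_trajectories

# Toy 4-state chain: state 3 is the (absorbing) goal, state 2 is undesirable.
P = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.2, 0.4, 0.2, 0.2],
              [0.3, 0.3, 0.3, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
print(estimate_competency(P, start=0, goal=3, bad_states={2},
                          tolerance=1, horizon=50))
```

Running this for every start state yields the total competency that the thesis's quadratic-time recurrence computes exactly.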
3

Analýza spolehlivosti systémů metodou Monte Carlo / Systems Reliability Analysis using Monte Carlo Approach

Kučírek, Vojtěch January 2018 (has links)
This Master's thesis focuses on the reliability analysis of technical systems. The first part describes the most commonly used reliability parameters and probability distributions of random variables; the reliability of a human operator is treated in a separate chapter. The next part covers different types of reliability diagrams and methods of reliability analysis, with reliability analysis using the Monte Carlo approach described in its own chapter. Several software tools that can be used for systems reliability analysis are also described. A PLC system with a human operator is then designed, and a Monte Carlo reliability analysis is carried out on it. The Monte Carlo results are compared with analytically calculated values and with values from reliability software.
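As an illustration of the comparison the thesis performs, the following sketch estimates the reliability of a toy series-parallel system by Monte Carlo and checks it against the analytic value. The system structure and failure rates are assumptions, not the designed PLC system.

```python
import numpy as np

def system_reliability(mission_time, rates, n_samples=100_000, seed=0):
    """Monte Carlo reliability of a toy series-parallel system.

    Components 0 and 1 are redundant (parallel); component 2 is in series
    with that pair.  Lifetimes are exponential with the given failure rates."""
    rng = np.random.default_rng(seed)
    t = rng.exponential(1.0 / np.asarray(rates), size=(n_samples, len(rates)))
    pair = np.maximum(t[:, 0], t[:, 1])   # parallel pair lives until the later failure
    system = np.minimum(pair, t[:, 2])    # the series element fails the system
    return (system > mission_time).mean()

rates = [1e-4, 1e-4, 2e-5]                # failures per hour
mc = system_reliability(8760.0, rates)    # one year of operation
# Analytic check: R = (1 - (1 - e^{-l1 t})(1 - e^{-l2 t})) * e^{-l3 t}
t, (l1, l2, l3) = 8760.0, rates
exact = (1 - (1 - np.exp(-l1*t)) * (1 - np.exp(-l2*t))) * np.exp(-l3*t)
print(f"Monte Carlo: {mc:.4f}, analytic: {exact:.4f}")
```

The Monte Carlo estimate converges on the closed-form value, which is exactly the kind of cross-check against analytic calculation that the thesis reports for its PLC design.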
4

Suppression of Singularity in Stochastic Fractional Burgers Equations with Multiplicative Noise

Masud, Sadia January 2024 (has links)
Inspired by studies on the regularity of solutions to the fractional Navier-Stokes system and the impact of noise on singularity formation in hydrodynamic models, we investigated these issues within the framework of the fractional 1D Burgers equation. Initially, our research concentrated on the deterministic scenario, where we conducted precise numerical computations to understand the dynamics in both subcritical and supercritical regimes. We utilized a pseudo-spectral approach with automated resolution refinement for discretization in space, combined with a hybrid Crank-Nicolson/Runge-Kutta method for time discretization. We estimated the blow-up time by analyzing the evolution of enstrophy (the H1 seminorm) and the width of the analyticity strip. Our findings in the deterministic case highlighted the interplay between dissipative and nonlinear components, leading to distinct dynamics and the formation of shocks and finite-time singularities. In the second part of our study, we explored the fractional Burgers equation under the influence of linear multiplicative noise. To tackle this problem, we employed the Milstein Monte Carlo approach to approximate stochastic effects. Our statistical analysis of stochastic solutions for various noise magnitudes showed that as noise amplitude increases, the distribution of blow-up times becomes more non-Gaussian. Specifically, higher noise levels extend the mean blow-up time and increase its variability, indicating a regularizing effect of multiplicative noise on the solution. Although the trends are rather weak, they are nevertheless consistent with the predictions of the theorem of [41]. However, there is no evidence for a complete elimination of blow-up, probably because the noise amplitudes considered were not sufficiently large. This highlights the crucial role of stochastic perturbations in influencing the behavior of singularities in such systems. / Thesis / Master of Science (MSc)
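The fractional Burgers solver itself is beyond a short example, but the stochastic time-stepping idea, the Milstein scheme for linear multiplicative noise du = f(u) dt + sigma*u dW, can be sketched on a scalar toy problem; the drift and all parameters below are placeholders, not the thesis's discretization.

```python
import numpy as np

def milstein_paths(f, sigma, u0, T, n_steps, n_paths, seed=0):
    """Milstein scheme for du = f(u) dt + sigma * u dW.  For linear
    multiplicative noise g(u) = sigma*u, the Milstein correction is
    0.5 * g * g' * (dW^2 - dt) = 0.5 * sigma**2 * u * (dW**2 - dt)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    u = np.full(n_paths, u0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        u = (u + f(u) * dt + sigma * u * dw
             + 0.5 * sigma**2 * u * (dw**2 - dt))
    return u

# Toy drift with a Burgers-like quadratic nonlinearity (illustrative only).
f = lambda u: -u * np.abs(u)
samples = milstein_paths(f, sigma=0.5, u0=1.0, T=1.0,
                         n_steps=1_000, n_paths=10_000)
print(f"mean: {samples.mean():.4f}, std: {samples.std():.4f}")
```

Running such an ensemble for several noise amplitudes and recording when each realization blows up is, in outline, how the blow-up-time statistics described above are gathered.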
5

Valuation Analysis of Variable Life Insurance with Minimum Guarantees and a Surrender Option

林威廷 Unknown Date (has links)
This study focuses on the pricing of variable life insurance with minimum guarantees. The policy is an endowment purchased with a single premium, and the insured is assumed to allocate the initial investment between two underlying assets: a stock index fund and a bond fund. Simulating the interest rate under a BGM model, we compute the single premium of the policy both without and with an embedded surrender option, and from these derive the guarantee value and the surrender value embedded in the policy. For the policy with a surrender option, the least-squares Monte Carlo method of Longstaff and Schwartz (2001) is applied to handle the surrender decision. Finally, we calculate the premium for males at different ages and analyze how the guarantee value and the surrender value respond to changes in the investment mix, the initial minimum guarantee, the growth rate of the minimum guarantee, the growth rate of the guarantee applied on surrender, and the first permitted surrender time. The results show that: (1) when the initial minimum guaranteed amount equals the initial investment amount, a higher proportion invested in stock yields a larger share of the premium attributable to the guarantee value and the surrender value. For a 30-year-old male, the guarantee value as a share of the premium without a surrender option rises from 0.03% when the entire amount is invested in the bond fund to 13.86% when it is all invested in the stock index fund; likewise, the surrender value as a share of the premium with a surrender option rises from 0.05% to 9.12%. (2) The higher the proportion invested in stock, the initial minimum guarantee, and the growth rate of the minimum guarantee, the larger the guarantee value. (3) A larger initial minimum guarantee and a larger growth rate of the guarantee applied on surrender increase the surrender value, while a larger growth rate of the minimum guarantee and a later first permitted surrender time decrease it. (4) The effect of the investment mix on the surrender value depends on the initial minimum guarantee. Key words: variable life insurance with minimum guarantees, BGM interest rate model, surrender option, least-squares Monte Carlo approach.
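The surrender option here is embedded in an insurance contract, but the valuation machinery, the least-squares Monte Carlo method of Longstaff and Schwartz (2001), can be illustrated on the simpler case of an American put; all contract and market parameters below are assumptions made for the sketch.

```python
import numpy as np

def lsmc_put(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0,
             n_steps=50, n_paths=20_000, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) for an American put:
    the same backward-induction idea used to value a surrender option,
    with a quadratic polynomial as the regression basis."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    s = s0 * np.exp(np.cumsum((r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z,
                              axis=1))
    cash = np.maximum(strike - s[:, -1], 0.0)     # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                   # discount one step back
        itm = strike - s[:, t] > 0                # regress on in-the-money paths only
        if itm.sum() < 3:
            continue
        coeffs = np.polyfit(s[itm, t], cash[itm], 2)
        continuation = np.polyval(coeffs, s[itm, t])
        exercise = strike - s[itm, t]
        stop = exercise > continuation            # early exercise / surrender rule
        idx = np.where(itm)[0][stop]
        cash[idx] = exercise[stop]
    return np.exp(-r * dt) * cash.mean()

print(f"American put value ~ {lsmc_put():.3f}")
```

In the insurance setting the "exercise" payoff is the surrender benefit and the state includes the BGM-simulated rates, but the regression-based comparison of immediate payoff against estimated continuation value is the same.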
6

Assessment and control of irregular drug intake: proposal and development of rational strategies based on pharmacokinetic and pharmacodynamic modelling

Gohore Bi, Goue D. 04 1900 (has links)
The heterogeneity of PK and/or PD profiles among patients undergoing the same treatment regimen should be reduced during treatment or clinical trials. Two traditional approaches are used to achieve this. One builds on the interactive synergy between the health caregiver and the patient to encourage patients to take an active part in their own compliance. The other is to develop drugs, or drug dosing regimens, that forgive poor compliance. The main objective of this thesis was to develop new methodologies for assessing and monitoring the impact of irregular drug intake on the therapeutic outcome. Specifically, the first phase of this research developed algorithms for evaluating the efficacy of a treatment by extending classical breakpoint estimation methods to the situation of variable drug disposition. The method introduces the "efficiency" of a PK profile by using the efficacy function, which links in vitro drug concentration to pharmacodynamic effect, as a weight in the area-under-the-curve (AUC) formula. Compared with traditional approaches, this combination explicitly captures the fluctuation of in vivo plasma concentrations due to the dynamics of drug intake, gives a more powerful PK/PD link, and raises, through several examples, questions about the use of traditional static efficacy indices (Cmax, AUC, etc.) as tools for controlling antibiotic resistance. The second part of the thesis determined optimal blood sampling times while accounting for inter-individual variability in drug disposition in collectively treated pigs. For this, we developed a model of collective feeding behavior coupled to a classical PK model, able to generate a typical PK profile for each feeding strategy; the generated data were used to choose sampling times that reduce the uncertainty, due to irregular drug intake, in the estimation of PK and PD parameters. Of the three algorithms evaluated, the median-based method yielded sampling periods that are suitable both for farm staff and for animal welfare. The last part of the research established a rational way to characterize and rank drugs in terms of their "forgiveness" of sporadic dose omissions, based on their PK/PD properties. A global sensitivity analysis (GSA) was performed to quantify the correlation between a drug's PK/PD parameters and the effect of irregular intake, evaluating all parameters simultaneously so as to account for the complex relationships among them. From these correlations we proposed a comparative drug forgiveness index that ranks drugs acting through the same pharmacodynamic mechanism by their tolerance of missed doses, and applied it to four calcium channel blockers, antihypertensives acting through an indirect effect model. The resulting classification is in concordance with what has been reported in experimental studies, showing the relevance and robustness of the approach. The strategies developed in this Ph.D. project, essentially based on the analysis of the complex relationships between drug intake history, pharmacokinetics and pharmacodynamics, can assess and control the impact of irregular drug intake with acceptable accuracy. In general, the algorithms that underlie these approaches will be efficient tools for patient monitoring during a dosing regimen, and they will contribute to controlling the harmful effects of non-compliance by supporting the development of drugs that tolerate sporadic dose omissions.
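One plausible reading of the "efficiency" index described above (the efficacy function used as a weight in the AUC formula) is sketched below: a one-compartment concentration profile with a late second dose is weighted by a Hill efficacy function. The PK and PD parameters are illustrative assumptions, and the exact weighting used in the thesis may differ.

```python
import numpy as np

def hill_efficacy(c, emax=1.0, ec50=2.0, n=2.0):
    """Sigmoidal (Hill) in-vitro concentration-effect relationship."""
    return emax * c**n / (ec50**n + c**n)

def efficiency(times, conc):
    """Efficacy-weighted AUC: trapezoidal integral of the efficacy of the
    instantaneous concentration over the observation window."""
    e = hill_efficacy(conc)
    return float(np.dot((e[:-1] + e[1:]) / 2, np.diff(times)))

# One-compartment oral PK profile where the second dose is taken 6 h late.
ka, ke = 1.0, 0.1                      # absorption and elimination rates (1/h)
doses = [(0.0, 10.0), (30.0, 10.0)]    # (intake time, dose); planned at 24 h
t = np.linspace(0.0, 48.0, 481)
conc = sum(d * (np.exp(-ke * np.clip(t - t0, 0.0, None))
                - np.exp(-ka * np.clip(t - t0, 0.0, None)))
           for t0, d in doses)
print(f"efficiency index: {efficiency(t, conc):.2f}")
```

Unlike a static Cmax or raw AUC, such an index drops when the concentration trough caused by the late dose spends time in the low-efficacy region of the Hill curve, which is the fluctuation-sensitivity the thesis argues for.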
7

Geometric Uncertainty Analysis of Aerodynamic Shapes Using Multifidelity Monte Carlo Estimation

Triston Andrew Kosloske (15353533) 27 April 2023 (has links)
Uncertainty analysis is of great use both for calculating outputs that are more akin to real flight and for optimization toward more robust shapes. However, implementing uncertainty has been a longstanding challenge in aerodynamics due to the computational cost of simulations. Geometric uncertainty in particular is often left unexplored in favor of uncertainties in freestream parameters, turbulence models, or computational error. Therefore, this work proposes a method of geometric uncertainty analysis for aerodynamic shapes that mitigates the barriers to its feasible computation. The process takes a two- or three-dimensional shape and utilizes a combination of multifidelity meshes and Gaussian process regression (GPR) surrogates in a multifidelity Monte Carlo (MFMC) algorithm. Multifidelity meshes allow for finer sampling within a given budget, making the surrogates more accurate. GPR surrogates are made practical by parameterizing the major factors in geometric uncertainty with only four variables in 2-D and five in 3-D. In both cases, two parameters control the heights of steps that occur on the top and bottom of airfoils where leading- and trailing-edge devices are attached. Two more parameters control the height and length of waves that can occur in an ideally smooth shape during manufacturing. A fifth parameter controls the depth of span-wise skin-buckling waves along a 3-D wing. Parameters are defined to be uniformly distributed with a maximum size of 0.4 mm for steps and 0.15 mm for waves, to remain within common manufacturing tolerances. The analysis chain is demonstrated with two test cases. The first, the RAE2822 airfoil, uses transonic freestream parameters set by the ADODG Benchmark Case 2. The results show a mean drag nearly 10 counts above the deterministic case with fixed lift, and a 2-count increase for a fixed-angle-of-attack version of the case. Each case also shows small variations in lift and angle of attack of about 0.5 counts and 0.08°, respectively. Variances for each of the three tracked outputs show that more variability is possible, and even likely. The ONERA M6 transonic wing, popular due to the extensive experimental data available for computational validation, is the second test case. Variation is less substantial here, with a mean drag increase of 0.5 counts and a mean lift increase of 0.1 counts. Furthermore, the MFMC algorithm enables accurate results with only a few hours of wall time in addition to GPR training.
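The CFD-and-surrogate chain cannot be reproduced here, but the core MFMC estimator, a control-variate combination of a few paired high-fidelity evaluations with many cheap low-fidelity ones, can be sketched with analytic stand-in models; the model functions, their correlation, and the sample allocation are assumptions, not the RAE2822 or ONERA M6 setups.

```python
import numpy as np

def mfmc_mean(hi_model, lo_model, x_paired, x_cheap):
    """Two-fidelity control-variate estimate of E[hi_model(X)].

    `x_paired` (few samples) is run through both models; `x_cheap`
    (many samples) only through the cheap model.  The weight
    alpha = Cov(hi, lo) / Var(lo) minimizes the estimator variance."""
    y_hi = hi_model(x_paired)
    y_lo = lo_model(x_paired)
    y_lo_many = lo_model(x_cheap)
    c = np.cov(y_hi, y_lo)
    alpha = c[0, 1] / c[1, 1]
    return y_hi.mean() + alpha * (y_lo_many.mean() - y_lo.mean())

rng = np.random.default_rng(0)
# Analytic stand-ins for 'drag vs. geometric perturbation' at two fidelities.
hi = lambda x: 0.02 + 25.0*x[:, 0]**2 + 10.0*x[:, 1]**2 + 0.5*x[:, 0]*x[:, 1]
lo = lambda x: 0.02 + 24.0*x[:, 0]**2 + 11.0*x[:, 1]**2   # cheap, correlated

x_few = rng.uniform(0.0, 4e-4, size=(50, 2))      # expensive-mesh budget
x_many = rng.uniform(0.0, 4e-4, size=(50_000, 2)) # cheap-mesh budget
print(f"MFMC mean-drag estimate: {mfmc_mean(hi, lo, x_few, x_many):.8f}")
```

Because the cheap model absorbs most of the sampling noise, the few expensive evaluations only have to correct its bias, which is what makes results achievable in hours of wall time rather than the cost of a full high-fidelity Monte Carlo study.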
