261
Efficient Sequential Sampling for Neural Network-based Surrogate Modeling. Pavankumar Channabasa Koratikere (15353788), 27 April 2023
<p>Gaussian Process Regression (GPR) is a widely used surrogate model in efficient global optimization (EGO) due to its ability to provide uncertainty estimates with its predictions. The cost of creating a GPR model for large data sets, however, is high. Neural network (NN) models, on the other hand, scale better than GPR as the number of samples increases, but uncertainty estimates for NN predictions are not readily available. In this work, a scalable algorithm is developed for EGO using NN-based prediction and uncertainty (EGONN). Two NNs are created from two different data sets: the first models the output as a function of the inputs in the first data set, while the second models the prediction error of the first NN using the second data set. The next infill point is added to the first data set based on a criterion such as expected improvement or prediction uncertainty. EGONN is demonstrated on the optimization of the Forrester function and a constrained Branin function and is compared with EGO. The convergence criterion in both cases is a maximum number of infill points, and the algorithm reaches the optimum within the given budget. EGONN is then extended to handle constraints explicitly and is applied to aerodynamic shape optimization of the RAE 2822 airfoil in transonic viscous flow at a free-stream Mach number of 0.734 and a Reynolds number of 6.5 million. The results are compared with those of gradient-based optimization (GBO) using adjoints. The optimum shape obtained by EGONN is comparable to the shape from GBO and eliminates the shock; the drag coefficient is reduced from 200 drag counts to 114, close to the 110 drag counts obtained by GBO. EGONN is also extended to uncertainty quantification (uqEGONN), using prediction uncertainty as the infill criterion.
The convergence criterion is then the relative change in summary statistics, such as the mean and standard deviation, of an uncertain quantity. uqEGONN is tested on the Ishigami function with an initial sample size of 100, and the algorithm terminates after 70 infill points; the statistics obtained (using only 170 function evaluations) are close to those obtained by directly evaluating the function one million times. uqEGONN is then demonstrated by quantifying the uncertainty in airfoil performance due to geometric variations: the algorithm terminates within 100 computational fluid dynamics (CFD) analyses, and the statistics are close to those obtained from 1,000 direct CFD-based evaluations.</p>
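The EGONN infill loop can be sketched in simplified form. Here the two neural networks are replaced by inverse-distance interpolators (a deliberate stand-in, since training real NNs is beside the point of the sketch), the expected-improvement criterion is the standard EGO formula, and the budget, candidate grid, and initial samples are illustrative assumptions rather than the thesis settings.

```python
import math

def forrester(x):
    # 1-D test function named in the abstract: f(x) = (6x - 2)^2 * sin(12x - 4)
    return (6 * x - 2) ** 2 * math.sin(12 * x - 4)

def idw(xs, ys, x, p=2, eps=1e-12):
    # Inverse-distance interpolation: a toy stand-in for a trained NN surrogate.
    ws = [1.0 / (abs(x - xi) ** p + eps) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def expected_improvement(mu, sigma, y_best):
    # Standard EGO criterion: EI = (y_best - mu) * Phi(z) + sigma * phi(z).
    if sigma <= 0.0:
        return 0.0
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# First data set trains the "prediction" surrogate; the second trains the
# "error" surrogate on the first surrogate's residuals.
xs = [0.0, 0.5, 1.0]
ys = [forrester(x) for x in xs]
xe = [0.25, 0.75]
err = [abs(forrester(x) - idw(xs, ys, x)) for x in xe]

for _ in range(10):  # budget-based stopping, as in the thesis
    cand = [i / 200.0 for i in range(201)]
    x_new = max(cand, key=lambda x: expected_improvement(
        idw(xs, ys, x), idw(xe, err, x), min(ys)))
    xs.append(x_new); ys.append(forrester(x_new))
    # Residual at a just-sampled point is ~0, steering later infills elsewhere.
    xe.append(x_new); err.append(abs(forrester(x_new) - idw(xs, ys, x_new)))

print(len(xs), round(min(ys), 3))
```

Each infill maximizes EI over the candidate grid using the error surrogate as the uncertainty estimate, mirroring the two-network idea while staying pure standard library.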
262
Development of a continuous condition monitoring system based on probabilistic modelling of partial discharge data for polymeric insulation cables. Ahmed, Zeeshan, 09 August 2019
Partial discharge (PD) measurement has been widely accepted as an efficient online insulation condition assessment method for high-voltage equipment. Two experimental PD measurement setups were established to study how partial discharge characteristics vary as the insulation degrades, in terms of the physical phenomena taking place at the PD sources, up to the point of failure. Probabilistic lifetime modeling techniques based on classification, regression and multivariate time series analysis were applied to a system of PD response variables, i.e. average charge, pulse repetition rate, average charge current, and largest repetitive discharge magnitude over the data acquisition period. Experimental lifelong PD data obtained from samples subjected to accelerated degradation were used to study the dynamic trends and relationships among these response variables. Distinguishable data clusters detected by the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm allow state-of-the-art modeling techniques to be examined on PD data, and the response behavior of the trained models allows the different stages of insulation degradation to be distinguished. An alternative approach using multivariate time series analysis was pursued in parallel with the classification and regression models in order to forecast PD activity (the PD response variables corresponding to insulation degradation). Observed and forecasted mean values lie within the 95% confidence intervals of the model responses over a definite forecast horizon, which demonstrates the soundness and accuracy of the models.
A life-prediction model based on the cointegrating relations between the multiple response variables, with trained model responses correlated against experimentally evaluated time-to-breakdown values and well-known physical discharge mechanisms, can be used to set an early-warning alarm trigger and is a step towards long-term continuous monitoring of partial discharge activity. Furthermore, this dissertation proposes an effective PD monitoring system based on the wavelet and deflation compression techniques required for optimal data acquisition, together with an algorithm for large-scale data reduction that minimizes the stored PD data size and retains only the useful PD information. This historically recorded information can then be used not only for post-fault diagnostics but also to improve the performance of the modeling algorithms and to support accurate threshold detection.
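The data-reduction idea, keeping only the useful PD information from a mostly-noise record, can be sketched with a simple thresholding stand-in; the dissertation's system thresholds wavelet coefficients rather than raw samples, and the names and numbers here are illustrative only.

```python
def reduce_pd_stream(samples, noise_floor):
    """Keep only above-threshold pulse samples as (index, value) pairs.

    A toy stand-in for the wavelet/deflation compression in the dissertation:
    the real system thresholds wavelet coefficients, not raw samples.
    """
    return [(i, v) for i, v in enumerate(samples) if abs(v) > noise_floor]

def summarize_pulses(kept):
    # Derive response variables of the kind tracked in the dissertation from
    # the retained data: pulse count per record, the largest discharge
    # magnitude, and the average charge magnitude.
    if not kept:
        return {"pulse_count": 0, "max_magnitude": 0.0, "avg_charge": 0.0}
    mags = [abs(v) for _, v in kept]
    return {"pulse_count": len(kept),
            "max_magnitude": max(mags),
            "avg_charge": sum(mags) / len(mags)}

# Mostly-noise record containing three discharge pulses.
record = [0.01, -0.02, 0.9, 0.015, -1.4, 0.0, 0.02, 0.7, -0.01]
kept = reduce_pd_stream(record, noise_floor=0.1)
stats = summarize_pulses(kept)
print(kept)   # [(2, 0.9), (4, -1.4), (7, 0.7)]
print(stats)
```

Only three of nine samples survive, yet the summary statistics needed by the lifetime models are fully recoverable from the reduced stream.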
263
Cure Rate and Destructive Cure Rate Models under Proportional Odds Lifetime Distributions. Feng, Tian, January 2019
Cure rate models, introduced by Boag (1949), are commonly used to model
lifetime data involving long-term survivors. Applications of cure rate models can be seen
in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing cause scenario,
with the assumption of proportional odds (PO) lifetime distributions for the susceptibles,
and statistical inferential methods are then developed based on right-censored data.
In Chapter 2, a flexible cure rate model is discussed by assuming that the number of
competing causes for the event of interest follows the Conway-Maxwell-Poisson
(COM-Poisson) distribution and that the lifetimes of the non-cured or susceptible
individuals can be described by the PO model. This provides a natural extension of the work of Gu et al. (2011)
who had considered a geometric number of competing causes. Under right censoring,
maximum likelihood estimators (MLEs) are obtained by means of the expectation-maximization
(EM) algorithm. An extensive Monte Carlo simulation study is carried out for various
scenarios, and model discrimination between some well-known cure models, such as the
geometric, Poisson and Bernoulli, is also examined. Goodness-of-fit and model diagnostics
are also discussed. A cutaneous melanoma dataset example is used to illustrate the
models as well as the inferential methods.
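Since the special cases of the COM-Poisson distribution drive the model discrimination just described, a small sketch of its pmf may help; the truncation level and the log-space evaluation are implementation choices of this illustration, not of the thesis.

```python
import math

def com_poisson_pmf(n, lam, nu, trunc=100):
    """Conway-Maxwell-Poisson pmf: P(N = n) = lam**n / ((n!)**nu * Z(lam, nu)).

    nu = 1 recovers the Poisson distribution, nu -> 0 (with lam < 1) the
    geometric, and nu -> infinity the Bernoulli: the cure models compared in
    Chapter 2. The normalizing series Z is truncated at `trunc` terms and
    evaluated in log space for numerical stability.
    """
    log_term = lambda j: j * math.log(lam) - nu * math.lgamma(j + 1)
    logs = [log_term(j) for j in range(trunc)]
    m = max(logs)
    z = sum(math.exp(t - m) for t in logs)
    return math.exp(log_term(n) - m) / z

print(round(com_poisson_pmf(2, lam=1.5, nu=1), 6))  # Poisson(1.5): P(2) = 0.251021
print(round(com_poisson_pmf(2, lam=0.4, nu=0), 6))  # geometric: 0.6 * 0.4**2 = 0.096
```

The two printed checks confirm the Poisson and geometric special cases against their closed forms.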
Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al.
(2011), are discussed under the PO assumption. Here, the initial number of competing
causes is modelled by a weighted Poisson distribution, with special focus on the
exponentially weighted Poisson, length-biased Poisson and negative binomial
distributions. Then, a damage distribution is introduced for the number of initial causes that are not destroyed.
An EM-type algorithm for computing the MLEs is developed. An extensive simulation
study is carried out for various scenarios to evaluate all the models and methods of
estimation, and model discrimination between the three weighted Poisson distributions
is also examined. A cutaneous melanoma dataset example is used
to illustrate the models as well as the inferential methods.
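The length-biased Poisson mentioned above has a convenient closed form that a few lines can verify; this is a standard identity about weighted Poisson distributions, stated here for illustration rather than taken from the thesis.

```python
import math

def length_biased_poisson_pmf(n, lam):
    """Length-biased Poisson: a weighted Poisson with weight w(n) = n.

    p(n) = n * exp(-lam) * lam**n / (n! * lam) for n >= 1, which simplifies
    to exp(-lam) * lam**(n-1) / (n-1)!: a Poisson(lam) count shifted up by
    one, so E[N] = lam + 1.
    """
    if n < 1:
        return 0.0
    return math.exp(-lam) * lam ** (n - 1) / math.factorial(n - 1)

lam = 2.0
mean = sum(n * length_biased_poisson_pmf(n, lam) for n in range(1, 60))
print(round(mean, 6))  # -> 3.0, i.e. lam + 1
```

The shifted-Poisson form also makes simulation trivial: draw Poisson(lam) and add one.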
In Chapter 4, frailty cure rate models are discussed under a gamma frailty, wherein the
initial number of competing causes is described by a Conway-Maxwell-Poisson
(COM-Poisson) distribution and the lifetimes of non-cured individuals are described by the PO model.
The detailed steps of the EM algorithm are then developed for this model and an extensive
simulation study is carried out to evaluate the performance of the proposed model and the
estimation method. A cutaneous melanoma dataset as well as simulated data are used for
illustrative purposes.
Finally, Chapter 5 outlines the work carried out in the thesis and also suggests some
problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
264
The •OH scavenging effect of bromide ions on the yield of H₂O₂ in the radiolysis of water by ⁶⁰Co γ-rays and tritium β-particles at room temperature: a Monte Carlo simulation study. Mustaree, Shayla, January 2016
Abstract: Monte Carlo simulations were used here to compare the radiation chemistry of pure water and aqueous bromide solutions after irradiation with two different types of radiation, namely, tritium β-electrons (~7.8 keV) and ⁶⁰Co γ-rays (~1 MeV Compton electrons). Bromide ions (Br⁻) are known to be selective scavengers of hydroxyl radicals (•OH), the precursors of hydrogen peroxide (H₂O₂). These simulations thus allowed us to determine the yields (or G-values) of H₂O₂ in the radiolysis of dilute aqueous bromide solutions by the two types of radiation studied, the first with low linear energy transfer (LET) (~0.3 keV/μm) and the second with high LET (~6 keV/μm), at 25 °C. This study was carried out over a wide range of Br⁻ concentrations, both in the presence and in the absence of oxygen. Simulations clearly showed that irradiation by tritium β-electrons favored a clear increase in G(H₂O₂) compared with ⁶⁰Co γ-rays. We found that these changes could be related to differences in the initial spatial distributions of the radiolytic species (i.e., the structure of the electron tracks: the low-energy β-electrons of tritium deposit their energy as cylindrical "short tracks", whereas the energetic Compton electrons produced by γ-radiolysis form mainly spherical "spurs"). Moreover, the simulations also showed that the presence of oxygen, a very good scavenger of hydrated electrons (e⁻aq) and H• atoms on the 10⁻⁷ s time scale (i.e., before the end of spur expansion), protected H₂O₂ from further reactions with these species in the homogeneous stage of radiolysis. This protection against e⁻aq and H• atoms therefore led to an increase in the H₂O₂ yields at long times, as seen experimentally.
Finally, for both deaerated and aerated solutions, the H₂O₂ yield in tritium β-radiolysis was found to be more easily suppressed than in the case of cobalt-60 γ-radiolysis, an effect interpreted through the quantitatively different chemistry of short tracks and spurs. These differences in the scavengeability of H₂O₂ precursors in passing from low-LET ⁶⁰Co γ-ray to high-LET tritium β-electron irradiation were in good agreement with experimental data, thereby lending strong support to the picture of tritium β-radiolysis in terms of short tracks of high local LET.
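The competition between •OH scavenging by Br⁻ and •OH recombination to H₂O₂ can be caricatured with a toy Monte Carlo model. All rate constants and "track density" numbers below are invented for illustration, and the thesis uses a full track-chemistry simulation; only the qualitative trends (denser tritium short tracks give higher G(H₂O₂); increasing Br⁻ suppresses it) are meant to be reproduced.

```python
import random

def g_h2o2(conc_br, track_density, n_radicals=100_000, seed=1):
    """Toy Monte Carlo competition model for H2O2 formation.

    Each •OH either combines with a neighbour to give H2O2, at a rate taken
    proportional to the local radical density of the track structure, or is
    scavenged by Br-. Both rate constants are arbitrary illustrative values,
    not the reaction set of the thesis simulations.
    """
    k_scav, k_comb = 1.0e10, 5.0e9  # hypothetical rate constants, M^-1 s^-1
    p_scav = k_scav * conc_br / (k_scav * conc_br + k_comb * track_density)
    rng = random.Random(seed)
    combined = sum(1 for _ in range(n_radicals) if rng.random() > p_scav)
    return 0.5 * combined / n_radicals  # two •OH give one H2O2

# Sparse "spurs" (low-LET 60Co gamma rays) vs denser "short tracks"
# (tritium beta electrons); densities are arbitrary relative numbers.
for label, density in (("gamma spurs", 1e-3), ("tritium short tracks", 5e-3)):
    yields = [round(g_h2o2(c, density), 3) for c in (1e-6, 1e-4, 1e-2)]
    print(label, yields)
```

At each scavenger concentration the denser "short track" case retains more H₂O₂, matching the qualitative picture in the abstract.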
265
Solid-Solution Strengthening and Suzuki Segregation in Co- and Ni-based Alloys. Dongsheng Wen (12463488), 29 April 2022
<p>Co and Ni are two major elements in high-temperature structural alloys, including superalloys for turbine engines and hard metals for cutting tools. The recent development of complex concentrated alloys (CCAs), loosely defined as alloys without a single principal element (e.g. CoNiFeMn), offers additional opportunities for designing new alloys through extensive composition and structure modifications. Within CCAs and Co- and Ni-based superalloys, solid-solution strengthening and stacking-fault-energy engineering are two of the most important strengthening mechanisms. Although studied for decades, the potency and quantitative material properties of these mechanisms remain elusive.</p>
<p>Solid-solution strengthening originates from stress-field interactions between dislocations and solutes of various species in the alloy. These stress fields can be engineered by composition modification in CCAs, so a wide range of alloys with promising mechanical strength may be designed. This thesis initially reports on experimental and computational validation of newly developed theories of solid-solution strengthening in 3d transition metal (MnFeCoNi) alloys. The strengthening effects of Al, Ti, V, Cr, Cu and Mo as alloying elements are quantified by coupling a Labusch-type strengthening model with experimental measurements. With their large atomic misfits relative to the base alloy, Al, Ti, Mo and Cr show strong strengthening effects comparable to those in other Cantor alloys.</p>
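The Labusch-type scaling referred to above can be sketched in a few lines; the prefactor and misfit values are illustrative placeholders, not the constants fitted in the thesis.

```python
def labusch_increment(misfit, conc, prefactor=400.0):
    """Labusch-type scaling for solute strengthening:

        d_sigma ~ A * |delta|**(4/3) * c**(2/3)

    where delta is the solute misfit parameter and c the solute concentration.
    The prefactor (in MPa) is a hypothetical placeholder.
    """
    return prefactor * abs(misfit) ** (4.0 / 3.0) * conc ** (2.0 / 3.0)

# Hypothetical misfits of solutes in an equiatomic MnFeCoNi base; the
# large-misfit additions give the stronger effect, as found in the thesis.
solutes = {"Al": 0.06, "Ti": 0.08, "Mo": 0.05, "Cu": 0.01}
c = 0.05  # 5 at.% addition
for name in sorted(solutes, key=lambda s: -solutes[s]):
    print(name, round(labusch_increment(solutes[name], c), 1), "MPa")
```

The 4/3 power in the misfit makes the ranking of solutes far more sensitive to misfit than to concentration, which is why the large-misfit elements dominate.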
<p>Stacking-fault-energy engineering can enable novel deformation mechanisms and exceptional strength in face-centered cubic (FCC) materials such as austenitic TRIP/TWIP steels and CoNi-based superalloys, which exhibit local phase transformation strengthening via Suzuki segregation. We employed first-principles calculations to investigate Suzuki segregation and the stacking fault energy of FCC Co-Ni binary alloys at finite temperatures and concentrations. We quantitatively predicted the Co segregation at the innermost plane of the intrinsic stacking fault (ISF) and further quantified the decrease in stacking fault energy due to this segregation.</p>
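For intuition about equilibrium segregation to the fault plane, a textbook McLean-type isotherm can be sketched. The segregation energy, composition, and temperature below are hypothetical, and the thesis obtains segregation from first-principles free energies rather than from this single-site model.

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def fault_plane_concentration(c_bulk, e_seg_ev, temp_k):
    """McLean-type segregation isotherm, illustrating Suzuki segregation:

        c_f / (1 - c_f) = c_b / (1 - c_b) * exp(-E_seg / (kB * T))

    E_seg < 0 means the solute is attracted to the fault plane.
    """
    ratio = c_bulk / (1.0 - c_bulk) * math.exp(-e_seg_ev / (KB_EV * temp_k))
    return ratio / (1.0 + ratio)

# Hypothetical Co segregation energy to the innermost ISF plane of a Ni-rich
# Co-Ni alloy: 30 at.% Co in the bulk, -50 meV/atom, at 800 K.
print(round(fault_plane_concentration(0.30, -0.05, 800.0), 3))  # enriched above 0.30
```

Lowering the temperature or deepening the segregation energy both increase the enrichment, the behavior the first-principles treatment resolves quantitatively.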
<p>We further investigated the driving force of segregation and the origin of the segregation behaviors of 3d, 4d and 5d elements in Co- and Ni-based alloys. Using first-principles calculations, we calculated the ground-state solute-ISF interaction energies and revealed the trends across the periodic table. We discussed the relationships between the interaction energies and the local lattice distortions, charge-density redistribution, density of states and local magnetization of the solutes.</p>
<p>Finally, this thesis reports on new methodologies that accelerate first-principles calculations using active-learning techniques such as Bayesian optimization, in order to efficiently search for the ground-state energy line of a system with limited computational resources. New acquisition strategies based on the expected-improvement method were developed, and these are compared and presented.</p>
266
Neutron calibration measurements and Monte Carlo characterization of the response of the superheated droplet detectors designed for the search for and direct detection of the neutralino (dark matter), leading to the final results of the PICASSO experiment. Lafrenière, Matthieu, 12 1900
No description available.
267
Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries. Teng, Sin Yong, January 2020
As new technologies for energy-intensive industries continue to develop, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation push these traditional plants towards shutdown and decommissioning, so process improvement and retrofit projects are essential for maintaining their operational performance. Current approaches to process improvement are mainly process integration, process optimization and process intensification. These fields generally rely on mathematical optimization, practitioner experience and operational heuristics, and they form the foundation of process improvement; their performance, however, can be further enhanced with modern computational intelligence. The purpose of this thesis is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis addresses this problem by simulating industrial systems and makes the following contributions: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modeling and optimization of individual units; (ii) application of dimensionality reduction (e.g. principal component analysis, autoencoders) to multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analyzing problematic parts of a system in order to eliminate them, together with a proposed extension that handles multi-dimensional problems with a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management;
(vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); and (vii) an outlook on the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm.
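The bottleneck-tree idea behind BOTA can be illustrated with a minimal sketch. The tree encoding and the capacities are invented for illustration; the tool in the thesis is data-driven and handles far richer cases.

```python
def throughput(node):
    """Evaluate a process tree of ("serial" | "parallel") groups with leaf
    capacities in t/h: a serial chain runs at its slowest member, while
    parallel branches add up.
    """
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    rates = [throughput(c) for c in children]
    return min(rates) if kind == "serial" else sum(rates)

def bottleneck(node, path=()):
    """Walk down to the unit limiting overall throughput. Simplification:
    in a parallel group we descend into the weakest branch, since raising
    it always raises the group total."""
    if isinstance(node, (int, float)):
        return path
    _, children = node
    i = min(range(len(children)), key=lambda j: throughput(children[j]))
    return bottleneck(children[i], path + (i,))

# A toy plant: feed preparation -> two parallel reactors -> separation.
plant = ("serial", [120.0, ("parallel", [40.0, 35.0]), 90.0])
print(throughput(plant))   # 75.0: limited by the parallel reactor pair
print(bottleneck(plant))   # (1, 1): the 35 t/h reactor
```

Debottlenecking the unit at the returned path, then re-running the analysis, gives the iterative improvement loop the tree structure is meant to support.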