111
Modélisation des propriétés magnéto-électriques d'oxydes de métaux de transition anisotropes / Modeling of the magnetoelectric properties of anisotropic transition metal oxides. Al Baalbaky, Ahmed, 21 December 2017.
Les oxydes de métaux de transition sont largement utilisés en raison de leurs propriétés fondamentales intéressantes et de leurs applications importantes. En particulier, CuCrO2 est d'un intérêt particulier parce qu'il possède un état multiferroïque en absence de champ magnétique. Dans cette thèse, nous modélisons les propriétés magnéto-électriques de CuCrO2 par simulations Monte Carlo basées sur des paramètres magnétiques déterminés par calculs ab initio. Nous étudions également l'effet du dopage du Ga sur les propriétés magnéto-électriques du composé CuCr1-xGaxO2 (0 ≤ x ≤ 0.3). Nos résultats sont qualitativement en accord avec les observations expérimentales. / Transition metal oxides are widely used due to their interesting fundamental properties and important applications. In particular, CuCrO2 is of special interest because it enters the multiferroic state in zero magnetic field. In this thesis we model the magnetoelectric properties of CuCrO2 using Monte Carlo simulations with the help of ab initio calculations. We also investigate the effect of Ga doping on the magnetoelectric properties of CuCr1-xGaxO2 (0 ≤ x ≤ 0.3). Our results are in good qualitative agreement with the experimental observations.
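As context for the Monte Carlo approach mentioned above, the sketch below shows a minimal Metropolis update for a classical Heisenberg spin model whose exchange constant would, in such a study, come from ab initio calculations. It is an illustration only: the lattice, the full set of exchange and anisotropy parameters, and the observables used in the thesis are not reproduced here (J, the neighbour lists and the temperature scale are placeholders).

```python
# Minimal Metropolis sweep for a classical Heisenberg model H = -J * sum S_i.S_j
# (illustration only; J, neighbour lists and temperature units are placeholders).
import numpy as np

def random_spin(rng):
    """Uniform random unit vector (classical Heisenberg spin)."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def metropolis_sweep(spins, neighbors, J, T, rng):
    """One sweep; spins is an (N, 3) array, neighbors[i] lists the sites coupled to i,
    T is the temperature in units of the exchange constant (k_B = 1)."""
    for i in rng.permutation(len(spins)):
        proposal = random_spin(rng)
        local_field = J * spins[neighbors[i]].sum(axis=0)
        dE = -np.dot(proposal - spins[i], local_field)   # energy change of site i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] = proposal
    return spins
```

Averaging observables such as the magnetization over many sweeps at a series of temperatures is the generic way such simulations map out the magnetic behaviour that a magnetoelectric coupling model then translates into electric properties.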
112
Spatial fractionation of the dose in charged particle therapy / Fractionnement spatial de la dose en radiothérapie par particules chargées. Peucelle, Cécile, 04 November 2016.
Malgré de récentes avancées, les traitements par radiothérapie (RT) demeurent insatisfaisants : la tolérance des tissus sains aux rayonnements limite la délivrance de fortes doses (potentiellement curatives) à la tumeur. Pour remédier à ce problème, de nouvelles approches basées sur des modes de dépôt de dose innovants sont aujourd'hui à l'étude. Parmi ces approches, la technique synchrotron “Minibeam Radiation Therapy” (MBRT) a démontré sa capacité à élever la résistance des tissus sains aux rayonnements, ainsi qu'à induire un important retard de croissance tumorale. La MBRT combine des faisceaux submillimétriques à un fractionnement spatial de la dose. Dans ce contexte, l'alliance de la balistique plus avantageuse des particules chargées (et leur sélectivité biologique) à la préservation des tissus sains observée en MBRT permettrait de préserver davantage les tissus sains. Cette stratégie innovante a été explorée durant ce travail de thèse. Deux voies ont notamment été étudiées : la MBRT par faisceaux de protons (pMBRT), et d'ions très lourds. Premièrement, la preuve de concept expérimentale de la pMBRT a été réalisée dans un centre clinique (Institut Curie, Centre de Protonthérapie d'Orsay). De plus, l'évaluation de potentielles optimisations de la pMBRT, à la fois en termes de configuration d'irradiation et de génération des minifaisceaux, a été menée dans une étude Monte Carlo (MC). Dans la seconde partie de ce travail, un nouvel usage potentiel des ions très lourds (néon et plus lourds) en radiothérapie a été évalué dans une étude MC. Les combiner à un fractionnement spatial permettrait de tirer profit de leur efficacité dans le traitement de tumeurs radiorésistantes (hypoxiques), un des principaux défis de la RT, tout en minimisant leurs effets secondaires. Les résultats obtenus au terme de ce travail sont favorables à une exploration approfondie de ces deux approches innovantes. Les données dosimétriques compilées dans ce manuscrit serviront à guider les prochaines expérimentations biologiques. / Despite recent breakthroughs, radiotherapy (RT) treatments remain unsatisfactory: the tolerance of normal tissues to radiation still limits the possibility of delivering high (potentially curative) doses in the tumour. To overcome these difficulties, new RT approaches using distinct dose delivery methods are being explored. Among them, the synchrotron minibeam radiation therapy (MBRT) technique has been shown to lead to a remarkable normal tissue resistance to very high doses, and a significant tumour growth delay. MBRT combines sub-millimetric beams with spatial fractionation of the dose. The combination of the more selective energy deposition of charged particles (and their biological selectivity) with the well-established normal tissue sparing of MBRT could lead to a further gain in normal tissue sparing. This innovative strategy was explored in this Ph.D. thesis. In particular, two new avenues were studied: proton MBRT (pMBRT) and very heavy ion MBRT. First, the experimental proof of concept of pMBRT was performed at a clinical facility (Institut Curie, Orsay, France). In addition, pMBRT setup and minibeam generation were optimised by means of Monte Carlo (MC) simulations. In the second part of this work, a potential renewed use of very heavy ions (neon and heavier) for therapy was evaluated in an MC study.
Combining such ions with spatial fractionation could exploit their high efficiency in the treatment of hypoxic radioresistant tumours, one of the main challenges in RT, while minimizing their side effects. The promising results obtained in this thesis support further exploration of these two novel avenues. The dosimetry knowledge acquired will serve to guide the forthcoming biological experiments.
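To illustrate what spatial fractionation of the dose means in practice, the sketch below builds a lateral dose profile from equally spaced Gaussian minibeams and evaluates the peak-to-valley dose ratio (PVDR), a figure of merit commonly used for spatially fractionated beams. The beam width, spacing, Gaussian shape and flat "scatter" pedestal are illustrative assumptions, not data from this work; in a real calculation the valley dose comes from scattered radiation, which is precisely what the Monte Carlo transport simulations capture.

```python
# Toy lateral dose profile for an array of minibeams and its peak-to-valley
# dose ratio (PVDR). All numbers are illustrative placeholders.
import numpy as np

def minibeam_profile(x_mm, fwhm_mm=0.6, pitch_mm=3.0, n_beams=7, scatter=0.05):
    """Sum of Gaussian minibeams plus a flat scatter pedestal (arbitrary units)."""
    sigma = fwhm_mm / 2.355                         # FWHM -> standard deviation
    centers = (np.arange(n_beams) - (n_beams - 1) / 2) * pitch_mm
    return scatter + sum(np.exp(-0.5 * ((x_mm - c) / sigma) ** 2) for c in centers)

x = np.linspace(-10.0, 10.0, 2001)
dose = minibeam_profile(x)
peak = dose[np.argmin(np.abs(x))]          # dose at a beam centre (x = 0)
valley = dose[np.argmin(np.abs(x - 1.5))]  # dose midway between two beams
print("PVDR ~", peak / valley)
```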
113
Neutrophil Extracellular Trap (NET) Formation: From Fundamental Biophysics to Delivery of Nanosensors. Meyer, Daniel, 26 June 2019.
No description available.
114
Les isotopes d'azote au-delà de la limite de stabilité neutronique : 23N, 24N et 25N / Nitrogen isotopes beyond the neutron drip line: 23N, 24N and 25N. Deshayes, Quentin, 04 December 2017.
Afin d'étudier les limites d'existence de la chaîne isotopique des azotes, une expérience a été menée au RIBF-RIKEN en utilisant le spectromètre SAMURAI couplé au détecteur de neutrons NEBULA. Les systèmes étudiés – 23N*, 24N et 25N – ont été produits via des réactions de knockout de quelques nucléons ou de fragmentation à partir de faisceaux secondaires de haute énergie (~250 MeV/nucléon). La méthode utilisée pour caractériser ces systèmes est celle de la masse invariante, qui nécessite la cinématique complète des réactions étudiées. Pour interpréter les résultats, une simulation de la totalité du dispositif expérimental a été utilisée. L'étalonnage des détecteurs et les techniques d'analyse ont été testés en sondant l'état fondamental connu du 16B. Dans le cas du 23N, une étude de spectroscopie gamma en vol a permis de confirmer qu'il ne possédait pas d'état excité lié. Nous avons pu le sonder à travers trois voies de réactions distinctes : le knockout d'un proton du 24O, la fragmentation à partir du 27Ne et la diffusion inélastique. Dans tous les cas, nous avons observé une résonance l=0 à environ 3,5 MeV d'énergie d'excitation. Cette résonance a été interprétée, en s'appuyant sur des calculs de modèle en couches, comme le premier état excité du 23N de spin-parité Jpi=3/2-. Dans l'ensemble des voies, une seconde résonance possédant une énergie d'excitation d'environ 5 MeV était nécessaire pour décrire les spectres en énergie relative fragment-neutron mesurés. Le 24N a été observé pour la première fois lors de notre expérience comme une résonance autour de 1,3 MeV au-dessus du seuil d'émission neutron. Nous avons pu sonder ce système via quatre réactions : le knockout de deux ou trois protons respectivement du 26F et du 27Ne, et des réactions de fragmentation à partir du 27F et du 28Ne. L'ensemble de ces spectres peut être ajusté à l'aide d'une résonance l=2. Des considérations théoriques simples nous suggèrent que cette dernière correspond au doublet 2-,1- prédit comme l'état fondamental du 24N par le modèle en couches. Le 25N a également été observé pour la première fois lors de notre expérience. Malgré une statistique relativement limitée, les spectres des réactions de knockout de deux et trois protons du 27F et 28Ne montrent une structure claire environ 1,7 MeV au-dessus du seuil d'émission de deux neutrons, qui peut être identifiée comme l'état fondamental 1/2- prédit par le modèle en couches. / To study the most neutron-rich nitrogen isotopes, an experiment was undertaken at the RIBF-RIKEN using the SAMURAI spectrometer and the NEBULA neutron array. The nuclei of interest – 23N*, 24N and 25N – were produced via nucleon knockout and fragmentation reactions from high-energy (~250 MeV/nucleon) secondary beams. The technique of invariant-mass spectroscopy, which requires the measurement in complete kinematics of the beam-like reaction products, was employed to characterise these unbound systems. In the case of 23N, in-flight gamma-ray spectroscopy was performed and it was possible to confirm that it has no bound excited states. Three reaction channels – the knockout of a proton from 24O, the fragmentation of 27Ne and inelastic scattering – were employed to search for unbound excited states. In all these channels, an l=0 resonance was observed at around 3.5 MeV excitation energy. This resonance is interpreted, through comparison with shell-model calculations, as the Jpi=3/2- first excited state of 23N.
In all channels, another resonance with an excitation energy close to 5 MeV was necessary to fully describe the fragment-neutron relative energy spectra. The nucleus 24N was observed here for the first time as a resonance-like peak some 1.3 MeV above the one-neutron decay threshold. Four reaction channels were investigated: the knockout of two and three protons from 26F and 27Ne, respectively, and fragmentation of 27F and 28Ne. All the relative energy spectra were consistent with the population of an l=2 resonance. Simple considerations suggest that this is the 2-,1- ground-state doublet predicted by the shell model. The nucleus 25N was also observed here for the first time. Despite the relatively limited statistics, both two-proton and three-proton removal from 27F and 28Ne exhibited a clear structure some 1.7 MeV above the two-neutron decay threshold which, based on simple considerations, may be identified with the expected 1/2- ground state.
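For reference, the invariant-mass method named above reconstructs the decay (relative) energy from the measured fragment and neutron four-momenta; a minimal sketch for the two-body case is shown below (for 25N, which decays by two-neutron emission, the sum would run over both detected neutrons).

```python
# Invariant-mass reconstruction of the relative (decay) energy:
# E_rel = M_inv - m_fragment - m_neutron, with natural units (MeV, c = 1).
import numpy as np

def relative_energy(p_fragment, p_neutron):
    """p_* are four-momenta (E, px, py, pz) in MeV; returns E_rel in MeV."""
    pf = np.asarray(p_fragment, dtype=float)
    pn = np.asarray(p_neutron, dtype=float)
    total = pf + pn
    m_inv = np.sqrt(total[0] ** 2 - np.dot(total[1:], total[1:]))
    m_f = np.sqrt(pf[0] ** 2 - np.dot(pf[1:], pf[1:]))
    m_n = np.sqrt(pn[0] ** 2 - np.dot(pn[1:], pn[1:]))
    return m_inv - (m_f + m_n)
```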
115
Contrast agent imaging using an optimized table-top x-ray fluorescence and photon-counting computed tomography imaging system. Dunning, Chelsea Amanda Saffron, 04 November 2020.
Contrast agents are often crucial in medical imaging for disease diagnosis. Novel contrast agents, such as gold nanoparticles (AuNPs) and lanthanides, are being explored for a variety of clinical applications. Preclinical testing of these contrast agents is necessary before being approved for use in humans, which requires the use of small animal imaging techniques. Small animal imaging demands the detection of these contrast agents in trace amounts at acceptable imaging time and radiation dose. Two such imaging techniques include x-ray fluorescence computed tomography (XFCT) and photon-counting CT (PCCT). XFCT combines the principles of CT with x-ray fluorescence by detecting fluorescent x-rays from contrast agents at various projections to reconstruct contrast agent maps. XFCT can image trace amounts of AuNPs but is limited to small animal imaging due to fluorescent x-ray attenuation and scatter. PCCT uses photon-counting detectors that separate the CT data into energy bins. This enables contrast agent detection by recognizing the energy dependence of x-ray attenuation in different materials, independent of AuNP depth, and can provide anatomical information that XFCT cannot. To achieve the best of both worlds, we modeled and built a table-top x-ray imaging system capable of simultaneous XFCT and PCCT imaging.
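As a rough illustration of the XFCT principle described above, the sketch below computes a first-order fluorescence signal for a pencil beam scanned across a 2D gold-concentration map; attenuation of the outgoing fluorescence, detector solid angle and scatter are ignored, and the attenuation coefficient and yield factor are placeholders rather than values from this work. Rotating the object and repeating the scan yields the projections from which the contrast-agent map is reconstructed.

```python
# Toy first-order XFCT signal model (illustration only): for each pencil-beam
# position the signal is the attenuated line integral of the gold concentration
# along the beam. mu_beam and fluo_yield are placeholder values.
import numpy as np

def xfct_projection(conc, mu_beam=0.02, pixel_mm=1.0, fluo_yield=1.0):
    """conc: 2D gold-concentration map; the pencil beam runs along axis 1 (rows)."""
    depth = pixel_mm * (np.arange(conc.shape[1]) + 0.5)       # depth to pixel centre
    beam = np.exp(-mu_beam * depth)                           # incident-beam attenuation
    return fluo_yield * (conc * beam * pixel_mm).sum(axis=1)  # one signal per beam row
```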
We used Monte Carlo simulation software for the following work in XFCT imaging of AuNPs. We simulated XFCT induced by x-ray, electron, and proton beams scanning a small animal-sized object (phantom) containing AuNPs with Monte Carlo techniques. XFCT induced by x-rays resulted in the best image quality of AuNPs; however, high-energy electron and medium-energy proton XFCT may be feasible for on-board x-ray fluorescence techniques during radiation therapy. We then simulated a scan of a phantom containing AuNPs on a table-top system to optimize the detector arrangement, size, and data acquisition strategy based on the resulting XFCT image quality and available detector equipment. To enable faster XFCT data acquisition, we separately simulated another AuNP phantom and determined the best collimator geometry for Au fluorescent x-ray detection.
We also performed experiments on our table-top x-ray imaging system in the lab. Phantoms containing multiples of three lanthanide contrast agents were scanned on our table-top x-ray imaging system using a photon-counting detector capable of sustaining high x-ray fluxes that enabled PCCT. We used a novel subtraction algorithm for reconstructing separate contrast agent maps; all lanthanides were distinct at low concentrations, including gadolinium and holmium, which are close in atomic number. Finally, we performed the first simultaneous XFCT and PCCT scan of a phantom and mice containing both gadolinium and gold based on the optimized parameters from our simulations.
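For context, a generic two-bin K-edge subtraction is sketched below; it is not the subtraction algorithm developed in this work, only an illustration of why energy-binned data allow a specific contrast agent to be isolated. Repeating this for bin pairs bracketing each element's K-edge is one simple way to separate several agents.

```python
# Generic two-bin K-edge subtraction (not this work's algorithm): a contrast
# agent's attenuation jumps at its K-edge, so the difference between images
# reconstructed from the bins just above and just below the edge highlights
# that agent while largely cancelling the smoothly varying background.
import numpy as np

def k_edge_subtraction(img_above, img_below, tissue_ratio=1.0):
    """img_*: co-registered attenuation images from the two energy bins;
    tissue_ratio: above/below attenuation ratio of the background material."""
    return np.clip(img_above - tissue_ratio * img_below, 0, None)
```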
This dissertation outlines the development of our table-top x-ray imaging system and the optimization of the complex parameters necessary to obtain XFCT and PCCT images of multiple contrast agents at biologically relevant concentrations. / Graduate
116
Lattice model for amyloid peptides: OPEP force field parametrization and applications to the nucleus size of Alzheimer's peptides / Modèle réseau de peptides amyloïdes : paramétrisation du champ de forces OPEP et application aux noyaux de nucléation de peptides d'Alzheimer. Tran, Thanh Thuy, 20 September 2016.
La maladie d'Alzheimer touche plus de 40 millions de personnes dans le monde et résulte de l'agrégation du peptide beta-amyloïde de 40/42 résidus. En dépit de nombreuses études expérimentales et théoriques, le mécanisme de formation des fibres et des plaques n'est pas élucidé, et les structures des espèces les plus toxiques restent à déterminer. Dans cette thèse, je me suis intéressée à deux aspects. (1) La détermination du noyau de nucléation (N*) de deux fragments, (Aβ)16-22 et (Aβ)37-42. Mon approche consiste à déterminer les paramètres OPEP du dimère (Aβ)16-22 en comparant des simulations Monte Carlo sur réseau et des dynamiques moléculaires atomiques par échange de répliques. Les paramètres fonctionnant aussi sur le trimère (Aβ)16-22 et les dimères et trimères (Aβ)37-42, j'ai étudié la surface d'énergie libre des décamères et mes simulations montrent que N* est de 10 chaînes pour (Aβ)16-22 et est supérieur à 20 chaînes pour (Aβ)37-42. (2) J'ai ensuite étudié les structures du dimère (Aβ)1-40 par simulations de dynamique moléculaire atomistique par échanges de répliques. Cette étude, qui fournit les conformations d'équilibre du dimère Aβ1-40 en solution aqueuse, ouvre des perspectives pour une compréhension de l'impact des mutations pathogènes et protectrices au niveau moléculaire. / The neurodegenerative Alzheimer's disease (AD) affects more than 40 million people worldwide and is linked to the aggregation of the amyloid-β proteins of 40/42 amino acids. Despite many experimental and theoretical studies, the mechanism by which amyloid fibrils form and the 3D structures of the early toxic species in aqueous solution remain to be determined. In this thesis, I studied the structures of the early-formed oligomers of the amyloid-β peptide and the critical nucleus size of two amyloid-β peptide fragments using either coarse-grained or all-atom simulations. First, at the coarse-grained level, I developed a lattice model for amyloid proteins, which allows us to study the nucleus sizes of two experimentally well-characterized peptide fragments, (Aβ)16-22 and (Aβ)37-42, of the Alzheimer's peptide (Aβ)1-42. After presenting a comprehensive OPEP force-field parameterization using an on-lattice protein model with Monte Carlo simulations and atomistic simulations, I determined the nucleus sizes of the two fragments. My results show that the nucleation number is 10 chains for (Aβ)16-22 and larger than 20 chains for (Aβ)37-42. This knowledge is important to help design more effective drugs against AD. Second, I investigated the structures of the dimer (Aβ)1-40 using extensive atomistic REMD simulations. This study provides insights into the equilibrium structure of the (Aβ)1-40 dimer in aqueous solution, opening a new avenue for a comprehensive understanding of the impact of pathogenic and protective mutations in early-stage Alzheimer's disease at the molecular level.
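As an aside on terminology, the critical nucleus size N* quoted above is, operationally, the aggregate size at the top of the free-energy barrier for aggregation. The toy profile below is synthetic, not data from the thesis; it only illustrates how N* would be read off such a curve (for example one obtained from the size histogram of the simulations).

```python
# Reading off a critical nucleus size N* from a free-energy profile G(n):
# N* is the aggregate size at the maximum of the barrier. Synthetic profile.
import numpy as np

def critical_nucleus_size(sizes, free_energy):
    """Return the aggregate size at the top of the free-energy barrier."""
    return sizes[int(np.argmax(free_energy))]

n = np.arange(1, 31)
g = -1.0 * n + 3.23 * n ** (2.0 / 3.0)   # toy nucleation-like barrier (kT units)
print(critical_nucleus_size(n, g))        # ~10 for this synthetic profile
```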
117
Microbial Cell Disruption Using Pressurized Gases to Improve Lipid Recovery from Wet Biomass: Thermodynamic Analysis. Howlader, Md Shamim, 04 May 2018.
Microbial cell disruption using pressurized gas is a promising approach to improve the lipid extraction yield directly from wet biomass by eliminating the energy-intensive drying process, which is an integral part of traditional methods. As the process starts with the solubilization of the gas in lipid-rich microbial cells, it is important to understand the solubility of different potential gases in both lipid (triglyceride) and lipid-rich microbial cell culture to design efficient cell disruption processes. In this study, we determined the solubility of different gases (e.g., CO2, CH4, N2, and Ar) in canola oil (triglyceride) using a pressure-drop gas apparatus developed in our laboratory. The solubility of the different gases in triglyceride followed the trend CO2 > CH4 > Ar > N2. Since the solubility of CO2 was found to be higher than that of the other gases, the solubility of CO2 in lipid-rich cell culture, cell culture media, and spent media was also determined. It was found that CO2 is more soluble in triglyceride, but less soluble in lipid-rich cell culture, than it is in water. From both thermodynamic models and Monte Carlo simulations, the correlated solubility was found to be in good agreement with the experimental results. CO2 was found to be the most suitable gas for microbial cell disruption because almost 100% cell death occurred when using CO2, whereas more than 85% of cells were found to be active after treatment with CH4, N2, and Ar. The optimization of microbial cell disruption was conducted using a combination of the Box-Behnken design of experiments (DOE) technique and response surface methodology. The optimized cell disruption conditions were found to be 3900 kPa, 296.5 K, 360 min, and 325 rpm, where almost 100% cell death was predicted from the statistical modeling. Finally, it was found that 86% of the total lipid content can be recovered from the wet biomass after treatment with pressurized CO2 under optimized conditions, compared to the control, where up to 74% of the total lipid content can be recovered, resulting in a 12% increase in the lipid extraction yield with pressurized CO2.
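The pressure-drop principle mentioned above can be summarized with an ideal-gas estimate: the moles of gas absorbed by the liquid follow from the observed drop in headspace pressure. The sketch below is only a back-of-the-envelope version; the actual analysis would use an equation of state and the apparatus volumes, which are not given here, and the example numbers are illustrative.

```python
# Back-of-the-envelope pressure-drop solubility estimate (ideal-gas assumption).
R = 8.314  # J/(mol*K)

def moles_dissolved(p_initial_pa, p_final_pa, headspace_m3, temperature_k):
    """Gas absorbed by the liquid, estimated from the headspace pressure drop."""
    return (p_initial_pa - p_final_pa) * headspace_m3 / (R * temperature_k)

# e.g. a 500 kPa drop in a 100 mL headspace at 296.5 K (illustrative values)
print(moles_dissolved(4.4e6, 3.9e6, 1.0e-4, 296.5), "mol")
```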
118
Adaptive Sampling Methods for Stochastic Optimization. Daniel Andres Vasquez Carvajal (10631270), 08 December 2022.
This dissertation investigates the use of sampling methods for solving stochastic optimization problems using iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is adaptively chosen; and (ii) retrospective approximation (RA), where iterate updates are performed using a chosen fixed sample size for as long as progress is deemed statistically significant, at which time the sample size is increased. We investigate adaptive sampling within the context of a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space. In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth stochastic unconstrained optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling to ensure that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes are chosen when iterates are near a critical point; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming that the oracle errors are light-tailed. The second and third results characterize the iteration and oracle complexities in terms of certain risk functions. Specifically, the second result asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (of squared gradient norm) is attained when the total relative risk associated with the adaptive sample size sequence is finite; and the third result characterizes the corresponding oracle complexity in terms of the total generalized risk associated with the adaptive sample size sequence. We report encouraging numerical results in certain settings. In the second part of this dissertation, we consider the use of RA as an alternate adaptive sampling paradigm to solve smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance. The bottleneck in RA becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework to solve smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a TR Sequential Quadratic Programming (SQP) solver. The infinite-dimensional optimization problem is discretized, and a sequence of finite-dimensional problems is solved where the problem dimension is progressively increased.
Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, i.e., we spend just enough computational work to solve each problem within a dynamic error tolerance, and (ii) we use the solution of the current optimization problem as the initial guess for the subsequent problem. We prove two main results for PSE. The first asserts convergence to a first-order critical point of a subsequence of iterates generated by the PSE TR-SQP algorithm. The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle complexity result for the total amount of computational work incurred by PSE. This amount of computational work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of making oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also show encouraging numerical results on an optimal control problem supporting our theoretical findings.
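To give a flavour of the adaptive-sampling idea, the sketch below implements a generic "norm test" style rule that grows the sample size until the estimated standard error of the averaged gradient is small relative to its norm. This is only in the spirit of such methods; the actual ASTRO sampling rules, constants and trust-region coupling differ and are not reproduced here, and grad_oracle(x, rng) is an assumed user-supplied noisy-gradient interface.

```python
# Generic norm-test-style adaptive sampling of a noisy gradient (illustration).
import numpy as np

def adaptive_gradient(grad_oracle, x, theta=0.5, n0=8, n_max=100_000, rng=None):
    """Average noisy gradient samples until the estimated standard error of the
    sample mean is at most theta times the norm of the current estimate."""
    if rng is None:
        rng = np.random.default_rng()
    samples = [grad_oracle(x, rng) for _ in range(n0)]
    while len(samples) < n_max:
        g_bar = np.mean(samples, axis=0)
        se = np.linalg.norm(np.std(samples, axis=0, ddof=1)) / np.sqrt(len(samples))
        if se <= theta * np.linalg.norm(g_bar):
            break
        samples.append(grad_oracle(x, rng))
    return np.mean(samples, axis=0), len(samples)
```

The returned sample count naturally stays small far from a critical point (where the gradient norm is large) and grows as the gradient shrinks, which is the behaviour the abstract describes.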
119
A Computational Analysis of the Structure of the Genetic Code. Degagne, Christopher, 11 1900.
The standard genetic code (SGC) is the cipher used by nearly all organisms to transcribe information stored in DNA and translate it into its amino acid counterparts. Since the early 1960s, researchers have observed that the SGC is structured so that similar codons encode amino acids with similar physicochemical properties. This structure has been hypothesized to buffer the SGC against transcriptional or translational error, because single-nucleotide mutations usually are either silent or have only a minimal effect on the encoded protein. We herein briefly review different theories for the origin of that structure. We also briefly review different computational experiments designed to quantify buffering capacity for the SGC.
We report on computational Monte Carlo simulations that we performed using a computer program that we developed, AGCT. In the simulations, the SGC was ranked against other, hypothetical genetic codes (HGCs) for its ability to minimize physicochemical distances between amino acids encoded by codons separated by single-nucleotide mutations. We analyzed unappreciated structural aspects and neglected properties in the SGC. We found that error measure type affected SGC ranking. We also found that altering stop codon positions had no effect on SGC ranking, but including stop codons in error calculations improved SGC ranking. We analyzed 49 properties individually and identified conserved properties. Among these, we found that long-range non-bonded energy is more conserved than is polar requirement, which previously was considered to be the most conserved property in the SGC. We also analyzed properties in combinations. We hypothesized that the SGC is organized as a compromise among multiple properties.
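A stripped-down version of this ranking procedure is sketched below: score a code by the mean squared change of an amino-acid property across all single-nucleotide substitutions, and compare the standard code's score with those of randomly shuffled codes. This is only the classic error-value idea in miniature; AGCT's error measures, property sets and code-generation scheme are richer and are not reproduced here, and the code and prop dictionaries are assumed user-supplied inputs.

```python
# Simplified "error value" ranking of genetic codes (not AGCT itself).
import random

BASES = "UCAG"

def neighbours(codon):
    """All codons reachable by a single nucleotide substitution."""
    for i, b in enumerate(codon):
        for nb in BASES:
            if nb != b:
                yield codon[:i] + nb + codon[i + 1:]

def error_value(code, prop):
    """code: codon -> amino acid (stop codons omitted); prop: amino acid -> number."""
    diffs = [(prop[code[c]] - prop[code[n]]) ** 2
             for c in code for n in neighbours(c) if n in code]
    return sum(diffs) / len(diffs)

def shuffled_code(code, rng=random):
    """Randomly reassign amino acids among the code's synonymous codon blocks."""
    blocks = {}
    for codon, aa in code.items():
        blocks.setdefault(aa, []).append(codon)
    aas = list(blocks)
    perm = rng.sample(aas, len(aas))
    return {codon: new for old, new in zip(aas, perm) for codon in blocks[old]}
```

Ranking then amounts to counting how many shuffled codes score better (lower) than the standard code.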
Finally, we used AGCT to test whether different theories on the origin of the SGC could more convincingly explain the buffering capacity of the SGC. We found that, without accounting for transition/transversion biases, the SGC ranking was modest enough under constraints imposed by the coevolution and four-column theories that it could be explained by constraints associated with either theory (or both theories); however, when transition/transversion biases were included, only the four-column theory returned an SGC ranking modest enough that it could be explained by constraints associated with that theory. / Thesis / Master of Science (MSc) / The standard genetic code (SGC) is the cipher used almost universally to transcribe information stored in DNA and translate it to amino acid counterparts. Since the mid-1960s, researchers have recognized that the SGC is organized so that similar three-nucleotide RNA codons encode amino acids with similar properties; researchers consequently hypothesized that the SGC is structured to minimize effects from transcription or translation errors. This hypothesis has been tested using computer simulation. I briefly review results from those studies, complement them by analyzing unappreciated structural aspects and neglected properties, and test two theories on the origin of the SGC.
120
Precise Trajectory Calculation for Launchers: A MATLAB – Simulink Modeling Approach / Noggrann banberäkning för bärraketer med MATLAB och Simulink. Barale, Matéo, January 2024.
Optimizing launcher trajectories is essential for effective mission planning, and specialized software like ASTOS provides an initial, precise overview. However, as launcher development progresses, there is a growing need for the creation of autonomous flight trajectory software that offers greater flexibility in adjusting simulation parameters and better represents actual, real-life trajectories. This report introduces an initial version of a comprehensive six-degree-of-freedom launcher trajectory calculation software developed using MATLAB and Simulink. The emphasis is on the development strategy, encompassing discussions on dynamics equations, essential features, and crucial models necessary for accurate simulations. Real-world scenarios often deviate from optimized trajectories, and the software addresses these deviations using sensitivity analysis through Monte Carlo simulations, enabling a thorough examination of uncertainties in input parameters and their impact on trajectories. The report delves into the establishment of the dispersion analysis tool and offers suggestions for further enhancements for both the Simulink model and this dispersion analysis tool. / Optimering av flygbanor är avgörande för effektiv uppdragsplanering, och specialiserad programvara som ASTOS ger en initial, exakt översikt. Men när flygbanans utveckling fortskrider finns det ett växande behov av att skapa en autonom flygbana som erbjuder större flexibilitet när det gäller att justera simuleringsparametrar och bättre representerar faktiska, verkliga banor. Den här rapporten introducerar en initial version av en omfattande beräkningsprogramvara utvecklad med MATLAB och Simulink för sex frihetsgraders lanseringsbana. Tyngdpunkten ligger på utvecklingsstrategin, som omfattar diskussioner om dynamikekvationer, väsentliga funktioner och avgörande modeller som är nödvändiga för exakta simuleringar. Scenarier i verkligheten avviker ofta från optimerade banor, och programvaran adresserar dessa avvikelser med känslighetsanalys genom Monte Carlo-simuleringar, vilket möjliggör en grundlig undersökning av osäkerheter i inmatningsparametrar och deras påverkan på banor. Rapporten går in i skapandet av spridningsanalysverktyget och erbjuder förslag till ytterligare förbättringar för både Simulink-modellen och detta dispersionsanalysverktyg.
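To illustrate the dispersion-analysis idea in miniature, the sketch below perturbs a few uncertain inputs and propagates each sample through a crude one-dimensional point-mass ascent; the vehicle numbers, uncertainty levels and constant-density aerodynamics are illustrative assumptions and do not correspond to the thesis's 6-DOF Simulink model.

```python
# Minimal Monte Carlo dispersion analysis: perturb uncertain inputs, propagate
# each sample through a crude 1-D point-mass ascent, and inspect the spread of
# the outcome. All parameters are illustrative placeholders.
import numpy as np

def ascent_apogee(thrust, mass0, mdot, cd_a, burn_time, dt=0.05, g=9.81, rho=1.2):
    """Vertical ascent with constant gravity and air density (deliberately crude)."""
    v = h = 0.0
    m, t = mass0, 0.0
    while t < burn_time or v > 0.0:            # powered ascent, then coast to apogee
        f_thrust = thrust if t < burn_time else 0.0
        drag = 0.5 * rho * cd_a * v * abs(v)   # Cd*A lumped into cd_a [m^2]
        a = (f_thrust - drag) / m - g
        v += a * dt
        h += v * dt
        if t < burn_time:
            m -= mdot * dt                     # propellant consumption during burn
        t += dt
    return h

rng = np.random.default_rng(1)
apogees = [ascent_apogee(thrust=rng.normal(30e3, 1e3),    # ~3% thrust dispersion
                         mass0=rng.normal(1000.0, 20.0),  # lift-off mass dispersion
                         mdot=10.0,
                         cd_a=rng.normal(0.3, 0.03),      # drag-area dispersion
                         burn_time=60.0)
           for _ in range(500)]
print("mean apogee [m]:", np.mean(apogees), " std [m]:", np.std(apogees))
```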