About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
811

Optimization of soil carbon parameters in the CLASSIC model using Bayesian optimization and observations

Gauthier, Charles 04 1900 (has links)
The soil carbon pool is a vital component of the global carbon cycle and, therefore, the climate system. Soil organic carbon (SOC) is the largest carbon pool in terrestrial ecosystems. This pool stores a large quantity of carbon that plants have removed from the atmosphere through photosynthesis. Because of this, soils are considered a viable climate change mitigation strategy to lower the global atmospheric CO2 concentration that is presently being driven higher by anthropogenic fossil CO2 emissions. Despite its importance, there are still considerable uncertainties around the size of the global SOC pool and its response to a changing climate. Terrestrial biosphere models (TBMs) simulate the biogeochemical processes within ecosystems and are critical tools to quantify and study SOC dynamics. These models can also simulate the future behavior of SOC if carefully applied and given the proper meteorological forcings. However, TBM predictions of SOC dynamics have high uncertainties, due in part to equifinality. To improve our understanding of SOC dynamics, this research optimized the parameters of the soil carbon scheme contained within the Canadian Land Surface Scheme Including Biogeochemical Cycles (CLASSIC) to better represent SOC dynamics. A global sensitivity analysis was performed to identify which of the 16 parameters of the soil carbon scheme did not affect simulated SOC stocks and soil respiration (Rsoil). The sensitivity analysis used observations from three eddy covariance sites, both for computational efficiency and to encapsulate the range of climates represented by the global soil carbon scheme. It revealed that some parameters did not contribute to the variance of simulated SOC and Rsoil; these were excluded from the optimization, which reduced the dimensionality of the optimization problem. Then, four optimization scenarios were created based on the sensitivity analysis, each using a different set of parameters, to assess the impact of the number of included parameters on the optimization. Two different loss functions were used in the optimization to assess the impact of accounting for observational error.
Comparing the optimal parameters obtained with the two loss functions showed that the choice of loss function affected the optimized parameter sets. To determine which optimized parameter set was most skillful, each was compared against independent data sets and global estimates of SOC that were not used in the optimization, using metrics based on root-mean-square deviation and bias. This study generated an optimal parameter set that outperformed both the other optimized sets and the default parameterization of the model. That parameter set was then applied in future simulations (2015-2100) of SOC dynamics to assess its impact on CLASSIC's projections. These simulations showed that the optimal parameter set simulated a future global SOC content 62% higher than the default parameter set while simulating similar Rsoil fluxes. They also showed that both the optimized and default parameter sets projected that the SOC pool would remain a net carbon sink through 2100, with regional net sources, notably in tropical regions.
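
To make the screening step concrete, here is a minimal sketch of variance-based (Sobol) global sensitivity analysis of the kind described above, assuming the SALib Python package. The toy model function, parameter names, and bounds are illustrative stand-ins, not the actual CLASSIC soil carbon scheme or its 16 parameters.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical subset of soil carbon parameters with assumed bounds.
problem = {
    "num_vars": 3,
    "names": ["base_turnover_rate", "q10_temperature_sens", "moisture_exponent"],
    "bounds": [[0.01, 0.5], [1.0, 3.0], [0.1, 2.0]],
}

def toy_soil_carbon_model(x):
    """Stand-in for one model run: returns a pseudo SOC stock (kg C m-2)."""
    k, q10, m = x
    return 10.0 / k * q10 ** (-0.5) * (1.0 + 0.1 * m)

param_values = saltelli.sample(problem, 1024)   # N*(2D+2) parameter samples
Y = np.array([toy_soil_carbon_model(x) for x in param_values])

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>24s}  S1={s1:6.3f}  ST={st:6.3f}")
# Parameters whose total-order index ST is near zero can be fixed at their
# defaults, shrinking the dimensionality of the subsequent optimization.
```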
812

Simulating Evapotranspiration in the Lower Maumee River Watershed Using a Modified Version of the Boreal Ecosystem Productivity Simulator (BEPS) Model and Remote Sensing

Senevirathne, Chathuranga K. 21 September 2021 (has links)
No description available.
813

Development of Key Risk Indicators for Risk Management Within Insurance

Boija, Olivia, Lindström, Louise January 2021 (has links)
In this thesis, a regression analysis of ten independent data sets is performed in order to estimate losses and Key Risk Indicators (KRIs). Each data set contains a list of objects, the impacts associated with each object, and revenue stream values (RSV) for each impact. The project investigates the data and simulates yearly losses as response variables in the regression modelling. The three regressors that influence the yearly losses are the number of objects, the sum of revenue streams, and the expected aggregated losses. Given the response variable from each data set, a percentage scale of KRIs is determined, indicating how large a loss each set carries.
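
As an illustration of the regression setup described above, the following sketch fits yearly losses against the three regressors with ordinary least squares. All data is synthetic; the coefficient values and the percentage scaling are assumptions, not the study's actual data or KRI definition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets = 10
n_objects = rng.integers(50, 500, size=n_sets)          # regressor 1
revenue_sum = rng.uniform(1e6, 1e8, size=n_sets)        # regressor 2
expected_agg_loss = rng.uniform(1e4, 1e6, size=n_sets)  # regressor 3
yearly_loss = (0.002 * revenue_sum + 1.1 * expected_agg_loss
               + 300.0 * n_objects + rng.normal(0, 5e4, size=n_sets))

# Design matrix with intercept; solve ordinary least squares.
X = np.column_stack([np.ones(n_sets), n_objects, revenue_sum, expected_agg_loss])
beta, *_ = np.linalg.lstsq(X, yearly_loss, rcond=None)

fitted = X @ beta
# Express each set's fitted loss as a percentage of the largest -- one simple
# way to turn the response into a KRI-style percentage scale.
kri_scale = 100.0 * fitted / fitted.max()
print("coefficients:", beta)
print("KRI scale (%):", np.round(kri_scale, 1))
```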
814

Parameter extraction in lithium-ion batteries using optimal experiments

Prathimala, Venu Gopal January 2021 (has links)
Lithium-ion (Li-ion) batteries are widely used in various applications and are viable for automotive applications. The effective management of Li-ion batteries in battery electric vehicles (BEVs) plays a crucial role in performance and range. Good performance and range can be achieved by using efficient battery models in battery management systems (BMS); these battery models therefore play an essential part in the development of battery electric vehicles. Physics-based battery models are used for design, control, or prediction of battery behaviour, and they require detailed information about materials, reactions, and mass transport properties. Model parameterization, i.e., obtaining model parameters from different experimental sets (by fitting the model to experimental data), can be challenging depending on model complexity and on the type and quality of the experimental data. Based on the idea of parameter sensitivity, current/voltage data sets can be chosen that theoretically have greater sensitivity to a given model parameter of interest. In this thesis work, different methods for extracting model parameters for a nickel-manganese-cobalt (NMC) battery composite electrode are experimentally tested and compared. Specifically, model parameterization using optimal experiments based on a parameter sensitivity analysis has been benchmarked against a 1C discharge test and low-rate pulse tests. The resulting parameter sets have then been validated on a drive cycle and 2C pulse tests. Comparing the methods shows some promising results for the optimal experiment design (OED) method, but considerations regarding state of charge (SOC) dependencies and the number of parameters have to be evaluated further.
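
As a simplified illustration of model parameterization by fitting to experimental data, the sketch below fits a first-order RC equivalent-circuit model to a synthetic discharge pulse. The equivalent circuit, parameter values, and pulse profile are assumptions standing in for the physics-based model and real test data used in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

dt, n = 0.1, 600                          # 60 s window sampled at 10 Hz
t = np.arange(n) * dt
current = np.where(t < 30.0, 5.0, 0.0)    # 5 A discharge pulse, then rest
ocv = 3.7                                 # OCV assumed constant over the pulse

def simulate_voltage(params, current):
    """Terminal voltage of an OCV-R0-(R1||C1) circuit, forward Euler."""
    r0, r1, c1 = params
    v1 = 0.0
    v = np.empty_like(current)
    for k, i in enumerate(current):
        v1 += dt * (i / c1 - v1 / (r1 * c1))   # RC branch state
        v[k] = ocv - i * r0 - v1
    return v

true_params = np.array([0.02, 0.015, 2000.0])   # R0 [ohm], R1 [ohm], C1 [F]
measured = simulate_voltage(true_params, current) \
           + np.random.default_rng(1).normal(0, 1e-3, n)

fit = least_squares(lambda p: simulate_voltage(p, current) - measured,
                    x0=[0.01, 0.01, 1000.0],
                    bounds=([1e-4, 1e-4, 10.0], [1.0, 1.0, 1e5]))
print("estimated R0, R1, C1:", fit.x)
```

In the same spirit as the thesis, an "optimal" experiment would be one whose current profile maximizes the sensitivity of the simulated voltage to the parameter of interest, so the fit above becomes better conditioned.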
815

The impacts of varying 2D modelling strategies on flood hazard assessment in urban areas: case study of the Garonne River flood risk prevention plan

Hérault, Alexis January 2024 (has links)
Flood risk assessment in urban areas necessitates the use of advanced modeling strategies to accurately depict inundation patterns and potential impacts on communities and infrastructure. This study investigates the impacts of varying 2D modeling strategies on flood risk assessment in the context of the Garonne River flood risk prevention plan. The research focuses on building 2D hydraulic models for the Garonne and Ariège rivers using Telemac 2D, supplemented by models for their tributaries in HEC-RAS. Following calibration, the 1875 reference flood event was simulated, and sensitivity analyses were conducted on the downstream boundary conditions, the Strickler coefficient for the floodplain, and the discharge parameters. The results reveal significant impacts of these parameters on the final hazard maps, underscoring the importance of careful model calibration and parameter selection. The study also questions whether the assessment should be based on an extreme historical flood event with little data to support the accuracy of the results, or on a less extreme event that yields more accurate results but potentially a smaller safety margin.

The study highlights the critical role of high-precision topography, particularly in flat and urban areas, where traditional discharge data may be lacking and precipitation-based methods may prove less effective. The choice of modeling software also emerges as a key factor influencing the accuracy of flood hazard assessments, with variations in parameters and computation methods yielding differing outcomes.

Overall, this research underscores the complex interplay between modeling strategies, parameter selection, and topographic characteristics in urban flood risk assessment. It emphasizes the need for a nuanced approach when choosing flood events for modeling, balancing the availability of data with the accuracy and reliability of results.
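
For context on why the Strickler coefficient is a natural target for the sensitivity analysis, the short sketch below evaluates the Strickler (inverse Manning) velocity formula over a plausible range of floodplain roughness values; the channel geometry figures are assumed, not taken from the Garonne model.

```python
import numpy as np

def strickler_velocity(K, R_h, S):
    """Mean velocity V = K * R_h^(2/3) * S^(1/2), with K in m^(1/3)/s."""
    return K * R_h ** (2.0 / 3.0) * np.sqrt(S)

R_h, S = 2.5, 0.0004          # hydraulic radius [m], energy slope [-]
for K in (10, 15, 20, 30):    # typical rough-floodplain to channel range
    v = strickler_velocity(K, R_h, S)
    print(f"K = {K:2d} m^(1/3)/s  ->  V = {v:.2f} m/s")
# Velocity scales linearly with K, so halving the assumed roughness
# coefficient halves the conveyance and shifts simulated water levels,
# which is why the hazard maps respond so strongly to this parameter.
```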
816

Optimization-Based Analysis of Highly Automated Driving Simulation

Satyamohan, Sharmila 08 July 2024 (has links)
In recent years, there have been remarkable advancements in automated driving systems. Consumer protection organizations, such as Euro NCAP, play a pivotal role in enhancing the overall safety of modern vehicles. While the previous emphasis was on passive safety, the significance of active safety systems has surged in recent years. Evaluating the performance of these systems now relies on standardized test scenarios designed to simulate real-world accidents. Since an exhaustive grid search over high-dimensional test cases is exceedingly time-consuming, virtual methods are needed to supplement traditional track tests. In light of this challenge, we present a novel testing method utilizing search-based testing with Bayesian optimization to efficiently explore the expansive search space of Euro NCAP CCR scenarios and identify the performance-critical ones. The methodology incorporates the Brake Threat Number as a robust criticality metric within the fitness function, providing a reliable indicator for assessing the inevitability of collisions. Furthermore, the research uses a surrogate model, derived from the evaluation points visited by the optimization algorithm, to determine the performance-critical boundary that separates critical from non-critical scenarios. This surrogate model is also leveraged for sensitivity analysis, explaining the impact of individual parameters on the system's output.
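
A minimal sketch of the search-based testing idea, assuming the scikit-optimize package: Bayesian optimization steers scenario sampling toward critical regions. The fitness function below is an illustrative time-to-collision stand-in for the Brake Threat Number, and the scenario parameters and bounds are assumptions.

```python
from skopt import gp_minimize

def fitness(scenario):
    """Stand-in for running a CCR simulation and returning a criticality
    score; gp_minimize minimizes, so lower values = more critical here."""
    ego_speed, target_speed, headway = scenario
    closing = max(ego_speed - target_speed, 1e-3)   # km/h
    time_to_collision = headway / (closing / 3.6)   # convert km/h to m/s
    return time_to_collision                        # small TTC = critical

search_space = [(30.0, 80.0),   # ego speed [km/h]
                (0.0, 60.0),    # target speed [km/h]
                (5.0, 100.0)]   # initial headway [m]

result = gp_minimize(fitness, search_space, n_calls=40, random_state=0)
print("most critical scenario found:", result.x, "score:", result.fun)
# The fitted Gaussian-process surrogate (result.models[-1]) can then be
# probed on a dense grid to trace the boundary between critical and
# non-critical scenarios, as described above.
```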
817

A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

Cioaca, Alexandru 04 September 2013 (has links)
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration. / Ph. D.
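
For reference, the strong-constraint 4D-Var cost function at the core of such frameworks has the standard form, where $\mathbf{x}^b$ is the background state, $\mathbf{B}$ and $\mathbf{R}_i$ are the background and observation error covariances, $\mathcal{H}_i$ the observation operators, and $\mathcal{M}_{0 \to i}$ the model propagator:

```latex
J(\mathbf{x}_0) = \frac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}^b)^{\mathsf T}
                  \mathbf{B}^{-1}(\mathbf{x}_0 - \mathbf{x}^b)
  + \frac{1}{2}\sum_{i=0}^{N}
    \bigl(\mathcal{H}_i(\mathbf{x}_i) - \mathbf{y}_i\bigr)^{\mathsf T}
    \mathbf{R}_i^{-1}
    \bigl(\mathcal{H}_i(\mathbf{x}_i) - \mathbf{y}_i\bigr),
\qquad \mathbf{x}_i = \mathcal{M}_{0 \to i}(\mathbf{x}_0)
```

The 4D-Var sensitivity equations mentioned above differentiate the optimal analysis with respect to the observations through this cost function, which is where the second-order adjoint enters.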
818

A cost-benefit analysis of the bridge between Hemsön and Strinningen

Pettersson, Samuel, Samuelsson, Jesper January 2024 (has links)
In today's society, it is important to make use of optimized transport links where they are economically viable, and sound calculations and valuations are needed to understand which projects will be profitable. The purpose of this quantitative study is to examine the effects on an infrastructure project where the benefits and costs of a ferry link are compared with those of a bridge in a small community, in order to see whether a bridge connection between Hemsön and Strinningen is economically viable.

Data was collected from the current ferry route between Strinningen and Hemsön. The method chosen was a cost-benefit analysis based on the framework from ASEK 8, in which the relevant effects were identified and the calculable effects were computed. The consumer price index was applied to adjust the calculable effects to the project's start date, and the present value method was used to weight costs and benefits over the project's discount period. A Monte Carlo simulation was applied to the uncertain costs and benefits by randomly generating values within the intervals of the uncertain benefits over a large number of iterations. The result of the calculable effects showed an average loss, with a low probability that the project would be economically viable. The non-calculable effects were mainly positive but were most likely not large enough to make the average net profit positive. This means that a new bridge connection between Strinningen and Hemsön would not be economically viable over its economic lifespan.
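
The Monte Carlo step described above can be sketched in a few lines: draw the uncertain annual benefit from its interval, discount over the economic lifespan, and estimate the probability of a positive net present value. All figures below (investment, benefit interval, discount rate, lifespan) are invented for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 100_000
investment = 250e6                                   # construction cost [SEK]
annual_benefit = rng.uniform(3e6, 9e6, size=n_iter)  # uncertain yearly benefit
lifespan, rate = 60, 0.035                           # years, discount rate

# Present value of a constant annual benefit over the discount period.
annuity = (1 - (1 + rate) ** -lifespan) / rate
npv = annual_benefit * annuity - investment

print(f"mean NPV: {npv.mean() / 1e6:.1f} MSEK")
print(f"P(NPV > 0): {np.mean(npv > 0):.3f}")
```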
819

Maximizing Energy Cost Savings: A MILP-Based Energy Management System in Educational Buildings: Case Study in Stockholm

Xiao, Binli January 2024 (has links)
In Sweden, the building sector accounts for about 35% of total energy consumption. Among the major contributors are urban educational buildings, such as schools and universities, which have considerable potential for improved energy efficiency. Furthermore, Sweden aims to mitigate climate change and has set a net-zero target for greenhouse gas emissions by 2045 at the latest. Meeting this goal requires building energy management that combines advanced optimization algorithms and data science to integrate renewable sources and strategically manage loads and storage. This thesis designs an Energy Management System (EMS) optimization model that combines Mixed-Integer Linear Programming (MILP) with PV-battery sizing to meet energy demand at the lowest energy cost and carbon emissions in urban educational buildings. A case study of two educational buildings in Stockholm is used to simulate and evaluate the effectiveness of the proposed EMS model. Three main studies were made under the current electricity contract and a pre-defined PV capacity for the buildings.
The first study shows that the MILP-based EMS enables optimized decisions on solar production curtailment, grid consumption, and battery usage while satisfying the building load at the lowest possible energy cost. The MILP-based EMS achieves more flexible scheduling of batteries and PV integration than a traditional rule-based EMS, but the difference in annual savings is minimal. With a 25 kWp PV system and the proposed EMS, the electrically heated case building saves 21.49% of its energy bills annually, while the case building with district heating saves 23.35% annually. Secondly, the optimal Battery Energy Storage System (BESS) sizing is determined, with the finding that increasing the BESS size can bring higher savings, but the increase is less than 0.5% due to limited solar energy production and low feed-in income. Under current energy contracts and building conditions, the results justify the installation of PV systems but do not support investment in a BESS. Energy cost savings show no greater potential in electrically heated buildings than in traditional district-heated buildings. Finally, the third study conducts a sensitivity analysis of the BESS's levelized cost of energy (LCOE), providing the threshold LCOE of 0.27 SEK/kWh below which a PV-BESS system becomes economically beneficial.
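
A minimal sketch of a MILP-based EMS dispatch for a single day, assuming the PuLP package: minimize the electricity bill subject to an energy balance and battery constraints, with a binary variable preventing simultaneous charging and discharging (which is what makes the program mixed-integer). Prices, load, and PV profiles are invented; the thesis model additionally handles feed-in, curtailment, carbon terms, and sizing.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

T = 24
price = [0.5 if 7 <= t <= 20 else 0.2 for t in range(T)]   # SEK/kWh, assumed
load  = [20 if 8 <= t <= 17 else 5 for t in range(T)]      # kWh, school-like
pv    = [max(0, 15 - 3 * abs(t - 12)) for t in range(T)]   # kWh, bell-shaped

cap, p_max, eff = 40.0, 10.0, 0.95   # battery kWh, power limit, efficiency

prob = LpProblem("ems_day", LpMinimize)
grid  = [LpVariable(f"grid_{t}", lowBound=0) for t in range(T)]
ch    = [LpVariable(f"ch_{t}", lowBound=0, upBound=p_max) for t in range(T)]
dis   = [LpVariable(f"dis_{t}", lowBound=0, upBound=p_max) for t in range(T)]
soc   = [LpVariable(f"soc_{t}", lowBound=0, upBound=cap) for t in range(T)]
is_ch = [LpVariable(f"is_ch_{t}", cat="Binary") for t in range(T)]

prob += lpSum(price[t] * grid[t] for t in range(T))   # objective: energy bill
for t in range(T):
    prob += grid[t] + pv[t] + dis[t] == load[t] + ch[t]   # energy balance
    prev = soc[t - 1] if t > 0 else cap / 2               # start half full
    prob += soc[t] == prev + eff * ch[t] - dis[t] / eff   # SOC dynamics
    prob += ch[t] <= p_max * is_ch[t]          # the binaries forbid charging
    prob += dis[t] <= p_max * (1 - is_ch[t])   # and discharging simultaneously

prob.solve()
print("daily cost:", value(prob.objective), "SEK")
```

A rule-based EMS would instead charge whenever PV exceeds load and discharge otherwise; the MILP can additionally shift grid purchases into the cheap night hours, which is the flexibility advantage referred to above.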
820

Laser Cladding of Aluminum Alloys and High-Fidelity Modeling of the Molten Pool Dynamics in Laser Melting of Metals

Corbin M Grohol (20342745) 10 January 2025 (has links)
This research focuses on understanding and improving various metal additive manufacturing processes. The first half is dedicated to experimental investigations and methods for improving the laser cladding of aluminum alloys. The second half is dedicated to high-fidelity modeling of the laser melting process and methods for reducing the computational burden.

First, laser cladding is a surface enhancement and repair process in which a high-powered laser beam is used to deposit a thin (0.05 mm to 2 mm) layer of material onto a metal substrate with no cracking, minimal porosity, and satisfactory mechanical properties. In this work, a 4 kW High Power Diode Laser (HPDL) is used with off-axis powder injection to deposit single tracks of aluminum alloy 6061 powder on a 6061-T6511 substrate. The process parameters were varied to identify the processing window in which a successful clad is achieved. Geometrical characteristics were correlated to the processing parameters and the trends were discussed. Microhardness testing was employed to examine the mechanical properties of the clad in the as-deposited and precipitation heat-treated conditions. Transmission electron microscopy (TEM) was used to investigate the precipitate structures in the clad and substrate as an explanation for the hardness variations. Experiments were completed on two substrate widths to understand the effect of domain size on the process map, layer size, and hardness.

Second, a method to deposit quench-sensitive age-hardening aluminum alloy clads is presented, which produces a hardness similar to the T6 temper without the requirement of solution heat treatment. A high-powered diode laser is scanned across the workpiece surface and material feedstock is delivered and melted via off-axis powder injection. The cladding process is immediately followed by quenching with liquid nitrogen, which improves the cooling rate of the quench-sensitive material and increases the hardness response to subsequent precipitation heat treatment. The method was demonstrated on the laser cladding of aluminum alloy 6061 powder on 6061-T6511 extruded bar substrates of 12.7 mm thickness. Single-track single-layer clads were deposited at a laser power of 3746 W, scan speed of 5 mm/s, and powder feed rate of 18 g/min. The in-situ liquid nitrogen quenching improved the clad hardness by 15.7% from 73.1 HV to 84.6 HV and the heat-affected zone hardness by 19.3% from 87.1 HV to 103.9 HV. Extending the process to multi-track multi-layer cladding further increased the clad hardness to 89.3 HV, close to the T6 temper hardness of 90 HV. Transmission electron microscopy revealed that the increased precipitate density in the liquid nitrogen quenched clads was responsible for the higher hardness.

Third, a high-fidelity model of the molten pool dynamics during the laser melting of metals is presented for accurate prediction of the molten pool size and morphology at operating conditions relevant to laser powder bed fusion. The goal of this research is to improve the accuracy of previous models, present a thorough experimental validation, and quantify the model's sensitivity to various properties and parameters. The model is based on an OpenFOAM compressible Volume-of-Fluid (VOF) solver that is modified to include the physics relevant to laser melting. Improvements to previous works include the utilization of a compressible solver to incorporate temperature-dependent density, implementation of temperature-dependent surface tension and viscosity, utilization of the geometric isoAdvector VOF method, selection of a least squares method for the gradient calculations, and careful selection of physically accurate material properties. These model improvements resulted in accurate prediction of the molten pool depth and width (mean absolute error of 7% and 5%, respectively) across eleven operating conditions spanning the conduction and keyhole regimes with laser powers ranging from 100 W to 325 W and scan speeds from 250 mm/s to 1,200 mm/s. The validation included in-house experiments on 304L stainless steel and experiments from the National Institute of Standards and Technology on Inconel 718. Incorporating the large density change from the ambient temperature to the vaporization temperature and utilizing a least squares scheme for the gradient calculation were identified as important factors for the predictive accuracy of the model. The model sensitivity to the wide range of literature values for laser absorptivity, liquid thermal conductivity, and vaporization temperature was quantified. Literature sources were analyzed to identify the most physically accurate property values and reduce the impact of their variability on model predictions.

Finally, an original surrogate model is presented for the accurate and computationally efficient prediction of molten pool size in multi-track laser melting over a large domain at operating conditions relevant to laser powder bed fusion. The thermal models available for the laser melting process range from heat conduction models to high-fidelity computational fluid dynamics (CFD) models. High-fidelity models provide a comprehensive treatment of the relevant physics of heat conduction, fluid flow, solidification, vaporization, laser propagation, etc. A carefully implemented high-fidelity model is capable of accurately predicting the molten pool dynamics in a broad range of operating conditions. However, the high computational expense limits their application to a few short tracks on small domains. Conduction models, on the other hand, are orders of magnitude cheaper to evaluate but lack the necessary physics for accurate predictions. This research presents a surrogate model that combines the computational efficiency of the conduction model with the accuracy of the high-fidelity model. A conduction model and a high-fidelity model are simulated over a small scan pattern to generate training data of the highly transient molten pool depth and width. A surrogate model, consisting of a fuzzy basis function network, is trained with the aforementioned data. The conduction model is then simulated over a larger scan pattern and its results are input into the trained surrogate model, thereby producing high-fidelity predictions of the molten pool size over the larger scan pattern. Comparison with experimental results shows this surrogate modeling framework provides reasonably accurate predictions of the molten pool size and is a valid way to extend computationally intensive high-fidelity models to larger and more industrially relevant scan patterns.
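
The surrogate idea in the final study can be sketched as follows, with scipy's RBF interpolator standing in for the fuzzy basis function network and entirely synthetic training data: learn a mapping from cheap conduction-model outputs to high-fidelity pool depths on a short track, then apply it across a larger scan pattern.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(7)

# Training data: features from the conduction model on a short track
# (its predicted depth and width, in micrometers), targets from the
# high-fidelity CFD model. All values here are synthetic.
n_train = 200
conduction_feat = rng.uniform([50, 80], [150, 250], size=(n_train, 2))
cfd_depth = (1.4 * conduction_feat[:, 0]
             + 0.1 * conduction_feat[:, 1]
             + rng.normal(0, 2.0, n_train))   # pseudo high-fidelity depth

surrogate = RBFInterpolator(conduction_feat, cfd_depth,
                            kernel="thin_plate_spline", smoothing=1.0)

# Deployment: run only the cheap conduction model over the large scan
# pattern, then correct its output through the trained surrogate.
new_feat = rng.uniform([50, 80], [150, 250], size=(5, 2))
print("surrogate-corrected depths (um):", np.round(surrogate(new_feat), 1))
```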
