41

Optimisation des paramètres de carbone de sol dans le modèle CLASSIC à l'aide d'optimisation bayésienne et d'observations / Optimization of Soil Carbon Parameters in the CLASSIC Model Using Bayesian Optimization and Observations

Gauthier, Charles 04 1900 (has links)
The soil carbon pool is a vital component of the global carbon cycle and, therefore, the climate system. Soil organic carbon (SOC) is the largest carbon pool in terrestrial ecosystems. This pool stores a large quantity of carbon that plants have removed from the atmosphere through photosynthesis. Because of this, soils are considered a viable climate change mitigation strategy for lowering the global atmospheric CO2 concentration that is presently being driven higher by anthropogenic fossil CO2 emissions. Despite its importance, considerable uncertainties remain around the size of the global SOC pool and its response to a changing climate. Terrestrial biosphere models (TBMs) simulate the biogeochemical processes within ecosystems and are critical tools for quantifying and studying SOC dynamics. These models can also simulate the future behavior of SOC if carefully applied and given the proper meteorological forcings. However, TBM predictions of SOC dynamics carry high uncertainties, due in part to equifinality. To improve our understanding of SOC dynamics, this research optimized the parameters of the soil carbon scheme contained within the Canadian Land Surface Scheme Including Biogeochemical Cycles (CLASSIC) to better represent SOC dynamics. A global sensitivity analysis was performed to identify which of the 16 parameters of the soil carbon scheme did not affect simulated SOC stocks and soil respiration (Rsoil). The sensitivity analysis used observations from three eddy covariance sites, both for computational efficiency and to encapsulate the range of climates represented by the global soil carbon scheme. The analysis revealed that some parameters of the soil carbon scheme did not contribute to the variance of simulated SOC and Rsoil; these parameters were excluded, reducing the dimensionality of the optimization problem. Four optimization scenarios were then created based on the sensitivity analysis, each using a different set of parameters, to assess the impact of the number of parameters included. Two different loss functions were used in the optimization to assess the impact of accounting for observational error. Comparing the optimal parameters obtained with the two loss functions showed that the choice of loss function affected the optimized parameter sets. To determine which optimized parameter set was most skillful, each was compared against independent data sets and global estimates of SOC not used in the optimization, using comparison metrics based on root-mean-square deviation and bias. One parameter set outperformed the other optimized sets as well as the default parameterization of the model, indicating that the optimization framework can produce parameter sets that simulate SOC and Rsoil values closer to observations than default CLASSIC, improving the representation of soil carbon dynamics. This optimal parameter set was then applied in future simulations (2015-2100) of SOC dynamics to assess its impact on CLASSIC's projections. These simulations showed that the optimal parameter set simulated future global SOC content 62% higher than the default parameter set while simulating similar Rsoil fluxes. They also showed that both the optimized and default parameter sets projected the SOC pool to remain a net carbon sink through 2100, with regional net sources, notably in tropical regions. This parameter set should help improve our understanding of soil carbon dynamics.
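The screening step described in this abstract is a standard variance-based (Sobol) global sensitivity analysis. A minimal sketch of that kind of screening using the SALib library follows; the parameter names, bounds, and toy respiration model are illustrative stand-ins, not the actual CLASSIC soil carbon scheme:

```python
# Variance-based (Sobol) screening of soil-carbon parameters -- a minimal
# sketch of the global sensitivity analysis described above. Parameter
# names, bounds, and the toy site model are placeholders, NOT CLASSIC.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["base_turnover", "q10", "depth_atten"],  # hypothetical names
    "bounds": [[0.01, 0.5], [1.0, 3.0], [0.1, 5.0]],
}

def toy_rsoil(x):
    """Stand-in for a site-level model run returning soil respiration."""
    k, q10, z = x
    return k * q10 ** 1.5 * np.exp(-0.3 * z)

X = saltelli.sample(problem, 1024)            # N * (2D + 2) model runs
Y = np.apply_along_axis(toy_rsoil, 1, X)
Si = sobol.analyze(problem, Y)

# Parameters with total-order index ST near zero do not contribute to the
# output variance and can be dropped from the optimization, as in the study.
for name, st in zip(problem["names"], Si["ST"]):
    print(f"{name}: ST = {st:.3f}")
```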
42

Far Field EM Side-Channel Attack Based on Deep Learning with Automated Hyperparameter Tuning

Liu, Keyi January 2021 (has links)
Side-channel attacks have become a realistic threat to implementations of cryptographic algorithms. By analyzing unintentional side-channel leakage, an attacker is able to recover the secret of the target. Recently, a new type of side-channel leakage has been discovered: far field EM emissions. Unlike attacks based on near field EM emissions or power consumption, attacks based on far field EM emissions can extract the secret key from a victim device several meters away. However, existing deep-learning attacks based on far field EM commonly use random or grid search to optimize the neural networks' hyperparameters. Recently, an automated approach to deep learning hyperparameter tuning based on the Auto-Keras library, called the AutoSCA framework, was applied to near field EM attacks. In this work, we investigate whether AutoSCA can help far field EM side-channel attacks. In our experiments, the target is an implementation of the Advanced Encryption Standard (AES) on a Bluetooth 5-enabled Nordic Semiconductor nRF52832 development kit. Our experiments show that, using a deep-learning model generated by the AutoSCA framework, we need 485 traces on average to recover a subkey from traces captured 15 meters from the victim device without repeating each encryption. Under the same conditions, the state-of-the-art method needs 510 traces. Furthermore, our model contains only 667,433 trainable parameters in total, implying that it requires roughly 9 times fewer training resources than the larger models in previous work.
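The automated tuning described here builds on the Auto-Keras library. Below is a minimal sketch of an Auto-Keras architecture and hyperparameter search for a trace classifier, under the assumption of one key-byte label per trace (256 classes); the trace files, shapes, and trial budget are hypothetical, and this is not the AutoSCA framework itself:

```python
# Minimal sketch of Auto-Keras-driven model search for a side-channel
# trace classifier, in the spirit of the AutoSCA approach described above.
# File names, trace shapes, labels, and max_trials are assumptions.
import numpy as np
import autokeras as ak

# X: (n_traces, n_samples) far field EM traces; y: key-byte labels 0..255
X = np.load("traces.npy")   # hypothetical file
y = np.load("labels.npy")   # hypothetical file

model = ak.AutoModel(
    inputs=ak.Input(),
    outputs=ak.ClassificationHead(num_classes=256),
    max_trials=20,          # number of candidate architectures to explore
    overwrite=True,
)
model.fit(X, y, epochs=50, validation_split=0.1)

# Key recovery then ranks key hypotheses by accumulating log-probabilities
# of the predicted intermediates over the attack traces (not shown here).
```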
43

Bayesian Off-policy Sim-to-Real Transfer for Antenna Tilt Optimization

Larsson Forsberg, Albin January 2021 (has links)
Choosing the correct angle of electrical tilt for a radio base station is essential when optimizing for coverage and capacity. A reinforcement learning agent can be trained to make this choice. If training the agent in the real world is restricted or even impossible, alternative methods can be used. Training in simulation, combined with an approximation of the real world, is one such option, and it comes with a set of challenges associated with the reality gap. In this thesis, a method based on Bayesian optimization is implemented to tune the environment in which domain randomization is performed, improving the quality of the simulation training. The results show that using Bayesian optimization to find a good subset of parameters works even when access to the real world is constrained. Two off-policy estimators, based on inverse propensity scoring and direct method evaluation, were tested in combination with an offline dataset of previously collected cell traces. The method manages to find an isolated subspace of the whole domain that optimizes the randomization while still giving good performance in the target domain.
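The two off-policy estimators mentioned, inverse propensity scoring (IPS) and the direct method (DM), can be summarized compactly. A minimal sketch under assumed data layouts (logged propensities, rewards, and a fitted reward model); this is not the thesis's implementation:

```python
# Minimal sketch of the two off-policy estimators mentioned above:
# inverse propensity scoring (IPS) and the direct method (DM). The data
# layout (logged propensities, rewards, a fitted reward model) is an
# illustrative assumption.
import numpy as np

def ips_estimate(target_probs, logging_probs, rewards):
    """IPS: reweight logged rewards by the target/logging probability ratio."""
    weights = target_probs / logging_probs
    return float(np.mean(weights * rewards))

def dm_estimate(reward_model, contexts, target_actions):
    """DM: average a fitted reward model r_hat(x, a) under the target policy."""
    features = np.column_stack([contexts, target_actions])
    return float(reward_model.predict(features).mean())

# With logged tilt decisions (hypothetical arrays and policies):
# v_ips = ips_estimate(pi_target_probs, pi_logging_probs, r)
# v_dm  = dm_estimate(model_fit_on_logged_data, x, a_target)
```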
44

Efficient Sequential Sampling for Neural Network-based Surrogate Modeling

Pavankumar Channabasa Koratikere (15353788) 27 April 2023 (has links)
Gaussian Process Regression (GPR) is a widely used surrogate model in efficient global optimization (EGO) due to its capability to provide uncertainty estimates along with its predictions. However, the cost of creating a GPR model for large data sets is high. Neural network (NN) models, by contrast, scale better than GPR as the number of samples increases, but uncertainty estimates for NN predictions are not readily available. In this work, a scalable algorithm is developed for EGO using NN-based prediction and uncertainty (EGONN). Two NNs are created using two different data sets: the first NN models the output based on the input values in the first data set, while the second NN models the prediction error of the first NN using the second data set. The next infill point is added to the first data set based on criteria such as expected improvement or prediction uncertainty. EGONN is demonstrated on the optimization of the Forrester function and a constrained Branin function and is compared with EGO; in both cases the convergence criterion is a maximum number of infill points, and the algorithm reaches the optimum within the given budget.

EGONN is then extended to handle constraints explicitly and is applied to aerodynamic shape optimization of the RAE 2822 airfoil in transonic viscous flow at a free-stream Mach number of 0.734 and a Reynolds number of 6.5 million. The results are compared with gradient-based optimization (GBO) using adjoints. The optimum shape obtained from EGONN is comparable to the shape obtained from GBO and eliminates the shock; the drag coefficient is reduced from 200 drag counts to 114, close to the 110 drag counts obtained from GBO.

Finally, EGONN is extended to uncertainty quantification (uqEGONN), using prediction uncertainty as the infill criterion and the relative change of summary statistics, such as the mean and standard deviation of the uncertain quantity, as the convergence criterion. uqEGONN is tested on the Ishigami function with an initial sample of 100 points; the algorithm terminates after 70 infill points, and the statistics obtained (using only 170 function evaluations) are close to the values obtained from one million direct function evaluations. uqEGONN is also demonstrated on quantifying the uncertainty in airfoil performance due to geometric variations, terminating within 100 computational fluid dynamics (CFD) analyses and yielding statistics close to those obtained from 1000 direct CFD evaluations.
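EGONN's infill step relies on an expected-improvement criterion computed from the predictor network's mean and the error network's uncertainty estimate. A minimal sketch of that criterion for minimization, with the two networks left as hypothetical calls:

```python
# Minimal sketch of the expected-improvement infill criterion EGONN uses,
# given a predictor NN (mean) and an error NN (uncertainty proxy). Both
# network calls below are hypothetical stand-ins.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero uncertainty
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# mu = predictor_nn(X_cand); sigma = error_nn(X_cand)   # hypothetical calls
# next_x = X_cand[np.argmax(expected_improvement(mu, sigma, y_best))]
```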
45

Computationally Efficient Explainable AI: Bayesian Optimization for Computing Multiple Counterfactual Explanations

Sacchi, Giorgio January 2023 (has links)
In recent years, advanced machine learning (ML) models have revolutionized industries ranging from the healthcare sector to retail and e-commerce. However, these models have become increasingly complex, making it difficult even for domain experts to understand and retrace a model's decision-making process. To address this challenge, several frameworks for explainable AI have been proposed and developed. This thesis focuses on counterfactual explanations (CFEs), which provide actionable insights by informing users how to modify inputs to achieve desired outputs. Computing CFEs for a general black-box ML model is computationally expensive, since it hinges on solving a challenging optimization problem. To solve this optimization problem efficiently, we propose using Bayesian optimization (BO) and introduce the novel algorithm Separated Bayesian Optimization (SBO). SBO exploits the formulation of the counterfactual function as a composite function. Additionally, we propose warm-starting SBO, which addresses the computational challenges associated with computing multiple CFEs. By decoupling the generation of a surrogate model for the black-box model from the computation of specific CFEs, warm-starting SBO allows previous data and computations to be reused, yielding computational discounts and improved efficiency for large-scale applications. Through numerical experiments, we demonstrate that BO is a viable optimization scheme for computing CFEs for black-box ML models: it achieves computational efficiency while maintaining good accuracy. SBO improves upon this, requiring fewer evaluations while achieving accuracies comparable to the best conventional optimizer tested. Both BO and SBO handle a wider range of ML decision models than the tested baseline optimizers. Finally, warm-starting SBO significantly enhances the performance of SBO, reducing function evaluations and errors when computing multiple sequential CFEs. The results indicate strong potential for large-scale industry applications.
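A counterfactual explanation can be cast as a black-box optimization over inputs, which is what makes BO applicable here. A minimal sketch of a plain-BO variant (not SBO itself) using scikit-optimize, where the stand-in classifier, instance, bounds, and loss weighting are illustrative assumptions:

```python
# Minimal sketch of computing one counterfactual explanation with plain
# Bayesian optimization via scikit-optimize (NOT the SBO algorithm).
# The stand-in classifier, x0, bounds, and loss weights are assumptions.
import numpy as np
from skopt import gp_minimize

def black_box_proba(x):
    """Stand-in for an opaque ML model's P(class = 1 | x)."""
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - x[1] + 0.5 * x[2])))

x0 = np.array([0.2, 0.7, 1.5])          # instance to explain

def cfe_loss(x):
    # Trade off flipping the prediction against proximity to x0.
    return (1.0 - black_box_proba(x)) ** 2 + 0.1 * np.linalg.norm(np.asarray(x) - x0)

res = gp_minimize(cfe_loss, dimensions=[(-3.0, 3.0)] * 3, n_calls=40, random_state=0)
x_cf = np.asarray(res.x)                # candidate counterfactual input
print("counterfactual:", x_cf, "P(1):", black_box_proba(x_cf))
```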
46

Auto-Tuning Apache Spark Parameters for Processing Large Datasets

Zhou, Shidi January 2023 (has links)
Apache Spark is a popular open-source distributed processing framework that enables efficient processing of large amounts of data. Apache Spark has a large number of configuration parameters that strongly affect performance. Selecting an optimal configuration for an Apache Spark application deployed in a cloud environment is a complex task, and a poor choice may not only degrade performance but also increase costs. Manually adjusting the Apache Spark configuration parameters can take a lot of time and may not lead to the best outcomes, particularly in a cloud environment where computing resources are allocated dynamically and workloads can fluctuate significantly. The focus of this thesis project is the development of an auto-tuning approach for Apache Spark configuration parameters. Four machine learning models are formulated and evaluated to predict Apache Spark's performance, and two models for searching the configuration parameter space are created and evaluated to identify the parameters yielding the shortest execution time. The results demonstrate that with the developed auto-tuning approach, Apache Spark applications achieve shorter execution times than with the default parameters. The approach yields improved cluster utilization and shorter job execution times, with average performance improvements of 49.98%, 53.84%, and 64.16% for the three types of Apache Spark applications benchmarked.
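The approach pairs a learned performance model with a configuration search. A minimal sketch of that predict-then-search pattern; the chosen parameters, ranges, and synthetic training data are placeholders, not the thesis's actual models or benchmarks:

```python
# Minimal sketch of predict-then-search auto-tuning: fit a model mapping
# Spark configuration parameters to execution time, then search candidate
# configurations for the predicted-fastest one. Parameters, ranges, and
# the synthetic training data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# executor_memory_gb, executor_cores, shuffle_partitions
lo, hi = [2, 1, 50], [16, 8, 1000]

# In practice X/y come from logged benchmark runs; synthetic placeholder here.
X_hist = rng.uniform(lo, hi, size=(200, 3))
y_hist = (600 / X_hist[:, 0]                       # more memory -> faster
          + 30 * np.abs(X_hist[:, 2] - 400) / 400  # partition sweet spot
          + rng.normal(0, 5, 200))                 # measurement noise

model = GradientBoostingRegressor().fit(X_hist, y_hist)

candidates = rng.uniform(lo, hi, size=(5000, 3))
best = candidates[np.argmin(model.predict(candidates))]
print("suggested (mem_gb, cores, partitions):", np.round(best, 1))
```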
47

Solid-Solution Strengthening and Suzuki Segregation in Co- and Ni-based Alloys

Dongsheng Wen (12463488) 29 April 2022 (has links)
Co and Ni are two major elements in high-temperature structural alloys, including superalloys for turbine engines and hard metals for cutting tools. The recent development of complex concentrated alloys (CCAs), loosely defined as alloys without a single principal element (e.g., CoNiFeMn), offers additional opportunities for designing new alloys through extensive composition and structure modifications. Within CCAs and Co- and Ni-based superalloys, solid-solution strengthening and stacking fault energy engineering are two of the most important strengthening mechanisms. While studied for decades, the potency and quantitative materials properties of these mechanisms remain elusive.

Solid-solution strengthening originates from stress-field interactions between dislocations and solutes of various species in the alloy. These stress fields can be engineered by composition modification in CCAs, and therefore a wide range of alloys with promising mechanical strength may be designed. This thesis initially reports on experimental and computational validation of newly developed theories of solid-solution strengthening in 3d transition metal (MnFeCoNi) alloys. The strengthening effects of Al, Ti, V, Cr, Cu, and Mo as alloying elements are quantified by coupling a Labusch-type strengthening model with experimental measurements. With large atomic misfits relative to the base alloy, Al, Ti, Mo, and Cr show strong strengthening effects comparable to other Cantor alloys.

Stacking fault energy engineering can enable novel deformation mechanisms and exceptional strength in face-centered cubic (FCC) materials such as austenitic TRIP/TWIP steels and CoNi-based superalloys, which exhibit local phase transformation strengthening via Suzuki segregation. We employed first-principles calculations to investigate Suzuki segregation and the stacking fault energy of FCC Co-Ni binary alloys at finite temperatures and concentrations. We quantitatively predicted Co segregation in the innermost plane of the intrinsic stacking fault (ISF) and quantified the decrease in stacking fault energy due to segregation.

We further investigated the driving force of segregation and the origin of the segregation behaviors of 3d, 4d, and 5d elements in Co- and Ni-alloys. Using first-principles calculations, we computed the ground-state solute-ISF interaction energies and revealed trends across the periodic table, discussing the relationships between the interaction energies and the local lattice distortions, charge density redistribution, density of states, and local magnetization of the solutes.

Finally, this thesis reports on new methodologies for accelerating first-principles calculations using active learning techniques, such as Bayesian optimization, to efficiently search for the ground-state energy line of a system with limited computational resources. New acquisition strategies based on the expected improvement method are developed, compared, and presented.
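The Labusch-type model referenced above predicts a characteristic scaling of solute strengthening with concentration and misfit. A minimal sketch of one common form of that scaling law, with placeholder inputs rather than the thesis's fitted values:

```python
# Minimal sketch of a Labusch-type scaling law like the one used above to
# quantify solute strengthening: strengthening grows as the 2/3 power of
# solute concentration and the 4/3 power of the misfit parameter.
# The prefactor and inputs below are placeholders, not fitted values.
def labusch_strengthening(conc, misfit, shear_modulus, prefactor=1.0):
    """Delta_sigma ~ A * G * misfit**(4/3) * conc**(2/3)."""
    return prefactor * shear_modulus * misfit ** (4.0 / 3.0) * conc ** (2.0 / 3.0)

# e.g. 5 at.% of a solute with misfit 0.04 in a matrix with G = 80 GPa:
print(labusch_strengthening(0.05, 0.04, 80e9), "Pa (illustrative)")
```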
48

Personalization of Bone Remodelling Simulation Models for Clinical Applications

Gutiérrez Gil, Jorge 15 January 2024 (has links)
Access to high-quality healthcare is an important marker of the development of human societies. Technological contributions to medicine have shown relevant potential to uncover effective procedures at the preventive, diagnostic, and therapeutic levels. In particular, computational methods enable efficient processing of medical data and can therefore model complex biological systems. This has driven the development of Personalized Medicine (PM) over recent decades, in which case-specific knowledge allows tailored interventions at an affordable resource cost. Simulation of bone remodelling is a promising field in the context of PM: predicting the bone adaptation process in a specific case can lead to numerous applications in the field of bone diseases, both clinical and experimental. By combining the Finite Element Method (FEM) with bone remodelling algorithms, it is possible to obtain numerical models of a specific bone from medical data (for example, a CT scan). All of this can lead to a revolution in personalized medicine.
/ Thanks to the Valencian funding programme FDGENT/2018 for providing economic resources to develop this long-term work. / Gutiérrez Gil, J. (2023). Personalization of Bone Remodelling Simulation Models for Clinical Applications [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202059
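For orientation, FEM-coupled bone remodelling typically iterates a density update driven by a mechanical stimulus. A minimal sketch of a classic strain-energy-based (Huiskes-type) update rule; the constants and the per-element strain energy input are illustrative assumptions, not the personalization method developed in this thesis:

```python
# Minimal sketch of a classic strain-energy-driven bone remodelling update
# (Huiskes-type rule) of the kind coupled to FEM in this field. Constants
# and the strain-energy input are illustrative assumptions.
import numpy as np

def remodel_step(rho, sed, k_ref=0.004, width=0.1, rate=0.02, dt=1.0,
                 rho_min=0.01, rho_max=1.74):
    """One density update from element strain energy density (SED)."""
    stimulus = sed / rho                          # SED per unit bone mass
    drho = np.zeros_like(rho)
    drho[stimulus > (1 + width) * k_ref] = rate   # overload -> apposition
    drho[stimulus < (1 - width) * k_ref] = -rate  # disuse   -> resorption
    return np.clip(rho + dt * drho, rho_min, rho_max)

# Toy loop; in practice `sed` comes from a FEM solve on the CT-derived mesh.
rho = np.full(10, 0.8)                            # g/cm^3, per element
sed = np.linspace(0.001, 0.008, 10)               # placeholder stimulus
for _ in range(50):
    rho = remodel_step(rho, sed)
print(rho)
```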
