551

Consolidating Multi-Factor Models of Systematic Risk with Regulatory Capital

Ribom, Henrik January 2018
To maintain solvency in times of severe economic downturns, banks and financial institutions keep capital cushions that reflect the risks in the balance sheet. Broadly, how much capital is held is a combination of external requirements from regulators and internal assessments of credit risk. We discuss alternatives to the Basel Pillar II capital add-on based on multi-factor models for held capital, and how these can be applied so that only concentration (or sector) risk affects the outcome, even in a portfolio with prominent idiosyncratic risk. Further, the stability and reliability of these models are evaluated. We found that this idiosyncratic risk can efficiently be removed both on a sector and a portfolio level, and that the multi-factor models tested converge. We introduce two new indices based on Risk Weighted Assets (RI) and Economic Capital (EI). Both show the desired effect of an intuitive dependence on the PD and LGD. Moreover, EI shows a dependence on the inter-sector correlation. In the sample portfolio, we show that the high concentration in one sector could be (better) justified by these methods when the low average LGD and PD of this sector were taken into consideration.
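To see how sector concentration and inter-sector correlation drive economic capital in a multi-factor setting, the following is a minimal Monte Carlo sketch of a Gaussian multi-factor default model. The portfolio, PDs, LGDs, factor loadings, and the 99.9% quantile are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative two-sector portfolio (assumed values, not from the thesis)
n = 200
sector = rng.integers(0, 2, n)                 # sector membership
pd_ = np.where(sector == 0, 0.01, 0.03)        # probability of default
lgd = np.where(sector == 0, 0.25, 0.45)        # loss given default
rho = 0.20                                     # loading on the sector factor
inter = 0.5                                    # inter-sector correlation

sims = 20_000
chol = np.linalg.cholesky([[1.0, inter], [inter, 1.0]])
z = rng.standard_normal((sims, 2)) @ chol.T    # correlated sector factors
eps = rng.standard_normal((sims, n))           # idiosyncratic shocks

# Merton-style latent variable: default when it falls below the PD threshold
x = np.sqrt(rho) * z[:, sector] + np.sqrt(1.0 - rho) * eps
loss = ((x < norm.ppf(pd_)) * lgd).sum(axis=1) / n   # portfolio loss rate

# Economic capital as the 99.9% loss quantile in excess of expected loss
ec = np.quantile(loss, 0.999) - loss.mean()
print(f"expected loss {loss.mean():.4f}, economic capital {ec:.4f}")
```

Raising `inter` or concentrating exposures in one sector pushes the loss quantile up; that concentration effect is what indices like RI and EI are designed to capture.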
552

Algorithm that creates product combinations based on customer data analysis: An approach with Generalized Linear Models and Conditional Probabilities

Uyanga, Enkhzul, Wang, Lida January 2017
This bachelor’s thesis is a combined study of applied mathematical statistics and industrial engineering and management, implemented to develop an algorithm that creates product combinations based on customer data analysis for eleven AB. Mathematically, generalized linear modelling, combinatorics, and conditional probabilities were applied to create sales prediction models, generate potential combinations, and calculate the conditional probabilities of the combinations being purchased. SWOT analysis was used to identify which factors can enhance sales from an industrial engineering and management perspective. Based on the regression analysis, the study showed that the considered variables (sales prices, brands, ratings, purchase countries, purchase months, and how new the products are) affected the sales amounts of the products. The algorithm takes a barcode of a product as input and checks whether the corresponding product type satisfies the requirements on predicted sales amount and conditional probability. The algorithm then returns a list of possible product combinations that fulfil the recommendations.
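As an illustration of the modelling step, here is a minimal sketch of a Poisson GLM for sales counts plus a basket-based conditional purchase probability. The variable names and data are invented for the example; the thesis's actual link function and covariates may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy product data (invented for illustration)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "price": rng.uniform(50, 500, 300),
    "rating": rng.uniform(1, 5, 300),
    "is_new": rng.integers(0, 2, 300),
})
lam = np.exp(1.0 - 0.004 * df["price"] + 0.3 * df["rating"] + 0.2 * df["is_new"])
df["sales"] = rng.poisson(lam)

# Poisson GLM: predicted sales as a function of product attributes
X = sm.add_constant(df[["price", "rating", "is_new"]])
model = sm.GLM(df["sales"], X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])

# Conditional probability that B is bought given A, estimated from baskets:
# P(B | A) = count(A and B together) / count(A)
baskets = [{"a", "b"}, {"a"}, {"a", "b", "c"}, {"b"}, {"a", "b"}]
p_b_given_a = (sum(1 for s in baskets if {"a", "b"} <= s)
               / sum(1 for s in baskets if "a" in s))
print(f"P(b | a) = {p_b_given_a:.2f}")
```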
553

Modelling management fees of mutual funds using multiple linear regression

Hallberg, David, Renström, Erik January 2017
This paper seeks to investigate whether management fees, set by mutual funds, rely on a set of explanatory variables. The study includes equity, bond, and money market funds, all investing in securities registered in Sweden. Results obtained from the project show that changes in assets under management, standard deviation, and tracking error, over a course of 5 years, can provide some explanation of what management fees mutual funds set. In turn, this raises many interesting questions about how capital flows and fund differentiation affect the fees. Also, a market analysis of the Swedish fund market shows that elements of monopolistic competition are present. Finally, because of the scope of this study, several suggestions for further research have been made.
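A minimal sketch of the kind of multiple linear regression described here, using ordinary least squares on invented fund-level data; the actual variable set and sample in the thesis differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented fund-level data for illustration only
rng = np.random.default_rng(2)
n = 150
funds = pd.DataFrame({
    "delta_aum": rng.normal(0, 1, n),        # change in assets under management
    "std_dev": rng.uniform(0.02, 0.25, n),   # return standard deviation
    "tracking_error": rng.uniform(0.0, 0.08, n),
})
funds["fee"] = (0.4 + 2.0 * funds["std_dev"] + 5.0 * funds["tracking_error"]
                - 0.05 * funds["delta_aum"] + rng.normal(0, 0.1, n))

# Ordinary least squares: fee regressed on the explanatory variables
X = sm.add_constant(funds[["delta_aum", "std_dev", "tracking_error"]])
fit = sm.OLS(funds["fee"], X).fit()
print(fit.params)                # estimated coefficients
print(f"R^2 = {fit.rsquared:.3f}")
```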
554

Modelling Non-life Insurance Policyholder Price Sensitivity: A Statistical Analysis Performed with Logistic Regression

Hardin, Patrik, Tabari, Sam January 2017
This bachelor thesis within mathematical statistics studies the possibility of modelling the renewal probability for commercial non-life insurance policyholders. The project was carried out in collaboration with the non-life insurance company If P&C Insurance Ltd. at their headquarters in Stockholm, Sweden. The paper includes an introduction to underlying concepts within insurance and mathematics, a detailed review of the analytical process, and a discussion and conclusions. The first stages of the project were the initial collection and processing of explanatory insurance data and the development of a logistic regression model for policy renewal. An initial model was built, and modern methods of mathematics and statistics were applied in order to obtain a final model consisting of 9 significant characteristics. The regression model had a predictive power of 61%, which suggests that it is possible, to a certain degree, to predict the renewal probability of non-life insurance policyholders based on their characteristics. The results from the final model were ultimately translated into a measure of price sensitivity which can be implemented in both pricing models and CRM systems. We believe that price sensitivity analysis, if done correctly, is a natural step in improving the current pricing models in the insurance industry, and this project provides a foundation for further research in this area.
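As a sketch of this modelling approach (not the thesis's actual model or data), here is a logistic regression for renewal probability with a simple price-sensitivity read-out; the policyholder characteristics are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Invented policyholder features: premium change at renewal, tenure, claims
rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.normal(0.05, 0.10, n),   # relative premium change offered
    rng.integers(1, 15, n),      # years as customer
    rng.poisson(0.3, n),         # number of claims last year
])
# Renewal more likely with low premium increase and long tenure (toy rule)
logit = 1.5 - 8.0 * X[:, 0] + 0.1 * X[:, 1] - 0.4 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")

# A simple price-sensitivity read-out: renewal probability vs. premium change
for dp in (0.0, 0.05, 0.10):
    p = clf.predict_proba([[dp, 5, 0]])[0, 1]
    print(f"premium change {dp:+.0%}: renewal probability {p:.2f}")
```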
555

An Experimental Study of the High-Lift System and Wing-Body Junction Wake Flow Interference of the NASA Common Research Model

Brundin, Desirée January 2017
This thesis investigates the turbulent flow in the wake of the wing-body junction of the NASA Common Research Model to further reveal its complex vortical structure and to contribute to the reference database used for Computational Fluid Dynamics validation activities. Compressible flows near two-wall boundary layers occur not only at the wing-body junction but at every control surface of an airplane; increased knowledge about this complex flow structure could therefore improve estimates of drag performance and control surface efficiency, primarily for minimizing the environmental impact of commercial flight. The airplane model is modified by adding an inboard flap to investigate the influence of the deflection on the vorticity and velocity fields. Future flap designs and settings are discussed from a performance improvement point of view, with the investigated flow influence in mind. The experimental measurements for this thesis were collected using a Cobra Probe, a dynamic multi-hole pressure probe, at Reynolds numbers close to one million based on the wing root chord. A pre-programmed three-dimensional grid was used to cover the most interesting parts of the junction flow. The facility used for the tests is a 120 cm by 80 cm indraft, subsonic wind tunnel at NASA Ames Research Center's Fluid Mechanics Lab, which provides an onset flow speed of around Mach 0.15, corresponding to approximately 48 m/s.
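As a back-of-envelope check on the quoted flow conditions, a short sketch relating the onset speed and Reynolds number to an implied root chord; the air properties are standard sea-level assumptions, and the actual model chord is not stated in the abstract.

```python
# Back-of-envelope flow conditions (standard sea-level air assumed)
U = 48.0          # onset flow speed [m/s]
nu = 1.46e-5      # kinematic viscosity of air [m^2/s]
Re = 1.0e6        # quoted chord-based Reynolds number

chord = Re * nu / U          # implied wing root chord, Re = U * c / nu
mach = U / 340.0             # speed of sound ~340 m/s at room temperature
print(f"implied root chord ~ {chord:.2f} m, Mach ~ {mach:.2f}")
```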
556

Flash Pulse Thermography Measurements of Coat Thickness

Häggkvist, Alexander January 2023
The application of varnish, metal coats, and paint is a common practice for modifying or enhancing material properties. Metal coats are frequently used as protective layers against corrosion, heat, and wear, while also influencing characteristics like conductivity, weight, and production costs. Achieving the optimal thickness of the coating is critical, as a too-thin layer may not offer sufficient protection, while an overly thick layer adds unnecessary weight and increases expenses. Therefore, it is crucial to accurately measure the coating thickness without causing any damage. This project focuses on utilising flash pulse thermography, a non-invasive and non-destructive measuring technique, with three algorithms (Dynamical Thermal Tomography, Power Function, and Pulse Phase Thermography) to measure and differentiate between plates with known variations in the number of coating layers. The study also aims to identify the limiting factors associated with the experimental equipment and the characteristics of the thermography algorithms. The thickness calculations were performed both individually for each plate and simultaneously for multiple plates. The results demonstrate that Dynamical Thermal Tomography exhibits superior precision and strong linear correlation when measuring individual plates. On the other hand, the Power Function algorithm outperforms in effectively distinguishing between two plates simultaneously, while providing decent precision for individual plates. It is worth noting that the frame rate of the camera significantly affects the performance and serves as the primary limiting factor in this specific experimental setup. Further investigations are necessary to obtain more conclusive results and determine the limitations of accuracy when measuring coating thickness.
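To make the Pulse Phase Thermography step concrete, here is a minimal sketch of the standard phase-image computation (an FFT over each pixel's cooling curve); the synthetic data and frame rate are invented, and the thesis's processing pipeline is more involved.

```python
import numpy as np

# Synthetic thermography sequence: frames x height x width (invented data)
rng = np.random.default_rng(4)
fps, n_frames = 100, 256                   # camera frame rate is the limiter
t = np.arange(1, n_frames + 1) / fps
# Ideal 1D cooling after a flash decays ~ t^(-1/2); add noise and a patch
# whose slower cooling mimics a different coating thickness
decay = t ** -0.5
seq = decay[:, None, None] * np.ones((n_frames, 32, 32))
seq[:, 8:16, 8:16] *= (t ** -0.45 / decay)[:, None, None]
seq += rng.normal(0, 0.01, seq.shape)

# Pulse Phase Thermography: FFT along time, phase image at a low frequency bin
spectrum = np.fft.fft(seq, axis=0)
phase = np.angle(spectrum[1])              # first non-DC bin, shape (32, 32)
print(f"background phase {phase[0, 0]:.3f}, patch phase {phase[12, 12]:.3f}")
```

The phase contrast between the patch and the background is what lets the method separate coating thicknesses independently of uneven flash heating.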
557

Clinical Analytics and Personalized Medicine

Chih-Hao Fang (13978917) 19 October 2022 (has links)
The increasing volume and availability of Electronic Health Records (EHRs) open up opportunities for computational models to improve patient care. Key factors in improving patient outcomes include identifying patient sub-groups with distinct characteristics and providing personalized treatment actions with expected improved outcomes. This thesis investigates how well-formulated matrix decomposition and causal inference techniques can be leveraged to tackle the problems of disease sub-typing and inferring treatment recommendations in healthcare. In particular, the research resulted in computational techniques based on archetypal analysis to identify and analyze disease sub-types, and a causal reinforcement learning method for learning treatment recommendations. Our work on these techniques is divided into four parts in this thesis.

In the first part of the thesis, we present a retrospective study of Sepsis patients in intensive care environments using patient data. Sepsis accounts for more than 50% of hospital deaths, and the associated cost ranks the highest among hospital admissions in the US. Sepsis may be misdiagnosed because the patient is not thoroughly assessed or the symptoms are misinterpreted, which can lead to serious health complications or even death. An improved understanding of disease states, progression, severity, and clinical markers can significantly improve patient outcomes and reduce costs. We have developed a computational framework based on archetypal analysis that identifies disease states in sepsis using clinical variables and samples in the MIMIC-III database. Each identified state is associated with different manifestations of organ dysfunction. Patients in different states are observed to be statistically significantly composed of distinct populations with disparate demographic and comorbidity profiles. We furthermore model disease progression using a Markov chain. Our progression model accurately characterizes the severity level of each pathological trajectory and identifies significant changes in clinical variables and treatment actions during sepsis state transitions. Collectively, our framework provides a holistic view of sepsis, and our findings provide the basis for the future development of clinical trials and therapeutic strategies for sepsis. These results have significant implications for a large number of hospitalizations.

In the second part, we focus on the problem of recommending optimal personalized treatment policies from observational data. Treatment policies are typically based on randomized controlled trials (RCTs); these policies are often sub-optimal, inconsistent, and have potential biases. Using observational data, we formulate suitable objective functions that encode causal reasoning in a reinforcement learning (RL) framework and present efficient algorithms for learning optimal treatment policies using interventional and counterfactual reasoning. We demonstrate the efficacy of our method on two observational datasets: (i) observational data to study the effectiveness of right heart catheterization (RHC) in the initial care of 5735 critically ill patients, and (ii) data from the Infant Health and Development Program (IHDP), aimed at estimating the effect of the intervention on neonatal health for 985 low-birth-weight, premature infants. For the RHC dataset, our method's policy prescribes RHC for 11.5% of the patients, compared to the best current method, which prescribes RHC for 38% of the patients. Even with this significantly reduced intervention, our policy yields a 1.5% improvement in the 180-day survival rate and a 2.2% improvement in the 30-day survival rate. For the IHDP dataset, we observe a 3.16% improvement in the rate of improvement of neonatal health using our method's policy.

In the third part, we consider the Supervised Archetypal Analysis (SAA) problem, which incorporates label information to compute archetypes. We formulate a new constrained optimization problem incorporating Laplacian regularization to guide archetypes towards groupings of similar data points, resulting in label-coherent archetypes and label-consistent soft assignments. We first use the MNIST dataset to show that SAA can yield better cluster quality over baselines for any chosen number of archetypes. We then use the CelebFaces Attributes dataset to demonstrate the superiority of SAA in terms of cluster quality and interpretability over competing supervised and unsupervised methods. We also demonstrate the interpretability of SAA decompositions in the context of a movie rating application. We show that the archetypes from SAA can be directly interpreted as user ratings and encode class-specific movie preferences. Finally, we demonstrate how the SAA archetypes can be used for personalized movie recommendations.

In the last part of this thesis, we apply our SAA technique to clinical settings. We study the problem of developing methods for ventilation recommendations for Sepsis patients. Mechanical ventilation is an essential and commonly prescribed intervention for Sepsis patients. However, studies have shown that mechanical ventilation is associated with higher mortality rates on average; it is generally believed that this is a consequence of broad use of ventilation, and that a more targeted use can significantly improve the average treatment effect and, consequently, survival rates. We develop a computational framework using Supervised Archetypal Analysis to stratify our cohort and identify groups that benefit from ventilators. We use SAA to group patients based on pre-treatment variables as well as treatment outcomes, by constructing a Laplacian regularizer from treatment response (label) information and incorporating it into the objective function of AA. Using our Sepsis cohort, we demonstrate that our method can effectively stratify the cohort into sub-cohorts with positive and negative average treatment effects (ATEs), corresponding to groups of patients that should and should not receive mechanical ventilation, respectively. We then train a classifier to identify patient sub-cohorts with positive and negative treatment effects. We show that our treatment recommender, on average, has a high positive ATE for patients that are recommended ventilator support and a slightly negative ATE for those not recommended ventilator support. We use SHAP (SHapley Additive exPlanations) techniques to generate clinical explanations for our classifier and demonstrate their use in the generation of patient-specific classification and explanation. Our framework provides a powerful new tool to assist in the clinical assessment of Sepsis patients for ventilator use.
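As a small illustration of the causal-inference building block used throughout (estimating an average treatment effect from observational data), here is an inverse-propensity-weighting sketch on synthetic data. It is not the thesis's causal reinforcement learning method, and all numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic observational data: confounder x affects treatment and outcome
rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=(n, 1))                        # patient severity
p_treat = 1 / (1 + np.exp(-1.5 * x[:, 0]))         # sicker patients treated more
t = (rng.random(n) < p_treat).astype(int)
y = 0.5 * t - 1.0 * x[:, 0] + rng.normal(0, 1, n)  # true ATE = 0.5

naive = y[t == 1].mean() - y[t == 0].mean()        # biased by confounding

# Inverse propensity weighting with an estimated propensity score
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate_ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"naive {naive:.2f}  vs  IPW {ate_ipw:.2f}  (true 0.5)")
```

The naive difference in means is pulled away from the true effect because treatment assignment depends on severity; reweighting by the estimated propensity score recovers an unbiased ATE estimate.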
558

Fast Algorithms for Matrix Computation and Applications

Qiyuan Pang (17565405) 10 December 2023 (has links)
<p dir="ltr">Matrix decompositions play a pivotal role in matrix computation and applications. While general dense matrix-vector multiplications and linear equation solvers are prohibitively expensive, matrix decompositions offer fast alternatives for matrices meeting specific properties. This dissertation delves into my contributions to two fast matrix multiplication algorithms and one fast linear equation solver algorithm tailored for certain matrices and applications, all based on efficient matrix decompositions. Fast dimensionality reduction methods in spectral clustering, based on efficient eigen-decompositions, are also explored.</p><p dir="ltr">The first matrix decomposition introduced is the "kernel-independent" interpolative decomposition butterfly factorization (IDBF), acting as a data-sparse approximation for matrices adhering to a complementary low-rank property. Constructible in $O(N\log N)$ operations for an $N \times N$ matrix via hierarchical interpolative decompositions (IDs), the IDBF results in a product of $O(\log N)$ sparse matrices, each with $O(N)$ non-zero entries. This factorization facilitates rapid matrix-vector multiplication in $O(N \log N)$ operations, making it a versatile framework applicable to various scenarios like special function transformation, Fourier integral operators, and high-frequency wave computation.</p><p dir="ltr">The second matrix decomposition accelerates matrix-vector multiplication for computing multi-dimensional Jacobi polynomial transforms. Leveraging the observation that solutions to Jacobi's differential equation can be represented through non-oscillatory phase and amplitude functions, the corresponding matrix is expressed as the Hadamard product of a numerically low-rank matrix and a multi-dimensional discrete Fourier transform (DFT) matrix. This approach utilizes $r^d$ fast Fourier transforms (FFTs), where $r = O(\log n / \log \log n)$ and $d$ is the dimension, resulting in an almost optimal algorithm for computing the multidimensional Jacobi polynomial transform.</p><p dir="ltr">An efficient numerical method is developed based on a matrix decomposition, Hierarchical Interpolative Factorization, for solving modified Poisson-Boltzmann (MPB) equations. Addressing the computational bottleneck of evaluating Green's function in the MPB solver, the proposed method achieves linear scaling by combining selected inversion and hierarchical interpolative factorization. This innovation significantly reduces the computational cost associated with solving MPB equations, particularly in the evaluation of Green's function.</p><p dir="ltr"><br></p><p dir="ltr">Finally, eigen-decomposition methods, including the block Chebyshev-Davidson method and Orthogonalization-Free methods, are proposed for dimensionality reduction in spectral clustering. By leveraging well-known spectrum bounds of a Laplacian matrix, the Chebyshev-Davidson methods allow dimensionality reduction without the need for spectrum bounds estimation. And instead of the vanilla Chebyshev-Davidson method, it is better to use the block Chebyshev-Davidson method with an inner-outer restart technique to reduce total CPU time and a progressive polynomial filter to take advantage of suitable initial vectors when available, for example, in the streaming graph scenario. 
Theoretically, the Orthogonalization-Free method constructs a unitary isomorphic space to the eigenspace or a space weighting the eigenspace, solving optimization problems through Gradient Descent with Momentum Acceleration based on Conjugate Gradient and Line Search for optimal step sizes. Numerical results indicate that the eigenspace and the weighted eigenspace are equivalent in clustering performance, and scalable parallel versions of the block Chebyshev-Davidson method and OFM are developed to enhance efficiency in parallel computing.</p>
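To illustrate the Hadamard-product structure exploited in the Jacobi transform part, here is a minimal sketch of applying $M = L \circ F$ (a rank-$r$ matrix Hadamard-multiplied with a DFT matrix) to a vector using $r$ FFTs. The matrices are random stand-ins, not actual Jacobi transform data, and the dense matrices appear only to verify the identity.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 512, 4

# Rank-r factor L = U V^T and the n x n DFT matrix (dense, for checking only)
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
dft = np.fft.fft(np.eye(n))           # DFT matrix; O(n^2) storage, test only
M = (U @ V.T) * dft                   # Hadamard product: the matrix to apply

x = rng.standard_normal(n)

# Fast apply: (L ∘ F) x = sum_k diag(U[:,k]) F diag(V[:,k]) x, each one FFT
y_fast = sum(U[:, k] * np.fft.fft(V[:, k] * x) for k in range(r))

print(np.allclose(M @ x, y_fast))     # True: r FFTs instead of a dense multiply
```

The per-column identity is what turns an $O(n^2)$ dense product into $r$ FFTs of cost $O(n \log n)$ each, which is the source of the near-optimal complexity quoted above.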
559

Randomized Diagonal Estimation

Popp, Niclas Joshua January 2023
Implicit diagonal estimation is a long-standing problem concerned with approximating the diagonal of a matrix that can only be accessed through matrix-vector products. It is of interest in various fields of application, such as network science, material science, and machine learning. This thesis provides a comprehensive review of randomized algorithms for implicit diagonal estimation and introduces various enhancements as well as extensions to matrix functions. Three novel diagonal estimators are presented. The first method employs low-rank Nyström approximations. The second approach is based on shifts, forming a generalization of current deflation-based techniques. Additionally, we introduce a method for adaptively determining the number of test vectors, thereby removing the need for prior knowledge about the matrix. Moreover, the median-of-means principle is incorporated into diagonal estimation. Beyond that, we combine diagonal estimation methods with approaches for approximating the action of matrix functions using polynomial approximations and Krylov subspaces. This enables us to present implicit methods for estimating the diagonal of matrix functions. We provide first-of-their-kind theoretical results for the convergence of these estimators. Subsequently, we present a deflation-based diagonal estimator for monotone functions of normal matrices with improved convergence properties. To validate the effectiveness and practical applicability of our methods, we conduct numerical experiments in real-world scenarios. These include estimating the subgraph centralities in a protein interaction network, approximating uncertainty in ordinary least squares, and randomized Jacobi preconditioning.
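As a concrete baseline for the estimators discussed, here is a minimal sketch of the classical randomized diagonal estimator that touches the matrix only through matrix-vector products (Rademacher test vectors and an entrywise ratio of running sums). The matrix here is a random stand-in; the thesis's Nyström-, shift-, and deflation-based variants refine this scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
A = rng.standard_normal((n, n))
A = A + A.T                              # any square matrix works; symmetric here

def estimate_diagonal(matvec, n, num_vecs=200, rng=rng):
    """Classical estimator: diag(A) ~ sum(v * Av) / sum(v * v) over test vectors."""
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_vecs):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher test vector
        num += v * matvec(v)                  # entrywise product with A v
        den += v * v                          # equals 1 per draw; kept general
    return num / den

d = estimate_diagonal(lambda v: A @ v, n)
err = np.linalg.norm(d - np.diag(A)) / np.linalg.norm(np.diag(A))
print(f"relative error with 200 matvecs: {err:.3f}")
```

Replacing `matvec` with a polynomial or Krylov approximation of $f(A)v$ gives the implicit matrix-function setting the abstract describes.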
560

Deep learning for temporal super-resolution of 4D Flow MRI

Callmer, Pia January 2023
The accurate assessment of hemodynamics and its parameters plays an important role when diagnosing cardiovascular diseases. In this context, 4D Flow Magnetic Resonance Imaging (4D Flow MRI) is a non-invasive measurement technique that facilitates hemodynamic parameter assessment as well as quantitative and qualitative analysis of three-directional flow over time. However, the assessment is limited by noise, low spatio-temporal resolution, and long acquisition times. Consequently, in regions characterized by transient, rapid flow dynamics, such as the aorta and heart, capturing these rapid transient flows remains particularly challenging. Recent research has shown the feasibility of machine learning models for effectively denoising and increasing the spatio-temporal resolution of 4D Flow MRI. However, temporal super-resolution networks that can generalize to unseen domains and are independent of boundary segmentations remain unexplored. This study aims to investigate the feasibility of a neural network for temporal super-resolution and denoising of 4D Flow MRI data. To achieve this, we propose a residual convolutional neural network (based on the 4DFlowNet from Ferdian et al.) providing an end-to-end mapping from a temporally low-resolution space to a high-resolution space. The network is trained on patient-specific cardiac models created with computational fluid dynamics (CFD) simulations covering a full cardiac cycle. For clinical contextualization, performance is assessed on clinical patient data. The study shows the potential of the 4DFlowNet for temporal super-resolution, with an average relative error of 16.6% on an unseen cardiac domain, outperforming deterministic methods such as linear and cubic interpolation. We find that the network effectively reduces noise and recovers high-transient flow at an upsampling factor of 2 on both in-silico and in-vivo cardiac datasets. The prediction results in a temporal resolution of 20 ms, going beyond the general clinical routine of 30-40 ms. This study exemplifies the performance of a residual CNN for temporal super-resolution of 4D Flow MRI data, providing an option to extend evaluations to aortic geometries and to further develop different upsampling factors and temporal resolutions.
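To make the architectural idea concrete, here is a toy residual block for temporal upsampling in PyTorch. This is a heavily simplified sketch of the residual-CNN idea behind 4DFlowNet, not its actual architecture (which operates on 4D patches), and all sizes are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalSRBlock(nn.Module):
    """Toy residual block: upsample in time, then learn a residual correction."""
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):                        # x: (batch, channels, time)
        up = F.interpolate(x, scale_factor=2, mode="linear",
                           align_corners=False)  # cheap temporal upsampling
        return up + self.net(up)                 # residual refinement

# One voxel's three velocity components over 20 low-resolution time frames
x = torch.randn(8, 3, 20)
model = TemporalSRBlock()
y = model(x)
print(y.shape)   # torch.Size([8, 3, 40]): doubled temporal resolution
```

The residual formulation means the network only has to learn the correction on top of a deterministic interpolation, which is one reason such models can outperform plain linear or cubic upsampling.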
