21

Controller Design for a Gearbox Oil Conditioning Testbed Through Data-Driven Modeling / Regulatordesign för en växellåda oljekonditionering testbädd genom datadriven modellering.

Brinkley IV, Charles; Wu, Chieh-Ju January 2022
With the rapid development of more sustainable automotive powertrains, new gearbox technologies must also be created and tested extensively. Scania employs dynamometer testbeds to conduct such tests, but the influx of new and rapidly developed gearboxes poses many problems for testbed technicians. Regulating oil temperature during tests is vital, and a controller must be developed for each gearbox configuration; this is difficult given system complexity, nonlinear dynamics, and time limitations. Technicians therefore currently resort to a manually tuned controller based on real-time observations, a time-intensive process with sub-par performance. This master's thesis breaks the predicament down into two research questions. The first employs a replication study to investigate whether linear system identification methods can adequately model the oil conditioning system. A test procedure is developed and executed on one gearbox setup to capture system behavior around a reference point, and the resulting models are compared for best fit. Results from this study show that such data-driven modeling methods can sufficiently represent the system. The second research question investigates whether the derived model can then be used to create a better-performing model-based controller through pole placement design. To compare the old and new controllers, both are implemented on the testbed PLC while conducting a nominal test procedure that varies torque and oil flow. Results show that the developed controller does regulate temperature sufficiently, but the original controller is more robust in this specific test case.
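As a concrete illustration of the pole placement design described in this abstract, the sketch below tunes a state-feedback gain for a first-order temperature model. The model coefficients, sampling, and pole location are invented for demonstration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical discrete-time first-order oil-temperature model around a
# reference point: T[k+1] = a*T[k] + b*u[k]. The coefficients a, b stand in
# for values identified from step-test data; they are not thesis results.
a, b = 0.95, 0.04

# For a scalar system, pole placement reduces to solving a - b*K = p_desired.
p_desired = 0.80                 # assumed faster closed-loop pole
K = (a - p_desired) / b          # state-feedback gain
kr = (1.0 - p_desired) / b       # reference gain for unit DC tracking

T, setpoint = 0.0, 1.0
for k in range(30):
    u = kr * setpoint - K * T    # control law
    T = a * T + b * u            # plant update
print(f"temperature after 30 steps: {T:.4f} (target {setpoint})")
```

The closed-loop update becomes T[k+1] = 0.80*T[k] + 0.20*setpoint, so the temperature settles at the setpoint with the faster dynamics chosen by the designer.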
22

Data-Driven Variational Multiscale Reduced Order Modeling of Turbulent Flows

Mou, Changhong 16 June 2021
In this dissertation, we consider two different strategies for improving the accuracy of projection-based reduced order models (ROMs): (I) adding closure terms to the standard ROM; and (II) using Lagrangian data to improve the ROM basis. Following strategy (I), we propose a new data-driven ROM framework that centers around the hierarchical structure of the variational multiscale (VMS) methodology and utilizes data to increase the ROM accuracy at a modest computational cost. The VMS methodology is a natural fit for the hierarchical structure of the ROM basis: In the first step, we use the ROM projection to separate the scales into three categories: (i) resolved large scales, (ii) resolved small scales, and (iii) unresolved scales. In the second step, we explicitly identify the VMS-ROM closure terms, i.e., the terms representing the interactions among the three types of scales. In the third step, we use available data to model the VMS-ROM closure terms. Thus, instead of the phenomenological models used in VMS for standard numerical discretizations (e.g., eddy viscosity models), we utilize available data to construct new structural VMS-ROM closure models. Specifically, we build ROM operators (vectors, matrices, and tensors) that are closest to the true ROM closure terms evaluated with the available data. We test the new data-driven VMS-ROM in the numerical simulation of four test cases: (i) the 1D Burgers equation with viscosity coefficient $\nu = 10^{-3}$; (ii) a 2D flow past a circular cylinder at Reynolds numbers $Re=100$, $Re=500$, and $Re=1000$; (iii) the quasi-geostrophic equations at Reynolds number $Re=450$ and Rossby number $Ro=0.0036$; and (iv) a 2D flow over a backward-facing step at Reynolds number $Re=1000$. The numerical results show that the data-driven VMS-ROM is significantly more accurate than standard ROMs. Furthermore, we propose a new hybrid ROM framework for the numerical simulation of fluid flows. This hybrid framework incorporates two closure modeling strategies: (i) a structural closure modeling component that involves the recently proposed data-driven variational multiscale ROM approach, and (ii) a functional closure modeling component that introduces an artificial viscosity term. We also utilize physical constraints for the structural ROM operators in order to add robustness to the hybrid ROM. We perform a numerical investigation of the hybrid ROM for the three-dimensional turbulent channel flow at a Reynolds number $Re = 13,750$. In addition, we focus on the mathematical foundations of ROM closures. First, we extend the verifiability concept from large eddy simulation to the ROM setting. Specifically, we call a ROM closure model verifiable if a small ROM closure model error (i.e., a small difference between the true ROM closure and the modeled ROM closure) implies a small ROM error. Second, we prove that a data-driven ROM closure (i.e., the data-driven variational multiscale ROM) is verifiable. For strategy (II), we propose new Lagrangian inner products that we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis.
Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite time Lyapunov exponent (FTLE)) but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to account for the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis. / Doctor of Philosophy / Reduced order models (ROMs) are popular in physical and engineering applications: for example, ROMs are widely used in aircraft design, since they can greatly reduce the computational cost of aeroelastic predictions while retaining good accuracy. However, for high-Reynolds-number turbulent flows, such as blood flow in arteries, oil transport in pipelines, and ocean currents, standard ROMs may yield inaccurate results. To improve ROM accuracy for turbulent flows, this dissertation investigates three different types of ROMs. Both numerical and theoretical results show that the proposed new ROMs yield more accurate results than the standard ROM and can thus be more useful.
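The core data-driven step described above, building ROM operators that best match the true closure terms evaluated from data, can be sketched in a few lines. This is a minimal least-squares illustration with synthetic snapshots and a purely linear closure ansatz, not the actual VMS-ROM operators from the dissertation.

```python
import numpy as np

# Synthetic snapshot matrix: 400 snapshots of a 200-dimensional flow field.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 400))

# POD modes via SVD; keep r resolved modes as the ROM basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 8
Phi = U[:, :r]

# ROM coefficients a(t) from projecting the snapshots onto the basis.
a = Phi.T @ snapshots                      # shape (r, 400)

# tau_true stands in for the closure term evaluated from data at each
# snapshot; here it is random. Fit a linear closure tau ≈ A_tilde @ a by
# least squares, mimicking the "closest operator" construction.
tau_true = rng.standard_normal((r, 400))
A_tilde = tau_true @ np.linalg.pinv(a)     # data-driven closure operator
print("closure operator shape:", A_tilde.shape)
```

In the dissertation the fitted objects also include quadratic (tensor) terms and physical constraints; the least-squares structure of the fit, however, is the same.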
23

Machine learning-based sensitivity analysis of surface parameters in numerical weather prediction model simulations over complex terrain

Di Santo, Dario 22 July 2024
Land surface models (LSMs) implemented in numerical weather prediction (NWP) models use several parameters to describe the surface and its interaction with the atmosphere. The determination of these parameters is often affected by large uncertainties that strongly influence simulation results. However, the sensitivity of meteorological model results to these parameters has not yet been studied systematically, especially in complex terrain, where uncertainty is expected to be even larger. This work aims to identify critical LSM parameters influencing the results of NWP models, focusing in particular on the simulation of thermally driven circulations over complex terrain. While previous sensitivity analyses employed offline LSM simulations to evaluate sensitivity to surface parameters, this study adopts an online coupled approach, utilizing the Noah-MP LSM within the Weather Research and Forecasting (WRF) model. To overcome computational constraints, a novel tool, the Machine Learning-based Automated Multi-method Parameter Sensitivity and Importance analysis Tool (ML-AMPSIT), is developed and tested. This tool allows users to explore the sensitivity of the results to model parameters using supervised machine learning regression algorithms, including Random Forest, CART, XGBoost, SVM, LASSO, Gaussian process regression, and Bayesian ridge regression. These algorithms serve as fast surrogate models, greatly accelerating sensitivity analyses while maintaining a high level of accuracy. The versatility and effectiveness of ML-AMPSIT enable the fast implementation of advanced sensitivity methods, such as the Sobol method, overcoming the computational limitations encountered in expensive models like WRF. The tool's suitability for assessing model sensitivity to variations of specific parameters is first tested in an idealized sea-breeze case study in which six surface parameters are varied. The analysis then focuses on evaluating sensitivity to surface parameters in the simulation of thermally driven circulations in a mountain valley. Specifically, an idealized three-dimensional topography consisting of a valley-plain system is adopted, and a complete diurnal cycle of valley and slope winds is analyzed. The analysis covers all the key surface parameters governing the interactions between Noah-MP and WRF. The proposed approach, novel in the context of LSM-NWP model coupling, draws from established applications of machine learning in various Earth science disciplines, underscoring its potential to improve the estimation of parameter sensitivities in NWP models.
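The surrogate-model idea behind ML-AMPSIT can be illustrated with a generic regression-based sensitivity sketch; the six input parameters and the synthetic response below are placeholders for the surface parameters and the WRF/Noah-MP output, not output of the actual tool.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# 60 samples of six hypothetical surface parameters, scaled to [0, 1].
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(60, 6))

# Synthetic stand-in for an expensive simulation output: parameter 0
# dominates, parameter 3 matters weakly, the rest are noise.
y = 3.0 * X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(60)

# Fit a fast surrogate once, then rank parameters by importance instead of
# running thousands of additional simulations.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for i, imp in enumerate(surrogate.feature_importances_):
    print(f"parameter {i}: importance {imp:.3f}")
```

Variance-based methods such as Sobol indices can then be evaluated cheaply on the surrogate rather than on the full NWP model.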
24

Large Eddy Simulation Reduced Order Models

Xie, Xuping 12 May 2017
This dissertation uses spatial filtering to develop a large eddy simulation reduced order model (LES-ROM) framework for fluid flows. Proper orthogonal decomposition is utilized to extract the dominant spatial structures of the system. Within the general LES-ROM framework, two approaches are proposed to address the celebrated ROM closure problem. No phenomenological arguments (e.g., of eddy viscosity type) are used to develop these new ROM closure models. The first novel model is the approximate deconvolution ROM (AD-ROM), which uses methods from image processing and inverse problems to solve the ROM closure problem. The AD-ROM is investigated in the numerical simulation of a 3D flow past a circular cylinder at a Reynolds number $Re=1000$. The AD-ROM generates accurate results without any numerical dissipation mechanism. It also decreases the CPU time of the standard ROM by orders of magnitude. The second new model is the calibrated-filtered ROM (CF-ROM), which is a data-driven ROM. The available full order model results are used offline in an optimization problem to calibrate the ROM subfilter-scale stress tensor. The resulting CF-ROM is tested numerically in the simulation of the 1D Burgers equation with a small diffusion parameter. The numerical results show that the CF-ROM is more efficient than and as accurate as state-of-the-art ROM closure models. / Ph. D. / Numerical simulation of complex fluid flows is often challenging in many realistic engineering, scientific, and medical applications. Indeed, an accurate numerical approximation of such flows generally requires millions and even billions of degrees of freedom. Furthermore, some design and control applications involve repeated numerical simulations for different parameter values. Reduced order models (ROMs) are an efficient approach to the numerical simulation of fluid flows, since they can reduce the computational time of a brute force computational approach by orders of magnitude while preserving key features of the flow. Our main contribution to the field is the use of spatial filtering to develop better ROMs. To construct the new spatially filtered ROMs, we use ideas from image processing and inverse problems, as well as data-driven algorithms. The new ROMs are more accurate than standard ROMs in the numerical simulation of challenging three-dimensional flows past a circular cylinder.
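The approximate deconvolution idea behind the AD-ROM, recovering an approximation of the unfiltered field from its filtered version, can be sketched with a van Cittert iteration on a 1D signal; the box filter and test signal are illustrative choices, not the dissertation's setup.

```python
import numpy as np

def box_filter(u):
    """Simple periodic three-point smoothing filter G."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u_true = np.sin(x) + 0.3 * np.sin(8 * x)   # field with a fine-scale component
u_bar = box_filter(u_true)                 # the filtered (resolved) field

# van Cittert deconvolution: u_N = sum_{k=0..N} (I - G)^k applied to u_bar.
u_approx = u_bar.copy()
correction = u_bar.copy()
for _ in range(5):
    correction = correction - box_filter(correction)   # apply (I - G) again
    u_approx += correction

print("max error, filtered field:     ", np.max(np.abs(u_bar - u_true)))
print("max error, after deconvolution:", np.max(np.abs(u_approx - u_true)))
```

The iteration sharpens the attenuated fine-scale mode without adding any dissipation, which mirrors why the AD-ROM needs no numerical dissipation mechanism.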
25

Thermomechanical Degradation and Rheology Characterization of Thermal Greases

Pranay Praveen Nagrani 12 March 2025
Due to advances in 3D integration and miniaturization of chips, the power density and number of hotspots within electronic packages have increased rapidly. A major bottleneck in the chip-to-coolant thermal resistance pathway is the interfacial resistance at solid-solid contacts; therefore, thermal interface materials (TIMs) are employed to minimize interfacial thermal resistance. To improve heat dissipation, thermal grease (a type of TIM) is generally employed to reduce the overall thermal resistance from a heat-generating component to the heat sink. However, these materials degrade throughout their lifetime, and the process is not well understood.

The first part of this dissertation investigates the degradation behavior of thermal greases using traditional and accelerated reliability techniques. The performance of a thermal grease often worsens with time due to thermomechanical cycling, driven by the coefficient of thermal expansion mismatch between the substrates, via pumpout (material moves out of the interface) and dryout (phase separation of the composite material). I isolate the effect of thermal cycling (from mechanical cycling) on the degradation of thermal greases by subjecting them to power cycling while holding the bond line thickness constant. In addition to thermocouples in the system, high-resolution temperature mapping of the thermal grease with an infrared microscope is leveraged to evaluate local degradation in situ. The results demonstrate a novel pathway for evaluating thermal grease performance by showcasing the importance of viscosity-temperature hysteresis. However, traditional reliability testing methods such as thermal cycling have long testing periods, often on the order of days or months. Therefore, to accelerate the degradation analysis of thermal greases, I propose adding mechanical cycling while maintaining a constant heat flow rate. The reliability of thermal greases is investigated at different mechanical oscillation amplitudes and squeezing pressures using a novel custom-designed and machined experimental rig. The results reveal that the mechanical reliability of thermal greases depends on the ratio of elastic modulus to viscosity, with higher ratios being more desirable. Thermal reliability, meanwhile, depends on a synergy of material properties: higher elastic modulus and higher thermal conductivity result in a smaller increase in thermal resistance over the lifetime of a thermal grease.

The second part of this dissertation focuses on characterizing the rheology of thermal greases and the associated uncertainty. Thermal greases have complex rheological properties that affect their performance over their lifetime. I perform rheological experiments on thermal greases and observe both stress relaxation and stress buildup regimes, which are not captured by steady shear-thinning models. Instead, a thixo-elasto-visco-plastic and a nonlinear-elasto-visco-plastic constitutive model characterize the two observed regimes. I use these models within a data-driven approach based on physics-informed neural networks (PINNs) to solve the inverse problem of determining the rheological model parameters from the dynamic response in experiments.
From a microscopic point of view, these rheological behaviors and associated uncertainties arise from microstructure rearrangements due to inhomogeneous mixing or separation/settling of particles over time. However, this model calibration approach does not address parameter uncertainty arising from epistemic (limited rheological data) and aleatoric (randomness of rheological experiments) sources. The last part of this dissertation addresses this limitation and quantifies the uncertainties arising in the model calibration process. A hierarchical Bayesian inference methodology is used to obtain distributions of the rheological parameters. The uncertainty is further propagated to shear stress distributions and thermal resistances to demonstrate that the rheological models considered are suitable representations of the experimentally observed regimes.

This dissertation thus addresses the thermomechanical degradation behavior and complex rheological characteristics of thermal greases. Understanding their degradation and rheology can help in designing degradation-resistant greases and thereby improve the reliability of electronic packages.
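The inverse problem at the heart of the rheology part, determining constitutive parameters from measured responses, can be sketched with a much simpler model than the thixo-elasto-visco-plastic ones used in the dissertation: a Herschel-Bulkley flow curve fitted by nonlinear least squares to synthetic data. Model, data, and parameter values are all illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, k, n):
    """Yield stress plus power-law viscous stress."""
    return tau_y + k * gamma_dot**n

rng = np.random.default_rng(2)
gamma_dot = np.logspace(-1, 2, 30)                      # shear rates [1/s]
tau = herschel_bulkley(gamma_dot, 50.0, 10.0, 0.4)      # "true" stresses [Pa]
tau_noisy = tau * (1 + 0.03 * rng.standard_normal(30))  # measurement noise

params, cov = curve_fit(herschel_bulkley, gamma_dot, tau_noisy,
                        p0=[10.0, 1.0, 0.5])
print("estimated (tau_y, k, n):", params)
print("parameter std devs:", np.sqrt(np.diag(cov)))
```

The PINN and hierarchical Bayesian machinery in the dissertation generalize exactly this step: richer constitutive models, dynamic rather than steady data, and full posterior distributions instead of point estimates with covariances.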
26

Data-driven modeling and simulation of spatiotemporal processes with a view toward applications in biology

Maddu Kondaiah, Suryanarayana 11 January 2022
Mathematical modeling and simulation have emerged as fundamental means to understand the physical processes around us, with countless real-world applications in applied science and engineering. However, heavy reliance on first principles, symmetry relations, and conservation laws has limited their applicability to a few scientific domains and even fewer real-world scenarios. Especially in disciplines like biology, the underlying living constituents exhibit a myriad of complexities, such as nonlinearities, non-equilibrium physics, self-organization, and plasticity, that routinely escape mathematical treatment based on governing laws. Meanwhile, recent decades have witnessed rapid advances in computing hardware, sensing technologies, and algorithmic innovations in machine learning. This progress has helped propel data-driven paradigms to unprecedented practical success in fields such as image processing and computer vision, natural language processing, and autonomous transport. In this thesis, we explore, apply, and advance statistical and machine learning strategies that help bridge the gap between data and mathematical models, with a view toward modeling and simulation of spatiotemporal processes in biology.

First, we address the problem of learning interpretable mathematical models of biological processes from limited and noisy data. For this, we propose a statistical learning framework called PDE-STRIDE, based on the theory of stability selection and ℓ0-based sparse regularization for parsimonious model selection. The PDE-STRIDE framework enables model learning with relaxed dependencies on tuning parameters, sample size, and noise levels. We demonstrate the practical applicability of our method on real-world data by considering a purely data-driven re-evaluation of the advective triggering hypothesis explaining the embryonic patterning event in the C. elegans zygote. As a next natural step, we extend the PDE-STRIDE framework to leverage prior knowledge from physical principles, so as to learn biologically plausible and physically consistent models rather than models that simply fit the data best. To this end, we modify the framework to handle structured sparsity constraints for grouping features, which enables us to: 1) enforce conservation laws, 2) extract spatially varying non-observables, and 3) encode symmetry relations associated with the underlying biological process. We show several applications from systems biology demonstrating that enforcing priors dramatically enhances the robustness and consistency of data-driven approaches.

Next, we apply our statistical learning framework to learning mean-field deterministic equations of active matter systems directly from stochastic simulations of self-propelled active particles. We investigate two example particle models that differ in their microscopic interaction rules. First, we consider a self-propelled particle model endowed with density-dependent motility. For the chosen hydrodynamic variables, our data-driven framework learns continuum partial differential equations that are in excellent agreement with coarse-grained equations derived analytically via the Boltzmann approach. In addition, our structured sparsity framework is able to decode the hidden dependency between particle speed and local density intrinsic to the self-propelled particle model.
As a second example, the learning framework is applied to coarse-graining a popular stochastic particle model employed for studying collective cell motion in epithelial sheets. The PDE-STRIDE framework is able to infer a novel PDE model that quantitatively captures the flow statistics of the particle model in the regime of low density fluctuations.

Modern microscopy techniques produce gigabytes (GB) and terabytes (TB) of data while imaging the spatiotemporal developmental dynamics of living organisms. Classical statistical learning based on penalized linear regression models, however, struggles with issues like the accurate computation of derivatives in the candidate library and computational scalability on such big, noisy data sets. For this reason, we exploit the rich parameterization of neural networks, which can efficiently learn from large data sets. Specifically, we explore the framework of physics-informed neural networks (PINNs), which allow for the seamless integration of physics priors with measurement data. We propose novel strategies for multi-objective optimization that allow PINN architectures to be adapted to the multi-scale modeling problems arising in biology. We showcase application examples for both forward and inverse modeling of the mesoscale active turbulence phenomenon observed in dense bacterial suspensions. Employing our strategies, we demonstrate orders-of-magnitude gains in accuracy and convergence compared with the conventional formulation for solving multi-objective optimization in PINNs.

In the concluding chapter of the thesis, we set aside model interpretability and focus on learning computable models directly from noisy data for the purpose of pure dynamics forecasting. We propose STENCIL-NET, an artificial neural network architecture that learns a solution-adaptive spatial discretization of an unknown PDE model that can be stably integrated in time with negligible loss of accuracy. To support this claim, we present numerical experiments on the long-term forecasting of chaotic PDE solutions on coarse spatio-temporal grids, and we also showcase a de-noising application that helps decompose spatiotemporal dynamics from noise in an equation-free manner.
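The library-plus-sparse-regression core of PDE-STRIDE can be sketched as follows; the full framework adds ℓ0 regularization and stability selection, whereas this toy uses sequential thresholded least squares on synthetic Burgers-like data with stand-in derivative fields.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
u, u_x, u_xx = rng.standard_normal((3, n))   # stand-ins for field/derivatives

# Ground-truth dynamics to be recovered: u_t = -u*u_x + 0.1*u_xx, plus noise.
u_t = -u * u_x + 0.1 * u_xx + 0.01 * rng.standard_normal(n)

# Candidate library of terms that might appear in the PDE.
library = np.column_stack([u, u_x, u_xx, u * u_x, u * u_xx])
names = ["u", "u_x", "u_xx", "u*u_x", "u*u_xx"]

# Sequential thresholded least squares: fit, zero out small coefficients,
# refit on the surviving terms.
xi = np.linalg.lstsq(library, u_t, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    keep = ~small
    xi[keep] = np.linalg.lstsq(library[:, keep], u_t, rcond=None)[0]

print({name: round(c, 3) for name, c in zip(names, xi) if c != 0.0})
```

On real microscopy data the hard part is exactly what the thesis tackles: noisy derivative estimation, tuning-parameter sensitivity, and the structured sparsity needed to encode conservation laws and symmetries.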
27

Three essays of healthcare data-driven predictive modeling

Zhouyang Lou 26 April 2023
Predictive modeling in healthcare involves the development of data-driven and computational models that predict what will happen, be it for a single individual or for an entire system. The adoption of predictive models can guide various stakeholders' decision-making in the healthcare sector and consequently improve individual outcomes and the cost-effectiveness of care. With the rapid development of big data and Internet of Things technologies in healthcare, research in healthcare decision-making has grown in both importance and complexity. One of the complexities facing model builders is the heterogeneity of patient populations, clinical practices, and intervention outcomes, as well as of the health systems themselves. Predictive modeling is useful in many healthcare sub-domains, such as disease risk modeling, clinical intelligence, pharmacovigilance, precision medicine, hospitalization process optimization, digital health, and preventive care. In my dissertation, I focus on predictive modeling for applications in three broad and important domains of healthcare: clinical practice, public health, and healthcare systems. This dissertation presents three papers, a collection of predictive modeling studies addressing the challenge of modeling heterogeneity in healthcare. The first paper presents a decision-tree model to address clinicians' need to choose among various liver cirrhosis diagnosis strategies. The second paper presents a micro-simulation model assessing impacts on cardiovascular disease (CVD), to help decision makers at government agencies develop cost-effective food policies for preventing CVD, a public-health application. The third paper compares a set of data-driven prediction models, the best-performing of which is paired with interpretable machine learning to support hospital-discharged patients choosing skilled nursing facilities. Together, these studies address important modeling challenges in specific healthcare domains and contribute broadly to research in medical decision-making, public health policy, and healthcare systems.
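The decision-tree approach of the first paper can be illustrated generically; the features, labels, and threshold structure below are synthetic stand-ins, not clinical data or the paper's actual model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic cohort: four hypothetical biomarkers and a binary diagnosis label.
rng = np.random.default_rng(5)
X = rng.standard_normal((500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["b0", "b1", "b2", "b3"]))  # readable rules
```

The printed rules are what make tree models attractive for clinical strategy selection: each path is an explicit, auditable sequence of test thresholds.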
28

Data-driven Dynamic Baseline Calibration Method for Gas Sensors / Datadriven Dynamisk Baslinjekalibreringsmetod för Gassensorer

Yang, Cheng January 2021
Automatic Baseline Correction is the state-of-the-art calibration method for non-dispersive infrared CO2 sensing, the standard method for CO2 gas monitoring. In this thesis, we improve it by introducing a dynamic baseline based on environmental data. Ninety-six data sets from 48 atmospheric stations confirm the annual growth trend and the seasonality of the baseline model. To improve calibration accuracy, k-means clustering is used to identify different types of baselines. The localized dynamic baseline model is then predicted using only the stations' location information, providing a practical implementation of dynamic baseline calibration that does not rely on historical CO2 data.
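The clustering step described in this abstract can be sketched with synthetic station baselines; the sinusoid-plus-trend curves and the choice of three clusters are assumptions for illustration, not the thesis's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# 48 synthetic monthly CO2 baselines: a ~2.4 ppm/year growth trend plus a
# seasonal cycle whose phase varies by station.
rng = np.random.default_rng(4)
months = np.arange(12)
curves = []
for phase in rng.uniform(0, 2 * np.pi, 48):
    seasonal = 5.0 * np.sin(2 * np.pi * months / 12 + phase)
    trend = 2.4 * months / 12
    curves.append(400.0 + trend + seasonal)
X = np.array(curves)

# Group stations by baseline shape; the cluster count is an assumption.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("stations per baseline type:", np.bincount(labels))
```

A new sensor can then be assigned a baseline type from station location alone, which is what makes calibration possible without historical CO2 records.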
29

Deriving a mathematical framework for data-driven analyses of immune cell dynamics

Burt, Philipp 06 January 2023
Cellular decisions, such as the differentiation of T helper (Th) cells into specialized effector lineages, largely impact the direction of immune responses. Such population-level responses are the result of a complex interplay of individual cells that communicate via small signaling molecules called cytokines. The system's complexity, stemming not only from the number of components but also from their intricate and often non-linear interactions, makes it difficult to develop intuition for how cellular responses are actually generated. Not surprisingly, the global effects of targeting individual cells or specific signaling pathways through perturbations are poorly understood. For instance, common treatments of autoimmune diseases often work for some patients but not for others. Recently developed methods such as live-cell imaging, mass cytometry, and single-cell sequencing now enable the quantitative characterization of individual immune cells. This accumulating wealth of quantitative data has laid the basis for deriving predictive, data-driven models of immune cell behavior, but in many cases, methods to integrate and annotate the data in a way suitable for model formulation are missing. This thesis introduces quantitative workflows and methods for formulating data-driven models of immune cell decision-making, with a particular focus on lymphocyte proliferation, differentiation, and death.
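A minimal example of the kind of dynamic model the thesis formulates from data is a small ODE system for lymphocyte proliferation, differentiation, and death; the rate constants below are illustrative, not fitted values from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lymphocyte_dynamics(t, y, p=1.0, d=0.4, k=0.2):
    """Naive cells proliferate (p) and differentiate into effectors (d);
    effectors die (k). All rates are per day and purely illustrative."""
    naive, effector = y
    dnaive = p * naive - d * naive
    deffector = d * naive - k * effector
    return [dnaive, deffector]

sol = solve_ivp(lymphocyte_dynamics, (0.0, 10.0), [1000.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 6))
print("naive cells:   ", np.round(sol.y[0]))
print("effector cells:", np.round(sol.y[1]))
```

Fitting such rate constants to single-cell measurements, and deciding which model structure the data actually support, is the methodological focus of the thesis.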
