Mathematical modeling approaches for dynamical analysis of protein regulatory networks with applications to the budding yeast cell cycle and the circadian rhythm in cyanobacteria. Laomettachit, Teeraphan (11 November 2011)
Mathematical modeling has become increasingly popular as a tool to study regulatory interactions within gene-protein networks. From the modeler's perspective, two challenges arise in the process of building a mathematical model. First, the same regulatory network can be translated into different types of models at different levels of detail, and the modeler must choose an appropriate level to describe the network. Second, realistic regulatory networks are complicated due to the large number of biochemical species and interactions that govern any physiological process. Constructing and validating a realistic mathematical model of such a network can be a difficult and lengthy task. To confront the first challenge, we develop a new modeling approach that classifies components in the networks into three classes of variables, which are described by different rate laws. These three classes serve as "building blocks" that can be connected to build a complex regulatory network. We show that our approach combines the best features of different types of models, and we demonstrate its utility by applying it to the budding yeast cell cycle. To confront the second challenge, modelers have developed rule-based modeling as a framework to build complex mathematical models. In this approach, the modeler specifies a set of rules that instruct the computer to automatically generate all possible chemical reactions in the network. Building a mathematical model with rule-based modeling is not only less time-consuming and less error-prone, but it also allows modelers to account comprehensively for the many mechanistic details of a molecular regulatory system. We demonstrate the potential of rule-based modeling by applying it to the generation of circadian rhythms in cyanobacteria. / Ph. D.
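To make the rule-based idea concrete, here is a minimal, hypothetical Python sketch. The two phosphorylation sites of the cyanobacterial clock protein KaiC are used purely as an illustration; the site names and the single rule below are assumptions, not the dissertation's model:

```python
# One rule ("any unphosphorylated site may be phosphorylated") is expanded
# by the computer into every concrete reaction, so the modeler never has to
# enumerate them by hand.  Illustrative only, not the dissertation's model.
from itertools import product

SITES = ("S431", "T432")  # two phosphorylation sites, as on KaiC

def all_states():
    """Every combination of phosphorylated (1) / unphosphorylated (0) sites."""
    return [dict(zip(SITES, bits)) for bits in product((0, 1), repeat=len(SITES))]

def apply_phosphorylation_rule(states):
    """Expand the rule 'phosphorylate any free site' into concrete reactions."""
    reactions = []
    for state in states:
        for site in SITES:
            if state[site] == 0:
                product_state = dict(state, **{site: 1})
                reactions.append((state, site, product_state))
    return reactions

# 4 states, and the single rule expands into 4 concrete reactions.
reactions = apply_phosphorylation_rule(all_states())
```

The payoff grows combinatorially: with six sites and several rule types, the same few lines of rule code would generate hundreds of reactions.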
Tight Discrete Formulations to Enhance Solvability with Applications to Production, Telecommunications, and Air Transportation Problems. Smith, J. Cole (20 April 2000)
In formulating discrete optimization problems, it is important not only to have a correct mathematical model, but also to have a well-structured model that can be solved effectively. Two important characteristics of a general integer or mixed-integer program are its size (the number of constraints and variables in the problem) and its strength or tightness (a measure of how well it approximates the convex hull of feasible solutions). In designing model formulations, it is critical to ensure a proper balance between the compactness of the representation and the tightness of its linear relaxation in order to enhance solvability. In this dissertation, we consider these issues pertaining to the modeling of mixed-integer 0-1 programming problems in general, as well as in the context of several specific real-world applications, including a telecommunications network design problem and an airspace management problem.
We first consider the Reformulation-Linearization Technique (RLT) of Sherali and Adams and explore the generation of reduced first-level representations for mixed-integer 0-1 programs that tend to retain the strength of the full first-level linear programming relaxation. The motivation for this study is provided by the computational success of the first-level RLT representation (in full or partial form) experienced by several researchers working on various classes of problems. We show that there exists a first-level representation having only about half the RLT constraints that yields the same lower bound value via its relaxation. Accordingly, we attempt to a priori predict the form of this representation and identify many special cases for which this prediction is accurate. However, using various counter-examples, we show that this prediction as well as several variants of it are not accurate in general, even for the case of a single binary variable. Since the full first-level relaxation produces the convex hull representation for the case of a single binary variable, we investigate whether this is the case with respect to the reduced first-level relaxation as well, and show similarly that it holds true only for some special cases. Empirical results on the prediction capability of the reduced, versus the full, first-level representation demonstrate a high level of prediction accuracy on a set of random as well as practical, standard test problems.
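The first-level RLT products discussed above can be sketched in a few lines. This toy Python fragment (the dictionary representation and the sample constraint are invented for illustration) multiplies a single 0-1 constraint by x_j and by (1 - x_j), then linearizes using x_j^2 = x_j and w_ij = x_i x_j:

```python
def times_xj(coeffs, b, j):
    """Linearized product of (sum_i a_i x_i - b >= 0) with x_j >= 0."""
    out = {}
    for i, a in coeffs.items():
        # x_j * x_j = x_j ; x_i * x_j = w_ij for i != j
        key = ("x", j) if i == j else ("w", min(i, j), max(i, j))
        out[key] = out.get(key, 0) + a
    out[("x", j)] = out.get(("x", j), 0) - b
    return out  # {term: coefficient}, read as "sum of terms >= 0"

def times_one_minus_xj(coeffs, b, j):
    """Linearized product with (1 - x_j) >= 0: the original expression
    minus its product with x_j."""
    out = {("x", i): a for i, a in coeffs.items()}
    out[("const",)] = -b
    for key, a in times_xj(coeffs, b, j).items():
        out[key] = out.get(key, 0) - a
    return out

coeffs, b = {0: 1, 1: 1}, 1               # toy constraint: x0 + x1 >= 1
prod0 = times_xj(coeffs, b, 0)            # linearizes to: w01 >= 0
comp0 = times_one_minus_xj(coeffs, b, 0)  # linearizes to: x0 + x1 - w01 - 1 >= 0
```

A reduced first-level representation of the kind studied above would keep only a subset of these generated constraints while aiming to preserve the relaxation bound.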
Next, we focus on a useful modeling concept that is frequently ignored while formulating discrete optimization problems. Very often, there exists a natural symmetry inherent in the problem itself that, if propagated to the model, can hopelessly mire a branch-and-bound solver by burdening it to explore and eliminate such alternative symmetric solutions. We discuss three applications where such a symmetry arises. For each case, we identify the indistinguishable objects in the model that create the problem symmetry, and show how imposing certain decision hierarchies within the model significantly enhances its solvability. These hierarchies render an otherwise virtually intractable formulation computationally viable using commercial software. The first application involves minimizing the maximum noise dose to which workers are exposed while working on a set of machines. We next examine a problem of minimizing the cost of acquiring and utilizing machines designed to cool large facilities or buildings, subject to minimum operational requirements. For each of these applications, we generate realistic test beds of problems. The decision hierarchies allow all previously intractable problems to be solved relatively quickly, and dramatically decrease the required computational time for all other problems. For the third application, we investigate a network design problem arising in the context of deploying synchronous optical networks (SONET) using a unidirectional path switched ring architecture, a standard of transmission using optical fiber technology. Given several rings of this type, the problem is to find an assignment of nodes to possibly multiple rings, and to determine what portion of the demand traffic between node pairs spanned by each ring should be allocated to that ring.
The constraints require that the demand traffic between each node pair be satisfiable given the ring capacities, and that no more than a specified maximum number of nodes be assigned to each ring. The objective is to minimize the total number of node-to-ring assignments, and hence the capital investment in add-drop multiplexer equipment. We formulate the problem as a mixed-integer programming model and propose several alternative modeling techniques designed to improve the mathematical representation of this problem. We then develop various classes of valid inequalities for the problem, along with suitable separation procedures for tightening the representation of the model, and accordingly prescribe an algorithmic approach that coordinates tailored routines with a commercial solver (CPLEX). We also propose a heuristic procedure that enhances the solvability of the problem and provides bounds within 5-13% of the optimal solution. Promising computational results that exhibit the viability of the overall approach and that lend insights into various modeling and algorithmic constructs are presented.
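A toy illustration of why such decision hierarchies help (the tiny instance below is invented, not the dissertation's SONET model): when 3 nodes are assigned to 2 identical rings, every partition is enumerated once per relabeling of the rings unless a hierarchy pins one object down:

```python
# Identical rings make assignments interchangeable under relabeling; a
# simple decision hierarchy (node 0 must use ring 0) keeps exactly one
# representative of each symmetric pair.  Sizes are toy values.
from itertools import product

nodes, rings = 3, 2
all_assignments = list(product(range(rings), repeat=nodes))   # 2**3 = 8
# Decision hierarchy: force node 0 onto ring 0.
hierarchy = [a for a in all_assignments if a[0] == 0]         # 4 remain
```

Halving the enumeration here is modest, but with k identical rings the symmetry factor is k!, which is exactly the redundant search a branch-and-bound solver would otherwise have to explore and prune.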
Following this, we turn our attention to the modeling and analysis of several issues related to airspace management. Currently, commercial aircraft are routed along certain defined airspace corridors, where safe minimum separation distances between aircraft may be routinely enforced. However, this mode of operation does not fully utilize the available airspace resources, and may prove to be inadequate under future National Airspace (NAS) scenarios involving new concepts such as Free-Flight. This inadequacy is compounded by the projected significant increase in commercial air traffic. (Free-Flight is a paradigm of aircraft operations that permits flights to follow more cost-effective routes between their various origins and destinations, rather than simple traversals between designated way-points.)
We begin our study of Air Traffic Management (ATM) by first developing an Airspace Sector Occupancy Model (AOM) that identifies the occupancies of flights within three-dimensional (possibly nonconvex) regions of space called sectors. The proposed iterative procedure effectively traces each flight's progress through the nonconvex sector modules that make up the sectors. Next, we develop an Aircraft Encounter Model (AEM), which uses the information obtained from AOM to efficiently characterize the number and nature of blind conflicts (i.e., conflicts under no avoidance or resolution maneuvers) resulting from a selected mix of flight-plans. Besides identifying the existence of a conflict, AEM also provides useful information on the severity of the conflict and its geometry, such as the faces across which an intruder enters and exits the protective shell or envelope of another aircraft, the duration of intrusion, its relative heading, and the point of closest approach. For purposes of evaluation and assessment, we also develop an aggregate metric that provides an overall assessment of the conflicts in terms of their individual severity and resolution difficulty. We apply these models to real data provided by the Federal Aviation Administration (FAA) for evaluating several Free-Flight scenarios under wind-optimized and cruise-climb conditions.
We digress at this point to consider a more general collision detection problem that frequently arises in the field of robotics. Given a set of bodies with their initial positions and trajectories, we wish to identify the first collision that occurs between any two bodies, or to determine that none exists. For the case of bodies having linear trajectories, we construct a convex hull representation of the integer programming model of Selim and Almohamad, and exhibit the relative effectiveness of solving this problem via the resultant linear program. We also extend this analysis to model a situation in which bodies move along piecewise linear trajectories, possibly rotating at the end of each linear translation. For this case, we again compare an integer programming approach with its linear programming convex hull representation, and exhibit the relative effectiveness of solving a sequence of problems based on applying the latter construct to each time segment.
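For bodies with linear trajectories, the first-collision question for a single pair reduces to a quadratic in time. The following Python sketch, a closed-form stand-in rather than the integer-programming or convex hull construction described above, finds the earliest time at which the relative position enters a protective radius:

```python
# Earliest t >= 0 at which |p0 + v*t| <= r for the *relative* position p0
# and relative velocity v of two constant-velocity bodies.  Solves
# a*t^2 + b*t + c <= 0 with a = |v|^2, b = 2 p0.v, c = |p0|^2 - r^2.
import math

def first_collision_time(p0, v, r):
    """Return the earliest nonnegative collision time, or None if none exists."""
    a = sum(vi * vi for vi in v)
    b = 2 * sum(pi * vi for pi, vi in zip(p0, v))
    c = sum(pi * pi for pi in p0) - r * r
    if c <= 0:
        return 0.0                      # already within the protective radius
    if a == 0:
        return None                     # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                     # trajectories never come that close
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

t = first_collision_time(p0=(10.0, 0.0), v=(-1.0, 0.0), r=1.0)  # -> 9.0
```

The optimization formulations above are needed precisely because this closed form does not scale to finding the first collision among many bodies with piecewise linear, rotating trajectories.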
Returning to Air Traffic Management, another future difficulty in airspace resource utilization stems from a projected increase in commercial space traffic, due to the advent of Reusable Launch Vehicle (RLV) technology. Currently, each shuttle launch cordons off a large region of Special Use Airspace (SUA) in which no commercial aircraft are permitted for the specified duration. Of concern to airspace planners is the expense of routinely disrupting air traffic, resulting in circuitous diversions and delays, while enforcing such SUA restrictions. To provide a tool for tactical and planning purposes in such a context, within the framework of a coordinated decision-making process between the FAA and commercial airlines, we develop an Airspace Planning Model (APM). Given a set of flights for a particular time horizon, along with (possibly several) alternative flight-plans for each flight that are based on delays and diversions due to SUA restrictions prompted by launches at spaceports or by weather considerations, this model prescribes a set of flight-plans to be implemented. The model formulation seeks to minimize an objective function based on delay and fuel costs, subject to the constraints that each flight is assigned one of the designated flight-plans, and that the resulting set of flight-plans satisfies certain specified workload, safety, and equity criteria. These requirements ensure that the workload for air-traffic controllers in each sector is held under a permissible limit, that any potential conflicts which may occur are routinely resolvable, and that the various airlines involved derive equitable levels of benefits from the overall implemented schedule.
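The combinatorial core of APM, choosing one flight-plan per flight at minimum cost subject to sector workload caps, can be caricatured by a brute-force toy. All flights, plans, costs, and workloads below are invented; the real model is a large 0-1 mixed-integer program with many additional safety and equity constraints:

```python
# Pick one plan per flight minimizing total cost while keeping each
# sector's controller workload under a cap.  Data are illustrative only.
from itertools import product

# flight -> list of (cost, {sector: workload}) flight-plan options
plans = {
    "F1": [(100, {"S1": 2}), (140, {"S2": 1})],
    "F2": [(90, {"S1": 2}), (120, {"S2": 2})],
}
CAP = 3  # permissible workload per sector

def feasible(choice):
    load = {}
    for flight, idx in choice.items():
        for sector, w in plans[flight][idx][1].items():
            load[sector] = load.get(sector, 0) + w
    return all(v <= CAP for v in load.values())

def cost(choice):
    return sum(plans[f][i][0] for f, i in choice.items())

best = min(
    (dict(zip(plans, idx)) for idx in product(range(2), repeat=len(plans))),
    key=lambda c: cost(c) if feasible(c) else float("inf"),
)
# The cheapest joint choice (both flights' first plans) would overload
# sector S1 (workload 4 > 3), so a pricier combination is selected.
```

Brute force is exponential in the number of flights, which is why the dissertation resorts to cutting planes and reformulation to solve realistic instances.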
In order to solve the resulting 0-1 mixed-integer programming problem more effectively using commercial software (CPLEX-MIP), we explore the use of various facetial cutting planes and reformulation techniques designed to more closely approximate the convex hull of feasible solutions to the problem. We also prescribe a heuristic procedure that is demonstrated to provide solutions that are either optimal or within 0.01% of optimality. Computational results are reported on several scenarios based on actual flight data obtained from the Federal Aviation Administration (FAA) in order to demonstrate the efficacy of the proposed approach for air traffic management (ATM) purposes. In addition to the evaluation of these various models, we exhibit the usefulness of this airspace planning model as a strategic planning tool for the FAA by exploring the sensitivity of the model's solution to changes both in the radius of the SUA established around the spaceport and in the duration of the launch-window during which the SUA is activated. / Ph. D.
Mathematical Modeling of Therapies for MCF7 Breast Cancer Cells. He, Wei (22 June 2021)
Estrogen receptor (ER)-positive breast cancer is responsive to a number of targeted therapies used clinically. Unfortunately, the continuous application of any targeted therapy often results in resistance to the therapy. Our ultimate goal is to use mathematical modeling to optimize alternating therapies that not only decrease proliferation but also stave off resistance. Toward this end, we measured levels of key proteins and proliferation over a 7-day time course in ER-positive MCF7 breast cancer cells. Treatments included endocrine therapy: either estrogen deprivation, which mimics the effects of an aromatase inhibitor, or fulvestrant, an ER degrader. These data were used to calibrate a mathematical model based on key interactions between ER signaling and the cell cycle. We show that the calibrated model is capable of predicting the combination treatment of fulvestrant and estrogen deprivation. Further, we show that we can add a new drug, palbociclib, to the model by measuring only two key proteins, c-Myc and hyperphosphorylated RB1, and adjusting only parameters associated with the drug. The model is then able to predict the combination treatment of estrogen deprivation and palbociclib. We then added the dynamics of the estrogen concentration in the medium and extended the short-term model to a long-term model. The long-term model can simulate various mono- or combination treatments at different doses over 28 days. In addition to palbociclib, we added another Cdk4/6 inhibitor, abemaciclib, which can induce apoptosis at high concentrations. The model can then match the effects of abemaciclib treatment at two different doses and capture the apoptosis induced by abemaciclib. After calibrating the model to these different treatment conditions, we used the model to explore the synergism among these different treatments.
The mathematical model predicts a significant synergism of palbociclib or abemaciclib in combination with fulvestrant, and the predicted synergisms were verified by experiments. This critical synergism between Cdk4/6 inhibitors and endocrine therapy may explain why Cdk4/6 inhibitors have achieved such pronounced success in clinical trials. Lastly, we used protein biomarkers (cyclin D1, cyclin E1, Cdk4, Cdk6, and Cdk2) and palbociclib dose-response proliferation assays to assess the difference between mono- and alternating therapy after 10 weeks of treatment. However, neither the protein levels nor the palbociclib dose-response showed significant differences after 10 weeks of treatment. Therefore, we cannot conclude that alternating therapy delays palbociclib resistance compared with palbociclib mono-treatment after 10 weeks; longer-term experiments or other methods will be needed to uncover any difference. Nevertheless, in this research we showed that a mechanism-based mathematical model is able to simulate and predict various effects of clinically used treatments on ER-positive breast cancer cells at different time scales. This mathematical model has the potential to explore ideas for potential drug treatments, optimize protocols that limit proliferation, and determine the drugs, doses, and alternating schedules for long-term experiments. / Doctor of Philosophy / Estrogen receptors are proteins found inside breast cancer cells that are activated by the hormone estrogen. Estrogen-receptor-positive breast cancer is the most common type of breast cancer and accounts for about 70% of breast cancer tumors. Endocrine therapy, which inhibits estrogen receptor signaling, and cyclin-dependent kinase 4 and 6 (Cdk4/6) inhibitors are the preferred first-line therapy for patients with estrogen receptor-positive cancers. We built a mathematical model of MCF7 cells (an estrogen receptor-positive breast cancer cell line) in response to these standard first-line therapies.
This mathematical model can capture the experimentally observed protein and cell proliferation changes in response to various treatment conditions, including different drug combinations, different doses, and different treatment durations up to 28 days. The model can then be used to look for more effective treatment possibilities. In particular, our mathematical model predicted a strong synergism between Cdk4/6 inhibitors and endocrine therapy, which could allow significant reductions in drug dosage while producing the same effect. This synergism was verified by experiments. In addition to treatment methods where one drug or a combination of several drugs is used continuously, we consider alternating among various therapies in a fixed cycle. The mathematical model can help us determine which drugs and which doses might be most appropriate. Since an alternating therapy does not inhibit one particular target non-stop, the hope is that alternating therapies can delay the onset of drug resistance, where the drug becomes less effective or stops working completely. Unfortunately, an initial 10-week experiment to test for differences in resistance to a mono-therapy versus an alternating therapy did not show a significant difference, pointing to the need for longer experiments to see if alternating therapies can actually make a difference in resistance. Mathematical models will be important for determining the drugs, doses, and time intervals to be used in these experiments, as figuring out the best options by trial and error in such long-term experiments is not practical.
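The kind of dose-response proliferation behavior modeled in this work can be caricatured by a deliberately minimal sketch: logistic growth with a Hill-type inhibition term. All parameter values below are invented for illustration; this is not the dissertation's calibrated mechanistic model:

```python
# Euler integration of dN/dt = r * inhib(dose) * N * (1 - N/K), where
# inhib is a Hill-type fraction of growth remaining under an inhibitor
# dose.  Every parameter value here is hypothetical.

def simulate(days, dose, r=0.5, K=1.0, ec50=0.1, n=2, n0=0.01, dt=0.01):
    """Simulate normalized cell density N over `days` under a fixed dose."""
    inhib = 1.0 / (1.0 + (dose / ec50) ** n)  # 1 at dose 0, -> 0 at high dose
    N = n0
    for _ in range(int(days / dt)):
        N += dt * r * inhib * N * (1.0 - N / K)
    return N

untreated = simulate(days=7, dose=0.0)   # free logistic growth
treated = simulate(days=7, dose=0.5)     # strong-dose growth suppression
```

A calibrated mechanistic model replaces the single `inhib` factor with the measured protein network (ER signaling, c-Myc, RB1, cyclins), which is what lets it predict unseen drug combinations rather than merely fit dose-response curves.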
Mechanistic understanding of biogranulation for continuous flow wastewater treatment and organic waste valorization. An, Zhaohui (20 April 2022)
Aerobic granular sludge has been regarded as a promising alternative to the conventional activated sludge that has been used for a century, in that granular sludge offers high biomass retention, fast sludge-water separation, and a small footprint. However, this technology has rarely been applied in continuous flow reactors (CFRs), which are the most common type of bioreactor used in water resource recovery facilities across the world. Hence, the overarching goal of this study is to provide an advanced understanding of the biogranulation mechanism to enable industrial application of this technology. The lack of long-term stability studies in CFRs has restricted its full-scale acceptability. The high settling-velocity-based selection pressure has been regarded as the ultimate driving force toward biogranulation in sequencing batch reactors (SBRs). In this study, this physical selection pressure was first weakened and then eliminated in CFRs to investigate its role in maintaining the long-term structural stability of aerobic granules. Given that implementing settling-velocity-based selection pressure alone can cultivate biogranules in SBRs but not in completely stirred tank reactors (CSTRs), the essential role of feast/famine conditions was investigated. Seventeen sets of data collected from both the literature and this study were analyzed to develop a general understanding of the granulation mechanisms. The outcome indicated that granulation is more sensitive to the feast/famine conditions than to the settling-velocity-based selection pressure. The theory was tested in a CFR with 10 CSTR chambers connected in series to provide feast/famine conditions, followed by a physical selector separating the slow-settling bioflocs and fast-settling biogranules into feast and famine zones, respectively.
Along with successful biogranulation, the startup performance interruption problem inherent in SBRs was also resolved in this innovative design, because the sludge loss due to physical washout selection was mitigated by returning bioflocs to the famine zone. A cost-effective engineering strategy was then put forward to promote the full-scale application of this advanced technology. With this generalized biogranulation theory, pure-culture biogranules with desired functions for high value-added bioproducts were also investigated and achieved for the first time in this study, which opens a new avenue for harnessing granulation technology to intensify waste valorization bioprocesses. / Doctor of Philosophy / Rapid population growth and unprecedented urbanization are overloading the capacity of many water resource recovery facilities (WRRFs). Therefore, there is a need to develop a cost-effective strategy to upgrade the treatment capacity of existing WRRFs without incurring major capital investment. Because conventional activated sludge has a loose structure and poor settleability, replacing it with dense aerobic granular sludge offers the opportunity to intensify the capacity of existing WRRF tankage and clarifiers through better retention of bacterial mass, which provides not only a fast pollutant removal rate but also a high water-solids separation rate. Aerobic granulation technology turns traditional activated sludge into granular sludge by inducing microbial cell-to-cell co-aggregation. Although this technology has been under development for more than 20 years, its application in full-scale WRRFs is still limited because the majority of WRRFs are constructed with continuous flow reactors, in which the aerobic granulation mechanism largely remains unknown. Moreover, the long-term stability of aerobic granules in continuous flow reactors also remains unstudied, further constraining the full-scale application of the technology.
The sensitivity of aerobic granulation to physical selection and biological selection was analyzed in this study. The results indicated that aerobic granulation is more sensitive to the latter than to the former. Subsequently, this theory was tested in a novel bioreactor setup that creates feast/famine conditions for biological selection. A physical selector was installed at the end of the bioreactor to separate the fast- and slow-settling bioparticles and return them to the feast and famine zones, respectively. This unique reactor design and operational strategy provided an economical approach to retrofitting current WRRFs to upgrade treatment capacity without major infrastructure alteration. It also protected bioreactor startup performance, enhancing the stability of WRRFs in future applications. Finally, this updated understanding of aerobic granulation theory was for the first time extrapolated to, and verified with, the formation of pure-culture biogranules harnessed in this study for value-added bioproduct valorization from waste materials.
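The settling-based selection argument can be illustrated with a toy mass balance. All numbers below are invented, and washout of slow settlers is the classic SBR-style physical selection, not this study's floc-return design: if each cycle removes a larger fraction of slow-settling flocs than of fast-settling granules, the granule share of the biomass rises even when both grow at the same rate:

```python
# Per-cycle growth followed by settling-based washout.  Washout fractions
# and the growth factor are hypothetical illustration values.

def run_cycles(flocs, granules, cycles,
               washout_flocs=0.3, washout_granules=0.05, growth=1.2):
    """Return (floc mass, granule mass) after the given number of cycles."""
    for _ in range(cycles):
        flocs = flocs * growth * (1 - washout_flocs)        # slow settlers lost
        granules = granules * growth * (1 - washout_granules)  # mostly retained
    return flocs, granules

flocs, granules = run_cycles(flocs=90.0, granules=10.0, cycles=30)
granule_share = granules / (flocs + granules)  # approaches 1 under selection
```

The dissertation's point is that this physical mechanism alone is insufficient: without feast/famine conditions (as in a CSTR), the granules that the selector retains do not form or persist in the first place.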
Real-Time Resource Optimization for Wireless Networks. Huang, Yan (11 January 2021)
Resource allocation in modern wireless networks is constrained by increasingly stringent real-time requirements. Such real-time requirements typically come from, among others, the short coherence time of a wireless channel, the small time resolution for resource allocation in OFDM-based radio frame structures, or the low-latency requirements of delay-sensitive applications. An optimal resource allocation solution is useful only if it can be determined and applied to the network entities within the expected time. For today's wireless networks, such as 5G NR, this expected time (or real-time requirement) can be as low as 1 ms or even 100 μs. Most existing resource optimization solutions for wireless networks do not explicitly take the real-time requirement as a constraint when developing solutions. In fact, mainstream research relies on asymptotic complexity analysis when designing solution algorithms. Asymptotic complexity analysis is concerned only with how computational complexity grows as the input size increases (as in the big-O notation). It cannot capture a real-time requirement that is measured in wall-clock time. As a result, existing approaches such as exact or approximate optimization techniques from operations research are usually not useful for wireless networks in the field. Many problem-specific heuristic solutions with polynomial-time asymptotic complexities may suffer a similar fate if their complexities are not tested in actual wall-clock time.
To address the limitations of existing approaches, this dissertation presents novel real-time solution designs for two types of optimization problems in wireless networks: i) problems that have closed-form mathematical models, and ii) problems that cannot be modeled in closed form. For the first type of problem, we propose a novel approach that consists of (i) problem decomposition, which breaks an original optimization problem into a large number of small and independent sub-problems, (ii) search intensification, which identifies the most promising problem sub-space and selects a small set of sub-problems to match the available GPU processing cores, and (iii) GPU-based large-scale parallel processing, which solves the selected sub-problems in parallel and finds a near-optimal solution to the original problem. The efficacy of this approach is illustrated by our solutions to the following two problems.
• Real-Time Scheduling to Achieve Fair LTE/Wi-Fi Coexistence: We investigate a resource optimization problem for the fair coexistence of LTE and Wi-Fi in the unlicensed spectrum. The real-time requirement for finding the optimal channel division and LTE resource allocation solution is on a 1 ms time scale. This problem involves the optimal division of transmission time between LTE and Wi-Fi across multiple unlicensed bands, and the resource allocation among LTE users within the LTE "ON" periods. We formulate this optimization problem as a mixed-integer linear program and prove its NP-hardness. Then, by exploiting the unique problem structure, we propose a real-time solution design based on problem decomposition and GPU-based parallel processing techniques. Results from an implementation on the NVIDIA GPU/CUDA platform demonstrate that the proposed solution can achieve a near-optimal objective and meet the 1 ms timing requirement of 4G LTE.
• An Ultrafast GPU-based Proportional Fair Scheduler for 5G NR: We study the popular proportional-fair (PF) scheduling problem in a 5G NR environment. The real-time requirement for determining the optimal (with respect to the PF objective) resource allocation and MCS selection solution is 125 μs (under 5G numerology 3). In this problem, we need to allocate frequency-time resource blocks on an operating channel and assign a modulation and coding scheme (MCS) to each active user in the cell. We present GPF+, a GPU-based real-time PF scheduler. With GPF+, the original PF optimization problem is decomposed into a large number of small and independent sub-problems. We then employ a cross-entropy-based search intensification technique to identify the most promising problem sub-space and select a small set of sub-problems to fit into a GPU. After solving the selected sub-problems in parallel using GPU cores, we find the best sub-problem solution and use it as the final scheduling solution. Evaluation results show that GPF+ is able to provide near-optimal PF performance in a 5G cell while meeting the 125 μs real-time requirement.
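The decompose / intensify / solve-in-parallel pattern can be sketched on a toy problem. Here a sequential CPU loop stands in for the GPU step, and the objective, problem sizes, and cross-entropy parameters are all invented:

```python
# 1) decompose: fixing the first variable yields independent sub-problems;
# 2) intensify: a cross-entropy loop shifts sampling mass toward promising
#    sub-problems; 3) solve the sampled sub-problems and keep the best.
import random

def objective(x0, x1):
    return -(x0 - 7) ** 2 - (x1 - 3) ** 2   # toy objective, maximum 0 at (7, 3)

N_SUB = 100                                  # sub-problem i fixes x0 = i

def solve_subproblem(i):
    """Exhaustively solve the small one-variable sub-problem with x0 = i."""
    return max(objective(i, x1) for x1 in range(10))

random.seed(0)
probs = [1.0 / N_SUB] * N_SUB                # sampling distribution over sub-problems
for _ in range(20):                          # cross-entropy intensification loop
    sample = list(set(random.choices(range(N_SUB), weights=probs, k=30)))
    elites = sorted(sample, key=solve_subproblem, reverse=True)[:5]
    # smoothed update: shift probability mass toward the elite sub-problems
    probs = [0.9 * p + 0.1 * (i in elites) / len(elites)
             for i, p in enumerate(probs)]

best_value = max(solve_subproblem(i) for i in elites)
```

On a GPU, the `solve_subproblem` calls for the selected set would run as one parallel batch across cores, which is what makes the approach fast enough for sub-millisecond deadlines.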
For the second type of problems, where there is no closed-form mathematical formulation, we propose to employ model-free deep learning (DL) or deep reinforcement learning (DRL) techniques along with judicious consideration of timing requirement throughout the design. Under DL/DRL, we employ deep function approximators (neural networks) to learn the unknown objective function of an optimization problem, approximate an optimal algorithm to find resource allocation solutions, or discover important mapping functions related to the resource optimization. To meet the real-time requirement, we propose to augment DL or DRL methods with optimization techniques at the input or output of the deep function approximators to reduce their complexities and computational time. Under this approach, we study the following two problems:
• A DRL-based Approach to Dynamic eMBB/URLLC Multiplexing in 5G NR: We study the problem of dynamic multiplexing of eMBB and URLLC on the same channel through preemptive resource puncturing. The real-time requirement for determining the optimal URLLC puncturing solution is 1 ms (under 5G numerology 0). A major challenge in solving this problem is that it cannot be modeled using closed-form mathematical expressions. To address this issue, we develop a model-free DRL approach which employs a deep neural network to learn an optimal algorithm to allocate the URLLC puncturing over the operating channel, with the objective of minimizing the adverse impact from URLLC traffic on eMBB. Our contributions include a novel learning method that exploits the intrinsic properties of the URLLC puncturing optimization problem to achieve a fast and stable learning convergence, and a mechanism to ensure feasibility of the deep neural network's output puncturing solution. Experimental results demonstrate that our DRL-based solution significantly outperforms state-of-the-art algorithms proposed in the literature and meets the 1 ms real-time requirement for dynamic multiplexing.
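One simple way to realize the kind of output-feasibility mechanism mentioned above (not necessarily the dissertation's) is to project a raw neural-network score vector onto a nonnegative integer allocation that sums to the URLLC resource-block demand, for example by largest-remainder rounding:

```python
# Map a raw (possibly negative, non-integer) score vector over mini-slots
# to a valid puncturing allocation: nonnegative, integer, summing to the
# demanded number of resource blocks.  A hypothetical projection step.

def project_to_feasible(raw, demand):
    """Return an integer allocation per slot with sum equal to `demand`."""
    scores = [max(r, 0.0) for r in raw]      # clip negatives
    total = sum(scores) or 1.0               # avoid division by zero
    shares = [demand * s / total for s in scores]
    alloc = [int(s) for s in shares]         # floor of each share
    # hand out the remaining blocks by largest fractional remainder
    order = sorted(range(len(raw)), key=lambda i: shares[i] - alloc[i],
                   reverse=True)
    for i in order[: demand - sum(alloc)]:
        alloc[i] += 1
    return alloc

alloc = project_to_feasible([0.9, -0.2, 0.45, 0.45], demand=4)  # -> [2, 0, 1, 1]
```

Guaranteeing feasibility outside the network keeps the learning problem unconstrained, which typically stabilizes training compared with penalizing infeasible outputs.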
• A DL-based Link Adaptation for eMBB/URLLC Multiplexing in 5G NR: We investigate MCS selection for eMBB traffic under the impact of URLLC preemptive puncturing. The real-time requirement for determining the optimal MCSs for all eMBB transmissions scheduled in a transmission interval is 125 μs (under 5G numerology 3). The objective is to have eMBB meet a given block-error rate (BLER) target under the adverse impact of URLLC puncturing. Since this problem cannot be mathematically modeled in closed form, we propose a DL-based solution design that uses a deep neural network to learn and predict the BLER of a transmission under each MCS level. Based on these BLER predictions, an optimal MCS that achieves the BLER target can then be found for each transmission. To meet the 5G real-time requirement, we implement this design through a hybrid CPU and GPU architecture to minimize the execution time. Extensive experimental results show that our design can select the optimal MCS under the impact of preemptive puncturing and meet the 125 μs timing requirement. / Doctor of Philosophy / In modern wireless networks such as 4G LTE and 5G NR, the optimal allocation of radio resources must be performed within a real-time requirement on a 1 ms or even 100 μs time scale. Such a real-time requirement comes from the physical properties of wireless channels, the short time resolution for resource allocation defined in the wireless communication standards, and the low-latency requirements of delay-sensitive applications.
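The BLER-prediction-based MCS selection described above reduces to a simple feasibility scan once per-MCS BLER predictions are available. In this sketch the neural predictor is replaced by a made-up lookup; the target value and predictions are hypothetical:

```python
# Given predicted BLER per MCS index, choose the most aggressive (highest)
# MCS that still meets the target, falling back to the most robust MCS.

BLER_TARGET = 0.1   # hypothetical 10% block-error-rate target

def select_mcs(predicted_bler):
    """predicted_bler[m] = predicted BLER if MCS index m is used."""
    feasible = [m for m, b in enumerate(predicted_bler) if b <= BLER_TARGET]
    return max(feasible) if feasible else 0

# stand-in predictions: BLER grows with MCS aggressiveness
chosen = select_mcs([0.001, 0.01, 0.05, 0.09, 0.2, 0.4])  # -> 3
```

The hard part, and the reason a deep network is needed, is producing accurate `predicted_bler` values under arbitrary puncturing patterns within the 125 μs budget; the selection scan itself is trivial.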
Real-time requirements, although necessary for wireless networks in the field, have hardly been considered as a key constraint in solution design by the research community. Existing solutions in the literature mostly consider theoretical computational complexity rather than actual computation time as measured by the wall clock.
To address the limitations of existing approaches, this dissertation presents real-time solution designs for two types of optimization problems in wireless networks: i) problems that have mathematical models, and ii) problems that cannot be modeled mathematically. For the first type of problem, we propose a novel approach that consists of (i) problem decomposition, (ii) search intensification, and (iii) GPU-based large-scale parallel processing techniques. The efficacy of this approach is illustrated by our solutions to the following two problems.
• Real-Time Scheduling to Achieve Fair LTE/Wi-Fi Coexistence: We investigate a resource optimization problem for fair coexistence between LTE and Wi-Fi users in the same (unlicensed) spectrum. The real-time requirement for finding the optimal LTE resource allocation solution is on a 1 ms time scale.
• An Ultrafast GPU-based Proportional Fair Scheduler for 5G NR: We study the popular proportional-fair (PF) scheduling problem in a 5G NR environment. The real-time requirement for determining the optimal resource allocation and modulation and coding scheme (MCS) for each user is 125 μs.
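The PF objective in the bullet above can be illustrated with the textbook single-resource version of the scheduler (a minimal sketch only; the dissertation's GPU design jointly optimizes resource allocation and MCS across many resource blocks, which this toy does not attempt). The metric, instantaneous rate divided by exponentially averaged throughput, and the averaging constant `tc` follow the standard formulation; the numbers are made up:

```python
def pf_schedule(inst_rates, avg_thpt, tc=100.0):
    """One proportional-fair scheduling decision for a single resource block.

    inst_rates: achievable rate of each user in this TTI
    avg_thpt:   exponentially averaged past throughput of each user
    Returns the chosen user index and the updated averages (EWMA with
    time constant tc)."""
    # PF metric: instantaneous rate divided by long-term average throughput
    metrics = [r / max(t, 1e-9) for r, t in zip(inst_rates, avg_thpt)]
    winner = max(range(len(metrics)), key=lambda i: metrics[i])
    # EWMA update: only the scheduled user adds its served rate
    new_avg = [(1 - 1/tc) * t + (1/tc) * (inst_rates[i] if i == winner else 0.0)
               for i, t in enumerate(avg_thpt)]
    return winner, new_avg

# A user with a good channel but high past throughput can lose to a
# starved user -- the essence of proportional fairness.
user, avg = pf_schedule(inst_rates=[10.0, 6.0], avg_thpt=[50.0, 5.0])
print(user)  # 1, since 6/5 > 10/50
```

The hard part, and the subject of the bullet, is doing this jointly over all users, resource blocks, and MCS choices within 125 μs, which is what motivates the GPU-based parallel design.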
For the second type of problems, where there is no mathematical formulation, we propose to employ model-free deep learning (DL) or deep reinforcement learning (DRL) techniques along with judicious consideration of timing requirement throughout the design. Under this approach, we study the following two problems:
• A DRL-based Approach to Dynamic eMBB/URLLC Multiplexing in 5G NR: We study the problem of dynamic multiplexing of eMBB and URLLC on the same channel through preemptive resource puncturing. The real-time requirement for determining the optimal URLLC puncturing solution is 1 ms.
• A DL-based Link Adaptation for eMBB/URLLC Multiplexing in 5G NR: We investigate MCS selection for eMBB traffic under the impact of URLLC preemptive puncturing. The real-time requirement for determining the optimal MCSs for all eMBB transmissions scheduled in a transmission interval is 125 μs.
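Once per-MCS BLER predictions are available, the link-adaptation step in the second bullet reduces to picking the most aggressive MCS that still meets the target. A minimal sketch of that final selection step (the predictor itself, the deep neural network, is not shown; the BLER values and target below are hypothetical):

```python
def select_mcs(predicted_bler, target=0.1):
    """Pick the highest MCS index whose predicted BLER meets the target.

    predicted_bler: one BLER estimate per MCS level, assumed non-decreasing
    in the MCS index (higher MCS = higher spectral efficiency but less
    robust). Returns the most efficient feasible MCS, or 0 if none
    qualifies."""
    best = 0
    for mcs, bler in enumerate(predicted_bler):
        if bler <= target:
            best = mcs
    return best

# Hypothetical BLER predictions for 5 MCS levels after URLLC puncturing:
print(select_mcs([0.001, 0.01, 0.08, 0.3, 0.7]))  # 2
```

The real-time difficulty lies not in this comparison but in producing the BLER predictions for every scheduled transmission within the 125 μs budget.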
|
416 |
Systems Biology Study of Breast Cancer Endocrine Response and Resistance
Chen, Chun, 08 November 2013
As robust systems, cells can choose and switch between different signaling programs according to their differentiation stage and external environment. Cancer cells can hijack this plasticity to develop drug resistance. For example, breast cancers that are initially responsive to endocrine therapy often develop resistance robustly. This process is dynamically controlled by interactions among genes, proteins, RNAs, and environmental factors at multiple scales. The complexity of this network cannot be understood by studying individual components of the cell. Systems biology focuses on the interactions of basic components so as to uncover the molecular mechanisms of cell physiology from a systemic and dynamical perspective. Mathematical modeling, as a tool of systems biology, provides a unique opportunity to understand the underlying mechanisms of endocrine response and resistance in breast cancer.
In Chapter 2, I focused on the experimental observations that breast cancer cells can switch between estrogen receptor α (ERα) regulated and growth factor receptor (GFR) regulated signaling pathways for survival and proliferation. A mathematical model based on the signaling crosstalk between ERα and GFR was constructed. The model successfully explains several intriguing experimental findings related to bimodal distributions of GFR proteins in breast cancer cells, which had lacked a reasonable explanation for almost two decades. The model also explains how transient overexpression of ERα promotes resistance of breast cancer cells to estrogen withdrawal. Understanding the non-genetic heterogeneity associated with this survival-signaling switch can shed light on the design of more efficient breast cancer therapies.
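The bimodal GFR distributions mentioned above are the population-level signature of a bistable switch: individual cells settle into one of two stable signaling states. A generic positive-feedback ODE illustrates how two stable steady states arise (this is an illustrative stand-in, not the chapter's actual ERα/GFR crosstalk model; all parameters are invented):

```python
def steady_state(x0, s=0.1, beta=4.0, K=1.0, n=4, dt=0.01, steps=20000):
    """Integrate dx/dt = s + beta*x^n/(K^n + x^n) - x to steady state
    by forward Euler.

    x stands in for GFR signaling activity that sustains itself through
    positive feedback (Hill coefficient n); s is a small basal input."""
    x = x0
    for _ in range(steps):
        dx = s + beta * x**n / (K**n + x**n) - x
        x += dt * dx
    return x

# Two initial conditions converge to two different stable states --
# a bistable switch, which across a cell population reads out as a
# bimodal distribution of GFR levels.
low = steady_state(0.1)
high = steady_state(3.0)
print(round(low, 2), round(high, 2))
```

Which state a cell occupies depends on its history, so transient perturbations (such as transient ERα overexpression) can leave a lasting mark, consistent with the resistance behavior described above.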
In Chapter 3, I utilized a novel strategy to model the transitions between the endocrine response and resistance states in breast cancer cells. Using the experimentally observed estrogen-sensitivity phenotypes in breast cancer (sensitive, hypersensitive, and supersensitive) as an example, I proposed a useful framework for modeling cell state transitions on the energy landscape of breast cancer as a dynamical system. Grounded in the most probable routes of transition on the breast cancer landscape, a state transition model was developed. By analyzing this model, I investigated the optimal settings of two intuitive strategies, sequential and intermittent treatments, for overcoming endocrine resistance in breast cancer. The method used in this study can be generalized to study treatment strategies and improve treatment efficiency in breast cancer as well as other types of cancer. / Ph. D.
|
417 |
Spatiotemporal Model of the Asymmetric Division Cycle of Caulobacter crescentus
Subramanian, Kartik, 24 October 2014
The life cycle of Caulobacter crescentus is of interest because of the asymmetric nature of cell division, which gives rise to progeny with distinct morphology and function. One daughter, called the stalked cell, is sessile and capable of DNA replication, while the second daughter, called the swarmer cell, is motile but quiescent. Advances in microscopy combined with molecular biology techniques have revealed that macromolecules are localized in a non-homogeneous fashion in the cell cytoplasm, and that dynamic localization of proteins is critical for cell cycle progression and asymmetry. However, the molecular-level mechanisms that govern protein localization, and that enable the cell to exploit subcellular localization to orchestrate an asymmetric life cycle, remain obscure. There are also instances of researchers using intuitive reasoning to develop very different verbal explanations of the same biological process. To provide a complementary view of the molecular mechanism controlling the asymmetric division cycle of Caulobacter, we have developed a mathematical model of the cell cycle regulatory network.
Our reaction-diffusion models provide additional insight into specific mechanisms regulating different aspects of the cell cycle. We describe a molecular mechanism by which the bifunctional histidine kinase PleC exhibits bistable transitions between phosphatase and kinase forms. We demonstrate that the kinase form of PleC is crucial for both swarmer-to-stalked cell morphogenesis and for replicative asymmetry in the predivisional cell. We propose that localization of the scaffolding protein PopZ can be explained by a Turing-type mechanism. Finally, we discuss a preliminary model of ParA-dependent chromosome segregation. Our model simulations are in agreement with experimentally observed protein distributions in wild-type and mutant cells. In addition to predicting novel mutants that can be tested in the laboratory, we use our models to reconcile competing hypotheses and provide a unified view of the regulatory mechanisms that direct the Caulobacter cell cycle. / Ph. D.
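A Turing-type mechanism, as invoked above for PopZ localization, means the uniform protein distribution is stable in the absence of diffusion but destabilized by it when the inhibitor diffuses much faster than the activator. This signature can be checked by linear stability analysis; the sketch below uses the classic Gierer-Meinhardt activator-inhibitor model as a stand-in (not the dissertation's PopZ model; parameters are illustrative):

```python
def turing_growth_rate(k2, da=0.02, dh=1.0):
    """Largest eigenvalue (real part) of the linearized Gierer-Meinhardt
    system at wavenumber-squared k2.

    Reaction terms: da/dt = a^2/h - a,  dh/dt = a^2 - 2h, whose uniform
    steady state is (a*, h*) = (2, 2). Diffusion adds -D*k2 to each
    diagonal entry of the Jacobian:
        [[ 1 - da*k2, -1         ],
         [ 4,         -2 - dh*k2 ]]
    A positive growth rate at some k2 > 0, together with stability at
    k2 = 0, is the signature of a Turing (diffusion-driven) instability."""
    fa, fh = 1.0 - da * k2, -1.0
    ga, gh = 4.0, -2.0 - dh * k2
    tr, det = fa + gh, fa * gh - fh * ga
    disc = tr * tr - 4 * det
    return (tr + disc**0.5) / 2 if disc >= 0 else tr / 2

# Stable without diffusion (k2 = 0), unstable for a band of spatial
# modes, stable again at very short wavelengths:
print(turing_growth_rate(0.0), turing_growth_rate(10.0), turing_growth_rate(100.0))
```

The band of unstable wavenumbers sets the length scale of the resulting protein focus, which is why such a mechanism can pin a protein like PopZ to a cell pole.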
|
418 |
Theoretical and Computational Studies on the Dynamics and Regulation of Cell Phenotypic Transitions
Zhang, Hang, 18 April 2016
Cell phenotypic transitions, or cell fate decision-making processes, are regulated by complex regulatory networks composed of genes, RNAs, proteins, and metabolites. The regulation can take place at the epigenetic, transcriptional, translational, and post-translational levels, among others.
Epigenetic histone modification plays an important role in cell phenotype maintenance and transitions. However, the underlying mechanism relating dynamical histone modifications to stable epigenetic cell memory remains elusive. Incorporating key pieces of molecular-level experimental information, we built a statistical mechanics model for the inheritance of epigenetic histone modifications. The model reveals that enzyme selectivity for different histone substrates and cooperativity between neighboring nucleosomes are essential to generate bistability of the epigenetic memory. We then applied the epigenetic modeling framework to the differentiation process of olfactory sensory neurons (OSNs), where the observed 'one-neuron-one-allele' phenomenon has remained a long-standing puzzle. Our model successfully explains this singular behavior in terms of epigenetic competition and enhancer cooperativity during the differentiation process. Epigenetic-level and transcriptional-level events cooperate synergistically in the OSN differentiation process. The model also makes several testable experimental predictions. In general, the epigenetic modeling framework can be used to study phenotypic transitions whenever histone modification is a major regulatory element in the system.
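The role of nucleosome cooperativity in epigenetic bistability can be sketched with a toy stochastic model in the spirit of recruitment-based models of histone-mark inheritance (an illustrative stand-in, not the dissertation's statistical mechanics model; the three states, the update rules, and the parameter `alpha` are invented for the sketch):

```python
import random

def simulate_epigenetic(n=60, alpha=0.95, steps=200000, init=1, seed=2):
    """Toy three-state nucleosome model of cooperative epigenetic memory.

    States: -1 = active mark, 0 = unmodified, +1 = repressive mark.
    With probability alpha, a randomly chosen nucleosome is pushed one
    step toward the state of another randomly chosen nucleosome
    (cooperative, recruited conversion); otherwise it takes a random
    step (noisy conversion). Returns the final fraction of repressive
    marks."""
    rng = random.Random(seed)
    s = [init] * n
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < alpha:
            target = s[rng.randrange(n)]
            if target != 0 and s[i] != target:
                s[i] += 1 if target > s[i] else -1
        else:
            s[i] = max(-1, min(1, s[i] + rng.choice([-1, 1])))
    return sum(1 for x in s if x == 1) / n

# With strong cooperativity, each epigenetic state is self-sustaining:
# chromatin initialized repressed stays mostly repressed, and vice versa.
m_up = simulate_epigenetic(init=1)
m_dn = simulate_epigenetic(init=-1)
print(m_up, m_dn)
```

Lowering `alpha` (weaker recruitment relative to noise) destroys the memory: both initializations drift to the same mixed state, which is the sense in which cooperativity is essential for bistability.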
Post-transcriptional regulation also plays an important role in cell phenotype maintenance. Our integrated experimental and computational studies revealed one such regulatory motif in the differentiation of definitive endoderm. We identified two RNA-binding proteins, hnRNPA1 and KSRP, which repress each other through the microRNAs miR-375 and miR-135a. The motif can generate switch-like behavior and serve as a noise filter in the stem cell differentiation process. Manipulating the motif could enhance differentiation efficiency toward a desired lineage.
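The mutual-repression motif described above behaves, in its coarsest form, like a genetic toggle switch. A minimal effective ODE sketch (the miRNA intermediates miR-375/miR-135a are lumped into direct repression between the two proteins; parameters are illustrative, not fitted to data):

```python
def toggle_state(x0, y0, a=4.0, n=3, dt=0.01, steps=20000):
    """Generic mutual-repression switch, integrated by forward Euler:
        dx/dt = a / (1 + y^n) - x
        dy/dt = a / (1 + x^n) - y
    x and y stand in for the levels of two mutually repressing factors
    (here, effective hnRNPA1- and KSRP-like variables). Returns which
    one dominates at steady state."""
    x, y = x0, y0
    for _ in range(steps):
        dx = a / (1 + y**n) - x
        dy = a / (1 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return "x-high" if x > y else "y-high"

# Different initial conditions latch into different stable states:
print(toggle_state(3.0, 0.1), toggle_state(0.1, 3.0))  # x-high y-high
```

The same structure underlies the noise-filtering property: small fluctuations around either stable state are pulled back, and only a sustained signal can flip the switch.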
Last we performed mathematical modeling on an epithelial-to-mesenchymal transition (EMT) process, which could be used by tumor cells for their migration. Our model predicts that the IL-6 induced EMT is a stepwise process with multiple intermediate states.
In summary, our theoretical and computational analyses of cell phenotypic transitions provide novel insights into the underlying mechanisms of cell fate decisions. The modeling studies revealed general physical principles underlying complex regulatory networks. / Ph. D.
|
419 |
Strategic Planning for the Reverse Supply Chain: Optimal End-of-Life Option, Product Design, and Pricing
Steeneck, Daniel Waymouth, 06 November 2014
A company's decisions on how to manage its reverse supply chain (RSC) are important for both economic and environmental reasons. From a strategic standpoint, the key decision a manufacturer makes is whether or not to collect products at their end-of-life (EOL) (i.e., when their useful lives are over), and if so, how to recover value from the recovered products. We call this decision the EOL option of a product, and it determines how the RSC is designed and managed overall. Many EOL options exist for a product, such as resale, refurbishment, remanufacturing, and part salvage. However, many factors influence the optimal EOL option. These factors include the product's: (i) characteristics, (ii) design, and (iii) pricing. A product's characteristics are its properties that impact the various costs incurred during its production, its residual part values, and customer demand. In this work, the product design is viewed as the choice of quality for each of its parts. A part's quality level determines, among other things, its cost, salvage value, and the likelihood of obtaining it in good condition from a disassembled used product. Finally, the manufacturer must determine how to price its new and used products. This decision depends on many considerations, such as whether new and used products compete and whether competition exists from other manufacturers. The choice of appropriate EOL options for products constitutes a foundation of RSC design. In this work, we study how to determine a product's optimal EOL option and consider the impact of product design and pricing on this decision.
We present a full description of the system that details the relationships among all entities. The system description reveals the use of a production-planning type of modeling strategy. Additionally, a comprehensive and general mathematical model is presented that takes into consideration multi-period planning and product inventory. A unique aspect of our model, compared with previous production planning models for RSCs, is that we treat product returns as endogenous rather than exogenous variables. This model forms the basis of our research, and we use its special cases in our analysis.
To begin our analysis of the problem, we study the case in which the product design and price are fixed. Both non-mandated and mandated collection are considered. Our analysis focuses on a special case of the problem involving two stages: in the first stage, new products are produced, and in the second stage, the EOL products are collected for value recovery. For fixed product design and price, our analysis reveals a fundamental mapping of product characteristics onto optimal EOL options. It is germane to our understanding of the problem in general since a multi-period problem is separable into multiple two-stage problems. Necessary and sufficient optimality conditions are also presented for each possible solution of this two-stage problem. For the two-part problem, a graphical mapping of product characteristics onto optimal EOL options is also presented, which reveals how EOL options vary with product characteristics.
Additionally, we study the case of product design under mandated collection, as encountered in product leasing. We assume new production cost, part replacement cost, and part salvage value to be functions of the quality level of a part, along with the likelihood of recovering a good part from a returned product. These are reasonable assumptions for leased products, since the customer is paying for the usage of the product over a fixed contract period. In this case, the two-stage model can still be used to gain insights. For the two-part problem, a method for mapping part yields onto optimal EOL options is presented. Closed-form optimality conditions for the joint determination of part yields and EOL options are not generally attainable for the two-stage case; however, computationally efficient methods are developed for some relatively non-restrictive special cases. It is found that, typically, a part may belong to one of three major categories: (i) it is of low quality and will need to be replaced to perform remanufacturing, (ii) it is of high quality and its surplus will be salvaged, or (iii) it is of moderate quality and just enough units are collected to meet remanufactured-product demand.
Finally, we consider the problem of determining optimal prices for new and remanufactured products under non-mandated collection at the manufacturer's discretion. New and remanufactured products may or may not compete, depending on market conditions. Additionally, we assume the manufacturer to have a monopoly on the product. Again, the two-stage problem is used, and efficient solution methods and key insights are presented. / Ph. D.
|
420 |
Investigations of Insulin-Like Growth Factor I Cell Surface Binding: Regulation by Insulin-Like Growth Factor Binding Protein-3 and Heparan Sulfate Proteoglycan
Balderson, Stephanie D., 22 May 1997
The primary aim of this text is to gain insight into how cellular activation by insulin-like growth factor I (IGF-I), in the presence of insulin-like growth factor binding protein-3 (IGFBP-3), is influenced by heparan sulfate proteoglycans (HSPGs). Initial research will be presented; the assumptions and hypotheses that went into the development of the mathematical models will be discussed; and future enhancements of the models will be explored. There are many potential scenarios for how each component might influence the others. Mathematical modeling techniques will highlight the contributions made by numerous extracellular parameters to IGF-I cell surface binding. Tentative assumptions can be applied to the modeling techniques, and the resulting predictions may help direct future experiments.
Experimentally, it was found that IGFBP-3 inhibited IGF-I binding to the bovine aortic endothelial (BAE) cell surface, while p9 HS slightly increased IGF-I BAE cell surface binding. IGFBP-3 has a higher binding affinity for IGF-I (3 × 10⁻⁹ M) than p9 HS has for IGF-I (1.5 × 10⁻⁸ M), as determined with cell-free binding assays. The presence of p9 HS countered the inhibiting effect of IGFBP-3 on IGF-I BAE cell surface binding.
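For intuition about the affinity comparison above, the standard single-site binding isotherm relates a binder's concentration and dissociation constant to the fraction of ligand it sequesters (treating the reported affinities as dissociation constants is an assumption of this sketch, as is the 10 nM binder concentration and the excess-binder approximation):

```python
def fraction_bound(binder_conc, kd):
    """Equilibrium fraction of IGF-I bound by a soluble binder, from the
    single-site binding isotherm:
        bound / total = [B] / (Kd + [B])
    Concentrations and Kd in molar. Assumes the binder is in large
    excess over IGF-I, so free [B] is approximately total [B]."""
    return binder_conc / (kd + binder_conc)

# At a hypothetical 10 nM of each binder, the higher-affinity IGFBP-3
# (3e-9 M) sequesters a larger fraction of IGF-I than p9 HS (1.5e-8 M):
b = 10e-9
print(round(fraction_bound(b, 3e-9), 2), round(fraction_bound(b, 15e-9), 2))
```

This kind of back-of-the-envelope comparison is the starting point for the competition terms in the full cell surface binding models, which must additionally track receptor and surface-site binding.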
Although preliminary experiments with labeled p9 HS and IGFBP-3 indicated little to no cell surface binding, later experiments indicated that both IGFBP-3 and p9 HS do bind to the BAE cell surface. Pre-incubation of BAE cells with either IGFBP-3 or p9 HS resulted in an increase in IGF-I BAE cell surface binding. The increase in IGF-I surface binding was more substantial when cells were pre-incubated with IGFBP-3 than with p9 HS. There was also a larger increase in IGF-I BAE cell surface binding when cells were pre-incubated with p9 HS than when p9 HS and IGF-I were added simultaneously. This suggests that IGFBP-3 and p9 HS surface binding plays a key role in IGF-I surface binding; however, p9 HS surface binding does not alter IGF-I surface binding as much as IGFBP-3 surface binding does.
Experimental work helps further the understanding of IGF-I cellular activation as regulated by IGFBP-3 and p9 HS. Developing mathematical models allows the researcher to focus on individual elements of a complex system and gain insight into how the real system will respond to individual changes. Discrepancies between the model results and the experimental data presented indicate that soluble receptor inhibition is not sufficient to account for the experimental results.
The alliance of engineering analysis and molecular biology helps clarify significant principles relevant to the delivery of growth factors into tissue. Awareness of the effects of individual parameters in the delivery system, made possible with mathematical models, will provide guidance and save time in the design of future growth factor therapeutics. / Master of Science
|