41

Models for the Generation of Heterogeneous Complex Networks

Youssef, Bassant El Sayed 02 July 2015 (has links)
Complex networks are composed of a large number of interacting nodes. Examples of complex networks include the topology of the Internet, connections between websites or web pages in the World Wide Web (WWW), and connections between participants in social networks. Due to their ubiquity, modeling complex networks is important for answering many research questions that cannot be answered without a mathematical model. For example, mathematical models of complex networks can be used to find the most vulnerable nodes to protect during a virus attack on the Internet, to predict connections between websites in the WWW, or to find members of different communities in social networks. Researchers have analyzed complex networks and concluded that they are distinguished from other networks by four specific statistical properties, commonly known in this field as: (i) the small world effect, (ii) high average clustering coefficient, (iii) scale-free power law degree distribution, and (iv) emergence of community structure. These four statistical properties are further described later in this dissertation. Most models used to generate complex networks attempt to produce networks with these statistical properties. Additionally, most of these network models generate homogeneous complex networks, where all the network nodes are considered to have the same properties. Homogeneous complex networks neglect the heterogeneous nature of the nodes in many complex networks. Moreover, some models proposed for generating heterogeneous complex networks are not general, as they make specific assumptions about the properties of the network. Including heterogeneity in the connection algorithm of a model would make it more suitable for generating the subset of complex networks that exhibit selective linking. Additionally, all models proposed to date for generating heterogeneous complex networks do not preserve all four of the statistical properties of complex networks stated above. Thus, formulation of a model for the generation of general heterogeneous complex networks, with characteristics that resemble as closely as possible the statistical properties common to the real-world networks that have received attention from the research community, is still an open research question. In this work, we propose two new types of models to generate heterogeneous complex networks. First, we introduce the Integrated Attribute Similarity Model (IASM). IASM uses preferential attachment (PA) to connect nodes based on a similarity measure for node attributes combined with a node's structural popularity measure. IASM integrates the attribute similarity measure and a structural popularity measure in the computation of the connection function used to determine connections between each arriving (newly created) node and the existing (previously created or old) network nodes. IASM is also the first model known to assign an attribute vector having more than one element to each node, thus allowing different attributes per node in the generated complex network. Networks generated using IASM have a power law degree distribution and preserve the small world phenomenon. IASM models are enhanced to increase their clustering coefficient using a triad formation step (TFS). In a TFS, a node connects to the neighbor of the node to which it was previously connected through preferential attachment, thus forming a triad. 
The TFS increases the number of triads formed in the generated network, which increases the network's average clustering coefficient. We also introduce a second novel model, the Settling Node Adaptive Model (SNAM). SNAM reflects the heterogeneous nature of connection standard requirements for nodes. The connection standard requirements for a node refer to the values of attribute similarity and/or structural popularity of an old node y that a new node x would find acceptable in order to connect to node y. SNAM is novel in that such a node connection criterion is not included in any previous model for the generation of complex networks. SNAM is shown to be successful in preserving the power law degree distribution, the small world phenomenon, and the high clustering coefficient of complex networks. Next, we implement a modification to the IASM and SNAM models that results in the emergence of community structure. Nodes are classified into classes according to their attribute values. The connection algorithm is modified to include the class similarity values between network nodes. This community structure model preserves the power law degree distribution and the small world property, and does not affect the average clustering coefficient values expected from both IASM and SNAM. Additionally, the model exhibits the presence of community structure, with most connections made between nodes belonging to the same class and only a small percentage of connections made between nodes of different classes. We perform a mathematical analysis of IASM and SNAM to study the degree distribution for networks generated by both models. This mathematical analysis shows that networks generated by both models have a power law degree distribution. Finally, we completed a case study to illustrate the potential value of our research on the modeling of heterogeneous complex networks. This case study was performed on a Facebook dataset. The case study shows that SNAM, with some modifications to the connection algorithm, is capable of generating a network with almost the same characteristics as found for the original dataset. The case study provides insight on how the flexibility of SNAM's connection algorithm can be an advantage that makes SNAM capable of generating networks with different statistical properties. Ideas for future research include studying the effect of using eigenvector centrality, instead of degree centrality, on the emergence of community structure in IASM; using the node index as an indication of its order of arrival to the network and distributing added connections fairly among network nodes over the life of the generated network; experimenting with the nature of attributes to generate a more comprehensive model; and using time-sensitive attributes in the models, where an attribute can change its value with time. / Ph. D.
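As a rough illustration of the connection rule described in this abstract (preferential attachment weighted by attribute similarity, followed by a triad formation step), the sketch below grows a small network in plain Python. The weighting parameter alpha, the cosine similarity measure, and all parameter values are illustrative assumptions, not the IASM/SNAM specification from the dissertation.

```python
import random

def cosine_similarity(a, b):
    # Attribute similarity between two attribute vectors (an illustrative choice of measure).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def grow_network(n_nodes=200, n_attrs=3, m_links=2, alpha=0.5, triad_prob=0.5, seed=1):
    """Grow a network in which each arriving node links to existing nodes with
    probability proportional to a mix of structural popularity (normalized degree)
    and attribute similarity, then optionally performs a triad formation step (TFS)."""
    rng = random.Random(seed)
    attrs = {0: [rng.random() for _ in range(n_attrs)],
             1: [rng.random() for _ in range(n_attrs)]}
    adj = {0: {1}, 1: {0}}                              # seed network: a single edge
    for new in range(2, n_nodes):
        attrs[new] = [rng.random() for _ in range(n_attrs)]
        adj[new] = set()
        total_degree = sum(len(neigh) for neigh in adj.values())
        scores = {old: alpha * len(adj[old]) / total_degree
                       + (1 - alpha) * cosine_similarity(attrs[new], attrs[old])
                  for old in adj if old != new}
        targets = rng.choices(list(scores), weights=list(scores.values()),
                              k=min(m_links, len(scores)))
        for old in set(targets):
            adj[new].add(old)
            adj[old].add(new)
            if rng.random() < triad_prob:               # triad formation step
                candidates = list(adj[old] - adj[new] - {new})
                if candidates:
                    tri = rng.choice(candidates)
                    adj[new].add(tri)
                    adj[tri].add(new)
    return adj, attrs

adj, attrs = grow_network()
degrees = sorted(len(neigh) for neigh in adj.values())
print("max degree:", degrees[-1], "median degree:", degrees[len(degrees) // 2])
```

The returned adjacency map can then be inspected for the degree distribution and average clustering coefficient that the abstract discusses.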
42

Mathematical modeling of macronutrient signaling in Saccharomyces cerevisiae

Jalihal, Amogh Prabhav 08 July 2020 (has links)
In eukaryotes, distinct nutrient signals are integrated in order to produce robust cellular responses to fluctuations in the environment. This process of signal integration is attributed to the crosstalk between nutrient-specific signaling pathways, as well as the large degree of overlap between their regulatory targets. In the budding yeast Saccharomyces cerevisiae, these distinct pathways have been well characterized. However, the significant overlap between these pathways confounds the interpretation of the overall regulatory logic in terms of nutrient-dependent cell state determination. Here, we propose a literature-curated molecular mechanism of the integrated nutrient signaling pathway in budding yeast, focussing on carbon and nitrogen signaling. We build a computational model of this pathway to reconcile the available experimental data with our proposed molecular mechanism. We evaluate the robustness of the model fit to data with respect to variations in the values of the kinetic parameters used to calibrate the model. Finally, we use the model to make novel, experimentally testable predictions of transcription factor activities in mutant strains undergoing complex nutrient shifts. We also propose a novel framework, called BoolODE, for using published Boolean models to generate synthetic datasets that benchmark the performance of algorithms that infer gene regulatory networks from single-cell RNA sequencing data. / Doctor of Philosophy / An important problem in biology is how organisms sense and adapt to ever changing environments. A good example of an environmental cue that affects animal behavior is the availability of food; scarcity of food forces animals to search for food-rich habitats, or go into hibernation. At the level of single cells, a range of behaviors are observed depending on the amount of food, or nutrients present in the environment. Moreover, different types of nutrients are important for different biological functions in single cells, and each different nutrient type will have to be available in the right quantities to support cellular growth. At the subcellular level, intricate molecular machineries exist which sense the amounts of each nutrient type, and interpret this information in order to make a decision on how best to respond. This interpretation and integration of nutrient information is a complex, poorly understood process even in a simple unicellular organism like the budding yeast. In order to understand this process, termed nutrient signaling, we propose a mathematical model of how yeasts respond to nutrient availability in the environment. Our model advances the state of knowledge by presenting the first comprehensive mathematical model of the nutrient signaling machinery, accounting for a variety of experimental observations from the last three decades of yeast nutrient signaling. We use our model to make predictions on how yeasts might behave when supplied with different combinations of nutrients, which can be verified by experiments. Finally, the cellular machinery that helps yeasts respond to nutrient availability in the environment is very similar to the machinery in cancer cells that causes them to grow rapidly. Our proposed model can serve as a stepping stone towards the construction of a model of cancer's responses to its nutritional environment.
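The pathway model and the BoolODE framework are described only at a high level here. As a minimal sketch of the general idea of turning Boolean-style regulatory logic into ODEs and simulating a nutrient shift, the toy model below uses Hill functions as soft switches. The two-node network, rate forms, and parameter values are invented for illustration and are not the thesis model or the published BoolODE code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hill(x, k=0.5, n=4):
    # Soft, continuous stand-in for a Boolean "ON" state.
    return x**n / (k**n + x**n)

def toy_nutrient_ode(t, y, glucose, nitrogen):
    """Toy two-transcription-factor network: TF_A is activated by glucose,
    TF_B is activated by nitrogen but repressed by TF_A (a stand-in for crosstalk)."""
    tf_a, tf_b = y
    d_tf_a = hill(glucose) - tf_a
    d_tf_b = hill(nitrogen) * (1.0 - hill(tf_a)) - tf_b
    return [d_tf_a, d_tf_b]

def rhs(t, y):
    # Nutrient shift: glucose is withdrawn at t = 5 while nitrogen stays available.
    glucose = 1.0 if t < 5.0 else 0.0
    return toy_nutrient_ode(t, y, glucose=glucose, nitrogen=1.0)

sol = solve_ivp(rhs, (0.0, 15.0), [0.0, 0.0], t_eval=np.linspace(0.0, 15.0, 151))
print("TF activities after the shift:", sol.y[:, -1])
```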
43

Validating Forecasting Strategies of Simple Epidemic Models on the 2015-2016 Zika Epidemic

Puglisi, Nicolas Leonardo 14 May 2024 (has links)
Accurate forecasting of infectious disease outbreaks is vital for safeguarding global health and the well-being of individuals. Model-based forecasts enable public health officials to test what-if scenarios, evaluate control strategies, and develop informed policies to allocate resources effectively. Model selection is a pivotal aspect of creating dependable forecasts for infectious diseases. This thesis delves into validating forecasts of simple epidemic models. We use incidence data from the 2015-2016 Zika virus outbreak in Antioquia, Colombia, to assess what model features result in accurate forecasts. We employed the Parametric Bootstrapping and Ensemble Kalman Filter methods to assimilate data and then generated 14-day-ahead forecasts throughout the epidemic across five case studies. We visualized each forecast to show the training/testing split in data and associated prediction intervals. Forecasting accuracy was evaluated using five statistical performance metrics. Early into the epidemic, phenomenological models - like the generalized logistic model - resulted in more accurate forecasts. However, as the epidemic progressed, the mechanistic model incorporating disease latency outperformed its counterparts. While modeling disease transmission mechanisms is crucial for accurate Zika incidence forecasting, additional data is needed to make these models more reliable and precise. / Master of Science / Accurate forecasting of infectious disease outbreaks is vital for safeguarding global health and the well-being of individuals. Model-based forecasts enable public health officials to test what-if scenarios, evaluate control strategies, and develop informed policies to allocate resources effectively. Model selection is a pivotal aspect of creating dependable forecasts for infectious diseases. This thesis delves into validating forecasts of simple epidemic models. We use data from the 2015-2016 Zika virus outbreak in Antioquia, Colombia, to assess what model features result in accurate forecasts. We considered two techniques to generate 14-day-ahead forecasts throughout the epidemic across five case studies. We visualized each forecast and evaluated model accuracy. Early into the epidemic, simple growth models resulted in more accurate forecasts. However, as the epidemic progressed, the model incorporating disease-specific characteristics outperformed its counterparts. While modeling disease transmission is crucial for accurate epidemic forecasting, additional data is needed to make these models more reliable and precise.
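A minimal sketch of one of the phenomenological models mentioned above, the generalized logistic growth model dC/dt = r C^p (1 - C/K), fitted to a synthetic cumulative-incidence curve and extended 14 days ahead. The data, initial guesses, and bounds are placeholders, not the Antioquia incidence data or the thesis forecasting pipeline (which also used parametric bootstrapping and an ensemble Kalman filter).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def glm_cumulative(t, r, p, K, c0):
    """Cumulative cases C(t) from the generalized logistic growth model
    dC/dt = r * C**p * (1 - C/K), integrated numerically from C(0) = c0."""
    sol = solve_ivp(lambda _t, c: [r * c[0]**p * (1.0 - c[0] / K)],
                    (0.0, float(t[-1])), [c0], t_eval=t, rtol=1e-6)
    return sol.y[0]

# Synthetic cumulative-incidence curve standing in for real surveillance data.
days = np.arange(0, 40, dtype=float)
cases = 5.0 * np.exp(0.15 * days) / (1.0 + 0.01 * np.exp(0.15 * days))

popt, _ = curve_fit(glm_cumulative, days, cases,
                    p0=[0.2, 0.9, 2.0 * cases[-1], cases[0]],
                    bounds=([0.0, 0.0, cases[-1], 0.1], [5.0, 1.0, 1e6, 50.0]))

# 14-day-ahead forecast from the calibrated model.
horizon = np.arange(0, days[-1] + 15, dtype=float)
forecast = glm_cumulative(horizon, *popt)
print("forecast cumulative cases, days 40-53:", np.round(forecast[-14:], 1))
```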
44

Modelling of an industrial naphtha isomerization reactor and development and assessment of a new isomerization process

Ahmed, A.M., Jarullah, A.T., Abed, F.M., Mujtaba, Iqbal 30 June 2018 (has links)
Yes / Naphtha isomerization is an important process in the petroleum industry, and it has to be a simple and cost-effective technology for producing clean fuel with a high gasoline octane number. In this work, based on real industrial data, a detailed process model is developed for an existing naphtha isomerization reactor of the Baiji North Refinery (BNR) of Iraq, which involves estimation of the kinetic parameters of the reactor. The optimal values of the kinetic parameters are estimated by minimizing the sum of squared errors between the predicted and the experimental data of BNR. Finally, a new isomerization process (named the AJAM process) is proposed and, using the reactor model developed earlier, the reactor conditions are optimized to maximize the yield and research octane number (RON) of the reactor.
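A minimal sketch of the parameter-estimation step described above: kinetic constants of a toy first-order reversible isomerization are estimated by minimizing the sum of squared errors between model predictions and measurements. The reaction scheme, measurements, and bounds are illustrative placeholders, not the BNR reactor model or its data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Illustrative "plant" measurements: iso-paraffin fraction at several space times.
space_time = np.array([0.5, 1.0, 2.0, 4.0])        # arbitrary units
measured_iso = np.array([0.18, 0.30, 0.45, 0.58])

def reactor_model(k, tau_points):
    """Toy first-order reversible isomerization n-paraffin <-> iso-paraffin in a
    plug-flow reactor; k = (k_forward, k_reverse); returns iso-paraffin fraction."""
    kf, kr = k
    def rhs(tau, y):
        n_p, i_p = y
        rate = kf * n_p - kr * i_p
        return [-rate, rate]
    sol = solve_ivp(rhs, (0.0, float(tau_points[-1])), [1.0, 0.0],
                    t_eval=tau_points, rtol=1e-8)
    return sol.y[1]

def sse(k):
    # Objective: sum of squared errors between model predictions and measurements.
    return float(np.sum((reactor_model(k, space_time) - measured_iso) ** 2))

result = minimize(sse, x0=[0.3, 0.1], bounds=[(1e-6, 10.0), (1e-6, 10.0)])
print("estimated (k_forward, k_reverse):", result.x, "SSE:", result.fun)
```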
45

Optimisation of several industrial and recently developed AJAM naphtha isomerization processes using model based techniques

Jarullah, A.T., Abed, F.M., Ahmed, A.M., Mujtaba, Iqbal 24 April 2019 (has links)
Yes / Increasing the yield and research octane number (RON) of the naphtha isomerization process is one of the most important issues in industry. There are many alternative industrial naphtha isomerization processes practiced around the world. In addition, AJAM is a new naphtha isomerization process proposed by the authors recently (Ahmed et al., 2018), where the isomerization reactor model was validated using real data from the Baiji North Refinery (BNR) of Iraq. In this work, first, the performance of the AJAM process is evaluated against 8 existing industrial isomerization processes in terms of RON, yield and cost using model-based optimisation techniques. To be consistent, we have used the same isomerization reactor model in all the industrial processes evaluated here. Secondly, energy-saving opportunities in the new AJAM process are studied using pinch technology.
46

Optimal operation of a pyrolysis reactor

Jarullah, Aysar Talib, Hameed, S.A., Hameed, Z.A., Mujtaba, Iqbal January 2015 (has links)
No / In the present study, the problem of optimizing thermal cracker (pyrolysis) operation is discussed. The main objective in thermal cracker optimization is the estimation of the optimal flow rates of different feeds (such as gas oil, propane, ethane and debutanized natural gasoline) to the cracking furnace under restrictions on ethylene and propylene production. Thousands of combinations of feeds are possible; hence the optimization needs an efficient strategy for searching for the global optimum. The optimization problem consists of maximizing the economic profit subject to a number of equality and inequality constraints. Modelling, simulation and optimal operation via optimization of the thermal cracking reactor have been carried out in gPROMS (general PROcess Modelling System) software. The optimization problem is posed as a Non-Linear Programming (NLP) problem and solved using a Successive Quadratic Programming (SQP) method for constrained nonlinear optimization with high accuracy within the gPROMS software. New results have been obtained for the control variables and optimal cost of the cracker in comparison with previous studies.
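A minimal sketch of the optimization formulation described above: feed flow rates are chosen to maximize profit subject to production and capacity constraints, solved here with an SQP-type method (SLSQP in SciPy rather than gPROMS). All prices, yields, targets, and limits are invented placeholders, not plant data.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variables: flow rates of four feeds (gas oil, propane, ethane,
# debutanized natural gasoline). All coefficients below are invented placeholders.
profit_per_unit = np.array([4.0, 2.5, 2.0, 3.0])      # profit contribution per unit feed
ethylene_yield  = np.array([0.25, 0.35, 0.45, 0.30])  # ethylene produced per unit feed
propylene_yield = np.array([0.15, 0.18, 0.02, 0.12])  # propylene produced per unit feed

def neg_profit(f):
    # SLSQP minimizes, so the profit objective is negated.
    return -float(profit_per_unit @ f)

constraints = [
    {"type": "eq",   "fun": lambda f: float(ethylene_yield @ f) - 40.0},   # ethylene target
    {"type": "ineq", "fun": lambda f: float(propylene_yield @ f) - 12.0},  # >= 12 propylene
    {"type": "ineq", "fun": lambda f: 150.0 - float(np.sum(f))},           # furnace capacity
]
bounds = [(0.0, 100.0)] * 4                             # per-feed availability limits

result = minimize(neg_profit, x0=[20.0, 20.0, 20.0, 20.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("optimal feed rates:", np.round(result.x, 2), "profit:", -result.fun)
```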
47

Immunoepidemiological Modeling of Dengue Viral Infection

Nikin-Beers, Ryan Patrick 25 April 2018 (has links)
Dengue viral infection is a mosquito-borne disease with four distinct strains, where the interactions between these strains have implications on the severity of the disease outcomes. The two competing hypotheses for the increased severity during secondary infections are antibody dependent enhancement and original antigenic sin. Antibody dependent enhancement suggests that long-lived antibodies from primary infection remain during secondary infection but do not neutralize the virus. Original antigenic sin proposes that T cells specific to primary infection dominate cellular immune responses during secondary infections, but are inefficient at clearing cells infected with non-specific strains. To analyze these hypotheses, we developed within-host mathematical models. In previous work, we predicted a decreased non-neutralizing antibody effect during secondary infection. Since this effect accounts for decreased viral clearance and the virus is in quasi-equilibrium with infected cells, we could be accounting for reduced cell killing and the original antigenic sin hypothesis. To further understand these interactions, we develop a model of T cell responses to primary and secondary dengue virus infections that considers the effect of T cell cross-reactivity in disease enhancement. We fit the models to published patient data and show that the overall infected cell killing is similar in dengue heterologous infections, resulting in dengue fever and dengue hemorrhagic fever. The contribution to overall killing, however, is dominated by non-specific T cell responses during the majority of secondary dengue hemorrhagic fever cases. By contrast, more than half of secondary dengue fever cases have predominant strain-specific T cell responses. These results support the hypothesis that cross-reactive T cell responses occur mainly during severe disease cases of heterologous dengue virus infections. Finally, using the results from our within-host models, we develop a multiscale model of dengue viral infection which couples the within-host virus dynamics to the population level dynamics through a system of partial differential equations. We analytically determine the relationship between the model parameters and the characteristics of the solutions, and find thresholds under which infections persist in the population. Furthermore, we develop and implement a full numerical scheme for our model. / Ph. D.
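A minimal sketch of a generic within-host viral dynamics model of the kind referred to above: target cells, infected cells, virus, and an effector T-cell compartment whose killing term can stand in for strain-specific or cross-reactive responses. The equations and parameter values are illustrative assumptions, not the fitted models from the dissertation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def within_host(t, y, beta, delta, kappa, p, c, r):
    """Generic target-cell-limited model with an effector T-cell compartment.
    T: target cells, I: infected cells, V: free virus, E: effector T cells."""
    T, I, V, E = y
    dT = -beta * T * V
    dI = beta * T * V - delta * I - kappa * I * E   # kappa*I*E: T-cell-mediated killing
    dV = p * I - c * V
    dE = r * I * E                                   # proliferation driven by infected cells
    return [dT, dI, dV, dE]

params = dict(beta=1e-10, delta=0.5, kappa=1e-6, p=1e4, c=5.0, r=1e-6)
y0 = [1e7, 0.0, 10.0, 100.0]                         # illustrative initial conditions
sol = solve_ivp(lambda t, y: within_host(t, y, **params),
                (0.0, 14.0), y0, t_eval=np.linspace(0.0, 14.0, 141))
print("peak log10 viral load:", round(float(np.log10(sol.y[2].max())), 2))
```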
48

Glycerol production in Plasmodium falciparum : towards a detailed kinetic model

Adams, Waldo Wayne 04 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Having caused the deaths of more than 10 million individuals since 2000 with most of them occurring in Africa, malaria remains a serious disease that requires undivided attention. To this end a detailed kinetic model of Plasmodium falciparum glycolysis was constructed, validated and used to determine potential drug targets for the development of novel, effective antimalarial therapies. The kinetic model described the behaviour of the glycolytic enzymes with a set of ordinary differential equations that was solved to obtain the steady state fluxes and concentrations of internal metabolites. The model included a glycerol branch represented in a single fitted equation. The present study set out to detect, characterise, and incorporate into the model the enzymes that constitute the glycerol branch of P. falciparum glycolysis. The kinetic parameters of glycerol 3-phosphate dehydrogenase (G3PDH), the first enzyme in the branch and catalyst of the dihydroxyacetone phosphate (DHAP) reducing reaction, were determined and added to the detailed kinetic model. The model was subsequently validated by comparing its prediction of steady state fluxes with experimentally measured fluxes. Once it was evident that the predictions of the unfitted model agreed with experimentally measured fluxes, metabolic control analysis was performed on this branched system to ascertain the distribution of control over the steady state flux through the glycerol branch. The control G3PDH exercised over its own flux was less than expected due to the enzyme's sensitivity to changes in NADH and thus the redox balance of the cell. Attempts were made to detect the enzymes responsible for the conversion of glycerol 3-phosphate (G3P) to glycerol. Very low levels of glycerol kinase activity were observed. Although G3P-dependent release of inorganic phosphate was detected, results were inconclusive as to whether a non-specific phosphatase also mediated the conversion. Overall, the expansion of the model to include G3PDH did not affect the steady state metabolite concentrations and flux adversely. / AFRIKAANSE OPSOMMING: Since the year 2000 malaria has caused the deaths of more than 10 million people, most of them in Africa, an indication of how serious a disease it is and one that must receive undivided attention. For this reason a detailed kinetic model of glycolysis in Plasmodium falciparum was built, validated and used to identify potential drug targets for the development of new, more effective antimalarial therapies. The kinetic model describes the behaviour of the glycolytic enzymes in terms of ordinary differential equations that were solved to determine the steady state fluxes and internal metabolite concentrations. The model includes a glycerol branch represented by a single fitted equation. This study set out to identify, characterise and incorporate into the model the enzymes of the glycerol branch of P. falciparum glycolysis. We determined the kinetic parameters of the first enzyme in the glycerol branch, glycerol 3-phosphate dehydrogenase (G3PDH), the catalyst of the dihydroxyacetone phosphate (DHAP) reducing reaction. The kinetic parameters were added to the detailed model. Validation was performed by comparing the model's predictions with experimentally determined values. 
Once it became clear that the model's predictions agreed with the experimentally determined flux, metabolic control analysis was performed on the branched system. This was done to establish how the steady state flux through the glycerol branch is controlled. Contrary to our expectations, G3PDH does not have full control over its own flux. Attempts were made to establish which enzymes are responsible for the production of glycerol from glycerol 3-phosphate (G3P). A low glycerol kinase activity was observed. Although G3P-dependent release of inorganic phosphate was observed, the results do not make clear whether the process is carried out by a non-specific phosphatase. Extending the model to include a G3PDH equation did not negatively affect the steady state metabolite concentrations and flux.
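A minimal sketch of how an enzyme such as G3PDH can enter a kinetic model as a reversible rate equation, and of its sensitivity to the NADH/NAD+ redox ratio. The rate-law form and all kinetic constants below are placeholders, not the parameters determined in this study.

```python
def v_g3pdh(dhap, nadh, g3p, nad,
            vmax=1.0, k_dhap=0.1, k_nadh=0.02, k_g3p=0.5, k_nad=0.4, keq=4000.0):
    """Reversible Michaelis-Menten-type rate for DHAP + NADH <-> G3P + NAD+.
    All kinetic constants here are placeholders, not measured parameters."""
    numerator = (vmax / (k_dhap * k_nadh)) * (dhap * nadh - g3p * nad / keq)
    denominator = (1 + dhap / k_dhap + g3p / k_g3p) * (1 + nadh / k_nadh + nad / k_nad)
    return numerator / denominator

# Sensitivity of the rate to the cellular redox state (NADH/NAD+ ratio), the property
# the abstract points to when explaining why G3PDH has little control over its own flux.
for nadh in (0.01, 0.05, 0.20):
    rate = v_g3pdh(dhap=0.2, nadh=nadh, g3p=0.1, nad=1.0 - nadh)
    print(f"NADH = {nadh:.2f} mM  ->  v_G3PDH = {rate:.3f} (arbitrary units)")
```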
49

Signal processing methods for the analysis of cerebral blood flow and metabolism

Tingying, Peng January 2009 (has links)
An important protective feature of the cerebral circulation is its ability to maintain sufficient cerebral blood flow (CBF) and oxygen supply in accordance with the energy demands of the brain despite variations in a number of external factors such as arterial blood pressure, heart rate and respiration rate. If cerebral autoregulation is impaired, abnormally low or high CBF can lead to cerebral ischemia, intracranial hypertension or even capillary damage, thus contributing to the onset of cerebrovascular events. The control and regulation of cerebral blood flow is a dynamic, multivariate phenomenon. Sensitive techniques are required to monitor and process experimental data concerning cerebral blood flow and metabolic rate in a clinical setting. This thesis presents a model simulation study and four related signal processing studies concerned with CBF regulation. The first study models the response of the cerebral vasculature to systemic changes in blood pressure, dissolved blood gas concentration and neural activation in an integrated haemodynamic system. The model simulations show that the three pathways which are generally thought to be independent (pressure, CO₂ and activation) greatly influence each other; it is therefore vital to consider parallel changes in unmeasured variables when performing a single-pathway study. The second study shows how simultaneously measured blood gas concentration fluctuations can improve the accuracy of an existing frequency domain technique for recovering cerebral autoregulation dynamics from spontaneous fluctuations in blood pressure and cerebral blood flow velocity. The third study shows how the continuous wavelet transform can recover both time and frequency information about dynamic autoregulation, including the contribution of blood gas concentration. The fourth study shows how the discrete wavelet transform can be used to investigate frequency-dependent coupling between cerebral and systemic cardiovascular dynamics. The final study then uses these techniques to investigate the systemic effects on resting BOLD variability. The general approach taken in this thesis is a combined analysis of both modelling and data analysis. Physiologically-based models encapsulate hypotheses about features of CBF regulation, particularly those features that may be difficult to recover using existing analysis methods, and thus provide the motivation for developing both new analysis methods and criteria to evaluate these methods. On the other hand, the statistical features extracted directly from experimental data can be used to validate and improve the model.
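The thesis develops wavelet-based, time-resolved methods; as a simpler frequency-domain illustration of the underlying idea, the sketch below estimates transfer-function gain and phase between arterial blood pressure and cerebral blood flow velocity from synthetic signals using Welch spectra. The signals, sampling rate, and band limits are illustrative assumptions, not the thesis data or its wavelet analysis.

```python
import numpy as np
from scipy.signal import welch, csd

# Synthetic 10-minute recordings sampled at 1 Hz: arterial blood pressure (ABP)
# and cerebral blood flow velocity (CBFV). Real analyses would use measured signals.
fs, n = 1.0, 600
rng = np.random.default_rng(0)
t = np.arange(n) / fs
abp = 90 + 5 * np.sin(2 * np.pi * 0.03 * t) + rng.normal(0, 1, n)
cbfv = 60 + 2 * np.sin(2 * np.pi * 0.03 * t + 0.8) + rng.normal(0, 1, n)

# Transfer function H(f) = S_xy(f) / S_xx(f); gain and phase in the very-low-frequency
# band (roughly 0.02-0.07 Hz) are conventional indices of dynamic cerebral autoregulation.
f, s_xx = welch(abp, fs=fs, nperseg=256)
_, s_xy = csd(abp, cbfv, fs=fs, nperseg=256)
H = s_xy / s_xx
band = (f >= 0.02) & (f <= 0.07)
print("VLF gain:", np.abs(H[band]).mean(), " VLF phase (rad):", np.angle(H[band]).mean())
```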
50

Motion correction and parameter estimation in DCE-MRI sequences : application to colorectal cancer

Bhushan, Manav January 2014 (has links)
Cancer is one of the leading causes of premature deaths across the world today, and there is an urgent need for imaging techniques that can help in early diagnosis and treatment planning for cancer patients. In the last four decades, magnetic resonance imaging (MRI) has emerged as one of the leading modalities for non-invasive imaging of tumours. By using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), this modality can be used to acquire information about perfusion and vascularity of tumours, which can help in predicting response to treatment. There are many factors that complicate the analysis of DCE-MRI data, and make clinical predictions based on it unreliable. During data acquisition, there are many sources of uncertainties and errors, especially patient motion, which result in the same image position being representative of many different anatomical locations across time. Apart from motion, there are also other inherent uncertainties and noise associated with the measurement of DCE-MRI parameters, which contribute to the model-fitting error observed when trying to apply pharmacokinetic (PK) models to the data. In this thesis, a probabilistic, model-based registration and parameter estimation (MoRPE) framework for motion correction and PK-parameter estimation in DCE-MRI sequences is presented. The MoRPE framework is first compared with conventional motion correction methods on simulated data, and then applied to data from a clinical trial involving twenty colorectal cancer patients. On clinical data, the ability of MoRPE to discriminate between responders and non-responders to combined chemo- and radiotherapy is tested, and found to be superior to other methods. The effect of incorporating different arterial input functions within MoRPE is also assessed. Following this, a quantitative analysis of the uncertainties associated with the different PK parameters is performed using a variational Bayes mathematical framework. This analysis provides a quantitative estimate of the extent to which motion correction affects the uncertainties associated with different parameters. Finally, the importance of estimating spatial heterogeneity of PK parameters within tumours is assessed. The efficacy of different measures of spatial heterogeneity in predicting response to therapy based on the pre-therapy scan alone is compared, and the prognostic value of a new derived PK parameter, the 'acceleration constant', is investigated. The integration of uncertainty estimates of different DCE-MRI parameters into the calculation of their heterogeneity measures is also shown to improve the prediction of response to therapy.
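The abstract does not name the pharmacokinetic model; a common choice in DCE-MRI analysis is the standard Tofts model, and the sketch below fits it to a synthetic tissue concentration curve by discrete convolution with an assumed arterial input function. All curves and parameter values are synthetic placeholders, not the clinical-trial data or the MoRPE framework.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, aif):
    """Standard Tofts model: Ct(t) = Ktrans * (Cp convolved with exp(-Ktrans/ve * t)),
    evaluated by discrete convolution on a uniform time grid (minutes)."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

t = np.arange(0, 5, 1.0 / 30.0)                      # 10-second sampling over 5 minutes
aif = 5.0 * t * np.exp(-t / 0.25) + 1.0 * (1 - np.exp(-t / 2.0))    # toy arterial input
rng = np.random.default_rng(1)
noisy_ct = tofts(t, 0.25, 0.4, aif) + rng.normal(0, 0.005, t.size)  # synthetic tissue curve

popt, pcov = curve_fit(lambda tt, ktrans, ve: tofts(tt, ktrans, ve, aif),
                       t, noisy_ct, p0=[0.1, 0.2], bounds=([1e-4, 1e-3], [2.0, 1.0]))
print("fitted Ktrans, ve:", np.round(popt, 3), "std:", np.round(np.sqrt(np.diag(pcov)), 4))
```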
