  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Using graphical models to investigate phenotypic networks involving polygenic traits / O uso de modelos gráficos para investigar redes fenotípicas envolvendo características poligênicas

Pinto, Renan Mercuri 28 March 2018 (has links)
Understanding the causal architecture underlying complex systems biology is of great value in agricultural production for the development of optimal management strategies and selective breeding. So far, most studies in this area use only prior knowledge to propose causal networks and/or do not account for possible genetic confounding factors in the structure search, which may hide important relationships among phenotypes and bias the inferred causal network. In this dissertation, we explore several structure learning algorithms and present a new one, called PolyMaGNet (Polygenic traits with Major Genes Network analysis), to search for recursive causal structures involving complex phenotypic traits with polygenic inheritance, while also allowing for the possibility of major genes affecting the traits. Briefly, a multiple-trait animal mixed model is fitted using a Bayesian approach with major genes as covariates. Next, posterior samples of the residual covariance matrix are used as input to the Inductive Causation algorithm to search for putative causal structures, which are compared to each other using the Akaike information criterion. The performance of PolyMaGNet was evaluated and compared with another widely used approach in a simulation study based on a QTL mapping population. Results showed that, in the presence of major genes, our method recovered the true skeleton structure as well as the causal directions with a higher rate of true positives. The PolyMaGNet approach was also applied to a real dataset from an F2 Duroc × Pietrain pig resource population to recover the causal structure underlying carcass, meat quality and chemical composition traits. Results corroborated the literature regarding the cause-effect relationships between these traits and also provided new insights into phenotypic causal networks and their genetic architecture in complex systems biology. 
/ Compreender a arquitetura causal subjacente a sistemas biológicos complexos é de grande valia na produção agrícola para o desenvolvimento de estratégias de manejo e seleção genética. Até o momento, a maior parte dos estudos neste contexto utiliza apenas conhecimento prévio para propor redes causais e/ou não considera fatores de confundimento genético na busca de estruturas, fato que pode ocultar relações importantes entre os fenótipos e enviesar inferências sobre a rede causal. Nesta tese, exploramos alguns algoritmos de aprendizagem de estruturas e apresentamos um novo, chamado PolyMaGNet (do inglês, Polygenic traits with Major Genes Network analysis), para buscar estruturas causais recursivas entre características fenotípicas poligênicas complexas, permitindo, também, a possibilidade de efeitos de genes maiores que as afetam. Resumidamente, um modelo misto de múltiplas características é ajustado usando abordagem Bayesiana considerando os genes maiores como covariáveis no modelo. Em seguida, amostras posteriores da matriz de covariância residual são usadas como entrada para o algoritmo de causação indutiva para pesquisar estruturas causais putativas, as quais são comparadas usando o critério de informação de Akaike. O desempenho do PolyMaGNet foi avaliado e comparado com outra abordagem bastante utilizada por meio de um estudo simulado considerando uma população de mapeamento de QTL. Os resultados mostraram que, na presença de genes maiores, o método PolyMaGNet recuperou a verdadeira estrutura do esqueleto, bem como as direções causais, com uma taxa de efetividade maior. O método é ilustrado também utilizando-se um conjunto de dados reais de uma população de suínos F2 Duroc × Pietrain para recuperar a estrutura causal subjacente a características fenotípicas relacionadas à qualidade da carcaça, da carne e à composição química. 
Os resultados corroboraram a literatura sobre as relações de causa-efeito entre os fenótipos e também forneceram novos conhecimentos sobre a rede fenotípica e sua arquitetura genética.
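The core move of the structure search described above — keeping or dropping an edge between two traits depending on whether conditioning on a third trait renders them independent — can be sketched with partial correlations computed from a residual covariance matrix. This is an illustrative fragment of an IC/PC-style skeleton step, not the PolyMaGNet implementation; the covariance values below are made up to encode a simple chain t0 → t1 → t2.

```python
import math

# Hypothetical residual covariance among three traits (illustrative numbers):
# a chain t0 -> t1 -> t2, so t0 and t2 are correlated only through t1.
cov = [[1.00, 0.80, 0.64],
       [0.80, 1.00, 0.80],
       [0.64, 0.80, 1.00]]

def corr(i, j):
    return cov[i][j] / math.sqrt(cov[i][i] * cov[j][j])

def partial_corr(i, j, k):
    """Correlation of traits i and j after removing the effect of trait k."""
    num = corr(i, j) - corr(i, k) * corr(j, k)
    den = math.sqrt((1 - corr(i, k) ** 2) * (1 - corr(j, k) ** 2))
    return num / den

def has_edge(i, j, eps=1e-6):
    """Skeleton step: drop the edge i-j if some conditioning trait k
    makes i and j (near-)independent, as in IC/PC-style searches."""
    others = [k for k in range(3) if k not in (i, j)]
    return all(abs(partial_corr(i, j, k)) > eps for k in others)

print(has_edge(0, 1), has_edge(1, 2), has_edge(0, 2))
```

With this covariance, the direct edges t0–t1 and t1–t2 survive, while t0–t2 vanishes once t1 is conditioned on, recovering the chain's skeleton.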
62

Desenvolvimento de um método para diagnose de falhas na operação de navios transportadores de gás natural liquefeito através de redes bayesianas. / Development of a method for fault diagnosis in liquefied natural gas carrier ships using Bayesian networks.

Melani, Arthur Henrique de Andrade 18 August 2015 (has links)
O Gás Natural Liquefeito (GNL) tem, aos poucos, se tornado uma importante opção para a diversificação da matriz energética brasileira. Os navios metaneiros são os responsáveis pelo transporte do GNL desde as plantas de liquefação até as de regaseificação. Dada a importância, bem como a periculosidade, das operações de transporte e de carga e descarga de navios metaneiros, torna-se necessário não só um bom plano de manutenção como também um sistema de detecção de falhas que podem ocorrer durante estes processos. Este trabalho apresenta um método de diagnose de falhas para a operação de carga e descarga de navios transportadores de GNL através da utilização de Redes Bayesianas em conjunto com técnicas de análise de confiabilidade, como a Análise de Modos e Efeitos de Falhas (FMEA) e a Análise de Árvores de Falhas (FTA). O método proposto indica, através da leitura de sensores presentes no sistema de carga e descarga, quais os componentes que mais provavelmente estão em falha. O método fornece uma abordagem bem estruturada para a construção das Redes Bayesianas utilizadas na diagnose de falhas do sistema. / Liquefied Natural Gas (LNG) has gradually become an important option for the diversification of the Brazilian energy matrix. LNG carriers are responsible for LNG transportation from the liquefaction plant to the regasification plant. Given the importance, as well as the risk, of the transportation and loading/unloading operations of LNG carriers, not only is a good maintenance plan needed, but also a fault detection system that localizes the origin of any failure that may occur during these processes. This research presents a fault diagnosis method for the loading and unloading operations of LNG carriers through the use of Bayesian networks together with reliability analysis techniques, such as Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA). 
The proposed method indicates, by reading the sensors present in the loading and unloading system, which components are most likely faulty. The method provides a well-structured approach for the development of the Bayesian networks used in the diagnosis of system failures.
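The inference step — reading sensor evidence and ranking which components are most likely in a failed state — can be illustrated with a tiny discrete Bayesian network evaluated by exhaustive enumeration. The components, failure rates, and alarm probabilities below are invented for illustration (the kind of numbers an FMEA would supply), not taken from the thesis.

```python
import itertools

# Toy network: two components, one pressure alarm that fires if either fails.
prior = {"pump": 0.02, "valve": 0.05}  # hypothetical failure rates
p_alarm = {  # P(alarm | pump_failed, valve_failed)
    (False, False): 0.01,
    (True, False): 0.90,
    (False, True): 0.70,
    (True, True): 0.99,
}

def posterior_fault(alarm=True):
    """P(each component failed | alarm reading) by exhaustive enumeration."""
    joint = {}
    for pump, valve in itertools.product([False, True], repeat=2):
        p = (prior["pump"] if pump else 1 - prior["pump"]) \
          * (prior["valve"] if valve else 1 - prior["valve"]) \
          * (p_alarm[(pump, valve)] if alarm else 1 - p_alarm[(pump, valve)])
        joint[(pump, valve)] = p
    z = sum(joint.values())
    return {
        "pump": sum(p for (pu, _), p in joint.items() if pu) / z,
        "valve": sum(p for (_, va), p in joint.items() if va) / z,
    }

post = posterior_fault(alarm=True)
# The valve, with the higher prior failure rate, comes out as the more
# likely culprit despite its weaker link to the alarm.
print(max(post, key=post.get))
```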
63

A data-driven solution for root cause analysis in cloud computing environments. / Uma solução guiada por dados para análise de causa raiz em ambientes de computação em nuvem.

Pereira, Rosangela de Fátima 05 December 2016 (has links)
The analysis and resolution of failures in cloud-computing environments is a highly important issue, its primary motivation being to mitigate the impact of such failures on applications hosted in these environments. Although there have been advances in the immediate detection of failures, there is a lack of research on root cause analysis of failures in cloud computing. In this process, failures are tracked to analyze their causal factors. This practice allows cloud operators to act more effectively in preventing failures, reducing the number of recurring failures. Although this practice is commonly performed through human intervention, based on the expertise of professionals, the complexity of cloud-computing environments, coupled with the large volume of data from log records generated in these environments and the wide interdependence between system components, has made manual analysis impractical. Therefore, scalable solutions are needed to automate the root cause analysis process in cloud-computing environments, allowing the analysis of large data sets with satisfactory performance. Based on these requirements, this thesis presents a data-driven solution for root cause analysis in cloud-computing environments. The proposed solution includes the functionality required for the collection, processing and analysis of data, as well as a method based on Bayesian networks for the automatic identification of root causes. The proposal is validated through a proof of concept using OpenStack, a framework for cloud-computing infrastructure, and Hadoop, a framework for distributed processing of large data volumes. The tests showed satisfactory performance, and the developed model correctly classified the root causes with a low rate of false positives. 
/ A análise e reparação de falhas em ambientes de computação em nuvem é uma questão amplamente pesquisada, tendo como principal motivação minimizar o impacto que tais falhas podem causar nas aplicações hospedadas nesses ambientes. Embora exista um avanço na área de detecção imediata de falhas, ainda há percalços para realizar a análise de sua causa raiz. Nesse processo, as falhas são rastreadas a fim de analisar o seu fator causal ou seus fatores causais. Essa prática permite que operadores da nuvem possam atuar de modo mais efetivo na prevenção de falhas, reduzindo-se o número de falhas recorrentes. Embora essa prática seja comumente realizada por meio de intervenção humana, com base no expertise dos profissionais, a complexidade dos ambientes de computação em nuvem, somada ao grande volume de dados oriundos de registros de log gerados nesses ambientes e à ampla interdependência entre os componentes do sistema, tem tornado a análise manual inviável. Por esse motivo, tornam-se necessárias soluções que permitam automatizar o processo de análise de causa raiz de uma falha ou conjunto de falhas em ambientes de computação em nuvem, e que sejam escaláveis, viabilizando a análise de grande volume de dados com desempenho satisfatório. Com base em tais necessidades, essa dissertação apresenta uma solução guiada por dados para análise de causa raiz em ambientes de computação em nuvem. A solução proposta contempla as funcionalidades necessárias para a aquisição, processamento e análise de dados no diagnóstico de falhas, bem como um método baseado em Redes Bayesianas para a identificação automática de causas raiz de falhas. A validação da proposta é realizada por meio de uma prova de conceito utilizando o OpenStack, um arcabouço para infraestrutura de computação em nuvem, e o Hadoop, um arcabouço para processamento distribuído de grande volume de dados. 
Os testes apresentaram desempenho satisfatório, e o modelo desenvolvido classificou corretamente as causas raiz com baixo número de falsos positivos.
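The classification step at the heart of such a pipeline — mapping log-derived features to a root-cause label — can be sketched with a naive Bayes classifier over log tokens. The training examples, tokens, and labels below are fabricated for illustration; the thesis's actual solution operates on OpenStack logs at much larger scale.

```python
import math
from collections import Counter, defaultdict

# Tiny illustrative training set: (log tokens, labelled root cause).
train = [
    (["timeout", "rpc", "retry"], "network"),
    (["timeout", "unreachable"], "network"),
    (["disk", "io_error", "retry"], "storage"),
    (["disk", "full"], "storage"),
]

label_counts = Counter(lbl for _, lbl in train)
token_counts = defaultdict(Counter)
vocab = set()
for toks, lbl in train:
    token_counts[lbl].update(toks)
    vocab.update(toks)

def classify(tokens):
    """Naive Bayes with Laplace smoothing over log tokens."""
    best, best_lp = None, -math.inf
    for lbl, n in label_counts.items():
        lp = math.log(n / len(train))  # log prior
        total = sum(token_counts[lbl].values())
        for t in tokens:
            lp += math.log((token_counts[lbl][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

print(classify(["timeout", "retry"]))
```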
64

Avaliação de redes Bayesianas para imputação em variáveis qualitativas e quantitativas. / Evaluating Bayesian networks for imputation with qualitative and quantitative variables.

Magalhães, Ismenia Blavatsky de 29 March 2007 (has links)
Redes Bayesianas são estruturas que combinam distribuições de probabilidade e grafos. Apesar das redes Bayesianas terem surgido na década de 80 e as primeiras tentativas em solucionar os problemas gerados a partir da não resposta datarem das décadas de 30 e 40, a utilização de estruturas deste tipo especificamente para imputação é bem recente: em 2002 em institutos oficiais de estatística e em 2003 no contexto de mineração de dados. O intuito deste trabalho é o de fornecer alguns resultados da aplicação de redes Bayesianas discretas e mistas para imputação. Para isso é proposto um algoritmo que combina o conhecimento de especialistas e dados experimentais observados de pesquisas anteriores ou parte dos dados coletados. Ao empregar as redes Bayesianas neste contexto, parte-se da hipótese de que uma vez preservadas as variáveis em sua relação original, o método de imputação será eficiente em manter propriedades desejáveis. Neste sentido, foram avaliados três tipos de consistências já existentes na literatura: a consistência da base de dados, a consistência lógica e a consistência estatística, e propôs-se a consistência estrutural, que se define como sendo a capacidade de a rede manter sua estrutura na classe de equivalência da rede original quando construída a partir dos dados após a imputação. É utilizada pela primeira vez uma rede Bayesiana mista para o tratamento da não resposta em variáveis quantitativas. Calcula-se uma medida de consistência estatística para redes mistas usando como recurso a imputação múltipla para a avaliação de parâmetros da rede e de modelos de regressão. Como aplicação foram conduzidos experimentos com base nos dados de domicílios e pessoas do Censo Demográfico 2000 do município de Natal e nos dados de um estudo sobre homicídios em Campinas. 
Dos resultados, afirma-se que as redes Bayesianas para imputação em atributos discretos são promissoras, principalmente se o interesse estiver em manter a consistência estatística e o número de classes da variável for pequeno. Já outras características, como o coeficiente de contingência entre as variáveis, são afetadas pelo método à medida que se aumenta o percentual de não resposta. Nos atributos contínuos, a mediana apresenta-se mais sensível ao método. / Bayesian networks are structures that combine probability distributions with graphs. Although Bayesian networks first appeared in the 1980s and the first attempts to solve the problems arising from non-response date back to the 1930s and 1940s, the use of structures of this kind specifically for imputation is rather recent: in 2002 by official statistical institutes, and in 2003 in the context of data mining. The purpose of this work is to present some results on the application of discrete and mixed Bayesian networks for imputation. To that end, we propose an algorithm combining knowledge obtained from experts with experimental data derived from previous research or from part of the collected data. In applying Bayesian networks in this context, it is assumed that once the variables are preserved in their original relations, the imputation method will be effective in maintaining desirable properties. Accordingly, three types of consistency that already exist in the literature are evaluated: database consistency, logical consistency and statistical consistency. In addition, structural consistency is proposed, defined as the ability of a network to keep its structure within the equivalence class of the original network when rebuilt from the data after imputation. For the first time, a mixed Bayesian network is used for the treatment of non-response in quantitative variables. 
A measure of statistical consistency for mixed networks is computed using multiple imputation to evaluate network parameters and regression models. As an application, experiments were conducted using simple networks based on household and individual data from the 2000 Demographic Census of the city of Natal and on data from a study on homicides in the city of Campinas. From the results, Bayesian networks for imputation of discrete attributes appear promising, particularly if the interest is in maintaining statistical consistency and the number of classes of the variable is small. Other characteristics, such as the contingency coefficient between variables, are affected by the method as the percentage of non-response increases. For continuous attributes, the median is the most sensitive to the method.
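The basic imputation move — filling a non-response with the most probable value of the variable given its parents in the network — can be sketched as follows. The records, attributes, and hand-picked parent set are illustrative only; a real system would learn the structure and smooth the conditional tables.

```python
from collections import Counter

# Toy records (made-up attributes); None marks a non-response.
records = [
    {"region": "N", "urban": True,  "income": "low"},
    {"region": "N", "urban": True,  "income": "low"},
    {"region": "N", "urban": False, "income": "low"},
    {"region": "S", "urban": True,  "income": "high"},
    {"region": "S", "urban": True,  "income": "high"},
    {"region": "S", "urban": True,  "income": "low"},
]

def impute(target, record, parents):
    """Fill record[target] with the most probable value given the values of
    its parent attributes in the (here, hand-picked) network structure."""
    matches = [r for r in records
               if all(r[p] == record[p] for p in parents)
               and r[target] is not None]
    counts = Counter(r[target] for r in matches)
    return counts.most_common(1)[0][0]

incomplete = {"region": "S", "urban": True, "income": None}
print(impute("income", incomplete, parents=["region", "urban"]))
```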
65

Self-correcting Bayesian target tracking

Biresaw, Tewodros Atanaw January 2015 (has links)
Visual tracking, a building block for many applications, faces challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking. For a tracker in the framework, we embed an evaluation block to check the status of the tracking quality and a correction block to avoid upcoming failures or to recover from failures. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). Self-correcting tracking operates similarly to a self-aware system, where parameters are tuned in the model, or different models are fused or selected in a piece-wise way, in order to deal with tracking challenges and failures. In the DBN model representation, the parameter tuning, fusion and model selection are driven by variables corresponding to the evaluation and correction blocks, respectively. The inferences of variables in the DBN model are used to explain the operation of self-correcting tracking. The specific contributions under the generic self-correcting framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below. To improve the probabilistic tracking of an extended object with a set of model points, we use the Track-Evaluate-Correct framework to achieve self-correcting tracking. The framework combines the tracker with an online performance measure and a correction technique. We correlate model point trajectories to improve online the accuracy of a failed or uncertain tracker. A model point tracker receives assistance from neighbouring trackers whenever degradation in its performance is detected by the online performance measure. 
The correction of the model point state is based on correlation information from the states of the other trackers. Partial Least Squares regression is used to adaptively model the correlation of point tracker states from short windowed trajectories. Experimental results on data obtained from optical motion capture systems show the improvement in tracking performance of the proposed framework compared to the baseline tracker and other state-of-the-art trackers. The proposed framework allows appropriate re-initialisation of local trackers to recover from failures caused by clutter and missed detections in the motion capture data. Finally, we propose a tracker-level fusion framework to obtain self-correcting tracking. The fusion framework combines trackers addressing different tracking challenges to improve the overall performance. As a novelty of the proposed framework, we include an online performance measure to identify the track quality level of each tracker and guide the fusion. The trackers in the framework assist each other based on appropriate mixing of the prior states. Moreover, the track quality level is used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the trackers in the framework, and to other state-of-the-art trackers. The appearance-model update and prior mixing, guided by the online performance measure, allow the proposed framework to deal with tracking challenges.
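The neighbour-assisted correction step can be sketched as a short-window regression from a healthy tracker's trajectory to the failed one. Ordinary least squares stands in here for the Partial Least Squares regression used in the thesis, and the trajectories are invented: two model points moving rigidly, so their coordinates are strongly correlated.

```python
def ols_fit(xs, ys):
    """Ordinary least squares fit y = a*x + b over a short window."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Short trajectories (a 5-frame window) of one coordinate of two points.
neighbour = [10.0, 11.0, 12.0, 13.0, 14.0]
target    = [20.1, 21.0, 22.1, 23.0, 24.0]

a, b = ols_fit(neighbour, target)

# Frame 6: the target tracker is flagged as failed by the performance
# measure; its state is corrected from the neighbour's new measurement.
corrected = a * 15.0 + b
print(round(corrected, 2))
```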
66

Bayesian networks for evidence based clinical decision support

Yet, Barbaros January 2013 (has links)
Evidence based medicine (EBM) is defined as the use of the best available evidence for decision making, and it has been the predominant paradigm in clinical decision making for the last 20 years. EBM requires evidence from multiple sources to be combined, as published results may not be directly applicable to individual patients. For example, randomised controlled trials (RCT) often exclude patients with comorbidities, so a clinician has to combine the results of the RCT with evidence about comorbidities using their clinical knowledge of how disease, treatment and comorbidities interact with each other. Bayesian networks (BN) are well suited to assisting clinicians in making evidence-based decisions, as they can combine knowledge, data and other sources of evidence. The graphical structure of a BN is suitable for representing knowledge about the mechanisms linking diseases, treatments and comorbidities, and the strength of the relations in this structure can be learned from data and published results. However, there is still a lack of techniques that systematically use knowledge, data and published results together to build BNs. This thesis advances techniques for using knowledge, data and published results to develop and refine BNs for assisting clinical decision-making. In particular, the thesis presents four novel contributions. First, it proposes a method of combining knowledge and data to build BNs that reason in a way consistent with knowledge and data, by allowing the BN model to include variables that cannot be measured directly. Second, it proposes techniques to build BNs that provide decision support by combining the evidence from meta-analysis of published studies with clinical knowledge and data. Third, it presents an evidence framework that supplements clinical BNs by representing the description and source of the medical evidence supporting each element of a BN. 
Fourth, it proposes a knowledge engineering method for abstracting a BN structure by showing how each abstraction operation changes the knowledge encoded in the structure. These novel techniques are illustrated by a clinical case study in trauma care. The aim of the case study is to provide decision support in the treatment of mangled extremities by using clinical expertise, data and published evidence about the subject. The case study was conducted in collaboration with the trauma unit of the Royal London Hospital.
67

Network inference using independence criteria

Verbyla, Petras January 2018 (has links)
Biological systems are driven by complex regulatory processes. Graphical models play a crucial role in the analysis and reconstruction of such processes. It is possible to derive regulatory models using network inference algorithms from high-throughput data, for example gene or protein expression data. A wide variety of network inference algorithms have been designed and implemented. Our aim is to explore the possibilities of using statistical independence criteria for biological network inference. The contributions of our work fall into four parts. First, we provide a detailed overview of some of the most popular general independence criteria: distance covariance (dCov), kernel canonical correlation (KCC), kernel generalized variance (KGV) and the Hilbert-Schmidt Independence Criterion (HSIC). We provide easy-to-understand geometric interpretations of these criteria. We also explicitly show the equivalence of dCov, KGV and HSIC. Second, we introduce a new criterion for measuring dependence based on the signal-to-noise ratio (SNRIC). SNRIC is significantly faster to compute than other popular independence criteria. SNRIC is an approximate criterion but becomes exact under many popular modelling assumptions, for example for data from an additive noise model. Third, we compare the performance of the independence criteria on biological experimental data within the framework of the PC algorithm. Since not all criteria are available in a version that allows testing conditional independence, we propose and test an approach that relies on residuals and requires only an unconditional version of an independence criterion. Finally, we propose a novel method to infer networks with feedback loops. We use an MCMC sampler, which samples using a loss function based on an independence criterion. This allows us to find networks under very general assumptions, such as non-linear relationships, non-Gaussian noise distributions and feedback loops.
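The biased empirical HSIC mentioned above has a compact form, trace(KHLH)/n² with centering matrix H, which a few lines of code make concrete. The kernel bandwidth and sample values below are arbitrary choices for illustration; centering only one Gram matrix suffices because the trace is cyclic.

```python
import math

def gaussian_gram(xs, sigma=1.0):
    """Gram matrix of the Gaussian kernel on a 1-D sample."""
    return [[math.exp(-((a - b) ** 2) / (2 * sigma ** 2)) for b in xs]
            for a in xs]

def center(K):
    """Double-center a Gram matrix: Kc = H K H with H = I - (1/n) 11^T."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def hsic(xs, ys, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2 = trace(Kc L) / n^2."""
    n = len(xs)
    Kc = center(gaussian_gram(xs, sigma))
    L = gaussian_gram(ys, sigma)
    return sum(Kc[i][j] * L[j][i] for i in range(n) for j in range(n)) / n ** 2

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
dependent = hsic(xs, [x ** 2 for x in xs])          # strong functional relation
independent = hsic(xs, [1.0, 1.0, 1.0, 1.0, 1.0])   # constant: no dependence
print(dependent > independent)
```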
68

Tracing Knowledge and Engagement in Parallel by Observing Behavior in Intelligent Tutoring Systems

Schultz, Sarah E 27 January 2015 (has links)
Two of the major goals in Educational Data Mining are determining students' state of knowledge and determining their affective state. It is useful to be able to determine whether a student is engaged with a tutor or task in order to adapt to his or her needs, and it is necessary to have an idea of the student's knowledge state in order to provide material that is appropriately challenging. These two problems are usually examined separately, and multiple methods have been proposed to solve each of them. However, little work has been done on examining both of these states in parallel and their combined effect on a student's performance. The work reported in this thesis explores ways to observe both behavior and performance in order to more fully understand student state.
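For the knowledge-state half of this problem, Bayesian Knowledge Tracing is the standard baseline in intelligent tutoring systems; a sketch of its per-response update is below. This is a generic illustration of the technique, not necessarily the model used in this thesis, and the four parameter values are made up.

```python
# Bayesian Knowledge Tracing: illustrative parameters, not fitted values.
P_INIT, P_LEARN, P_GUESS, P_SLIP = 0.3, 0.2, 0.2, 0.1

def bkt_update(p_know, correct):
    """Posterior P(skill known) after one observed response,
    followed by the learning-transition step."""
    if correct:
        obs = (p_know * (1 - P_SLIP)
               / (p_know * (1 - P_SLIP) + (1 - p_know) * P_GUESS))
    else:
        obs = (p_know * P_SLIP
               / (p_know * P_SLIP + (1 - p_know) * (1 - P_GUESS)))
    return obs + (1 - obs) * P_LEARN

p = P_INIT
for answer in [True, True, False, True]:  # a student's response sequence
    p = bkt_update(p, answer)
print(round(p, 3))
```

A correct response raises the knowledge estimate and an incorrect one lowers it, with the slip and guess parameters keeping single responses from being over-trusted.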
69

Aquisição e processamento de biossinais de eletromiografia de superfície e eletroencefalografia para caracterização de comandos verbais ou intenção de fala mediante seu processamento matemático em pacientes com disartria / Acquisition and processing of surface electromyography and electroencephalography biosignals for the characterization of verbal commands or speech intent through their mathematical processing in patients with dysarthria

Sánchez Galego, Juliet January 2016 (has links)
Sistemas para assistência de pessoas com sequelas de Acidente Vascular Cerebral (AVC), como, por exemplo, a Disartria, apresentam interesse crescente devido ao aumento da parcela da população com esses distúrbios. Este trabalho propõe a aquisição e o processamento dos biossinais de Eletromiografia de Superfície (sEMG) nos músculos do rosto ligados ao processo da fala e de Eletroencefalografia (EEG), sincronizados no tempo mediante um arquivo de áudio. Para isso, realizaram-se coletas em voluntários saudáveis no Laboratório IEE e com voluntários com Disartria, previamente diagnosticados com AVC, no departamento de Fisioterapia do Hospital de Clínicas de Porto Alegre. O objetivo principal é classificar esses biossinais frente a comandos verbais estabelecidos, mediante o método computacional Support Vector Machine (SVM) para o sinal de sEMG e Naive Bayes (NB) para o sinal de EEG, visando o futuro estudo e a classificação do grau de Disartria do paciente. Estes métodos foram comparados com o Linear Discriminant Analysis (LDA), que foi implementado para os sinais de sEMG e EEG. As características extraídas do sinal de sEMG foram: desvio padrão, média aritmética, skewness, kurtosis e RMS; para o sinal de EEG, as características extraídas na frequência foram mínimo, máximo, média e desvio padrão, e, no domínio do tempo, skewness e kurtosis. Como parte do pré-processamento, também foi empregado o filtro espacial Common Spatial Pattern (CSP), de forma a aumentar a atividade discriminativa entre as classes de movimento no sinal de EEG. Avaliaram-se, por meio de um Projeto de Experimentos Fatorial, a natureza das coletas, o sujeito, o método computacional, o estado do sujeito e a banda de frequência filtrada para EEG. Os comandos verbais definidos, “Direita”, “Esquerda”, “Para Frente” e “Para Trás”, possibilitaram a identificação de tarefas mentais em sujeitos saudáveis e com Disartria, atingindo-se acurácia de 77,6% a 80,8%. 
/ Assistive technology for people with Cerebrovascular Accident (CVA) aftereffects, such as Dysarthria, is gaining interest due to the increasing proportion of the population with these disorders. This work proposes the acquisition and processing of the Surface Electromyography (sEMG) signal from the face muscles involved in the speech process and the Electroencephalography (EEG) signal, synchronized in time by an audio file. To that end, assays were carried out with healthy volunteers at the IEE Laboratory and with dysarthric volunteers, previously diagnosed with CVA, at the physiotherapy department of the Porto Alegre University Hospital. The main objective is to classify these biosignals against established verbal commands, using the Support Vector Machine (SVM) computational method for the sEMG signal and Naive Bayes (NB) for the EEG signal, with a view to the future study and classification of a patient's degree of Dysarthria. These methods were compared with Linear Discriminant Analysis (LDA), which was implemented for both sEMG and EEG. The features extracted from the sEMG signal were standard deviation, arithmetic mean, skewness, kurtosis and RMS; for the EEG signal, the features extracted in the frequency domain were minimum, maximum, mean and standard deviation, while skewness and kurtosis were extracted in the time domain. As part of the pre-processing, a Common Spatial Pattern (CSP) spatial filter was also employed, in order to increase the discriminative activity between motion classes in the EEG signal. The nature of the assays, the subject, the computational method, the subject's health state and, specifically for EEG, the filtered frequency band were evaluated in a factorial design of experiments. The defined verbal commands, "Right", "Left", "Forward" and "Back", allowed the identification of mental tasks in healthy and dysarthric subjects, reaching an accuracy of 77.6%-80.8%.
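The time-domain sEMG features listed in the abstract (mean, standard deviation, skewness, kurtosis, RMS) can be computed per signal window as below, before the resulting feature vectors are passed to an SVM, NB, or LDA classifier. The sample window is synthetic; real windows would come from the recorded sEMG channels.

```python
import math

def semg_features(window):
    """The five time-domain features named above, for one sEMG window
    (population moments, as a simple illustrative convention)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in window) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in window) / (n * std ** 4)
    rms = math.sqrt(sum(x * x for x in window) / n)
    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "rms": rms}

feats = semg_features([0.1, -0.2, 0.4, -0.1, 0.3, -0.3])
print(sorted(feats))
```

Note that RMS² = mean² + variance, so RMS always dominates the standard deviation whenever the window has a nonzero mean.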
70

Analyse multi-niveaux en biologie systémique computationnelle : le cas des cellules HeLa sous traitement apoptotique / Multi-level analysis in computational system biology : the case of HeLa cells under apoptosis treatment

Pichené, Matthieu 25 June 2018 (has links)
Cette thèse examine une nouvelle façon d'étudier l'impact d'une voie de signalisation donnée sur l'évolution d'un tissu grâce à l'analyse multi-niveaux. Cette analyse est divisée en deux parties principales. La première partie considère les modèles décrivant la voie au niveau cellulaire. À l'aide de ces modèles, on peut calculer de manière résoluble la dynamique d'un groupe de cellules, en le représentant par une distribution multivariée sur des concentrations de molécules clés. La deuxième partie propose un modèle 3D de croissance tissulaire qui considère la population de cellules comme un ensemble de sous-populations, partitionnée de façon à ce que chaque sous-population partage les mêmes conditions externes. Pour chaque sous-population, le modèle résoluble présenté dans la première partie peut être utilisé. Cette thèse se concentre principalement sur la première partie, tandis qu'un chapitre couvre un projet de modèle pour la deuxième partie. / This thesis examines a new way to study the impact of a given pathway on the dynamics of a tissue through Multi-Level Analysis. The analysis is split into two main parts. The first part considers models describing the pathway at the cellular level. Using these models, one can compute in a tractable manner the dynamics of a group of cells, representing it by a multivariate distribution over the concentrations of key molecules. The second part proposes a 3D model of tissue growth that considers the population of cells as a set of subpopulations, partitioned so that each subpopulation shares the same external conditions. For each subpopulation, the tractable model presented in the first part can be used. This thesis focuses mainly on the first part, while one chapter covers a draft of a model for the second part.
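Representing a group of cells by a multivariate distribution and propagating it tractably can be sketched, for the Gaussian case, as one mean/covariance update under linearised dynamics x' = Ax + noise. The dynamics matrix A and process noise Q below are illustrative placeholders, not taken from the thesis's apoptosis model.

```python
# One moment-propagation step for a bivariate Gaussian over two key
# molecule concentrations: mu' = A mu, Sigma' = A Sigma A^T + Q.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

A = [[0.9, 0.1],   # weak coupling between the two species (illustrative)
     [0.0, 0.8]]
Q = [[0.01, 0.0],  # process noise (illustrative)
     [0.0, 0.01]]

mu = [[2.0], [1.0]]                  # current mean concentrations
sigma = [[0.5, 0.1], [0.1, 0.4]]     # current covariance over the group

mu_next = matmul(A, mu)
ASAt = matmul(matmul(A, sigma), transpose(A))
sigma_next = [[s + q for s, q in zip(rs, rq)] for rs, rq in zip(ASAt, Q)]

print(mu_next, sigma_next)
```

Iterating this step carries the whole distribution forward in closed form, which is what makes the group-level description tractable compared with simulating every cell.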
