311 |
Extension of the Rule-Based Programming Language XL by Concepts for Multi-Scaled Modelling and Level-of-Detail Visualization
Ong, Yongzhi 27 April 2015 (has links)
No description available.
|
312 |
Etude multi-échelle de l'érosion de contact au sein des ouvrages hydrauliques en terre / Multi-scale study of contact erosion within earthen hydraulic structures
Beguin, Remi 07 December 2011 (has links) (PDF)
Contact erosion is a type of internal erosion that develops at the interface between two layers of materials with different grain-size distributions. The particles of a fine soil (sand, silt, clay, ...) are detached by the flow and carried through the pores of the adjacent coarse soil (gravel, ...). Although its presence is suspected in many structures, this erosion process had received little study until now, so this thesis set out to understand it better in order to model it. At the pore scale, the flow at the interface between two porous media was characterized using an experimental device developed at Cemagref in Aix-en-Provence that combines laser-induced fluorescence, PIV, and a refractive-index-matched medium; the importance of the variability of the hydraulic loading was thus highlighted. At the sample scale, tests on natural and reconstituted soils were carried out at LTHE to identify the phenomena at play. Measurements of the erosion rate as a function of flow intensity were also obtained with this device for different types of fine and coarse soils, and a stochastic model of these tests was then proposed. Finally, large-scale tests were conducted at the laboratory of the Compagnie Nationale du Rhône to study possible scale effects as well as the consequences of contact erosion on the overall behavior and integrity of a structure; these tests demonstrated that pipe erosion can be initiated by contact erosion. This thesis was carried out within the ERINOH project (ERosion INterne dans les Ouvrages Hydrauliques), under a CIFRE agreement with the Centre d'Ingénierie Hydraulique of EDF.
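Erosion-rate measurements of the kind described above are conventionally fitted, in the internal-erosion literature, with an excess-shear-stress law. The thesis's fitted coefficients are not reproduced here; the sketch below uses placeholder values and only illustrates the form of such a law.

```python
# Sketch of the excess-shear-stress erosion law commonly used in the
# internal-erosion literature: erosion starts once the hydraulic shear
# stress exceeds a critical value. The coefficients are illustrative
# placeholders, not values fitted in the thesis.

def erosion_rate(tau, tau_c=0.5, k_er=1e-5):
    """Erosion rate (kg/s/m^2) for a shear stress tau (Pa).

    tau_c : critical shear stress below which no erosion occurs (Pa)
    k_er  : erosion coefficient of the fine soil (kg/s/m^2/Pa)
    """
    return k_er * max(tau - tau_c, 0.0)

if __name__ == "__main__":
    for tau in (0.2, 0.5, 1.0, 2.0):
        print(f"tau = {tau:.1f} Pa -> rate = {erosion_rate(tau):.2e} kg/s/m^2")
```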
|
313 |
Multivariate Multiscale Analysis of Neural Spike Trains
Ramezan, Reza 10 December 2013 (has links)
This dissertation introduces new methodologies for the analysis of neural spike trains. Biological properties of the nervous system, and how they are reflected in neural data, can motivate specific analytic tools. Some of these biological aspects motivate multiscale frameworks, which allow for simultaneous modelling of the local and global behaviour of neurons. Chapter 1 provides the preliminary background on the biology of the nervous system and details the concepts of information and randomness in the analysis of neural spike trains. It also provides the reader with a thorough literature review of current statistical models for the analysis of neural spike trains. The material presented in the next six chapters (2-7) has been the focus of three papers, which have either already been published or are being prepared for publication.
It is demonstrated in Chapters 2 and 3 that the multiscale complexity penalized likelihood method, introduced in Kolaczyk and Nowak (2004), is a powerful model for the simultaneous modelling of spike trains with biological properties from different time scales. To detect the periodic spiking activities of neurons, two periodic models from the literature, Bickel et al. (2007, 2008) and Shao and Li (2011), were combined and modified in a multiscale penalized likelihood model. The contributions of these chapters are (1) employing a powerful visualization tool, the inter-spike interval (ISI) plot, (2) combining the multiscale method of Kolaczyk and Nowak (2004) with the periodic models of Bickel et al. (2007, 2008) and Shao and Li (2011) to introduce the so-called additive and multiplicative models for the intensity function of neural spike trains, and introducing a cross-validation scheme to estimate their tuning parameters, (3) providing numerical bootstrap confidence bands for the multiscale estimate of the intensity function, and (4) studying the effect of time scale on the statistical properties of spike counts.
Motivated by neural integration phenomena, as well as the adjustments for the neural refractory period, Chapters 4 and 5 study the Skellam process and introduce the Skellam Process with Resetting (SPR). Introducing SPR and its application in the analysis of neural spike trains is one of the major contributions of this dissertation. This stochastic process is biologically plausible, and unlike the Poisson process, it does not suffer from a limited dependency structure. It also has multivariate generalizations for the simultaneous analysis of multiple spike trains. A computationally efficient recursive algorithm for the estimation of the parameters of SPR is introduced in Chapter 5. Except for the literature review at the beginning of Chapter 4, the rest of the material within these two chapters is original. The specific contributions of Chapters 4 and 5 are (1) introducing the Skellam Process with Resetting as a statistical tool to analyze neural spike trains and studying its properties, including all theorems and lemmas provided in Chapter 4, (2) giving two fairly standard definitions of the Skellam process (homogeneous and inhomogeneous) and proving their equivalence, (3) deriving the likelihood function based on the observable data (spike trains) and developing a computationally efficient recursive algorithm for parameter estimation, and (4) studying the effect of time scales on the SPR model.
The challenging problem of multivariate analysis of neural spike trains is addressed in Chapter 6. As far as we know, the multivariate models available in the literature suffer from limited dependency structures; in particular, modelling negative correlation among spike trains is a challenging problem. To address this issue, the multivariate Skellam distribution, as well as the multivariate Skellam process, both of which have flexible dependency structures, are developed. Chapter 6 also introduces a multivariate version of the Skellam Process with Resetting (MSPR), and a so-called profile-moment likelihood estimation of its parameters. This chapter generalizes the results of Chapters 4 and 5, and therefore, except for the brief literature review provided at the beginning of the chapter, the remainder of the material is original work. In particular, the contributions of this chapter are (1) introducing the multivariate Skellam distribution, (2) introducing two definitions of the multivariate Skellam process in both homogeneous and inhomogeneous cases and proving their equivalence, (3) introducing the Multivariate Skellam Process with Resetting (MSPR) to simultaneously model spike trains from an ensemble of neurons, and (4) utilizing the so-called profile-moment likelihood method to compute estimates of the parameters of MSPR.
A discussion of the developed methodologies, as well as the "next steps", is given in Chapter 7.
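As a rough illustration of the kind of process the dissertation builds on: a Skellam process is the difference of two Poisson streams, and resetting clamps it back to zero at each threshold crossing. The simulation below is a minimal sketch under that reading; the rates, threshold, and reset rule are illustrative assumptions, not the dissertation's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spr(rate_up, rate_down, threshold, t_end, dt=1e-4):
    """Minimal sketch of a Skellam process with resetting.

    The latent potential is the difference of two Poisson streams
    (rates rate_up and rate_down, in events/s). A spike is emitted
    when the potential reaches `threshold`, after which it resets to
    zero, a crude stand-in for the neural refractory mechanism the
    dissertation models. Exact definitions follow the thesis, not
    this sketch.
    """
    n_steps = int(t_end / dt)
    v = 0
    spikes = []
    for i in range(n_steps):
        v += rng.poisson(rate_up * dt) - rng.poisson(rate_down * dt)
        if v >= threshold:
            spikes.append(i * dt)
            v = 0  # resetting
    return np.array(spikes)

spikes = simulate_spr(rate_up=120.0, rate_down=80.0, threshold=10, t_end=5.0)
print(f"{spikes.size} spikes, mean ISI = {np.diff(spikes).mean():.4f} s")
```

Because each spike depends on the full history of both input streams since the last reset, such a process carries a richer dependency structure than a Poisson spike train, which is the property the abstract emphasizes.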
|
314 |
A Distributed Optimal Control Approach for Multi-agent Trajectory Optimization
Foderaro, Greg January 2013 (has links)
This dissertation presents a novel distributed optimal control (DOC) problem formulation that is applicable to multiscale dynamical systems composed of numerous interacting systems, or agents, that together give rise to coherent macroscopic behaviors, or coarse dynamics, that can be modeled by partial differential equations (PDEs) on larger spatial and time scales. The DOC methodology seeks to obtain optimal agent state and control trajectories by representing the system's performance as an integral cost function of the macroscopic state, which is optimized subject to the agents' dynamics. The macroscopic state is identified as a time-varying probability density function to which the states of the individual agents can be mapped via a restriction operator. Optimality conditions for the DOC problem are derived analytically, and the optimal trajectories of the macroscopic state and control are computed using direct and indirect optimization algorithms. Feedback microscopic control laws are then derived from the optimal macroscopic description using a potential function approach.

The DOC approach is demonstrated numerically through benchmark multi-agent trajectory optimization problems in which large systems of agents are given the objectives of traveling to goal state distributions, avoiding obstacles, maintaining formations, and minimizing energy consumption through control. Comparisons are provided between the direct and indirect optimization techniques, as well as existing methods from the literature, and a computational complexity analysis is presented. The methodology is also applied to a track coverage optimization problem for the control of distributed networks of mobile omnidirectional sensors, where the sensors move to maximize the probability of track detection of a known distribution of mobile targets traversing a region of interest (ROI). Through extensive simulations, DOC is shown to outperform several existing sensor deployment and control strategies. Furthermore, the computation required by the DOC algorithm is shown to be far lower than that of classical direct optimal control algorithms. / Dissertation
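To make the restriction operator concrete: one standard way to map agent states to a macroscopic density is kernel smoothing. The sketch below assumes one-dimensional agent states and a Gaussian kernel; the dissertation's actual operator and dimensionality may differ.

```python
import numpy as np

def restriction_operator(agent_states, grid, bandwidth=0.1):
    """Sketch of one common choice of restriction operator: map N
    one-dimensional agent states to a probability density on a grid
    with a Gaussian kernel. The dissertation's exact operator is not
    reproduced here; this only illustrates the agent-to-density step
    on which the DOC cost functional is built.
    """
    x = np.asarray(agent_states)[:, None]          # shape (N, 1)
    g = np.asarray(grid)[None, :]                  # shape (1, M)
    k = np.exp(-0.5 * ((g - x) / bandwidth) ** 2)  # kernel matrix
    density = k.sum(axis=0)
    dx = grid[1] - grid[0]
    return density / (density.sum() * dx)          # normalize to a PDF

agents = np.random.default_rng(1).normal(0.0, 0.5, size=200)
grid = np.linspace(-2.0, 2.0, 201)
rho = restriction_operator(agents, grid)
print(f"integral ~ {rho.sum() * (grid[1] - grid[0]):.3f}")  # ~1.0
```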
|
315 |
Reduced Order Model and Uncertainty Quantification for Stochastic Porous Media Flows
Wei, Jia August 2012 (has links)
In this dissertation, we focus on uncertainty quantification problems where the goal is to sample porous media properties given integrated responses. We first introduce a reduced order model using the level set method to characterize the channelized features of permeability fields. The sampling process is carried out in a Bayesian framework, and we study the regularity of the posterior distributions with respect to the prior measures.
The stochastic flow equations, which contain both spatial and random components, must be solved in order to sample the porous media properties, and some type of upscaling or multiscale technique is needed when solving flow and transport through heterogeneous porous media. We propose an ensemble-level multiscale finite element method and an ensemble-level preconditioner technique for solving the stochastic flow equations when the permeability fields have certain topological features. These methods can be used to accelerate the forward computations in the sampling process.
Additionally, we develop an analysis-of-variance-based mixed multiscale finite element method, as well as a novel adaptive version. These methods are used to study the forward propagation of uncertainty in input random fields; computational cost is reduced because the high-dimensional problem is decomposed into lower-dimensional problems.
We also develop efficient advanced Markov chain Monte Carlo methods, with algorithms based on the multi-stage Markov chain Monte Carlo and Stochastic Approximation Monte Carlo methods. The new methods are able to search the whole sample space for optimization. Analysis and detailed numerical results are presented for applications of all the above methods.
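The multi-stage idea can be illustrated with the standard two-stage Metropolis-Hastings scheme, in which a cheap coarse-scale posterior screens proposals before the expensive fine-scale model is evaluated. The toy Gaussian densities below stand in for the flow-solver likelihoods and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy log-posteriors standing in for expensive fine-scale and cheap
# coarse-scale likelihoods; in the dissertation these would come from
# multiscale flow solvers, not the Gaussians used here.
def log_fine(x):   return -0.5 * (x - 1.0) ** 2          # "expensive"
def log_coarse(x): return -0.5 * ((x - 1.1) ** 2) / 1.2  # "cheap" surrogate

def two_stage_mcmc(x0, n_iter=5000, step=1.0):
    """Sketch of two-stage Metropolis-Hastings: a proposal must first
    pass a screening test against the coarse model before the fine
    model is evaluated, so most rejections cost only a coarse solve."""
    x, lf_x, lc_x = x0, log_fine(x0), log_coarse(x0)
    chain = []
    for _ in range(n_iter):
        y = x + step * rng.standard_normal()
        lc_y = log_coarse(y)
        # Stage 1: accept/reject with the coarse posterior only.
        if np.log(rng.uniform()) < lc_y - lc_x:
            lf_y = log_fine(y)
            # Stage 2: correction so the chain targets the fine posterior.
            if np.log(rng.uniform()) < (lf_y - lf_x) - (lc_y - lc_x):
                x, lf_x, lc_x = y, lf_y, lc_y
        chain.append(x)
    return np.array(chain)

chain = two_stage_mcmc(0.0)
print(f"posterior mean ~ {chain[1000:].mean():.2f}")  # ~1.0
```

The stage-2 ratio divides out the coarse acceptance probability, so the chain still targets the fine-scale posterior exactly; the coarse model only filters out poor proposals cheaply.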
|
316 |
Processamento de imagens HDR utilizando shaders gráficos em múltiplas plataformas / HDR image processing using graphics shaders on multiple platforms
Munhoz, Rafael Gomes January 2017 (has links)
Advisor: Prof. Dr. André Guilherme Ribeiro Balan / Master's dissertation, Universidade Federal do ABC, Graduate Program in Computer Science, 2017. / A real scene contains a contrast range that, as seen by the human eye, reveals details that common digital camera sensors cannot capture, owing to the limited range of color values these devices (and typical screen monitors) can acquire and display. High Dynamic Range (HDR) images are representations capable of reproducing this wide range of values, much closer to what the human visual system perceives. To generate and display HDR images within the limitations of the devices, it is necessary to work in a lower-range domain, with LDR (Low Dynamic Range) images. The algorithms that map values between the two domains are called tone-mapping operators. Applying a tone-mapping operator alone does not produce high-quality results; noise-reduction and image-decomposition techniques are also required. These techniques carry a high computational cost and, on very high resolution HDR images, demand processing times that the CPU cannot currently handle. The GPU, on the other hand, offers natural parallelism by allowing operations to be applied to thousands of pixels simultaneously. One way to program these operations on the GPU is through graphics shaders, which change how the pixels of an image are rendered in a graphics pipeline such as OpenGL's. With the steady growth of mobile device usage, and since most current mobile devices include a programmable GPU in their architecture, an important research question is the performance and viability of HDR image processing and tone-mapping on such devices.
In this work, we develop OpenGL graphics shaders that perform tone-mapping operations as well as multiscale image decomposition using important, modern nonlinear image filters, in order to preserve as much image detail as possible. This yields sharper results than techniques that apply tone-mapping operators directly to the images, while GPU processing represents a large speed improvement over CPU processing. The application we developed is multiplatform, running on desktops and mobile devices, and we used it to evaluate the performance of different tone-mapping operators and different nonlinear image filters for multilevel image decomposition.
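For readers unfamiliar with tone-mapping: the classic global operator of Reinhard et al. (2002) compresses scene luminance into the displayable range. The NumPy sketch below shows the per-pixel logic that a fragment-shader version would apply; it is a generic illustration, not one of the shaders developed in the dissertation.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Sketch of the classic global Reinhard operator (Reinhard et al.,
    2002) on an HDR image of linear RGB radiances, shape (H, W, 3).
    The dissertation implements such operators as OpenGL fragment
    shaders; this NumPy version shows the same per-pixel mapping logic.
    """
    # Luminance and log-average ("key") of the scene.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg            # scale to the target key
    mapped = scaled / (1.0 + scaled)        # compress to [0, 1)
    ratio = mapped / (lum + eps)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

hdr = np.random.default_rng(3).lognormal(0.0, 2.0, size=(4, 4, 3))
ldr = reinhard_tonemap(hdr)
print(ldr.min(), ldr.max())  # values now inside the displayable [0, 1] range
```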
|
317 |
Calcul haute performance en dynamique des contacts via deux familles de décomposition de domaine / High performance computing of discrete nonsmooth contact dynamics via two domain decomposition methods
Visseq, Vincent 03 July 2013 (has links)
Numerical simulation of multibody systems in the presence of complex interactions, including frictional contact, raises many challenges in both modeling and computation time. This thesis studies two families of domain decomposition methods adapted to the nonsmooth contact dynamics (NSCD) formalism, an implicit time-integration method for the evolution of a collection of interacting bodies that accounts for the discrete, nonsmooth character of such media. Classical domain decomposition techniques therefore cannot be transposed directly. Two domain decomposition methods, close to the Schwarz and Schur-complement formalisms, are presented. These methods prove to be powerful tools for the distributed-memory parallelization of 2D and 3D granular simulations; their scalability and numerical performance are studied and show good parallel behavior on a high-performance computing platform. The structural behavior of dense granular packings is further exploited to propagate information rapidly across all subdomains, via a spatial multilevel preconditioner with a coarse problem in a semi-implicit space-time integration scheme.
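The Schwarz family mentioned above alternates subdomain solves while exchanging interface values. The sketch below applies the idea to a smooth linear toy problem (1D Poisson); the thesis's contribution is precisely the nontrivial adaptation of this mechanism to nonsmooth, discrete contact dynamics, which the sketch does not attempt.

```python
import numpy as np

def schwarz_1d_poisson(n=101, overlap=10, sweeps=30):
    """Sketch of the alternating Schwarz method on -u'' = 1 over [0, 1]
    with u(0) = u(1) = 0 and two overlapping subdomains: repeated
    subdomain solves that exchange interface values. Illustrative toy
    only; the NSCD setting of the thesis is nonsmooth and discrete.
    """
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    a, b = n // 2 - overlap, n // 2 + overlap   # overlapping region [a, b]

    def subsolve(m, left, right):
        # Direct solve of the m-point interior finite-difference system.
        A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
        f = np.ones(m)
        f[0] += left / h**2
        f[-1] += right / h**2
        return np.linalg.solve(A, f)

    for _ in range(sweeps):
        u[1:b] = subsolve(b - 1, u[0], u[b])                  # left subdomain
        u[a + 1:n - 1] = subsolve(n - 2 - a, u[a], u[n - 1])  # right subdomain
    return u

u = schwarz_1d_poisson()
print(f"max u = {u.max():.4f} (exact 1/8 = 0.125)")
```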
|
318 |
Développement d’une approche multi-échelle pour l'étude de la solubilité des flavonoïdes et leur assemblage avec les polymères / Development of a multi-scale approach to study flavonoids solubility and their assembly with polymers
Slimane, Manel 15 December 2017 (has links)
For several decades, flavonoids have been increasingly used in food and non-food applications, mainly because of their antioxidant activities. However, the solubilization, dispersion, and stabilization of these molecules are variable and hold back their use in formulations. The objective of this work is to overcome this limitation by understanding the interactions between these compounds and their medium, in the absence and in the presence of polymers, through a multi-scale approach combining molecular modeling, mesoscale modeling, and experiments. First, the interactions of three flavonoids, quercetin and its two glycosylated forms rutin and isoquercetin, in various organic solvents were studied.
The results obtained (mainly the Flory-Huggins parameter and the radial distribution function, RDF) showed that the B2 part common to the three flavonoids, with Flory-Huggins parameter values close to 0.5 in M2B2 and much larger in acetonitrile, is responsible for the miscibility behavior of the flavonoids in the solvent. DDFT simulations showed aggregation of quercetin in M2B2 versus dispersion in acetonitrile. All these observations were confirmed experimentally (solubility study and microscopic observations). Second, quercetin was studied in the presence of a biopolymer, PLGA, in water. Nanoparticles were formed by varying the concentrations of the various compounds and the lactic acid / glycolic acid ratio of the PLGA. Molecular modeling and mesoscale modeling (calculation of the solubility parameter by molecular dynamics, and observation of dispersion or phase separation by DDFT), together with the experimental approach (DSC, TEM, ...), led to the same conclusions. Indeed, the particle size increases with the PLGA concentration and with the lactic acid content of the polymer. The emulsifier concentration in the medium also plays an important role in the formation of PLGA-quercetin aggregates: the higher its concentration, the more difficult particle formation becomes, since the emulsifier affects the viscosity of the medium and hence the diffusivity of the molecules in water. All the results obtained by molecular modeling and mesoscale modeling were validated experimentally. We can therefore conclude that the simulation methodology adopted can serve as a tool to help predict the behavior of flavonoids in different media.
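For context on the 0.5 threshold cited above: the Flory-Huggins parameter χ can be estimated from Hildebrand solubility parameters, with χ below roughly 0.5 indicating miscibility. The values in the sketch are illustrative, not the molecular-dynamics results computed in the thesis.

```python
# Sketch of the standard estimate of the Flory-Huggins interaction
# parameter from Hildebrand solubility parameters:
#     chi = v_ref * (delta1 - delta2)**2 / (R * T)
# The abstract judges solute-solvent miscibility by comparing chi to
# ~0.5. The numbers below are illustrative placeholders, not the
# thesis's computed values.

R = 8.314  # gas constant, J/(mol K)

def flory_huggins_chi(delta1, delta2, v_ref=100e-6, T=298.15):
    """delta1, delta2 in Pa**0.5; v_ref is a reference molar volume
    in m^3/mol; returns the dimensionless chi parameter."""
    return v_ref * (delta1 - delta2) ** 2 / (R * T)

# Hypothetical solubility parameters (29.0 and 24.0 MPa**0.5):
chi = flory_huggins_chi(29.0e3, 24.0e3)
print(f"chi = {chi:.2f} ({'poor' if chi > 0.5 else 'good'} miscibility)")
```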
|
319 |
Modélisation multi-échelle et analyse expérimentale du comportement de composites à matrice thermoplastique renforcés fibres de verre sous sollicitations dynamiques modérées / Multiscale model and experimental characterization of glass fiber reinforced thermoplastic composite under dynamic loading
Achour, Nadia 22 December 2017 (has links)
This thesis develops a scale-transition modeling tool in the form of a virtual testing machine which, used together with structural computation codes, determines the complex anisotropic behavior of short-glass-fiber-reinforced polypropylene composites under dynamic loading. The core-skin microstructure induced by the injection molding process is investigated experimentally by micro-computed tomography (μCT). The dynamic behavior is characterized at strain rates of up to 200 s-1 using an experimental methodology based on a damping joint and on specimen optimization. The damage mechanisms are analyzed experimentally by in situ SEM testing; they identify fiber-matrix interface debonding as the dominant damage phenomenon. Based on these experimental results, the multiscale approach developed consists of an incremental Mori-Tanaka method applied to an elastoviscoplastic matrix and coated reinforcements, integrating the evolution of damage at the mesoscopic scale. The damage introduced into the coatings disturbs the load transfer between the matrix and the reinforcements. The model's dependence on strain rate, fiber orientation, and fiber content is correlated with tests, and the virtual testing machine is validated by modeling structures. The resulting predictive tool takes into account the minimum necessary to describe the microstructure while remaining reliable and relevant for modeling composites under moderate dynamic loading.
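As background on the homogenization step: the sketch below implements the textbook Mori-Tanaka estimate for the elastic special case of spherical inclusions. The thesis's incremental formulation with coated, oriented fibers and an elastoviscoplastic matrix is substantially more involved; only the flavor of the mean-field step is shown, with illustrative moduli.

```python
def mori_tanaka_spherical(Km, Gm, Ki, Gi, f):
    """Sketch: Mori-Tanaka effective bulk/shear moduli of an elastic
    two-phase composite with spherical inclusions (volume fraction f),
    the textbook special case of the mean-field scheme. The thesis
    uses an incremental Mori-Tanaka formulation with coated, oriented
    fibers; this sketch only shows the homogenization idea.
    """
    # Reference terms from the Eshelby solution for a sphere.
    k_star = 4.0 * Gm / 3.0
    g_star = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
    K = Km + f * (Ki - Km) / (1 + (1 - f) * (Ki - Km) / (Km + k_star))
    G = Gm + f * (Gi - Gm) / (1 + (1 - f) * (Gi - Gm) / (Gm + g_star))
    return K, G

# Illustrative moduli (GPa), roughly polypropylene matrix and glass:
K, G = mori_tanaka_spherical(Km=3.0, Gm=0.6, Ki=43.0, Gi=30.0, f=0.2)
E = 9 * K * G / (3 * K + G)  # effective Young's modulus
print(f"K = {K:.2f} GPa, G = {G:.2f} GPa, E = {E:.2f} GPa")
```

The Mori-Tanaka scheme embeds each inclusion in the average matrix strain field, which is why it remains tractable when extended, as in the thesis, to coated inclusions and incremental (rate-dependent) matrix behavior.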
|
320 |
Modelagem de tumores avasculares: de autômatos celulares a modelos de multiescala / Avascular tumor modelling: from cellular automata to multiscale models
Paiva, Leticia Ribeiro de 21 March 2007 (has links)
Universidade Federal de Viçosa / Despite the recent progress in cancer diagnosis and treatment, the survival rates of patients with tumors in unresectable locations, or with recurrent or metastatic tumors, are still low. In the quest for alternative treatments, oncolytic virotherapy and the encapsulation of chemotherapeutic drugs into nanoscale vehicles emerge as promising strategies. However, several fundamental processes and issues must still be understood in order to enhance the efficacy of these treatments. The nonlinearities and complexities inherent in tumor-virus and tumor-drug interactions call for a mathematical approach. Quantitative models enlarge our understanding of the parameters influencing therapeutic outcomes, guide experimental assays by indicating relevant physiological processes for further investigation, and prevent excessive experimentation. The multiscale models for virotherapy presented and discussed in this thesis suggest the traits an oncolytic virus should have, and the less aggressive ways to modulate the antiviral immune response, in order to maximize the probability of tumor eradication. Concerning the model for treatment with chemotherapeutic drugs encapsulated in nanoparticles, we focus on chimeric polymers carrying the drug doxorubicin, which are currently under active investigation. Using the parameters that characterize these particles and the experimental protocols commonly used for their administration, our results indicate some of the basic features of these nanoparticles that should be developed in order to maximize the therapy's success. / Most clinically used anti-cancer therapies have been developed empirically [1], but the response of the tumor and of the organism to these therapies is nonlinear. Mathematical models can therefore be complementary (and perhaps necessary) tools for understanding the dynamics of the response to a drug or therapy in the organism. In this master's dissertation some of these models are studied. In particular, we propose a strategy for growing isotropic aggregates of the on-lattice Eden model, a basic stochastic model for the growth of avascular tumors. The patterns generated are characterized by the interface width, computed taking either the center of the lattice or the aggregate's center of mass as the reference, and by the difference between the axial and diagonal growth probabilities. A multiscale model for virotherapy of avascular tumors was also studied, in which the nutrient and virus concentrations are described by macroscopic reaction-diffusion equations and the actions of tumor cells are governed by microscopic stochastic rules. The central goal of this part of the work is to determine the state diagram in parameter space. The ranges of the parameters involved were estimated from experimental data, and the response of tumor cells to viral injection exhibits four different behaviors, all observed experimentally. The parameter values that predominantly generate each of these behaviors are determined. / The full text could not be located; the published document is the abstract.
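For reference, the baseline Eden growth rule mentioned in the abstract can be stated in a few lines: repeatedly occupy an empty perimeter site chosen uniformly at random. The sketch below implements only this baseline; the dissertation's contribution concerns strategies to suppress the lattice anisotropy of the resulting aggregates, which this simple rule exhibits.

```python
import random

def eden_growth(n_particles=2000, seed=0):
    """Sketch of the basic on-lattice Eden model: at each step, occupy
    one empty site chosen uniformly from the aggregate's perimeter.
    The dissertation studies strategies to grow isotropic aggregates;
    this is only the baseline stochastic growth rule.
    """
    random.seed(seed)
    cluster = {(0, 0)}
    perimeter = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    while len(cluster) < n_particles:
        site = random.choice(tuple(perimeter))
        perimeter.discard(site)
        cluster.add(site)
        x, y = site
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in cluster:
                perimeter.add(nb)
    return cluster

c = eden_growth()
r2 = sum(x * x + y * y for x, y in c) / len(c)
print(f"{len(c)} sites, rms radius ~ {r2 ** 0.5:.1f} lattice units")
```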
|