About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Zobecněné odhadovací rovnice (GEE) / Generalized estimating equations

Sotáková, Martina January 2020 (has links)
In this thesis we are interested in generalized estimating equations (GEE). First, we introduce the generalized linear model, on which generalized estimating equations are based. Next we present the methods of pseudo-maximum likelihood and quasi-pseudo-maximum likelihood, from which we move on to the method of generalized estimating equations. Finally, we perform simulation studies that demonstrate the theoretical results presented in the thesis.
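The abstract does not include the simulations themselves; as a hedged illustration of the method it describes, the following Python sketch fits a GEE with a Poisson mean model (a generalized linear model, as above) to simulated clustered data using statsmodels. All variable names and parameter values are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated longitudinal data: repeated measurements within subjects.
rng = np.random.default_rng(0)
n_subj, n_obs = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "x": rng.normal(size=n_subj * n_obs),
})
subj_effect = np.repeat(rng.normal(0, 0.5, n_subj), n_obs)  # within-subject correlation
df["y"] = rng.poisson(np.exp(0.2 + 0.5 * df["x"] + subj_effect))

# GEE with a Poisson mean model and an exchangeable working correlation;
# the standard errors are robust to misspecification of that structure.
model = smf.gee("y ~ x", groups="subject", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```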
2

Essays on Estimation Methods for Factor Models and Structural Equation Models

Jin, Shaobo January 2015 (has links)
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models. New estimation methods are proposed and investigated. In Paper I an approximation of the penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It can factorize either a covariance matrix or a correlation matrix, and it is applicable to both orthogonal and oblique structures. Paper II, a simulation study, investigates the properties of approximated penalized ML for an orthogonal factor model. Different combinations of penalty terms and tuning-parameter selection methods are examined, and differences between factorizing a covariance matrix and factorizing a correlation matrix are explored. It is shown that approximated penalized ML frequently improves on the traditional estimation-rotation procedure. In Paper III we focus on pseudo ML for multi-group data. Data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of factor loadings and that it is numerically easier than multi-group ML. However, normal theory is not applicable for estimating standard errors, so a sandwich-type estimator of standard errors is derived. Paper IV examines properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in a correctly specified confirmatory factor analysis model, and accurate estimates of loadings and coefficient matrices in a correctly specified structural equation model. If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the choice of instrumental variables.
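Paper III's derivation is not reproduced in the abstract, but sandwich-type estimators for a pseudo-ML estimator $\hat{\theta}$ typically take the standard White (1982) form:

```latex
\widehat{\operatorname{Var}}(\hat{\theta})
  = \frac{1}{n}\,\hat{A}^{-1}\hat{B}\,\hat{A}^{-1},
\qquad
\hat{A} = -\frac{1}{n}\sum_{i=1}^{n}
  \frac{\partial^{2}\ell_{i}(\hat{\theta})}{\partial\theta\,\partial\theta^{\top}},
\qquad
\hat{B} = \frac{1}{n}\sum_{i=1}^{n}
  \frac{\partial\ell_{i}(\hat{\theta})}{\partial\theta}
  \left(\frac{\partial\ell_{i}(\hat{\theta})}{\partial\theta}\right)^{\!\top}
```

where $\ell_{i}$ is the pseudo-log-likelihood contribution of observation (or group) $i$. The outer matrices correct the naive normal-theory covariance $\hat{A}^{-1}/n$, which is invalid here because the pooled data are not truly normal.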
3

Structural Credit Risk Models: Estimation and Applications

Lovreta, Lidija 26 May 2010 (has links)
Credit risk is associated with the potential failure of borrowers to fulfill their payment obligations. The main interest of financial institutions is therefore to measure and manage credit risk accurately on a quantitative basis. To respond to this task, this doctoral thesis, entitled "Structural Credit Risk Models: Estimation and Applications", focuses on the practical usefulness of structural credit risk models, which establish an explicit link between credit risk and economic fundamentals and consequently allow for a broad range of applications. More specifically, the thesis explores the information on credit risk embodied in the stock market and in the market for credit derivatives (the CDS market) on the basis of structural credit risk models.

The first chapter addresses the relative informational content of the stock and CDS markets in terms of credit risk. The analysis focuses on answering two crucial questions: which of these markets provides more timely information regarding credit risk, and which factors influence the informational content of the respective credit risk indicators (i.e. stock market implied credit spreads and CDS spreads). The data set encompasses an international sample of 94 companies (40 European, 32 US, and 22 Japanese) over the period 2002-2004. The main conclusions uncover the time-varying behaviour of credit risk discovery, a stronger cross-market relationship and stock market leadership at higher levels of credit risk, and a positive relationship between the frequency of severe credit deterioration shocks and the probability of CDS market leadership.

The second chapter concentrates on the problem of estimating the latent parameters of structural models. It proposes a new, maximum likelihood based iterative algorithm which, on the basis of the log-likelihood function for the time series of equity prices, provides pseudo maximum likelihood estimates of the default barrier and of the value, volatility, and expected return on the firm's assets. The procedure allows credit risk to be estimated using only readily available stock market information, and it is tested empirically in terms of CDS spread estimation. It is demonstrated that, contrary to the standard ML approach, the proposed method ensures that the default barrier always falls within reasonable bounds. Moreover, theoretical credit spreads based on pseudo ML estimates yield the lowest credit default swap pricing errors when compared to the other options usually considered when determining the default barrier: the standard ML estimate, the endogenous value, KMV's default point, and the principal value of debt.

The final chapter provides further evidence on the performance of the proposed pseudo maximum likelihood procedure and addresses the presence of a non-default component in CDS spreads. Specifically, it analyzes the effect of demand-supply imbalance, an important aspect of liquidity in a market where the number of (protection) buyers frequently outstrips the number of sellers. The data set is extended considerably, covering 163 non-financial companies (92 European and 71 North American) over the period 2002-2008. After controlling for fundamentals reflected through theoretical, stock market implied credit spreads, demand-supply imbalance factors turn out to be important in explaining short-run CDS movements, especially during periods of credit stress. The results illustrate that CDS spreads reflect not only the price of credit protection, but also a premium for the anticipated cost of unwinding the positions of protection sellers.
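The algorithm itself is not given in the abstract. As background, the sketch below shows the classical KMV-style iteration on which such pseudo-ML procedures build: invert observed equity prices to asset values through the Merton (1974) call-option relation, re-estimate the asset volatility from the implied asset returns, and repeat. For simplicity it fixes the default barrier at the face value of debt D (the thesis's method additionally estimates the barrier), and all inputs are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def equity_value(V, D, r, sigma, T):
    """Merton (1974): equity as a call option on firm assets V with strike D."""
    d1 = (np.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)

def iterate_asset_values(E, D, r, T, dt=1/252, tol=1e-6, max_iter=100):
    """KMV-style iteration: invert equity prices to asset values at the
    current sigma, re-estimate sigma from implied asset returns, repeat."""
    sigma = np.std(np.diff(np.log(E))) / np.sqrt(dt)  # initial guess: equity vol
    for _ in range(max_iter):
        # solve E_t = equity_value(V_t) for each date; equity < assets, so
        # the root is bracketed between e and a large upper bound
        V = np.array([brentq(lambda v: equity_value(v, D, r, sigma, T) - e,
                             e, e + 10 * D) for e in E])
        log_ret = np.diff(np.log(V))
        new_sigma = np.std(log_ret) / np.sqrt(dt)
        if abs(new_sigma - sigma) < tol:
            break
        sigma = new_sigma
    mu = np.mean(log_ret) / dt + 0.5 * sigma**2  # implied asset drift
    return V, mu, sigma
```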
4

Agriculture & New New Trade Theory / Theoretical, Methodological, and Empirical Issues

Prehn, Sören 15 May 2012 (has links)
No description available.
5

The gravity model for international trade: Specification and estimation issues in the prevalence of zero flows

Krisztin, Tamás, Fischer, Manfred M. 14 August 2014 (has links) (PDF)
The gravity model for international trade is one of the most successful empirical models in the trade literature. There is a long tradition of log-linearising the multiplicative model and estimating the parameters of interest by least squares, but this practice is inappropriate for several reasons. First, bilateral trade flows are frequently zero, and disregarding countries that do not trade with each other produces biased results. Second, log-linearisation in the presence of heteroscedasticity leads, in general, to inconsistent estimates. In recent years, the Poisson gravity model along with pseudo maximum likelihood estimation methods has become popular as a way of dealing with the econometric issues that arise with origin-destination flows. But the standard Poisson specification is vulnerable to overdispersion and excess zero flows. To overcome these problems, this paper presents zero-inflated extensions of the Poisson and negative binomial specifications as viable alternatives to both the log-linear and the standard Poisson specifications of the gravity model. The performance of the alternative specifications is assessed on a real-world example in which more than half of country-level trade flows are zero. (authors' abstract) / Series: Working Papers in Regional Science
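As a hedged illustration of the estimators being compared (not the authors' exact specification or data), the following sketch fits both a Poisson pseudo-maximum-likelihood (PPML) gravity equation and a zero-inflated Poisson alternative to synthetic trade flows with statsmodels:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Toy bilateral trade data: log GDPs of origin/destination and log distance.
rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(np.column_stack([
    rng.normal(size=n),   # log GDP of origin
    rng.normal(size=n),   # log GDP of destination
    rng.normal(size=n),   # log distance
]))
mu = np.exp(X @ np.array([1.0, 0.8, 0.9, -1.1]))
trade = rng.poisson(mu) * rng.binomial(1, 0.6, size=n)  # excess zeros

# PPML: Poisson GLM with robust (heteroscedasticity-consistent) errors.
ppml = sm.GLM(trade, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# Zero-inflated Poisson as an alternative when zeros are excessive.
zip_fit = ZeroInflatedPoisson(trade, X, exog_infl=X, inflation="logit").fit(disp=0)
print(ppml.params, zip_fit.params, sep="\n")
```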
6

Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model

Oberhofer, Harald, Pfaffermayr, Michael 01 1900 (has links) (PDF)
This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and allows one to obtain estimates and confidence intervals which are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's (EU's) exports of goods to the EU (UK) are likely to decline within a range between 7.2% and 45.7% (5.9% and 38.2%) six years after Brexit has taken place. For the UK, the negative trade effects are only partially offset by an increase in domestic goods trade and trade with third countries, inducing a decline in the UK's real income of between 1.4% and 5.7% under the hard Brexit scenario. The estimated welfare effects for the EU are negligible in magnitude and not statistically different from zero. / Series: Department of Economics Working Paper Series
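The abstract does not restate the theory; the constraints in question come from the standard Anderson-van Wincoop structural gravity system, which a constrained estimator enforces during estimation:

```latex
X_{ij} = \frac{Y_i E_j}{Y}\left(\frac{t_{ij}}{\Pi_i P_j}\right)^{1-\sigma},
\qquad
\Pi_i^{\,1-\sigma} = \sum_j \left(\frac{t_{ij}}{P_j}\right)^{1-\sigma}\frac{E_j}{Y},
\qquad
P_j^{\,1-\sigma} = \sum_i \left(\frac{t_{ij}}{\Pi_i}\right)^{1-\sigma}\frac{Y_i}{Y}
```

Here $X_{ij}$ denotes exports from $i$ to $j$, $Y_i$ and $E_j$ are output and expenditure, $Y$ is world output, $t_{ij}$ are bilateral trade costs, $\sigma$ is the elasticity of substitution, and $\Pi_i$, $P_j$ are the outward and inward multilateral resistance terms. Imposing these adding-up conditions is what makes the resulting estimates and confidence intervals consistent with structural trade theory.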
7

An Application of the Gravity Model to International Trade in Narcotics

Marchildon, Miguel January 2018 (has links)
The transnational traffic of narcotics has had undeniable impacts on international development, for instance, stagnant economic growth in Myanmar (Chin, 2009), unsustainable agricultural practices in Yemen (Robins, 2016), and human security threats in Colombia (Thoumi, 2013). Furthermore, globalization is a catalyst for the transnational narcotics traffic (Robins, 2016; Aas, 2007; Kelly, Maghan & Serio, 2005). Several qualitative studies exist on the transnational narcotics traffic, yet few quantitative studies examine the issue. There is thus an opportunity for novel quantitative studies on the general question: "what are the main economic factors that influence the transnational traffic of narcotics between countries?" This study looked at the specific question: "are distance and economic size correlated with the volume of narcotics traffic between countries?" The gravity model was chosen because it centres on bilateral trade (Tinbergen, 1962), accounts for trade barriers (Kalirajan, 2008), and is empirically robust (Anderson, 2011). The study defined a basic functional gravity model relating a proxy of the narcotics traffic to distance and economic size. Four augmented gravity models were also advanced to address omitted-variable bias. The research was limited conceptually to cross-sectional and pooled time-series data. In addition, the data were limited practically to a convenience sample of secondary data drawn from the United Nations Office on Drugs and Crime's (UNODC) (2016a) Individual Drug Seizures (IDS), the World Bank's (2016) World Development Indicators, and the CEPII's GeoDist (2016) datasets. The study used a novel "dosage" approach to unit standardization to overcome the challenge posed by the many measures and forms of narcotics. It used the Poisson pseudo maximum likelihood (PPML) estimator because its estimates of the gravity model are consistent (Gourieroux et al., 1984), allow for heteroscedasticity (Silva & Tenreyro, 2006), and avoid back-transformation bias (Cox et al., 2008). The evidence analyzed in this study seems to indicate that the gravity model may not be applicable in its current form to the transnational narcotics traffic among countries that report drug seizures to the UNODC. However, the sampling method and the choice of proxy are likely to influence these findings. Moreover, the low explanatory power of the gravity model for the narcotics traffic, reflected in the values of the pseudo-R-squared coefficient of determination, indicates that other factors are at play. For instance, authors such as Asad and Harris (2003) and Thoumi (2003) argue that institutions could be a key factor in the narcotics traffic. Future empirical research on this topic could build on the thesis's findings to introduce new proxies and to explore alternative theoretical frameworks.
8

Commerce international et économie de la science : distances, agglomération, effets de pairs et discrimination / International trade and economics of science : distances, agglomeration, peer effects and discrimination

Bosquet, Clément 03 October 2012 (has links)
The core of this thesis lies in the field of the economics of science, to which the first two parts are devoted. The first part questions the impact of methodological choices on the measurement of research productivity and studies the channels of knowledge diffusion. The second part studies the impact of both individual and departmental characteristics on individual publication records and analyses the gender gap in occupations on the academic labour market. The main results are the following. Methodological choices in the measurement of research productivity hardly affect the estimated rankings of research institutions; citations and journal quality weights measure the same dimension of publication productivity. Location matters in academic research: some departments generate more externalities than others. Externalities are higher where academics are homogeneous in terms of publication performance and have diverse research fields and, to a lesser extent, where the department is large, with more women, older academics, stars, and co-author connections to foreign departments. If women are less likely than men to be full Professors (as opposed to Assistant Professors), it is neither because they are discriminated against in the promotion process, nor because the promotion cost (mobility) is higher for them, nor because they have different preferences for salaries versus department prestige. A possible, but untested, explanation is that women self-select by participating less in, or exerting lower effort during, the promotion process.
9

Modelos de regressão beta com erro nas variáveis / Beta regression model with measurement error

Carrasco, Jalmar Manuel Farfan 25 May 2012 (has links)
In this thesis, we propose a beta regression model with measurement error. Among nonlinear models with measurement error, such a model has not been studied extensively. We discuss estimation methods such as maximum likelihood, pseudo-maximum likelihood, and regression calibration. The maximum likelihood method estimates parameters by directly maximizing the logarithm of the likelihood function. The pseudo-maximum likelihood method is used when inference in a given model involves only some, but not all, parameters; in this sense, the model under study presents nuisance parameters as well as parameters of interest. When we replace the true covariate (an unobserved variable) with an estimate of its conditional expectation given the observed variable, the method is known as regression calibration. We compare these estimation methods through a Monte Carlo simulation study, which shows that the maximum likelihood and pseudo-maximum likelihood methods perform better than the regression calibration method and the naive approach. We use the programming language Ox (Doornik, 2011) as a computational tool. We derive the asymptotic distribution of the estimators in order to compute confidence intervals and test hypotheses, as proposed by Carroll et al. (2006, Section A.6.6), Guolo (2011), and Gong and Samaniego (1981). Moreover, we use the likelihood ratio and gradient statistics to test hypotheses, and we evaluate the performance of these tests in a simulation study. We also develop diagnostic techniques for the beta regression model with measurement error. We propose weighted standardized residuals, as defined by Espinheira (2008), to verify the model assumptions and to detect outliers. Measures of global influence, such as the generalized Cook's distance and the likelihood distance, are used to detect influential points. In addition, we use the conformal local influence approach under three perturbation schemes: case-weight perturbation, response variable perturbation, and perturbation of the covariate with and without measurement error. We apply our results to two real data sets to illustrate the theory developed. Finally, we present our conclusions and possible future work.
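As a hedged sketch of one of the estimation methods discussed, regression calibration, the following simulation replaces a mismeasured covariate with an estimate of its conditional expectation before fitting the outcome model. A logit-link GLM stands in for full beta-regression ML here, and all parameter values are invented for the example:

```python
import numpy as np
import statsmodels.api as sm

# Beta-distributed response with a logit mean; the covariate is observed
# with additive measurement error: w = x + u.
rng = np.random.default_rng(1)
n = 1000
x = rng.normal(0, 1, n)           # true, unobserved covariate
w = x + rng.normal(0, 0.5, n)     # observed, error-prone version
mu = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))   # logit mean
phi = 20.0                                 # precision parameter
y = rng.beta(mu * phi, (1 - mu) * phi)

# Regression calibration: E[x | w] = lam * w with reliability ratio
# lam = var(x) / var(w); known here because the data are simulated,
# in practice estimated from replicate measurements or validation data.
lam = 1.0 / (1.0 + 0.25)
x_hat = lam * w

# Naive fit (uses w directly) versus calibrated fit.
naive = sm.GLM(y, sm.add_constant(w), family=sm.families.Binomial()).fit()
calib = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
print("naive slope:", naive.params[1], "calibrated slope:", calib.params[1])
```

The naive slope is attenuated towards zero by the measurement error; calibration largely removes that bias, which is the behaviour the thesis compares against its likelihood-based methods.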
10

Modélisation de l’hétérogénéité tumorale par processus de branchement : cas du glioblastome / Modeling of tumor heterogeneity by branching process : case of glioblastoma

Obara, Tiphaine 07 October 2016 (has links)
The latest advances in cancer research are paving the way to better treatments. However, some tumors, such as glioblastomas, remain among the most aggressive and difficult to treat. The cause of this treatment resistance could be a sub-population of cells with characteristics common to stem cells, known as cancer stem cells. Many mathematical and numerical models of tumor growth already exist, but few take intra-tumoral heterogeneity into account, which is now a real challenge. This thesis focuses on the dynamics of the different cell subpopulations of a glioblastoma. It involves the development of a mathematical model of tumor growth based on a multitype, age-dependent Bellman-Harris branching process. This model makes it possible to integrate cellular heterogeneity. Numerical simulations reproduce the evolution of the different cell types and allow the action of several therapeutic strategies on tumor development to be tested. A method for estimating the parameters of the numerical model, based on the pseudo-maximum likelihood, has been adapted; this approach is an alternative to maximum likelihood when the sample distribution is unknown. Finally, we present the biological experiments that were set up in order to validate the numerical model.
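The thesis's model is not reproduced in the abstract; the sketch below simulates a generic two-type, age-dependent (Bellman-Harris) branching process with gamma-distributed lifetimes, in the spirit described above. The division probabilities, lifetimes, and stem/differentiated typing are illustrative assumptions, not the thesis's calibrated values.

```python
import heapq
import random

# Event-driven simulation: type 0 = cancer stem cell, type 1 = differentiated
# tumor cell. Each cell lives a gamma-distributed lifetime (age dependence),
# then divides or dies according to type-specific probabilities.
random.seed(42)

LIFETIME_SHAPE = {0: 2.0, 1: 2.0}   # gamma shape per type
LIFETIME_SCALE = {0: 12.0, 1: 8.0}  # gamma scale (hours) per type

def offspring(cell_type):
    """Division outcomes at the end of a cell's lifetime (assumed values)."""
    if cell_type == 0:
        u = random.random()
        if u < 0.3:   return [0, 0]   # symmetric self-renewal
        elif u < 0.9: return [0, 1]   # asymmetric division
        else:         return [1, 1]   # symmetric differentiation
    return [1, 1] if random.random() < 0.45 else []   # divide or die

def simulate(t_max=200.0):
    events, counts = [], {0: 1, 1: 0}   # start from a single stem cell
    heapq.heappush(events, (random.gammavariate(LIFETIME_SHAPE[0],
                                                LIFETIME_SCALE[0]), 0))
    while events and events[0][0] < t_max:
        t, ctype = heapq.heappop(events)   # this cell's lifetime ends at t
        counts[ctype] -= 1
        for child in offspring(ctype):
            counts[child] += 1
            life = random.gammavariate(LIFETIME_SHAPE[child],
                                       LIFETIME_SCALE[child])
            heapq.heappush(events, (t + life, child))
    return counts

print(simulate())   # population counts per type at t_max
```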
