621

Essays in real-time forecasting

Liebermann, Joëlle, 12 September 2012
This thesis contains three essays in the field of real-time econometrics, and more particularly forecasting.

The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, using the latest available data instead of real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying publication lags, datasets are characterized by the so-called "ragged-edge" structure; in order not to disregard timely information, special econometric frameworks, such as the one developed by Giannone, Reichlin and Small (2008), must be used.

The first chapter, "The impact of macroeconomic news on bond yields: (in)stabilities over time and relative importance", studies the reaction of U.S. Treasury bond yields to real-time, market-based news in the daily flow of macroeconomic releases, which provide most of the relevant information on their fundamentals, i.e. the state of the economy and inflation. We find that yields react systematically to a set of news consisting of the soft data, which have very short publication lags, and the most timely hard data, with the employment report being the most important release. However, sub-sample evidence reveals parameter instability in the absolute and relative size of the yields' response to news, as well as in its significance. In particular, the often cited dominance of the employment report for markets has been evolving over time, as the size of the yields' reaction to it was steadily increasing. Moreover, over the recent crisis period there has been an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely, and the scope of hard data to which markets react has increased and is more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (a bad state), markets are starved for information and attach a higher value to the marginal information content of these news releases.

The second and third chapters focus on the real-time ability of models to nowcast and forecast in a data-rich environment. They use an econometric framework that can deal with large panels with a "ragged-edge" structure, and to evaluate the models in real time we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.

The second chapter, "Real-time nowcasting of GDP: a factor model versus professional forecasters", performs a fully real-time nowcasting (forecasting) exercise of US real GDP growth using the dynamic factor model (DFM) framework of Giannone, Reichlin and Small (2008), henceforth GRS, which can handle large unbalanced datasets as available in real time. We track the daily evolution of the model's nowcasting performance throughout the current and next quarter. Similarly to GRS's pseudo real-time results, we find that the precision of the nowcasts increases with information releases. Moreover, the Survey of Professional Forecasters does not carry additional information with respect to the model, suggesting that the often cited superiority of the former, attributable to judgment, is weak over our sample. As one moves forward along the real-time data flow, the continuous updating of the model provides a more precise estimate of current-quarter GDP growth and the Survey of Professional Forecasters becomes stale. These results are robust to the recent recession period.

The last chapter, "Real-time forecasting in a data-rich environment", evaluates the ability of different models to forecast key real and nominal U.S. monthly macroeconomic variables in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use pooling of bivariate forecasts, which is an indirect way to exploit a large cross-section, and direct pooling of information using a high-dimensional model (DFM and Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the choice of model specification faced by the practitioner (e.g. which criteria to use to select the parametrization of the model), as we seek evidence regarding the performance of a model that is robust across specifications and combination schemes. Our findings show that predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period, namely that gains in relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes or models used. A point worth mentioning is that for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during the crisis/recession period than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found at nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. They also show that direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the "ragged-edge" structure of the dataset, yields more accurate forecasts than the indirect pooling of bivariate forecasts/models. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
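The GRS-style framework referred to in this abstract fills the ragged edge of a large monthly panel and projects GDP growth on estimated common factors. The sketch below is a minimal, hypothetical illustration of that idea (an EM/PCA iteration plus an OLS bridge to quarterly GDP); it is not the code or the exact estimator used in the thesis, and all function names, dimensions and choices are assumptions made for illustration.

```python
# Minimal sketch of ragged-edge factor nowcasting (illustrative assumptions only).
import numpy as np

def em_pca_factors(X, n_factors=2, n_iter=50):
    """Estimate common factors from a panel X (T x N) with NaNs at the ragged edge."""
    X = np.asarray(X, dtype=float)
    mean, std = np.nanmean(X, axis=0), np.nanstd(X, axis=0)
    Z = (X - mean) / std                      # standardize each series
    miss = np.isnan(Z)
    Z_fill = np.where(miss, 0.0, Z)           # initialize missing values at the mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z_fill, full_matrices=False)
        F = U[:, :n_factors] * s[:n_factors]  # estimated factors (T x r)
        L = Vt[:n_factors]                    # loadings (r x N)
        common = F @ L
        Z_fill = np.where(miss, common, Z)    # E-step: replace missing by common component
    return F, L

def nowcast_gdp(F, gdp, quarter_end_rows):
    """Regress observed quarterly GDP growth on quarter-end factors (OLS bridge)."""
    Fq = F[quarter_end_rows]
    obs = ~np.isnan(gdp)
    beta, *_ = np.linalg.lstsq(np.c_[np.ones(obs.sum()), Fq[obs]], gdp[obs], rcond=None)
    return np.c_[np.ones(len(Fq)), Fq] @ beta  # fitted values, incl. the current quarter
```

As each new release arrives, the panel's ragged edge changes, the factors are re-estimated, and the nowcast for the current quarter is updated, which mimics the daily tracking exercise described in the abstract.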
622

Essays to the application of behavioral economic concepts to the analysis of health behavior

Panidi, Ksenia, 27 June 2012
In this thesis I apply the concepts of Behavioral Economics to the analysis of individual health care behavior. In the first chapter I provide a theoretical explanation of the link between loss aversion and health anxiety leading to infrequent preventive testing. In the second chapter I analyze this link empirically based on a general-population questionnaire study. In the third chapter I theoretically explore the effects of motivational crowding-in and crowding-out induced by external rewards or self-rewards for tasks involving self-control, such as weight loss or smoking cessation.

Understanding the psychological factors behind the reluctance to use preventive testing is a significant step towards a more efficient health care policy. Some people visit doctors very rarely because of a fear of receiving negative results of a medical inspection; others prefer to resort to medical services in order to prevent any diseases. Recent research in the field of Behavioral Economics suggests that people's preferences may be significantly influenced by the choice of a reference point. In the first chapter I study the link between loss aversion and the frequently observed tendency to avoid useful but negative information (the ostrich effect) in the context of preventive health care choices. I consider a model with reference-dependent utility that characterizes how people choose their health care strategy, namely the frequency of preventive checkups. In this model an individual lives for two periods and faces a trade-off. She makes a choice between delaying testing until the second period, with the risk of a more costly treatment in the future, or learning a possibly unpleasant diagnosis today, which implies an emotional loss but prevents an illness from further development. The model shows that high loss aversion decreases the frequency of preventive testing due to the fear of a bad diagnosis. Moreover, I show that under certain conditions an increasing risk of illness discourages testing.

In the second chapter I provide empirical support for the model predictions. I use a questionnaire study of a representative sample of the Dutch population to measure variables such as loss aversion, testing frequency and subjective risk. I consider the undiagnosed non-symptomatic population and concentrate on medical tests for four illnesses: hypertension, diabetes, chronic lung disease and cancer. To measure loss aversion I employ a sequence of lottery questions formulated in terms of gains and losses of life years with respect to the current subjective life expectancy. To relate this measure of loss aversion to testing frequency I use a two-part modeling approach, which distinguishes between the likelihood of participation in testing and the frequency of tests for those who decided to participate. The main findings confirm that loss aversion, as measured by lottery choices in terms of life expectancy, is significantly and negatively associated with the decision to participate in preventive testing for hypertension, diabetes and lung disease. Higher loss aversion also leads to a lower frequency of self-tests for cancer among women. The effect is more pronounced in magnitude for people with a higher subjective risk of illness.

In the third chapter I explore the phenomena of crowding-out and crowding-in of the motivation to exercise self-control. Various health care choices, such as keeping a diet, reducing sugar consumption (e.g. in case of diabetes) or abstaining from smoking, require costly self-control efforts. I study the long-run and short-run influence of external rewards and self-rewards offered to stimulate self-control. In particular, I develop a theoretical model based on combining the dual-self approach to the analysis of the time-inconsistency problem with the principal-agent framework. I show that the psychological property of disappointment aversion (represented as loss aversion with respect to the expected outcome) helps to explain the differences in the effects of rewards when a person does not perfectly know her self-control costs. The model is based on two main assumptions. First, a person learns her abstention costs only if she exerts effort. Second, observing high abstention costs brings disutility due to disappointment (loss) aversion. The model shows that in the absence of an external reward an individual will exercise self-control only when her confidence in successful abstention is high enough. However, observing high abstention costs will discourage the individual from exerting effort in the second period, i.e. will lead to the crowding-out of motivation. On the contrary, choosing zero effort in period 1 does not reveal the self-control costs; hence this preserves the person's self-confidence, helping her to abstain in the second period. Such crowding-in of motivation is observed for intermediate levels of self-confidence. I compare this situation to the case when an external reward is offered in the first period. The model shows that, given sufficiently low self-confidence, an external reward may lead to abstention in both periods, whereas without it the person would not abstain in any period. However, for intermediate self-confidence, an external reward may lead to the crowding-out of motivation, while for the same level of self-confidence the absence of such a reward may cause crowding-in. Overall, the model generates testable predictions and helps to explain contradictory empirical findings on the motivational effects of different types of rewards. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
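As background for the loss-aversion mechanism invoked in the first chapter, one standard way to formalize reference-dependent utility is the piecewise-linear gain-loss specification below. The exact functional form used in the thesis may differ; this is only an illustrative sketch with assumed notation.

```latex
u(c \mid r) \;=\; m(c) \;+\; \mu\bigl(m(c) - m(r)\bigr),
\qquad
\mu(x) \;=\;
\begin{cases}
x, & x \ge 0,\\[2pt]
\lambda x, & x < 0,
\end{cases}
\qquad \lambda > 1,
```

where m(·) is consumption (or health) utility, r the reference point and λ the loss-aversion coefficient. A strongly loss-averse individual (large λ) weighs the emotional loss of a bad diagnosis more heavily, which in a two-period testing model lowers the expected value of testing today relative to delaying, consistent with the ostrich effect described above.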
623

Experimental study and modeling of single- and two-phase flow in singular geometries and safety relief valves

Kourakos, Vasilios, 28 October 2011
This research project was carried out at the von Karman Institute for Fluid Dynamics (VKI), in Belgium, in collaboration with and with the funding of the Centre Technique des Industries Mécaniques (CETIM) in France.

The flow of a mixture of two fluids in pipes is frequently encountered in nuclear, chemical or mechanical engineering, where gas-liquid reactors, boilers, condensers, evaporators and combustion systems are used. The presence of section changes, or more generally geometrical singularities, in pipes may significantly affect the behavior of two-phase flow and consequently the resulting pressure drop and mass flow rate. It is therefore an important subject of investigation, in particular when the application concerns industrial safety valves.

This thesis is intended to provide a thorough study of two-phase (air-water) flow phenomena under various circumstances. The project is split into the following steps. At first, experiments are carried out in simple geometries such as smooth and sudden divergence and convergence singularities. Two experimental facilities were built: a smaller-scale one at the von Karman Institute and a larger-scale one at CETIM. During the first part of the study, relatively simple geometrical discontinuities are investigated. The characterization and modeling of contraction and expansion nozzles (sudden and smooth changes of section) is carried out. The pressure evolution is measured and pressure drop correlations are deduced. Flow visualization is also performed with a high-speed camera; the different flow patterns are identified and flow regime maps are established for a specific configuration. A dual optical probe is used to determine the void fraction, bubble size and velocity upstream and downstream of the singularities.

In the second part of the project, a more complex device, a Safety Relief Valve (SRV), mainly used in the nuclear and chemical industries, is thoroughly studied. A transparent model of a specific type of safety valve (1 1/2" G 3") is built and investigated in terms of pressure evolution. Additionally, flow rate measurements for several volumetric qualities and valve openings are carried out for air, water and two-phase mixtures. Full optical access allowed identification of the structure of the flow. The results are compared with measurements performed on the original industrial valve. A flow-force analysis is performed, revealing that compressible and incompressible flow forces in the SRV are inverted above a certain value of valve lift. This value varies with the critical pressure ratio and is therefore directly linked to the position at which choked flow occurs during air operation of the valve. In two-phase flow, for a volumetric quality of air of 20%, purely compressible flow behavior, in terms of flow force, is observed at full lift. Numerical simulations with a commercial CFD code are carried out for air and water in an axisymmetric 2D model of the valve in order to verify the experimental findings.

Modeling the discharge through a throttling device in two-phase flow is an important industrial problem. The proper design and sizing of this apparatus is a crucial issue, which should prevent malfunction or accidental operation failure that could cause a hazardous situation. So far, the reliability of existing models predicting the pressure drop and flow discharge in two-phase flow through the valve for various flow conditions is questionable. Nowadays, a common practice is widely adopted (standards ISO 4126-10 (2010), API RP 520 (2000)): the Homogeneous Equilibrium Method with the so-called ω-method, although it still needs further validation. Additionally, based on the ω-methodology, a Homogeneous Non-Equilibrium model has been proposed by Diener and Schmidt (2004) (HNE-DS), introducing a boiling delay coefficient. The accuracy of the aforementioned models is checked against experimental data both for the transparent model and for the industrial SRV. The HNE-DS methodology proves to be the most precise among them. Finally, after application of the HNE-DS method to air-water flow with cavitation, it is concluded that the behavior of flashing liquid is simulated in such a case. Hence, for the specific tested conditions, this type of flow can be modeled with the modified method of Diener and Schmidt (CF-HNE-DS), although further validation of this observation is required. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
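For readers unfamiliar with the homogeneous treatment of singularities mentioned in this abstract, the sketch below computes the pressure recovery across a sudden expansion using the textbook homogeneous (Borda-Carnot type) model. It is a generic illustration with assumed fluid properties and dimensions, not the correlations derived in the thesis and not the ω/HNE-DS valve-sizing method.

```python
# Homogeneous-model sketch (illustrative assumptions): pressure recovery across a sudden
# expansion, dp = G1^2 * sigma * (1 - sigma) / rho_h, with sigma = A1/A2 and rho_h the
# homogeneous two-phase density of the mixture.
import math

def homogeneous_density(x, rho_gas, rho_liq):
    """Homogeneous density of a gas-liquid mixture with mass quality x."""
    return 1.0 / (x / rho_gas + (1.0 - x) / rho_liq)

def sudden_expansion_dp(m_dot, a1, a2, x, rho_gas=1.2, rho_liq=998.0):
    """Pressure rise (Pa) across a sudden expansion, homogeneous flow model."""
    sigma = a1 / a2                      # area ratio (< 1 for an expansion)
    g1 = m_dot / a1                      # upstream mass flux (kg/m^2/s)
    rho_h = homogeneous_density(x, rho_gas, rho_liq)
    return g1 ** 2 * sigma * (1.0 - sigma) / rho_h

if __name__ == "__main__":
    # 0.5 kg/s of an air-water mixture (2% mass quality) from a 40 mm to an 80 mm pipe
    a1 = math.pi * 0.040 ** 2 / 4
    a2 = math.pi * 0.080 ** 2 / 4
    print(f"dp = {sudden_expansion_dp(0.5, a1, a2, x=0.02):.1f} Pa")
```

Comparing such a simple homogeneous estimate with measured pressure profiles is one natural baseline against which singularity-specific pressure-drop correlations, like those deduced in the first part of the study, can be judged.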
624

Structural models for macroeconomics and forecasting

De Antonio Liedo, David, 3 May 2010
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled "A Model for Real-Time Data Assessment with an Application to GDP Growth Rates", provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008); that is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.

The second and third chapters acknowledge the possibility that macroeconomic data are measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process that we describe in the first chapter.

Chapter 2, entitled "Revisiting the Success of the RBC model", proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations of the model economy.

The last chapter, "What are Shocks Capturing in DSGE modeling", contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which models the statistical agency explicitly, along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
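The noise-versus-news distinction discussed in Chapter 1 has a simple testable signature: under pure news the revision is uncorrelated with the preliminary release, while under pure noise it is uncorrelated with the final value. The sketch below simulates both polar cases and checks these correlations; it is an illustration of the classical regression logic of Mankiw, Runkle and Shapiro, not the encompassing model proposed in the thesis.

```python
# Illustrative simulation of the two polar views of data revisions (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
T = 5_000

# News view: the preliminary release is an efficient forecast; the final value adds news.
prelim_news = rng.normal(2.0, 1.0, T)
news = rng.normal(0.0, 0.5, T)             # orthogonal to the preliminary estimate
final_news = prelim_news + news

# Noise view: the preliminary release is the final (true) value plus measurement noise.
final_noise = rng.normal(2.0, 1.0, T)
noise = rng.normal(0.0, 0.5, T)            # orthogonal to the final value
prelim_noise = final_noise + noise

for label, prelim, final in [("news", prelim_news, final_news),
                             ("noise", prelim_noise, final_noise)]:
    revision = final - prelim
    print(f"{label:>5}: corr(rev, prelim) = {np.corrcoef(revision, prelim)[0, 1]:+.2f}, "
          f"corr(rev, final) = {np.corrcoef(revision, final)[0, 1]:+.2f}")
```

Because real revision processes typically mix both components, a framework that allows noise and news to coexist, as the chapter proposes, avoids the possibility of rejecting or accepting both polar hypotheses at once.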
625

Three essays on spectral analysis and dynamic factors

Liska, Roman, 10 September 2008
The main objective of this work is to propose new procedures for the general dynamic factor analysis introduced by Forni et al. (2000). First, we develop an identification method for determining the number of common shocks in the general dynamic factor model. Sufficient conditions for consistency of the criterion are provided for large n (the number of series) and T (the series length). We believe that our procedure can shed light on the ongoing debate on the number of factors driving the US or Eurozone economy. Second, we show how the dynamic factor analysis method proposed by Forni et al. (2000), combined with our identification method, allows for identifying and estimating joint and block-specific common factors. This leads to a more sophisticated analysis of the structure of dynamic interrelations within and between the blocks in such datasets. Beyond the framework of the general dynamic factor model, we also propose a consistent lag-window spectral density estimator based on the multivariate M-estimators of Maronna (1976) when the underlying data come from an alpha-mixing stationary Gaussian process. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
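Determining the number of common shocks in a general dynamic factor model rests on the eigenvalues of spectral density matrices estimated from the panel. The sketch below shows a plain Bartlett lag-window estimate of the spectral density matrix and its eigenvalues at one frequency; it illustrates the general idea only and is neither the specific consistency criterion nor the robust M-estimator developed in the thesis.

```python
# Minimal lag-window spectral density sketch (illustrative, non-robust, assumed settings).
import numpy as np

def lag_window_spectrum(X, freq, M=None):
    """Bartlett lag-window estimate of the spectral density matrix of X (T x n) at freq."""
    X = np.asarray(X, float)
    T, n = X.shape
    Xc = X - X.mean(axis=0)
    M = M or int(np.sqrt(T))                             # truncation lag
    S = np.zeros((n, n), dtype=complex)
    for k in range(-M, M + 1):
        w = 1.0 - abs(k) / (M + 1)                       # Bartlett weights
        if k >= 0:
            gamma = Xc[k:].T @ Xc[:T - k] / T            # sample autocovariance at lag k
        else:
            gamma = (Xc[-k:].T @ Xc[:T + k] / T).T       # Gamma(-k) = Gamma(k)'
        S += w * gamma * np.exp(-1j * k * freq)
    return S / (2 * np.pi)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, n = 500, 10
    common = rng.normal(size=(T, 2))                     # two common shocks
    X = common @ rng.normal(size=(2, n)) + 0.5 * rng.normal(size=(T, n))
    eig = np.linalg.eigvalsh(lag_window_spectrum(X, freq=0.0))
    print(np.round(eig[::-1], 2))                        # two dominant dynamic eigenvalues
```

In a scree of such dynamic eigenvalues across frequencies, the divergence of a few leading eigenvalues relative to the rest is what an identification criterion for the number of common shocks exploits.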
626

On the toll setting problem

Dewez, Sophie, 8 June 2004
In this thesis we study the problem of road taxation. This problem consists in finding the tolls on the roads belonging to the government or to a private company in order to maximize revenue. An optimal taxation policy consists in setting tolls low enough to favor the use of toll arcs, yet high enough to generate substantial revenues. Since there are two levels of decision, the problem is formulated as a bilevel bilinear program. / Doctorat en sciences, Orientation recherche opérationnelle / info:eu-repo/semantics/nonPublished
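A standard way to write the toll-setting problem described above as a bilevel bilinear program is sketched below; the notation is illustrative and not necessarily identical to the formulation used in the thesis.

```latex
\max_{T,\,x}\;\; \sum_{a \in A_1} T_a\, x_a
\quad \text{s.t.} \quad
x \in \arg\min_{y \in \mathcal{F}} \;\Bigl\{ \sum_{a \in A_1} (c_a + T_a)\, y_a \;+\; \sum_{a \in A_2} c_a\, y_a \Bigr\},
```

where A_1 is the set of toll arcs, A_2 the set of toll-free arcs, c_a the fixed travel costs, T_a the tolls set by the leader, and F the set of flows satisfying the users' origin-destination demands. The leader (government or private company) maximizes revenue while the followers (road users) route themselves at minimum perceived cost; the product of the leader's tolls with the followers' flows in the objective is what makes the program bilinear.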
627

Cortical based mathematical models of geometric optical illusions / Modèles mathématiques basé sur l'architecture fonctionnelle de la cortex pour les illusions d'optique géométrique

Franceschiello, Benedetta, 28 September 2017
This thesis presents mathematical models for visual perception and deals with phenomena in which there is a visible gap between what is represented and what we perceive. A phenomenon which has drawn particular interest is amodal completion, which consists in perceiving the completion of a partially occluded object, in contrast with modal completion, where we perceive an object even though its boundaries are not present in the image [Gestalt theory, 99]. Such boundaries, reconstructed by our visual system, are called illusory contours, and their neural processing is performed by the primary visual cortices (V1/V2) [93]. Geometric models of the functional architecture of the primary visual areas date back to Hoffman [86]. In [139] Petitot proposed a model of single-boundary completion through constraint minimization, the neural counterpart of the model of Mumford [125]. In this setting, Citti and Sarti introduced a cortical-based model [28], which justifies the illusions at a neural level and provides a neurogeometrical model for V1. Another class of phenomena are geometric optical illusions (GOIs), discovered in the XIX century [83, 190], which arise in the presence of a mismatch of geometrical properties between an item in object space and its associated percept. The fundamental idea developed here is that these phenomena arise due to a polarization of the connectivity of V1/V2, responsible for the misperception. Starting from [28], in which the connectivity building contours in V1 is modeled as a sub-Riemannian metric, we extend it by claiming that in GOIs the cortical response to the stimulus modulates the connectivity of the cortex, becoming a coefficient for the metric. GOIs are tested through this model.
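To make the neurogeometrical setting referred to above concrete, the Citti-Sarti model lifts the image plane to the space of positions and orientations R² × S¹, where contours are integral curves of the vector fields below. The way the stimulus-modulated coefficient enters the metric is shown only schematically, as an illustration of the idea rather than the exact equations of the thesis.

```latex
X_1 = \cos\theta\,\partial_x + \sin\theta\,\partial_y ,
\qquad
X_2 = \partial_\theta ,
```

with a sub-Riemannian metric defined on the plane spanned by X_1 and X_2. In the extension sketched in the abstract, the cortical response h(x, y, θ) to the stimulus rescales this metric (schematically, lengths measured along X_1 and X_2 are weighted by a function of h), so that geodesics, which model perceived contours, are bent by the polarized connectivity and reproduce the misperceived geometry.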
628

Multiobjective Optimization and Multicriteria Decision Aid Applied to the Evaluation of Road Projects at the Design Stage

Sarrazin, Renaud, 16 December 2015
Constructing a road is a complex process that may be represented as a series of correlated steps, from planning to the construction and use of the new road. At the heart of this process, the preliminary and detailed design stages are key elements that ensure the quality and the adequacy of the final solution with regard to the constraints and objectives of the project. In particular, the infrastructure layout and design will have a strong impact on the global performance of the road in operational conditions: road safety, mobility, environmental preservation, noise pollution limitation, economic feasibility and viability of the project, and even its socio-economic impact at the local level. Consequently, it is crucial to offer engineers and road planners tools and methods that may assist them in designing and selecting the most efficient solutions considering the distinctive features of each design problem. In this work, a multicriteria analysis methodology is developed to carry out an integrated and preventive assessment of road projects at the design stage by considering both their safety performance and some economic and environmental aspects. Its purpose is to support design engineers in the analysis of their projects and the identification of innovative, consistent and effective solutions. The proposed methodology is composed of two main research frameworks. On the one hand, the road design problem is addressed by focusing successively on the structuring of the multicriteria problem, the identification of the approximate set of non-dominated solutions using a genetic algorithm (based on NSGA-II), and the application of the methodology to a real road design project. On the other hand, the methodological development of a multicriteria interval clustering model (based on PROMETHEE) was performed. Given the applicability of this model to the studied problem, the interactions between the two frameworks are also analysed. / Doctorat en Sciences de l'ingénieur et technologie / The present PhD thesis is an aggregation of published contributions related to the application of multicriteria analysis to the evaluation of road projects at the design stage. The aim of the two introductory chapters is to offer a synthesised and critical presentation of the scientific contributions that constitute the PhD thesis. The complete versions of the journal articles and preprints are found in Chapters 3 to 6. In the appendices, we also provide reprints of conference papers that are usually related to one of the main contributions of the thesis. / info:eu-repo/semantics/nonPublished
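The search for the approximate set of non-dominated solutions mentioned above relies on Pareto dominance, the core comparison inside NSGA-II. The sketch below is a generic, minimal illustration of that comparison with all objectives to be minimized; it is neither the thesis implementation nor the full NSGA-II algorithm, and the example objective values are invented.

```python
# Generic Pareto-dominance utilities (minimization); illustrative only.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a dominates b: no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front: List[Sequence[float]]) -> List[Sequence[float]]:
    """Naive O(n^2) filter keeping only non-dominated objective vectors."""
    return [p for p in front if not any(dominates(q, p) for q in front if q is not p)]

if __name__ == "__main__":
    # e.g. (accident risk, construction cost, emissions) for hypothetical road-design candidates
    candidates = [(0.12, 4.1, 310.0), (0.10, 4.5, 305.0), (0.15, 4.0, 320.0), (0.13, 4.2, 330.0)]
    print(non_dominated(candidates))
```

NSGA-II builds on exactly this comparison through repeated non-dominated sorting and crowding-distance selection, which is why the methodology returns a whole front of trade-off designs rather than a single "best" road layout.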
629

Mathematical models of transport phenomena in biological tissues

Grau Ribes, Alexis, 13 March 2020
This thesis is devoted to the development and theoretical study of transport models describing cell dynamics and intercellular communication in epithelial tissues. We first focus on the influence of microRNA (miRNA) transport on the spatiotemporal dynamics of gene regulatory networks. These short RNA sequences regulate protein synthesis by blocking the activity of messenger RNAs, and their secretion via extracellular vesicles makes them agents of intercellular communication. Several models involving extracellular miRNAs are constructed and studied numerically. The first are generic models intended to highlight the effect of a cell with abnormal miRNA production on gene expression in neighboring cells. We then turn to more complex and realistic models in which oscillations (related to biological rhythms) and bistability (related to cell differentiation) are observed. These models make it possible to study complex communication dynamics observed in biology, such as the synchronization of coupled cells or the propagation of a phenotype change. We also highlight the role of defects, such as genetic mutations or variations of cell density in the tissue, in these propagation phenomena. The second part of the thesis is devoted to the construction of reaction-diffusion models in which the dynamics of the cells depends on their internal state. Based on experimental studies showing the influence of proteins and miRNAs on cell mobility and proliferation, we establish a multiscale model in which intracellular dynamics and cell movement interact. Indeed, some proteins are responsible for cell adhesion or regulate the proliferation rate. In our model, each cell synthesizes these species of interest, and cellular processes (migration, proliferation) depend on the concentration of these biochemical species. This model makes it possible to reproduce cell migration experiments and to predict, in particular, the influence of E-cadherin, a key protein in cell adhesion, on the regeneration dynamics of a tissue. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
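A minimal numerical sketch of the kind of model described in this abstract is given below: a 1D row of cells, an extracellular miRNA species that diffuses between cells and is over-secreted by one "abnormal" cell, and a target protein repressed by the local miRNA level. All equations, parameter values and names are illustrative assumptions, not the models of the thesis.

```python
# Illustrative 1D reaction-diffusion sketch (explicit Euler): extracellular miRNA m(x,t),
# over-secreted by an abnormal cell at the centre, represses a target protein p in each cell.
import numpy as np

n_cells, dt, n_steps = 50, 0.01, 20_000
D, k_deg = 0.5, 0.1                 # miRNA diffusion and degradation rates
sec = np.full(n_cells, 0.05)        # basal miRNA secretion rate
sec[n_cells // 2] = 1.0             # abnormal cell: strongly increased secretion
k_p, d_p, K = 1.0, 0.2, 0.5         # protein synthesis, decay, repression threshold

m = np.zeros(n_cells)               # extracellular miRNA along the tissue
p = np.full(n_cells, k_p / d_p)     # proteins start at their unrepressed steady state

for _ in range(n_steps):
    lap = np.roll(m, 1) - 2 * m + np.roll(m, -1)        # periodic 1D Laplacian
    m += dt * (D * lap + sec - k_deg * m)               # secretion, diffusion, degradation
    p += dt * (k_p * K / (K + m) - d_p * p)             # miRNA-repressed protein synthesis

print("protein far from the abnormal cell:", round(p[0], 2))
print("protein next to the abnormal cell :", round(p[n_cells // 2], 2))
```

Even this toy version reproduces the qualitative effect discussed in the abstract: the cell with abnormal secretion lowers protein expression in its neighbours over a range set by the ratio of diffusion to degradation.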
630

Dynamique, hydrologie sous-glaciaire et régime polythermal du Glacier McCall, Alaska, USA: approche combinée par techniques radar et modélisation numérique / Dynamics, subglacial hydrology and polythermal regime of McCall Glacier, Alaska, USA: a combined approach by radar techniques and numerical modelling

Delcourt, Charlotte, 12 September 2012
In the current context of climate change, Arctic glaciers contribute significantly to sea-level rise. Among them, so-called "polythermal" glaciers are relatively common, but their behavior is still poorly understood. In order to better understand the response of these glaciers to climate change, we focused on McCall Glacier, located in Arctic Alaska, an area marked by relatively pronounced climate warming.

We used modern radio-echo sounding (radar) techniques and numerical modelling, in combination with field observations and measurements, to identify the physical processes responsible for the evolution of this glacier over recent decades.

The radar data allowed us to reconstruct the current geometry of the glacier, to distinguish zones of "cold" ice (with a temperature below the melting point) from zones of "temperate" ice (at the melting point), and to detect the presence of water at the base of the glacier.

This information was introduced into a two-dimensional ice-flow model in order to simulate the retreat of the glacier since the Little Ice Age (end of the 19th century) under different scenarios.

The results show that the model is able to simulate the evolution of the glacier over recent decades realistically and that McCall Glacier can be considered a good indicator of climate change. They also demonstrate that the retreat of the glacier front is mainly due to perturbations of its mass balance, which is becoming ever more negative. However, the percolation and refreezing of part of the meltwater are essential processes for explaining the persistence of the glacier: they add ice to the overall system that would otherwise be lost by flow and drainage, and, paradoxically, they lower the temperature of the ice and thus help to slow its mass loss.

In conclusion, the general retreat trend of McCall Glacier is confirmed for the years to come and its disappearance seems inevitable. However, our results suggest that this future evolution could be slower than announced, because of complex refreezing processes affecting part of the meltwater, which play a "buffer" role by counterbalancing the direct effects of atmospheric warming in the region. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
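For context on why the thermal structure matters in the ice-flow model mentioned above, ice deformation is usually described by Glen's flow law, in which the rate factor depends strongly on ice temperature. The relation below is the standard textbook form, given here as background rather than as the exact parameterisation used in the thesis.

```latex
\dot{\varepsilon}_{ij} \;=\; A(T)\,\tau_e^{\,n-1}\,\tau_{ij},
\qquad n \approx 3,
```

where the strain-rate tensor depends on the deviatoric stress, the effective stress and a temperature-dependent rate factor A(T), often given by an Arrhenius law. Warmer, temperate ice deforms more readily than cold ice, so mapping the cold/temperate transition and the basal water with radar directly constrains the velocities and retreat rates that the two-dimensional flow model can produce.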
