1. Essays on spatial point processes and bioinformatics. Fahlén, Jessica. January 2010.
This thesis consists of two separate parts. The first part consists of one paper and considers problems concerning spatial point processes; the second part includes three papers in the field of bioinformatics. The first part of the thesis is based on a forestry problem of estimating the number of trees in a region using the information in an aerial photo showing the area covered by the trees. The positions of the trees are assumed to follow either a binomial point process or a hard-core Strauss process. Furthermore, discs of equal size are used to represent the tree-crowns. We provide formulas for the expectation and the variance of the relative vacancy for both processes. The formulas are approximate for the hard-core Strauss process. Simulations indicate that the approximations are accurate. The second part of this thesis focuses on pre-processing of microarray data. Microarray technology can be used to measure the expression of thousands of genes simultaneously in a single experiment. The technique is used to identify genes that are differentially expressed between two populations, e.g. diseased versus healthy individuals. This information can be used in several different ways, for example in diagnostic tools and in drug discovery. The microarray technique involves a number of complex experimental steps, where each step introduces variability in the data. Pre-processing aims to reduce this variation and is a crucial part of the data analysis. Paper II gives a review of several pre-processing methods. Spike-in data are used to describe how the different methods affect the sensitivity and bias of the experiment. An important step in pre-processing is dye-normalization. This normalization aims to remove the systematic differences due to the use of different dyes for coloring the samples. In Paper III a novel dye-normalization, the MC-normalization, is proposed.
The idea behind this normalization is to let the channels’ individual intensities determine the correction, rather than the average intensity which is the case for the commonly used MA-normalization. Spike-in data showed that the MC-normalization reduced the bias for the differentially expressed genes compared to the MA-normalization. The standard method for preserving patient samples for diagnostic purposes is fixation in formalin followed by embedding in paraffin (FFPE). In Paper IV we used tongue-cancer microRNA-microarray data to study the effect of FFPE-storage. We suggest that the microRNAs are not equally affected by the storage time and propose a novel procedure to remove this bias. The procedure improves the ability of the analysis to detect differentially expressed microRNAs.
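As a companion to the first part of this thesis, the relative vacancy (the fraction of the region left uncovered by tree-crown discs) can be illustrated with a small Monte Carlo sketch. This is not the thesis's derivation: to keep a closed-form check available, the region is taken as a unit torus (so edge effects vanish and the binomial-process expectation is exact), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_vacancy(n_trees=50, r=0.05, n_sim=200, grid=100):
    """Monte Carlo estimate of the mean and variance of the relative
    vacancy when n_trees disc centres follow a binomial point process
    on the unit torus (wrapping distances avoids edge effects)."""
    xs = (np.arange(grid) + 0.5) / grid
    gx, gy = np.meshgrid(xs, xs)                 # grid of test points
    vac = np.empty(n_sim)
    for s in range(n_sim):
        pts = rng.random((n_trees, 2))           # binomial point process
        dx = np.abs(gx[..., None] - pts[:, 0])
        dy = np.abs(gy[..., None] - pts[:, 1])
        dx = np.minimum(dx, 1 - dx)              # torus (wrapped) distance
        dy = np.minimum(dy, 1 - dy)
        covered = ((dx**2 + dy**2) <= r**2).any(axis=-1)
        vac[s] = 1.0 - covered.mean()            # uncovered fraction
    return vac.mean(), vac.var(ddof=1)

mean_vac, var_vac = relative_vacancy()
# On the torus, a test point is uncovered iff all n discs miss it,
# each missing independently with probability 1 - pi*r^2, so:
expected = (1 - np.pi * 0.05**2) ** 50           # matches the defaults above
```

For the hard-core Strauss process, where points repel each other, no such simple product form holds, which is why the thesis's formulas for that case are approximate and are checked against simulation.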
2. Essays on Spatial Panel Data Models with Common Factors. Shi, Wei. 28 September 2016.
No description available.
3. Long-term set-up of driven piles in sand. Axelsson, Gary. January 2000.
No description available.
5. Modeling land-cover change in the Amazon using historical pathways of land-cover change and Markov chains: a case study of Rondônia, Brazil. Becerra-Cordoba, Nancy. 15 August 2008.
The present dissertation research has three purposes. The first is to predict anthropogenic deforestation caused by small farmers, first using only pathways of past land-cover change and then using demographic, socioeconomic, and land-cover data at the farm level. The second purpose is to compare the explanatory and predictive capacity of both approaches at identifying areas at high risk of deforestation among small farms in Rondônia, Brazil. The third purpose is to test the assumptions of stationary probabilities and homogeneous subjects, both commonly used in predictive stochastic models applied to small farmers' deforestation decisions. This study uses the following data: household surveys, maps, satellite images and their land-cover classification at the pixel level, and pathways of past land-cover change for each farm. These data are available for a panel sample of farms in three municipios in Rondônia, Brazil (Alto Paraiso, Nova União, and Rolim de Moura) and cover a ten-year period of study (1992-2002). Pathways of past land-cover change are graphic representations in the form of flow charts that depict land-cover change (LCC) in each farm during the ten-year period of study. Pathways were constructed using satellite images, survey data and maps, and a set of interviews performed on a sub-sample of 70 farms. A panel data analysis of the estimated empirical probabilities was conducted to test for subject and time effects using a Fixed Group Effects Model (FGEM), specifically the Least Squares Dummy Variable (LSDV1) fixed effects technique.
Finally, the two predictive modeling approaches are compared. The first approach predicts future LCC using only past land-cover change data, in the form of empirical transition probabilities of LCC obtained from pathways of past LCC. These empirical probabilities are used in an LSDV1 for fixed-group effects, an LSDV1 for fixed-time effects, and an Ordinary Least Squares (OLS) model for the pooled sample. Results from these models are entered into a modified Markov chain model's matrix multiplication. The second approach predicts future LCC using socio-demographic and economic survey variables at the household level; the survey data are used to fit a multinomial logit regression model that predicts the land-cover class of each pixel. To compare the explanatory and predictive capacity of both approaches, LCC predictions at the pixel level are summarized as the percentage of cells in which future land cover was predicted correctly, with the predicted land-cover class compared against the actual pixel classification from satellite images. The presence of differences among farmers in the LSDV1 fixed-group effects by farmer suggests that small farmers are not a homogeneous group in terms of their probabilities of LCC, and that further classification of farmers into homogeneous subgroups would better capture their LCC decisions. Changes in the total area of landholdings proved to have a stronger influence on farmers' LCC decisions in their main property (primary lot) than changes in the area of the primary lot itself. Panel data analysis of the LCC empirical transition probabilities (LSDV1 fixed-time effects model) does not find enough evidence to prefer the fixed-time effects model over an OLS pooled version of the probabilities.
When the results of the panel data analysis are applied to a modified Markov chain model, the LSDV1-farmer model provides slightly better accuracy (59.25%) than the LSDV1-time and OLS-pooled models (57.54% and 57.18%, respectively). The main finding for policy and planning purposes is that owners of type 1 (OT1), those with stable total landholdings over time, tend to preserve forest with a much higher probability (0.9033) than owners with subdividing or expanding properties (probabilities of 0.0013 and 0.0030, respectively). The main implication for policy making and planning is to encourage primary-forest preservation, given that the Markov chain analysis shows that once primary forest changes into another land cover, it never returns to this original land-cover class. Policy and planning recommendations are provided to encourage type-1 owners to continue their pattern of high forest-conservation rates; these include securing land titling and providing health care and alternative sources of income so that OT1 family members and elderly owners can remain on the lot. Future research is encouraged to explore spatial autocorrelation in the pixels' probabilities of land-cover change and the effects of local policies and macro-economic variables on farmers' LCC decisions. / Ph. D.
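The Markov-chain projection step described above can be sketched as follows. The transition probabilities below are illustrative placeholders, not the study's estimates; the one structural feature carried over from the abstract is that no land-cover class transitions back to primary forest, so forest cover can only shrink.

```python
import numpy as np

# Hypothetical three-state land-cover Markov chain
# (states: forest, pasture, crop). Numbers are illustrative only.
P = np.array([
    [0.90, 0.07, 0.03],   # forest  -> forest / pasture / crop
    [0.00, 0.80, 0.20],   # pasture never reverts to forest
    [0.00, 0.30, 0.70],   # crop never reverts to forest
])

state = np.array([1.0, 0.0, 0.0])   # initial distribution: all forest
forest_share = [state[0]]
for _ in range(10):                  # ten annual updates (1992-2002)
    state = state @ P                # one Markov-chain projection step
    forest_share.append(state[0])
# Because nothing flows back into the forest state, forest_share
# decays geometrically as 0.90**t.
```

Replacing `P` with row-wise empirical transition probabilities (for example, per-farmer-group LSDV1 estimates) gives the kind of projection whose pixel-level accuracy the study scores against satellite classifications.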
6. Le Conseil constitutionnel et le temps / Constitutional Council and Time. Kamal, Mathilde. 4 May 2018.
Souvent présenté comme un «maître du temps», le Conseil constitutionnel entretient en réalité avec la temporalité une relation complexe. Le temps est en effet pour le Conseil à la fois une contrainte et une ressource. Il est d’abord une contrainte car le temps enserre le procès constitutionnel dans des délais très stricts que ce soit dans le contentieux a priori ou dans le contentieux a posteriori. Au fil des ans, le Conseil constitutionnel s’est néanmoins accommodé de cette contrainte : il a toujours réussi à juger et, qui plus est, à juger «à temps» en développant des techniques et des méthodes pour apprivoiser cette contrainte temporelle. D’un autre côté cependant, le temps peut être considéré comme une véritable ressource pour le Conseil constitutionnel. Une ressource qui s’exprime par exemple dans la construction d’une jurisprudence originale et novatrice visant à encadrer la temporalité des lois. Une ressource encore avec le développement d’une jurisprudence de la modulation des effets des décisions par laquelle le Conseil adapte ses abrogations et ses réserves d’interprétations à la diversité des situations. C’est de cette relation polarisée entre «temps-contrainte» et «temps-ressource» qu’entend rendre compte la présente étude. / Usually presented as a “Time Master”, the Constitutional Council in fact has a more complex relationship with temporality. For the Council, time is both a constraint and a resource. It is first a constraint because it ties the constitutional trial to very short time limits, whether the Council rules a priori or a posteriori. Over the years, the Council has nevertheless come to terms with this constraint, managing to rule “on time” by developing methods to tame the temporal constraint. On the other hand, time can be considered a resource, expressed both in the construction of an innovative jurisprudence that aims to frame the temporality of laws and in the development of a jurisprudence on modulating the temporal effects of its rulings. This study accounts for this polarized relationship between “time as constraint” and “time as resource”.
7. Modélisation multi-échelle de l'endommagement et de l'émission acoustique dans les roches / Multiscale modelling of damage and acoustic emission in rocks. Dobrovat, Anca. 27 May 2011.
La modélisation de la rupture des géo-matériaux constitue un important défi pour les applications telles que la séquestration du CO2, le stockage de déchets nucléaires, la production des hydrocarbures ainsi que les projets de génie civil concernant les tunnels ou les excavations. L'objectif de cette thèse est de développer des lois d'évolution macroscopiques d'endommagement à partir des descriptions explicites de la rupture à l'échelle microscopique en vue de la modélisation du comportement d'endommagement à long terme des sites de stockage géologique. L'approche adoptée est basée sur l'homogénéisation par développements asymptotiques et la description énergétique de la propagation des micro-fissures, qui permettent l'obtention des lois d'endommagement et conduisent à une quantification explicite de l'énergie de l'émission acoustique associée à la rupture. Les modèles obtenus sont capables de prédire la dégradation des modules d'élasticité en raison de l'évolution des micro-fissures. Cette représentation permet de modéliser la propagation des ondes dans un milieu à endommagement évolutif. Deux types de modèles d'endommagement seront proposés : indépendants du temps et dépendants du temps. Les modèles indépendants du temps décrivent l'évolution progressive quasi-fragile de la micro-fissuration. Dans les modèles dépendants du temps, l'évolution des micro-fissures est décrite à travers un critère sous-critique et la propagation mixte, par branchement. En utilisant le modèle dépendant du temps, des simulations seront faites à trois niveaux : du laboratoire, du tunnel et du réservoir. / Accurate modeling of the failure of geomaterials is key to a diverse range of engineering challenges, including CO2 sequestration, nuclear waste disposal, and hydrocarbon production, as well as civil engineering projects for tunnels or excavations. The aim of this thesis is to develop macroscopic damage evolution laws, based on explicit descriptions of fracture at the micro-scale, that can be employed to describe the long-term damage behavior of geologic storage sites. The approach taken is based on homogenization through asymptotic developments combined with a micro-crack propagation energy analysis, which leads to an explicit quantification of the acoustic emission (AE) energy associated with damage. The proposed damage models are capable of modeling the degradation of elastic moduli due to micro-crack evolution; this representation allows the modeling of wave propagation in a medium with evolving damage. Two types of damage models are considered: time-independent models describing progressive micro-cracking (a quasi-brittle damage law), and time-dependent models in which the evolution of the micro-crack length during propagation is described through a sub-critical criterion and mixed-mode propagation by branching. Using the time-dependent damage model, including rotational micro-cracks, simulations are made at three scales: laboratory, tunnel, and reservoir.
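A minimal sketch of a time-dependent (sub-critical) crack-growth law of the kind the abstract describes. It assumes a textbook Charles-type power law rather than the thesis's homogenization-derived law, and all parameter values are illustrative.

```python
import numpy as np

# Generic sub-critical crack growth under constant remote stress:
#     dl/dt = v0 * (K / Kc)**n,   K = sigma * sqrt(pi * l).
# A scalar damage variable d = l / l_max degrades the effective
# modulus, E = E0 * (1 - d). Values below are illustrative only.
E0 = 30e9              # intact Young's modulus [Pa]
sigma = 20e6           # constant remote stress [Pa]
Kc = 1.5e6             # fracture toughness [Pa*m**0.5]
v0, n = 1e-5, 20       # crack-speed scale [m/s], sub-critical exponent
l, l_max, dt = 1e-3, 5e-3, 1.0   # crack length, damage scale [m], step [s]

history = []           # (time, crack length, damage, modulus)
for step in range(100_000):
    K = sigma * np.sqrt(np.pi * l)
    if K >= Kc or l >= l_max:        # critical: quasi-static law ends
        break
    d = l / l_max                    # scalar damage variable
    history.append((step * dt, l, d, E0 * (1.0 - d)))
    l += v0 * (K / Kc) ** n * dt     # sub-critical growth increment
```

The large exponent makes growth extremely slow at low stress intensity and abrupt near criticality, the delayed-failure behaviour such laws are meant to capture; the cumulative modulus drop can serve as a crude proxy for the released (acoustic-emission) energy the thesis quantifies explicitly.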
8. Modélisation double-échelle de la rupture des roches : influence du frottement sur les micro-fissures / Double-scale modelling of failure in rocks: influence of micro-crack friction. Wrzesniak, Aleksandra. 14 December 2012.
Dans les modèles d’endommagement continus, la dégradation des modules élastiques, résultant de la propagation des fissures microscopiques, est représentée par des variables d’endommagement. L’évolution de la variable d’endommagement est généralement formulée sur la base d’observations expérimentales. De nombreux modèles phénoménologiques d’endommagement ont été proposés dans la littérature. L’objet de cette thèse est de développer une nouvelle procédure pour obtenir des lois d’évolution macroscopique d’endommagement, dans lesquelles l’évolution de l’endommagement est entièrement déduite de l’analyse de la microstructure. Nous utilisons une homogénéisation basée sur des développements asymptotiques pour décrire le comportement global à partir de la description explicite d’un volume élémentaire microfissuré. Nous considérons d’une part un critère quasi-fragile (indépendant du temps) puis un critère sous-critique (dépendant du temps) pour décrire la propagation des microfissures. De plus, le frottement entre les lèvres des microfissures est pris en compte. Une analyse énergétique est proposée, conduisant à une loi d’évolution d’endommagement qui intègre une dégradation de la rigidité, un adoucissement du comportement du matériau, des effets de taille et d’unilatéralité, mettant en avant un comportement différent à la rupture en contact avec et sans frottement. L’information sur les micro-fissures est contenue dans les coefficients homogénéisés et dans la loi d’évolution de l’endommagement. Les coefficients homogénéisés décrivent la réponse globale en présence de micro-fissures (éventuellement statiques), tels qu’ils sont calculés avec la (quasi-) solution microscopique statique. La loi d’endommagement contient l’information sur l’évolution des micro-fissures, résultant de l’équilibre énergétique dans le temps pendant la propagation microscopique. La loi homogénéisée est formulée en incrément de contrainte. Les coefficients homogénéisés sont calculés numériquement pour des longueurs de fissures et des orientations différentes. Cela permet la construction complète des lois macroscopiques. Une première analyse concerne le comportement local macroscopique, pour des trajets de chargement complexes, afin de comprendre le comportement prédit par le modèle à deux échelles et l’influence des paramètres micro structuraux, comme par exemple le coefficient de frottement. Ensuite, la mise en œuvre en éléments finis des équations macroscopiques est effectuée et des simulations pour différents essais de compression sont réalisées. Les résultats des simulations numériques sont comparés avec les résultats expérimentaux obtenus en utilisant un nouvel appareil triaxial récemment mis au point au Laboratoire 3SR à Grenoble (France). / In continuum damage models, the degradation of the elastic moduli, as a result of microscopic crack growth, is represented through damage variables. The evolution of the damage variable is generally postulated based on experimental observations, and many such phenomenological damage models have been proposed in the literature. The purpose of this contribution is to develop a new procedure for obtaining macroscopic damage evolution laws in which the damage evolution is completely deduced from micro-structural analysis. We use homogenization based on two-scale asymptotic developments to describe the overall behaviour starting from an explicit description of elementary volumes with micro-cracks. We consider quasi-brittle (time-independent) and sub-critical (time-dependent) criteria for micro-crack propagation. Additionally, frictional contact is assumed on the crack faces. An appropriate micro-mechanical energy analysis is proposed, leading to a damage evolution law that incorporates stiffness degradation, material softening, size effects, unilaterality, and different fracture behaviour in contact with and without friction. The information about micro-cracks is contained in the homogenized coefficients and in the damage evolution law. The homogenized coefficients describe the overall response in the presence of (possibly static) micro-cracks, as they are computed with the (quasi-)static microscopic solution. The damage law contains the information about the evolution of micro-cracks, as a result of the energy balance in time during the microscopic propagation. The homogenized law is obtained in rate form. Effective coefficients are numerically computed for different crack lengths and orientations, which allows the complete construction of the macroscopic laws. A first analysis concerns the local macroscopic behaviour, for complex loading paths, in order to understand the behaviour predicted by the two-scale model and the influence of micro-structural parameters, such as the friction coefficient. Next, the FEM implementation of the macroscopic equations is performed, and simulations of various compression tests are conducted. The results of the numerical simulations are compared with experimental results obtained using a new true-triaxial apparatus recently developed at the Laboratory 3SR in Grenoble (France).
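The frictional contact assumed on the crack faces can be illustrated with a simple Coulomb classification of a crack face as open, sticking, or sliding. This is a generic pointwise sketch, not the thesis's homogenized formulation; the stress values and friction coefficient below are illustrative assumptions.

```python
def crack_face_state(sigma_n: float, tau: float, mu: float) -> str:
    """Classify a micro-crack face under normal traction sigma_n
    (tension positive) and shear traction tau, with Coulomb friction
    coefficient mu. A simplified check for illustration only."""
    if sigma_n > 0.0:
        return "open"                 # faces separated: no contact terms
    if abs(tau) <= mu * (-sigma_n):   # shear inside the friction cone
        return "stick"                # locked crack: no sliding
    return "slip"                     # frictional sliding (dissipative)

# The three regimes behind the "different fracture behaviour in
# contact with and without friction" noted in the abstract:
states = [crack_face_state(s, t, 0.6)
          for s, t in [(5e6, 1e6), (-10e6, 3e6), (-10e6, 8e6)]]
```

Which regime each crack family is in determines whether it contributes an opening, a locked, or a dissipative sliding term to the homogenized response, which is why the resulting damage law is unilateral.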