511 |
Deciphering the roles of Klf2a, Klf2b and Egr1 transcription factors in heart valve development using zebrafish as model organism. Faggianelli-Conrozier, Nathalie, 14 December 2018 (has links)
Cardiac valves are necessary for maintaining unidirectional blood flow in the cardiovascular system of vertebrates. Their efficient gating function requires a highly controlled developmental program. However, this program may be impaired, leading to defective valves; in fact, congenital heart valve diseases represent the most common form of birth defects. Cardiac valve development therefore constitutes a major research field.
In this thesis, we used the zebrafish as a model organism to study the formation of atrioventricular valves. Mechanical forces generated by blood flow are known to be key modulators of valve formation. In particular, they initiate valvulogenesis by restricting the expression of the transcription factor Klf2a to a subset of endocardial cells of the atrioventricular canal. Our work demonstrated the activation of another transcription factor, Egr1, in the same region and within the same time window. We aimed to decipher the mechanosensitive gene network involving klf2a, its paralog klf2b, and egr1 by combining genome-wide analysis of gene expression and chromatin accessibility with live imaging. We addressed the potential interactions of these factors and studied their downstream signalling pathways. Finally, we demonstrated that egr1, klf2a and klf2b modulate valve morphogenesis by specifically controlling flt1, has2 and wnt9b expression.
|
512 |
Statistical detection with weak signals via regularization. Li, Jinzheng, 01 July 2012 (has links)
There has been an increasing interest in uncovering smuggled nuclear materials associated with the War on Terror. Detection of special nuclear materials hidden in cargo containers is a major challenge in national and international security. We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclei. We propose a Poisson regression model for deducing which nuclei are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e., most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations in a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as -15 dB. The proposed method is shown to be variable-selection consistent in the framework of increasing detection time and under mild regularity conditions.
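The penalized-likelihood idea can be sketched in a few lines. The snippet below is an illustrative stand-in, not the thesis' algorithm: the library of subspectra is faked with single Gaussian photopeaks, and the L1-penalized Poisson (identity-link) likelihood is minimized with simple multiplicative updates; all function names, peak positions, and the penalty value `lam` are assumptions made for the example.

```python
import numpy as np

def make_library(n_channels=60, centers=(15, 30, 45), width=3.0):
    # Each column is a unit-area subspectrum for one hypothetical nuclide,
    # modelled here as a single Gaussian photopeak (an illustrative stand-in
    # for physics-based subspectra).
    x = np.arange(n_channels)
    cols = [np.exp(-0.5 * ((x - c) / width) ** 2) for c in centers]
    return np.stack([c / c.sum() for c in cols], axis=1)

def sparse_poisson_fit(X, y, lam=0.5, n_iter=3000):
    # Multiplicative updates for the L1-penalised Poisson likelihood:
    # minimise sum(mu - y*log(mu)) + lam*sum(beta), with mu = X @ beta
    # and beta >= 0. Coefficients of absent nuclides shrink towards zero.
    beta = np.ones(X.shape[1])
    denom = X.sum(axis=0) + lam
    for _ in range(n_iter):
        mu = X @ beta + 1e-12        # expected counts per energy channel
        beta = beta * (X.T @ (y / mu)) / denom
    return beta
```

On a simulated spectrum containing only the first nuclide, the fit recovers a large coefficient for that nuclide and near-zero coefficients for the others.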
We study the problem of testing for shielding, i.e., the presence of intervening materials that attenuate the gamma-ray signal. We show that, as detection time increases to infinity, the Lagrange multiplier test, the likelihood ratio test and the Wald test are asymptotically equivalent under the null hypothesis, and their asymptotic null distribution is chi-square. We also derive the local power of these tests.
We also develop a nonparametric approach for detecting spectra indicative of the presence of SNM. This approach characterizes the shape change in a spectrum relative to background radiation. We do this by proposing a dissimilarity function that characterizes the complete shape change of a spectrum from the background, over all energy channels. We derive the asymptotic null distributions of the tests in terms of functionals of the Brownian bridge. Simulation results show that the proposed approach is very powerful and promising for detecting weak signals; it is able to accurately detect weak signals with SNR as low as -37 dB.
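A shape-change dissimilarity of this kind can be illustrated with a Kolmogorov-Smirnov-style functional, whose null law is also a functional (the supremum) of the Brownian bridge. The thesis defines its own dissimilarity over all energy channels, so the following is only a hedged sketch; the scaling by an effective count `n` is an assumption of the example.

```python
import numpy as np

def shape_change_statistic(spectrum, background):
    # Sup-norm distance between the normalised cumulative spectra:
    # a KS-type statistic comparing the observed spectrum's shape to the
    # background's shape over all energy channels.
    f = np.cumsum(spectrum) / spectrum.sum()
    g = np.cumsum(background) / background.sum()
    # Effective sample size for a two-sample comparison of count spectra
    # (illustrative choice, mirroring the two-sample KS scaling).
    n = spectrum.sum() * background.sum() / (spectrum.sum() + background.sum())
    return np.sqrt(n) * np.max(np.abs(f - g))
```

A spectrum that is a pure rescaling of the background has statistic zero, while a shifted peak yields a strictly positive value.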
|
513 |
Application of co-adjoint orbits to the loop group and the diffeomorphism group of the circle. Lano, Ralph Peter, 01 May 1994 (has links)
No description available.
|
514 |
Bayesian and Empirical Bayes Approaches to Power Law Process and Microarray Analysis. Chen, Zhao, 12 July 2004 (has links)
In this dissertation, we apply Bayesian and Empirical Bayes methods to reliability growth models based on the power law process. We also apply Bayes methods to the study of microarrays, in particular the selection of differentially expressed genes.
The power law process has been used extensively in reliability growth models. Chapter 1 reviews some basic concepts in reliability growth models. Chapter 2 presents classical inference on the power law process; we also assess the goodness of fit of a power law process for a reliability growth model. In Chapter 3 we develop Bayesian procedures for the power law process with failure-truncated data, using non-informative priors for the scale and location parameters. In addition to obtaining the posterior density of the parameters of the power law process, prediction inferences for the expected number of failures in some time interval and the probability of future failure times are also discussed. The prediction results for the software reliability model are illustrated, and we compare our result with that of Bar-Lev, S.K. et al. Posterior densities of several parametric functions are also given. Chapter 4 provides Empirical Bayes procedures for the power law process with natural conjugate priors and nonparametric priors. For the natural conjugate priors, a two-hyperparameter prior and a more generalized three-hyperparameter prior are used.
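As background for the Bayesian treatment, the classical failure-truncated point estimates for the power law process (mean function m(t) = (t/theta)**beta) can be sketched as follows. This is the standard textbook MLE, not the posterior machinery developed in Chapters 3-4, and the function names are illustrative.

```python
import math

def power_law_mle(times):
    # Classical MLE for the power law process with failure-truncated data:
    # observation stops at the n-th failure time times[-1].
    # beta_hat = n / sum(log(t_n / t_i), i < n); theta_hat = t_n / n**(1/beta).
    n = len(times)
    tn = times[-1]
    beta = n / sum(math.log(tn / t) for t in times[:-1])
    theta = tn / n ** (1.0 / beta)
    return beta, theta

def expected_failures(t, beta, theta):
    # Mean number of failures by time t under the power law process.
    return (t / theta) ** beta
```

By construction the fitted mean function passes through the last observed failure: expected_failures(t_n, beta_hat, theta_hat) equals n exactly.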
In Chapter 5, we review some basic statistical procedures involved in microarray analysis. We also present and compare several transformation and normalization methods for probe-level data. The objective of Chapter 6 is to select differentially expressed genes from tens of thousands of genes. Both classical methods (fold change, t-test, Wilcoxon rank-sum test, SAM, and local Z-score) and Empirical Bayes methods (EBarrays and LIMMA) are applied to obtain the results. Outputs of a typical classical method and a typical Empirical Bayes method are discussed in detail.
|
515 |
Social Network Analysis of Researchers' Communication and Collaborative Networks Using Self-reported Data. Cimenler, Oguz, 16 June 2014 (has links)
This research seeks an answer to the following question: what is the relationship between the structure of researchers' communication networks and the structure of their collaborative output networks (e.g., co-authored publications, joint grant proposals, and joint patent applications), and what impact do these structures have on researchers' citation performance and the volume of their collaborative research outputs? Three complementary studies are performed to answer this question, as discussed below.
1. Study I: A frequently used output for measuring scientific (or research) collaboration is co-authorship in scholarly publications; joint grant proposals and patents are used less frequently. Many scholars believe that co-authorship as the sole measure of research collaboration is insufficient, because collaboration between researchers might not result in co-authorship. Collaborations involve informal communication (i.e., conversational exchange) between researchers. Using self-reports from 100 tenured/tenure-track faculty in the College of Engineering at the University of South Florida, researchers' networks are constructed from their communication relations and collaborations in three areas: joint publications, joint grant proposals, and joint patents. The data collection: 1) provides a rich data set of both researchers' in-progress and completed collaborative outputs, 2) yields a rating from the researchers on the importance of a tie to them, and 3) obtains multiple types of ties between researchers, allowing for the comparison of their multiple networks. Exponential Random Graph Model (ERGM) results show that the more communication researchers have, the more likely they are to produce collaborative outputs. Furthermore, the impact of four demographic attributes (gender, race, department affiliation, and spatial proximity) on collaborative output relations is tested. The results indicate that grant proposals are submitted by mixed-gender teams in the College of Engineering, and that researchers of the same race are more likely to publish together. The demographic attributes do not have additional leverage on joint patents.
2. Study II: Previous research shows that researchers' social network metrics obtained from a collaborative output network (e.g., a joint publications or co-authorship network) impact their performance as determined by g-index. This study uses a richer dataset to show that a scholar's performance should be considered with respect to position in multiple networks. Previous research using only the network of researchers' joint publications shows that a researcher's distinct connections to other researchers (i.e., degree centrality), number of repeated collaborative outputs (i.e., average tie strength), and redundant connections to a group of researchers who are themselves well-connected (i.e., efficiency coefficient) have a positive impact on the researcher's performance, while a researcher's tendency to connect with other researchers who are themselves well-connected (i.e., eigenvector centrality) has a negative impact. The findings of this study are similar, except that eigenvector centrality has a positive impact on the performance of scholars. Moreover, the results demonstrate that a researcher's tendency towards dense local neighborhoods (as measured by the local clustering coefficient) and demographic attributes such as gender should also be considered when investigating the impact of social network metrics on the performance of researchers.
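The network metrics named above can be computed directly from an adjacency matrix. The sketch below uses plain NumPy rather than the software used in the study, and the example graph (a triangle with a pendant node) is invented for illustration.

```python
import numpy as np

def centralities(A):
    # A: symmetric 0/1 adjacency matrix (zero diagonal) of an undirected
    # co-authorship or communication network.
    n = len(A)
    deg = A.sum(axis=1)
    degree_centrality = deg / (n - 1)          # normalised degree
    v = np.ones(n)                             # eigenvector centrality by
    for _ in range(200):                       # power iteration
        v = A @ v
        v /= np.linalg.norm(v)
    triangles = np.diag(A @ A @ A) / 2         # closed triads at each node
    possible = deg * (deg - 1) / 2             # connectable neighbour pairs
    clustering = np.divide(triangles, possible,
                           out=np.zeros(n), where=possible > 0)
    return degree_centrality, v, clustering
```

On the toy graph, the hub of the triangle (which also holds the pendant tie) has the highest degree and eigenvector centrality, while its local clustering is diluted by the pendant neighbour.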
3. Study III: This study investigates to what extent researchers' interactions in the early stage of their collaborative network activities impact the number of collaborative outputs produced (e.g., joint publications, joint grant proposals, and joint patents). Path models using the Partial Least Squares (PLS) method are run to test the extent to which researchers' individual innovativeness, as determined by specific indicators obtained from their interactions in the early stage of their collaborative network activities, impacts the number of collaborative outputs they produce, taking into account the tie strength of a researcher to other conversational partners (TS). Within a college of engineering, it is found that researchers' individual innovativeness positively impacts the volume of their collaborative outputs. It is observed that TS positively impacts researchers' individual innovativeness, whereas TS negatively impacts researchers' volume of collaborative outputs. Furthermore, TS negatively impacts the relationship between researchers' individual innovativeness and the volume of their collaborative outputs, which is consistent with the 'Strength of Weak Ties' theory. The results of this study contribute to the literature on the transformation of tacit knowledge into explicit knowledge in a university context.
|
516 |
Incorporating discontinuities in value-at-risk via the Poisson jump diffusion model and variance gamma model. Lee, Brendan Chee-Seng, Banking & Finance, Australian School of Business, UNSW, January 2007 (has links)
We utilise several asset pricing models that allow for discontinuities in the returns and volatility time series in order to obtain estimates of Value-at-Risk (VaR). The first class of model mixes a continuous diffusion process with discrete jumps at random points in time (the Poisson jump diffusion model). We also apply a purely discontinuous model whose underlying distribution contains no continuous component at all (the variance gamma model). These models have been shown to have some success in capturing certain characteristics of return distributions, such as leptokurtosis and skewness. Calibrating these models to the returns of an index of Australian stocks (the All Ordinaries Index), we then use the resulting parameters to obtain daily estimates of VaR. To obtain the VaR estimates for the Poisson jump diffusion model and the variance gamma model, we introduce an innovation from option pricing techniques, which works with the more tractable characteristic functions of the models. Having obtained a series of VaR estimates, we apply a variety of criteria to assess how each model performs, and also evaluate these models against traditional approaches to calculating VaR, such as that suggested by J.P. Morgan's RiskMetrics. Our results show that whilst the Poisson jump diffusion model proved the most accurate at the 95% VaR level, neither the Poisson jump diffusion nor the variance gamma model was dominant in the other performance criteria examined. Overall, no model was clearly superior according to all the performance criteria analysed, and it seems that the extra computational time required to calibrate the Poisson jump diffusion and variance gamma models for VaR estimation does not provide sufficient reward over the approach currently employed by RiskMetrics.
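For a concrete feel for how jumps fatten the loss tail, here is a hedged Monte Carlo sketch of one-day VaR under a Merton-style Poisson jump diffusion. The thesis instead works through characteristic functions; the simulation below only targets the same return distribution, and all parameter values are illustrative.

```python
import numpy as np

def jump_diffusion_var(mu, sigma, lam, mu_j, sigma_j, dt=1.0 / 252,
                       level=0.95, n_paths=100_000, seed=1):
    # One-period log return = drift + Brownian part + compound Poisson jumps,
    # with N ~ Poisson(lam*dt) jumps, each jump ~ Normal(mu_j, sigma_j**2).
    rng = np.random.default_rng(seed)
    diffusion = ((mu - 0.5 * sigma ** 2) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
    n_jumps = rng.poisson(lam * dt, n_paths)
    # Sum of N normal jumps, conditional on N: Normal(N*mu_j, N*sigma_j**2).
    jumps = mu_j * n_jumps + sigma_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
    returns = diffusion + jumps
    # VaR at the given level is the (1 - level) lower quantile, sign-flipped.
    return -np.quantile(returns, 1.0 - level)
```

With the jump intensity set to zero the estimate collapses to the usual Gaussian VaR; adding negatively skewed jumps raises the VaR, which is the effect the discontinuous models are meant to capture.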
|
517 |
Effective diffusion coefficients for charged porous materials based on micro-scale analyses. Mohajeri, Arash, January 2009 (has links)
Estimation of effective diffusion coefficients is essential for describing the diffusive transport of solutes in porous media. It has been shown in theory that, in the case of uncharged porous materials, the effective diffusion coefficient of solutes is a function of the pore morphology of the material and can be described by its tortuosity (tensor). To estimate apparent diffusion coefficients, the values of tortuosity and porosity must be known first. In contrast with porosity, which can easily be obtained, estimation of tortuosity is intricate, particularly with increasing micro-geometry complexity in porous media. Moreover, many engineering materials (e.g., clays and shales) are characterized by electrical surface charges on particles of the porous material, which can strongly affect the diffusive transport properties of ions. For these materials, estimation of effective diffusion coefficients has mostly been based on phenomenological equations with no link to the underlying microscale properties of these charged materials, although a few recent studies have used alternative methods to obtain the diffusion parameters.

In the first part of this thesis, a numerical method based on a recently proposed up-scaled Poisson-Nernst-Planck (PNP) type of equation and its microscale counterpart is employed to estimate the tortuosity, and thus the effective and apparent diffusion coefficients, in thin charged membranes. In addition, a new mathematical approach for estimating tortuosity is applied and validated. This approach is derived while upscaling the micro-scale Poisson-Nernst-Planck system of equations using the volume averaging method. A variety of 2D and 3D pore micro-geometries together with different electrochemical conditions are studied here. To validate the new approaches, the relation between porosity and tortuosity has been obtained using a multi-scale approach and compared with published results.
These include comparison with the results from a recently developed numerical method based on macro- and micro-scale PNP equations. Results confirm that the tortuosity value is the same for porous media with electrically uncharged and charged particles, but only when using a consistent set of PNP equations. The effects of charged particles are captured by the ratio of average concentration to effective intrinsic concentration in the macroscopic PNP equations. Using this ratio makes it possible to consistently account for the electro-chemical interactions of ions and charges on particles, and so excludes the ambiguity generally encountered in phenomenological equations.

Steady-state diffusion studies dominate this thesis; however, understanding transient ion transport in porous media is also important. The last section of this thesis briefly introduces transient diffusion through bentonite. To do so, the micro Nernst-Planck equation with electro-neutrality condition (NPE) is solved for a porous medium consisting of compacted bentonite. This system has been studied before using an experimental approach, and results are available for both the transient and steady-state phases. Three different conditions are assumed for the NPE governing equations, and the numerical results from these three conditions are compared to the experimental values and to the analytical phenomenological solution. The tortuosity is treated as a fitting parameter, and the effective diffusion coefficient can be calculated from these tortuosity values. The results show that including a sorption term in the NPE equations can reproduce the experimental values in the transient and steady-state phases. Also, as a fitting parameter, the tortuosity values were found to vary with background concentration.
This highlights the need to monitor multiple diffusing ion fluxes and membrane potential to fully characterize electro-diffusive transport from fundamental principles (as investigated in the first part of this thesis) rather than from phenomenological equations for predictive studies. This research has led to two journal article submissions, one already accepted in Computers and Geotechnics (October 22, 2009; 5-year impact factor 0.884) and the other still under review.
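One commonly quoted relation linking these quantities is D_eff = D0 * porosity / tortuosity. Conventions differ across texts (some divide by the tortuosity squared), so the sketch below is only an illustrative relation under that stated convention, not the thesis' exact upscaled expression.

```python
def effective_diffusion(d0, porosity, tortuosity):
    # Illustrative convention: the free-solution diffusivity d0 is reduced
    # by the pore volume fraction (porosity) and the lengthened diffusion
    # path (tortuosity). Units of d0, e.g. m^2/s, carry through unchanged.
    return d0 * porosity / tortuosity
```

For example, a solute with d0 = 2e-9 m^2/s in a medium with porosity 0.4 and tortuosity 2.0 would have an effective coefficient of 4e-10 m^2/s under this convention.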
|
518 |
Study of diffusions with values in Lorentzian manifolds. Angst, Jürgen, 25 September 2009 (has links) (PDF)
This thesis studies stochastic processes with values in Lorentzian manifolds. In particular, we are interested in the long-time asymptotic behaviour of these processes and in how it reflects the geometry of the underlying manifolds. We restrict our study to diffusions, i.e. continuous Markov processes, with values in the unit tangent bundle of strongly symmetric Lorentzian manifolds. The introduction and study of such processes have purely mathematical as well as physical motivations.

The thesis consists of two parts. The first is devoted to the proof of a central limit theorem for a class of Minkowskian diffusions; it is motivated by open questions in the physics literature. The second part of the manuscript gives a detailed study of a relativistic diffusion with values in Robertson-Walker spacetimes. Depending on the curvature and the expansion rate of these spaces, we determine precisely the asymptotic behaviour of the relativistic diffusion and show that its trajectories asymptotically approach random light-like geodesics. For a class of Robertson-Walker spacetimes, we also make explicit the Poisson boundary of the relativistic diffusion.
|
519 |
Behavioural approach to larval dispersal in marine systems. Irisson, Jean-Olivier, 03 July 2008 (has links) (PDF)
Most demersal marine organisms go through a pelagic larval phase before recruiting into the adult population. This pelagic episode is often the only opportunity for dispersal in the life cycle. As such, it structures the connections between populations, which govern the dynamics and genetic composition of benthic metapopulations. These "larvae", however, are not mere drafts of the adults, dispersed at the mercy of currents while awaiting metamorphosis: they are organisms that are often highly adapted to their environment. In this thesis we sought to evaluate the impact of larval behaviour during the pelagic phase. We focused on fish larvae (coral-reef fish in particular), whose sensory and swimming abilities are especially well developed. Experimental approaches were developed to quantify their orientation and swimming in situ. Through synchronous observation of the physical characteristics of the environment and the distribution of larvae during an oceanographic cruise, we attempted to characterise their three-dimensional distribution in the pelagic environment, in order to understand the physical-biological interactions that determine recruitment. Finally, a novel modelling approach, drawing on the cost-minimisation and benefit-maximisation concepts usually found in economics or optimal foraging theory, made it possible to integrate larval behaviour into Lagrangian dispersal models.
|
520 |
Drifting Markov chains and Poisson approximation for biological sequence analysis. Vergne, Nicolas, 11 July 2008 (has links) (PDF)
Statistical analysis of biological sequences such as nucleotide sequences (DNA and RNA) or amino acid sequences (proteins) requires different models, each adapted to one or more case studies. Given the dependence between successive nucleotides in DNA sequences, Markov models are generally used. The problem with these models is that they assume the sequences are homogeneous, whereas biological sequences are not. A well-known example is the GC distribution: along a single sequence, GC-rich and GC-poor regions alternate. To account for sequence heterogeneity, other models are used: hidden Markov models, in which the sequence is divided into several homogeneous regions. Applications are numerous, such as the search for coding regions. Since certain biological features cannot be captured by these models, we propose new models, drifting Markov models (DMM). Instead of fitting one transition matrix to an entire sequence (the classical homogeneous Markov model) or different transition matrices to different regions of the sequence (hidden Markov models), we allow the transition matrix to drift from the beginning to the end of the sequence. At each position t in the sequence, we have a possibly different transition matrix Π_{t/n} (where n is the length of the sequence). Our models are thus constrained heterogeneous Markov models. In this thesis, we essentially give two ways of constraining the models: polynomial modelling and spline modelling.

For example, for a polynomial model of degree 1 (a linear drift), we choose a starting matrix Π_0 and an ending matrix Π_1, and move from one to the other according to the position t in the sequence:

Π_{t/n} = (1 - t/n) Π_0 + (t/n) Π_1.

This models a smooth evolution between two states. For instance, it can describe the transition between two regimes of a hidden Markov chain, which might otherwise seem too abrupt. These models can therefore be viewed as an alternative, but also as a complementary tool, to hidden Markov models. Throughout this work, we consider polynomial drifts of any degree as well as polynomial spline drifts, the aim of the latter being to make the models more flexible than plain polynomials. We estimate our models in several ways, evaluate the quality of these estimators, and then use them in applications such as the search for exceptional words. We implemented the software DRIMM (soon available at http://stat.genopole.cnrs.fr/sg/software/drimm/), dedicated to the estimation of our models. This program provides all the functionality offered by our models, such as computing the matrices at each position, the stationary distributions, and the probability distributions at each position. Using this program for the search for exceptional words is handled by auxiliary programs (available on request).

Several perspectives on this work can be envisaged. So far we have let the matrix vary only as a function of position, but covariates could be taken into account, such as the degree of hydrophobicity, the GC content, or an indicator of protein structure (alpha helices, beta sheets...). We could also consider combining HMMs with continuous variation, where on each region, instead of fitting a Markov model, we would fit a drifting Markov model.
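The degree-1 drift described above is easy to sketch. The snippet below implements the interpolation formula for the position-dependent transition matrix and a toy simulation; the two-state matrices and function names are invented for illustration, and DRIMM itself of course does far more.

```python
import numpy as np

def drifting_transition(pi0, pi1, t, n):
    # Linear (degree-1) drifting Markov model: the transition matrix at
    # position t interpolates from pi0 (start of sequence) to pi1 (end):
    # Pi_{t/n} = (1 - t/n) * pi0 + (t/n) * pi1.
    return (1 - t / n) * pi0 + (t / n) * pi1

def simulate(pi0, pi1, n, seed=0):
    # Sample a length-(n+1) state sequence under the drifting model,
    # starting (arbitrarily) in state 0. pi0, pi1: NumPy stochastic matrices.
    rng = np.random.default_rng(seed)
    k = len(pi0)
    seq = [0]
    for t in range(1, n + 1):
        p = drifting_transition(pi0, pi1, t, n)[seq[-1]]
        seq.append(int(rng.choice(k, p=p)))
    return seq
```

Because each interpolated matrix is a convex combination of two stochastic matrices, its rows still sum to one at every position, so the model stays a valid (heterogeneous) Markov chain throughout the sequence.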
|