51

Modification Of Magnetic Properties Of Siderite By Thermal Treatment

Alkac, Dilek 01 September 2007 (has links) (PDF)
Obtaining high magnetic susceptibility phases from Hekimhan–Deveci siderite ore via preliminary thermal treatment has been the basic target of this thesis study. Thermal decomposition characteristics of the samples, determined by thermogravimetric analysis (TGA), differential thermal analysis (DTA), and differential scanning calorimetry (DSC), were referenced in the advancement of the study. Heat treatment experiments, particularly roasting, were carried out by conventional heating and by microwave heating. Results showed that roasting of Hekimhan–Deveci siderite samples could not be achieved by microwave energy, whilst the conventional heating experiments recorded success. Subsequent low-intensity magnetic separation of the roasted samples gave recoveries above 90%, where low-intensity magnetic separation of the run-of-mine sample had failed. Formation of high magnetic susceptibility phases in the roasted samples was verified by magnetic susceptibility balance and X-ray diffraction (XRD) analysis. Statistical modeling was applied to determine the optimum roasting conditions in the conventional heating system, with heating temperature, heating time, and particle size as factors. It was concluded that roasting at T = 560 ºC for t = 45 minutes was adequate to obtain the desired results. Particle size was noted to be less effective on the process than the other factors within the studied size range. The kinetics (E, n) and reaction mechanism of the thermal decomposition in the conventional heating system were evaluated with different solid-state reaction models by interpretation of the model graphs. Three-dimensional diffusion reaction models were found to characterize the thermal decomposition well, with activation energy values of E = 85.53 kJ/mol (Jander) and E = 85.49 kJ/mol (Ginstling–Brounshtein).
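The two three-dimensional diffusion models named above have standard integral forms in the solid-state kinetics literature; as a reference sketch (the thesis's own fitting procedure may differ in detail), they relate the conversion fraction α to time through a rate constant k with Arrhenius temperature dependence:

```latex
% Three-dimensional diffusion models, written as g(\alpha) = k t:
\underbrace{\left[1 - (1-\alpha)^{1/3}\right]^{2}}_{\text{Jander}} = k\,t,
\qquad
\underbrace{1 - \tfrac{2}{3}\alpha - (1-\alpha)^{2/3}}_{\text{Ginstling--Brounshtein}} = k\,t,
\qquad
k = A\,e^{-E/(RT)}
```

Plotting g(α) against t for each candidate model and checking linearity is the usual way such model graphs are interpreted, with E then extracted from the slope of ln k versus 1/T.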
52

Age Dependent Analysis and Modeling of Prostate Cancer Data

Bonsu, Nana Osei Mensa 01 January 2013 (has links)
The growth rate of a prostate cancer tumor is an important aspect of understanding the natural history of prostate cancer. Using real prostate cancer data from the SEER database with tumor size as the response variable, we have clustered the cancerous tumor sizes into age groups to enhance their analytical behavior. The rate of change of the response variable as a function of age is given for each cluster. Residual analysis attests to the quality of the analytical model and the subject estimates. In addition, we have identified the probability distribution that characterizes the behavior of the response variable and proceeded with basic parametric analysis. There are several notable treatment options available for prostate cancer patients. In the present study, we consider three commonly used treatments for prostate cancer: radiation therapy, surgery, and the combination of surgery and radiation therapy. The study uses data from the SEER database to evaluate and rank the effectiveness of these treatment options using survival analysis in conjunction with basic parametric analysis. The evaluation is based on the stage classification of the prostate cancer. Improvement in prostate cancer disease can be measured by improvement in its mortality, and mortality projection is crucial for policy makers and for the financial stability of the insurance business. Our research applies a parametric model proposed by Renshaw et al. (1996) to project the force of mortality for prostate cancer. The proposed modeling structure can pick up both age and year effects.
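The Renshaw et al. (1996) family of models expresses the log force of mortality through polynomial terms in age and calendar year; a schematic form of that structure is sketched below (the polynomial orders actually fitted in this dissertation are not stated here and are left generic):

```latex
% Log force of mortality at age x in year t, with age and year
% rescaled to x', t' and L_i denoting Legendre polynomials:
\log \mu_{x,t} = \beta_0 + \sum_{i=1}^{s} \beta_i\,L_i(x')
               + \sum_{j=1}^{r} \gamma_j\,t'^{\,j}
               + \sum_{i=1}^{s}\sum_{j=1}^{r} \delta_{ij}\,L_i(x')\,t'^{\,j}
```

The interaction coefficients δᵢⱼ are what let the structure "pick up both age and year effects" rather than assuming mortality improvement is uniform across ages.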
53

Protein Crystallization: Soft Matter and Chemical Physics Perspectives

Fusco, Diana January 2014 (has links)
X-ray and neutron crystallography are the predominant methods for obtaining atomic-scale information on biological macromolecules. Despite the success of these techniques, generating well-diffracting crystals critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. Better physico-chemical understanding remains elusive because of the large number of variables involved, hence little guidance is available to systematically identify solution conditions that promote crystallization.

The fields of structural biology and soft matter have independently sought out fundamental principles to rationalize protein crystallization. Yet the conceptual differences and limited overlap between the two disciplines may have prevented a comprehensive understanding of the phenomenon from emerging. Part of this dissertation focuses on computational studies of rubredoxin and human ubiquitin that bridge the two fields.

Using atomistic simulations, the protein crystal contacts are characterized, and patchy particle models are accordingly parameterized. Comparing the phase diagrams of these schematic models with experimental results enables a critical review of the assumptions behind the two approaches, and reveals insights about protein-protein interactions that can be leveraged to crystallize proteins more generally. In addition, exploration of the model parameter space provides a rationale for several experimental observations, such as the success and occasional failure of George and Wilson's proposal for protein crystallization conditions and the competition between different crystal forms.

These simple physical models illuminate the connection between protein phase behavior and protein-protein interactions, which are, however, remarkably sensitive to the protein's chemical environment. To help determine relationships between physico-chemical protein properties and crystallization propensity, statistical models are trained on samples for 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side-chain entropy and has been extensively reported in the literature. The other identifies specific electrostatic interactions not previously described in the crystallization context. Because evidence for two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes analyzed through state-of-the-art statistical models may thus guide macromolecular crystallization toward a more rational basis.

To conclude, the behavior of water in protein crystals is specifically examined. Water is not only essential for the correct functioning and folding of proteins, but it is also a key player in protein crystal assembly. Although water occupies up to 80% of the volume fraction of a protein crystal, its structure has so far received little attention and is often overly simplified in the structural refinement process. Merging information derived from molecular dynamics simulations and original structural information provides a way to better understand the behavior of water in crystals and to develop a method that enriches standard structural refinement.
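As an illustration of the Gaussian-process modelling described in the abstract above, here is a minimal, hypothetical sketch of a Gaussian-process classifier relating physico-chemical protein features to crystallization outcome; the feature set, data, and random labels are invented for the example and are not taken from the dissertation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical feature matrix: one row per protein, with columns such as
# predicted side-chain entropy, net charge, and isoelectric point.
rng = np.random.default_rng(0)
X = rng.normal(size=(182, 3))          # 182 proteins, 3 features (illustrative)
y = rng.integers(0, 2, size=182)       # 1 = crystallized, 0 = did not

# An RBF kernel lets the classifier capture the nonlinear trends
# that a linear statistical model would miss.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpc = GaussianProcessClassifier(kernel=kernel, random_state=0).fit(X, y)

# Predicted crystallization probability for a new protein's features.
print(gpc.predict_proba(rng.normal(size=(1, 3))))
```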
54

A Microdata Analysis Approach to Transport Infrastructure Maintenance

Svenson, Kristin January 2017 (has links)
Maintenance of transport infrastructure assets is widely advocated as key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. In dealing with immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and the possibility of making predictions of future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the selected analysis method in this dissertation and provided solutions to the problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, identifying the sections in a road network that are weaker due to, e.g., subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear. Differentiated road wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments with a stationary road condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
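As a hedged sketch of the time-to-event approach in Paper I, the following fits a parametric Weibull lifetime model to right-censored pavement survival data using the lifelines library; the column names and data values are invented for illustration and are not the dissertation's:

```python
import pandas as pd
from lifelines import WeibullFitter

# Hypothetical pavement records: observed lifetime in years and a flag
# indicating whether the section actually failed (1) or was still in
# service when observation ended (0, i.e. right-censored).
data = pd.DataFrame({
    "lifetime_years": [12.0, 8.5, 15.2, 20.1, 9.8, 17.3],
    "failed":         [1,    1,   0,    0,    1,   1],
})

wf = WeibullFitter()
wf.fit(data["lifetime_years"], event_observed=data["failed"])

# lambda_ (scale) and rho_ (shape) parameterize the Weibull hazard;
# the fitted distribution yields remaining-lifetime style estimates.
print(wf.lambda_, wf.rho_)
print(wf.median_survival_time_)
```

Handling censoring explicitly, as the `event_observed` flag does here, is what distinguishes a time-to-event model from naive regression on observed lifetimes.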
55

Two-dimensional expressive speech animation = Animação 2D de fala expressiva

Costa, Paula Dornhofer Paro, 1978- 26 August 2018 (has links)
Advisor: José Mario De Martino / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Facial animation technology faces an increasing demand for applications involving virtual assistants, salespeople, tutors, and newscasters; lifelike game characters; social agents; and tools for scientific experiments in psychology and the behavioral sciences. A relevant and challenging aspect of the development of talking heads is the realistic reproduction of the articulatory movements of speech combined with the elements of non-verbal communication and the expression of emotions. This work presents an image-based, or 2D, facial animation synthesis methodology that allows the reproduction of a wide range of expressive speech emotional states and also supports the modulation of head movements and the control of facial elements, such as the blinking of the eyes and the raising of the eyebrows. The synthesis of the animation uses a database of prototype images which are combined to produce animation keyframes. The weights used for combining the prototype images are derived from a statistical active appearance model (AAM), built from a set of sample images extracted from an audiovisual corpus of a real face. The generation of the animation keyframes is driven by the timed phonetic transcription of the speech to be animated and by the desired emotional state. The keyposes consist of expressive context-dependent visemes that implicitly model speech coarticulation effects. The transition between adjacent keyposes is performed through a non-linear image morphing algorithm. To evaluate the synthesized animations, a perceptual evaluation based on the recognition of emotions was performed. Among the contributions of this work is the construction of a database of expressive speech video and motion capture data for Brazilian Portuguese.
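A minimal sketch of the image-combination step described above: each keyframe is a weighted (convex) combination of prototype images, with weights of the kind an appearance model would supply. The array shapes and weight values below are invented for illustration:

```python
import numpy as np

# Hypothetical bank of prototype images (grayscale, 64x64), e.g. mouth
# shapes sampled from the audiovisual corpus.
prototypes = np.random.rand(5, 64, 64)

# Weights for one keyframe, e.g. derived from an active appearance
# model (AAM) fit; they form a convex combination.
w = np.array([0.1, 0.4, 0.3, 0.15, 0.05])
assert np.isclose(w.sum(), 1.0) and (w >= 0).all()

# The keyframe is the weighted sum over the prototype axis,
# i.e. a pixelwise convex combination of the prototype images.
keyframe = np.tensordot(w, prototypes, axes=1)
print(keyframe.shape)  # (64, 64)
```

In-between frames would then come from the non-linear morphing step between adjacent keyposes, which this sketch does not attempt to reproduce.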
56

Análise de custo-eficácia dos pagamentos por serviços ambientais em paisagens fragmentadas: estudo de caso de São Paulo / Cost-effectiveness analysis of payments for environmental services in fragmented landscapes: case study in the State of São Paulo

Arthur Nicolaus Fendrich 14 November 2017 (has links)
Although the dependence of human activities on ecosystem services has risen in the past decades, the current rate of genetic diversity loss has reached alarming levels, comparable to those of major extinction events. In addition to the traditional command-and-control approach to promoting conservation, growing attention has been given to economic instruments, especially Payments for Environmental Services (PES). Despite all the potentialities of the PES instrument, many programs fail to introduce scientific knowledge into their execution, and such a lack of foundation may result in low environmental and economic performance. The present research aims at evaluating the cost-effectiveness of PES in fragmented landscapes. The study area is the state of São Paulo, which has been fragmented by agricultural and pasture expansion and by the impacts linked to large population growth. A survey covering different PES programs for the restoration of native vegetation was sent to rural landowners, and the responses were analyzed and linked to socioeconomic and environmental characteristics through a zero-inflated beta mixed model within the GAMLSS framework. The model was then used to predict the enrollment of non-respondents in the different PES programs, and the investment curve for different conservation returns was constructed. Finally, the relationship between the total area for restoration and the amount of resources needed for each program was compared to the environmental budget of the state of São Paulo. Results show that PES is a very costly alternative relative to the state's environmental budgets and provides only limited benefits for restoration in the state of São Paulo. The work has a theoretical orientation, as it contributes to the comprehension of the feasibility of PES programs in fragmented landscapes, and a practical orientation, as it quantifies the amount of resources required by the programs analyzed.
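For reference, a zero-inflated beta response of the kind fitted above mixes a point mass at zero (landowners enrolling no area) with a beta density on the strictly positive proportions; a schematic density, in the mean-precision parameterization common in GAMLSS:

```latex
% y = proportion enrolled; p_0 = probability of a zero response;
% beta component parameterized by mean \mu and precision \phi:
f(y) =
\begin{cases}
p_0, & y = 0,\\[4pt]
(1 - p_0)\,
\dfrac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma\!\big((1-\mu)\phi\big)}\,
y^{\mu\phi - 1}\,(1 - y)^{(1-\mu)\phi - 1}, & 0 < y < 1,
\end{cases}
```

In the mixed-model formulation, p₀ and μ (and optionally φ) are each linked to covariates and random effects, which is how the socioeconomic and environmental characteristics enter the model.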
57

Caractérisation et modélisation de la variabilité au niveau du dispositif dans les MOSFET FD-SOI avancés / Characterization and modelling of device level variability in advanced FD-SOI MOSFETs

Pradeep, Krishna 08 April 2019 (has links)
"Moore's Law" has defined the advancement of the semiconductor industry for almost half a century. Device dimensions have shrunk with each new technology node, and the design community and the semiconductor market have always followed this advancement, creating applications that took better advantage of the new devices. But during the past decade, with device dimensions approaching the fundamental limits imposed by the materials, the pace of this scaling down has decreased. While the technology struggled to keep alive the spirit of "Moore's Law" using innovative techniques like 3D integration and new device architectures, the market also evolved to start making specific demands on the devices, such as the low-power, low-leakage devices demanded by Internet of Things (IoT) applications and the high-performance devices demanded by 5G and data centre applications. So the semiconductor industry has slowly moved away from being driven by technology advancement; it is now driven by applications.
Increasing power dissipation is an unavoidable outcome of the scaling process when higher-frequency applications are also targeted. Historically, this issue has been handled by replacing the basic transistors (BJTs by MOSFETs), freezing the operating frequency of the system, lowering the supply voltage, etc. The reduction of supply voltage is even more important for low-power applications like IoT, but it is limited by device variability: lowering the supply voltage implies a reduced margin for designers to handle that variability. This calls for improved tools that allow designers to predict device variability and evaluate its effect on the performance of their designs, and for innovations in technology to reduce device variability. This thesis concentrates on the first part, and evaluates how device variability can be accurately modelled and how its prediction can be included in the compact models used by designers in their SPICE simulations.
The thesis first analyses device variability in advanced FD-SOI transistors using direct measurements. On the spatial scale, depending on the distance between the two devices considered, variability can be classified as intra-die, inter-die, inter-wafer, inter-lot, or even between different fabs. For the sake of simplicity, all the variability within a single die can be grouped together as local variability, and the rest as global variability. Between two arbitrary devices there will be contributions from both local and global variability, in which case it is easier to term it the total variability.
Dedicated measurement strategies are developed using specialized test structures to directly evaluate the variability at different spatial scales using C-V and I-V characterisation. The effect of variability is first analysed on selected figures of merit (FOMs) and process parameters extracted from the C-V and I-V curves, for which parameter extraction methodologies are developed or existing methods improved. This analysis helps identify the distribution of the parameters and the possible correlations between them. The bias-dependent variability in the full I-V and C-V curves is then analysed through a universal metric, which works regardless of the spatial scale of the variability; it builds on the matching analysis previously reported for local variability and is extended here to global and total variability. Analysing the complete curves ensures that critical information in a particular bias range, invisible in the selected FOMs, is not missed. A statistical modelling approach is used to model the observed variability and to identify the sources of variation, in terms of the sensitivity to each source, using a physical compact model, Leti-UTSOI. The compact model is first calibrated on the C-V and I-V curves across different bias conditions and geometries; the analysis of the FOMs and their correlations identified missing dependencies in the compact model, which were then included through small modifications. The dominant sources of variability in device behaviour, in terms of C-V and I-V and also in terms of parasitics (such as the gate leakage current), are identified and quantified. This work paves the way to a greater understanding of device variability in FD-SOI transistors and can easily be adopted to improve the predictability of commercial SPICE compact models for device variability.
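For context, local (matching) variability of a device parameter P is conventionally analysed through pairwise differences between identically drawn devices; the classic Pelgrom model, which studies of this kind typically take as the starting point before extending to global and total variability, reads:

```latex
% Pelgrom mismatch model for a parameter P of a device pair with
% gate area W x L at mutual distance D; A_P is the area-scaling
% matching coefficient and S_P captures distance-dependent gradients:
\sigma^2\!\left(\Delta P\right) \;=\; \frac{A_P^2}{W L} \;+\; S_P^2\, D^2
```

The 1/(WL) term is why matching improves for larger devices, and the D² term is why spatial scale (intra-die versus inter-die and beyond) matters when classifying variability.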
58

Mechanisms and Applications of Solid-State Hydrogen Deuterium Exchange

Rishabh Tukra (10900263) 17 August 2021 (has links)
To prolong their long-term stability, protein molecules are commonly dispensed as lyophilized powders to be reconstituted before use. Evaluating the stability of these biomolecules in the solid state is routinely done using analytical measures such as glass transition temperature and residual moisture content, together with various spectroscopic techniques. However, these techniques often correlate poorly with long-term storage stability studies. As a result, time-intensive long-term storage stability studies are still the gold standard for evaluating protein formulations in the solid state. Over the past few years, our lab has developed solid-state hydrogen deuterium exchange-mass spectrometry (ssHDX-MS) as an analytical tool that probes the backbone of a protein molecule in the solid state. ssHDX-MS gives a snapshot of protein-matrix interactions in the solid state and has a quick turnaround of a few weeks, as opposed to a few months for accelerated stability testing. Additionally, various past studies have demonstrated that ssHDX-MS can be used for a wide range of biomolecules and shows strong correlation with the long-term stability studies routinely employed.

The main aim of this dissertation is to provide an initial understanding of the mechanism behind ssHDX-MS in structured protein formulations. Specifically, this dissertation studies the effects of various experimental variables on the ssHDX-MS of myoglobin formulations and demonstrates the utility of this analytical technique. First, the effects of varying temperature and relative humidity on ssHDX-MS of myoglobin formulations are studied with the help of statistical modeling. Second, the effects of pressure on ssHDX-MS of myoglobin formulations are evaluated at the intact-protein and peptide-digest levels. Finally, ssHDX-MS is used as a characterization tool to evaluate the effects of two different lyophilization methods on the structure and stability of myoglobin formulations. The results of the studies described in this dissertation show ssHDX-MS to be sensitive to changes in experimental parameters, namely temperature, relative humidity, pressure, and excipients. Additionally, ssHDX-MS results were in good agreement with other routinely employed analytical and stability-testing techniques when used to compare the effects of two lyophilization methods on myoglobin formulations.
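Deuterium uptake in HDX experiments is commonly summarized by fitting exponential exchange kinetics; a schematic biexponential form often used for such data is sketched below (the dissertation's own fitting choices may differ):

```latex
% Total deuterium uptake at labeling time t, split into fast- and
% slow-exchanging amide populations with amplitudes N and rate
% constants k; N_fast + N_slow bounds the exchangeable sites probed:
D(t) \;=\; N_{\mathrm{fast}}\left(1 - e^{-k_{\mathrm{fast}}\,t}\right)
\;+\; N_{\mathrm{slow}}\left(1 - e^{-k_{\mathrm{slow}}\,t}\right)
```

Changes in the fitted amplitudes and rate constants across formulations or process conditions are the kind of quantitative readout that can then be correlated with storage stability.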
59

Machine Learning Approaches to Reveal Discrete Signals in Gene Expression

Changlin Wan (12450321) 24 April 2022 (has links)
Gene expression is an intricate process that determines different cell types and functions in metazoans, and most of its regulation is communicated through discrete signals, such as whether the DNA helix is open or whether an enzyme binds its target. Understanding the regulatory signals of this selective expression process is essential to a full comprehension of biological mechanisms and complicated biological systems. In this research, we seek to reveal the discrete signals in gene expression by utilizing novel machine learning approaches. Specifically, we focus on two types of data: chromatin conformation capture (3C) and single-cell RNA sequencing (scRNA-seq). To identify potential regulators, we utilize a new hypergraph neural network to predict genome interactions, where we find that gene co-regulation may result from shared enhancer elements. To reveal discrete expression states from scRNA-seq data, we propose a novel model called LTMG that considers biological noise and shows better goodness of fit than existing models. Next, we apply Boolean matrix factorization to find co-regulation modules from the identified expression states, where we reveal general properties of cancer cells across different patients. Lastly, to find more reliable modules, we analyze the bias in the data and propose BIND, the first algorithm to quantify column- and row-wise bias in a binary matrix.
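A minimal sketch of the Boolean matrix factorization objective mentioned above: a binary expression-state matrix X is approximated by the Boolean product of two low-rank binary factors, with reconstruction error counted as the number of mismatched entries. The toy matrix and hand-chosen factors below are invented for illustration; real BMF solvers search for the factors heuristically:

```python
import numpy as np

# Toy binary expression-state matrix: rows = genes, columns = cells.
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 1, 1, 1]], dtype=bool)

# Candidate rank-2 factors: A assigns genes to co-regulation modules,
# B assigns modules to cells (chosen by hand for this toy example).
A = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 1]], dtype=bool)
B = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=bool)

# Boolean product: (A o B)[i, j] = OR_k (A[i, k] AND B[k, j]).
X_hat = np.einsum("ik,kj->ij", A.astype(int), B.astype(int)) > 0

# Reconstruction error = number of mismatched entries; 0 here because
# the toy matrix is exactly the Boolean product of A and B.
print("mismatches:", int(np.sum(X_hat != X)))
```

The OR in the Boolean product is what distinguishes BMF from ordinary low-rank factorization: a gene belonging to several modules contributes to a cell if any of its modules is active there, which matches the co-regulation interpretation.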
60

Methods and algorithms to learn spatio-temporal changes from longitudinal manifold-valued observations / Méthodes et algorithmes pour l’apprentissage de modèles d'évolution spatio-temporels à partir de données longitudinales sur une variété

Schiratti, Jean-Baptiste 23 January 2017 (has links)
We propose a generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from manifold-valued observations obtained at multiple time points for an individual or group of individuals. The progression is modeled by continuous trajectories in the space of measurements, which is assumed to be a Riemannian manifold. The group-average trajectory is defined by the fixed effects of the model. To define the individual trajectories, we introduce the notion of "parallel variations" of a curve on a Riemannian manifold. For each individual, the individual trajectory is constructed by considering a parallel variation of the average trajectory and reparametrizing this parallel in time. The subject-specific spatiotemporal transformations, namely the parallel variation and the time reparametrization, are defined by the individual random effects and make it possible to quantify the changes in direction and the pace at which the trajectories are followed. The framework of Riemannian geometry allows the model to be used with any kind of measurements with smooth constraints. A stochastic version of the Expectation-Maximization algorithm, the Monte Carlo Markov Chains Stochastic Approximation EM algorithm (MCMC-SAEM), is used to produce maximum a posteriori estimates of the parameters. The use of the MCMC-SAEM together with a numerical scheme for the approximation of parallel transport is discussed. In addition, the method is validated on synthetic data and in high-dimensional settings. We also provide experimental results obtained on health data.
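The time reparametrization described above is, in published versions of this class of model, an affine warp of each subject's observation times; a schematic form, with the acceleration factor αᵢ and time shift τᵢ as random effects (exact parameterization in the manuscript may differ):

```latex
% Subject-specific time warp mapping individual time t onto the time
% scale of the average trajectory \gamma, around a reference time t_0:
\psi_i(t) \;=\; \alpha_i\,(t - t_0 - \tau_i) \;+\; t_0,
\qquad \alpha_i = e^{\xi_i},\quad \xi_i \sim \mathcal{N}(0, \sigma_\xi^2)
% Individual trajectory: a parallel variation \eta^{w_i}(\gamma) of the
% average trajectory, followed in warped time:
\gamma_i(t) \;=\; \eta^{w_i}(\gamma)\big(\psi_i(t)\big)
```

Here αᵢ captures how fast a subject progresses relative to the group and τᵢ captures how early or late progression starts, while the parallel-variation term wᵢ shifts the trajectory's direction on the manifold.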
