Computer vision for the analysis of cellular activity. Ellabban, Amr. January 2014.
In the field of cell biology, there is an increasing use of time-lapse data to understand cellular function. Using automated microscopes, large numbers of images can be acquired, delivering videos of cell samples over time. Analysing the images manually is extremely time consuming as there are typically thousands of individual images in any given sequence. Additionally, decisions made by those analysing the images, e.g. labelling a mitotic phase (one of a set of distinct sequential stages of cell division) can be subjective, especially around transition boundaries between phases, leading to inconsistencies in the annotation. There is therefore a need for tools which facilitate automated high-throughput analysis. In this thesis we develop systems to automatically detect, track and analyse sub-cellular structures in image sequences to address biological research needs in three areas: (i) Mitotic phase labelling, (ii) Mitotic defect detection, and (iii) Cell volume estimation. We begin by presenting a system for automated segmentation and mitotic phase labelling using temporal models. This work takes the novel approach of using temporal features evaluated over the whole of the mitotic phases rather than over single frames, thereby capturing the distinctive behaviour over the phases. We compare and contrast three different temporal models: Dynamic Time Warping, Hidden Markov Models, and Semi Markov Models. A new loss function is proposed for the Semi Markov model to make it more robust to inconsistencies in data annotation near transition boundaries. We then present an approach for detecting subtle chromosome segregation errors in mitosis in embryonic stem cells, targeting two cases: misaligned chromosomes in a metaphase cell, and lagging chromosomes between anaphase cells. We additionally explore an unsupervised approach to detect unusual mitotic occurrences and test its applicability to detecting misaligned metaphase chromosomes. Finally, we describe a fully automated method, suited to high-throughput analysis, for estimating the volume of spherical mitotic cells based on a learned membrane classifier and a circular Hough transform. We also describe how it is being used further in biological research.
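As an illustration of the temporal-matching idea behind the first of the three compared models, the sketch below computes a plain Dynamic Time Warping distance between an observed feature sequence and a reference sequence. The single scalar feature per frame and the example values are hypothetical simplifications, not the thesis's actual features or alignment procedure.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])           # local distance
            cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                 cost[i, j - 1],            # deletion
                                 cost[i - 1, j - 1])        # match
    return cost[n, m]

# Hypothetical example: compare a cell's feature profile over a division
# against a reference profile for one mitotic phase.
observed = np.array([0.1, 0.4, 0.9, 0.8, 0.3])
reference = np.array([0.1, 0.5, 0.9, 0.2])
print(dtw_distance(observed, reference))
```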
2 |
Computational model validation using a novel multiscale multidimensional spatio-temporal meta model checking approach. Parvu, Ovidiu. January 2016.
Computational models of complex biological systems can provide a better understanding of how living systems function, but they need to be validated before being employed for real-life (e.g. clinical) applications. One of the most frequently employed in silico approaches for validating such models is model checking. Traditional model checking approaches are limited to uniscale, non-spatial computational models because they do not explicitly distinguish between different scales and do not take properties of (emergent) spatial structures (e.g. the density of a multicellular population) into account. This thesis defines a novel multiscale multidimensional spatio-temporal meta model checking methodology which enables validating multiscale (spatial) computational models of biological systems against how both numeric (e.g. concentrations) and spatial system properties are expected to change over time and across multiple scales. The methodology has two important advantages. First, it supports computational models encoded in various high-level modelling formalisms, because it is defined relative to the time series data they produce rather than the models themselves. Second, the methodology is generic, because it can be automatically reconfigured for case-study-specific types of spatial structures and properties using the meta model checking approach. In addition, the methodology could be employed in multiple domains of science, although its applicability is illustrated here only on biological case studies. To automate the computational model validation process, the approach was implemented in software tools, which are made freely available online. Their efficacy is illustrated on two uniscale models, encoding phase variation in bacterial colonies and the chemotactic aggregation of cells, and on four multiscale models, encoding rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle, and the acute inflammation of the gut and lung. This novel model checking approach will enable the efficient construction of reliable multiscale computational models of complex systems.
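To make the idea of checking how numeric properties are expected to change over time more concrete, here is a minimal sketch that evaluates two simple temporal statements against a toy time series. The hypothetical concentration trace and the two one-line operators only stand in for, and do not reproduce, the multiscale spatio-temporal logic developed in the thesis.

```python
import numpy as np

# Toy time series of a numeric state variable (e.g. a species concentration).
concentration = np.array([0.2, 0.35, 0.6, 0.85, 0.9, 0.7])

def eventually(series, predicate):
    """F p: the predicate holds at some time point of the series."""
    return any(predicate(x) for x in series)

def globally(series, predicate):
    """G p: the predicate holds at every time point of the series."""
    return all(predicate(x) for x in series)

# "The concentration eventually exceeds 0.8 and never drops below 0.1."
holds = eventually(concentration, lambda x: x > 0.8) and \
        globally(concentration, lambda x: x >= 0.1)
print(holds)
```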
3 |
COVID-19 Disease Mapping Based on Poisson Kriging Model and Bayesian Spatial Statistical Model. Mu, Jingrui. 25 January 2022.
Since the start of the COVID-19 pandemic in December 2019, much research has been done to develop spatial-temporal methods to track and predict the spread of the virus. In this thesis, a COVID-19 dataset containing the number of biweekly infected cases registered in Ontario from the start of the pandemic to the end of June 2021 is analysed using Bayesian spatial-temporal models and area-to-area (area-to-point) Poisson Kriging models. With the Bayesian models, spatial-temporal effects on infection risk are examined, while the area-to-point (ATP) Poisson Kriging models show how the virus spreads over space and reveal its spatial clustering. Based on these models, a Shiny app (https://mujingrui.shinyapps.io/covid19) is developed to present the results.
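For intuition about the kriging side of the analysis, the following is a minimal ordinary kriging sketch in which a rate at an unsampled point is predicted as a covariance-weighted combination of regional observations. The coordinates, rates, and exponential covariance model are all hypothetical; the area-to-area and area-to-point Poisson Kriging used in the thesis additionally accounts for areal supports and population sizes.

```python
import numpy as np

# Hypothetical centroids of four health regions and their observed rates.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rates = np.array([2.1, 3.4, 1.8, 4.0])            # cases per 1,000 people

def cov(h, sill=1.0, rng=1.5):
    """Exponential covariance model (an assumed, not fitted, structure)."""
    return sill * np.exp(-h / rng)

def ordinary_kriging(coords, values, target):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kriging system: covariances bordered by a Lagrange-multiplier row/column.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]                  # kriging weights (sum to 1)
    return w @ values

print(ordinary_kriging(coords, rates, np.array([0.5, 0.5])))
```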
4 |
Modelagem estatística de dados pesqueiros / Statistical modelling of fisheries data. Mayer, Fernando de Pol. 08 March 2019.
Fluctuation in the abundance of fish stocks has been a subject of scientific investigation since the end of the 19th century. Fisheries science was established to understand the impacts of fishing on renewable natural resources and, above all, to avoid collapses and preserve the exploited populations. The primary data source for fisheries research is the fishery itself. One problem associated with this fact is that fishermen always look for high-yield areas, which makes catch data a non-random sample. In addition, among the many species captured, only a few are considered targets, while the rest are called bycatch. As a consequence, fisheries data present particular features, such as asymmetry, excess zeros, and overdispersion. It is well known that catches are highly dependent in time and space; however, this spatio-temporal structure is traditionally ignored when analysing catch data. The aim of this work was therefore to apply recently developed statistical approaches capable of handling the challenging features of fisheries data. Initially, the temporal structure of the catch of a bycatch species was modelled through Bayesian hierarchical models, using the Integrated Nested Laplace Approximation (INLA), a recent method that performs Bayesian inference through approximation rather than sampling. This approach is computationally more efficient and turned out to be quite viable for analysing fisheries data; we showed that the temporal dependence is indeed important and cannot be ignored in this kind of data. Following the same reasoning, a two-stage estimation approach was developed for obtaining the parameters of biomass dynamic models. This two-stage approach allows the inclusion of the spatial and temporal dependence structures inherently present in fisheries data, and the results showed that when these structures are taken into account, the parameter estimates are unbiased and consistent. Multivariate approaches would be natural choices for analysing the catches of several species; however, most of these techniques rely on the normality assumption, which makes them of little use for catch data. The recently developed Multivariate Covariance Generalized Linear Models (McGLMs) allow the responses to follow any distribution from the exponential dispersion family, so data with dependence, excess zeros, and overdispersion can be handled in a natural way. In this work, McGLMs were used to assess the influence of a device designed to prevent seabird bycatch on the catches of two target species. We concluded that the device, proven effective in reducing seabird captures, does not reduce the catch of the targeted species.
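As background for the biomass dynamic models mentioned above, one common surplus-production form is the Schaefer model, B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]. The short sketch below simulates it with hypothetical parameters and catches; it is not the two-stage spatio-temporal estimation procedure developed in the thesis.

```python
import numpy as np

def schaefer(r, K, B0, catches):
    """Simulate a Schaefer surplus-production (biomass dynamic) model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = np.empty(len(catches) + 1)
    B[0] = B0
    for t, C in enumerate(catches):
        B[t + 1] = max(B[t] + r * B[t] * (1.0 - B[t] / K) - C, 1e-6)
    return B

# Hypothetical parameters and catch history (same units as biomass).
biomass = schaefer(r=0.4, K=1000.0, B0=800.0,
                   catches=np.array([60.0, 80.0, 120.0, 150.0, 90.0]))
print(np.round(biomass, 1))
```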
5 |
Basal Metabolic Rate (BMR) estimation using Probabilistic Graphical Models. Jackson, Zara. January 2019.
Obesity is a growing problem globally: 2.3 billion adults are currently overweight, and this number is rising. The most common method for weight loss is calorie counting, in which a person must maintain a calorie deficit to lose weight. Basal Metabolic Rate (BMR) accounts for the majority of the calories a person burns in a day, and it is therefore a major contributor to accurate calorie counting. This thesis uses a Dynamic Bayesian Network to estimate BMR for a sample of 219 individuals from all Body Mass Index (BMI) categories, using data collected through the Lifesum app. The estimated BMR values were compared with the commonly used Harris-Benedict equation, showing that food journaling is a sufficient method to estimate BMR. Next-day weight predictions were also computed based on the estimated BMR. The results showed that the Harris-Benedict equation produced more accurate predictions than the proposed metabolic model; therefore, more work is necessary to find a model that accurately estimates BMR.
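For reference, the Harris-Benedict baseline against which the estimates were compared can be written as a small function. The sketch below uses the commonly cited revised coefficients (Roza and Shetty, 1984); the example inputs are hypothetical.

```python
def bmr_harris_benedict(weight_kg, height_cm, age_years, sex):
    """Revised Harris-Benedict estimate of BMR in kcal/day."""
    if sex == "male":
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_years
    return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_years

# Example: a 70 kg, 175 cm, 30-year-old male.
print(round(bmr_harris_benedict(70, 175, 30, "male")))  # ~1696 kcal/day
```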
6 |
Spatio-Temporal Representations and Analysis of Brain Function from fMRI. Janoos, Firdaus H. 17 March 2011.
No description available.
7 |
Modélisation de l'interprétation des pianistes & applications d'auto-encodeurs sur des modèles temporels [Modeling pianists' interpretation and applications of auto-encoders to temporal models]. Lauly, Stanislas. 04 1900.
This thesis first addresses the problem of modeling pianists' interpretations using machine learning, and then presents new temporal models that use auto-encoders to improve sequence learning.
We present previous work in the field of modeling musical expression, including Professor Widmer's statistical models. We then discuss our unique dataset created specifically for our task. This dataset is composed of 13 different pianists recorded on the famous Bösendorfer 290SE piano. Finally, we present the learning results of neural networks and recurrent neural networks in detail. These algorithms are applied to the dataset to learn expressive variations specific to a style of music.
We also present novel statistical models involving the use of auto-encoders in recurrent neural networks. To test the limits of these algorithms' ability to learn, we use two artificial datasets developed at the University of Toronto.
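As a generic illustration of combining auto-encoders with recurrent networks for sequence learning, the sketch below defines a toy recurrent sequence auto-encoder in PyTorch. The architecture, feature names and dimensions are assumptions made for illustration and do not correspond to the models developed in the thesis.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Toy recurrent sequence auto-encoder: encode a performance-feature
    sequence into a fixed vector, then decode it back frame by frame."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        _, h = self.encoder(x)           # h: (1, batch, hidden) summary of the sequence
        y, _ = self.decoder(x, h)        # condition the decoder on that summary
        return self.out(y)

# Hypothetical input: 8 performances, 50 notes each, 3 expressive
# features per note (e.g. velocity, tempo deviation, articulation).
x = torch.randn(8, 50, 3)
model = SeqAutoencoder(n_features=3)
recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
loss.backward()
```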
9 |
The effects of physical, biological and anthropogenic noise on the occurrence of dolphins in the Pacific region of the Panama Canal. Campbell Castillo, Inez. January 2014.
The main aim of this thesis was to investigate the occurrence of dolphins in Pacific waters adjacent to the Panama Canal in the context of biological, temporal and spatial factors. Acoustic data were collected at 101 sites over a range of distances and depths from the shipping region, between March 2010 and April 2011, in a diurnal cycle over a total of 114 recording days. Received sound levels were split into 1/3-octave bands to study variation in sound pressure levels and then converted to spectrum density levels to show the components of the background noise in this region. Generalised Linear Models were used to relate dolphin whistle detections to temporal, spatial, environmental and acoustic variables. The major sources of background noise were biological noise from soniferous fish and snapping shrimp, and anthropogenic noise from vessels, characterised by the mid to high frequencies produced by artisanal fishing boats. There was monthly and diurnal variation, with some locations characterised by loud sounds in the mid to high frequencies at night. Analysis of whistle characteristics revealed that the frequencies and range of the whistles differed from those previously reported under similar conditions. Whistles varied diurnally and in the presence of fish choruses and fishing boats, and the study highlights a strong correlation between fish choruses and whistle detection. Temporal and spatial models showed that whistle detections varied monthly and in relation to fish noise and small-vessel engine noise. Dolphins were distributed throughout most of the study area; however, whistle detections varied with distance from the coast. The results provide new knowledge about the composition of background noise in this region and the first information on the ecology of dolphin whistles in relation to this background noise, especially fish choruses.
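To illustrate the kind of model used to relate whistle detections to covariates, here is a minimal Generalised Linear Model sketch with a binomial response. The variable names, simulated data and model formula are hypothetical and only stand in for the analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical recording summaries: whether whistles were detected,
# plus candidate temporal, spatial and acoustic covariates.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "whistles":    rng.integers(0, 2, 200),      # 1 = whistles detected
    "month":       rng.integers(1, 13, 200),
    "fish_chorus": rng.integers(0, 2, 200),      # fish chorus present
    "boat_noise":  rng.normal(110, 5, 200),      # received level, dB re 1 uPa
    "dist_coast":  rng.uniform(0.5, 20.0, 200),  # km from the coast
})

model = smf.glm("whistles ~ C(month) + fish_chorus + boat_noise + dist_coast",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())
```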