121 |
The Role of Actively Created Doppler Shifts in Bats: Behavioral Experiments and Biomimetic Reproductions
Yin, Xiaoyan, 19 January 2021 (has links)
Many animal species are known for their unparalleled abilities to encode sensory information that supports fast, reliable action in complex environments, but the mechanisms often remain unclear. Through fast ear motions, bats can encode information on target direction into time-frequency Doppler signatures. These species were thought to be evolutionarily tuned to Doppler shifts generated by a prey's wing beat. Self-generated Doppler shifts from the bat's own flight motion were for the most part considered a nuisance that the bats compensate for. My findings indicate that these Doppler-based biosonar systems may be more complicated than previously thought because the animals can actively inject Doppler shifts into their input signals. The work in this dissertation presents a novel nonlinear principle for sensory information encoding in bats. Up to now, sound-direction finding has required either multiple signal frequencies or multiple pressure receivers. Inspired by bat species that add Doppler shifts to their biosonar echoes through fast ear motions, I present a source-direction finding paradigm based on a single frequency and a single pressure receiver. Non-rigid ear motions produce complex Doppler signatures that depend on source direction but are difficult to interpret. To demonstrate that deep learning can solve this problem, I have combined a soft-robotic microphone baffle that mimics a deforming bat ear with a CNN for regression. With this integrated cyber-physical setup, I have been able to achieve a direction-finding accuracy of 1 degree based on a single baffle motion. / Doctor of Philosophy / Bats are well known for their intricate biosonar systems, which allow the animals to navigate even the most complex natural environments. While the mechanism behind most of these abilities remains unknown, an interesting observation is that some bat species produce fast movements of their ears when actively exploring their surroundings. By moving their pinnae, the bats create a time-variant reception characteristic, yet so far very little research has been directed at exploring the potential benefits of this behavior. One hypothesis is that the speed of the pinna motions modulates the received biosonar echoes with Doppler-shift patterns that could convey sensory information that is useful for navigation. This dissertation explores this hypothetical dynamic sensing mechanism by building a soft-robotic biomimetic receiver to replicate the dynamics of the bat pinna. The experiments with this biomimetic pinna robot demonstrate that the non-rigid ear motions produce Doppler signatures that contain information about the direction of a sound source. However, these patterns are difficult to interpret because of their complexity. By combining the soft-robotic pinna with a convolutional neural network for processing the Doppler signatures in the time-frequency domain, I have been able to accurately estimate the source direction with an error margin of less than one degree. This working system, composed of a soft-robotic biomimetic ear integrated with a deep neural net, demonstrates that the use of Doppler signatures as a source of sensory information is a viable hypothesis for explaining the sensory skills of bats.
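A minimal sketch of the kind of CNN regressor described above, assuming a spectrogram-like time-frequency Doppler signature as input and a single azimuth angle as output; the network size, input dimensions, and training data are hypothetical and not taken from the dissertation:

```python
# Minimal sketch (not the author's code): a small CNN that regresses a source
# direction angle from a time-frequency Doppler signature (spectrogram).
# Input shape, network size, and training data are illustrative assumptions.
import torch
import torch.nn as nn

class DopplerDirectionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 1)  # single output: direction angle in degrees

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

model = DopplerDirectionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch: 8 spectrograms (1 channel, 128 frequency bins x 64 time frames)
# with their ground-truth source azimuths; real data would come from the
# soft-robotic pinna recordings.
spectrograms = torch.randn(8, 1, 128, 64)
angles = torch.rand(8, 1) * 180.0 - 90.0

pred = model(spectrograms)
loss = loss_fn(pred, angles)
loss.backward()
optimizer.step()
```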
|
122 |
Development and application of computational tools for in vitro studies of human cardiac diseases and cardioactive drugs
Kim, Youngbin, January 2024 (has links)
As cardiovascular disease remains the global leading cause of death, there is an urgent need to study the pathophysiology of the heart and to effectively evaluate cardioprotective drugs. Due to the difficulty in studying the human heart in vivo and sourcing human heart tissues, induced pluripotent stem cells (iPSCs) have provided a promising alternative for modeling cardiac diseases and evaluating candidate drugs. In recent years, computational methods, including machine learning, have given rise to a new class of tools to evaluate cardiac function more rapidly and comprehensively.
In this dissertation, I develop and apply computational tools to probe the function of human iPSC-derived cardiac models (Aim 1), apply machine learning methods in the context of cardiomyocyte disease phenotyping and cardioactive drug profiling (Aim 2), and develop a pipeline for deep learning-driven cardiac fibroblast phenotypic drug discovery (Aim 3).
|
123 |
Design of a Novel Wearable Ultrasound Vest for Autonomous Monitoring of the Heart Using Machine Learning
Goodman, Garrett G., January 2020 (has links)
No description available.
|
124 |
Advancing human pose and gesture recognition
Pfister, Tomas, January 2015 (has links)
This thesis presents new methods in two closely related areas of computer vision: human pose estimation, and gesture recognition in videos. In human pose estimation, we show that random forests can be used to estimate human pose in monocular videos. To this end, we propose a co-segmentation algorithm for segmenting humans out of videos, and an evaluator that predicts whether the estimated poses are correct or not. We further extend this pose estimator to new domains (with a transfer learning approach), and enhance its predictions by predicting the joint positions sequentially (rather than independently) in an image, and using temporal information in the videos (rather than predicting the poses from a single frame). Finally, we go beyond random forests, and show that convolutional neural networks can be used to estimate human pose even more accurately and efficiently. We propose two new convolutional neural network architectures, and show how optical flow can be employed in convolutional nets to further improve the predictions. In gesture recognition, we explore the idea of using weak supervision to learn gestures. We show that we can learn sign language automatically from signed TV broadcasts with subtitles by letting algorithms 'watch' the TV broadcasts and 'match' the signs with the subtitles. We further show that if even a small amount of strong supervision is available (as there is for sign language, in the form of sign language video dictionaries), this strong supervision can be combined with weak supervision to learn even better models.
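As a rough illustration of the convolutional approach to pose estimation mentioned above, the following sketch shows a generic heatmap-regression network that predicts one confidence map per joint; the architecture, joint count, and image size are assumptions, not the thesis architectures:

```python
# Minimal sketch (not the thesis code): a convolutional network that predicts
# one heatmap per body joint from an RGB frame, the general shape of
# heatmap-based pose estimators. Sizes and the number of joints are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 7  # e.g. head, shoulders, elbows, wrists

class PoseHeatmapNet(nn.Module):
    def __init__(self, num_joints=NUM_JOINTS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to one confidence map per joint
        self.head = nn.Conv2d(128, num_joints, kernel_size=1)

    def forward(self, x):
        heatmaps = self.head(self.backbone(x))
        # upsample back to input resolution so per-map argmax gives pixel coordinates
        return nn.functional.interpolate(heatmaps, scale_factor=4,
                                         mode="bilinear", align_corners=False)

net = PoseHeatmapNet()
frame = torch.randn(1, 3, 256, 256)          # one video frame
maps = net(frame)                            # (1, NUM_JOINTS, 256, 256)
# predicted joint locations = per-heatmap argmax (row, column)
coords = [divmod(int(m.argmax()), maps.shape[-1]) for m in maps[0]]
```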
|
125 |
Desenvolvimento de uma instrumentação de captura de imagens in situ para estudo da distribuição vertical do plâncton / Development of an in situ image capture instrumentation to study the vertical distribution of plankton
Medeiros, Maia Gomes, 18 December 2017 (has links)
Desenvolveu-se, pela Universidade de São Paulo, o protótipo de um equipamento submersível de captura para estudo de plâncton. Baseado na técnica shadowgraph, é formado por um feixe de LED infravermelho colimado e uma câmera de alta resolução, executados por um sistema de controle automatizado. Foram utilizados softwares de visão computacional desenvolvidos pelo Laboratório de Sistemas Planctônicos (LAPS) que executam várias tarefas, incluindo a captura e segmentação de imagens e a extração de informações com o intuito de classificar automaticamente novos conjuntos de regiões de interesse (ROIs). O teste de aprendizado de máquina contou com 57 mil quadros e 230 mil ROIs e teve, como base, dois algoritmos de classificação: o Support Vector Machine (SVM) e o Random Forest (RF). O conjunto escolhido para o treinamento inicial continha 15 classes de fito e zooplâncton, às quais foi atribuído um subconjunto de 5 mil ROIs. Os ROIs foram separados em grandes classes de, pelo menos, 100 ROIs cada. O resultado, calculado por meio do algoritmo de aprendizagem RF e SVM e fundamentado no método de validação cruzada, teve uma precisão de 0,78 e 0,79, respectivamente. O conjunto de imagens é proveniente de Ubatuba, no estado de São Paulo. Os perfis verticais elaborados apresentaram diferentes padrões de distribuição de partículas. O instrumento tem sido útil para a geração de dados espacialmente refinados em ecossistemas costeiros e oceânicos. / The University of São Paulo developed an underwater image capture system prototype to study plankton. Based on the shadowgraph imaging technique, the system consists of a collimated infrared LED beam and a high-resolution camera, both run by an automated control system. Computer vision software developed by the Plankton Systems Laboratory (LAPS) was used to perform several tasks, including image capture, image segmentation, and the extraction of information to automatically classify new sets of regions of interest (ROIs). The machine learning test used 57,000 frames and 230,000 ROIs and was based on two classification algorithms: Support Vector Machine (SVM) and Random Forest (RF). The set chosen for initial training contained 15 phytoplankton and zooplankton classes, to which a subset of 5,000 ROIs was assigned. The ROIs were grouped into broad classes of at least 100 ROIs each. The results, obtained with the RF and SVM learning algorithms and based on cross-validation, showed precisions of 0.78 and 0.79, respectively. The image set comes from Ubatuba, in the state of São Paulo. The vertical profiles produced show different particle distribution patterns. The instrument has been useful for generating spatially refined data in coastal and oceanic ecosystems.
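A minimal sketch of the kind of RF-versus-SVM comparison with cross-validation described in the abstract, using synthetic ROI feature vectors; the feature dimensionality and data are illustrative assumptions, not the LAPS pipeline:

```python
# Minimal sketch (not the LAPS pipeline): comparing Random Forest and SVM
# classifiers on ROI feature vectors with cross-validated macro precision.
# Feature dimensionality and class count are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 64))          # 5,000 ROIs described by 64 features each
y = rng.integers(0, 15, size=5000)       # 15 phyto/zooplankton classes

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    # 5-fold cross-validation, scored by macro-averaged precision
    scores = cross_val_score(clf, X, y, cv=5, scoring="precision_macro")
    print(f"{name}: mean precision = {scores.mean():.2f}")
```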
|
127 |
Support vector classification analysis of resting state functional connectivity fMRI
Craddock, Richard Cameron, 17 November 2009 (links)
Since its discovery in 1995, resting state functional connectivity derived from functional MRI data has become a popular neuroimaging method for studying psychiatric disorders. Current methods for analyzing resting state functional connectivity in disease involve thousands of univariate tests and the specification of regions of interest to employ in the analysis. There are several drawbacks to these methods. First, the mass univariate tests employed are insensitive to the information present in distributed networks of functional connectivity. Second, the null hypothesis testing employed to select functional connectivity differences between groups does not evaluate the predictive power of identified functional connectivities. Third, the specification of regions of interest is confounded by experimenter bias in terms of which regions should be modeled and by experimental error in terms of the size and location of these regions. The objective of this dissertation is to improve the methods for functional connectivity analysis using multivariate predictive modeling, feature selection, and whole brain parcellation.

A method of applying support vector classification (SVC) to resting state functional connectivity data was developed in the context of a neuroimaging study of depression. The interpretability of the obtained classifier was optimized using feature selection techniques that incorporate reliability information. The problem of selecting regions of interest for whole brain functional connectivity analysis was addressed by clustering whole brain functional connectivity data to parcellate the brain into contiguous, functionally homogeneous regions. This newly developed framework was applied to derive a classifier capable of correctly separating the functional connectivity patterns of patients with depression from those of healthy controls 90% of the time. The features most relevant to the obtained classifier match those identified in previous studies, but also include several regions not previously implicated in the functional networks underlying depression.
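As a rough sketch of the general approach (not the dissertation's framework), the following example vectorizes a per-subject functional connectivity matrix, applies simple univariate feature selection, and trains a linear SVC to separate two groups; the region count, subject count, and data are synthetic assumptions:

```python
# Minimal sketch: vectorising a functional connectivity matrix per subject and
# training a linear SVC to separate patients from controls, with simple
# univariate feature selection. All data here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_regions = 40, 90
labels = np.array([0] * 20 + [1] * 20)           # 0 = control, 1 = depression

# Upper-triangular entries of each subject's region-by-region correlation matrix
iu = np.triu_indices(n_regions, k=1)
features = np.empty((n_subjects, len(iu[0])))
for s in range(n_subjects):
    ts = rng.normal(size=(150, n_regions))       # resting-state time series per region
    features[s] = np.corrcoef(ts, rowvar=False)[iu]

clf = make_pipeline(SelectKBest(f_classif, k=200), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, features, labels, cv=5)
print(f"5-fold cross-validated accuracy (toy data): {acc.mean():.2f}")
```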
|
128 |
Estimation of glottal source features from the spectral envelope of the acoustic speech signal
Torres, Juan Félix, 17 May 2010 (links)
Speech communication encompasses diverse types of information, including phonetics, affective state, voice quality, and speaker identity. From a speech production standpoint, the acoustic speech signal can be mainly divided into glottal source and vocal tract components, which play distinct roles in rendering the various types of information it contains. Most deployed speech analysis systems, however, do not explicitly represent these two components as distinct entities, as their joint estimation from the acoustic speech signal becomes an ill-defined blind deconvolution problem. Nevertheless, because of the desire to understand glottal behavior and how it relates to perceived voice quality, there has been continued interest in explicitly estimating the glottal component of the speech signal. To this end, several inverse filtering (IF) algorithms have been proposed, but they are unreliable in practice because of the blind formulation of the separation problem. In an effort to develop a method that can bypass the challenging IF process, this thesis proposes a new glottal source information extraction method that relies on supervised machine learning to transform smoothed spectral representations of speech, which are already used in some of the most widely deployed and successful speech analysis applications, into a set of glottal source features. A transformation method based on Gaussian mixture regression (GMR) is presented and compared to current IF methods in terms of feature similarity, reliability, and speaker discrimination capability on a large speech corpus. In addition, potential representations of the spectral envelope of speech are investigated for their ability to represent glottal source variation in a predictable manner. The proposed system was found to produce glottal source features that reasonably matched their IF counterparts in many cases, while being less susceptible to spurious errors. The development of the proposed method entailed a study into the aspects of glottal source information that are already contained within the spectral features commonly used in speech analysis, yielding an objective assessment of the expected advantages of explicitly using glottal information extracted from the speech signal via currently available IF methods, versus the alternative of relying on the glottal source information that is implicitly contained in spectral envelope representations.
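A minimal sketch of Gaussian mixture regression as a mapping from spectral-envelope features to glottal source features, assuming a GMM fit on the joint feature vectors and prediction by the conditional mean; the dimensions and data are illustrative, not the thesis setup:

```python
# Minimal sketch (not the thesis system): Gaussian mixture regression (GMR)
# that maps a smoothed spectral-envelope vector to glottal source features.
# A GMM is fit on the joint [envelope, glottal] vectors; the prediction is the
# conditional mean of the glottal part given the envelope part.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, X, dx):
    """Conditional mean E[y | x] under a GMM fit on joint [x, y] vectors."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    preds = []
    for x in X:
        cond_means, resp = [], []
        for k in range(gmm.n_components):
            mx, my = means[k, :dx], means[k, dx:]
            Sxx, Sxy = covs[k][:dx, :dx], covs[k][:dx, dx:]
            # responsibility of component k for this input
            resp.append(weights[k] * multivariate_normal.pdf(x, mx, Sxx))
            # per-component conditional mean of y given x
            cond_means.append(my + Sxy.T @ np.linalg.solve(Sxx, x - mx))
        resp = np.array(resp) / np.sum(resp)
        preds.append(np.sum(resp[:, None] * np.array(cond_means), axis=0))
    return np.array(preds)

rng = np.random.default_rng(0)
dx, dy = 24, 3                       # e.g. 24 envelope coefficients -> 3 glottal features
envelopes = rng.normal(size=(2000, dx))
glottal = envelopes[:, :dy] * 0.5 + rng.normal(scale=0.1, size=(2000, dy))  # toy relation

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.hstack([envelopes, glottal]))
estimates = gmr_predict(gmm, envelopes[:5], dx)
print(estimates.shape)               # (5, 3) predicted glottal feature vectors
```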
|
129 |
On discriminative semi-supervised incremental learning with a multi-view perspective for image concept modeling
Byun, Byungki, 17 January 2012 (links)
This dissertation presents the development of a semi-supervised incremental learning framework with a multi-view perspective for image concept modeling. For reliable image concept characterization, having a large number of labeled images is crucial. However, the size of the training set is often limited due to the cost of generating concept labels for objects in a large quantity of images. To address this issue, we propose to incrementally incorporate unlabeled samples into the learning process to enhance concept models originally learned with a small number of labeled samples. To tackle the sub-optimality problem of conventional techniques, the proposed incremental learning framework selects unlabeled samples based on an expected error reduction function that measures the contribution of each unlabeled sample by its ability to increase the modeling accuracy. To improve the convergence properties of the proposed incremental learning framework, we further propose a multi-view learning approach that makes use of multiple image features, such as color and texture, when including unlabeled samples. For robustness to mismatches between training and testing conditions, a discriminative learning algorithm, namely a kernelized maximal-figure-of-merit (kMFoM) learning approach, is also developed. Combining these individual techniques, we conduct a set of experiments on various image concept modeling problems, such as handwritten digit recognition, object recognition, and image spam detection, to highlight the effectiveness of the proposed framework.
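A simplified sketch of expected-error-reduction sample selection in an incremental loop, where each candidate unlabeled sample is scored by how much retraining on it (with its predicted label) reduces average prediction uncertainty over the remaining pool; the classifier, data, and single-view features are assumptions rather than the proposed multi-view kMFoM framework:

```python
# Simplified sketch: incrementally adding unlabeled samples chosen by an
# expected-error-reduction style criterion. Classifier and data are toy
# placeholders, not the dissertation's discriminative multi-view framework.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
labeled = list(range(15))                 # small labeled seed set
unlabeled = list(range(15, 300))

def pool_entropy(clf, X_pool):
    # average predictive entropy over the pool, used as a proxy for expected error
    p = clf.predict_proba(X_pool)
    return float(np.mean(-np.sum(p * np.log(p + 1e-12), axis=1)))

for _ in range(5):                        # five incremental rounds
    base = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    best_idx, best_score = None, np.inf
    for i in unlabeled[:50]:              # score a candidate subset for speed
        pseudo = int(base.predict(X[i:i + 1])[0])        # predicted (pseudo) label
        clf = LogisticRegression(max_iter=1000).fit(
            X[labeled + [i]], np.append(y[labeled], pseudo))
        score = pool_entropy(clf, X[[j for j in unlabeled if j != i]])
        if score < best_score:
            best_idx, best_score = i, score
    labeled.append(best_idx)              # incorporate the most helpful sample
    unlabeled.remove(best_idx)
    print(f"added sample {best_idx}, expected pool entropy {best_score:.3f}")
```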
|
130 |
Answering complex questions: supervised approaches
Sadid-Al-Hasan, Sheikh, University of Lethbridge, Faculty of Arts and Science, January 2009 (links)
The term “Google” has become a verb for most of us. Search engines, however, have certain limitations. For example, ask one about the impact of the current global financial crisis in different parts of the world, and you can expect to sift through thousands of results for the answer. This motivates research in complex question answering, where the purpose is to create summaries of large volumes of information as answers to complex questions, rather than simply offering a listing of sources. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents. Hence, this task is accomplished by query-focused multi-document summarization systems. In this thesis we apply different supervised learning techniques to the complex question answering problem. To run our experiments, we consider the DUC-2007 main task.

A large amount of labeled data is a prerequisite for supervised training, and manual labeling by humans is expensive and time consuming. Automatic labeling can be a good remedy for this problem. We employ five different automatic annotation techniques to build extracts from human abstracts, using ROUGE, Basic Element (BE) overlap, a syntactic similarity measure, a semantic similarity measure, and the Extended String Subsequence Kernel (ESSK). The representative supervised methods we use are Support Vector Machines (SVM), Conditional Random Fields (CRF), Hidden Markov Models (HMM), and Maximum Entropy (MaxEnt). We annotate the DUC-2006 data and use it to train our systems, while 25 topics of the DUC-2007 data set are used as test data. The evaluation results reveal the impact of the automatic labeling methods on the performance of the supervised approaches to complex question answering. We also experiment with two ensemble-based approaches that show promising results for this problem domain. / x, 108 leaves : ill. ; 29 cm
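As a toy sketch of the supervised extractive setting described above (not the thesis systems), the following example auto-labels sentences by unigram overlap with a reference abstract and trains a linear SVM to rank sentences for a query-focused extract; the texts, features, and threshold are hypothetical:

```python
# Toy sketch: automatic annotation by unigram overlap with a reference
# abstract, then a linear SVM that scores sentences for a query-focused
# extract. Texts, threshold, and features are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

query = "impact of the global financial crisis"
reference_abstract = "the financial crisis reduced global trade and employment"
sentences = [
    "The global financial crisis sharply reduced trade volumes.",
    "Employment fell in most export-dependent economies.",
    "The committee met on Tuesday to discuss unrelated matters.",
    "Stock markets recovered slowly after the crisis.",
]

def unigram_overlap(sent, ref):
    s, r = set(sent.lower().split()), set(ref.lower().split())
    return len(s & r) / max(len(r), 1)

# Automatic annotation: sentences overlapping the abstract enough are positives
labels = [1 if unigram_overlap(s, reference_abstract) > 0.2 else 0 for s in sentences]

# Features: TF-IDF of each sentence concatenated with the query (query-focused signal)
vec = TfidfVectorizer()
X = vec.fit_transform([s + " " + query for s in sentences])

clf = SVC(kernel="linear").fit(X, labels)
scores = clf.decision_function(X)            # higher = more likely to belong in the extract
ranked = sorted(zip(scores, sentences), reverse=True)
print(ranked[0][1])                          # highest-scoring sentence for the extract
```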
|