131

Desenvolvimento de uma instrumentação de captura de imagens in situ para estudo da distribuição vertical do plâncton / Development of an in situ image capture instrumentation to study the vertical distribution of plankton

Maia Gomes Medeiros 18 December 2017 (has links)
The University of São Paulo developed a prototype submersible imaging system for studying plankton. Based on the shadowgraph technique, the system consists of a collimated infrared LED beam and a high-resolution camera, run by an automated control system. Computer vision software developed by the Laboratório de Sistemas Planctônicos (LAPS) performs several tasks, including image capture and segmentation and the extraction of information to automatically classify new sets of regions of interest (ROIs). The machine learning test comprised 57,000 frames and 230,000 ROIs and was based on two classification algorithms: Support Vector Machine (SVM) and Random Forest (RF). The initial training set contained 15 classes of phytoplankton and zooplankton, to which a subset of 5,000 ROIs was assigned; the ROIs were grouped into broad classes of at least 100 ROIs each. Evaluated by cross-validation, the RF and SVM classifiers reached precisions of 0.78 and 0.79, respectively. The images come from Ubatuba, in the state of São Paulo. The vertical profiles produced show distinct particle distribution patterns. The instrument has proven useful for generating spatially refined data in coastal and oceanic ecosystems.
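The cross-validated RF-versus-SVM comparison described above follows a standard pattern; the sketch below illustrates it with scikit-learn, assuming ROI feature vectors have already been extracted. The feature matrix and labels are synthetic stand-ins, not the LAPS data.

```python
# Minimal sketch: cross-validated RF vs. SVM comparison on ROI feature
# vectors. Features and labels are synthetic stand-ins for the extracted
# ROI descriptors; 15 classes and 5,000 ROIs mirror the study's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))        # hypothetical ROI feature vectors
y = rng.integers(0, 15, size=5000)     # 15 plankton classes

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```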
132

Support vector classification analysis of resting state functional connectivity fMRI

Craddock, Richard Cameron 17 November 2009 (has links)
Since its discovery in 1995, resting state functional connectivity derived from functional MRI data has become a popular neuroimaging method for studying psychiatric disorders. Current methods for analyzing resting state functional connectivity in disease involve thousands of univariate tests and the specification of regions of interest to employ in the analysis. These methods have several drawbacks. First, the mass univariate tests employed are insensitive to the information present in distributed networks of functional connectivity. Second, the null hypothesis testing employed to select functional connectivity differences between groups does not evaluate the predictive power of the identified functional connectivities. Third, the specification of regions of interest is confounded by experimenter bias in terms of which regions should be modeled and by experimental error in the size and location of those regions. The objective of this dissertation is to improve methods for functional connectivity analysis using multivariate predictive modeling, feature selection, and whole-brain parcellation. A method for applying support vector classification (SVC) to resting state functional connectivity data was developed in the context of a neuroimaging study of depression. The interpretability of the obtained classifier was optimized using feature selection techniques that incorporate reliability information. The problem of selecting regions of interest for whole-brain functional connectivity analysis was addressed by clustering whole-brain functional connectivity data to parcellate the brain into contiguous, functionally homogeneous regions. This newly developed framework was applied to derive a classifier capable of correctly separating the functional connectivity patterns of patients with depression from those of healthy controls 90% of the time. The features most relevant to the obtained classifier match those identified in previous studies, but also include several regions not previously implicated in the functional networks underlying depression.
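For readers unfamiliar with the setup, a minimal sketch of SVC on connectivity features: each subject's region-by-region correlation matrix is vectorized (upper triangle) and classified with a linear SVM. All data below are synthetic; the region count, subject count, and labels are illustrative, not those of the dissertation's depression study.

```python
# Sketch: classify subjects from resting-state functional connectivity.
# Each subject's correlation matrix is vectorized and fed to a linear SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_regions = 40, 90
iu = np.triu_indices(n_regions, k=1)           # upper-triangle indices

X = np.empty((n_subjects, len(iu[0])))
for i in range(n_subjects):
    ts = rng.normal(size=(150, n_regions))     # hypothetical regional time series
    X[i] = np.corrcoef(ts, rowvar=False)[iu]   # connectivity feature vector
y = np.array([0] * 20 + [1] * 20)              # patients vs. controls (toy labels)

print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```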
133

Estimation of glottal source features from the spectral envelope of the acoustic speech signal

Torres, Juan Félix 17 May 2010 (has links)
Speech communication encompasses diverse types of information, including phonetics, affective state, voice quality, and speaker identity. From a speech production standpoint, the acoustic speech signal can be divided mainly into glottal source and vocal tract components, which play distinct roles in rendering the various types of information it contains. Most deployed speech analysis systems, however, do not explicitly represent these two components as distinct entities, as their joint estimation from the acoustic speech signal becomes an ill-defined blind deconvolution problem. Nevertheless, because of the desire to understand glottal behavior and how it relates to perceived voice quality, there has been continued interest in explicitly estimating the glottal component of the speech signal. To this end, several inverse filtering (IF) algorithms have been proposed, but they are unreliable in practice because of the blind formulation of the separation problem. In an effort to develop a method that can bypass the challenging IF process, this thesis proposes a new glottal source information extraction method that relies on supervised machine learning to transform smoothed spectral representations of speech, which are already used in some of the most widely deployed and successful speech analysis applications, into a set of glottal source features. A transformation method based on Gaussian mixture regression (GMR) is presented and compared to current IF methods in terms of feature similarity, reliability, and speaker discrimination capability on a large speech corpus. In addition, potential representations of the spectral envelope of speech are investigated for their ability to represent glottal source variation in a predictable manner. The proposed system was found to produce glottal source features that reasonably matched their IF counterparts in many cases, while being less susceptible to spurious errors. The development of the proposed method entailed a study of the aspects of glottal source information that are already contained within the spectral features commonly used in speech analysis, yielding an objective assessment of the expected advantages of explicitly using glottal information extracted from the speech signal via currently available IF methods, versus relying on the glottal source information that is implicitly contained in spectral envelope representations.
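A compact sketch of the GMR idea, assuming the joint density of spectral features x and glottal features y is modeled by a Gaussian mixture and prediction takes the conditional mean E[y | x]; the data, dimensions, and component count below are toy stand-ins.

```python
# Gaussian mixture regression sketch: fit a GMM on the joint [x, y] space,
# then predict y for new x as the mixture of conditional Gaussian means.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=(1000, 1))               # e.g., spectral envelope feature
y = np.sin(2 * x) + 0.1 * rng.normal(size=x.shape)   # e.g., a glottal source feature

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))
dx = x.shape[1]

def gmr_predict(x_new):
    """Conditional mean of y given x under the fitted joint GMM."""
    preds = np.zeros((len(x_new), gmm.means_.shape[1] - dx))
    for n, xn in enumerate(x_new):
        # responsibility of each component for this x
        w = np.array([gmm.weights_[k] *
                      multivariate_normal.pdf(xn, gmm.means_[k, :dx],
                                              gmm.covariances_[k][:dx, :dx])
                      for k in range(gmm.n_components)])
        w /= w.sum()
        for k in range(gmm.n_components):
            mu_x, mu_y = gmm.means_[k, :dx], gmm.means_[k, dx:]
            S = gmm.covariances_[k]
            cond = mu_y + S[dx:, :dx] @ np.linalg.solve(S[:dx, :dx], xn - mu_x)
            preds[n] += w[k] * cond
    return preds

print(gmr_predict(np.array([[0.5]])))  # should be near sin(1.0)
```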
134

On discriminative semi-supervised incremental learning with a multi-view perspective for image concept modeling

Byun, Byungki 17 January 2012 (has links)
This dissertation presents the development of a semi-supervised incremental learning framework with a multi-view perspective for image concept modeling. For reliable image concept characterization, having a large number of labeled images is crucial. However, the size of the training set is often limited by the cost of generating concept labels for the objects in a large quantity of images. To address this issue, in this research we propose to incrementally incorporate unlabeled samples into the learning process to enhance concept models originally learned with a small number of labeled samples. To tackle the sub-optimality problem of conventional techniques, the proposed incremental learning framework selects unlabeled samples based on an expected error reduction function that measures each sample's contribution to increasing the modeling accuracy. To improve the convergence property of the framework, we further propose a multi-view learning approach that makes use of multiple features of images, such as color and texture, when including unlabeled samples. For robustness to mismatches between training and testing conditions, a discriminative learning algorithm, namely a kernelized maximal-figure-of-merit (kMFoM) learning approach, is also developed. Combining the individual techniques, we conduct a set of experiments on various image concept modeling problems, such as handwritten digit recognition, object recognition, and image spam detection, to highlight the effectiveness of the proposed framework.
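As a rough illustration of incrementally folding unlabeled samples into training, the sketch below uses a cheap confidence proxy in place of the dissertation's expected-error-reduction criterion (which scores each candidate by its estimated reduction in modeling error); everything here is synthetic.

```python
# Much-simplified incremental semi-supervised loop: start from a small
# labeled seed set and repeatedly pseudo-label and absorb the unlabeled
# samples the current model is most confident about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
labeled = np.zeros(600, dtype=bool)
labeled[:30] = True                        # small labeled seed set

clf = LogisticRegression()
for _ in range(10):                        # incremental rounds
    clf.fit(X[labeled], y[labeled])
    pool = np.flatnonzero(~labeled)
    conf = clf.predict_proba(X[pool]).max(axis=1)
    best = pool[np.argsort(conf)[-20:]]    # 20 most confident candidates
    y[best] = clf.predict(X[best])         # pseudo-labels replace unknowns
    labeled[best] = True

print("final training size:", labeled.sum())
```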
135

Answering complex questions : supervised approaches

Sadid-Al-Hasan, Sheikh, University of Lethbridge. Faculty of Arts and Science January 2009 (has links)
The term “Google” has become a verb for most of us. Search engines, however, have certain limitations: ask one for the impact of the current global financial crisis in different parts of the world, and you can expect to sift through thousands of results for the answer. This motivates research in complex question answering, where the purpose is to create summaries of large volumes of information as answers to complex questions, rather than simply offering a listing of sources. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents; the task is therefore accomplished by query-focused multi-document summarization systems. In this thesis we apply different supervised learning techniques to the complex question answering problem. To run our experiments, we consider the DUC-2007 main task. A large amount of labeled data is a prerequisite for supervised training, and manual labeling by humans is expensive and time consuming; automatic labeling can be a good remedy. We employ five different automatic annotation techniques to build extracts from human abstracts, using ROUGE, Basic Element (BE) overlap, a syntactic similarity measure, a semantic similarity measure, and the Extended String Subsequence Kernel (ESSK). The representative supervised methods we use are Support Vector Machines (SVM), Conditional Random Fields (CRF), Hidden Markov Models (HMM), and Maximum Entropy (MaxEnt). We annotate DUC-2006 data and use them to train our systems, whereas 25 topics of the DUC-2007 data set are used as test data. The evaluation results reveal the impact of the automatic labeling methods on the performance of the supervised approaches to complex question answering. We also experiment with two ensemble-based approaches that show promising results for this problem domain. / x, 108 leaves : ill. ; 29 cm
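One of the annotation ideas above can be sketched in a few lines: label a document sentence as summary-worthy when a crude ROUGE-1-style unigram recall against the human abstract clears a threshold. The texts and threshold below are placeholders, not DUC data.

```python
# Sketch of automatic extract labeling by unigram overlap with a human
# abstract (a rough ROUGE-1 recall); the resulting binary labels would
# feed supervised learners such as SVM, CRF, HMM, or MaxEnt.
def rouge1_recall(sentence, abstract):
    s, a = set(sentence.lower().split()), set(abstract.lower().split())
    return len(s & a) / max(len(a), 1)

abstract = "the financial crisis affected markets in asia and europe"
sentences = [
    "The crisis hit financial markets across Asia.",
    "The committee met on Tuesday.",
    "European markets were also affected by the financial turmoil.",
]
labels = [int(rouge1_recall(s, abstract) > 0.2) for s in sentences]
print(labels)  # extract labels for training the supervised systems
```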
136

A robust & reliable Data-driven prognostics approach based on extreme learning machine and fuzzy clustering.

Javed, Kamran 09 April 2014 (has links) (PDF)
Prognostics and Health Management (PHM) aims to extend the life cycle of a physical asset while reducing operating and maintenance costs. For this reason, prognostics is considered a key process with prediction capabilities: accurate estimates of the Remaining Useful Life (RUL) of equipment make it possible to better define an action plan that increases safety, reduces downtime, and ensures mission completion and production efficiency. Recent studies show that data-driven approaches are increasingly applied to failure prognostics. They can be seen as "black box" models that learn system behavior directly from condition-monitoring data, characterize the current state of the system, and predict the future progression of faults. However, approximating the behavior of critical machinery is a difficult task that can lead to poor prognostics. Understanding data-driven prognostics modeling raises the following questions. 1) How should raw monitoring data be processed to obtain suitable features that reflect the evolution of degradation? 2) How can degradation states be distinguished and failure criteria defined (which may vary from case to case)? 3) How can one ensure that the models will be robust enough to maintain stable performance under uncertain inputs that deviate from the acquired experience, and reliable enough to handle unseen data (i.e., operating conditions, engineering variations, etc.)? 4) How can the approach be integrated easily under industrial constraints and requirements? These questions are addressed in this thesis, which develops a new approach going beyond the limits of classical data-driven prognostics methods. The main contributions are as follows.
- The data-processing step is improved by a new feature-extraction approach using trigonometric and cumulative functions, selected on three criteria: monotonicity, trendability, and predictability. The main idea is to transform raw data into indicators that improve the accuracy of long-term predictions.
- To address robustness, reliability, and applicability, a new prediction algorithm is proposed: the Summation Wavelet-Extreme Learning Machine (SW-ELM). SW-ELM achieves good prediction performance while reducing learning time. An ensemble of SW-ELM models is also proposed to quantify uncertainty and improve the accuracy of the estimates.
- Prognostic performance is further enhanced by a new health-assessment algorithm: Subtractive-Maximum Entropy Fuzzy Clustering (S-MEFC). S-MEFC is an unsupervised classification approach that uses maximum-entropy inference to represent the uncertainty of multidimensional data; it can determine the number of states automatically, without human intervention.
- The final prognostic model integrates SW-ELM and S-MEFC to track the evolution of machine degradation, with simultaneous continuous predictions and discrete state estimation. The scheme also defines failure thresholds dynamically and estimates the RUL of monitored machinery. The developments are validated on real data from three experimental platforms: PRONOSTIA FEMTO-ST (bearing test bed), SIMTech (CNC machining cutters), NASA C-MAPSS (turbofan engines), and other benchmark datasets. Owing to the realistic nature of the proposed RUL estimation strategy, very promising results are achieved; the main perspective for future work is to improve the reliability of the prognostic model.
137

Melhoria da atratividade de faces em imagens / Enhancement of face attractiveness in images

Leite, Tatiane Silvia 20 August 2018 (has links)
Advisor: José Mario De Martino / Master's thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The face plays an important role in communication and in the expression of emotions. The face shapes a person's first impression, so its appearance and shape have become the target of several studies. An attractive face captures not only the attention of the beholder but also his or her empathy more easily. In this vein, this study aims to develop a methodology for manipulating and processing images of faces in order to increase their attractiveness. Two aspects of facial modification were addressed: geometry and texture (considering only the skin of the face). A large database of face images was built for this work. Each face was marked with feature points, and distances between them were measured to characterize the dimensions and proportions of the face. In addition, each face was rated for attractiveness by a group of 40 volunteers. The proportion and attractiveness measures were used as training data for machine learning algorithms in the geometric enhancement stage, which generates new measures for the input face being beautified. Using a warping technique, the input face image is deformed to fit the new measures found by the algorithms. The resulting image serves as input to the texture modification stage, which generates a new image with the pixel colors of the facial skin region altered. The main contribution of this work is to join the process of face geometry modification with that of face skin texture modification; this union yields a greater gain in attractiveness than either technique used separately. The gain was confirmed by post-evaluation tests in which volunteers assessed the final images. / Master's program in Computer Engineering / Master in Electrical Engineering
138

Using Reinforcement Learning in Partial Order Plan Space

Ceylan, Hakan 05 1900 (has links)
Partial order planning is an important approach that solves planning problems without completely specifying the orderings between the actions in the plan. This property provides greater flexibility in executing plans, making partial order planners a preferred choice over other planning methodologies. However, to find partially ordered plans, partial order planners search in plan space rather than in the space of world states, and an uninformed search in plan space leads to poor efficiency. In this thesis, I discuss applying a reinforcement learning method, the first-visit Monte Carlo method, to partial order planning in order to design agents that need no training data or heuristics but are still able to make informed decisions in plan space based on experience. Communicating effectively with the agent is crucial in reinforcement learning; I address how this was accomplished in plan space and present results from an evaluation on a blocks-world test bed.
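For reference, a minimal first-visit Monte Carlo value-estimation sketch on a generic episodic task; the plan-space states and rewards of the thesis are replaced here by a toy random walk.

```python
# First-visit Monte Carlo: estimate V(s) as the average return following
# the FIRST visit to s in each episode. Accumulating returns backward and
# overwriting per state keeps exactly the first visit's return.
import random
from collections import defaultdict

def first_visit_mc(run_episode, n_episodes=1000, gamma=1.0):
    returns = defaultdict(list)
    for _ in range(n_episodes):
        episode = run_episode()                  # [(state, reward), ...]
        G, G_after = 0.0, {}
        for state, reward in reversed(episode):
            G = gamma * G + reward
            G_after[state] = G                   # earliest visit wins
        for state, g in G_after.items():
            returns[state].append(g)
    return {s: sum(g) / len(g) for s, g in returns.items()}

def toy_episode():                               # random walk toward state 3
    s, episode = 0, []
    while s < 3:
        ns = s + random.choice([0, 1])
        episode.append((s, 1.0 if ns == 3 else 0.0))
        s = ns
    return episode

print(first_visit_mc(toy_episode))
```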
139

Aplikace Bayesovských sítí / Bayesian Networks Applications

Chaloupka, David January 2013 (has links)
This master's thesis deals with possible applications of Bayesian networks. The theoretical part is mainly of a mathematical nature: we first cover general probability theory and then move on to the theory of Bayesian networks, discussing approaches to inference and to model learning along with the pros and cons of these techniques. The practical part focuses on applications that demand learning a Bayesian network, both its parameters and its structure. These applications include general benchmarks, the use of Bayesian networks for knowledge discovery regarding the causes of criminality, and an exploration of using a Bayesian network as a spam filter.
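For a fixed structure, the parameter-learning step these applications rely on reduces to estimating conditional probability tables from counts. Below is a two-node, spam-filter-flavored sketch (Spam → Word) with Laplace smoothing; the data are illustrative, not from the thesis.

```python
# Maximum-likelihood CPT estimation for a two-node Bayesian network,
# then a posterior query P(spam | word) by Bayes' rule.
from collections import Counter

data = [  # (is_spam, contains_"offer")
    (1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1), (0, 0), (1, 1),
]
spam = Counter(s for s, _ in data)
joint = Counter((s, w) for s, w in data)

def p_word_given_spam(w, s, alpha=1.0):         # Laplace-smoothed CPT entry
    return (joint[(s, w)] + alpha) / (spam[s] + 2 * alpha)

prior = spam[1] / len(data)
num = prior * p_word_given_spam(1, 1)
den = num + (1 - prior) * p_word_given_spam(1, 0)
print(num / den)                                # P(spam | word present)
```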
140

High Performance Silicon Photonic Interconnected Systems

Zhu, Ziyi January 2022 (has links)
Advances in data-driven applications, particularly artificial intelligence and deep learning, are driving the explosive growth of computation and communication in today’s data centers and high-performance computing (HPC) systems. Increasingly, system performance is not constrained by the compute speed at individual nodes, but by the data movement between them. This calls for innovative architectures, smart connectivity, and extreme bandwidth densities in interconnect designs. Silicon photonics technology leverages mature complementary metal-oxide-semiconductor (CMOS) manufacturing infrastructure and is promising for low-cost, high-bandwidth, and reconfigurable interconnects. Flexible and high-performance photonic switched architectures are capable of improving the system performance. The work in this dissertation explores various photonic interconnected systems and the associated optical switching functionalities, hardware platforms, and novel architectures. It demonstrates the capabilities of silicon photonics to enable efficient deep learning training.

We first present field programmable gate array (FPGA) based open-loop and closed-loop control for optical spectral-and-spatial switching of silicon photonic cascaded micro-ring resonator (MRR) switches. Our control achieves wavelength locking at the user-defined resonance of the MRR for optical unicast, multicast, and multiwavelength-select functionalities. Digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) are necessary for the control of the switch. We experimentally demonstrate the optical switching functionalities using an FPGA-based switch controller through both traditional multi-bit DAC/ADC and novel single-wired DAC/ADC circuits. For system-level integration, interfaces to the switch controller in a network control plane are developed. The successful control and the switching functionalities achieved are essential for the system-level architectural innovations presented in the following sections.

Next, this thesis presents two novel photonic switched architectures using the MRR-based switches. First, a photonic switched memory system architecture was designed to address memory challenges in deep learning. The reconfigurable photonic interconnects provide scalable solutions and enable efficient use of disaggregated memory resources for deep learning training. An experimental testbed was built with a processing system and two remote memory nodes using silicon photonic switch fabrics, and system performance improvements were demonstrated. The collective results and existing high-bandwidth optical I/Os show the potential of integrating the photonic switched memory into state-of-the-art processing systems. Second, the scaling trends of deep learning models and distributed training workloads are challenging network capacities in today’s data centers and HPCs. A system architecture that leverages SiP switch-enabled server regrouping is proposed to tackle these challenges and accelerate distributed deep learning training. An experimental testbed with a SiP switch-enabled reconfigurable fat tree topology was built to evaluate the network performance of distributed ring all-reduce and parameter server workloads. We also present system-scale simulations. Server regrouping and bandwidth steering were performed on a large-scale tapered fat tree with 1024 compute nodes to show the benefits of using photonic switched architectures in systems at scale.
Finally, this dissertation explores high-bandwidth photonic interconnect designs for disaggregated systems. We first introduce and discuss two disaggregated architectures leveraging extreme high-bandwidth interconnects with optically interconnected computing resources. We present the concept of rack-scale graphics processing unit (GPU) disaggregation with optical circuit switches and electrical aggregator switches. The architecture can leverage the flexibility of high-bandwidth optical switches to increase hardware utilization and reduce application runtimes. A testbed was built to demonstrate resource disaggregation and defragmentation. In addition, we present an extreme high-bandwidth, optical-interconnect-accelerated low-latency communication architecture for deep learning training. The disaggregated architecture utilizes comb laser sources and MRR-based cross-bar switching fabrics to enable all-to-all high-bandwidth communication with a constant latency cost for distributed deep learning training. We discuss emerging technologies in the silicon photonics platform, including light sources, transceivers, and switch architectures, to accommodate extreme high-bandwidth requirements in HPC and data center environments. A prototype hardware innovation, an Optical Network Interface Card (comprising an FPGA, photonic integrated circuits (PIC), electronic integrated circuits (EIC), an interposer, and a high-speed printed circuit board (PCB)), is presented to show the path toward fast lanes for expedited execution at 10 terabits. Taken together, the work in this dissertation demonstrates the capabilities of high-bandwidth silicon photonic interconnects and innovative architectural designs to accelerate deep learning training in optically connected data center and HPC systems.
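As a flavor of the closed-loop control described at the start of this abstract, a sketch of a wavelength-locking loop: step a ring's thermal-tuner DAC code to minimize the through-port power read back through an ADC, which parks the ring on resonance. The register interface below is a stand-in, not the dissertation's controller; real implementations run this logic on the FPGA rather than in Python.

```python
# Greedy hill-climb on a DAC code: lower detected through-port power
# means a deeper resonance dip, i.e., the ring is closer to locked.
def lock_ring(read_adc, write_dac, dac0=2048, step=4, iters=200):
    dac = dac0
    write_dac(dac)
    best = read_adc()
    for _ in range(iters):
        for cand in (dac - step, dac + step):
            write_dac(cand)
            p = read_adc()
            if p < best:
                best, dac = p, cand
        write_dac(dac)              # settle on the best code found so far
    return dac

# Toy stand-ins for the hardware interface, with resonance at code 2300:
state = {"code": 0}
write_dac = lambda c: state.update(code=c)
read_adc = lambda: (state["code"] - 2300) ** 2
print(lock_ring(read_adc, write_dac))   # converges toward 2300
```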
