About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Resolução de correferência em múltiplos documentos utilizando aprendizado não supervisionado / Co-reference resolution in multiple documents through unsupervised learning

Jefferson Fontinele da Silva 05 May 2011 (has links)
One of the problems found in Natural Language Processing (NLP) systems is the difficulty of identifying which textual elements refer to the same entity. This phenomenon, in which a set of textual elements refers to a single entity, is called coreference. Coreference resolution systems can improve the performance of various NLP applications, such as automatic summarization, information extraction, and question answering. Recently, research in NLP has explored the possibility of identifying coreferent elements across multiple documents. In this context, this work focuses on the development of an unsupervised learning method for coreference resolution in multiple documents, with Portuguese as the target language; to date, no system with this purpose is known for Portuguese. The results of experiments with the system suggest that the developed method is superior to methods based on string matching.
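As a rough illustration of the baseline the thesis compares against, the following minimal Python sketch clusters mentions by exact match of their normalized strings. This is not the thesis's unsupervised method; the mention list, stopword set, and normalization rule are invented for illustration.

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "o", "os", "as"}  # hypothetical article list

def normalize(mention: str) -> str:
    """Lowercase and drop articles so surface variants compare equal."""
    return " ".join(t for t in mention.lower().split() if t not in STOPWORDS)

def string_match_clusters(mentions):
    """Group mentions whose normalized forms match exactly: the naive
    string-matching baseline that coreference systems aim to beat."""
    clusters = defaultdict(list)
    for mention in mentions:
        clusters[normalize(mention)].append(mention)
    return list(clusters.values())

# Invented mentions drawn from two hypothetical documents. Note that the
# baseline fails to merge "Lula" with "President Lula": the weakness a
# learned coreference resolver is meant to fix.
mentions = ["President Lula", "the president", "Lula",
            "The President", "Luiz Inácio Lula da Silva"]
print(string_match_clusters(mentions))
```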
142

Graph neural networks for spatial gene expression analysis of the developing human heart

Yuan, Xiao January 2020 (has links)
Single-cell RNA sequencing and in situ sequencing were combined in a recent study of the developing human heart to explore the transcriptional landscape at three developmental stages. However, the method used in that study to create the spatial cellular maps has limitations: it relies on image segmentation of the nuclei and on cell types defined in advance by single-cell sequencing. In this study, we applied a new unsupervised approach based on graph neural networks to the in situ sequencing data of the human heart to find spatial gene expression patterns and detect novel cell and sub-cell types. In this thesis, we first introduce relevant background on the sequencing techniques that generate our data, machine learning in single-cell analysis, and deep learning on graphs. We then explore several graph neural network models and algorithms for learning embeddings of spatial gene expression. Dimensionality reduction and cluster analysis were performed on the embeddings for visualization and identification of biologically functional domains. Based on the clusters' gene expression profiles, the locations of the clusters in the heart sections, and a comparison with cell types defined in the previous study, the results of our experiments demonstrate that graph neural networks can learn meaningful representations of spatial gene expression in the human heart. We hope that further validation of our clustering results can give new insights into cell development and differentiation processes of the human heart.
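As a rough sketch of the core operation such models build on, the following Python snippet applies two symmetric-normalized graph-convolution layers (the GCN propagation rule) to a spatial spot graph and clusters the resulting embeddings. The graph, expression counts, layer sizes, and untrained random weights are all invented stand-ins; the thesis trains real GNN models on in situ sequencing data.

```python
import numpy as np
from sklearn.cluster import KMeans

def gcn_layer(A, H, W):
    """One graph-convolution step: symmetrically normalized adjacency
    (with self-loops) times node features times a weight matrix, ReLU."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_spots, n_genes, dim = 200, 50, 16                  # hypothetical sizes
A = (rng.random((n_spots, n_spots)) < 0.05).astype(float)
A = np.maximum(A, A.T)                               # symmetric spatial graph
X = rng.poisson(1.0, (n_spots, n_genes)).astype(float)  # fake gene counts

H = gcn_layer(A, X, rng.normal(size=(n_genes, dim)))  # untrained weights
H = gcn_layer(A, H, rng.normal(size=(dim, dim)))
domains = KMeans(n_clusters=5, n_init=10).fit_predict(H)  # candidate domains
print(np.bincount(domains))
```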
143

Three Facets of Online Political Networks: Communities, Antagonisms, and Polarization

January 2019 (has links)
Millions of users leave digital traces of their political engagements on social media platforms every day. Users form networks of interactions, produce textual content, and like and share each other's content. This creates an invaluable opportunity to better understand the political engagements of internet users. In this dissertation, I present algorithmic solutions to three facets of online political networks: the detection of communities, of antagonisms, and of the impact of certain types of accounts on political polarization. First, I develop a multi-view community detection algorithm to find politically pure communities. I find that, among the content types considered (e.g., hashtags, URLs), word usage complements user interactions best in accurately detecting communities. Second, I focus on detecting negative linkages between politically motivated social media users. Major social media platforms do not provide their users with built-in negative interaction options, yet many political network analysis tasks rely on negative as well as positive linkages. Here, I present the SocLSFact framework to detect negative linkages among social media users. It utilizes three pieces of information: sentiment cues in textual interactions, positive interactions, and socially balanced triads. I evaluate the contribution of each aspect to negative link detection performance on multiple tasks. Third, I propose an experimental setup that quantifies the polarization impact of automated accounts on Twitter retweet networks, focusing on a dataset covering the tragic Parkland shooting event and its aftermath. I show that when automated accounts are removed from the retweet network, network polarization decreases significantly, whereas removing the same number of randomly chosen accounts produces no significant difference. I also find that the prominent predictors of engagement with automatically generated content differ little from those that previous studies identify for engaging content on social media in general. Last but not least, I identify accounts that self-disclose their automated nature in their profiles using expressions such as bot, chat-bot, or robot. I find that human engagement with self-disclosing accounts is much smaller than with non-disclosing automated accounts. This observational finding can motivate further efforts in automated account detection research to prevent their unintended impact. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
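A toy Python illustration of one of the three cues, structural balance, appears below. It is far simpler than the SocLSFact framework itself: it derives edge signs from invented sentiment scores and flags triads whose sign product is negative.

```python
from itertools import combinations
import networkx as nx

edges = [("a", "b", 0.8), ("b", "c", 0.6), ("a", "c", -0.7),
         ("c", "d", -0.5), ("b", "d", 0.4)]  # (u, v, mean sentiment), invented

G = nx.Graph()
for u, v, sentiment in edges:
    G.add_edge(u, v, sign=1 if sentiment >= 0 else -1)

def unbalanced_triads(G):
    """A signed triad is balanced iff the product of its edge signs is +1."""
    bad = []
    for u, v, w in combinations(G.nodes, 3):
        if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w):
            if G[u][v]["sign"] * G[v][w]["sign"] * G[u][w]["sign"] < 0:
                bad.append((u, v, w))
    return bad

# Unbalanced triads mark places where an inferred sign is suspect and
# should be reconciled with the sentiment and positive-interaction cues.
print(unbalanced_triads(G))
```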
144

Visual interpretation of uncertainties in 2D embeddings from probabilistic non-linear dimensionality reduction methods

Junhan Zhao (11024559) 25 June 2021 (has links)
Enabling human understanding of high-dimensional (HD) data is critical for scientific research but highly challenging. For large datasets, probabilistic non-linear dimensionality reduction (DR) models such as UMAP and t-SNE lead the field in reducing high dimensionality. However, given the trade-off between global and local structure preservation and the randomness of their initialization, applying non-linear models with different parameter settings to data of unknown high-dimensional structure may return different 2D visual forms. Critical neighborhood relationships may be falsely imposed, and uncertainty may be introduced into the low-dimensional embedding visualizations; this is known as distortion. In this work, we first survey state-of-the-art layout enrichment work on interpreting dimensionality reduction methods and results. Responding to the lack of visual interpretation techniques for probabilistic DR methods, we propose a visualization technique called ManiGraph, which lets users explore multi-view 2D embeddings via mesoscopic structure graphs. A dynamic mesoscopic structure first subsets HD data by a hexagonal grid in the visual space of a non-linear embedding (e.g., UMAP). It then measures regionally adapted trustworthiness/continuity and visualizes the restored missing connections and highlighted false connections between subsets, from the high-dimensional space to the low-dimensional one, in a node-link manner. The visualization helps users understand and interpret distortion arising from both the visualization and model stages. We further demonstrate use cases on intuitive 3D toy datasets, fashion-MNIST, and single-cell RNA sequencing data with domain experts in unsupervised scenarios. This work will potentially benefit the data science community, from toolkit users to DR algorithm developers.
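The neighborhood-preservation score the technique builds on can be computed off the shelf; below is a minimal sketch, assuming the third-party umap-learn package, that embeds a standard dataset with UMAP and reports global trustworthiness. ManiGraph's regionally adapted variant and its graph visualization are not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import trustworthiness
import umap  # third-party umap-learn package, assumed installed

X = load_digits().data                      # stand-in high-dimensional data
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1,
                      random_state=0).fit_transform(X)

# Fraction of 2D neighbors that are also neighbors in the original space;
# values near 1 mean the projection imposed few false neighbors.
print(trustworthiness(X, embedding, n_neighbors=12))
```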
145

Evaluation of unsupervised machine learning models for anomaly detection in time series sensor data

Bracci, Lorenzo, Namazi, Amirhossein January 2021 (has links)
With the advancement of the Internet of Things and the digitization of society, sensors recording time series data can be found in an ever-increasing number of places, including proximity sensors on cars, temperature sensors in manufacturing plants, and motion sensors inside smart homes. Society's ever-increasing reliance on these devices leads to a need for detecting unusual behaviour, which could be caused by a malfunctioning sensor or by an uncommon event. Such unusual behaviour is often referred to as an anomaly. To detect anomalous behaviour, advanced techniques combining mathematics and computer science, often grouped under the umbrella of machine learning, are frequently used. To help machines learn valuable patterns, human supervision is often needed, which in this case would correspond to using recordings that a person has already classified as anomalous or normal. Unfortunately, labelling data is time consuming, especially for the large datasets created from sensor recordings. Therefore, this thesis evaluates techniques that require no supervision to perform anomaly detection. Several machine learning models are trained on different datasets in order to better understand which techniques perform better under different requirements, such as the presence of a smaller dataset or stricter requirements on inference time. Of the models evaluated, OCSVM achieved the best overall performance, with an accuracy of 85%, and K-means was the fastest model, taking 0.04 milliseconds to run inference on one sample. Furthermore, LSTM-based models showed the greatest potential for improvement with larger datasets.
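A minimal sketch of the best-performing setup, a One-Class SVM flagging anomalous sensor windows, is shown below. The window data, window size, and nu parameter are invented for illustration and do not reproduce the thesis's datasets or results.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (500, 10))        # fake "normal" sensor windows
test = np.vstack([rng.normal(0.0, 1.0, (45, 10)),
                  rng.normal(6.0, 1.0, (5, 10))])  # last 5 rows anomalous

# Fit on (mostly) normal data, then flag deviations on unseen windows.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
pred = model.predict(test)                       # +1 = normal, -1 = anomaly
print(np.where(pred == -1)[0])                   # indices flagged anomalous
```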
146

Automated error matching system using machine learning and data clustering : Evaluating unsupervised learning methods for categorizing error types, capturing bugs, and detecting outliers.

Bjurenfalk, Jonatan, Johnson, August January 2021 (has links)
For large and complex software systems, it is a time-consuming process to manually inspect the error logs produced by their test suites. Whether for identifying abnormal faults or for finding bugs, it is a process that limits development progress and requires experience. An automated solution for such processes could lead to efficient fault identification and bug reporting, while also enabling developers to spend more time improving system functionality. Three unsupervised clustering algorithms are evaluated for the task: HDBSCAN, DBSCAN, and X-Means. In addition, HDBSCAN, DBSCAN, and an LSTM-based autoencoder are evaluated for outlier detection. The dataset consists of error logs produced by a robotic test system. These logs are cleaned and pre-processed using stopword removal, stemming, term frequency-inverse document frequency (tf-idf), and singular value decomposition (SVD). Two domain experts were tasked with evaluating the results produced by clustering and outlier detection. Results indicate that X-Means outperforms the other clustering algorithms when tasked with automatically categorizing error types and capturing bugs. Furthermore, none of the outlier detection methods yielded sufficient results. However, it was found that X-Means clusters containing a single data point accurately represented outliers occurring in the error log dataset. In conclusion, the domain experts deemed X-Means a helpful tool for categorizing error types, capturing bugs, and detecting outliers.
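A minimal sketch of the preprocessing-plus-clustering pipeline appears below. X-Means is not available in scikit-learn, so plain K-means stands in for it, stemming is omitted, and the log lines are invented stand-ins for real test-suite output.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

logs = [  # invented error-log lines
    "timeout waiting for axis 3 controller",
    "timeout waiting for axis 1 controller",
    "segmentation fault in vision module",
    "segmentation fault in vision module during calibration",
    "license server unreachable",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(logs)
reduced = TruncatedSVD(n_components=3).fit_transform(tfidf)   # SVD step
labels = KMeans(n_clusters=3, n_init=10).fit_predict(reduced)
print(labels)  # clusters of size one are candidate outliers, as in the thesis
```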
147

Identifying Machine States and Sensor Properties for a Digital Machine Template : Automatically recognize states in a machine using multivariate time series cluster analysis

Viking, Jakob January 2021 (has links)
Digital twins have become a large part of new cyber-physical systems, as they allow a physical object to be simulated in the digital world. Alongside these new approaches to digital twins, machines have become more intelligent, allowing them to produce more data than ever before. Within the area of digital twins, there is a need for an approach less complex than a fully optimised digital twin, one more like a digital shadow of the physical object. Therefore, the focus of this thesis is to study machine states and the statistical distributions of all sensors in a machine. Whereas the majority of studies in the literature focus on generating data from a digital twin, this study focuses on what characteristics a digital twin has. The solution is to define a digital machine template that contains the states and statistical properties of each sensor in a given machine. The primary approach is to create a proof-of-concept application that uses traditional data mining techniques and clustering to analyse how many states there are in a machine and how the sensor data is structured. This results in a digital machine template containing all of the information mentioned above: all the states a machine might have, and the possible statistical distributions of each sensor in each state. The digital machine template opens up the possibility of using it as a basis for creating a digital twin, allowing development time to be shorter than for a regular digital twin. More research still needs to be done, as the less complex approach may lead to information being missed or interpreted incorrectly. Nevertheless, it shows promise as a less complex way of looking at digital twins, which may become necessary as digital twins grow more complex by the day.
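The template-building idea can be sketched in a few lines: cluster multivariate sensor samples into states, then record per-state, per-sensor statistics. The two-sensor data, the number of states, and the distribution summary below are all invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
idle = rng.normal([0.1, 20.0], [0.02, 0.5], (300, 2))     # (power, temp)
running = rng.normal([2.5, 45.0], [0.30, 2.0], (300, 2))
samples = np.vstack([idle, running])                      # unlabeled mix

states = KMeans(n_clusters=2, n_init=10).fit_predict(samples)
template = {  # the "digital machine template": states plus distributions
    int(s): {"mean": samples[states == s].mean(axis=0),
             "std": samples[states == s].std(axis=0)}
    for s in np.unique(states)
}
print(template)
```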
148

Unsupervised Image-to-image translation : Taking inspiration from human perception

Sveding, Jens Jakob January 2021 (has links)
Generative Artificial Intelligence is a field of artificial intelligence where systems learn underlying patterns in previously seen content and generate new content. This thesis explores a generative technique for image-to-image translation called the Cycle-consistent Adversarial Network (CycleGAN), which can translate images from one domain into another. CycleGAN is a state-of-the-art technique for unsupervised image-to-image translation. It uses the concept of cycle consistency to learn a mapping between image distributions, with the Mean Absolute Error function used to compare images and thereby learn an underlying mapping between the two image distributions. In this work, we propose using the Structural Similarity Index Measure (SSIM) as an alternative to the Mean Absolute Error function. SSIM is a metric inspired by human perception, which measures the difference between two images by comparing their contrast, luminance, and structure. We examine whether using SSIM as the cycle-consistency loss in CycleGAN improves the quality of generated images as measured by the Inception Score and Fréchet Inception Distance, two metrics that have been proposed for evaluating the quality of images generated by generative adversarial networks (GANs). We conduct a controlled experiment to collect the quantitative metrics. Our results suggest that using SSIM as the cycle-consistency loss in CycleGAN will, in most cases, improve the image quality of generated images as measured by Inception Score and Fréchet Inception Distance.
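A minimal PyTorch sketch of the proposed loss change appears below: the L1 cycle-consistency term of CycleGAN is replaced by an SSIM-based term. The ssim call assumes the third-party pytorch-msssim package; G and F are stand-ins for the two trained generators, and the full adversarial training loop is omitted.

```python
import torch
from pytorch_msssim import ssim  # third-party package, assumed installed

def cycle_consistency_ssim(G, F, real_x, real_y, lam=10.0):
    """Use 1 - SSIM as the reconstruction penalty for both cycles."""
    rec_x = F(G(real_x))                     # x -> y-domain -> back to x
    rec_y = G(F(real_y))                     # y -> x-domain -> back to y
    loss_x = 1.0 - ssim(rec_x, real_x, data_range=1.0)
    loss_y = 1.0 - ssim(rec_y, real_y, data_range=1.0)
    return lam * (loss_x + loss_y)

# Identity "generators" just to show the call shape; inputs lie in [0, 1].
G = F = torch.nn.Identity()
x = torch.rand(4, 3, 64, 64)
y = torch.rand(4, 3, 64, 64)
print(cycle_consistency_ssim(G, F, x, y))    # 0 for perfect reconstruction
```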
149

Entwicklung eines Monte-Carlo-Verfahrens zum selbständigen Lernen von Gauß-Mischverteilungen / Development of a Monte Carlo method for autonomous learning of Gaussian mixture distributions

Lauer, Martin 03 March 2005 (has links)
This work develops a novel learning method for Gaussian mixture models. It is based on Markov chain Monte Carlo techniques and is able to determine, in a single pass, both the size of the mixture and its parameters. The method is characterized both by a good fit to the training data and by good generalization performance. Starting from a description of the stochastic foundations and an analysis of the problems that arise when learning Gaussian mixtures, the thesis develops the new learning method step by step and examines its properties. An experimental comparison with known learning methods for Gaussian mixtures also confirms the suitability of the new method empirically.
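The thesis's MCMC procedure is not reproduced here, but a related off-the-shelf way to fit a Gaussian mixture while inferring how many components are actually needed is sketched below: a variational Bayesian mixture with a sparsity-inducing prior, under which superfluous components receive near-zero weight. The data and prior strength are invented.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-4.0, 1.0, (300, 2)),
               rng.normal(3.0, 0.5, (300, 2))])   # two true clusters

gmm = BayesianGaussianMixture(
    n_components=10,                  # deliberately too many components
    weight_concentration_prior=0.01,  # sparsity-inducing prior
    random_state=0,
).fit(X)
print(np.round(gmm.weights_, 3))      # most weights collapse toward zero
```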
150

Evaluating CNN-based models for unsupervised image denoising / En utvärdering av CNN-baserade metoder för icke-vägledd avbrusning av bilder

Lind, Johan January 2021 (has links)
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images. This thesis evaluated two unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images. Four different CNNs were tested in order to investigate how the performance of these algorithms is affected by different network architectures. The testing used two datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them. Two of the networks, UNet and a CBAM-augmented UNet, achieved high performance competitive with the strong classical denoisers BM3D and NLM. The other two networks, GRDN and MultiResUNet, generally performed poorly.
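The self-supervision trick behind Noise2Self can be sketched briefly: hide a random subset of pixels, let the network predict them from their surroundings, and score the loss only on the hidden pixels, so training needs noisy images alone. The masking scheme below (replacing masked pixels with a local average) is a simplified variant, and the one-layer net is a stand-in for the real CNN denoisers evaluated in the thesis.

```python
import torch
import torch.nn.functional as F

def noise2self_style_loss(net, noisy, mask_frac=0.05):
    """Masked self-supervised denoising loss over noisy images only."""
    mask = (torch.rand_like(noisy) < mask_frac).float()
    # Replace masked pixels with local averages so the network cannot
    # simply copy the value it is being asked to predict.
    blurred = F.avg_pool2d(noisy, kernel_size=3, stride=1, padding=1)
    masked_input = noisy * (1.0 - mask) + blurred * mask
    pred = net(masked_input)
    return ((pred - noisy) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

net = torch.nn.Conv2d(1, 1, 3, padding=1)   # stand-in for a real CNN
noisy = torch.rand(8, 1, 32, 32)            # fake noisy images
print(noise2self_style_loss(net, noisy))
```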
