291

Advancing Keyword Clustering Techniques: A Comparative Exploration of Supervised and Unsupervised Methods: Investigating the Effectiveness and Performance of Supervised and Unsupervised Methods with Sentence Embeddings

Caliò, Filippo January 2023
Clustering keywords is an important Natural Language Processing task that can be adopted by many businesses, since it helps to organize and group related keywords together. By clustering keywords, businesses can better understand the topics their customers are interested in. This thesis project provides a detailed comparison of two approaches to this task and investigates whether having labels associated with the keywords improves the resulting clusters. The keywords are clustered using both supervised learning (training a neural network and applying community detection algorithms such as Louvain) and unsupervised learning algorithms such as HDBSCAN and K-Means. The evaluation is based mainly on metrics like NMI and ARI. The results show that supervised learning can produce better clusters than unsupervised learning. On the NMI score, the supervised approach of training a neural network with Margin Ranking Loss and applying Kruskal achieves a slightly better score of 0.771 against 0.693 for the proposed unsupervised approach, but on the ARI score the difference is more pronounced: HDBSCAN achieves a score of only 0.112 compared to 0.296 for the supervised approach with Margin Ranking Loss, suggesting that the clusters formed by HDBSCAN may lack meaningful structure or exhibit randomness. Based on the evaluation metrics, the study demonstrates that supervised learning with Margin Ranking Loss outperforms the unsupervised techniques in terms of cluster accuracy. However, when the network is trained with a BCE loss function it yields less accurate clusters (NMI: 0.473, ARI: 0.108), and the unsupervised algorithms surpass this particular supervised approach.
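
As a concrete illustration of the evaluation protocol, the sketch below clusters sentence embeddings of a few toy keywords with K-Means and HDBSCAN and scores the partitions against reference labels with NMI and ARI. The embedding model, keywords, and labels are illustrative assumptions, not the thesis setup.

```python
# Minimal sketch: cluster keyword sentence embeddings, then score with NMI/ARI.
# Keywords, labels, and the embedding model are illustrative assumptions.
from sklearn.cluster import KMeans, HDBSCAN
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score
from sentence_transformers import SentenceTransformer

keywords = ["running shoes", "trail sneakers", "espresso maker", "coffee grinder"]
true_labels = [0, 0, 1, 1]  # hypothetical ground-truth topic labels

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(keywords)

for name, algo in [("K-Means", KMeans(n_clusters=2, n_init="auto")),
                   ("HDBSCAN", HDBSCAN(min_cluster_size=2))]:
    pred = algo.fit_predict(embeddings)
    print(name,
          "NMI:", normalized_mutual_info_score(true_labels, pred),
          "ARI:", adjusted_rand_score(true_labels, pred))
```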
292

Generative Image-to-Image Translation with Applications in Computational Pathology

Fangda Li (17272816) 24 October 2023
<p dir="ltr">Generative Image-to-Image Translation (I2IT) involves transforming an input image from one domain to another. Typically, this transformation retains the content in the input image while adjusting the domain-dependent style elements. Generative I2IT finds utility in a wide range of applications, yet its effectiveness hinges on adaptations to the unique characteristics of the data at hand. This dissertation pushes the boundaries of I2IT by applying it to stain-related problems in computational pathology. Particularly, the main contributions span two major applications of stain translation: H&E-to-H&E and H&E-to-IHC, each with its unique requirements and challenges. More specifically, the first contribution addresses the generalization challenge posed by the high variability in H&E stain appearances to any task-specific machine learning models. To this end, the Generative Stain Augmentation Network (G-SAN) is introduced to augment the training images in any downstream task with random and diverse H&E stain appearances. Experimental results demonstrate G-SAN’s ability to enhance model generalization across stain variations in downstream tasks. The second key contribution in this dissertation focuses on H&E-to-IHC stain translation. The major challenge in learning accurate H&E-to-IHC stain translation is the frequent and sometimes severe inconsistencies in the groundtruth H&E-IHC image pairs. To make training more robust to these inconsistencies, a novel contrastive learning based loss, named the Adaptive Supervised PatchNCE (ASP) loss is presented. Experimental results suggest that the proposed ASP-based framework outperforms the state-of-the-art in H&E-to-IHC stain translation by significant margins. Additionally, a new dataset for H&E-to-IHC translation – the Multi-IHC Stain Translation (MIST) dataset, is released to the public, featuring paired images from H&E to four different IHC stains. For future directions of generative I2IT in stain translation problems, a proof-of-concept study of applying the latest diffusion model based I2IT methods to the problem of virtual H&E staining is presented.</p>
293

On discovering and learning structure under limited supervision

Mudumba, Sai Rajeswar 08 1900
Shapes, surfaces, events, and objects (living and non-living) constitute the world. The intelligence of natural agents such as humans goes beyond pattern recognition: we excel at building representations and distilling knowledge to understand and infer the structure of the world. Critically, the development of such reasoning capabilities can occur even with limited supervision. On the other hand, despite its phenomenal development, the major successes of machine learning, and of deep learning models in particular, lie primarily in tasks that have access to large annotated datasets. In this dissertation, we propose novel solutions to help address this gap by enabling machine learning models to learn structure and reason effectively in weakly supervised settings. The recurring theme of the thesis revolves around the question "How can a perceptual system learn to organize sensory information into useful knowledge under limited supervision?", and it explores the themes of geometry, composition, and association in four separate articles with applications to computer vision (CV) and reinforcement learning (RL). Our first contribution, Pix2Shape, presents an analysis-by-synthesis (also referred to as inverse graphics) approach to perception. Pix2Shape leverages probabilistic generative models to learn 3D-aware representations from single 2D images. The resulting formalism allows us to perform novel view synthesis of a scene and produce powerful representations of images. We achieve this by augmenting unsupervised learning with physics-based inductive biases to decompose a scene's structure into geometry, pose, reflectance, and lighting. Our second contribution, MILe, addresses the ambiguity issues in single-label datasets such as ImageNet. It is often inappropriate to describe an image with a single label when it contains more than one prominent object. We show that integrating ideas from the cognitive linguistics literature and imposing appropriate inductive biases helps distill multiple possible descriptions from such weakly labeled datasets. Next, moving to the RL setting, we consider an agent interacting with its environment without a reward signal. Our third contribution, HaC, is a curiosity-based unsupervised approach to learning associations between visual and tactile modalities. This helps the agent explore the environment autonomously in a self-guided fashion and further use this knowledge to adapt to downstream tasks. In the absence of reward supervision, intrinsic motivation is useful for generating meaningful behavior in a self-supervised manner. In our final contribution, we address the representation learning bottleneck in unsupervised RL agents, which has a detrimental effect on performance when perception is based on high-dimensional pixel inputs. Our model-based approach combines reward-free exploration and planning to efficiently fine-tune unsupervised pre-trained models, achieving results comparable to task-specific baselines. This is a step towards building agents that can generalize quickly to more than a single task using image inputs alone.
294

Fault Detection and Diagnosis for Automotive Camera using Unsupervised Learning

Li, Ziyou January 2023
This thesis investigates a fault detection and diagnosis system for automotive cameras using unsupervised learning. Three research questions are addressed: 1) Can a front-looking wide-angle camera image dataset be created using Hardware-in-Loop (HIL) simulations? 2) Can an Adversarial Autoencoder (AAE) based unsupervised camera fault detection and diagnosis method be crafted for the SPA2 Vehicle Control Unit (VCU) using an image dataset created with HIL? 3) Does an AAE surpass the performance of a Variational Autoencoder (VAE) for unsupervised automotive camera fault diagnosis? In the field of camera fault studies, automotive cameras stand out for their complex operational context, particularly in Advanced Driver-Assistance Systems (ADAS) applications. The literature review finds a notable gap in comprehensive image datasets addressing the image-artefact spectrum of ADAS-equipped automotive cameras under real-world driving conditions. In this study, normal and fault scenarios for automotive cameras are defined based on published and company studies, and a fault diagnosis model using unsupervised learning is proposed and examined. The image fault types defined and included are lens flare, Gaussian noise, and dead pixels. Along with normal driving images, a balanced fault-injected image dataset is collected using real-time sensor simulation of driving scenarios with an industrially recognized HIL setup. An AAE-based unsupervised automotive camera fault diagnosis system using VGG16 as the encoder-decoder structure is proposed, and experiments on its performance are conducted on both the self-collected dataset and fault-injected KITTI raw images. For the non-processed KITTI dataset, morphological operations are examined and employed as preprocessing. The performance of the system is discussed in comparison to supervised and unsupervised image partition methods in related work. The research found that the AAE method outperforms the popular VAE method, that VGG16 as the encoder-decoder structure significantly outperforms a 3-layer Convolutional Neural Network (CNN) and ResNet18, and that morphological preprocessing significantly improves system performance. The best performing VGG16-AAE model achieves 62.7% diagnosis accuracy on the self-collected dataset and 86.4% accuracy on the double-erosion-processed fault-injected KITTI dataset. In conclusion, this study introduces a novel scheme for collecting automotive sensor data using Hardware-in-Loop, utilizes preprocessing techniques that enhance image partitioning, and examines the application of unsupervised models for diagnosing faults in automotive cameras.
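
A minimal sketch of how two of the fault types above, Gaussian noise and dead pixels, might be injected into clean frames to build a balanced fault dataset. The noise level, dead-pixel fraction, and stand-in frame are illustrative assumptions; lens flare, which needs a more involved optical model, is omitted.

```python
# Sketch of fault injection for camera frames: additive Gaussian noise and
# stuck-at-black dead pixels. Parameter values are illustrative assumptions.
import numpy as np

def add_gaussian_noise(img, sigma=15.0):
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_dead_pixels(img, fraction=0.01):
    out = img.copy()
    mask = np.random.rand(*img.shape[:2]) < fraction  # random stuck pixels
    out[mask] = 0
    return out

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
faulty = add_dead_pixels(add_gaussian_noise(frame))
```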
295

Identification of Fundamental Driving Scenarios Using Unsupervised Machine Learning

Anantha Padmanaban, Deepika January 2020
A challenge in releasing autonomous vehicles to public roads is safety verification of the developed features. Safety test driving is not practically feasible, as the acceptance criterion requires driving at least 2.1 billion kilometers [1]. An alternative to this distance-based testing is the scenario-based approach, where intelligent vehicles are exposed to known scenarios. Identifying such scenarios from driving data is crucial for this validation. The aim of this thesis is to investigate the possibility of unsupervised identification of driving scenarios from driving data. The task is performed in two major parts: first, segmentation of the time-series driving data by detecting changepoints, followed by clustering of the obtained segments. The time-series segmentation is approached with a deep learning method, while the second task is performed using time-series clustering. The work also includes a visual approach for validating the time-series segmentation, followed by a quantitative measure of performance. The approach is also qualitatively compared against a Bayesian nonparametric approach to assess the usefulness of the proposed method. Based on an analysis of the results, the usefulness and drawbacks of the method are discussed, followed by the scope for future research.
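
The two-stage pipeline can be illustrated with off-the-shelf tools: the sketch below detects changepoints in a toy signal and then clusters the resulting segments. The thesis uses a deep learning detector; the `ruptures` library stands in here, and all parameter values are illustrative assumptions.

```python
# Sketch of segment-then-cluster on a toy signal: changepoint detection with
# ruptures (PELT), then K-Means on simple per-segment features.
import numpy as np
import ruptures as rpt
from sklearn.cluster import KMeans

signal = np.concatenate([np.random.normal(m, 0.5, 200) for m in (0, 3, 0, -3)])

# Stage 1: detect changepoints, yielding segment boundaries (last one = end).
boundaries = rpt.Pelt(model="rbf").fit(signal).predict(pen=10)

# Stage 2: describe each segment with simple features and cluster them.
segments = np.split(signal, boundaries[:-1])
features = np.array([[s.mean(), s.std()] for s in segments])
labels = KMeans(n_clusters=min(3, len(segments)), n_init="auto").fit_predict(features)
```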
296

Unsupervised Anomaly Detection on Time Series Data: An Implementation on Electricity Consumption Series

Lindroth Henriksson, Amelia January 2021
Digitization of the energy industry, the introduction of smart grids, and increasing regulation of electricity consumption metering have resulted in vast amounts of electricity data. This data presents a unique opportunity to understand electricity usage and make it more efficient, reducing electricity consumption and carbon emissions. An important initial step in analyzing the data is to identify anomalies. In this thesis, the problem of anomaly detection in electricity consumption series is addressed using four machine learning methods: density-based spatial clustering of applications with noise (DBSCAN), local outlier factor (LOF), isolation forest (iForest), and one-class support vector machine (OC-SVM). To evaluate the methods, synthetic anomalies were introduced into the electricity consumption series and the methods were evaluated for two anomaly types: point anomalies and collective anomalies. In addition to the electricity consumption data, features describing prior consumption, outdoor temperature, and date-time properties were included in the models. Results indicate that adding the temperature feature and the lag features generally impaired anomaly detection performance, while including date-time features improved it. Of the four methods, OC-SVM performed best at detecting point anomalies, while LOF performed best at detecting collective anomalies. In an attempt to improve the models' detection power, the electricity consumption series were de-trended and de-seasonalized and the same experiments were carried out. The models did not perform better on the decomposed series than on the non-decomposed ones.
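
A minimal sketch of the four detectors applied to a toy consumption series with synthetic point anomalies injected. The single-feature input, contamination rate, and anomaly magnitude are illustrative assumptions rather than the thesis's feature setup.

```python
# Sketch: inject synthetic point anomalies into a daily-cycle consumption
# series, then flag outliers with iForest, LOF, OC-SVM, and DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
consumption = 10 + np.sin(np.arange(1000) * 2 * np.pi / 24) + rng.normal(0, 0.2, 1000)
consumption[rng.choice(1000, 10, replace=False)] += 5.0  # synthetic point anomalies

X = StandardScaler().fit_transform(consumption.reshape(-1, 1))

predictions = {
    "iForest": IsolationForest(contamination=0.01).fit_predict(X),
    "LOF": LocalOutlierFactor(contamination=0.01).fit_predict(X),
    "OC-SVM": OneClassSVM(nu=0.01).fit_predict(X),
    "DBSCAN": DBSCAN(eps=0.5, min_samples=5).fit_predict(X),  # -1 marks noise
}
for name, pred in predictions.items():
    print(name, "flagged:", int((pred == -1).sum()))
```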
297

Data Fusion of Infrared, Radar, and Acoustics Based Monitoring System

Mirzaei, Golrokh 22 July 2014 (has links)
No description available.
298

DEEP LEARNING BASED METHODS FOR AUTOMATIC EXTRACTION OF SYNTACTIC PATTERNS AND THEIR APPLICATION FOR KNOWLEDGE DISCOVERY

Mdahsanul Kabir (16501281) 03 January 2024
<p dir="ltr">Semantic pairs, which consist of related entities or concepts, serve as the foundation for comprehending the meaning of language in both written and spoken forms. These pairs enable to grasp the nuances of relationships between words, phrases, or ideas, forming the basis for more advanced language tasks like entity recognition, sentiment analysis, machine translation, and question answering. They allow to infer causality, identify hierarchies, and connect ideas within a text, ultimately enhancing the depth and accuracy of automated language processing.</p><p dir="ltr">Nevertheless, the task of extracting semantic pairs from sentences poses a significant challenge, necessitating the relevance of syntactic dependency patterns (SDPs). Thankfully, semantic relationships exhibit adherence to distinct SDPs when connecting pairs of entities. Recognizing this fact underscores the critical importance of extracting these SDPs, particularly for specific semantic relationships like hyponym-hypernym, meronym-holonym, and cause-effect associations. The automated extraction of such SDPs carries substantial advantages for various downstream applications, including entity extraction, ontology development, and question answering. Unfortunately, this pivotal facet of pattern extraction has remained relatively overlooked by researchers in the domains of natural language processing (NLP) and information retrieval.</p><p dir="ltr">To address this gap, I introduce an attention-based supervised deep learning model, ASPER. ASPER is designed to extract SDPs that denote semantic relationships between entities within a given sentential context. I rigorously evaluate the performance of ASPER across three distinct semantic relations: hyponym-hypernym, cause-effect, and meronym-holonym, utilizing six datasets. My experimental findings demonstrate ASPER's ability to automatically identify an array of SDPs that mirror the presence of these semantic relationships within sentences, outperforming existing pattern extraction methods by a substantial margin.</p><p dir="ltr">Second, I want to use the SDPs to extract semantic pairs from sentences. I choose to extract cause-effect entities from medical literature. This task is instrumental in compiling various causality relationships, such as those between diseases and symptoms, medications and side effects, and genes and diseases. Existing solutions excel in sentences where cause and effect phrases are straightforward, such as named entities, single-word nouns, or short noun phrases. However, in the complex landscape of medical literature, cause and effect expressions often extend over several words, stumping existing methods, resulting in incomplete extractions that provide low-quality, non-informative, and at times, conflicting information. To overcome this challenge, I introduce an innovative unsupervised method for extracting cause and effect phrases, PatternCausality tailored explicitly for medical literature. PatternCausality employs a set of cause-effect dependency patterns as templates to identify the key terms within cause and effect phrases. It then utilizes a novel phrase extraction technique to produce comprehensive and meaningful cause and effect expressions from sentences. Experiments conducted on a dataset constructed from PubMed articles reveal that PatternCausality significantly outperforms existing methods, achieving a remarkable order of magnitude improvement in the F-score metric over the best-performing alternatives. 
I also develop various PatternCausality variants that utilize diverse phrase extraction methods, all of which surpass existing approaches. PatternCausality and its variants exhibit notable performance improvements in extracting cause and effect entities in a domain-neutral benchmark dataset, wherein cause and effect entities are confined to single-word nouns or noun phrases of one to two words.</p><p dir="ltr">Nevertheless, PatternCausality operates within an unsupervised framework and relies heavily on SDPs, motivating me to explore the development of a supervised approach. Although SDPs play a pivotal role in semantic relation extraction, pattern-based methodologies remain unsupervised, and the multitude of potential patterns within a language can be overwhelming. Furthermore, patterns do not consistently capture the broader context of a sentence, leading to the extraction of false-positive semantic pairs. As an illustration, consider the hyponym-hypernym pattern <i>the w of u</i> which can correctly extract semantic pairs for a sentence like <i>the village of Aasu</i> but fails to do so for the phrase <i>the moment of impact</i>. The root cause of this limitation lies in the pattern's inability to capture the nuanced meaning of words and phrases in a sentence and their contextual significance. These observations have spurred my exploration of a third model, DepBERT which constitutes a dependency-aware supervised transformer model. DepBERT's primary contribution lies in introducing the underlying dependency structure of sentences to a language model with the aim of enhancing token classification performance. To achieve this, I must first reframe the task of semantic pair extraction as a token classification problem. The DepBERT model can harness both the tree-like structure of dependency patterns and the masked language architecture of transformers, marking a significant milestone, as most large language models (LLMs) predominantly focus on semantics and word co-occurrence while neglecting the crucial role of dependency architecture.</p><p dir="ltr">In summary, my overarching contributions in this thesis are threefold. First, I validate the significance of the dependency architecture within various components of sentences and publish SDPs that incorporate these dependency relationships. Subsequently, I employ these SDPs in a practical medical domain to extract vital cause-effect pairs from sentences. Finally, my third contribution distinguishes this thesis by integrating dependency relations into a deep learning model, enhancing the understanding of language and the extraction of valuable semantic associations.</p>
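
As a minimal illustration of what a single SDP buys, and where it fails, the sketch below matches the "the w of u" pattern on a dependency parse with spaCy. It is a hand-rolled pattern matcher, not the ASPER or DepBERT models, and it assumes the en_core_web_sm model is installed.

```python
# Sketch: match the hyponym-hypernym SDP "the w of u" on a dependency parse.
# Yields the correct pair for "the village of Aasu" and, as the text notes,
# a false positive for "the moment of impact".
import spacy

nlp = spacy.load("en_core_web_sm")

def match_w_of_u(text):
    """Yield (w, u) pairs where noun u attaches to noun w via an 'of' prep."""
    for tok in nlp(text):
        if tok.dep_ == "pobj" and tok.head.lower_ == "of":
            prep = tok.head
            if prep.dep_ == "prep" and prep.head.pos_ == "NOUN":
                yield prep.head.text, tok.text

print(list(match_w_of_u("They visited the village of Aasu.")))  # ('village', 'Aasu')
print(list(match_w_of_u("He recalled the moment of impact.")))  # false positive
```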
299

ANOMALY DETECTION IN DATA CENTER MACHINE MONITORING METRICS

RICARDO SOUZA DIAS 17 January 2020
A data center typically has a large number of machines with different hardware configurations. Multiple applications are executed, and software and hardware are constantly updated. To avoid disruption of critical applications, which can cause significant financial loss, system administrators should identify and correct failures as early as possible. However, fault detection in production data centers often occurs only when applications and services are already unavailable. Among the causes of late fault detection is the use of threshold-only monitoring techniques. The increasing complexity of constantly updated applications makes it difficult to set optimal thresholds for each metric and server. This work proposes the use of anomaly detection techniques in place of threshold-based techniques. An anomaly is system behavior that is unusual and significantly different from previous normal behavior. We developed an anomaly detection algorithm called Decreased Anomaly Score by Repeated Sequence (DASRS) that analyzes, in real time, metrics collected from servers in a production data center. DASRS showed excellent accuracy results, compatible with state-of-the-art algorithms, with lower processing time and memory consumption. For this reason, DASRS meets the real-time processing requirements of a large volume of data.
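
To contrast the two monitoring styles the abstract describes, here is a minimal sketch of a static threshold check versus an anomaly score based on deviation from recent behavior. The rolling z-score is a stand-in illustration, not the DASRS algorithm, and the window size and limits are illustrative assumptions.

```python
# Sketch: static threshold vs. deviation-from-recent-behavior anomaly score.
# A subtle shift in a CPU metric evades the static limit but not the z-score.
import numpy as np

def static_threshold_alerts(metric, limit=80.0):
    return metric > limit

def rolling_zscore_alerts(metric, window=60, z=4.0):
    alerts = np.zeros(len(metric), dtype=bool)
    for i in range(window, len(metric)):
        recent = metric[i - window:i]
        sigma = recent.std() or 1e-9          # guard against zero variance
        alerts[i] = abs(metric[i] - recent.mean()) / sigma > z
    return alerts

cpu = np.r_[np.random.normal(40, 2, 300), np.random.normal(55, 2, 5)]  # subtle shift
print(static_threshold_alerts(cpu).any(), rolling_zscore_alerts(cpu).any())
```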
300

TEMPORAL EVENT MODELING OF SOCIAL HARM WITH HIGH DIMENSIONAL AND LATENT COVARIATES

Xueying Liu (13118850) 09 September 2022
The counting process is fundamental to many real-world problems involving event data, and the Poisson process, used as the background intensity of the Hawkes process, is the most commonly used point process. The Hawkes process, a self-exciting point process, fits temporal event data, spatial-temporal event data, and event data with covariates. We study a Hawkes process fit to heterogeneous drug overdose data via a novel semi-parametric approach. The counting process is also related to survival data, since both study the occurrence of events over time. We fit a Cox model to temporal event data with a large corpus that is processed into high-dimensional covariates, and we study the significant features that influence the intensity of events.
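
As a worked illustration of the model class, the sketch below evaluates the conditional intensity of a Hawkes process with a constant Poisson background mu and an exponential kernel, lambda(t) = mu + sum over t_i < t of alpha * exp(-beta * (t - t_i)). The parameter values and event times are illustrative assumptions.

```python
# Sketch: conditional intensity of a Hawkes process with Poisson background mu
# and exponentially decaying self-excitation. Each past event raises the
# intensity by alpha, decaying at rate beta.
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([1.0, 1.3, 1.4, 5.2])   # hypothetical event times
print(hawkes_intensity(2.0, events))      # elevated just after the cluster
print(hawkes_intensity(4.9, events))      # decayed back toward mu
```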
