  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analýza systémových záznamů / System Log Analysis

Ščotka, Jan January 2008 (has links)
The goal of this master thesis is to make it possible to perform system log analysis in a more general way than well-known host-based intrusion detection systems (HIDS) do. The proposed means to this end is user-friendly regular expressions. This thesis deals with making regular expressions usable in the field of log analysis, mainly by users unfamiliar with the formal aspects of computer science.
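The thesis's own user-friendly pattern dialect is not reproduced in the abstract; as a rough illustration of the idea, the sketch below (syntax, wildcards, and log lines all hypothetical) translates a simplified user pattern into a standard regular expression and applies it as a log filter:

```python
import re

# Hypothetical "user-friendly" syntax: '*' matches any run of characters
# and '<num>' matches an integer, so users need no regex knowledge.
def compile_friendly(pattern):
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("<num>", i):
            out.append(r"\d+")
            i += 5
        elif pattern[i] == "*":
            out.append(".*")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(out))

logs = [
    "sshd[412]: Failed password for root from 10.0.0.7",
    "sshd[413]: Accepted password for alice",
]
rx = compile_friendly("sshd[<num>]: Failed password for *")
matches = [line for line in logs if rx.match(line)]
```

Everything literal is escaped, so users can paste raw log fragments and only the two wildcard tokens carry special meaning.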
42

Evaluation of Automotive Data mining and Pattern Recognition Techniques for Bug Analysis

Gawande, Rashmi 25 January 2016 (has links)
In an automotive infotainment system, developers analyzing bug reports have to spend significant time reading log messages and trying to locate anomalous behavior before identifying its root cause. The log messages must be opened in a Traceviewer tool to be read in human-readable form, and have to be extracted to text files by applying manual filters before further analysis. There is a need to evaluate machine learning/data mining methods that could assist in this error analysis. One such method is learning patterns for "normal" messages, where "normal" can even mean messages containing keywords like "exception", "error", or "failed" that are nevertheless harmless or irrelevant to the bug currently under analysis. These patterns can then be applied as a filter, leaving behind only truly anomalous messages that are interesting for analysis; a successful application of the filter reduces the noise to a few "anomalous" messages. After evaluation of the researched candidate algorithms, two algorithms, GSP and FP-Growth, were found useful and were implemented together in a prototype. The prototype covers pre-processing, input creation, algorithm execution, training-set creation, and analysis of new trace logs. Running the prototype reduced the manual effort, achieving the objective of this thesis work.
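The prototype itself is not shown in the abstract; the stdlib-only sketch below illustrates the filtering idea in miniature (templating rule, threshold, and log messages are all invented): frequent message templates learned from reference traces stand in for the mined patterns, and anything not matching a frequent template is kept as potentially anomalous:

```python
import re
from collections import Counter

def template(msg):
    # Mask variable parts (numbers, hex ids) so recurring messages
    # collapse to one template -- a stand-in for the mined patterns.
    return re.sub(r"0x[0-9a-f]+|\d+", "<*>", msg)

def learn_normal(training_logs, min_support=2):
    # Templates seen at least min_support times count as "normal".
    counts = Counter(template(m) for m in training_logs)
    return {t for t, c in counts.items() if c >= min_support}

def filter_anomalous(new_logs, normal):
    return [m for m in new_logs if template(m) not in normal]

training = [
    "audio service started in 120 ms",
    "audio service started in 95 ms",
    "error: codec 0x1f reset (harmless)",
    "error: codec 0x2a reset (harmless)",
]
normal = learn_normal(training)
trace = ["audio service started in 130 ms", "error: navigation db corrupt"]
anomalies = filter_anomalous(trace, normal)
```

Note how the "error: codec … reset" messages are treated as normal despite the keyword "error", matching the abstract's point that keyword matching alone is not enough.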
43

Subsurface Framework and Fault Timing in the Missourian Granite Wash Interval, Stiles Ranch and Mills Ranch Fields, Wheeler County, Texas

Lomago, Brendan Michael 14 December 2018 (has links)
The recent and rapid growth of horizontal drilling in the Anadarko basin necessitates newer studies to characterize reservoir and source rock quality in the region. Most oil production in the basin comes from the Granite Wash reservoirs, which are composed of stacked tight sandstones and conglomerates that range from Virgilian (305-299 Ma) to Atokan (311-309.4 Ma) in age. By utilizing geophysical well logging data available in raster format, the Granite Wash reservoirs and their respective marine flooding surfaces were stratigraphically mapped across the regional fault systems. Additionally, well log trends were calibrated with coincident core data to minimize uncertainty regarding facies variability and lateral continuity of these intervals. In this thesis, inferred lithofacies were grouped into medium submarine fan lobe, distal fan lobe, and offshore facies (the interpreted depositional environments). By creating isopach and net sand maps in Petra, faulting in the Missourian was determined to have occurred syndepositionally at the fifth-order scale of stratigraphic hierarchy.
44

EXPLORING HEALTH WEBSITE USERS BY WEB MINING

Kong, Wei 07 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the continuous growth of health information on the Internet, providing user-oriented health services online has become a great challenge for health providers. Understanding the information needs of users is the first step toward providing tailored health services. The purpose of this study is to examine the navigation behavior of different user groups by extracting their search terms, and to make suggestions for restructuring a website to offer more customized Web service. This study analyzed five months of daily access weblog files from one local health provider's website, discovered the most popular general and health-related topics, and compared the information search strategies of the patient/consumer and doctor groups. Our findings show that users do not search for health information as much as was thought. The top two health topics of concern to patients are children's health and occupational health; another topic that both user groups are interested in is medical records. Patients and doctors also have different search strategies when looking for information on this website: patients go back to the previous page more often, while doctors usually go to the final page directly and then leave without coming back. As a result, suggestions for redesigning and improving the website are discussed; a more intuitive portal and more customized links for both user groups are proposed.
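The study's actual weblog format and extraction pipeline are not given in the abstract; a minimal sketch of the general technique, assuming Common Log Format lines and a `q` query parameter (both assumptions), could look like:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Hypothetical access-log lines; the studied site's real log fields
# are not shown in the abstract.
log_lines = [
    '10.0.0.1 - - [01/Jan/2010:10:00:00] "GET /search?q=children+health HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2010:10:01:00] "GET /search?q=medical+records HTTP/1.1" 200 734',
    '10.0.0.1 - - [01/Jan/2010:10:02:00] "GET /search?q=children+health HTTP/1.1" 200 512',
]

def search_terms(lines, param="q"):
    # Pull the requested URL out of the quoted request field and
    # tally the values of the search-term parameter.
    terms = Counter()
    for line in lines:
        try:
            path = line.split('"')[1].split()[1]
        except IndexError:
            continue  # malformed line, skip
        for term in parse_qs(urlparse(path).query).get(param, []):
            terms[term] += 1
    return terms

top = search_terms(log_lines).most_common(1)
```

Grouping the resulting counts by user group (e.g., by IP range or login) would then support the comparison of patient and doctor search behavior the study describes.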
45

Discover patterns within train log data using unsupervised learning and network analysis

Guo, Zehua January 2022 (has links)
With the development of information technology in recent years, log analysis has gradually become a hot research topic. However, manual log analysis requires specialized knowledge and is time-consuming. Therefore, more and more researchers are looking for ways to automate log analysis. In this project, we explore methods for train log analysis using natural language processing and unsupervised machine learning. Several language models are used to extract word embeddings: the traditional TF-IDF model and three popular transformer-based models, BERT and its variants DistilBERT and RoBERTa. In addition, we compare two unsupervised clustering algorithms, DBSCAN and Mini-Batch k-means; the silhouette coefficient and the Davies-Bouldin score are used to evaluate clustering performance. Moreover, the metadata of the train logs is used to verify the effectiveness of the unsupervised methods. Beyond unsupervised learning, network analysis is applied to the train log data to explore the connections between the patterns identified by train control system experts; network visualization and centrality analysis are used to analyze the relationships between patterns and, in graph-theoretic terms, their importance. Overall, this project provides a feasible direction for future log analysis and processing.
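The project's exact tooling is not specified in the abstract; as a self-contained illustration of the TF-IDF step, the sketch below builds smoothed TF-IDF vectors for a few invented train-log messages and compares them by cosine similarity, the representation a clusterer such as DBSCAN or Mini-Batch k-means would then operate on:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Smoothed TF-IDF over whitespace tokens; a stdlib stand-in for
    # the TF-IDF vectorizer compared against BERT-style embeddings.
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * math.log((1 + n) / (1 + df[t]))
                        for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

logs = [
    "train door sensor timeout on unit 12",
    "train door sensor timeout on unit 7",
    "brake pressure warning in car 3",
]
vecs = tfidf_vectors(logs)
sim_same = cosine(vecs[0], vecs[1])  # near-duplicate messages
sim_diff = cosine(vecs[0], vecs[2])  # unrelated message
```

Messages sharing a template score high against each other and near zero against unrelated ones, which is exactly the structure density- or centroid-based clustering exploits.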
46

Integrating Telecommunications-Specific Language Models into a Trouble Report Retrieval Approach

Bosch, Nathan January 2022 (has links)
In the development of large telecommunications systems, it is imperative to identify, report, analyze and, thereafter, resolve both software and hardware faults. This resolution process often relies on written trouble reports (TRs) that contain information about the observed fault and, after analysis, information about why the fault occurred and the decision taken to resolve it. Due to the scale and number of TRs, a newly written fault report may be very similar to previously written ones, e.g., a duplicate. In this scenario, it can be beneficial to retrieve similar, previously created TRs to aid the resolution process. Previous work at Ericsson [1] introduced a multi-stage BERT-based approach to retrieve similar TRs given a newly written fault observation. This approach significantly outperformed simpler models like BM25, but suffered from two major challenges: 1) it did not leverage the vast non-task-specific telecommunications data at Ericsson, something that had seen success in other work [2], and 2) the model did not generalize effectively to TRs outside of the telecommunications domain it was trained on. In this thesis, we 1) investigate three different transfer learning strategies to attain stronger performance on a downstream TR duplicate retrieval task, notably focusing on effectively integrating existing telecommunications-specific language data into the model fine-tuning process, 2) investigate the efficacy of catastrophic forgetting mitigation strategies when fine-tuning the BERT models, and 3) identify how well the models perform on out-of-domain TR data. We find that integrating existing telecommunications knowledge, in the form of a pretrained telecommunications-specific language model, into our fine-tuning strategies allows us to outperform a domain adaptation fine-tuning strategy. In addition, we find that Elastic Weight Consolidation (EWC) is an effective strategy for mitigating catastrophic forgetting and attaining strong downstream performance on the duplicate TR retrieval task. Finally, we find that the models generalize well enough to perform reasonably effectively on out-of-domain TR data, indicating that the approaches may be eligible for real-world deployment.
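The BERT-based pipeline cannot be reconstructed from the abstract, but the BM25 baseline it is compared against is a standard formula; a stdlib sketch with invented trouble-report texts:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Okapi BM25: rank candidate trouble reports against a new
    # fault observation (the baseline the thesis compares against).
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

reports = [
    "handover failure after cell reconfiguration",
    "ui rendering glitch in settings menu",
    "handover dropped during cell reselection",
]
scores = bm25_scores("handover failure during reconfiguration", reports)
best = scores.index(max(scores))
```

Term overlap with rare words dominates the ranking; the thesis's point is that dense retrievers beat this lexical baseline but need domain-appropriate pretraining to do so.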
47

Anomaly Detection in Telecom Service Provider Network Infrastructure Security Logs using an LSTM Autoencoder: Leveraging Time Series Patterns for Improved Anomaly Detection

Vlk, Vendela January 2024 (has links)
New regulations are placed on Swedish Telecom Service Providers (TSPs) due to rising concern for safeguarding network security and privacy in the face of ever-evolving cyber threats. These regulations demand that Swedish telecom companies expand their data security strategies with proactive security measures. Logs, serving as digital footprints in IT infrastructure, play a crucial role in identifying anomalies that could indicate security breaches. Deep Learning (DL) has been used to detect anomalies in logs due to its ability to discern intricate patterns within the data. By leveraging deep learning-based models, it is possible not only to identify anomalies but also to predict and mitigate potential threats within the telecom network. An LSTM autoencoder was implemented to detect anomalies in two separate multivariate temporal log datasets: the BETH cybersecurity dataset, and a Cisco log dataset created specifically for this thesis. The empirical results show that the LSTM autoencoder reached an ROC AUC of 99.5% on the BETH dataset and 76.6% on the Cisco audit dataset; the use of an additional anomaly detection aid on the Cisco audit dataset let the model reach an ROC AUC of 99.6%. The conclusion drawn from this work is that the systematic approach to developing a deep learning model for anomaly detection in log data was effective. However, the study's findings raise crucial considerations regarding the suitability of various log data for deep learning models used in anomaly detection.
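The trained LSTM autoencoder itself is out of scope here; the sketch below illustrates only the downstream evaluation step the abstract describes, scoring log windows by (invented) reconstruction errors and computing ROC AUC via the Mann-Whitney formulation:

```python
def roc_auc(labels, scores):
    # ROC AUC as the Mann-Whitney U statistic: the probability that a
    # random anomaly gets a higher score than a random normal sample.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-window reconstruction errors from an autoencoder:
# anomalous windows tend to reconstruct poorly, hence higher error.
errors = [0.02, 0.03, 0.91, 0.04, 0.77, 0.05]
labels = [0, 0, 1, 0, 1, 0]

auc = roc_auc(labels, errors)
flagged = [i for i, e in enumerate(errors) if e > 0.5]  # threshold is arbitrary
```

Because AUC is threshold-free, it measures how well reconstruction error separates the classes before any alert threshold is chosen, which is why it is the headline metric in the abstract.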
48

Automatic extraction of communication protocols for web services composition

Musaraj, Kreshnik 13 December 2010 (has links)
Business process management, service-oriented architectures and their reverse engineering rely heavily on the fundamental endeavor of mining business process models and Web service business protocols from log files. Model extraction and mining aim at the (re)discovery of the behavior of a running model implementation using solely its interaction and activity traces, and no a priori information on the target model. Our preliminary study shows that: (i) only a minority of interaction data is recorded by process- and service-aware architectures, (ii) a limited number of methods achieve model extraction without knowledge of either positive process and protocol instances or the information to infer them, and (iii) the existing approaches rely on restrictive assumptions that only a fraction of real-world Web services satisfy. Enabling the extraction of these interaction models from activity logs under realistic hypotheses necessitates: (i) approaches that abstract away the business context in order to allow extended, generic usage, and (ii) tools for assessing the mining result through implementation of the process and service life-cycle. Moreover, since interaction logs are often incomplete, uncertain and contain errors, the mining approaches proposed in this work need to be capable of handling these imperfections properly. We propose a set of mathematical models that encompass the different aspects of process and protocol mining. The extraction approaches that we present, drawn from linear algebra, allow us to extract the business protocol while merging the classic process mining stages. In addition, our protocol representation, based on time series of flow density variations, makes it possible to recover the temporal order of execution of events and messages in the process. We also propose the concept of proper timeouts to identify timed transitions, and provide a method for extracting them despite their being invisible in logs. Finally, we present a multitask framework aimed at supporting all the steps of the process workflow and business protocol life-cycle, from design to optimization. The approaches presented in this manuscript have been implemented in prototype tools and experimentally validated on scalable datasets and real-world process and Web service models. The discovered business protocols can then be used to perform a multitude of tasks in an organization or enterprise.
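The thesis's linear-algebra mining approach is not reproduced here; as a sketch of the classic first step such protocol discovery builds on, the snippet below derives a directly-follows relation from invented interaction traces, counting which message immediately follows which:

```python
from collections import defaultdict

def directly_follows(traces):
    # Count how often event b immediately follows event a across all
    # traces -- the basic relation most process/protocol discovery
    # algorithms start from before inferring a model.
    edges = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return dict(edges)

# Hypothetical service interaction traces (message sequences per session).
traces = [
    ["login", "search", "order", "pay", "logout"],
    ["login", "search", "search", "logout"],
]
graph = directly_follows(traces)
```

The resulting weighted edges form the raw transition graph; the thesis's contribution lies in the later stages, e.g. recovering temporal order from flow-density time series and detecting timeouts that never appear explicitly in the log.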
49

Cooperative security log analysis using machine learning: Analyzing different approaches to log featurization and classification

Malmfors, Fredrik January 2022 (has links)
This thesis evaluates the performance of different machine learning approaches to log classification, based on a dataset derived from simulating intrusive behavior towards an enterprise web application. The first experiment consists of performing attacks towards the web app and correlating them with the logs to create a labeled dataset. The second experiment compares one unsupervised model based on a variational autoencoder with four supervised models based on both conventional feature-engineering techniques with deep neural networks and embedding-based feature techniques followed by long short-term memory architectures and convolutional neural networks. On this dataset, the embedding-based approaches performed much better than the conventional one, while the autoencoder did not perform well compared to the supervised models. To conclude, embedding-based approaches show promise even on datasets whose characteristics differ from natural language.
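The thesis's feature pipelines are not detailed in the abstract; as one example of a conventional log featurization, the sketch below hashes character n-grams of a raw log line into a fixed-length count vector a classifier can consume (the dimension and n are arbitrary choices, not the thesis's):

```python
import zlib

def hash_ngram_features(line, n=3, dim=16):
    # Feature hashing: map each character n-gram of the (lowercased)
    # log line into one of `dim` buckets and count occurrences.
    vec = [0] * dim
    text = line.lower()
    for i in range(len(text) - n + 1):
        bucket = zlib.crc32(text[i:i + n].encode()) % dim
        vec[bucket] += 1
    return vec

v1 = hash_ngram_features("GET /admin HTTP/1.1 403")
v2 = hash_ngram_features("get /admin http/1.1 403")
```

Hashed n-grams need no vocabulary and tolerate the arbitrary tokens common in logs, which is one reason such conventional featurizations remain a sensible baseline against learned embeddings.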
50

Subsurface Depositional Systems Analysis of the Cambrian Eau Claire Formation in Western Ohio

Laneville, Michael Warren 26 November 2018 (has links)
No description available.
