1

Event Mining for System and Service Management

Tang, Liang 18 April 2014 (has links)
Modern IT infrastructures are built from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient, so service providers seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a large amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are built on event mining algorithms, which can automatically derive part of that knowledge from historical system logs, events, and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events, with their corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events, and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
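To make the KNN-based ticket recommendation idea concrete, here is a minimal sketch, not the dissertation's actual implementation: ticket descriptions are represented as TF-IDF vectors, and the nearest historical tickets are retrieved so their attached resolutions can be suggested. The ticket texts and resolutions are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Historical tickets and their recorded resolutions (illustrative examples).
history = [
    ("disk usage exceeded 90% on /var partition", "extended the /var filesystem"),
    ("service httpd not responding on port 80",   "restarted httpd and cleared stale locks"),
    ("high CPU load caused by backup job",        "rescheduled backup to off-peak hours"),
]
descriptions = [t[0] for t in history]

# Vectorize ticket text with TF-IDF and index it for nearest-neighbor search.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(descriptions)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

# For an incoming ticket, recommend the resolutions of the k most similar tickets.
incoming = "httpd daemon stopped answering requests"
distances, indices = knn.kneighbors(vectorizer.transform([incoming]))
for dist, idx in zip(distances[0], indices[0]):
    print(f"similarity={1 - dist:.2f}  resolution: {history[idx][1]}")
```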
2

AI/ML Development for RAN Applications: Deep Learning in Log Event Prediction

Sun, Yuxin January 2023 (has links)
Since many log tracing applications and diagnostic commands are now available on base station nodes, event logs can easily be collected, parsed, and structured for network performance analysis. To improve the In-Service Performance of a customer network, a sequential machine learning model can be trained, tested, and deployed on each node to learn from past events and predict future crashes or failures. This thesis project focuses on the evaluation and analysis of the effectiveness of deep learning models in predicting log events. It explores the application of a stacked long short-term memory (LSTM) based model for capturing temporal dependencies and patterns within log event data. In addition, it investigates the probability distribution of the next event in the logs and estimates the event trigger time to predict the future node restart event. This thesis project aims to improve node availability time in Ericsson base stations and contribute to further applications of deep learning techniques in log event prediction. A framework with two main phases is used to analyze and predict the occurrence of restart events based on the sequence of events. In the first phase, we perform natural language processing (NLP) on the log content to obtain the log keys, and then identify the sequences that lead to a restart event among the node event sequences. In the second phase, we analyze the event sequences that resulted in a restart and predict how many minutes in the future the restart event will occur. Experiment results show that our framework achieves no less than 73% accuracy on restart prediction and more than 1.5 minutes of lead time before a restart. Moreover, our framework also performs well for non-restart events.
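As an illustration of the stacked-LSTM approach described above, the following is a minimal sketch, not the thesis's actual model: log keys are treated as integer tokens and the network predicts the next event in a window. The vocabulary size, window length, layer widths, and the random toy data are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

NUM_LOG_KEYS = 200   # assumed size of the log-key vocabulary
SEQ_LEN = 20         # assumed length of each input event window

# Stacked LSTM: two recurrent layers, then a softmax over possible next log keys.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=NUM_LOG_KEYS, output_dim=32),
    tf.keras.layers.LSTM(64, return_sequences=True),  # first layer passes sequences onward
    tf.keras.layers.LSTM(64),                         # second layer summarizes the window
    tf.keras.layers.Dense(NUM_LOG_KEYS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy training data: sliding windows of log-key IDs and the key following each window.
rng = np.random.default_rng(0)
X = rng.integers(0, NUM_LOG_KEYS, size=(1000, SEQ_LEN))
y = rng.integers(0, NUM_LOG_KEYS, size=(1000,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Probability distribution over the next event for one observed window.
next_event_probs = model.predict(X[:1], verbose=0)[0]
print("most likely next log key:", int(next_event_probs.argmax()))
```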
3

Deployment failure analysis using machine learning

Alviste, Joosep Franz Moorits January 2020 (has links)
Manually diagnosing recurrent faults in software systems can be an inefficient use of engineers' time. Faults are commonly diagnosed by manually inspecting system logs from around the failure time. The DevOps engineers at Pipedrive, a SaaS business offering a sales CRM platform, have developed a simple regular-expression-based service for automatically classifying failed deployments. However, such a solution is not scalable, and a more sophisticated solution is required. In this thesis, log mining was used to automatically diagnose Pipedrive's failed deployments based on the deployment logs. Multiple log parsing and machine learning algorithms were compared based on the resulting log mining pipeline's F1 score. A proof-of-concept log mining pipeline was created that consisted of parsing logs with the Drain algorithm, transforming the log files into event count vectors, and finally training a random forest machine learning model to classify the deployment logs. The pipeline gave an F1 score of 0.75 when classifying the testing data and a lower score of 0.65 when classifying the evaluation dataset.
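A minimal sketch of such a pipeline is shown below, assuming the open-source drain3 package as the Drain implementation; the log lines, labels, and failure classes are illustrative assumptions, and the thesis's actual pipeline may differ.

```python
from collections import Counter
from drain3 import TemplateMiner  # assumed: drain3 provides the Drain parser
from sklearn.ensemble import RandomForestClassifier

miner = TemplateMiner()

def log_to_event_counts(lines):
    """Parse raw log lines into Drain cluster IDs and count each event type."""
    cluster_ids = [miner.add_log_message(line)["cluster_id"] for line in lines]
    return Counter(cluster_ids)

# Toy deployment logs with known outcomes (illustrative data).
deployments = [
    (["service started", "healthcheck passed"], "success"),
    (["service started", "connection refused by db", "rollback initiated"], "db_failure"),
    (["image pull failed", "rollback initiated"], "image_failure"),
]
counts = [log_to_event_counts(lines) for lines, _ in deployments]
labels = [label for _, label in deployments]

# Align the per-deployment event counts into fixed-width vectors.
event_types = sorted({e for c in counts for e in c})
X = [[c.get(e, 0) for e in event_types] for c in counts]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
new_logs = ["service started", "connection refused by db"]
vec = [log_to_event_counts(new_logs).get(e, 0) for e in event_types]
print("predicted failure class:", clf.predict([vec])[0])
```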
4

Enhancements of pre-processing, analysis and presentation techniques in web log mining

Pabarškaitė, Židrina 13 July 2009 (has links)
As the Internet becomes an important part of our lives, more attention is paid to information quality and to how information is displayed to the user. The research area of this work is web data analysis and methods for processing this data. This knowledge can be extracted from web servers' data – log files, in which all users' navigational browsing patterns are recorded. The research object of the dissertation is the web log data mining process. General topics related to this object are web log data preparation methods, data mining algorithms for prediction and classification tasks, and web text mining. The key target of the thesis is to develop methods that improve the knowledge discovery steps in mining web log data and reveal new opportunities to the data analyst. While performing web log analysis, it was discovered that insufficient attention had been paid to the web log data cleaning process. By reducing the number of redundant records, the data mining process becomes much more effective and faster. Therefore, a new original cleaning framework was introduced which retains only the records that correspond to real user clicks. People tend to understand technical information better when it resembles human language; it is therefore advantageous to use decision trees for mining web log data, as they generate web usage patterns in the form of rules which are understandable to humans. However, it was discovered that users' browsing history lengths differ, therefore specific data... [to full text]
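As a rough illustration of the cleaning idea — keeping only records that correspond to real user clicks — the following sketch filters a web server log. The exclusion rules (resource file extensions, non-2xx status codes, bot user agents) are common heuristics assumed for illustration, not the dissertation's exact framework.

```python
import re

# Heuristics assumed for illustration: embedded resources, failed requests,
# and crawler traffic are not real user clicks.
RESOURCE_EXT = re.compile(r"\.(css|js|png|gif|jpe?g|ico|svg|woff2?)(\?|$)", re.I)
BOT_AGENT = re.compile(r"bot|crawler|spider", re.I)

def is_user_click(record):
    """Keep a log record only if it plausibly represents a real page view."""
    if RESOURCE_EXT.search(record["url"]):
        return False                      # embedded resource, not a click
    if not record["status"].startswith("2"):
        return False                      # failed or redirected request
    if BOT_AGENT.search(record["agent"]):
        return False                      # crawler traffic
    return True

log = [
    {"url": "/products", "status": "200", "agent": "Mozilla/5.0"},
    {"url": "/static/site.css", "status": "200", "agent": "Mozilla/5.0"},
    {"url": "/products", "status": "200", "agent": "Googlebot/2.1"},
    {"url": "/missing", "status": "404", "agent": "Mozilla/5.0"},
]
clicks = [r for r in log if is_user_click(r)]
print(len(clicks), "of", len(log), "records kept")  # -> 1 of 4 records kept
```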
5

Caching Techniques For Dynamic Web Servers

Suresha, * 07 1900 (has links)
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost – each request requires computation as well as communication across multiple components within the website and across the Internet. In fact, dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load, and bandwidth consumption, as compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently-proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code. We start by presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches, without suffering from their individual limitations. This technique concentrates on reducing the bandwidth consumption due to dynamic web pages. Then, we move on to presenting mechanisms for reducing dynamic page construction times -- during normal loading, this is done through a hybrid technique of fragment caching and page pre-generation, utilizing the excess capacity with which web servers are typically provisioned to handle peak loads. During peak loading, this is achieved by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of achieving reduced bandwidth consumption from the web infrastructure perspective, and reduced page construction times from the user perspective.
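To illustrate the fragment caching technique at the heart of these integrations, here is a minimal sketch, an assumption-laden toy rather than the thesis's system: a dynamic page is assembled from independently cached fragments, each with its own time-to-live, so only stale fragments are regenerated per request.

```python
import time

class FragmentCache:
    """Toy fragment cache: each page fragment is cached with its own TTL, so
    slowly-changing fragments (headers, navigation) are reused while volatile
    fragments (prices, stock counts) are regenerated when they go stale."""

    def __init__(self):
        self._store = {}  # fragment key -> (html, expiry timestamp)

    def get_or_render(self, key, ttl_seconds, render_fn):
        html, expiry = self._store.get(key, (None, 0.0))
        if time.time() < expiry:
            return html                     # fresh fragment: no recomputation
        html = render_fn()                  # stale or missing: regenerate
        self._store[key] = (html, time.time() + ttl_seconds)
        return html

cache = FragmentCache()

def render_page(product_id):
    # The page is assembled from fragments with different freshness needs.
    header = cache.get_or_render("header", 3600, lambda: "<header>Shop</header>")
    price = cache.get_or_render(f"price:{product_id}", 30,
                                lambda: f"<span>price for {product_id}</span>")
    return header + price

print(render_page("sku-42"))
print(render_page("sku-42"))  # second call serves both fragments from cache
```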
6

Knowledge extraction techniques using semantics of web usage log mining in order to personalize websites

Θεοδωρίδης, Ιωάννης-Βασίλειος 06 May 2009 (has links)
The present Diploma Dissertation studies the personalization of websites. Initially, a thorough review of the relevant bibliography is presented, locating a plethora of academic and commercial reports and solutions on the subject of website personalization. In most cases, personalization relies on data that are directly or indirectly collected from user statements or actions. However, the study of the relevant articles shows that attempts to exploit web usage data for personalization have so far had limited success. The fundamental deficit is that the content of a website is usually managed mechanistically, avoiding comprehension of both its content and its structure. The Dissertation then attempts semi-automatic personalization of websites using web usage log files, combined with semantic and conceptual analysis of the website content. With this method, a tool is implemented that personalizes a website by proposing to its users web pages with similar conceptual content. This is done by creating the ontology of the website and combining it with the users' navigation data.
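A minimal sketch of the combination described — ontology concepts plus usage data driving page recommendations — might look like the following; the concept labels, page names, and Jaccard scoring are illustrative assumptions, not the tool described in the dissertation.

```python
# Each page is annotated with ontology concepts; a user's session builds a
# concept profile, and unvisited pages sharing those concepts are recommended.
page_concepts = {
    "/laptops":      {"hardware", "computers"},
    "/gpu-guide":    {"hardware", "graphics"},
    "/python-intro": {"programming", "tutorials"},
    "/ml-course":    {"programming", "machine-learning"},
}

def recommend(session, top_n=2):
    # Aggregate the concepts of the pages the user has visited.
    profile = set().union(*(page_concepts[p] for p in session))
    # Score unvisited pages by concept overlap with the profile (Jaccard index).
    scores = {}
    for page, concepts in page_concepts.items():
        if page in session:
            continue
        union = profile | concepts
        scores[page] = len(profile & concepts) / len(union) if union else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["/laptops"]))  # -> pages sharing 'hardware'/'computers' rank first
```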
