1.
ViDLog: Understanding Website Usability through Log File Reanimation. Menezes, Chris. 05 September 2012.
Webserver logfiles are inexpensive, automatically captured, text-based recordings of user interactions with a website. In this thesis, a tool, ViDLog, was created to take logfiles and reanimate user sessions with the purpose of gaining usability insights.
To evaluate the effectiveness and value of reanimating user sessions, 10 usability professionals viewed logfile-recorded website usage using ViDLog and were then asked to infer users' goals, strategies, successes or failures, and proficiencies, and afterwards to rate ViDLog across multiple dimensions.
ViDLog's logfile reanimation proved successful for gaining usability insights: usability professionals were able to infer users' goals, strategies, successes or failures, and proficiencies. Participants were able to do this without ViDLog training, without familiarity with the website being evaluated (Orlando), and without domain knowledge of the subject depicted in the user sessions (women's literature). However, they were only able to infer users' overarching goals, not specific goal criteria, and could determine relative proficiencies only after viewing both user sessions. They also expended a good deal of mental effort in comprehending ambiguous user sessions, and found inefficiencies in ViDLog's user interface. / Dr. Susan Brown for The Orlando Project
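ViDLog's implementation is not part of the abstract; purely as a sketch of the general technique it describes, the following Python fragment parses Apache combined-format log lines, groups them into per-visitor sessions, and prints a session back in order. The log format, the 30-minute session gap, and all names are illustrative assumptions, not details of ViDLog itself.

```python
import re
from collections import defaultdict
from datetime import datetime

# Apache "combined" log format: host, identity, user, timestamp,
# request, status, size, referer, user agent.
LINE_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def parse_sessions(lines, gap_seconds=1800):
    """Group hits by (host, user agent) and split a visitor's hits into
    sessions wherever more than gap_seconds passes between requests."""
    hits = defaultdict(list)
    for line in lines:
        m = LINE_RE.match(line)
        if m is None:
            continue  # skip malformed entries
        ts = datetime.strptime(m["ts"].split()[0], "%d/%b/%Y:%H:%M:%S")
        hits[(m["host"], m["agent"])].append((ts, m["path"], m["status"]))

    sessions = []
    for visitor, events in hits.items():
        events.sort()
        current = [events[0]]
        for prev, cur in zip(events, events[1:]):
            if (cur[0] - prev[0]).total_seconds() > gap_seconds:
                sessions.append((visitor, current))
                current = []
            current.append(cur)
        sessions.append((visitor, current))
    return sessions

def replay(session):
    """Reanimate one session by printing each page view in order."""
    visitor, events = session
    print("session for", visitor[0])
    for ts, path, status in events:
        print(f"  {ts:%H:%M:%S}  {status}  {path}")
```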
2.
Log File Categorization and Anomaly Analysis Using Grammar Inference. Memon, Ahmed Umar. 28 May 2008.
In today's information age, vast amounts of sensitive and confidential data are exchanged over an array of different mediums, accompanied by a comparable increase in the number and types of attacks that seek to acquire this information. Information security and data consistency have hence become critically important. Log file analysis has proven to be a good defense mechanism, as logs provide an accessible record of network activities in the form of server-generated messages; however, manual analysis is tedious and prohibitively time-consuming. Traditional log analysis techniques, based on pattern matching and data mining approaches, are ad hoc and cannot readily adapt to different kinds of log files.
The goal of this research is to explore the use of grammar inference for log file analysis in order to build a more adaptive, flexible and generic method for message categorization, anomaly detection and reporting. The grammar inference process employs robust parsing, island grammars and source transformation techniques.
We test the system using three different kinds of log files as training sets, inferring a grammar and generating message categories for each set. We then detect anomalous messages in new log files, using the inferred grammar as a catalog of valid traces, and present a reporting program that extracts instances of specified message categories from the log files. / Thesis (Master, Computing) -- Queen's University, 2008
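The inferred grammars themselves are not reproduced in the abstract. As a loose analogy only, the sketch below treats a set of learned message templates as the catalog of valid traces: new messages that match no template are reported as anomalous. The regular-expression templates merely stand in for the grammar-inference output and are invented for illustration.

```python
import re

# Message categories "inferred" from a training set, expressed here as
# hand-written regular expressions. The thesis derives such categories with
# grammar inference, island grammars and source transformation instead.
CATALOG = {
    "conn_open": re.compile(r"connection from \d+\.\d+\.\d+\.\d+ port \d+$"),
    "auth_ok":   re.compile(r"accepted password for \w+$"),
    "auth_fail": re.compile(r"failed password for \w+$"),
}

def categorize(message):
    """Return the first matching category, or None for an anomalous message."""
    for name, pattern in CATALOG.items():
        if pattern.search(message):
            return name
    return None

def report(log_lines, wanted=None):
    """Extract instances of the specified categories; collect anomalies."""
    instances, anomalies = [], []
    for line in log_lines:
        category = categorize(line.strip().lower())
        if category is None:
            anomalies.append(line)
        elif wanted is None or category in wanted:
            instances.append((category, line))
    return instances, anomalies
```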
3.
Deriving System Vulnerabilities Using Log Analytics. Higbee, Matthew Somers. 01 November 2015.
System administrators use many of the same tactics that hackers employ to validate the security of their systems, such as port scanning and vulnerability scanning. Port scanning is slow and can be highly inaccurate, and after a scan is complete, its results must be cross-checked against a vulnerability database to discover whether any vulnerabilities are present. While these techniques are useful, they have severe limitations: system administrators have full access to all of their machines and should not have to rely exclusively on port scanning them from the outside to check for vulnerabilities. This thesis introduces a novel concept for replacing port scanning with a Log File Inventory Management System, which automatically builds an accurate system inventory from existing log files. This inventory is then automatically cross-checked against a database of known vulnerabilities in real time, resulting in faster and more accurate vulnerability reporting than traditional port scanning methods.
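The thesis's system is not detailed in the abstract; a minimal Python sketch of the underlying idea, with an invented startup-banner format and a toy vulnerability table, might look like this:

```python
import re

# Toy vulnerability "database": product -> set of vulnerable versions.
# Both the contents and the banner format below are invented for illustration.
VULN_DB = {
    "openssh": {"5.8", "6.1"},
    "apache":  {"2.2.15"},
}

BANNER_RE = re.compile(r"starting (?P<product>[a-z]+) version (?P<version>[\w.]+)")

def build_inventory(log_lines):
    """Derive a {product: version} inventory from startup banners in logs."""
    inventory = {}
    for line in log_lines:
        m = BANNER_RE.search(line.lower())
        if m:
            inventory[m["product"]] = m["version"]
    return inventory

def find_vulnerabilities(inventory):
    """Cross-check the log-derived inventory against the vulnerability table."""
    return [(p, v) for p, v in inventory.items() if v in VULN_DB.get(p, set())]

logs = [
    "Jan 12 03:14:00 host1 sshd[411]: Starting OpenSSH version 5.8",
    "Jan 12 03:14:02 host1 httpd[520]: Starting Apache version 2.4.1",
]
print(find_vulnerabilities(build_inventory(logs)))  # [('openssh', '5.8')]
```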
4.
Clustering Generic Log Files Under Limited Data Assumptions / Klustring av generiska loggfiler under begränsade antaganden. Eriksson, Håkan. January 2016.
Complex computer systems are often prone to anomalous or erroneous behavior, which can lead to costly downtime while the systems are diagnosed and repaired. One source of information for diagnosing the errors and anomalies is log files, which are often generated in vast and diverse amounts; their size and semi-structured nature make manual analysis generally infeasible. Some automation is therefore desirable for sifting through the log files to find the source of the anomalies or errors. This project aimed to develop a generic algorithm that can cluster diverse log files in accordance with domain expertise. The results show that the developed algorithm performs in close agreement with manual clustering, even under more relaxed data assumptions.
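The algorithm itself is not given in the abstract. The sketch below shows one simple way to cluster semi-structured log lines under limited data assumptions: numbers are masked so that lines differing only in ids fall together, and lines are then grouped greedily by Jaccard similarity. This is an assumed stand-in, not the thesis's method.

```python
def tokenize(line):
    """Split a log line into tokens, masking numbers so that lines that
    differ only in ids or counters land in the same cluster."""
    return tuple("<NUM>" if t.isdigit() else t for t in line.split())

def jaccard(a, b):
    """Set-overlap similarity between two token tuples."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 1.0

def cluster(lines, threshold=0.6):
    """Greedy single-pass clustering: assign each line to the first cluster
    whose representative is similar enough, otherwise open a new cluster."""
    clusters = []  # list of (representative_tokens, member_lines) pairs
    for line in lines:
        tokens = tokenize(line)
        for representative, members in clusters:
            if jaccard(representative, tokens) >= threshold:
                members.append(line)
                break
        else:
            clusters.append((tokens, [line]))
    return clusters
```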
5.
Country and language level differences in multilingual digital libraries. Gäde, Maria. 07 April 2014.
While the importance of multilingual access to information systems is unquestioned, it remains unclear if and to what extent system functionalities, interfaces, or interaction patterns need to be adapted to country- or language-specific user behavior. This dissertation postulates that the identification of country and language level differences in user interactions is a crucial step in designing effective multilingual digital libraries. Due to the lack of comparable studies and analysis approaches, the research first identifies indicators that can show differences in the interactions of users from different countries or language groups. A customized logging format and analysis tool, the Europeana Language Logger (ELL), was developed in order to trace these variables in a digital library. For each investigated variable, differences between country groups are presented and discussed, and country profiles are developed as a tool to visualize the different characteristics in comparison. To generalize the findings from the case study, the individual variables are weighted, on the basis of a cluster analysis, according to how strongly they indicate country and language level differences.
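The ELL schema is not reproduced here; purely as an illustration, a logger tracing country- and language-level variables might append JSON records along the following lines (every field name is an assumption):

```python
import json
import time

def log_interaction(logfile, session_id, action, query,
                    interface_language, country, result_language=None):
    """Append one interaction record as a JSON line. The field set is an
    assumption modeled on the variables discussed above (country, interface
    and result language, action type), not the actual ELL schema."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "session": session_id,
        "action": action,                # e.g. "search", "refine", "view"
        "query": query,
        "interface_language": interface_language,
        "country": country,
        "result_language": result_language,
    }
    logfile.write(json.dumps(record) + "\n")
```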
6.
Tracing the process of self-regulated learning – students' strategic activity in g/nStudy learning environment. Malmberg, J. (Jonna). 27 May 2014.
Abstract
This study focuses on the process of self-regulated learning by investigating in detail how learners engage in self-regulated and strategic learning when studying in the g/nStudy learning environment. The study uses trace methods to enable the recognition of temporal patterns in learners' activity that can signal strategic and self-regulated learning.
The study comprises three data sets. In each data set, g/nStudy technology was used to support and trace self-regulated learning. In the analysis, micro-analytical protocols along with a qualitative approach were favoured to better understand the process of self-regulated and strategic learning in authentic classroom settings.
The results suggested that the specific technological tools used to support strategic and self-regulated learning can also be used methodologically to investigate patterns emerging from students' cognitive regulation activity. Designing specific tools to trace and support self-regulated learning also helps in interpreting how the resulting learning patterns inform self-regulated learning (SRL) theoretically and empirically. Depending on how the tools are used, they can signal the typical patterns existing in the learning processes of students or student groups.
The learning patterns found in the students' cognitive regulation activity varied in terms of how often the patterns emerged in their learning, how the patterns were composed, and when the patterns were used. Moreover, there were intra-individual differences: firstly, in how students with different learning outcomes allocated their study tactic use, and secondly, in how self-regulated learning was used in learning situations that students perceived as challenging.
These findings indicate that log file traces can reveal differences in self-regulated learning between individuals and between groups of learners with similar characteristics, based on the learning patterns they used. However, learning patterns obtained from log file traces can sometimes be complex rather than simple. Therefore, log file traces need to be combined with other situation-specific measurements to better understand how they might elucidate self-regulated learning in the learning context; it is important to recognize in which situations these patterns are used and, especially, what their effect is on the quality of learning.
7.
Mikroprocesorem řízená testovací jednotka / Microprocessor controlled testing unit. Mejzlík, Vladimír. January 2010.
This project deals with the design of an autonomous, microprocessor-controlled testing unit for automatically checking the outputs of a tested device in response to excitation of its inputs. Possible hardware realizations of the testing unit's functional blocks are described. The options are analyzed objectively against the project specification and with regard to the mutual compatibility of the individual blocks, their availability, price, and the desired functionality. The most appropriate solution is implemented using specific circuit elements. The output of the project is a working testing unit and the accompanying product documentation. Control software was written for the unit's microprocessor; it implements an interpreter for executing test algorithms, carries out the test evaluation, and stores a record of the test process to a file. A PC utility was also created that allows tests to be uploaded to the testing unit via USB.
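The unit's firmware is not shown in the abstract. As a loose analogy in Python, a minimal interpreter for an invented two-command test language could execute a script, evaluate the test, and record the run to a file as follows; the command set and the device interface are assumptions.

```python
def run_test(script_lines, device, log_path):
    """Interpret a tiny, invented test language:
         SET <input> <value>      drive an input of the device under test
         EXPECT <output> <value>  compare a device output against a value
    Every step and the overall verdict are recorded in a log file.
    `device` is assumed to expose set_input(name, value) and
    get_output(name); the real unit's interfaces are not documented here."""
    passed = True
    with open(log_path, "w") as log:
        for line in script_lines:
            op, name, value = line.split()
            if op == "SET":
                device.set_input(name, int(value))
                log.write(f"SET {name}={value}\n")
            elif op == "EXPECT":
                actual = device.get_output(name)
                ok = actual == int(value)
                passed = passed and ok
                log.write(f"EXPECT {name}={value} got={actual} "
                          f"{'OK' if ok else 'FAIL'}\n")
        log.write(f"RESULT {'PASS' if passed else 'FAIL'}\n")
    return passed
```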
8.
Evaluation of text classification techniques for log file classification / Utvärdering av textklassificeringstekniker för klassificering av loggfiler. Olin, Per. January 2020.
System log files are filled with logged events, status codes, and other messages. By analyzing the log files, the system's current state can be determined and it can be found out whether something went wrong during execution. Log file analysis has been studied for some time, and recent studies have shown state-of-the-art performance using machine learning techniques. In this thesis, document classification solutions were tested on log files in order to distinguish regular system runs from abnormal system runs. To solve this task, supervised and unsupervised learning methods were combined: Doc2Vec was used to extract document features, and Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) based architectures were applied to the classification task. With these machine learning models and preprocessing techniques, the tested models yielded an F1-score and accuracy above 95% when classifying log files.
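The exact pipeline is not spelled out in the abstract. A minimal sketch of the feature-extraction half using gensim's Doc2Vec is shown below, with a plain logistic-regression classifier standing in for the CNN- and LSTM-based architectures the thesis evaluates; the data layout is assumed.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

def train(log_documents, labels):
    """log_documents: one token list per log file; labels: 1 for an
    abnormal run, 0 for a regular run (layout assumed, not the thesis's)."""
    corpus = [TaggedDocument(tokens, [i]) for i, tokens in enumerate(log_documents)]
    d2v = Doc2Vec(vector_size=100, min_count=2, epochs=40)
    d2v.build_vocab(corpus)
    d2v.train(corpus, total_examples=d2v.corpus_count, epochs=d2v.epochs)

    # Infer a fixed-size vector per document and fit a linear classifier on top.
    features = [d2v.infer_vector(tokens) for tokens in log_documents]
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    return d2v, clf

def classify(d2v, clf, tokens):
    return clf.predict([d2v.infer_vector(tokens)])[0]   # 1 = abnormal run
```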
9.
Evaluating the use of Machine Learning for Fault Detection using Log File Analysis. Tenov, Rosen Nikolaev. January 2021.
During the last years, machine learning has gained more and more popularity in society. It is widely implemented in many fields of computer science, e.g. recognition of speech, video and objects, sentiment analysis, etc. Additionally, modern computer systems and programs generate large files of log data during their execution. These log files usually contain immense amounts of data, which makes processing them manually a struggle. Thus, using machine learning techniques in the analysis of log data for detecting anomalous behavior is of high interest for achieving scalable maintenance of the systems. The purpose of this work was to look into available prominent approaches to implementing machine learning for log fault detection and to evaluate one of them. The paper focused on evaluating DeepLog, an artificial neural network that incorporates Long Short-Term Memory (LSTM). The evaluation included measuring the execution time needed and the precision, recall, accuracy and F1-score achieved by the machine learning fault detection model when using two different log datasets, one from OpenStack and another from the Hadoop Distributed File System (HDFS). The results showed that DeepLog performed better on the OpenStack dataset, achieving high results for all indexes, especially a recall of around 90%, which minimized false negative predictions, an important property in log fault detection. When DeepLog was used with the HDFS dataset, the execution time improved slightly but the accuracy and recall of the model dropped. Future work includes testing the model with other log datasets or other ML models for log fault detection.
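DeepLog's central idea, as published, is to predict the next log key (event template id) from a window of preceding keys and to flag a line as anomalous when its observed key is not among the model's top-g predictions. A minimal PyTorch sketch of that idea follows; the hyperparameters and threshold are illustrative, not the configuration evaluated in the thesis.

```python
import torch
import torch.nn as nn

class NextKeyLSTM(nn.Module):
    """Predict the next log key (event template id) from a window of
    preceding keys; training loop omitted for brevity."""
    def __init__(self, num_keys, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_keys)

    def forward(self, windows):            # windows: (batch, window_len) key ids
        x = windows.unsqueeze(-1).float()  # -> (batch, window_len, 1)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])      # logits over the next key

def is_anomalous(model, window, observed_key, top_g=9):
    """Flag a log line as anomalous if its key is not among the model's
    top-g most likely next keys for the preceding window."""
    with torch.no_grad():
        logits = model(window.unsqueeze(0))  # window: 1-D LongTensor of key ids
        top = torch.topk(logits, top_g, dim=-1).indices[0]
    return observed_key not in top.tolist()
```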
10.
Naive Bayesian Spam Filters for Log File Analysis. Havens, Russel William. 13 July 2011.
As computer system usage grows in our world, system administrators need better visibility into the workings of computer systems, especially when those systems have problems or go down. Most system components, from hardware, through OS, to application server and application, write log files of some sort, be they system-standardized logs such as syslog or application-specific logs. These logs very often contain valuable clues to the nature of system problems and outages, but their verbosity can make them difficult to utilize. Statistical data mining methods could help in filtering and classifying log entries, but these tools are often out of the reach of administrators. This research tests the effectiveness of three off-the-shelf Bayesian spam email filters (SpamAssassin, SpamBayes and Bogofilter) as log entry classifiers. A simple scoring system, the Filter Effectiveness Scale (FES), is proposed and used to compare the filters. The filters are tested in three stages: 1) with the SpamAssassin corpus, with various manipulations made to the messages; 2) for their ability to differentiate two types of log entries taken from actual production systems; and 3) after training on log entries from actual system outages, for their effectiveness in finding similar outages via the log files. For stage 1, messages were tested with normalized bodies, with normalized headers, and with each sentence from each message body treated as a separate, standardized message; the impact of each manipulation is presented. For stages 2 and 3, log entries were tested with digits normalized to zeros and with words chained together to various lengths, using one or all levels of word chains together; the impacts of these manipulations are presented. In each stage, the widely available Bayesian content filters were found to be effective in differentiating log entries. Tables of correct-match percentages and score graphs are presented according to the nature of the tests and the number of entries, and FES scores are assigned to the filters according to the attributes impacting their effectiveness. This research leads to the suggestion that simple, off-the-shelf Bayesian content filters can be used to assist system administrators and log mining systems in sifting log entries to find entries related to known conditions (for which there are example log entries) and to exclude outages which are not related to specific known entry sets.
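The thesis evaluates off-the-shelf filters rather than custom code; to give the flavor of the approach, the sketch below applies a generic multinomial Naive Bayes classifier to log entries, including the digit-to-zero normalization and, via token n-grams, a rough stand-in for the word-chaining manipulation described above. Training data and labels are invented.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def normalize(entry):
    """Normalize digits to zeros so timestamps, pids and counters do not
    fragment the token space (one of the manipulations tested above)."""
    return re.sub(r"\d", "0", entry.lower())

# Invented training data: entries labeled as outage-related (1) or not (0).
train_entries = [
    "kernel: Out of memory: Kill process 4811",
    "sshd[312]: Accepted password for alice",
    "httpd: child process 991 exited with fatal error",
    "cron[17]: job completed successfully",
]
train_labels = [1, 0, 1, 0]

# Token n-grams are a rough stand-in for the word-chaining manipulation.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit([normalize(e) for e in train_entries], train_labels)

print(clf.predict([normalize("kernel: Out of memory: Kill process 77")]))  # [1]
```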