About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Secure logging mechanisms for smart cards

Markantonakis, Constantinos January 2000 (has links)
No description available.
2

A Novel Authentication And Validation Mechanism For Analyzing Syslogs Forensically

Monteiro, Steena D.S. 01 December 2008 (has links)
This research proposes a novel technique for authenticating and validating syslogs for forensic analysis. This technique uses a modification of the Needham-Schroeder protocol, which uses nonces (numbers used only once) and public keys. Syslogs, which were developed from an event-logging perspective and not from an evidence-sustaining one, are system treasure maps that chart out and pinpoint attacks and attack attempts. Over the past few years, research on securing syslogs has yielded enhanced syslog protocols that focus on tamper prevention and detection. However, many of these protocols, though efficient from a security perspective, are inadequate when forensics comes into play. From a legal perspective, any kind of evidence found at a crime scene needs to be validated. In addition, any digital forensic evidence presented in court needs to be admissible, authentic, believable, and reliable. Currently, a patchy log on the server side and client side cannot be considered formal authentication of a wrongdoer. This work presents a method that ties together, authenticates, and validates all the entities involved in the crime scene: the user using the application, the system that is being used, and the application being used on the system by the user. This means that instead of merely transmitting the header and the message, which is the standard syslog protocol format, the syslog entry along with the user fingerprint, application fingerprint, and system fingerprint are transmitted to the logging server. The assignment of digital fingerprints and the addition of a challenge-response mechanism to the underlying syslogging mechanism aim to validate generated syslogs forensically.
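A minimal sketch of the idea in Python, with hypothetical fingerprint sources and a simplified nonce exchange standing in for the thesis's Needham-Schroeder-based protocol (which additionally protects the nonces with public-key encryption):

```python
import hashlib
import json
import os
import time

def fingerprint(data: bytes) -> str:
    """Reduce an entity's identifying data to a fixed-size digest."""
    return hashlib.sha256(data).hexdigest()

def build_forensic_entry(message: str, user_id: str, app_binary: bytes,
                         system_info: str) -> dict:
    """Extend a plain syslog message with user, application, and system
    fingerprints so all three entities are tied to the log entry."""
    return {
        "timestamp": time.time(),
        "message": message,  # the standard header-plus-message payload
        "user_fp": fingerprint(user_id.encode()),
        "app_fp": fingerprint(app_binary),
        "system_fp": fingerprint(system_info.encode()),
    }

def issue_challenge(entry: dict) -> str:
    """Logging server attaches a fresh nonce the client must answer,
    a simplified stand-in for the Needham-Schroeder exchange."""
    entry["nonce"] = os.urandom(16).hex()
    return entry["nonce"]

def respond(entry: dict, nonce: str) -> str:
    """Client's response: a digest binding the entry contents to the nonce."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return fingerprint(payload + nonce.encode())
```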
3

Automatic Status Logger For a Gas Turbine

Jonas, Susanne January 2007 (has links)
Siemens Industrial Turbo Machinery AB manufactures and commissions, among other things, gas turbines, steam turbines, compressors, and turn-key power plants, and carries out service on components for heat and power production. Siemens also performs research and development, marketing, sales, and installation of turbines and complete power plants, as well as service and refurbishment. Our thesis for the engineering degree is to develop an automatic status logger to be used as a tool for checking the status of the machine before and after technical service on gas turbines. Operational disturbances are registered in a structured way in order to make it possible to follow up on the reliability of the application. An automatic log function has been developed that is activated at start, stop, and shutdown of the turbine system. Log files are created automatically and are named with the event type, date, and time. The files contain the timestamp, name, measured value, and unit of each signal to be analyzed by the support engineers, who can evaluate the cause of a problem using the log files.
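A minimal sketch of such an event-triggered logger; the signal names, units, and CSV layout are illustrative assumptions, not the Siemens format:

```python
import csv
from datetime import datetime

def log_event(event_type: str, signals: dict[str, tuple[float, str]]) -> str:
    """Create a log file named after the event type, date, and time,
    recording a timestamp plus each signal's name, value, and unit."""
    now = datetime.now()
    filename = f"{event_type}_{now:%Y-%m-%d_%H-%M-%S}.csv"
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "signal", "value", "unit"])
        for name, (value, unit) in signals.items():
            writer.writerow([now.isoformat(), name, value, unit])
    return filename

# Hypothetical readings captured when the turbine stops.
log_event("stop", {
    "exhaust_temperature": (412.5, "degC"),
    "shaft_speed": (6150.0, "rpm"),
    "lube_oil_pressure": (2.1, "bar"),
})
```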
4

Aktiv felhantering av loggdata / Active error handling of log data

Åhlander, Mattias January 2020 (has links)
The main goal of this project has been to investigate how a message queue can be used to handle error codes in log files more actively. The project followed the Design Science Research Methodology for the development and implementation of the solution. A model of the transaction system was developed and emulated in newly developed applications. Two experiments were performed: the first tested a longer run with intervals between messages, and the second measured how long it takes to send 20 000 messages. The first experiment showed that the message queue was able to handle all messages sent over two hours. The second experiment showed that the system took 14 minutes and 45 seconds to send and handle all messages, a high throughput of 22.5 messages per second, without any messages being lost. The implemented consumer application received all messages and successfully counted the number of error codes in the received data. The experiments that were carried out proved that a message queue can be implemented to handle error codes in log files more actively. Future work may include an evaluation of the security of the system, comparisons of performance against other message queues, running the experiments on more powerful computers, and an implementation of machine learning to classify the log data.
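A minimal model of the producer/consumer setup, using Python's standard-library queue in place of the (unnamed) message broker from the thesis; the log format and error codes are invented for illustration:

```python
import threading
from collections import Counter
from queue import Queue

log_queue = Queue()
error_counts = Counter()

def producer(lines):
    """Publish each log line to the queue as it is read."""
    for line in lines:
        log_queue.put(line)
    log_queue.put(None)  # sentinel: no more messages

def consumer():
    """Count error codes as messages arrive, instead of parsing
    the whole log file after the fact."""
    while (line := log_queue.get()) is not None:
        if "ERROR" in line:
            error_counts[line.split()[-1]] += 1  # assume the code ends the line

t = threading.Thread(target=consumer)
t.start()
producer(["2020-05-04 ERROR E1001", "2020-05-04 INFO ok", "2020-05-04 ERROR E1002"])
t.join()
print(error_counts)  # Counter({'E1001': 1, 'E1002': 1})
```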
5

ParCam : Applikation till Android för tolkning av parkeringsskyltar / ParCam: An Android application for interpreting parking signs

Forsberg, Tomas January 2020 (has links)
It is not always easy to interpret a parking sign accurately. The driver is expected to keep track of what every road sign, direction, prohibition, and amendment means, both by itself and in combination with the others. In addition, the driver must also keep track of the time, date, whether it is a holiday, the week number, etc. This can make the driver unsure of the rules, or interpret the rules incorrectly, which can lead to hefty fines or even a towed vehicle. By developing a mobile application that can analyze a photograph of a parking sign and quickly give the driver the verdict, the interpretation process can be made easy. The purpose of this study has been to examine available technology within image and text analysis and then develop a prototype of an Android application that can interpret a photograph of a parking sign and quickly give the correct verdict with the help of said technology. The constructed prototype was evaluated partly by user tests, to evaluate the application's usability, and partly by functionality tests, to evaluate the accuracy of the analysis process. Based on the results from the tests, a conclusion was drawn that the application gave a very informative and clear verdict, which was correct most of the time, but ran into problems with certain signs and under more demanding environmental circumstances. The tests also showed that the interface was perceived as easy to understand and use, though less interaction needed from the user was desired. There is great potential for future development of ParCam, where the focus will be on increasing the automation of the process.
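A minimal sketch of the rule-evaluation step only, assuming OCR has already extracted the sign's hours; the sign model is a heavily simplified assumption (real signs combine many more modifiers):

```python
from datetime import datetime

def parking_allowed(restricted_hours: tuple[int, int], when: datetime) -> bool:
    """Return True if parking is permitted at `when`, for a sign whose
    restriction applies between the given hours on weekdays only."""
    start, end = restricted_hours
    if when.weekday() >= 5:  # simplification: no restriction on weekends
        return True
    return not (start <= when.hour < end)

# An "8-18" style restriction, checked on a Tuesday at 14:00.
print(parking_allowed((8, 18), datetime(2020, 5, 5, 14, 0)))  # False
```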
6

Anomaly Detection in Log Files Using Machine Learning

Björnerud, Philip January 2021 (has links)
Logs generated by applications, devices, and servers contain information that can be used to determine the health of the system. Manual inspection of logs is important, for example during upgrades, to determine whether the upgrade and data migration were successful. However, manual testing is not reliable enough, and manual inspection of logs is tedious and time-consuming. In this thesis, we propose to use the machine learning techniques K-means and DBSCAN to find anomalous sequences in log files. This research also investigated two different data representation techniques: feature vector representation and IDF representation. Evaluation metrics such as F1 score, recall, and precision were used to analyze the performance of the applied machine learning algorithms. The study found that the algorithms differ greatly in their detection of anomalies: they performed better at finding the different kinds of anomalous sequences than at finding the total number of them. The result of the study could help the user find anomalous sequences without manually inspecting the log file.
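A minimal sketch of one such pipeline (IDF-style features clustered with DBSCAN, where unclustered points count as anomalies); the toy sequences and parameters are illustrative, not the thesis's data:

```python
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

# Each "document" is one log sequence, e.g. all lines of one session.
sequences = [
    "session start auth ok task done session end",
    "session start auth ok task done session end",
    "session start auth fail retry auth fail abort",
]

# IDF-style representation of each sequence.
X = TfidfVectorizer().fit_transform(sequences)

# DBSCAN labels points in low-density regions as -1, i.e. anomalies.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print([s for s, label in zip(sequences, labels) if label == -1])
```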
7

Anomaly Detection in Log Files Using Machine Learning Techniques

Mandagondi, Lakshmi Geethanjali January 2021 (has links)
Context: Log files are produced in most larger computer systems today. They contain highly valuable information about the behavior of the system and are therefore consulted fairly often to analyze behavioral aspects of the system. Because of the very high number of log entries produced in some systems, however, it is extremely difficult to seek out relevant information in these files, so computer-based log analysis techniques are indispensable for finding relevant data in log files. Objectives: The major problem is to find important events in log files. Events in the test suite such as connection errors or disruptions are not considered abnormal events; rather, the events that cause system interruption must be considered abnormal. The goal is to use machine learning techniques to "learn" the "expected" behavior of a particular test suite. This means that the system must learn to distinguish, based on previous sequences, between a log file that has an anomaly and one that does not. Methods: Various algorithms are implemented and compared to other existing algorithms based on their performance. The algorithms are executed on a parsed set of labeled log files and are evaluated by analyzing the anomalous events contained in the log files in an experiment. The algorithms used were Local Outlier Factor, Random Forest, and Term Frequency-Inverse Document Frequency. We then cluster using K-means and PCA to gain valuable insights from the data by observing groups of data points to find the anomalous events. Results: The results show that the Term Frequency-Inverse Document Frequency method works better at finding the anomalous events in the data than the other two approaches, based on an experiment which is discussed in detail. Conclusions: The results will help developers find the anomalous events without manually looking at the log file row by row. The model provides the events that behave differently from the rest of the events in the log and cause the system to interrupt.
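A minimal sketch combining two of the named techniques, TF-IDF features scored by Local Outlier Factor; the events and parameters are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import LocalOutlierFactor

events = [
    "connection established to node-1",
    "connection established to node-2",
    "connection established to node-3",
    "kernel panic: system interrupted",
]

# TF-IDF features for each event line.
X = TfidfVectorizer().fit_transform(events).toarray()

# LOF flags points whose local density is much lower than their neighbors'.
flags = LocalOutlierFactor(n_neighbors=2).fit_predict(X)  # -1 marks an outlier
print([e for e, flag in zip(events, flags) if flag == -1])
```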
8

System Integration and Verification Verdict Automation using Machine Learning

Kommareddy, Anthony January 2023 (has links)
Context: The volume of log files is massive, as they contain vital information about the application's behavior; they map out broad parts of the application, allowing us to understand how every component behaves, whether normally or abnormally. As a result, it is critical to examine the log files to see if the system is deviating from its usual path. Because they are so large, it is difficult for the developer to identify each and every error. To overcome this problem, we developed a machine-learning model to detect types of errors in log files with minimal manual effort.  Objectives: The main objective is to discover errors in log files throughout the testing and production phases so that the application behaves properly. We intend to detect errors by training the model with relevant datasets and teaching it to differentiate between message types like error, debug, info, fail, etc., produced when the application is tested or operated during the production phase.  Methods: We employ machine learning techniques like SVM and multinomial naive Bayes, as well as long short-term memory (LSTM) networks, a sort of recurrent neural network capable of learning order dependency in sequence prediction, which is appropriate for our use case. These techniques are used to determine whether errors such as assert, fail, error, and warning were generated. We then used verdict-generation machine learning techniques to generate the verdict from the error log messages.  Results: The results indicated that, instead of manually detecting errors, we can easily discover and fix them by integrating machine learning and classification methods, making it easier to move the application to production.  Conclusion: The results will assist developers in identifying errors without having to manually examine the log file row by row. This approach has the potential to reduce the need for additional human effort to examine log files for errors and can determine the type of error that occurred in the specific row that caused the application to diverge from its typical flow.
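A minimal sketch of the classification step using one of the named techniques, multinomial naive Bayes over TF-IDF features; the training lines and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: log lines labeled with their message type.
lines = [
    "assertion failed in module x",
    "test case failed unexpectedly",
    "null pointer error in handler",
    "warning: disk usage above threshold",
]
labels = ["assert", "fail", "error", "warning"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(lines, labels)

# Classify a new line; a verdict could then be generated from such labels.
print(model.predict(["handler raised an error on null input"]))  # likely ['error']
```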
9

Προηγμένες τεχνικές και αλγόριθμοι εξόρυξης γνώσης για την προσωποποίηση της πρόσβασης σε δικτυακούς τόπους / Advanced techniques and algorithms of knowledge mining from Web Sites

Γιαννακούδη, Θεοδούλα 16 May 2007 (has links)
Web personalization is a domain that has gained great momentum not only in the research area, where many research units have addressed the problem from different perspectives, but also in the industrial area, where a variety of modules for the personalization process is available. The objective is to explore the information hidden in the web server log files and discover the interactions between web site visitors and web pages. This information can be further exploited for web site optimization, ensuring more effective navigation for the user and client retention in the industrial case. A primary step before personalization is web usage mining, where the knowledge hidden in the log files is revealed. Web usage mining is the procedure in which the information stored in the web server logs is processed by applying statistical and data mining techniques, such as clustering, association rule discovery, classification, and sequential pattern discovery, in order to reveal useful patterns that can be further analyzed. Recently, there has been an effort to incorporate web content into the web usage mining process in order to enhance the effectiveness of personalization. This thesis focuses on the domain of knowledge mining for web site usage and how this procedure can benefit from the attributes of the semantic web. Initially, techniques and algorithms recently proposed in the field of web usage mining are presented. Then the role of content in this process is introduced, and two relevant works are presented: a usage mining technique based on the PLSA model, which may integrate attributes of the site content, and a personalization system that uses the site content to enhance a recommendation engine. After analyzing the usage mining domain theoretically, a new system is proposed: ORGAN, named after Ontology-oRiented usaGe ANalysis. ORGAN concerns the stage of log file analysis and knowledge mining for web site usage based on the semantic attributes of the web site. The semantic attributes have been extracted from the web site pages using data mining techniques and annotated by an OWL ontology. ORGAN provides an interface for submitting queries concerning the visitation and the semantics of the web site pages, exploiting the knowledge for the site as it is derived from the ontology. The design, development, and experimental evaluation of the system are described in detail.
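A minimal sketch of the kind of ontology-backed query interface described, using rdflib; the namespace, properties, and data are hypothetical, since the abstract does not give the ORGAN schema:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace standing in for the site's OWL ontology.
ORG = Namespace("http://example.org/organ#")

g = Graph()
g.add((ORG.page1, RDF.type, ORG.WebPage))
g.add((ORG.page1, ORG.hasTopic, Literal("turbines")))
g.add((ORG.page1, ORG.visitCount, Literal(742)))

# "Which pages about turbines are heavily visited?" as a SPARQL query.
results = g.query("""
    PREFIX org: <http://example.org/organ#>
    SELECT ?page ?visits WHERE {
        ?page a org:WebPage ;
              org:hasTopic "turbines" ;
              org:visitCount ?visits .
        FILTER (?visits > 100)
    }
""")
for page, visits in results:
    print(page, visits)
```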
