1

A Study of Log Patternization for Linux-based Systems

Hung, Jui-lin 30 June 2010 (has links)
With the rapid development of Internet technology and the extensive use of broadband networks, network security issues are increasing. To deal with these complex issues, network administrators adopt firewalls, intrusion detection systems, and intrusion prevention systems; in addition, the collection and analysis of logs are also very important. Through log analysis, administrators can understand the error messages generated by the system and the abnormal behavior of external connections, and can develop corresponding security policies for the security tools they use. With current log analyzers, beyond the default rules, administrators have to spend much time reviewing the syslog of their system in detail to set corresponding rules, and each analyzer has its own unique rule definitions. The purpose of this study is to transform tens of thousands of logs into a small number of valuable patterns, classify these patterns into abnormal and normal ones, and summarize the logs corresponding to the listed patterns to assist administrators in their review. In this study, we adopt the concept of string similarity comparison and compare each log against the others to find all patterns, which are presented as regular expressions. Experimental evaluation shows that this study can indeed analyze and generate all log patterns automatically, and that these patterns can be applied in a practical network security tool.
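A minimal sketch of the similarity-based patternization the abstract describes, in Python. The token-level similarity measure, the 0.7 threshold, and the function names are illustrative assumptions, not taken from the thesis:

```python
import re

def similarity(a: str, b: str) -> float:
    """Token-level similarity of two log lines: the fraction of aligned
    tokens that match, normalized by the longer line so that length
    differences lower the score."""
    ta, tb = a.split(), b.split()
    if not ta or not tb:
        return 0.0
    matches = sum(1 for x, y in zip(ta, tb) if x == y)
    return matches / max(len(ta), len(tb))

def patternize(logs, threshold=0.7):
    """Group log lines by similarity, then collapse each group into a
    regular expression: constant tokens are kept verbatim, variable
    tokens (PIDs, usernames, IP addresses) become a wildcard."""
    groups = []
    for line in logs:
        for group in groups:
            if similarity(line, group[0]) >= threshold:
                group.append(line)
                break
        else:
            groups.append([line])
    patterns = []
    for group in groups:
        pattern = []
        # zip truncates to the shortest line in the group, which is
        # acceptable here since grouped lines have similar structure.
        for column in zip(*(l.split() for l in group)):
            pattern.append(re.escape(column[0]) if len(set(column)) == 1
                           else r"\S+")
        patterns.append(" ".join(pattern))
    return patterns

logs = [
    "sshd[1021]: Failed password for root from 10.0.0.1",
    "sshd[2347]: Failed password for admin from 10.0.0.9",
]
print(patternize(logs))  # ['\\S+ Failed password for \\S+ from \\S+']
```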
2

The Properties of Property Alignment on the Semantic Web

Cheatham, Michelle Andreen 25 August 2014 (has links)
No description available.
3

Efficient number similarity check

Simonsson, David January 2024 (has links)
Efficiency in algorithms is important, especially in terms of execution time, as it directly impacts user experience. For example, when a customer visits a website, even a one-second delay can significantly reduce their patience and increase the likelihood of them abandoning the site. This principle applies to search algorithms as well. This project implements a time-efficient tree-based search algorithm that focuses on finding similarities between the search input and stored data. The objective is to achieve an execution time as close to O(1) as possible, regardless of the data size. The implemented algorithm is compared with a linear search algorithm, whose execution time grows with the data size. By measuring the execution times of both search methods, the project aims to demonstrate the superiority of the tree-based search algorithm in terms of time efficiency.
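A sketch of the tree-based idea in Python, not the thesis's exact algorithm: a digit trie whose exact-lookup cost depends only on the number of digits, plus a bounded-mismatch walk for similarity. The class name and the max_mismatches parameter are illustrative:

```python
class DigitTrie:
    """Digit-by-digit trie over number strings. Exact lookup cost
    depends on the number of digits, not on how many numbers are
    stored, so for fixed-length numbers it approaches constant time."""
    def __init__(self):
        self.root = {}

    def insert(self, number: str):
        node = self.root
        for d in number:
            node = node.setdefault(d, {})
        node["$"] = True  # end-of-number marker

    def search_similar(self, query: str, max_mismatches: int = 1):
        """Collect stored numbers (of the same length as the query)
        within max_mismatches digit substitutions of it. Cost grows
        with the mismatch budget, not with the data size."""
        results = []

        def walk(node, i, mismatches, prefix):
            if mismatches > max_mismatches:
                return  # prune this branch early
            if i == len(query):
                if "$" in node:
                    results.append(prefix)
                return
            for d, child in node.items():
                if d == "$":
                    continue
                walk(child, i + 1, mismatches + (d != query[i]), prefix + d)

        walk(self.root, 0, 0, "")
        return results

trie = DigitTrie()
for n in ("0701234567", "0701234568", "0799999999"):
    trie.insert(n)
print(trie.search_similar("0701234569"))  # ['0701234567', '0701234568']
```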
4

Cuckoo Filter Probabilistic Password Similarity Detection

Degerfeldt, Anton January 2024 (has links)
Authentication in digital systems is still predominantly done through passwords. These passwords should simultaneously be easy to remember, unique, and change over time. Humans, however, have a limited ability to remember complex passwords. To make this easier, users often adopt schemes where a base word is only slightly modified. While such schemes can easily fulfil basic password requirements based on length or the symbols used, they can leave users vulnerable: leaked passwords, even expired ones, can be exploited by malicious actors, and a single compromised account can cascade to multiple services. We propose a v-gram-based approach to detecting similarity with a set of passwords, which could be used to improve user password habits. The proposed scheme utilizes a Cuckoo Filter, which provides inherent obfuscation of the stored passwords and allows encryption techniques to be integrated natively. The system could, for example, be embedded in a password manager to warn users when a new password is too similar to a previous one. This work analyses several aspects of the system in order to assess its suitability. A Cuckoo Filter using a single-byte fingerprint for each v-gram can achieve load factors exceeding 95% while maintaining a false positive rate of less than 3%. The computational cost of guessing a password based on the information stored within the filter is relatively low: while the filter's false positive rate and the size of the alphabet have an impact, the cost grows only logarithmically with them, and this guessing attack is considered a significant vulnerability. Nevertheless, the proposed system can be a viable alternative for detecting similarity between passwords, if configured correctly, and could be used to guide users toward more secure password habits.
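A minimal sketch of the v-gram plus Cuckoo Filter idea in Python, under stated assumptions: v = 3, a SHA-256-derived hash, single-byte fingerprints, and a power-of-two bucket count; the encryption layer the abstract mentions is omitted:

```python
import hashlib
import random

def _hash(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

class CuckooFilter:
    """Minimal cuckoo filter with 1-byte fingerprints. num_buckets
    must be a power of two so the XOR bucket relation is symmetric."""
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item: str) -> int:
        return _hash(item.encode()) % 255 + 1  # one byte, never zero

    def _indexes(self, item: str, fp: int):
        i1 = _hash(item.encode()) % self.num_buckets
        i2 = (i1 ^ _hash(bytes([fp]))) % self.num_buckets
        return i1, i2

    def insert(self, item: str) -> bool:
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        # Both buckets full: evict a resident fingerprint and relocate
        # it to its alternate bucket, repeating up to max_kicks times.
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ _hash(bytes([fp]))) % self.num_buckets
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # filter is too full

    def contains(self, item: str) -> bool:
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

def vgrams(password: str, v: int = 3):
    """Split a password into overlapping v-grams."""
    return {password[i:i + v] for i in range(len(password) - v + 1)}

def similarity(candidate: str, filt: CuckooFilter, v: int = 3) -> float:
    """Fraction of the candidate's v-grams present in the filter; a
    high fraction suggests similarity to a stored password."""
    grams = vgrams(candidate, v)
    if not grams:
        return 0.0
    return sum(filt.contains(g) for g in grams) / len(grams)

f = CuckooFilter()
for g in vgrams("hunter2!"):
    f.insert(g)
print(similarity("hunter2?", f))       # high: most v-grams shared
print(similarity("correct horse", f))  # low
```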
5

Efficient fuzzy type-ahead search on big data using a ranked trie data structure

Bergman, John January 2018 (has links)
The efficiency of modern search engines depends on how well they present typo-corrected results to a user while typing. So-called fuzzy type-ahead search combines fuzzy string matching and search-as-you-type functionality, and creates a powerful tool for exploring indexed data. Current fuzzy type-ahead search algorithms work well on small data sets, but for big data of social networking services such as Facebook, e-commerce sites such as Amazon, or media streaming services such as YouTube, responsive fuzzy type-ahead search remains a great challenge. This thesis describes a method that enables responsive type-ahead search combined with fuzzy string matching on big data by keeping the search time optimal for human interaction at the expense of lower accuracy for less popular records when a query contains typos. This makes the method effective for e-commerce and media services where the popularity of search terms is a result of human behaviour and thus often follows a power-law distribution.
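A sketch of the ranked-trie core in Python; the fuzzy (typo-tolerant) branching the thesis adds is omitted here. Caching the k most popular completions at every node makes a prefix query cost O(|prefix| + k), independent of the number of stored records; k = 3 and the names are illustrative:

```python
class RankedTrie:
    """Trie whose nodes each cache their k most popular completions."""
    def __init__(self, k=3):
        self.k = k
        self.root = {"children": {}, "top": []}  # top: (-popularity, term)

    def insert(self, term: str, popularity: int):
        node = self.root
        self._update_top(node, term, popularity)
        for ch in term:
            node = node["children"].setdefault(
                ch, {"children": {}, "top": []})
            self._update_top(node, term, popularity)

    def _update_top(self, node, term, popularity):
        # Keep the node's cache sorted by descending popularity.
        node["top"].append((-popularity, term))
        node["top"].sort()
        del node["top"][self.k:]

    def complete(self, prefix: str):
        """Return the k most popular stored terms under this prefix."""
        node = self.root
        for ch in prefix:
            if ch not in node["children"]:
                return []  # the full method would branch fuzzily here
            node = node["children"][ch]
        return [term for _, term in node["top"]]

trie = RankedTrie(k=3)
for term, popularity in [("youtube", 950), ("young", 400), ("yoga", 700)]:
    trie.insert(term, popularity)
print(trie.complete("yo"))  # ['youtube', 'yoga', 'young']
```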
6

Large-scale semi-supervised learning for natural language processing

Bergsma, Shane A 11 1900 (has links)
Natural Language Processing (NLP) develops computational approaches to processing language data. Supervised machine learning has become the dominant methodology of modern NLP. The performance of a supervised NLP system crucially depends on the amount of data available for training. In the standard supervised framework, if a sequence of words was not encountered in the training set, the system can only guess at its label at test time. The cost of producing labeled training examples is a bottleneck for current NLP technology. On the other hand, a vast quantity of unlabeled data is freely available. This dissertation proposes effective, efficient, versatile methodologies for 1) extracting useful information from very large (potentially web-scale) volumes of unlabeled data and 2) combining such information with standard supervised machine learning for NLP. We demonstrate novel ways to exploit unlabeled data, we scale these approaches to make use of all the text on the web, and we show improvements on a variety of challenging NLP tasks. This combination of learning from both labeled and unlabeled data is often referred to as semi-supervised learning.

Although lacking manually-provided labels, the statistics of unlabeled patterns can often distinguish the correct label for an ambiguous test instance. In the first part of this dissertation, we propose to use the counts of unlabeled patterns as features in supervised classifiers, with these classifiers trained on varying amounts of labeled data. We propose a general approach for integrating information from multiple, overlapping sequences of context for lexical disambiguation problems. We also show how standard machine learning algorithms can be modified to incorporate a particular kind of prior knowledge: knowledge of effective weightings for count-based features. We also evaluate performance within and across domains for two generation and two analysis tasks, assessing the impact of combining web-scale counts with conventional features.

In the second part of this dissertation, rather than using the aggregate statistics as features, we propose to use them to generate labeled training examples. By automatically labeling a large number of examples, we can train powerful discriminative models, leveraging fine-grained features of input words.
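A toy sketch of the first part's core idea in Python: log-counts of candidate-filled context patterns from unlabeled text serve as features for lexical disambiguation. The pattern, counts, weights, and function names here are hypothetical illustrations, not the dissertation's actual features or data:

```python
from math import log

def count_features(context_counts):
    """Log-transformed pattern counts; the log keeps web-scale counts
    on a sensible scale for a linear classifier."""
    return [log(c + 1) for c in context_counts]

def disambiguate(candidates, get_count, contexts, weights):
    """Score each candidate by a weighted sum of log-counts of the
    candidate inserted into each context pattern, and return the best.
    get_count(pattern) stands in for a web-scale n-gram lookup."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        feats = count_features([get_count(c.format(cand)) for c in contexts])
        score = sum(w * f for w, f in zip(weights, feats))
        if score > best_score:
            best, best_score = cand, score
    return best

# Hypothetical corpus counts for a confusable-word decision.
counts = {"went to their house": 120000, "went to there house": 800}
print(disambiguate(["their", "there"],
                   lambda p: counts.get(p, 0),
                   ["went to {} house"],
                   weights=[1.0]))  # -> their
```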
7

Large-scale semi-supervised learning for natural language processing

Bergsma, Shane A Unknown Date
No description available.
