71

Design And Analysis Of Hash Functions

Kocak, Onur 01 July 2009
Hash functions are cryptographic tools used in various applications such as digital signatures, message integrity checking, password storage and random number generation. These cryptographic primitives were first constructed using modular arithmetic operations, which were popular at that time because of public-key cryptography. Later, in 1989, Merkle and Damgård independently proposed an iterative construction method. This method was easy to implement and had a security proof. MD-4 was the first hash function designed using the Merkle-Damgård construction; MD-5 and the SHA algorithms followed. Improvements in construction methods brought corresponding improvements and variations in cryptanalytic methods. The series of attacks by Wang et al. on the MD and SHA families threatens the security of these hash functions. Moreover, as the standard hashing algorithm SHA-2 has a structure similar to the mentioned hash functions, its security also became questionable. Therefore, NIST announced a public contest to select a new algorithm as the new hash standard, SHA-3. The design and analysis of hash functions thus became one of the most interesting topics in cryptography. A considerable number of algorithms were designed for the competition, tested against possible attacks, and proposed to NIST. After this step, a worldwide effort began to check the security of the candidate algorithms, which will continue until the fourth quarter of 2011 to contribute to the selection process. This thesis presents two important aspects of hash functions: design and analysis. The design of hash functions is investigated under two subtopics: compression functions and construction methods. Compression functions are the core of hashing algorithms, and most of the design effort goes into the compression function. Moreover, for Merkle-Damgård hash functions, the security of the algorithm depends on the security of the compression function.
The construction method is also an important design parameter that determines the strength of the algorithm, and the construction method and compression function should be consistent with each other. On the other hand, when designing a hash function, analysis is as important as choosing the design parameters. Using known attacks, possible weaknesses in the algorithm can be revealed and the algorithm can be strengthened; likewise, the security of a hash function can be examined using cryptanalytic methods. The analysis part of the thesis consists of various generic attacks selected to apply to most hash functions, including the attacks that NIST expects the new standard algorithm to resist.
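The iterative construction described above can be sketched in a few lines. The following is a minimal illustration of the Merkle-Damgård idea, not any standardized hash: it uses length-strengthening padding and, purely as a stand-in compression function, SHA-256 over the chaining state concatenated with each block.

```python
import hashlib

def merkle_damgard(message: bytes, block_size: int = 64) -> bytes:
    """Iterate a compression function over fixed-size message blocks."""
    # Length-strengthening padding: append 0x80, zero bytes, then the
    # 8-byte big-endian message length, so the total is a block multiple.
    length = len(message).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % block_size) + length

    state = b"\x00" * 32  # fixed initial value (IV)
    for i in range(0, len(padded), block_size):
        block = padded[i:i + block_size]
        # Stand-in compression function: hash the state with the block.
        state = hashlib.sha256(state + block).digest()
    return state
```

The security proof referenced in the abstract is exactly about this shape: a collision in the iterated hash implies a collision in the compression function, provided the padding encodes the message length.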
72

A Web Service Based Trust And Reputation System For Transitory Collaboration Formation In Supply Chains

Tasyurt, Ibrahim 01 September 2009
Today, advancements in information technologies have increased the significance of electronic business in the world. Besides the numerous advantages provided by these advancements, competition among enterprises has also increased. In this competitive environment, companies have to access information faster and respond to changes quickly. In a supply chain, it is highly possible that one of the partners fails to provide its services. When such exceptional cases occur, the remaining parties have to establish transitory collaborations to replace the missing partner promptly, in order not to suffer economically from the deficiency. Companies need to know the competences and capabilities of their prospective business partners before establishing partnerships. Furthermore, the reputations of candidate partners have to be known in order to avoid regrettable partnerships. In this thesis, we have developed a trust and reputation model that can be used over supply chains to determine and exploit the reputation of providers during transitory collaboration formation. The trust model takes the behaviors of both providers and consumers into account and combines multiple criteria into a single reputation value. Experimental results show that our model provides a robust and reliable reputation mechanism addressing a number of issues that have not been covered in related studies. In addition, an implementation of the model has been realized as a Web application whose functionalities are exposed as Web Services. The interoperability of the Web Services has been ensured through standard GS1 XML documents, which are utilized and extended within the scope of the thesis. Furthermore, client interaction is provided through Web-based user interfaces and REST services.
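Combining multiple rating criteria into a single reputation value can be pictured as a weighted average of per-criterion averages. The sketch below is only illustrative: the criterion names and the weighting scheme are hypothetical, not the model actually proposed in the thesis.

```python
def aggregate_reputation(ratings, weights):
    """Combine per-criterion rating averages into one value in [0, 1].

    ratings: list of dicts, one per past transaction, criterion -> score.
    weights: dict, criterion -> relative importance.
    """
    totals, counts = {}, {}
    for transaction in ratings:
        for criterion, score in transaction.items():
            totals[criterion] = totals.get(criterion, 0.0) + score
            counts[criterion] = counts.get(criterion, 0) + 1
    num = den = 0.0
    for criterion, weight in weights.items():
        if counts.get(criterion):  # ignore criteria with no observations
            num += weight * totals[criterion] / counts[criterion]
            den += weight
    return num / den if den else 0.0
```

A real model, as the abstract notes, would also weigh the behavior of the raters themselves, not just the raw scores.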
73

Ontology-based Spatio-temporal Video Management System

Simsek, Atakan 01 September 2009
In this thesis, a system called the Ontology-Based Spatio-Temporal Video Management System (OntoVMS) is developed to supply a framework for semantic data modeling and querying in video files. OntoVMS supports semantic data modeling, which can be divided into concept modeling, spatio-temporal relation modeling and trajectory data modeling. The system uses the Rhizomik MPEG-7 Ontology as its core ontology; moreover, its ontology expression capability is extended by automatically attaching domain ontologies. OntoVMS supports querying of all spatial relations, such as directional relations (north, south, ...), mixed directional relations (northeast, southwest, ...), distance relations (near, far), positional relations (above, below, ...) and topological relations (inside, touch, ...); temporal relations such as starts, equal and precedes; and trajectories of objects of interest. In order to enhance the querying capability, compound queries are added to the framework so that the user can combine simple queries using the "(", ")", "AND" and "OR" operators. Finally, the use of the system is demonstrated with a semi-automatic face annotation tool.
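The compound-query mechanism can be pictured as boolean set operations over the video frames matched by each simple query. The sketch below is a hypothetical illustration, not OntoVMS's internal representation: queries form a tree whose leaves name simple spatio-temporal queries and whose internal nodes apply AND (intersection) or OR (union).

```python
def evaluate(query, results):
    """Evaluate a compound query tree over precomputed simple-query results.

    query: ("leaf", name) | ("AND", left, right) | ("OR", left, right)
    results: dict mapping each simple-query name to its set of matching frames.
    """
    op = query[0]
    if op == "leaf":
        return results[query[1]]
    left = evaluate(query[1], results)
    right = evaluate(query[2], results)
    return left & right if op == "AND" else left | right
```

Parentheses in the user-facing syntax simply control how this tree is built.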
74

Prediction Of Protein-protein Interactions From Sequence Using Evolutionary Relations Of Proteins And Species

Guney, Tacettin Dogacan 01 October 2009
Prediction of protein-protein interactions is an important part of understanding the biological processes in a living cell. There are completely sequenced organisms that do not yet have experimentally verified protein-protein interaction networks. For such organisms, we generally cannot use a supervised method, in which a portion of the protein-protein interaction network is used as the training set. Furthermore, for newly sequenced organisms, many other data sources, such as gene expression data and Gene Ontology annotations, that are used to identify protein-protein interaction networks may not be available. In this thesis work, our aim is to identify and cluster likely protein-protein interaction pairs using only protein sequences and evolutionary information. We use a protein's phylogenetic profile because the co-evolutionary pressure hypothesis suggests that proteins with similar phylogenetic profiles are likely to interact. We also divide the phylogenetic profile into smaller profiles based on evolutionary lines. These divided profiles are then used to score the similarity between all possible protein pairs. Since not all profile groups have the same number of elements, it is difficult to assess the similarity between such pairs. We show that many commonly used measures do not work well and that the end result greatly depends on the type of similarity measure used. We also introduce a novel similarity measure. The resulting dense putative interaction network contains many false-positive interactions; therefore, we apply the Markov Clustering algorithm to cluster the network and filter out the weaker edges. The end result is a set of clusters in which the proteins are likely to be functionally linked and to interact.
While this method does not perform as well as supervised methods, it has the advantage of not requiring a training set and of working only with sequence data and evolutionary information. It can therefore be used as a first step in identifying protein-protein interactions in newly sequenced organisms.
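One simple baseline for comparing phylogenetic profiles split by evolutionary lines is an averaged per-line Jaccard similarity. This is only a baseline sketch for intuition; the thesis introduces its own, different measure precisely because such standard measures handle unevenly sized profile groups poorly.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def group_similarity(profile_a: set, profile_b: set, lines: list) -> float:
    """Average Jaccard similarity per evolutionary line.

    profile_x: set of species in which the protein has a detected homolog.
    lines: list of sets, each holding the species of one evolutionary line.
    """
    scores = []
    for line in lines:
        a, b = profile_a & line, profile_b & line
        if a or b:  # skip lines where neither protein occurs
            scores.append(jaccard(a, b))
    return sum(scores) / len(scores) if scores else 0.0
```

Note how a line in which neither protein occurs contributes nothing; this is one of the judgment calls that makes similarity over unequal groups delicate.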
75

A Script Based Modular Game Engine Framework For Augmented Reality Applications

Kuru, Muhammed Furkan 01 October 2009
Augmented Reality (AR) is a technology that blends the virtual and real worlds. The technology has various potential application domains, such as broadcasting, architecture, manufacturing and entertainment. As the rapid development of AR technology continues, solutions for the quick creation of AR applications become crucial. This thesis presents an AR application development framework with scripting capability as a solution for rapid application development and rapid prototyping in AR. The proposed AR framework shares several components with game engines and is therefore designed as an extension of a game engine, with components that can be exchanged through a plug-in system. The framework provides developers with the ability to code in an agile manner through the scripting language. Our solution embeds a dynamic scripting language (Python) in a statically typed compiled language (C++) in order to achieve both agility and performance. Communication between the AR framework components and the scripting language is established through a messaging mechanism.
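A messaging mechanism between engine components and scripts typically amounts to a small publish/subscribe bus. The sketch below is written in Python for brevity and is only an illustration of the pattern, not the framework's actual C++/Python bridge; the topic name is hypothetical.

```python
class MessageBus:
    """Minimal publish/subscribe bus decoupling engine components from scripts."""

    def __init__(self):
        self._handlers = {}  # topic -> list of callables

    def subscribe(self, topic, handler):
        """Register a handler (e.g. a script callback) for a topic."""
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        """Deliver a payload to every handler subscribed to the topic."""
        for handler in self._handlers.get(topic, []):
            handler(payload)
```

With this pattern neither side calls the other directly, which is what lets components be swapped out as plug-ins without touching the scripts.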
76

Multi-resolution Visualization Of Large Scale Protein Networks Enriched With Gene Ontology Annotations

Yasar, Sevgi 01 September 2009
From the perspective of computer science, genome-scale protein-protein interactions (PPIs) are interpreted as networks or graphs with thousands of nodes. PPI networks represent the various types of possible interactions among the proteins or genes of a genome. PPI data is vital in protein function prediction, since the functions of a cell are performed by groups of proteins interacting with each other, and the main complexes of the cell are built from such interacting proteins. Recent advances in protein interaction prediction techniques have made a great amount of protein-protein interaction data available for genomes. As a consequence, systematic visualization and analysis techniques have become crucial. To the best of our knowledge, no existing PPI visualization tool considers multi-resolution viewing of PPI networks. In this thesis, we implemented a new approach to PPI network visualization that supports multi-resolution viewing of compound graphs. We construct compound nodes and label them using gene set enrichment methods based on Gene Ontology annotations. The thesis further suggests new methods for PPI network visualization.
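The coarse level of a multi-resolution view can be sketched as collapsing each protein into its compound node and keeping only the unique edges between groups. This is a simplified illustration under the assumption that every protein has already been assigned to one enriched GO group; the group names below are hypothetical.

```python
def collapse(edges, membership):
    """Rewrite a PPI edge list at compound-node resolution.

    edges: iterable of (protein, protein) interaction pairs.
    membership: dict mapping each protein to its enriched GO group.
    Returns the set of unique undirected edges between distinct groups.
    """
    coarse = set()
    for u, v in edges:
        gu, gv = membership[u], membership[v]
        if gu != gv:  # interactions inside one compound node are hidden
            coarse.add(tuple(sorted((gu, gv))))
    return coarse
```

Zooming in then simply means expanding one compound node back into its member proteins and their original edges.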
77

Design And Implementation Of A Hybrid And Configurable Access Control Model

Turan, Ugur 01 October 2009
A hybrid and configurable access control model is designed to satisfy the requirement of using different access control models in the same schema. The idea arose from the goal of completely combining and configuring the two main access control models, discretionary and mandatory, which have been widely used in many systems so far, each with its own advantages and disadvantages. The motivation originates from the fact that, in real-life usage, discretionary-based systems need some strict policies and mandatory-based systems need some flexibility. The model is designed to combine these two approaches in a single, configurable model, with the required real-life extensions, in a conflict-free fashion and with a configurable degree of combination. The model has been implemented, and the main cases that demonstrate its power and expressiveness have been designed and implemented as well. The authorization process is the responsibility of the model, which can be combined with secure authentication and auditing schemes. Newer approaches such as Role-Based, Context-Based and Temporal access control can easily be embedded in the model thanks to its generic and modular design.
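The core of combining discretionary and mandatory checks can be sketched as two independent predicates joined under a configurable mode. This is a toy illustration of the idea, not the thesis's model: the clearance levels, field names and the Bell-LaPadula-style read rule are assumptions for the example.

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def authorize(subject, obj, mode="both"):
    """Configurable read-access decision combining DAC and MAC.

    mode: "dac" (discretionary only), "mac" (mandatory only),
          or "both" (conflict-free combination: both must allow).
    """
    # DAC: the subject must appear on the object's access control list.
    dac_ok = subject["id"] in obj["acl"]
    # MAC (Bell-LaPadula read rule): clearance must dominate the label.
    mac_ok = LEVELS[subject["clearance"]] >= LEVELS[obj["label"]]
    if mode == "dac":
        return dac_ok
    if mode == "mac":
        return mac_ok
    return dac_ok and mac_ok
```

In "both" mode a denial from either model wins, which is one simple way to keep the combination conflict-free.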
78

An Extensible Framework For Automated Network Attack Signature Generation

Kenar, Serkan 01 January 2010
The effectiveness of misuse-based intrusion detection systems (IDS) is seriously undermined by the advance of threats in terms of speed and scale. Today, worms, trojans, viruses and other threats can spread around the globe in less than thirty minutes. In order to detect these emerging threats, signatures must be generated automatically and distributed to intrusion detection systems rapidly. There are studies on automatically generating signatures for worms and attacks; however, these systems either rely on honeypots, which are supposed to receive only suspicious traffic, or use port-scanning outlier detectors. In this study, an open, extensible system based on a network IDS is proposed to identify suspicious traffic using anomaly detection methods and to automatically generate attack signatures from this suspicious traffic. The generated signatures are classified and fed back into the IDS, either locally or in a distributed fashion. A design and proof-of-concept implementation are described, and the developed system is tested on both synthetic and real network data. The system is designed as a framework, so that different methods can be tested and the outcomes of varying configurations evaluated easily. The test results show that, with a properly defined attack detection algorithm, attack signatures can be generated with high accuracy and efficiency. The resulting system could be used to prevent the early damage caused by fast-spreading worms and other threats.
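One classic way to derive a content signature from a batch of suspicious payloads is to take their longest common byte substring. The brute-force sketch below illustrates the idea only; production signature generators use far more efficient and more selective algorithms, and the sample payloads are invented.

```python
def common_signature(payloads):
    """Longest byte substring shared by all suspicious payloads.

    Brute force for illustration: try candidate lengths from longest to
    shortest, scanning windows of the shortest payload.
    """
    if not payloads:
        return b""
    base = min(payloads, key=len)
    for size in range(len(base), 0, -1):
        for start in range(len(base) - size + 1):
            candidate = base[start:start + size]
            if all(candidate in p for p in payloads):
                return candidate
    return b""
```

A real system must also check the candidate against benign traffic, otherwise a common but harmless substring would flood the IDS with false positives.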
79

A Distributed Graph Mining Framework Based On Mapreduce

Alkan, Sertan 01 January 2010
The frequent patterns hidden in a graph can reveal crucial information about the network the graph represents. Existing techniques for mining the frequent subgraphs in a graph database generally rely on the premise that the data can fit into the main memory of the device on which the computation takes place. Even though some algorithms are designed using highly optimized methods, many lack a solution to the problem of scalability. In this thesis work, our aim is to find and enumerate the subgraphs that are at least as frequent as a designated threshold in a given graph. We propose a new distributed algorithm for the frequent subgraph mining problem that scales horizontally as the computing cluster grows. The method described here uses a partitioning method and the Map/Reduce programming model to distribute the computation of frequent subgraphs. At the core of this algorithm, we make use of an existing graph partitioning method to split the given data in the distributed file system, and to merge and join the computed subgraphs without losing information. The frequent subgraph computation in each split is done using another known method that can enumerate the frequent patterns. Although current algorithms can efficiently find frequent patterns, they are not parallel or distributed: even when they partition the data, they are designed to work on a single machine. Furthermore, these algorithms are computationally expensive, are not fault tolerant and are not designed to work on a distributed file system. Using the Map/Reduce paradigm, we distribute the computation of frequent patterns to every machine in a cluster. Our algorithm first bi-partitions the data via successive Map/Reduce jobs, then invokes another Map/Reduce job to compute the subgraphs in the partitions using CloseGraph, and finally recovers the whole set by invoking a series of Map/Reduce jobs that merge-join the previously found patterns.
The implementation uses an open-source Map/Reduce environment, Hadoop. In our experiments, our method scales to large graphs; as the graph data grows, it performs better than the existing algorithms.
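The Map/Reduce skeleton of such a pipeline can be sketched in miniature: mappers emit (pattern, count) pairs from each partition, the shuffle groups them by pattern, and a reducer sums the counts and applies the frequency threshold. In this toy sketch, patterns are pre-enumerated strings standing in for canonical subgraph codes; a real job would run a miner such as CloseGraph inside the mapper.

```python
from collections import defaultdict

def mapper(partition):
    """Emit (pattern, 1) for each pattern occurrence in one partition."""
    for pattern in partition:
        yield pattern, 1

def reducer(pairs):
    """Sum counts per pattern (the role of the reduce phase)."""
    counts = defaultdict(int)
    for pattern, count in pairs:
        counts[pattern] += count
    return counts

def frequent_patterns(partitions, threshold):
    """Simulate the full map -> shuffle -> reduce pipeline on one machine."""
    shuffled = (pair for part in partitions for pair in mapper(part))
    counts = reducer(shuffled)
    return {p for p, c in counts.items() if c >= threshold}
```

The merge-join step described in the abstract is what this toy omits: recombining patterns whose occurrences straddle a partition boundary.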
80

Ontology Population Using Human Computation

Evirgen, Gencay Kemal 01 January 2010
In recent years, many researchers have developed new techniques for ontology population. However, these methods cannot overcome the semantic gap between humans and the extracted ontologies. Words-Around is a web application that provides a user-friendly environment which channels the vast Internet population into providing data towards solving the ontology population problem, which no known efficient computer algorithm can yet solve. The application's fundamental data structure is a list of words that people naturally link to each other. It displays these lists as a word cloud that is fun to drag around and play with. Users are prompted to enter whatever word comes to mind upon seeing a word suggested from the application's database, or they can search for one word in particular to see what associations other users have made to it. Once logged in, users can view their activity history, see which words they were the first to associate, and mark particular words as misspellings or junk, helping to keep the list's structure relevant and accurate. The results of this implementation indicate that an interesting application that lets users simply play with its visual elements can also be useful for gathering information.
