121

TestBATN - A Scenario Based Test Platform For Conformance And Interoperability Testing

Namli, Tuncay 01 June 2011 (has links) (PDF)
Today, interoperability is the major challenge for the e-Business and e-Government domains. The fundamental solution is standardization at different levels of business-to-business interactions. However, publishing standards alone is not enough to assure interoperability between products of different vendors. In this respect, testing and certification activities are very important to promote standard adoption, validate conformance and interoperability of products, and maintain correct information exchange. In e-Business collaborations, standards need to address the different layers of the interoperability stack: the communication layer, the business document layer and the business process layer. Although there have been conformance and interoperability testing tools and initiatives for each of these categories, there is currently no support for testing an integration of them within a test scenario that resembles real-life use cases. Together with the integration of different layers of testing, the testing process should be automated so that test case execution can be done at low cost and repeated as required. In this thesis, a highly adaptable and flexible Test Execution Model and a complementary XML-based Test Description Language, consisting of high-level test constructs that can handle or simulate different parts or layers of the interoperability stack, are designed. The computer-interpretable test description language allows dynamic set-up of test cases and provides the flexibility to design, modify, maintain and extend test functionality, in contrast to a priori designed, hard-coded test cases. The work presented in this thesis is part of the TestBATN system supported by TUBITAK TEYDEB Project No. 7070191.
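The abstract outlines the execution model only at a high level; the sketch below illustrates the general idea of scenario-driven testing — test cases as interpretable data rather than hard-coded programs — in Python. The step names and handlers are invented for illustration and are not the actual TestBATN constructs, which are defined in an XML-based language.

```python
# Minimal sketch of a scenario-driven test engine: test cases are data,
# not code, so they can be defined, modified and replayed dynamically.
# Step names ("send", "expect") are invented; the real TestBATN language
# is XML-based and far richer.

class TestEngine:
    def __init__(self):
        self.handlers = {}          # step type -> handler function
        self.context = {}           # values shared between steps

    def register(self, step_type, handler):
        self.handlers[step_type] = handler

    def run(self, scenario):
        for step in scenario:       # each step is a plain dict
            handler = self.handlers[step["type"]]
            if not handler(step, self.context):
                return f"FAIL at step: {step}"
        return "PASS"

def send(step, ctx):
    ctx["last_sent"] = step["payload"]              # simulate one party
    return True

def expect(step, ctx):
    return ctx.get("last_sent") == step["payload"]  # verify the exchange

engine = TestEngine()
engine.register("send", send)
engine.register("expect", expect)

scenario = [                        # a declarative test scenario
    {"type": "send", "payload": "<Order id='1'/>"},
    {"type": "expect", "payload": "<Order id='1'/>"},
]
print(engine.run(scenario))         # PASS
```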
122

Multiresolution Formation Preserving Path Planning In 3-D Virtual Environments

Hosgor, Can 01 September 2011 (has links) (PDF)
The complexity of the path-finding and navigation problem increases when multiple agents are involved and these agents have to maintain a predefined formation while moving on a 3-D terrain. In this thesis, a novel approach for multiresolution formation representation is proposed that allows hierarchical formations of arbitrary depth to be defined using different referencing schemes. This formation representation is then utilized to find and realize a collision-free optimal path from an initial location to a goal location on a 3-D terrain while preserving the formation. The proposed method first employs a terrain analysis technique that constructs a weighted search graph from height-map data. The graph is used by an off-line search algorithm to find the shortest path. The path is realized by an on-line planner, which guides the formation along the path while avoiding collisions and maintaining the formation. The methods proposed here are easily adaptable to several application areas, especially real-time strategy games and military simulations.
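The abstract does not give the edge-weight function used in the terrain analysis; a minimal sketch of the off-line stage, assuming a 4-connected grid and an illustrative slope-based cost, might look like this:

```python
import heapq

def build_graph(height):
    """Weighted search graph from a height map (2-D list of elevations).
    Edge cost grows with slope; |dh| + 1 is an illustrative choice, not
    the thesis's actual weighting."""
    rows, cols = len(height), len(height[0])
    graph = {}
    for r in range(rows):
        for c in range(cols):
            edges = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    cost = 1 + abs(height[nr][nc] - height[r][c])
                    edges.append(((nr, nc), cost))
            graph[(r, c)] = edges
    return graph

def dijkstra(graph, start, goal):
    """Off-line shortest-path search over the weighted graph."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path, node = [], goal
    while node != start:            # walk predecessors back to start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

height = [[0, 1, 4], [0, 2, 1], [0, 0, 0]]
print(dijkstra(build_graph(height), (0, 0), (2, 2)))
```

The on-line planner of the thesis then moves the formation along this path while resolving local collisions, a step the sketch deliberately omits.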
123

Content Based Packet Filtering In Linux Kernel Using Deterministic Finite Automata

Bilal, Tahir 01 September 2011 (has links) (PDF)
In this thesis, we present a content-based packet filtering architecture in Linux using Deterministic Finite Automata and the iptables framework. New-generation firewalls and intrusion detection systems not only filter or inspect network packets according to their header fields but also take the payload content into account. These systems use a set of signatures, in the form of regular expressions or plain strings, to scan network packets. This scanning phase is a CPU-intensive task that may degrade network performance. Currently, the Linux kernel firewall scans network packets separately for each signature in the signature set provided by the user. This approach constitutes a considerable bottleneck to network performance. We implement a content-based packet filtering architecture and a multiple string matching extension for the Linux kernel firewall that matches all signatures at once, and show that we are able to filter network traffic while consuming constant bandwidth regardless of the number of signatures. Furthermore, we show that we can do packet filtering at multi-gigabit rates.
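Matching all signatures in a single pass is the classic multi-pattern automaton idea; the textbook construction is Aho-Corasick, and the in-kernel DFA of the thesis may differ in detail. A user-space Python sketch of the single-pass behavior:

```python
from collections import deque

def build_automaton(signatures):
    """Multi-pattern automaton in the spirit of Aho-Corasick: the payload
    is scanned once and every matching signature is reported."""
    goto, fail, out = [{}], [0], [set()]
    for sig in signatures:                      # build a trie of signatures
        state = 0
        for ch in sig:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(sig)
    queue = deque(goto[0].values())             # BFS to fill failure links
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]          # inherit shorter matches
    return goto, fail, out

def scan(payload, automaton):
    goto, fail, out = automaton
    state, hits = 0, set()
    for ch in payload:                          # single pass over payload
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        hits |= out[state]
    return hits

auto = build_automaton(["attack", "tack", "exploit"])
print(scan("no attacks here", auto))            # {'attack', 'tack'}
```

Because the scan advances one input character per step regardless of how many signatures were compiled in, the cost per packet stays constant as the signature set grows, which is the property the thesis exploits.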
124

Tag-based Music Recommendation Systems Using Semantic Relations And Multi-domain Information

Tatli, Ipek 01 September 2011 (has links) (PDF)
With the evolution of Web 2.0, most social-networking sites let their members participate in content generation. Users can label items with tags on these websites. A tag can be anything, but it is in effect a short description of the item. Because tags capture why a user likes an item rather than how much the user likes it, they are better identifiers of user profiles than ratings, which are usually numerical values assigned to items by users. This study therefore concentrates on tag-based contextual representations of music tracks. Items are generally represented by vector space models in content-based recommendation systems. In tag-based recommendation systems, users and items are defined in terms of weighted vectors of social tags. When there is a large number of tags, calculating the items to be recommended becomes hard, because working with huge vectors is time-consuming. The main objective of this thesis is to represent individual tracks (songs) in lower-dimensional spaces. An approach is described for creating music recommendations based on user-supplied tags that are augmented with a hierarchical structure extracted for top-level genres from DBpedia. In this structure, each genre is represented by its stylistic origins, typical instruments, derivative forms, subgenres and fusion genres. In addition to very large vector space models, an insufficient number of user tags is another problem in the recommendation field. The proposed method is evaluated with different user profiling methods for the case where the number of user tags is insufficient. User profiles are extended with multi-domain information; by using multi-domain information, more successful and realistic predictions are achieved.
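At its core, tag-based matching compares weighted tag vectors, for example with cosine similarity. The tags, weights and genre-derived features below are invented toy data; the thesis extracts such features (stylistic origins, typical instruments, and so on) from DBpedia.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse weighted tag vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

# Weighted tag vectors for tracks; tags and genre features are invented
# for illustration (the thesis derives genre features from DBpedia).
track_a = {"rock": 0.9, "guitar": 0.6, "blues_origin": 0.4}
track_b = {"rock": 0.7, "blues_origin": 0.5, "drums": 0.3}

user_profile = {"rock": 1.0, "guitar": 0.8}   # aggregated from tagged items

# Recommend the track closest to the user's tag profile.
best = max([("a", track_a), ("b", track_b)],
           key=lambda kv: cosine(user_profile, kv[1]))
print(best[0], cosine(user_profile, best[1]))
```

Replacing raw tags with a handful of genre-derived features like `blues_origin` is exactly how the dimensionality reduction pays off: the vectors stay short no matter how many free-form tags users supply.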
125

An Ontology-based Hybrid Recommendation System Using Semantic Similarity Measure And Feature Weighting

Ceylan, Ugur 01 September 2011 (has links) (PDF)
The task of a recommendation system is to recommend items that are relevant to the preferences of users. The two main approaches in recommendation systems are collaborative filtering and content-based filtering. Collaborative filtering systems have some major problems, such as sparsity, scalability, and the new-item and new-user problems. In this thesis, a hybrid recommendation system based on a content-boosted collaborative filtering approach is proposed in order to overcome the sparsity and new-item problems of collaborative filtering. The content-based part of the proposed approach exploits semantic similarities between items, based on a priori defined ontology-based metadata in the movie domain, together with feature weights derived from content-based user models. Recommendations are generated using the semantic similarities between items and collaborative-based user models. The results of the evaluation phase show that the proposed approach improves the quality of recommendations.
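Content-boosted collaborative filtering generally works by filling the sparse rating matrix with content-based estimates before the collaborative step. The sketch below shows that filling step with an invented item-similarity table; in the thesis the similarities come from ontology-based metadata and feature weighting.

```python
# Sketch of the content-boosted idea: fill missing ratings with a
# content-based estimate, then run collaborative filtering on the
# densified matrix. Ratings and similarities are invented toy data.
item_sim = {("m1", "m2"): 0.9, ("m1", "m3"): 0.2, ("m2", "m3"): 0.3}

def sim(i, j):
    return 1.0 if i == j else item_sim.get((i, j), item_sim.get((j, i), 0.0))

def content_fill(ratings, items):
    """Predict a user's missing rating as a similarity-weighted average
    of the items that user did rate."""
    dense = {}
    for user, rated in ratings.items():
        dense[user] = {}
        for item in items:
            if item in rated:
                dense[user][item] = rated[item]
            else:
                num = sum(sim(item, j) * r for j, r in rated.items())
                den = sum(sim(item, j) for j in rated)
                dense[user][item] = num / den if den else 0.0
    return dense

ratings = {"alice": {"m1": 5.0, "m3": 1.0}, "bob": {"m2": 4.0}}
dense = content_fill(ratings, ["m1", "m2", "m3"])
print(dense["alice"]["m2"])   # 4.0: pulled toward the similar, well-liked m1
```

The filling step is what neutralizes the new-item problem: an unrated item still receives estimates through its semantic similarity to rated ones.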
126

Model Checking Of Apoptosis Signaling Pathways In Lung Cancers

Parlak, Mehtap Ayfer 01 October 2011 (has links) (PDF)
Model checking is a formal verification technique that is widely used in different areas for automated verification and analysis. In this study, we applied a model checking method to a biological system. First, we constructed a single-cell, Boolean network model for the signaling pathways of apoptosis (programmed cell death) in lung cancers by combining the intrinsic and extrinsic apoptosis pathways, the p53 signaling pathway and the p53 - DAP kinase pathway in lung cancers. We translated this model to the NuSMV input language. Then we converted known experimental results to CTL properties and checked the conformance of our model with respect to biological experimental results. We examined the dynamics of apoptosis in lung cancer using the NuSMV symbolic model checker and identified the relationship between apoptosis and lung cancer. Finally, we generalized the whole process by introducing translation rules and CTL property patterns for biological queries, so that model checking of any signaling pathway can be automated.
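The thesis performs the verification symbolically in NuSMV; as a rough illustration of what a CTL-style check over a Boolean network amounts to, the toy network below is checked by explicit enumeration. The update rules and the property are invented, not the actual apoptosis model.

```python
from itertools import product

# Toy Boolean network with synchronous updates. The rules are invented
# for illustration; the thesis encodes apoptosis signaling in the NuSMV
# language and checks CTL properties symbolically.
rules = {
    "stress":    lambda s: s["stress"],       # external input, held fixed
    "inhibitor": lambda s: s["inhibitor"],
    "p53":       lambda s: s["stress"],
    "apoptosis": lambda s: s["p53"] and not s["inhibitor"],
}

def step(state):
    return {v: f(state) for v, f in rules.items()}

def eventually(state, prop, bound=16):
    """Does prop hold somewhere on the (deterministic) run from state?
    A brute-force stand-in for the 'AF prop' check NuSMV performs."""
    for _ in range(bound):
        if prop(state):
            return True
        state = step(state)
    return False

# CTL-style query: in every state with stress and no inhibitor,
# apoptosis is eventually triggered.
ok = all(
    eventually(dict(zip(rules, bits)), lambda s: s["apoptosis"])
    for bits in product([False, True], repeat=len(rules))
    if dict(zip(rules, bits))["stress"]
    and not dict(zip(rules, bits))["inhibitor"]
)
print(ok)   # True for this toy network
```

NuSMV does the same job over vastly larger state spaces by representing sets of states symbolically, which is why the explicit loop above does not scale beyond toy models.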
127

An Ontology-based Approach For Delay Analysis

Bilgin, Gozde 01 December 2011 (has links) (PDF)
Delay is a common problem in the construction sector. Recent developments in the sector have increased competition, which has made construction projects more complex than before and difficult to complete on time. This situation has not only increased delay problems but also made the analysis of delays difficult, which in turn causes further problems such as disputes between the parties to the contract. Sound knowledge of delay analysis is needed to improve the resolution of delay problems in construction projects. This study therefore aims to share knowledge on delay analysis by constructing a delay analysis ontology that provides direct and comprehensive knowledge. The constructed ontology may ease the information-sharing process and provide a basis for using the information in computers for different purposes, especially in risk and claim management processes. It may enable companies to create their own knowledge bases and decision support systems, improving both the knowledge and its usability. To meet this objective, a detailed literature review on the delay subject was carried out and an ontology on delay analysis was created. The created ontology is validated through comparison with three different case studies.
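Such an ontology is typically expressed in OWL/RDF. A minimal sketch using the rdflib library, with invented class and property names rather than the thesis's actual delay taxonomy:

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Minimal sketch of encoding a delay-analysis ontology in RDF.
# Class and property names are invented for illustration; the thesis
# defines its own, more comprehensive taxonomy.
DLY = Namespace("http://example.org/delay#")
g = Graph()
g.bind("dly", DLY)

# A small class hierarchy: excusable vs. non-excusable delays.
g.add((DLY.Delay, RDF.type, RDFS.Class))
g.add((DLY.ExcusableDelay, RDFS.subClassOf, DLY.Delay))
g.add((DLY.NonExcusableDelay, RDFS.subClassOf, DLY.Delay))

# Relations linking delays to causes and responsible parties.
g.add((DLY.hasCause, RDF.type, RDF.Property))
g.add((DLY.responsibleParty, RDF.type, RDF.Property))

# One instance: an excusable delay caused by heavy rain.
g.add((DLY.delay42, RDF.type, DLY.ExcusableDelay))
g.add((DLY.delay42, DLY.hasCause, Literal("unusually heavy rain")))
g.add((DLY.delay42, DLY.responsibleParty, Literal("neither party")))

print(g.serialize(format="turtle"))
```

A machine-readable encoding like this is what lets claim-management or risk tools query the delay knowledge directly, which is the reuse the abstract anticipates.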
128

Automated Navigation Model Extraction For Web Load Testing

Kara, Ismihan Refika 01 December 2011 (has links) (PDF)
Web pages serve a huge number of internet users in nearly every area. Adequate testing is needed to address the problems of web domains and provide more efficient and accurate services. We present an automated tool to test web applications against execution errors, including the errors that occur when many users connect to the same server concurrently. Our tool, called NaMoX, extracts the clickable elements of the web pages and creates a navigation model using a depth-first search algorithm. NaMoX simulates a number of users, parses the developed model, and tests it with branch coverage analysis. We have performed experiments on five web sites, reporting the response times when a click operation is performed, and found 188 errors in total. Quality metrics are extracted and applied to the case studies.
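The navigation-model extraction step can be pictured as a depth-first crawl over clickable elements. The sketch below, using the requests and BeautifulSoup libraries, only follows anchor links on a hypothetical site; NaMoX handles clickables more generally and adds user simulation and coverage analysis on top.

```python
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def extract_model(start_url, max_depth=2):
    """Depth-first extraction of a navigation model: nodes are pages,
    edges are clickable links. This sketch only follows <a href>
    anchors within the same host."""
    model, seen = {}, set()
    host = urlparse(start_url).netloc

    def dfs(url, depth):
        if url in seen or depth > max_depth:
            return
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            model[url] = ["<error>"]             # record unreachable pages
            return
        links = []
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target = urljoin(url, a["href"])
            if urlparse(target).netloc == host:  # stay inside the site
                links.append(target)
        model[url] = links
        for target in links:                     # depth-first descent
            dfs(target, depth + 1)

    dfs(start_url, 0)
    return model

# model = extract_model("http://example.org/")   # hypothetical entry page
# for page, links in model.items():
#     print(page, "->", len(links), "links")
```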
129

Next Page Prediction With Popularity Based Page Rank, Duration Based Page Rank And Semantic Tagging Approach

Yanik, Banu Deniz 01 February 2012 (has links) (PDF)
Page rank and semantic information are frequently used techniques in next page prediction systems. In our work, we extend the use of the Page Rank algorithm for next page prediction with several navigational attributes: the size of the page, the duration of the page visit, the duration of a transition (two sequential page visits), and the frequency of pages and transitions. In our model, we define the popularity of transitions and pages by using duration information in relation to page size and visit frequency. Using the popularity values of pages, we bias the conventional Page Rank algorithm to obtain a Popularity Based Page Rank (PPR), and model a next page prediction system that produces page recommendations for a given top-n value. Moreover, we extract semantic terms from web URLs in order to tag pages semantically. The extracted terms are mapped to web URLs at different levels of detail in order to find semantically similar pages for next page recommendations. With this tagging, we model another next page prediction method, which uses Semantic Tagging (ST) similarity and exploits PPR values as a supporting method. We also model a Hybrid Page Rank (HPR) algorithm that combines the Semantic Tagging based approach and the Popularity Based Page Rank values of pages with equal weights, in order to investigate the joint effect of PPR and ST. In addition, we investigate the effect of local (a synopsis of the directed web graph) and global (the whole directed web graph) modeling on next page prediction accuracy.
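The exact way duration, size and frequency are folded into the bias is not spelled out in the abstract; a common scheme, used in the sketch below, replaces the uniform teleportation vector of PageRank with a normalized popularity distribution.

```python
# Sketch of a popularity-biased PageRank. The popularity values below
# are invented; in the thesis they combine visit duration, page size
# and frequency.
def popularity_pagerank(links, popularity, d=0.85, iters=50):
    pages = list(links)
    total_pop = sum(popularity[p] for p in pages)
    teleport = {p: popularity[p] / total_pop for p in pages}
    rank = dict(teleport)                       # start from the bias
    for _ in range(iters):
        nxt = {p: (1 - d) * teleport[p] for p in pages}
        for p in pages:
            out = links[p]
            for q in out:                       # distribute rank over links
                nxt[q] += d * rank[p] / len(out)
        rank = nxt
    return rank

links = {"home": ["news", "shop"], "news": ["home"], "shop": ["home"]}
popularity = {"home": 1.0, "news": 3.0, "shop": 1.0}   # e.g. dwell time
rank = popularity_pagerank(links, popularity)
print(sorted(rank, key=rank.get, reverse=True))
# ['home', 'news', 'shop']: news outranks shop purely through the bias
```

With a uniform teleport vector, news and shop would tie; the popularity bias is what separates them, which is the effect the thesis measures for next page prediction.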
130

Expert Finding In Domains With Unclear Topics

Selcuk Dogan, Gonca Hulya 01 February 2012 (has links) (PDF)
Expert finding is an Information Retrieval (IR) task used to locate the experts that are needed. Finding the needed experts is a notable problem in many commercial, educational and governmental organizations. It is crucial to find the appropriate experts when seeking referees for a paper submitted to a conference or when looking for a consultant for a software project. It is also important to find similar experts in case the selected expert is absent or unavailable. Traditional expert finding methods are modeled on three components: a supporting document collection, a list of candidate experts and a set of pre-defined topics. In reality, pre-defined topics are most of the time not available. In this study, we propose an expert finding system that generates a semantic layer between domains and experts using Latent Dirichlet Allocation (LDA). A traditional expert finding method (a voting approach) is used to match the domains and the experts as the baseline method. When similar experts are needed, the system recommends experts matching the qualities of the selected experts. The proposed model is applied to a semi-synthetic data set as a proof of concept, where it performs better than the baseline method. The proposed model is also applied to the projects of the Technology and Innovation Funding Programs Directorate (TEYDEB) of the Scientific and Technological Research Council of Turkey (TÜBİTAK).
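The semantic layer can be pictured as follows: documents of experts and a free-text domain description are projected into a shared LDA topic space and matched there, with no pre-defined topic list. The sketch uses scikit-learn and invented toy documents.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Sketch of the LDA semantic layer: experts and a domain query are
# mapped into a shared topic space and matched there. The documents
# below are invented toy data.
expert_docs = {
    "alice": "neural networks deep learning image classification",
    "bob":   "contract law dispute arbitration construction claims",
}
domain = "machine learning models for image recognition"

corpus = list(expert_docs.values()) + [domain]
vec = CountVectorizer()
X = vec.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)          # rows: alice, bob, the domain query

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

names = list(expert_docs)
scores = {n: cosine(topics[i], topics[-1]) for i, n in enumerate(names)}
print(max(scores, key=scores.get))     # expected: "alice", whose
                                       # vocabulary overlaps the query
```

The same topic vectors support the similar-expert use case: to replace an unavailable expert, rank the remaining experts by cosine similarity to that expert's own topic vector instead of a domain query.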
