61

Coevolution Based Prediction Of Protein-protein Interactions With Reduced Training Data

Pamuk, Bahar 01 February 2009 (has links) (PDF)
Protein-protein interactions are important for the prediction of protein functions, since two interacting proteins usually have similar functions in a cell. Available protein interaction networks are incomplete; but they can be used to predict new interactions in a supervised learning framework. However, when the known protein network includes a large number of protein pairs, the training time of the machine learning algorithm becomes quite long. In this thesis work, our aim is to predict protein-protein interactions given a known portion of the interaction network. We used Support Vector Machines (SVM) as the machine learning algorithm, trained on the already known protein pairs in the network. We chose phylogenetic profiles of proteins to form the feature vectors required for the learner, since the similarity of two proteins in evolution gives a reasonable indication of whether the two proteins interact. For large data sets, the training time of SVM becomes quite long; therefore we reduced the data size in a sensible way while keeping approximately the same prediction accuracy. We applied a number of clustering techniques to extract the most representative data and features in a two-fold framework. Since the training data set is a two-dimensional matrix, we applied data reduction methods in both dimensions, i.e., both in data size and in feature vector size. We observed that the data clustered by the k-means clustering technique gave superior prediction accuracies compared to another data clustering algorithm that was also developed for reducing data size for SVM training. Still, the true positive and false positive rates (TPR-FPR) of the training data sets constructed by the two clustering methods did not clearly indicate which method outperforms the other. In addition, we applied feature selection methods on the feature vectors of the training data by selecting the most representative features in both a biological and a statistical sense. We used the phylogenetic tree of organisms to identify the organisms that are evolutionarily significant. Additionally, we applied Fisher's test to select the features that are most representative statistically. The accuracy and TPR-FPR values obtained by the feature selection methods did not allow a definite conclusion on the performance comparison; however, the phylogenetic tree method resulted in acceptable prediction values compared to Fisher's test.
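As a rough illustration of the data-reduction idea described in this abstract, the sketch below clusters each class of a training set with k-means and trains an SVM on the cluster centers only. It uses scikit-learn and randomly generated stand-ins for phylogenetic profiles; the shapes, parameters, and the reduce_by_kmeans helper are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch: k-means data reduction before SVM training on
# phylogenetic profiles (binary presence/absence vectors across organisms).
# All names and sizes are illustrative, not the thesis code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pairs, n_organisms = 2000, 60
X = rng.integers(0, 2, size=(n_pairs, n_organisms)).astype(float)  # profiles
y = rng.integers(0, 2, size=n_pairs)                               # interact?

def reduce_by_kmeans(X, y, k_per_class=50):
    """Replace each class by its k-means cluster centers."""
    Xr, yr = [], []
    for label in np.unique(y):
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=0)
        km.fit(X[y == label])
        Xr.append(km.cluster_centers_)
        yr.append(np.full(k_per_class, label))
    return np.vstack(Xr), np.concatenate(yr)

X_small, y_small = reduce_by_kmeans(X, y)
clf = SVC(kernel="rbf").fit(X_small, y_small)  # far fewer points than X
print("training size reduced:", len(X), "->", len(X_small))
```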
62

3d Object Recognition By Geometric Hashing For Robotics Applications

Hozatli, Aykut 01 February 2009 (has links) (PDF)
The main aim of 3D object recognition is to recognize objects under translation and rotation. Geometric Hashing is a rotation- and translation-invariant method that indexes the structural features of objects in an efficient way. In this thesis, Geometric Hashing is used to store the geometric relationship between discriminative surface properties that are based on surface curvature. The surface is represented by the shape index and the splash, where the shape index characterizes particular surface shapes and the splash introduces topological information. The method is tested on 3D object databases and compared with other methods in the literature.
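The indexing-and-voting core of Geometric Hashing can be pictured with the toy sketch below: a hash table is keyed by quantized invariant descriptor values (standing in for curvature-based properties such as the shape index), and recognition is done by majority voting. The quantization step and the sample models are invented for illustration.

```python
# Toy sketch of geometric hashing's index-and-vote idea (not the thesis code).
from collections import defaultdict

def quantize(value, step=0.05):
    # Hash keys must be discrete, so descriptors are binned.
    return round(value / step)

def build_hash_table(models):
    """models: {model_id: [invariant descriptor values]}"""
    table = defaultdict(list)
    for model_id, descriptors in models.items():
        for d in descriptors:
            table[quantize(d)].append(model_id)
    return table

def recognize(table, scene_descriptors):
    votes = defaultdict(int)
    for d in scene_descriptors:
        for model_id in table.get(quantize(d), []):
            votes[model_id] += 1
    return max(votes, key=votes.get) if votes else None

models = {"mug": [0.12, 0.55, 0.81], "phone": [0.33, 0.41, 0.90]}
table = build_hash_table(models)
print(recognize(table, [0.12, 0.80, 0.56]))  # -> "mug"
```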
63

Design And Implementation Of A Monitoring Framework

Kuz, Kadir 01 May 2009 (has links) (PDF)
In this thesis work, the symptoms in the Windows XP operating system usable for fault monitoring are investigated and a fault monitoring library is developed. A test GUI is implemented to exercise this library. Performance tests, including memory and CPU usage, are performed to measure the library's overhead on the system, and platform tests on the current version of the Windows operating system series (Windows Vista) are performed to check compatibility. In this thesis, the fault monitor-fault detector interface is also defined and implemented. To monitor a symptom that is not covered by the monitoring library, projects can implement their own monitors; a monitoring framework is designed to control and coordinate these monitors together with the main one. To create monitors for Java projects easily, a monitor creator library is developed.
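A minimal sketch of the monitor/framework split described above might look like the following: project-specific monitors implement a common interface, and a framework object polls them together with the built-in ones, forwarding detected symptoms to the fault detector. All names and the read_cpu_load stub are hypothetical, not the thesis API.

```python
# Hypothetical monitor interface and coordinating framework (illustrative).
from abc import ABC, abstractmethod

def read_cpu_load() -> float:
    # Stand-in for a platform-specific probe (e.g. performance counters).
    return 0.42

class Monitor(ABC):
    @abstractmethod
    def check(self) -> list[str]:
        """Return detected symptom descriptions (empty if healthy)."""

class CpuLoadMonitor(Monitor):
    def __init__(self, threshold=0.9):
        self.threshold = threshold
    def check(self):
        load = read_cpu_load()
        return [f"cpu load {load:.2f}"] if load > self.threshold else []

class MonitoringFramework:
    def __init__(self):
        self.monitors: list[Monitor] = []
    def register(self, monitor: Monitor):
        self.monitors.append(monitor)
    def poll(self) -> list[str]:
        symptoms = []
        for m in self.monitors:
            symptoms.extend(m.check())
        return symptoms  # would be forwarded to the fault detector

fw = MonitoringFramework()
fw.register(CpuLoadMonitor(threshold=0.3))
print(fw.poll())  # -> ['cpu load 0.42']
```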
64

Web Based Geographical Information Systems For Middle East Technical University Campus

Turkmendag, Gokce 01 June 2009 (has links) (PDF)
The Middle East Technical University (METU) campus covers such an extensive area that reaching the information which affects campus life, such as the locations of buildings, classrooms, computer labs, etc., may be very difficult for anyone who does not know the campus well, and even for a student, a member of the personnel, or a graduate who has spent a long time on campus. An interactive campus map, backed by a database structure related to this map and accessible to multiple types of users on the Internet, can display this information with its geographical locations and will greatly reduce the "difficulty of reaching information". For this purpose, data about METU were collected from various sources, edited, organized, and inserted into data tables. An interactive campus map displaying the locations of the physical structures and facilities on the campus was created in the Scalable Vector Graphics (SVG) standard and published on the Internet. Through JavaScript functions, the map can be browsed with navigation tools, including zoom-in, zoom-out, move, and information buttons, as well as a layer control. A search section on the user interface allows users to make queries to find building and classroom names and to list the buildings and facilities according to their usage and category types. Data are stored in a PostgreSQL database, transmitted through PHP scripts, and can be edited by authorized users through specialized web interfaces. Lastly, the web-based implementation of the application is entirely based on open-source standards.
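As an illustration of how such a map layer could be produced, the sketch below generates SVG rectangles for campus buildings from database-style records. The field names and coordinates are invented; the actual system, as described above, serves SVG through PHP from a PostgreSQL database.

```python
# Illustrative sketch (not the thesis code): generating an SVG layer of
# campus buildings from record dictionaries. All fields are hypothetical.
buildings = [
    {"id": "A1", "name": "Library",  "x": 120, "y": 80,  "w": 60, "h": 40},
    {"id": "B2", "name": "Comp.Lab", "x": 240, "y": 150, "w": 50, "h": 30},
]

def building_layer(records) -> str:
    rects = "\n".join(
        f'  <rect id="{b["id"]}" x="{b["x"]}" y="{b["y"]}" '
        f'width="{b["w"]}" height="{b["h"]}">'
        f'<title>{b["name"]}</title></rect>'
        for b in records
    )
    return f'<g id="buildings">\n{rects}\n</g>'

print(building_layer(buildings))  # embed inside the campus map <svg> element
```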
65

Design And Implementation Of An Open Security Architecture For A Software-based Security Module

Kaynar, Kaan 01 May 2009 (has links) (PDF)
The main purpose of this thesis work is to design a comprehensive and open security architecture whose desired parts could be realized on a general-purpose embedded computer without any special cryptography hardware. The architecture provides security mechanisms that implement known cryptography techniques, the operations of some well-known network security protocols, and appropriate system security methods. Consequently, a server machine may offload a substantial part of its security processing tasks to an embedded computer realizing the architecture. The mechanisms provided can be accessed by a server machine using a client-side API, via a secure protocol which provides message integrity and peer authentication. To demonstrate the practicability of the security architecture, a set of its security mechanisms was realized on an embedded PC/104-plus computer. A server machine was connected to the embedded computer over the Ethernet network interface and requested mechanisms from it. Four types of performance parameters were measured: the number of executions per second of a symmetric encryption method on the embedded computer, the number of executions per second of a public-key signing method on the embedded computer, the footprint of the implementation in the embedded computer's memory, and the embedded computer CPU power utilized by the implementation. Apart from the various security mechanisms and the secure protocol via which they can be accessed, the architecture defines a reliable software-based method for the protection and storage of secret information belonging to clients.
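One conventional way to provide the message integrity mentioned above is an HMAC over each request with a pre-shared key, as in the stdlib-only sketch below. This is an assumption about the general technique, not the thesis protocol, which additionally covers peer authentication and key protection.

```python
# Minimal sketch of HMAC-based message integrity with a pre-shared key.
# This illustrates the general technique only; the thesis protocol may differ.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # provisioned out of band in a real deployment

def seal(payload: bytes) -> bytes:
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def open_sealed(message: bytes) -> bytes:
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("integrity check failed")
    return payload

request = seal(b"encrypt:aes-128-cbc:...")
assert open_sealed(request) == b"encrypt:aes-128-cbc:..."
```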
66

Design And Implementation Of An Ontology Extraction Framework And A Semantic Search Engine Over Jsr-170 Compliant Content Repositories

Aluc, Gunes 01 July 2009 (has links) (PDF)
A Content Management System (CMS) is a software application for creating, publishing, editing and managing content. The next step in content management system development is building intelligence over existing content resources that are heterogeneous in nature. The intelligence collected in the knowledge base can later be used for executing semantic queries. Expressing the relations among content resources with ontological formalisms is therefore the key to implementing such semantic features. In this work, a methodology for the semantic lifting of JSR-170 compliant content repositories to ontologies is devised. The fact that, in the worst case, JSR-170 enforces no particular structural restrictions on the content model poses a technical challenge both for the initial build-up and for the further synchronization of the knowledge base. To address this problem, some recurring structural patterns in JSR-170 compliant content repositories are exploited. The value of the ontology extraction framework is assessed through a semantic search mechanism that is built on top of the extracted ontologies. The work in this thesis is complementary to the "Interactive Knowledge Stack for small to medium CMS/KMS providers (IKS)" project funded by the EC (FP7-ICT-2007-3).
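The "semantic lifting" step can be pictured as walking a JSR-170-style node tree and emitting subject-predicate-object triples, as in the sketch below. The node layout and the ex:/app: vocabulary are invented for illustration; the thesis exploits recurring repository patterns rather than this naive traversal.

```python
# Conceptual sketch of semantic lifting: a JCR-like node tree is walked
# and each node contributes RDF-style triples. All names are invented.
content_tree = {
    "path": "/news/article-42",
    "type": "app:article",
    "properties": {"title": "Campus Fair", "author": "jdoe"},
    "children": [
        {"path": "/news/article-42/image", "type": "nt:file",
         "properties": {"mime": "image/png"}, "children": []},
    ],
}

def lift(node, triples=None):
    triples = [] if triples is None else triples
    subject = node["path"]
    triples.append((subject, "rdf:type", node["type"]))
    for prop, value in node["properties"].items():
        triples.append((subject, f"ex:{prop}", value))
    for child in node["children"]:
        triples.append((subject, "ex:hasPart", child["path"]))
        lift(child, triples)  # recurse into the subtree
    return triples

for t in lift(content_tree):
    print(t)
```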
67

Un/cefact Ccts Based E-business Document Design And Customization Environment For Achieving Data Interoperability

Tuncer, Fulya 01 June 2009 (has links) (PDF)
The leading effort for creating a standard semantic basis for business documents to solve the electronic business document interoperability problem came from the UN/CEFACT (United Nations Centre for Trade Facilitation and Electronic Business) Core Components Technical Specification (CCTS), through a conceptual document modeling methodology. Currently, the main challenge in using UN/CEFACT CCTS based approaches is that the document artifacts are stored in spreadsheets, which makes it very difficult to discover previously defined components and to check their consistency. Furthermore, businesses need to customize standard documents according to their specific needs. The first XML implementation of UN/CEFACT CCTS, namely the Universal Business Language (UBL), provides detailed text-based descriptions of customization mechanisms. However, without automated tool support, it is difficult to apply the customizations and to maintain their consistency. In this thesis, these problems are addressed by providing an online e-business document design and customization environment, iSURF eDoCreator, which integrates machine-processable versions of the paper-based UN/CEFACT CCTS modeling methodology and the UBL customization guidelines, accompanied by an online common UN/CEFACT CCTS based document component repository. In this way, the iSURF eDoCreator environment aims to maximize the re-use of available document building blocks and to minimize the tedious document design and customization effort. The environment also performs a gap analysis between different customizations of UBL to show how interoperable the compared document models are. The research leading to these results has received funding from the European Community's FP7/2007-2013 under grant agreement n° 213031, the iSURF Project.
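The gap analysis mentioned above can be pictured, in its simplest form, as set comparison over the component names of two document models, as in the hedged sketch below. The element names are invented, and a real comparison would also consider structure and data types.

```python
# Toy gap analysis between two document customizations (names invented).
def gap_analysis(model_a: set[str], model_b: set[str]) -> dict[str, set[str]]:
    return {
        "shared":    model_a & model_b,
        "only_in_a": model_a - model_b,
        "only_in_b": model_b - model_a,
    }

invoice_core   = {"ID", "IssueDate", "Supplier", "Customer", "LineItem"}
invoice_custom = {"ID", "IssueDate", "Supplier", "Customer", "TaxTotal"}

report = gap_analysis(invoice_core, invoice_custom)
overlap = len(report["shared"]) / len(invoice_core | invoice_custom)
print(f"overlap: {overlap:.0%}, missing: {report['only_in_a']}")
```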
68

Semantic Interoperability Of The Un/cefact Ccts Based Electronic Business Document Standards

Kabak, Yildiray 01 July 2009 (has links) (PDF)
The interoperability of the electronic documents exchanged in eBusiness applications is an important problem in industry. Currently, this problem is handled by mapping experts who understand the meaning of every element in the involved document schemas and define the mappings among them, which is a very costly and tedious process. In order to improve electronic document interoperability, the UN/CEFACT produced the Core Components Technical Specification (CCTS), which defines a common structure and semantic properties for document artifacts. However, at present, this document content information is available only through text-based search mechanisms and tools. In this thesis, the semantics of CCTS based business document standards is explicated as an ontology in a formal, machine processable language. In this way, it becomes possible to compute a harmonized ontology, which gives the similarities among the document schema ontology classes of different document standards through both the semantic properties they share and the semantic equivalences established through reasoning. However, as expected, the harmonized ontology only helps to discover the similarities of structurally and semantically equivalent elements. In order to handle structurally different but semantically similar document artifacts, heuristic rules are developed that describe the possible ways of organizing simple document artifacts into compound artifacts as defined in the CCTS methodology. Finally, the equivalences discovered among the document schema ontologies are used for the semi-automated generation of XSLT definitions for the translation of real-life document instances.
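As a toy illustration of the last step, the sketch below generates a skeletal XSLT stylesheet from a list of discovered element equivalences. The source/target paths are invented, and a real translator must also handle namespaces, attributes, and structural reorganization.

```python
# Illustrative generation of a trivial XSLT mapping from element
# equivalences (source element -> target element). Pairs are invented.
equivalences = [("Invoice/BuyerParty", "Invoice/Customer"),
                ("Invoice/SellerParty", "Invoice/Supplier")]

def to_xslt(pairs) -> str:
    rules = "\n".join(
        f'  <xsl:template match="{src}">\n'
        f'    <xsl:element name="{dst.split("/")[-1]}">'
        f'<xsl:apply-templates/></xsl:element>\n'
        f'  </xsl:template>'
        for src, dst in pairs
    )
    return ('<xsl:stylesheet version="1.0" '
            'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
            f'{rules}\n</xsl:stylesheet>')

print(to_xslt(equivalences))
```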
69

Factors That Affect The Duration Of Cmmi-based Software Process Improvement Initiatives

Karagul, Yasemin 01 June 2009 (has links) (PDF)
Reference models developed for software process improvement (SPI) provide guidelines about what to do while assessing and improving the processes, but they do not answer the question of how. There have been a number of studies that try to find effective and strategic implementation models or to identify the factors that affect SPI success. However, these studies do not provide answers to questions about the effect of these factors on SPI program duration or about accelerated SPI studies. This study aims to investigate the factors that affect the duration of CMMI-based SPI. It consists of two phases: in the first phase, factors that influence SPI success are identified and hypotheses related to these factors are formulated based on the case studies published in the literature. In the second phase, the hypotheses are revised based on the results of qualitative research conducted in seven companies, six of which have obtained CMMI Level 3 certification as a consequence of their SPI effort. The study has shown that management commitment and involvement, as well as process documentation, have had a significant shortening effect on CMMI-based SPI duration within the context of the studied cases. Keywords: software process improvement, CMMI, success factors, duration factors.
70

Parallel Closet+ Algorithm For Finding Frequent Closed Itemsets

Sen, Tayfun 01 July 2009 (has links) (PDF)
Data mining is proving itself to be a very important field as the amount of available data increases exponentially, thanks first to computerization and now to the Internet. At the same time, cluster computing systems built from commodity hardware are becoming widespread, along with multicore processor architectures. This high computing power is combined with data mining to process huge amounts of data and to extract information and knowledge. Frequent itemset mining is a special subtopic of data mining because it is an integral part of many types of data mining tasks. It is often a prerequisite for many other data mining algorithms, most notably algorithms in the association rule mining area, and for this reason it is studied heavily in the literature. In this thesis, a parallel implementation of CLOSET+, a frequent closed itemset mining algorithm, is presented. The CLOSET+ algorithm has been modified to run on multiple processors simultaneously in order to obtain results faster. The Open MPI and Boost libraries have been used for the communication between processes, and the program has been tested on different inputs and parameters. Experimental results show that the algorithm exhibits high speedup and efficiency for dense data when the support value is higher than a certain threshold. The proposed parallel algorithm could prove useful for application areas where a fast response is needed for a low to medium number of frequent closed itemsets; a particular application area is the Web, where online applications have similar requirements.
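The parallelization pattern can be sketched with mpi4py (a Python MPI binding used here in place of the thesis's C++ Open MPI/Boost stack): prefix items of the search space are statically partitioned across ranks, each rank mines its subtrees, and rank 0 gathers the union. The brute-force closed-itemset routine below is a stand-in for the real CLOSET+ subtree mining.

```python
# Hedged sketch of the parallel mining pattern, not the thesis code.
# Run with e.g.:  mpiexec -n 4 python parallel_closed.py
from itertools import combinations
from mpi4py import MPI

TRANSACTIONS = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"},
                {"b", "c"}, {"a", "b", "c"}]
MIN_SUPPORT = 2
ITEMS = sorted(set().union(*TRANSACTIONS))

def support(itemset):
    return sum(1 for t in TRANSACTIONS if itemset <= t)

def closed_itemsets_with_prefix(prefix):
    """Brute-force stand-in for the CLOSET+ subtree rooted at `prefix`."""
    rest = [i for i in ITEMS if i > prefix]
    results = []
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            s = {prefix, *combo}
            sup = support(s)
            if sup < MIN_SUPPORT:
                continue
            # Closed: no single-item extension has the same support.
            if all(support(s | {x}) < sup for x in ITEMS if x not in s):
                results.append((frozenset(s), sup))
    return results

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local = []
for i, item in enumerate(ITEMS):
    if i % size == rank:  # static round-robin partition of prefixes
        local.extend(closed_itemsets_with_prefix(item))
gathered = comm.gather(local, root=0)
if rank == 0:
    closed = {s: sup for part in gathered for s, sup in part}
    print(closed)
```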
