81

E-Cosmic: A Business Process Model Based Functional Size Estimation Approach

Kaya, Mahir 01 February 2010 (has links) (PDF)
The cost and effort estimation of projects depends on software size, and an estimate of product size is needed as early in the project as possible. Conventional early functional size estimation methods produce a size at an early phase, but suffer from subjectivity and unrepeatability because the calculation is manual. Automated Functional Size Measurement approaches, on the other hand, require constructs that become available only in considerably later development phases. In this study we developed an approach, called e-Cosmic, to calculate and automate functional size measurement based on business process models. Functions, and the input and output relationship types of each function, are identified in the business process model. The size of each relationship type is determined by assigning the appropriate data movements defined in the COSMIC Measurement Manual. The relationship-type sizes are then aggregated to produce the size of each function, and the size of the software product is the sum of the sizes of these functions. This process is automated by a script developed in the ARIS tool. Three case studies were conducted to validate the proposed method: the size of each product was also measured manually with COSMIC FSM (Abran et al., 2007) and estimated with a conventional early estimation method, Early and Quick COSMIC FFP. We compared the results of the different approaches and discuss the usability of e-Cosmic based on the findings.
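
To make the size calculation concrete, here is a minimal Python sketch of the COSMIC aggregation the abstract describes; the process names and data movements are invented for illustration and are not taken from the thesis:

    # Each functional process lists its COSMIC data movements; every
    # Entry (E), Exit (X), Read (R) or Write (W) counts one CFP.
    processes = {
        "create_order": ["E", "W", "X"],   # hypothetical example data
        "query_order":  ["E", "R", "X"],
        "cancel_order": ["E", "R", "W", "X"],
    }

    def cosmic_size(procs):
        """Total size in CFP plus a per-process breakdown."""
        per_process = {name: len(moves) for name, moves in procs.items()}
        return sum(per_process.values()), per_process

    total, breakdown = cosmic_size(processes)
    print(total, breakdown)                # 10 CFP in this toy model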
82

Massive Crowd Simulation With Parallel Processing

Yilmaz, Erdal 01 February 2010 (has links) (PDF)
This thesis analyzes how parallel processing on the Graphics Processing Unit (GPU) can be used for massive crowd simulation, not only for rendering but also for the computational power that realistic simulation requires. The extreme population sizes in massive crowd simulation introduce a computational load that is difficult to meet with Central Processing Unit (CPU) resources alone. The thesis presents specific methods and approaches that maximize the throughput of GPU parallel computing while using the GPU as the main processor for the simulation. The methodology introduced makes it possible to simulate and visualize hundreds of thousands of virtual characters in real time. To achieve speedups of two orders of magnitude with GPU parallel processing, various stream-compaction and efficient memory-access approaches were employed. To simulate crowd behavior, fuzzy logic functionality was implemented on the GPU from scratch; this implementation can compute more than half a billion fuzzy inferences per second.
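
The fuzzy inference step lends itself to the data-parallel formulation the thesis exploits. The NumPy sketch below shows the same shape of computation on the CPU, evaluating invented membership functions and rules for all agents at once; the actual thesis implementation runs on the GPU:

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function, vectorized over x."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    n = 500_000
    dist = np.random.rand(n) * 10          # distance to nearest neighbour

    near = tri(dist, 0.0, 1.0, 5.0)        # invented fuzzy sets
    far  = tri(dist, 4.0, 9.0, 12.0)

    # Two invented rules: IF near THEN slow (0.2), IF far THEN fast (1.0);
    # weighted-average defuzzification, computed for all agents at once.
    speed = (near * 0.2 + far * 1.0) / (near + far + 1e-9)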
83

A Service Oriented Peer To Peer Web Service Discovery Mechanism With Categorization

Ozorhan, Mustafa Onur 01 March 2010 (has links) (PDF)
This thesis studies automated methods for web service advertisement and discovery, and presents efficient search and matching techniques based on OWL-S. In the proposed system, service discovery and matchmaking are performed via a centralized peer-to-peer web service repository. The repository can run on a software cloud, which improves the availability and scalability of service discovery. Service advertisement is done semi-automatically on the client side, with automatic WSDL-to-OWL-S conversion and manual annotation of the service description. An OWL-S based unified ontology, the Suggested Upper Merged Ontology, is used during annotation to enhance the semantic matching abilities of the system. Service advertisement and availability are continuously monitored on the client side to improve the accuracy of query results. User agents generate query specifications using the system ontology, providing semantic unification between the client and the system during service discovery. Query matching is performed via complex Hilbert spaces composed of conceptual planes and categorical similarities for each web service. User preferences following service queries are monitored and used to improve service match scores in the long run.
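
As a rough, simplified stand-in for the matching idea (not the thesis's Hilbert-space formulation), the Python sketch below scores advertised services by concept overlap with a query, boosted when categories agree; all names and data are invented:

    def match_score(query_concepts, service_concepts, same_category):
        """Jaccard overlap of ontology concepts, boosted by category match."""
        q, s = set(query_concepts), set(service_concepts)
        overlap = len(q & s) / len(q | s) if q | s else 0.0
        return overlap * (1.5 if same_category else 1.0)

    services = {
        "BookFlight": ({"Flight", "Ticket", "Payment"}, True),   # invented
        "RentCar":    ({"Vehicle", "Payment"}, False),
    }
    query = {"Flight", "Payment"}
    ranked = sorted(services, key=lambda n: -match_score(query, *services[n]))
    print(ranked)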
84

Evaluation And Selection Of CASE Tools: A Methodology And A Case Study

Oksar, Koray 01 February 2010 (has links) (PDF)
Today's Computer Aided Software Engineering (CASE) technology covers nearly all activities in software development, from requirements analysis to deployment. Organizations evaluate CASE tool solutions to automate or ease their processes. Besides reducing human errors, these tools increase the control, visibility and auditability of the processes. To achieve these benefits, however, the right tool or tools must be selected for the intended processes, which is not an easy task given the vast number of tools on the market. Failure to select the right tool may impede a project's progress, besides causing economic loss. In this thesis study, a methodology is proposed for evaluating and selecting a CASE tool among various candidates, and the points that separate this work from similar studies in the literature are explained. The methodology is also applied in a case study.
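
A weighted-sum decision matrix is a common core of such evaluation methodologies. The Python sketch below illustrates the idea with invented criteria, weights and scores; the thesis's actual method may differ:

    # Score each candidate tool on weighted criteria (1-5 scale) and
    # pick the highest weighted sum. All values here are invented.
    criteria = {"functionality": 0.4, "usability": 0.2,
                "vendor_support": 0.2, "cost": 0.2}
    tools = {
        "Tool A": {"functionality": 4, "usability": 3, "vendor_support": 5, "cost": 2},
        "Tool B": {"functionality": 3, "usability": 5, "vendor_support": 3, "cost": 4},
    }

    def weighted_score(scores):
        return sum(criteria[c] * scores[c] for c in criteria)

    best = max(tools, key=lambda t: weighted_score(tools[t]))
    print({t: round(weighted_score(s), 2) for t, s in tools.items()}, "->", best)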
85

Data Mining On Architecture Simulation

Maden, Engin 01 March 2010 (has links) (PDF)
Data mining is the process of extracting patterns from large volumes of data. One branch of data mining is sequence mining, where the data are viewed as a sequence of events, each with an associated time of occurrence, and are modelled using episodes that group related events. The aim of this thesis is to analyse architecture simulation output data with episode mining techniques, to expose previously known relationships between events in the architecture, and to provide an environment for predicting the performance of a program on an architecture before executing its code. An important aspect of the work is the application area: architecture simulation data is a new domain for episode mining, and using its results to predict program performance before execution can be considered a new approach. For this purpose, a data mining tool was developed that implements three episode mining techniques: the WINEPI approach, the non-overlapping occurrence based approach, and the MINEPI approach. The tool has three main components: a data pre-processor, an episode miner, and an output analyser.
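
For a concrete sense of the WINEPI approach, the sketch below computes the frequency of a parallel episode as the fraction of sliding windows containing all of its events, over an invented event stream (a simplified rendering of the published algorithm, not the thesis's implementation):

    def winepi_frequency(events, episode, width):
        """events: list of (time, event_type) pairs, sorted by time.
        Returns the fraction of width-sized windows containing the episode."""
        times = [t for t, _ in events]
        start, end = times[0] - width + 1, times[-1]
        hits = 0
        for w0 in range(start, end + 1):
            window = {e for t, e in events if w0 <= t < w0 + width}
            hits += set(episode) <= window       # episode fully inside window
        return hits / (end - start + 1)

    stream = [(1, "A"), (2, "B"), (3, "A"), (5, "C"), (6, "B")]
    print(winepi_frequency(stream, ("A", "B"), width=3))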
86

A Hybrid Movie Recommender Using Dynamic Fuzzy Clustering

Gurcan, Fatih 01 March 2010 (has links) (PDF)
Recommender systems are information retrieval tools that help users in their information-seeking tasks and guide them through a large space of possible options. Many hybrid recommender systems have been proposed to overcome the shortcomings of pure content-based (PCB) and pure collaborative filtering (PCF) systems, and most studies aim to improve the accuracy and efficiency of predictions. In this thesis, we propose an online hybrid recommender strategy (CBCFdfc) based on a content-boosted collaborative filtering algorithm that aims to improve prediction accuracy and efficiency. CBCFdfc combines content-based and collaborative characteristics to address problems such as sparsity, new items and over-specialization, and uses fuzzy clustering to keep a certain level of prediction accuracy while decreasing online prediction time. We compare CBCFdfc with PCB and PCF on prediction accuracy metrics, and with CBCFonl (online CBCF without clustering) on online recommendation time. Test results showed that CBCFdfc performs better than the other approaches in most cases. We also evaluate the effect of user-specified parameters on prediction accuracy and efficiency, and determine optimal values for these parameters from the test results. In addition to experiments on simulated data, we performed a user study and evaluated users' opinions about the recommended movies; the results obtained in the user evaluation are satisfactory. As a result, the proposed system can be regarded as an accurate and efficient hybrid online movie recommender.
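
The sketch below illustrates, in simplified form, the clustering idea behind such a system: predicting a rating from fuzzy memberships to user clusters rather than scanning all users online. The data and membership formula are invented and do not reproduce the thesis's exact CBCFdfc algorithm:

    import numpy as np

    centroids = np.array([[4.0, 2.0, 5.0],      # mean ratings of two user
                          [1.0, 5.0, 2.0]])     # clusters over three movies

    def fuzzy_memberships(user, known, m=2.0):
        """Fuzzy c-means style membership of a user in each cluster,
        computed only from the movies the user has actually rated."""
        d = np.linalg.norm(centroids[:, known] - user[known], axis=1) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        return inv / inv.sum()

    user = np.array([3.5, 2.5, 0.0])
    known = np.array([True, True, False])        # third movie unrated
    u = fuzzy_memberships(user, known)
    prediction = float(u @ centroids[:, 2])      # membership-weighted rating
    print(round(prediction, 2))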
87

Using Semantic Web Services For Data Integration In Banking Domain

Okat, Caglar 01 May 2010 (has links) (PDF)
A semantic-model-oriented transformation mechanism is developed to centralize intra-enterprise data integration, a need that is especially crucial in the banking domain selected for this study. A new domain ontology is constructed as the basis for annotations. A bottom-up approach is preferred for semantic annotation, so that existing web service definitions can be utilized. Transformations between syntactic web service XML responses and semantic model concepts are defined in transformation files, which are stored and executed in a separate central transformation repository to enhance abstraction and reusability. An RDF store is implemented to hold the transformed RDF data, and the inference power of the semantic model is exposed by executing semantic queries against it.
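
A minimal Python sketch of the transformation step, using xml.etree and the rdflib library: a syntactic XML response is mapped to concepts of a hypothetical banking ontology, loaded into an RDF graph, and queried semantically. All names, URIs and data below are invented:

    import xml.etree.ElementTree as ET
    from rdflib import Graph, Literal, Namespace, RDF

    xml_response = """<accounts>
        <account><iban>TR000001</iban><balance>1500</balance></account>
    </accounts>"""                               # invented service response

    BANK = Namespace("http://example.org/bank#") # hypothetical ontology
    g = Graph()
    for acc in ET.fromstring(xml_response):
        node = BANK[acc.findtext("iban")]
        g.add((node, RDF.type, BANK.Account))
        g.add((node, BANK.balance, Literal(float(acc.findtext("balance")))))

    # Semantic query over the transformed data.
    for row in g.query(
            "SELECT ?a WHERE { ?a a <http://example.org/bank#Account> }"):
        print(row.a)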
88

HLA FOM Development With Model Transformations

Dinc, Ali Cem 01 May 2010 (has links) (PDF)
There has been recent interest in the model-based development approach within the modeling and simulation community. The Model-Driven Architecture (MDA) of the OMG envisions a fully model-based development process in which models capture not only requirements but also designs and implementations; domain-specific metamodels and model transformations are the cornerstones of this approach. Honoring the MDA philosophy, we have developed transformations from the data part of Field Artillery (FA) domain models to High Level Architecture (HLA) Object Model Template (OMT) models. In MDA terminology, the former corresponds to the CIM (Computation-Independent Model) or, arguably, the PIM (Platform-Independent Model), and the latter corresponds to the PSM (Platform-Specific Model), where the platform is HLA. As a case study for the source metamodel, we developed a metamodel for the data model part of the (observed) fire techniques of the FA domain. All entities in the metamodel are derived from elements of NATO's Command and Control Information Exchange Data Model (C2IEDM).
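
The sketch below is a toy model-to-model transformation in the spirit of the approach: a source domain entity (the CIM/PIM side) is mapped to an HLA OMT object class with attributes (the PSM side). The names are invented; the thesis works with proper metamodels and a transformation engine:

    from dataclasses import dataclass, field

    @dataclass
    class DomainEntity:                  # source: FA data model element
        name: str
        fields: dict                     # field name -> type name

    @dataclass
    class OmtObjectClass:                # target: HLA OMT object class
        name: str
        attributes: list = field(default_factory=list)

    def transform(entity: DomainEntity) -> OmtObjectClass:
        """Map a domain entity to an OMT object class, one attribute per field."""
        attrs = [f"{n} : {t}" for n, t in entity.fields.items()]
        return OmtObjectClass(name=f"HLAobjectRoot.{entity.name}", attributes=attrs)

    fire_mission = DomainEntity("FireMission", {"targetId": "string", "rounds": "int"})
    print(transform(fire_mission))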
89

NetON: A New Tool For Discovering The Semantic Potential Of Biomedical Data In UMLS Semantic Network

Gulden Ozdemir, Birsen 01 March 2010 (has links) (PDF)
The Unified Medical Language System Semantic Network (UMLS SN), an upper-level abstraction of the biomedical domain, has a complex structure with many relationships, which makes human orientation difficult. Therefore, although the SN is a valuable source for modeling the contents of the biomedical domain, its usage is limited. NetON was designed and built to transform the UMLS SN automatically into OWL sublanguages, supporting semantic operations between biomedical systems. NetON builds on the Semantic Web, a candidate technology for sustaining knowledge-intensive tasks, and uses Web Ontology Language (OWL) sublanguage rules to represent the information in the UMLS SN. The major contribution of NetON is the automatic transformation of the UMLS SN into OWL sublanguages named OWL Basic Species, with the aim of transferring as much information as possible. The only information that cannot be transformed into any OWL Basic Species, due to the lack of appropriate constructors in the OWL standard, is the inheritance blockings of the UMLS SN: the SN contains implicit assertions that can be inferred by applying inference rules to explicitly specified assertions, but such inferred assertions are not necessarily valid for all descendants. Deductions made by any OWL reasoner over the NetON OWL Basic Species will therefore include false positives, because the inheritance-blocking information is missing. The algorithms of NetON's second dimension take the inheritance-blocking information into account while executing the inference rules; as no OWL reasoner can do this, the second dimension offers a solution for application developers.
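
The inheritance-blocking issue can be illustrated in a few lines of Python: relationships are inherited down the ISA hierarchy, but some inherited assertions are explicitly blocked, and since OWL cannot express the blocking, a NetON-style second pass must filter them after inference. The data below are invented, not actual UMLS SN content:

    isa = {"Virus": "Organism", "Plant": "Organism"}
    relations = {("Organism", "causes", "Disease")}
    blocked = {("Plant", "causes", "Disease")}     # blocked inheritance

    def ancestors(t):
        while t in isa:
            t = isa[t]
            yield t

    def inferred(subject):
        """Relations holding for `subject` after inheritance, minus blockings."""
        out = set()
        for anc in [subject, *ancestors(subject)]:
            out |= {(subject, r, o) for s, r, o in relations if s == anc}
        return out - blocked

    print(inferred("Virus"))   # inherits (Virus, causes, Disease)
    print(inferred("Plant"))   # blocked, so the set is empty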
90

Ontology Learning And Question Answering (QA) Systems

Baskurt, Meltem 01 May 2010 (has links) (PDF)
Ontology learning requires deep specialization in the Semantic Web, knowledge representation, search engines, inductive learning, natural language processing, and information storage, extraction and retrieval. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services, and they want to express what they are searching for in their native language. In this thesis we developed an ontology-based question answering system that satisfies these needs by drawing on the research areas stated above. The system allows users to ask a question about a restricted domain in natural language and returns the exact answer to the question. A set of questions was collected from users in the domain and, in addition, corresponding question templates were generated on the basis of the domain ontology. When the user asks a question and hits the search button, the system chooses the suitable question template and builds a SPARQL query according to this template. The system is also capable of answering questions that require inference, by using generic inference rules defined in a rule file. Our evaluation with ten users shows that the system is extremely simple to use without any training, and yields very good query performance.
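
The template-to-query step can be sketched as follows; the question template, ontology URI and property name are all hypothetical, not taken from the thesis:

    # A matched question template is instantiated into a SPARQL query.
    template = {
        "pattern": "who directed {film}",
        "sparql": """SELECT ?person WHERE {{
            ?film rdfs:label "{film}" .
            ?film <http://example.org/onto#director> ?person .
        }}""",
    }

    def build_query(question):
        """Fill the template slot from the user's question, if it matches."""
        prefix, _ = template["pattern"].split("{film}")
        if question.startswith(prefix):
            film = question[len(prefix):].strip(" ?")
            return template["sparql"].format(film=film)
        return None

    print(build_query("who directed Casablanca?"))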
