311

A study of online construction of fragment replicas

Torres Pizzorno, Fernanda January 2005 (has links)
High availability in database systems is achieved using data replication and online repair. In a system containing two replicas of each fragment, the loss of a fragment replica due to a node crash makes the system more vulnerable: only one replica of the fragments contained in the crashed node is available until a new replica is generated. In this study we have investigated different methods of regenerating a fragment replica so that it is up to date with the transactions executed while it is being rebuilt. The objective is to determine which method performs best, in terms of completion time at each of the nodes involved, under different conditions. We have investigated three methods for sending the data from the node containing the primary fragment replica to the node being repaired, and one method for catching up with the transactions executed at the node containing the primary fragment replica during the repair process. These methods assume that the access method used by the database system is B-trees. The methods differ in the volume of data sent over the network and in the work (and time) needed to prepare the data prior to sending. They consist, respectively, of sending the entire B-tree, sending only the leaves of the B-tree, and sending only the data; the latter has two alternatives on the node being repaired, depending on whether the data is inserted into a new B-tree or the B-tree is regenerated from the leaf level and up. This study shows that the choice of recovery method should be made with the network configuration in mind. For common network configurations of 100 Mbit/s or lower, it pays off to use methods that minimize the volume of data transferred. For higher network bandwidths, it is more important to minimize the amount of work done at the nodes.
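A minimal, hypothetical sketch (not the thesis's implementation) of the difference between two of these repair strategies: inserting every received record into a fresh index, versus regenerating the inner levels bottom-up from already-sorted leaf pages so that only the upper levels must be recomputed at the repaired node. A sorted list stands in for the B-tree, and the fanout is illustrative.

```python
# Hypothetical sketch, not the thesis code: two ways a repaired node could
# rebuild its fragment replica from data received over the network.
from bisect import insort

FANOUT = 4  # illustrative branching factor


def rebuild_by_insertion(records):
    """Insert every received record into a new index (sorted list as a stand-in).
    Corresponds to the 'send only the data, insert into a new B-tree' method."""
    tree = []
    for key in records:
        insort(tree, key)  # one top-down insert per record
    return tree


def rebuild_from_leaves(leaf_pages):
    """Regenerate the index bottom-up from already-sorted leaf pages.
    Corresponds to the 'send the leaves only' method: only the inner
    levels have to be recomputed on the node being repaired."""
    level = leaf_pages
    levels = [level]
    while len(level) > 1:
        # each inner node stores the smallest key of every child page
        parents = []
        for i in range(0, len(level), FANOUT):
            children = level[i:i + FANOUT]
            parents.append([page[0] for page in children])
        level = parents
        levels.append(level)
    return levels  # leaf level first, root level last


if __name__ == "__main__":
    data = list(range(32))
    print(len(rebuild_by_insertion(data)))
    print(len(rebuild_from_leaves([data[i:i + 4] for i in range(0, 32, 4)])))
```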
312

Development of a Semantic Web Solution for Directory Services

Buil Aranda, Carlos January 2005 (has links)
The motivation for this work is a common problem in organizations: how to access and manage the growing amount of data stored in companies. Companies can take advantage of emerging Semantic Web technology to solve this problem. Invenio AS needs to access a directory service in an efficient way, and the Semantic Web languages can be used to achieve this. In this thesis, a literature study has been carried out, investigating the main ontology languages proposed by the World Wide Web Consortium, RDF(S) and OWL (with its extension for Web services, OWL-S), as well as the ontology language proposed by the International Organization for Standardization, Topic Maps. This literature study can serve as an introduction to the Web ontology languages RDF, OWL (and OWL-S) and Topic Maps. A model of the databases has been extracted and designed in UML. The extracted model has been used to create a common ontology, merging the two initial databases. The ontology that represents the database has been expressed in the three languages and analysed. The quality and semantic accuracy of the languages for the Invenio case have been analysed, and detailed results from this analysis are presented.
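As a rough illustration of the kind of mapping such a database-to-ontology translation produces, the sketch below expresses a single, made-up directory-service entry as RDF triples with the rdflib library; the namespace, class and property names are hypothetical and are not Invenio's actual schema.

```python
# Minimal sketch, not Invenio's schema: one directory-service entry as RDF
# triples, roughly what a database-to-ontology mapping would emit.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/directory#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

entry = EX.employee42  # hypothetical record identifier
g.add((entry, RDF.type, EX.Employee))
g.add((entry, EX.name, Literal("Jane Doe")))
g.add((entry, EX.department, EX.sales))

print(g.serialize(format="turtle"))
```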
313

Knowledge Transfer in Open Source Communities of Practice : The Case of Azureus

Evans, Peter John Dalland January 2005 (has links)
This paper discusses knowledge sharing dynamics in open source communities of practice based on an empirical study of an open source project. The paper describes how the online community in the study displayed many characteristics of an ongoing community of practice (Lave and Wenger 1991), as well as the distinct role technology and artefacts played in collaboration within the community. It is shown that while the theory of communities of practice captures many important aspects of learning and knowledge sharing in the project, it neglects the role of artefacts and the way they can contribute to these dynamics. Concepts of knowledge and knowledge transfer are discussed in order to explain those aspects that are relevant to the observations made in the study. The purpose of the paper is to offer practical and theoretical contributions to the understanding of distributed knowledge transfer, as well as of the characteristics of open source development.
314

Feature selection in Medline using text and data mining techniques

Strand, Lars Helge January 2005 (has links)
In this thesis we propose a new method for searching for gene products and for giving annotations that associate genes with Gene Ontology (GO) codes. Many solutions already exist, using different techniques, but few are capable of addressing the whole GO hierarchy. We propose a method for exploring this hierarchy by dividing it into subtrees and trying to find terms that are characteristic of the subtrees involved, using feature selection based on chi-square analysis and naive Bayes classification to find the correct GO nodes.
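A toy sketch of this kind of pipeline, using scikit-learn rather than the thesis's own implementation: chi-square feature selection over made-up abstract text, followed by naive Bayes classification into illustrative GO codes.

```python
# Toy sketch (scikit-learn), not the thesis pipeline: chi-square feature
# selection over abstract text, then naive Bayes classification of a GO label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

abstracts = [
    "kinase activity phosphorylation signal",   # made-up training texts
    "membrane transport ion channel",
    "dna binding transcription regulation",
    "atp binding kinase domain",
]
labels = ["GO:0016301", "GO:0006810", "GO:0003677", "GO:0016301"]  # illustrative codes

model = make_pipeline(
    CountVectorizer(),
    SelectKBest(chi2, k=5),   # keep the terms most characteristic of a class
    MultinomialNB(),
)
model.fit(abstracts, labels)
print(model.predict(["protein kinase phosphorylation"]))
```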
315

Document retrieval from suffix arrays on disk

Falkenberg, Hans Christian January 2005 (has links)
The body of research on suffix arrays has grown considerably, and asymptotically better algorithms are being developed. There are, however, two areas that seem to have been somewhat neglected: searching in external memory, and document retrieval from a suffix array. We present and compare four different methods for document retrieval from an external suffix array. Our results show that only one of them yields adequate results in the presence of many documents, namely embedding document information into the suffix array. We also touch on the subject of searching external suffix arrays, presenting and discussing four techniques.
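An in-memory toy (not the thesis's external-memory structures) that shows the basic idea of keeping document information next to the suffix array: a suffix array is built over the concatenation of several made-up documents, each suffix is tagged with the document it starts in, and a pattern lookup reports the documents that contain it.

```python
# In-memory toy: suffix array over concatenated documents, with each suffix
# tagged by its owning document so matches can be reported per document.
from bisect import bisect_right

docs = ["banana bread", "a nanny", "bandana"]  # made-up corpus

text, starts = "", []
for d in docs:
    starts.append(len(text))
    text += d + "\x00"  # separator between documents

# each suffix array entry carries (start position, owning document id)
sa = sorted(range(len(text)), key=lambda i: text[i:])
doc_of = [bisect_right(starts, i) - 1 for i in sa]


def search(pattern):
    """Return the set of document ids containing `pattern` (linear scan of the
    sorted suffixes for brevity; a real search would binary-search the array)."""
    return {doc_of[k] for k, i in enumerate(sa) if text[i:i + len(pattern)] == pattern}


print(search("an"))  # documents containing "an"
```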
316

AutAT : Automatic Acceptance Testing of Web Applications

Skytteren, Stein Kåre, Øvstetun, Trond Marius January 2005 (has links)
Today, more and more applications are web based. As these systems grow larger, the need for testing them increases. XP and other agile methodologies stress the importance of test-driven development and automated testing at all levels. A few open source frameworks for automated testing of web applications' features exist, but they are rather poor when it comes to usability, efficiency and quality factors. This project has created a tool for automatic acceptance testing, called AutAT, which aims at being an improvement over the previous tools. The tool has been empirically tested to verify that it is better with respect to usability, efficiency and quality. The results from this test clearly show that AutAT is superior to the available state-of-the-art open source tools for acceptance testing.
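For readers unfamiliar with the genre, the sketch below illustrates the kind of check a web-application acceptance test automates; it is plain standard-library Python, not AutAT itself, and the URL and expected page content are placeholders.

```python
# Illustration only (not AutAT): a minimal web-application acceptance test.
import unittest
from urllib.request import urlopen


class FrontPageAcceptanceTest(unittest.TestCase):
    BASE_URL = "http://localhost:8080/"  # hypothetical application under test

    def test_front_page_shows_login_form(self):
        with urlopen(self.BASE_URL) as response:
            self.assertEqual(response.status, 200)
            body = response.read().decode("utf-8")
        self.assertIn("Login", body)  # expected page content (placeholder)


if __name__ == "__main__":
    unittest.main()
```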
317

Code Reuse in Object Oriented Software Development : How to Develop a Plan for Reuse

Eriksen, Lisa Wold January 2005 (has links)
Code reuse in object oriented software development has been common for some time. A recent study performed by the author revealed that while software developers in small Norwegian companies regard code reuse as important and useful, they are prone to perform ad-hoc reuse. This reduces the positive effects achieved through reuse, and although most of the developers wish to perform more systematic reuse, they do not know how to do this. This thesis aims to help remedy this problem by developing a set of guidelines describing the process of making a plan for reuse. To develop the guidelines, a literature study was performed, followed by three phases of writing. Between the three phases of writing, two rounds of three feedback interviews were performed to elicit information on the usability and clarity of the guidelines. Each feedback interview was performed with a developer from a small Norwegian company at the developer's workplace. After each set of interviews, the guidelines were revised and improved. The final set of guidelines presented in this report was considered by the developers to be easily understandable and useful, but further work remains to make the guidelines complete; a set of examples of how the process could be performed is essential to help the developers make the leap from the theoretical descriptions in the guidelines to making their own plan for reuse.
318

High Availability Transactions

Kolltveit, Heine January 2005 (has links)
This thesis presents a framework for a passively replicated transaction manager. By integrating transactions and replication, two well-known fault-tolerance techniques, the framework provides high availability for transactional systems and better support for non-deterministic execution in replicated systems. A prototype Java implementation of the framework, based on Jgroup/ARM and Jini, has been developed, and performance tests have been executed. The results indicate that the response time for a simple credit-debit transaction depends heavily on the degree of replication of both the servers and the transaction manager. For example, a system with two replicas of the transaction manager and the servers quadruples the response time compared to the non-replicated case. Thus, the performance penalty of replication should be weighed against the increased availability on a per-application basis.
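A toy sketch of the passive (primary-backup) replication pattern the framework builds on; it is not the Jgroup/ARM-based prototype, and the class names and account data are made up. The primary executes the credit-debit transaction and then ships the resulting state to its backups, which is why non-deterministic execution is unproblematic.

```python
# Toy sketch of passive (primary-backup) replication of a transaction manager.
import copy


class TransactionManagerReplica:
    def __init__(self):
        self.accounts = {"A": 100, "B": 0}

    def apply_state(self, accounts):
        """Backups only install the state produced by the primary."""
        self.accounts = copy.deepcopy(accounts)


class PrimaryTransactionManager(TransactionManagerReplica):
    def __init__(self, backups):
        super().__init__()
        self.backups = backups

    def credit_debit(self, src, dst, amount):
        if self.accounts[src] < amount:
            raise ValueError("insufficient funds")
        # execute on the primary (any non-deterministic work happens here)
        self.accounts[src] -= amount
        self.accounts[dst] += amount
        # propagate the update; each extra backup adds to the response time
        for backup in self.backups:
            backup.apply_state(self.accounts)


backups = [TransactionManagerReplica(), TransactionManagerReplica()]
primary = PrimaryTransactionManager(backups)
primary.credit_debit("A", "B", 40)
print(primary.accounts, backups[0].accounts)
```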
319

An Aspect-Oriented Approach to Adaptive Systems

Hveding, John Christian January 2005 (has links)
Adaptive systems react to changes in their environment and adapt by changing their behavior. The FAMOUS project aims to build an adaptive system by creating a generic middleware platform. This project explores how adaptive systems in general, and the FAMOUS project in particular, can benefit from using aspect-oriented technology. We propose using run-time aspect weaving to perform adaptations. We create a prototype to demonstrate how aspects for adaptations can be modelled. We suggest that variability engineering of the applications for an adaptive platform can benefit from aspect-oriented software development.
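As a loose analogy (the FAMOUS work targets real AOP tooling, not the code below), run-time weaving can be pictured as replacing a method with an advised version while the program is running; in the sketch a made-up video-stream class is adapted to a low-bandwidth environment this way.

```python
# Analogy only: "weaving" adaptation advice around an existing method at
# run time by replacing the method with a wrapped version.
import functools


class VideoStream:
    def frame_quality(self):
        return "high"


def weave(cls, method_name, advice):
    """Replace cls.method_name with advice(original) at run time."""
    original = getattr(cls, method_name)
    setattr(cls, method_name, advice(original))


def low_bandwidth_advice(original):
    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        # adaptation: override behaviour when the environment changes
        return "low"
    return wrapper


stream = VideoStream()
print(stream.frame_quality())                      # "high" before adaptation
weave(VideoStream, "frame_quality", low_bandwidth_advice)
print(stream.frame_quality())                      # "low" after run-time weaving
```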
320

A Security Focused Integration Architecture for an Electronic Observation Chart

Divic, Mirela, Huse, Ida Hveding January 2005 (has links)
An observation chart contains a collection of information from several different health information systems used at a hospital. Today, health personnel often have to access these health information systems during patient care and manually register information from them into the observation chart. Integration of the health information systems which constitute an observation chart is therefore needed. Integration means that systems used by a large number of users are put together in such a way that all users gain access to the information they need. Integration will increase the efficiency of information flow by automatically retrieving information from the relevant health information systems into an electronic observation chart. These improvements will hopefully in turn result in better quality of patient care, reduced time spent on treating each patient, and therefore also reduced costs. This thesis describes a security focused integration architecture for an electronic observation chart system (EOC-system). It also explores standards, strategies, laws and regulations relevant to the architectural description of the EOC-system. The EOC-system will be developed by CARDIAC, a company focusing on technology within health care, and the architectural description will support this development process. The architectural description for CARDIAC's EOC-system is based on the Model-based Architecture description Framework for Information Integration Abstraction (MAFIIA), an architectural description framework for software intensive systems with a specialization towards Information Integration Systems (IIS). The architectural description also follows MAFIIA's two extensions, MAFIIA/H and MAFIIA/RBAC, which relate to the health care domain and to role-based access control (RBAC), respectively. The work with this thesis, following the MAFIIA architectural description framework, has resulted in a detailed and structured architectural description which views the architecture from several viewpoints and describes different aspects of it. Security and integration are emphasized in the architectural description; a combination of a service-oriented and a portal-oriented integration architecture is chosen, and the security mechanisms digital signing, secure communication, auditing and access control are ensured.
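A minimal illustration of the role-based access control idea that MAFIIA/RBAC builds on; the roles, users and permissions below are made up and are not CARDIAC's actual policy.

```python
# Minimal RBAC illustration: users get permissions only through their roles.
ROLE_PERMISSIONS = {
    "physician": {"read_chart", "write_chart", "sign_entry"},
    "nurse": {"read_chart", "write_chart"},
    "secretary": {"read_chart"},
}

USER_ROLES = {
    "alice": {"physician"},
    "bob": {"nurse"},
}


def is_permitted(user, permission):
    """A request is allowed if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))


print(is_permitted("alice", "sign_entry"))  # True
print(is_permitted("bob", "sign_entry"))    # False
```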
