321. Derby: Replication and Availability
Sørensen, Egil. January 2007.
<p>This paper describes the work done to add hot standby replication functionality to the Apache Derby Database Management System. Apache Derby is a relational database management system implemented entirely in Java. Its key advantages are its small footprint and its adherence to the standard JDBC and SQL interfaces. It is also easy to install, deploy and use, and it can be embedded in almost any lightweight Java application. Implementing a hot standby scheme in Apache Derby adds several features. The contents of the database are replicated at run time to another site, providing online runtime backup. Because the hot standby takes over on faults, availability is improved: a client can connect to the hot standby after a crash, so the crash is masked from the clients. In addition, online upgrades of software and hardware can be performed by taking down one database at a time; when the upgrade is completed, the upgraded server is synchronized and brought back online with no downtime. A fully functional prototype of the Apache Derby hot standby scheme has been created in this project using logical logs, fail-fast takeovers and logical catchup after an internal up-to-crash recovery and reconnection. This project builds on the ideas presented in Derby: Write to Neighbor Mode.</p>
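The core idea of run-time replication via logical logs can be illustrated with a minimal sketch: the primary applies each logical operation locally and ships the corresponding log record to the standby, which redoes it to stay in sync. The class and record names below are illustrative assumptions, not Derby's actual internals.

```java
import java.util.HashMap;
import java.util.Map;

// A logical log record describing one operation ("set key = value").
class LogRecord {
    final String key, value;
    LogRecord(String key, String value) { this.key = key; this.value = value; }
}

// The hot standby: replays shipped log records against its own copy.
class Replica {
    final Map<String, String> store = new HashMap<>();
    void apply(LogRecord r) { store.put(r.key, r.value); }  // redo the logical operation
}

// The primary: updates its store and ships the log record to the standby.
class Primary {
    final Replica standby;                // kept in sync at run time
    final Map<String, String> store = new HashMap<>();
    Primary(Replica standby) { this.standby = standby; }
    void write(String key, String value) {
        LogRecord r = new LogRecord(key, value);
        store.put(key, value);            // local update
        standby.apply(r);                 // ship the record before acknowledging
    }
}
```

After a primary crash, clients can reconnect to the standby, whose store mirrors the last shipped operation; real takeover, fail-fast detection and catchup after recovery involve considerably more machinery than this sketch shows.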
322. Improved Backward Compatibility and API Stability with Advanced Continuous Integration
Drolshammer, Erik. January 2007.
<p>Services with a stable API and good backward compatibility are important for component-based software development and service-oriented architectures. Despite this importance, little tool support is currently available to ensure that services remain backward compatible. To address this problem, a new continuous integration technique has been developed. The idea is to build the projects that depend on a service against a new version of that service. This ensures that the development version is compatible with projects that depend on the regular version. A continuous integration server is used to initiate builds, so if a build breaks, the developers get feedback right away, and it is easy to determine which change caused the broken build. We show that an implementation is feasible by implementing a prototype as a proof of concept. The prototype uses Continuum as the underlying build engine and utilizes metadata from the Maven Project Object Model (POM). The prototype supports multiple services: services can thus be checked for compatibility with each other, in addition to backward compatibility with the regular version. Keywords: Continuous integration, Continuum, Maven, Component-based software development (CBSD), Service-Oriented Architecture (SOA), Test-Driven Development (TDD), agile software development</p>
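The reverse-dependency step of this technique can be sketched as a simple lookup over dependency metadata of the kind a Maven POM provides: given the service that changed, collect every project declaring a dependency on it and schedule those projects for a build against the development version. Class and project names here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hedged sketch of the reverse-dependency build idea: find all projects that
// depend on a changed service, so a CI server can rebuild them against the
// service's development version.
class ReverseDependencyBuilds {
    static List<String> projectsToBuild(Map<String, Set<String>> dependencies,
                                        String changedService) {
        List<String> toBuild = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : dependencies.entrySet())
            if (e.getValue().contains(changedService))
                toBuild.add(e.getKey());          // this project must be rebuilt
        Collections.sort(toBuild);                // deterministic order for the sketch
        return toBuild;
    }
}
```

A real implementation would read this dependency information from the POMs and hand the resulting project list to the build engine rather than returning it.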
323. Apache Derby SMP scalability: Investigating limitations and opportunities for improvement
Morken, Anders; Pahr, Per Ottar Ribe. January 2007.
<p>This report investigates the B-Tree access method of Apache Derby, an open source Java database system. The report focuses on performance aspects of the Derby page latch implementation, in particular the interaction between the B-Tree access method and page latching, and the impact of these components on the ability of Derby to scale on multiprocessor systems. Derby uses simple exclusive-only page latches that are inexpensive in the single-threaded case. We investigate the impact of this design on scalability, and contrast it with a version of Derby modified to support both shared read-only and exclusive page access for lookups in index structures. This evaluation is made for single-threaded as well as multi-threaded scenarios on multiprocessing systems. Based on analyses of benchmark results and profiler traces, we then suggest how Derby may be able to utilize modern Java locking primitives to improve multiprocessor scalability.</p>
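The shared/exclusive latch design contrasted here maps naturally onto a modern Java locking primitive such as `java.util.concurrent.locks.ReentrantReadWriteLock`. The sketch below is an assumption about what such a latch could look like, not Derby's actual latch class.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative page latch permitting shared access for read-only index
// lookups, while writers still take the latch exclusively.
class PageLatch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void latchShared()      { lock.readLock().lock(); }    // many concurrent readers
    void unlatchShared()    { lock.readLock().unlock(); }
    void latchExclusive()   { lock.writeLock().lock(); }   // single writer, excludes readers
    void unlatchExclusive() { lock.writeLock().unlock(); }
    int sharedHolds()       { return lock.getReadLockCount(); }
}
```

With exclusive-only latches, two concurrent index lookups on the same page serialize; with this latch, both take the read lock and proceed in parallel, which is precisely the multiprocessor-scalability opportunity the report examines.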
324. Game Enhanced Lectures: An Implementation and Analysis of a Lecture Game
Mørch-Storstein, Ole Kristian; Øfsdahl, Terje. January 2007.
<p>Educational games have recently caught the attention of educational organizations, which see in them a potential that is not achievable through traditional lectures. Drawing on findings from authoritative theory, we present the conception and implementation of a prototype educational game for lecture enhancement. The concept is based on playing a game during lectures: students answer multiple choice questions using their own mobile phones and receive instant feedback from a large screen displaying animated graphics. We show how such a concept is made readily available for students and schools by using the regular mobile phones and computers they already possess. We describe an example implementation, along with pedagogical guidelines for usage, and an analysis of how the prototype was received in an authentic setting. Students generally found the prototype easy to use and thought it contributed to an increased learning outcome. The prototype was perceived as entertaining, and half the students claimed they would attend more lectures if such a system were used. However, 10% of the students were reluctant to pay the GPRS/3G data transmission fees resulting from playing the game.</p>
325. Temporal Text Mining: The TTM Testbench
Fivelstad, Ole Kristian. January 2007.
<p>This master thesis presents the Temporal Text Mining (TTM) Testbench, an application for discovering association rules in temporal document collections. It is a continuation of work done in projects in the fall of 2005 and the fall of 2006, which laid the foundation for this thesis. The focus of the work is on identifying and extracting meaningful terms from textual documents to improve the meaningfulness of the mined association rules. Much work has been done to compile the theoretical foundation of this project, which has been used to assess different approaches for finding meaningful and descriptive terms. The old TTM Testbench has been extended with WordNet and with operations for finding collocations, performing word sense disambiguation, and extracting higher-level concepts and categories from the individual documents. A method for rating association rules based on the semantic similarity of the terms present in the rules has also been implemented, in an attempt to narrow down the result set and filter out rules that are not likely to be interesting. Experiments performed with the improved application show that the use of WordNet and the new operations can help increase the meaningfulness of the rules. One factor that plays a big part in this is that synonyms of words are added to make the terms more understandable. However, the experiments showed that it was difficult to decide whether a rule was interesting or not; this made it impossible to draw any conclusions regarding the suitability of semantic similarity for finding interesting rules. All work on the TTM Testbench so far has focused on finding association rules in web newspapers. It may, however, be useful to perform experiments in a more limited domain, for example medicine, where the interestingness of a rule may be more easily decided.</p>
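The association-rule idea underlying the Testbench can be sketched in miniature: represent each document by its set of extracted terms, count term pairs, and keep the rules whose support and confidence clear given thresholds. This is a generic support/confidence sketch under assumed names, not the TTM Testbench code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal one-to-one term association rule miner over term sets.
class TermAssociations {
    // Returns rules as {antecedent, consequent} pairs.
    static List<String[]> rules(List<Set<String>> docs,
                                double minSupport, double minConfidence) {
        Map<String, Integer> single = new HashMap<>();
        Map<String, Integer> pair = new HashMap<>();
        for (Set<String> d : docs) {
            for (String a : d) {
                single.merge(a, 1, Integer::sum);
                for (String b : d)
                    if (a.compareTo(b) < 0)             // count each unordered pair once
                        pair.merge(a + "|" + b, 1, Integer::sum);
            }
        }
        List<String[]> out = new ArrayList<>();
        int n = docs.size();
        for (Map.Entry<String, Integer> e : pair.entrySet()) {
            String[] ab = e.getKey().split("\\|");
            if (e.getValue() / (double) n < minSupport) continue;   // support filter
            // confidence of a -> b is count(a,b) / count(a), and vice versa
            if (e.getValue() / (double) single.get(ab[0]) >= minConfidence)
                out.add(new String[]{ab[0], ab[1]});
            if (e.getValue() / (double) single.get(ab[1]) >= minConfidence)
                out.add(new String[]{ab[1], ab[0]});
        }
        return out;
    }
}
```

The thesis's contribution sits upstream of this step: the quality of the term sets fed in (collocations, disambiguated senses, WordNet concepts) largely determines whether the mined rules are meaningful.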
326. Arbitration and Planning of Workflow Processes in a Context-Rich Cooperative Environment
Indahl, Christian; Rud, Kjell Martin. January 2007.
<p>Hardware has come a long way toward supporting pervasive computing and workflow management, whilst software has fallen behind. Existing systems lack the ability to make decisions that correspond with user intents and are unable to handle complex context-rich workflow conflicts. Since workflow systems are meant to support ordinary workers, we have looked at how workflows can be generated and adapted without prior knowledge of programming. The approach is based on the elaboration of so-called calm technologies, reducing user interference to a minimum. We propose ways of automating the process of obtaining context, generating workflows, making plans, and scheduling resources before execution. We propose that context is obtained by a Context service, which delivers tailored context information through abstraction. To create a workflow, the only thing a user needs to know is what he wants to achieve; the rest is generated. The planning mechanism used is the Scheduling service first proposed in our depth study. As part of this, we describe a method for simulating future context for better planning, decreasing the need for adaptation and replanning caused by context changes. When several actors execute workflows in an environment, conflicts will occur. We have made a proof-of-concept implementation of the Arbitration architecture from our depth study. This approach uses case-based reasoning (CBR) to recognise conflicts between workflows and select a solution. We set out to find a way to store a conflict representation as a CBR case so that it can be recognised in a different context, enabling the service to recognise conflicts that are similar in nature. We found that a case could be stored using ontologies to describe the nature of the workflow constituents that make up the conflict. In addition, context and state triggers are proposed. These filter out the cases that cannot be conflicts, given current contextual information or other states, before the CBR framework computes the similarity of the cases against the current workflows. An expert system supporting fuzzy logic could speed up the similarity computations required to recognise conflicts. After running some scenarios, we found that the system was able to detect known conflicts in a different context and to discover previously unknown ones, because of their similarity in nature to known conflicts.</p>
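The filter-then-match pattern described above can be sketched as follows: candidate cases are first discarded by a context trigger, and only the survivors are ranked by a similarity measure against the current situation. The attribute-overlap similarity and all names here are simplifying assumptions; the thesis uses ontologies and a CBR framework rather than flat string attributes.

```java
import java.util.List;
import java.util.Map;

// A stored conflict case: its describing attributes plus a context trigger.
class ConflictCase {
    final Map<String, String> attributes;
    final String requiredContext;         // trigger: case only applies in this context
    ConflictCase(Map<String, String> attributes, String requiredContext) {
        this.attributes = attributes;
        this.requiredContext = requiredContext;
    }
}

class Arbiter {
    // Fraction of attributes that agree between two descriptions.
    static double similarity(Map<String, String> a, Map<String, String> b) {
        int match = 0;
        for (Map.Entry<String, String> e : a.entrySet())
            if (e.getValue().equals(b.get(e.getKey()))) match++;
        return match / (double) Math.max(a.size(), b.size());
    }

    // Trigger-filter the case base, then return the best match above a
    // threshold, or null when no conflict is recognised.
    static ConflictCase recognise(List<ConflictCase> cases, Map<String, String> current,
                                  String context, double threshold) {
        ConflictCase best = null;
        double bestSim = threshold;
        for (ConflictCase c : cases) {
            if (!c.requiredContext.equals(context)) continue;  // trigger filter
            double s = similarity(c.attributes, current);
            if (s >= bestSim) { bestSim = s; best = c; }
        }
        return best;
    }
}
```

The trigger check is cheap, so running it before the (potentially expensive) similarity computation keeps the case base search tractable, which is the role the context and state triggers play in the architecture.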
327. Ontology Learning - Suggesting Associations from Text
Kvarv, Gøran Sveia. January 2007.
<p>In many applications, large-scale ontologies have to be constructed and maintained. Manual construction of an ontology is a time-consuming and resource-demanding process, often involving domain experts. It would therefore be beneficial to support this process with tools that automate the construction of an ontology. This master thesis has examined the use of association rules for suggesting associations between words in text. In ontology learning, concepts are often extracted from domain-specific text. By applying the association rules algorithm to the same text, the associations found can be used to discover candidate relations between concepts in an ontology. This algorithm has been implemented and integrated in GATE, a framework for natural language processing. Alongside the association rules algorithm, several information extraction and natural language processing techniques that the algorithm builds upon have been implemented. This has resulted in a framework for ontology learning. A qualitative evaluation of the associations found by the system has shown that the association rules algorithm gives promising results for detecting relations between concepts in an ontology. It has also been found that this algorithm depends on an accurate extraction of keywords. Further, a subjective evaluation of GATE has shown that it is suited as a framework for ontology learning.</p>
328. Domain Specific Languages for Executable Specifications
Alvestad, Kristian. January 2007.
<p>In agile software development, acceptance test-driven development is sometimes mentioned, and some have explored its possibilities. This study investigates whether a non-technical individual can write executable specifications based on the domain specific languages of three different frameworks: Fit, an acceptance testing framework based on HTML forms; CubicTest, an acceptance testing framework that uses modeling through Eclipse; and RSpec, a BDD framework for specifying system behavior through examples. The study involves an experiment in which the perceived effectiveness and understandability of the three frameworks are evaluated. Ten students participated in a one-and-a-half-hour experiment for which they had prepared by having one week to acquire an overview of their assigned framework. The experiment was held in a computer laboratory at the Norwegian University of Science and Technology. After the results were gathered and analyzed, statistical hypothesis testing was unfortunately not able to reject the null hypothesis of the study, so no conclusions could be drawn. The results of the study are discussed, and possible improvements and further work are outlined.</p>
329. User-centered and collaborative service management in UbiCollab: Design and implementation
Johansen, Kim-Steve. January 2007.
<p>This project has been carried out as a contribution to the UbiCollab project, which aims to provide a platform supporting ubiquitous collaboration. UbiCollab tries to support collaboration in the users' natural environment, and draws upon research in areas such as user mobility and ubiquitous computing to achieve this. The platform provides functionality such as location awareness, integration with the physical environment and mobility support. UbiCollab is based on service-oriented architecture (SOA), and integration of computerized services and service management are key aspects of the platform. A pre-study was performed by this author in the autumn of 2006 to compile a set of requirements and propose an architecture for a user-centered and collaborative service management system. This work builds on that study and provides the design and implementation of a service management system for UbiCollab. The system aims to provide users with the tools to effortlessly discover, provide, and consume services. Users are also able to take advantage of new services as they become available in dynamically changing environments. Work done on the service management system consists of the design and implementation of several platform components and their application programming interfaces (APIs). In addition, a set of applications testing the flexibility and functionality of the platform, as well as the completeness of the APIs, has been designed and implemented. What sets the service management system in UbiCollab apart from similar systems is its focus on end users and collaboration. User-friendliness is achieved by creating a pluggable service discovery system in which the inherent complexity of service discovery protocols is hidden from the user. In addition, services can be discovered by pointing at the service of interest with an RFID device; pointing provides a natural way of communicating. Collaboration is supported by allowing users to share (publish) their services in defined groups, and to consume services shared by other users.</p>
330. Ontology-Driven Query Reformulation in Semantic Search
Solskinnsbakk, Geir. January 2007.
<p>Semantic search is a research area whose goal is to understand the user's intended meaning of a query. This requires disambiguating the user query and interpreting its semantics. Semantic search would thus improve the user's search experience through more precise result sets. Ontologies are explicit conceptualizations of domains, defining concepts, their properties, and the relations among them. This makes ontologies semantic representations of specific domains, suitable as a basis for semantic search applications. In this thesis we explore how such an ontology-based semantic search system may be constructed. The system is built as a query reformulation module on top of an underlying search engine based on Lucene. We employ text mining techniques to semantically enrich an ontology by building feature vectors for the concepts of the ontology. The feature vectors are tailored to a specific document collection and domain, reflecting the vocabulary of both. We propose four query reformulation strategies for evaluation. The interpretation and expansion of the user query is based on the ontology and the feature vectors. Finally, the reformulated query is fired as a weighted query against the Lucene search engine. The evaluation of the implemented prototype reveals that search is in general improved by our reformulation approaches. It is, however, difficult to give any definite conclusion as to which query types benefit the most from our approach, and which reformulation strategy improves the search results the most; all four reformulation strategies seem, on average, to perform roughly equally.</p>
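The expansion step can be sketched as follows: each query word that matches an ontology concept is replaced by the weighted terms of that concept's feature vector, emitted in Lucene's boost syntax (`term^weight`), while unknown words pass through with a neutral weight. The feature-vector contents and class names below are illustrative assumptions; the thesis's four strategies differ in how this expansion is applied.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of ontology-driven query expansion into a weighted Lucene query.
class QueryReformulator {
    final Map<String, Map<String, Double>> featureVectors;  // concept -> term weights
    QueryReformulator(Map<String, Map<String, Double>> featureVectors) {
        this.featureVectors = featureVectors;
    }

    String reformulate(String query) {
        StringBuilder sb = new StringBuilder();
        for (String word : query.toLowerCase().split("\\s+")) {
            Map<String, Double> fv = featureVectors.get(word);
            if (fv == null) { append(sb, word, 1.0); continue; }  // unknown word: keep as-is
            for (Map.Entry<String, Double> t : new TreeMap<>(fv).entrySet())
                append(sb, t.getKey(), t.getValue());             // expand with weighted terms
        }
        return sb.toString();
    }

    private static void append(StringBuilder sb, String term, double boost) {
        if (sb.length() > 0) sb.append(' ');
        sb.append(term).append('^').append(boost);                // Lucene boost syntax
    }
}
```

The resulting string could be handed to Lucene's query parser, which interprets `^` as a per-term boost, so documents matching the highly weighted concept terms rank above those matching only the weaker expansion terms.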