351
A Comparison between JACK Intelligent Agents and JACK Teams Applied in Teamwork
Spillum, Øystein, January 2008
This report investigates JACK Intelligent Agents and JACK Teams and compares the two. The main objective was to find indications of which modeling paradigm requires the least development effort, and which provides the most suitable platform for constructing teamwork. The application domain is decision-support systems used in oil production. The aspects evaluated are development effort, degree of coupling, encapsulation of functionality, abstraction level, delegation of autonomy, and scalability. The solutions developed in the comparison had static team formations with few team members. This resulted in less development effort with JACK Intelligent Agents, and was the main reason it was considered the preferred modeling paradigm in this case. This was partly because reasoning based on actual team membership was not used in the JACK Teams version; roles were used instead, introducing more JACK entities than necessary. Dynamic team formation at runtime was not needed, due to the reference problem it introduces. Maintenance at runtime, for instance introducing new subteams and changing the role structure, was not examined. Introducing teams at large scale was not performed. These four factors could have led to a different result. The question is whether JACK Teams shows its potential through the oil production system designed in this report.
352
Storing and Querying RDF in Mars
Bang, Ole Petter; Fjeldskår, Tormod, January 2009
As part of the Semantic Web movement, the Resource Description Framework (RDF) is gaining momentum as a format for storing data, particularly metadata. The SPARQL Protocol and RDF Query Language is a SQL-like query language, recommended by W3C for querying RDF data. FAST is exploring the possibilities of supporting storage and querying of RDF data in their Mars search engine. To facilitate this, a SPARQL parser has been created for the Microsoft .NET Framework, using the MPLex and MPPG tools from Microsoft's Managed Babel package. This thesis proposes a solution for efficiently storing and retrieving RDF data in Mars, based on decomposition and B+ Tree indexing. Further, a method for transforming SPARQL queries into Mars operator graphs is described. Finally, the implementation of a prototype is discussed. The prototype has been developed in collaboration with FAST and has required customized indexing in Mars. Some deviations from the proposed solution were made in order to create a working prototype within the available time frame. The focus has been on exploring possibilities, and performance has thus not been a priority, in either indexing or evaluation.
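The abstract does not detail the decomposition or indexing scheme used in Mars. As a rough illustration of the general approach, the sketch below stores triples in sorted permutation indexes (SPO, POS, OSP) so that any single SPARQL triple pattern becomes a prefix range scan; the sorted lists stand in for B+ trees, and all class and method names are illustrative rather than taken from the thesis.

```python
import bisect

class TripleStore:
    """Illustrative RDF store: one sorted index per triple permutation.

    Sorted lists emulate the ordered access a B+ tree provides; a real
    implementation would use disk-based B+ trees instead.
    """

    ORDERS = {"spo": (0, 1, 2), "pos": (1, 2, 0), "osp": (2, 0, 1)}

    def __init__(self):
        self.indexes = {name: [] for name in self.ORDERS}

    def add(self, s, p, o):
        for name, order in self.ORDERS.items():
            key = tuple((s, p, o)[i] for i in order)
            bisect.insort(self.indexes[name], key)

    def match(self, s=None, p=None, o=None):
        """Answer a single SPARQL triple pattern; None marks a variable."""
        # Pick the permutation whose leading positions are bound, so the
        # lookup becomes a range scan over a contiguous key prefix.
        if s is not None:
            name, prefix = "spo", (s,) if p is None else (s, p)
        elif p is not None:
            name, prefix = "pos", (p,) if o is None else (p, o)
        elif o is not None:
            name, prefix = "osp", (o,)
        else:
            name, prefix = "spo", ()
        index, order = self.indexes[name], self.ORDERS[name]
        lo = bisect.bisect_left(index, prefix)
        for key in index[lo:]:
            if key[:len(prefix)] != prefix:
                break
            triple = [None] * 3
            for pos, value in zip(order, key):
                triple[pos] = value
            # Filter on any remaining bound terms not covered by the prefix.
            if all(want is None or want == got
                   for want, got in zip((s, p, o), triple)):
                yield tuple(triple)

store = TripleStore()
store.add("ex:doc1", "dc:creator", "ex:alice")
store.add("ex:doc2", "dc:creator", "ex:bob")
print(list(store.match(p="dc:creator")))  # all (s, p, o) with this predicate
```

A join of several such triple patterns is what a SPARQL-to-operator-graph translation would ultimately produce; the sketch only covers the single-pattern lookup.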
353
Design and Evaluation of a User-Centric Information System: Enhancing Student Life with Mobile Computing
Moe, Sindre Paulsrud, January 2009
This project is a continuation of the work carried out in autumn 2008 by the author, which reviewed the digital communication channels currently used for distribution of student information at the Norwegian University of Science and Technology (NTNU) and defined the key design decisions for a mobile service called MSIS. The project proposes a new mobile computer system (MSIS) intended to make user-centric information more easily available to students at NTNU. The system is designed using a Service Oriented Architecture (SOA), providing a number of services which offer functionality such as a dynamic course schedule and a location search tool. Furthermore, MSIS makes use of context-awareness and elements of mobile computing in order to provide a service that dynamically adapts to the situation of the user. A geographical positioning module based on Wi-Fi location fingerprinting technology is described, which makes it possible to determine the position of a handheld device within the existing wireless network infrastructure. The project has been carried out in accordance with the design-science research model, over a number of implementation and evaluation iterations. A user-driven evaluation of the MSIS service has been conducted among a group of NTNU students. The utility and usability of the system were evaluated by applying observational and empirical evaluation methods in a real-world environment on campus. The user tests identified numerous issues with the initial design and suggested ideas for enhancements, which have been implemented in the final version of the system. The Mobile Service Acceptance Model (MSAM) has been used to examine the factors that influence user adoption of mobile services in light of our project. The MSAM instrument measures different facets of a mobile information service, such as perceived usefulness, ease of use, and usage intention. Our findings confirm that the utility of the MSIS system is perceived as very high, and students would likely benefit from such a system. There is without doubt great potential for a service like MSIS, and it is believed to be a useful addition to existing systems.
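The abstract mentions Wi-Fi location fingerprinting without describing the matching step. As a minimal sketch of how such positioning is commonly done, the code below compares an observed set of signal strengths against a prerecorded radio map and returns the closest stored locations; the radio map, location names and constants are invented for illustration and are not taken from the thesis.

```python
import math

# Hypothetical offline radio map: location label -> {access point id: mean RSSI in dBm}
RADIO_MAP = {
    "realfagbygget-reading-room": {"aa:bb:cc:01": -48, "aa:bb:cc:02": -71, "aa:bb:cc:03": -80},
    "stripa-cafe":                {"aa:bb:cc:02": -52, "aa:bb:cc:03": -60, "aa:bb:cc:04": -75},
    "hovedbygget-lobby":          {"aa:bb:cc:04": -45, "aa:bb:cc:05": -58},
}

MISSING_RSSI = -100  # substitute when an access point is not heard at all

def fingerprint_distance(observed, reference):
    """Euclidean distance in signal space over the union of access points."""
    aps = set(observed) | set(reference)
    return math.sqrt(sum(
        (observed.get(ap, MISSING_RSSI) - reference.get(ap, MISSING_RSSI)) ** 2
        for ap in aps))

def estimate_location(observed, k=2):
    """Return the location labels of the k closest stored fingerprints."""
    ranked = sorted(RADIO_MAP,
                    key=lambda loc: fingerprint_distance(observed, RADIO_MAP[loc]))
    return ranked[:k]

scan = {"aa:bb:cc:02": -50, "aa:bb:cc:03": -63}  # one scan from the handheld device
print(estimate_location(scan, k=1))  # ['stripa-cafe'] for this example data
```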
354
The use of Enterprise Architecture, IT Strategy and IT Governance at StatoilHydro
Parmo, Christopher Ludt, January 2009
The master thesis extends the student's depth study. Through the master thesis, the student shall study and evaluate how IT Governance, Enterprise Architecture and IT Strategy are related at StatoilHydro. The student shall also research StatoilHydro's awareness of these concepts, and shall propose improvements and/or changes based on this evaluation.
355
World of Wisdom - World Editor: User-interface for creating game worlds for World of Wisdom
Krogsæter, Thor Grunde, January 2009
During the fall of 2008, a prototype of an educational multiplayer role-playing game called World of Wisdom (WoW) was developed as part of the specialization project TDT4570. WoW focuses on using knowledge to progress through the game. The goal of this thesis was to design and develop a user interface for teachers that could be used to generate new content for WoW. In this thesis we describe the design and implementation of such a user interface, called the WoW World Editor. The World Editor supports generating new maps, creatures, objects and questions for World of Wisdom. By making it easier to create the worlds, the course staff can focus on creating the knowledge for the game. For the students to be able to interact with the course staff while playing the game, we suggest a separate client for the course staff. This client will have additional functions that can be used to aid students with problems, and to get valuable feedback from the players.
356
Feature Selection for Text Categorisation
Garnes, Øystein Løhre, January 2009
Text categorization is the task of discovering the category or class that text documents belong to, or in other words spotting the correct topic for text documents. While many machine learning schemes for building automatic classifiers exist today, these are typically resource-demanding and do not always achieve the best results when given the whole contents of the documents. A popular solution to these problems is called feature selection. The features (e.g. terms) in a document collection are given weights based on a simple scheme and then ranked by these weights. Next, each document is represented using only the top-ranked features, typically only a few percent of the features. The classifier is then built in considerably less time, and accuracy might even improve. In situations where the documents can belong to one of a series of categories, one can either build a multi-class classifier and use one feature set for all categories, or split the problem into a series of binary categorization tasks (deciding whether documents belong to a category or not) and create one ranked feature subset for each category/classifier. Many feature selection metrics have been suggested over the last decades, including supervised methods that make use of a manually pre-categorized set of training documents, and unsupervised methods that need only training documents of the same type or collection as that to be categorized. While many of these look promising, there has been a lack of large-scale comparison experiments. Also, several methods have been proposed in the last two years. Moreover, most evaluations are conducted on a set of binary tasks instead of a multi-class task, as this often gives better results, although multi-class categorization with a joint feature set is often used in operational environments. In this report, we present results from the comparison of 16 feature selection methods (in addition to random selection) using various feature set sizes. Of these, 5 were unsupervised and 11 were supervised. All methods are tested on both a Naive Bayes (NB) classifier and a Support Vector Machine (SVM) classifier. We conducted multi-class experiments using a collection with 20 non-overlapping categories, and each feature selection method produced feature sets common to all the categories. We also combined feature selection methods and evaluated their joint performance. We found that the classical supervised methods had the best performance, including Chi Square, Information Gain and Mutual Information. The Chi Square variant GSS coefficient was also among the top performers. Odds Ratio showed excellent performance for NB, but not for SVM. The three unsupervised methods Collection Frequency, Collection Frequency Inverse Document Frequency and Term Frequency Document Frequency all showed performance close to the best group. The Bi-Normal Separation (BNS) metric produced excellent results for the smallest feature subsets. The weirdness factor performed several times better than random selection, but was not among the top-performing group. Some combination experiments achieved better results than each method alone, but the majority did not. The top performers Chi Square and the GSS coefficient classified more documents when used together than alone. Four of the five combinations that showed an increase in performance included the BNS metric.
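The abstract names Chi Square among the best-performing supervised metrics but does not give its form. As a sketch of how such a supervised feature selection score can be computed, the code below derives the chi-square statistic for each term from the 2x2 contingency table of term presence versus membership in one category (the one-vs-rest form); the example data and function names are illustrative only and do not reproduce the thesis experiments.

```python
from collections import Counter

def chi_square_scores(documents, labels, category):
    """Score each term's association with one category via the chi-square statistic.

    documents: list of token lists; labels: parallel list of category names.
    Illustrative sketch of supervised feature selection, not the thesis code.
    """
    n = len(documents)
    in_cat = [label == category for label in labels]
    docs_with_term = Counter()          # term -> number of documents containing it
    docs_with_term_in_cat = Counter()
    for tokens, positive in zip(documents, in_cat):
        for term in set(tokens):
            docs_with_term[term] += 1
            if positive:
                docs_with_term_in_cat[term] += 1
    n_pos = sum(in_cat)
    scores = {}
    for term, df in docs_with_term.items():
        a = docs_with_term_in_cat[term]  # term present, in category
        b = df - a                       # term present, not in category
        c = n_pos - a                    # term absent, in category
        d = n - n_pos - b                # term absent, not in category
        denom = (a + c) * (b + d) * (a + b) * (c + d)
        scores[term] = n * (a * d - b * c) ** 2 / denom if denom else 0.0
    return scores

docs = [["oil", "well", "pressure"], ["football", "match"], ["oil", "pipeline"]]
labels = ["energy", "sports", "energy"]
top = sorted(chi_square_scores(docs, labels, "energy").items(),
             key=lambda kv: -kv[1])[:3]
print(top)  # terms ranked by association with the "energy" category
```

Keeping only the top-ranked terms per category (or, for the multi-class setting, merging the per-category rankings into one joint feature set) is the reduction step the abstract refers to.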
357
Semantic Cache Investment: Adaption of Cache Investment for DASCOSA
Beiske, Konrad Giæver; Bjørndalen, Jan, January 2009
Semantic caching and distribution introduce new obstacles to how we use caching in database query processing. We have adapted a caching strategy called cache investment to work in a peer-to-peer database with a semantic cache. Cache investment is a technique that influences the query optimizer without changing it. It suggests cache candidates based on knowledge about queries executed in the past, not limited to those on the local site: it also detects locality by looking at queries processed on remote sites. Our implementation of semantic cache investment for distributed databases shows a great performance improvement, especially when multiple queries are active at the same time. To utilize cache investment, we have looked into how a distributed query optimizer can be extended to use cache content in planning. This allows the query optimizer to detect and include beneficial cache content on remote sites that it would otherwise have ignored. Our implementation of a cache-aware optimizer shows an improvement in performance, but its most important task is to evaluate cache candidates provided through cache investment.
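The abstract describes cache investment as influencing the optimizer by nominating cache candidates from past local and remote queries, without giving the scoring details. The sketch below shows one plausible shape of such an advisor: it accumulates an estimated benefit per semantic region, discounts observations from remote sites, and nominates regions that cross a threshold. The scoring model, weights and names are assumptions for illustration, not the thesis design.

```python
from collections import defaultdict

class CacheInvestmentAdvisor:
    """Illustrative sketch of cache investment for a distributed semantic cache.

    The advisor does not change the optimizer; it only accumulates evidence
    from past queries (local and remote) and nominates semantic regions whose
    expected benefit might justify caching them locally.
    """

    def __init__(self, remote_weight=0.5, threshold=100.0):
        self.remote_weight = remote_weight
        self.threshold = threshold
        self.benefit = defaultdict(float)   # semantic region -> accumulated benefit

    def observe(self, region, transfer_cost, remote=False):
        """Record that a query touched `region` and paid `transfer_cost` to fetch it."""
        weight = self.remote_weight if remote else 1.0
        self.benefit[region] += weight * transfer_cost

    def candidates(self):
        """Regions whose accumulated benefit has crossed the investment threshold."""
        return sorted((r for r, b in self.benefit.items() if b >= self.threshold),
                      key=lambda r: -self.benefit[r])

advisor = CacheInvestmentAdvisor(threshold=40.0)
advisor.observe("sales WHERE year = 2008", transfer_cost=30.0)
advisor.observe("sales WHERE year = 2008", transfer_cost=30.0, remote=True)  # seen at a peer
advisor.observe("inventory WHERE site = 'trondheim'", transfer_cost=10.0)
print(advisor.candidates())  # ['sales WHERE year = 2008']
```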
358
Prototyping a location aware application for UBiT: A map-based application, designed, implemented and evaluated
Olsen, Bjarne Sletten, January 2009
Through the research performed in this thesis, it has been shown how location awareness and maps can be exploited to facilitate the use of library resources, such as information on documents and objects. A prototype has been developed to demonstrate the feasibility of integrating several different information sources for this use. The prototype allows users located within the city centre of Trondheim to search for documents and to locate the library departments holding them. The user is shown a map and given information on how to travel to the nearest bus stop, as well as bus schedules for getting to the selected library department. Several information sources for the prototype have been identified and evaluated. The prototype communicates with BIBSYS for document information retrieval, Google Maps for map generation, team-trafikk.no for bus schedule queries, and Amazon.com and LibraryThing.com for book cover image downloading. To ensure data consistency, some local data sources are also maintained, such as a list of all the UBiT (NTNU library) departments in Trondheim. The prototype was implemented to satisfy a set of requirements, created by applying the technique of use cases. Each requirement has been discussed and prioritised based on requests from UBiT, and the most important requirements have been incorporated into the design of the prototype. The design focuses on modularity, and it is discussed how the external sources can best be integrated with the prototype. The prototype is implemented using a combination of programming languages. The differences between these languages have posed a challenge, and solutions to how the resulting problems can be avoided are presented. The prototype has been tested according to an extensive test plan, and the results of these tests have been documented and evaluated. Each of the design decisions has been evaluated and discussed, and suggestions for how they could have been improved are given. Finally, suggestions for how the functionality of the prototype can be extended are presented. The prototype created in this thesis allows users, whether familiar or unfamiliar with the city and its transportation network, to locate a document and travel to the library holding it. It demonstrates how emerging technologies such as location awareness can contribute to increased use of library services.
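The abstract emphasizes a modular design in which external sources such as the document catalogue and the travel planner are integrated behind the prototype's own interfaces. The sketch below illustrates that style of decoupling with stub adapters; the interfaces, class names and canned data are hypothetical and do not reflect the thesis code or the real service APIs of BIBSYS, Google Maps or team-trafikk.no.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Holding:
    title: str
    department: str          # UBiT department holding the document

class DocumentCatalogue(Protocol):
    """Interface a catalogue adapter (e.g. one wrapping BIBSYS) would implement."""
    def search(self, query: str) -> list[Holding]: ...

class TravelPlanner(Protocol):
    """Interface a map/bus-schedule adapter would implement."""
    def route(self, from_position: tuple[float, float], department: str) -> str: ...

class StubCatalogue:
    """Stand-in adapter returning canned data for demonstration."""
    def search(self, query: str) -> list[Holding]:
        return [Holding(title=query, department="Realfagbiblioteket")]

class StubPlanner:
    """Stand-in adapter producing a fixed travel description."""
    def route(self, from_position, department):
        return f"Walk to the nearest stop and take a bus towards {department}."

def locate_document(query, position, catalogue: DocumentCatalogue, planner: TravelPlanner):
    # Core logic depends only on the interfaces, so any adapter can be swapped in.
    holdings = catalogue.search(query)
    if not holdings:
        return "No matching documents found."
    holding = holdings[0]
    return f"'{holding.title}' is held at {holding.department}. " + planner.route(position, holding.department)

print(locate_document("Norwegian folk tales", (63.4195, 10.4021), StubCatalogue(), StubPlanner()))
```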
359
Similarity Search in Large Databases using Metric Indexing and Standard Database Access Methods
Ottesen, Erik Bagge, January 2009
Several methods exist for performing similarity searches quickly using metric indexing. However, most of these methods are based on main-memory indexing or require specialized disk access methods. We have described and implemented a method that combines standard database access methods and relational operators with the Linear Approximating and Eliminating Search Algorithm (LAESA) to perform both range and K nearest neighbour (KNN) queries. We have studied and tested various existing implementations of R-trees, and implemented the R*-tree. We also found that some of the optimizations in R*-trees were damaging to the response time at very high dimensionality. This is mostly because the increased CPU time removes any benefit gained from reducing the number of disk accesses. Further, we have performed comprehensive experiments using different access methods, join operators, pivot counts and range limits for both range and nearest neighbour queries. We will also implement and experiment with a multi-threaded execution environment running on several processors.
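The abstract combines LAESA with standard database access methods but does not restate the algorithm. As a minimal in-memory sketch of the LAESA filtering principle, the code below precomputes object-to-pivot distances and uses the triangle inequality to eliminate objects from a range query before computing their true distance to the query; the mapping onto database access methods from the thesis is not reproduced, and the data and names are illustrative.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class LaesaIndex:
    """Minimal in-memory sketch of LAESA-style pivot filtering.

    The thesis maps this onto standard database access methods and relational
    operators; here plain Python lists stand in for those structures.
    """

    def __init__(self, objects, pivot_ids, distance=euclidean):
        self.objects = objects
        self.distance = distance
        self.pivot_ids = pivot_ids
        # Precomputed table: distance from every object to every pivot.
        self.pivot_table = [[distance(obj, objects[p]) for p in pivot_ids]
                            for obj in objects]

    def range_query(self, query, radius):
        """Return (id, distance) for all objects within `radius` of `query`."""
        q_to_pivots = [self.distance(query, self.objects[p]) for p in self.pivot_ids]
        results = []
        for oid, obj_to_pivots in enumerate(self.pivot_table):
            # Triangle inequality: |d(q,p) - d(o,p)| <= d(q,o). If the lower
            # bound already exceeds the radius, the real distance never needs
            # to be computed -- this elimination step is LAESA's core idea.
            lower_bound = max(abs(qp - op) for qp, op in zip(q_to_pivots, obj_to_pivots))
            if lower_bound > radius:
                continue
            d = self.distance(query, self.objects[oid])
            if d <= radius:
                results.append((oid, d))
        return results

points = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (9.0, 1.0)]
index = LaesaIndex(points, pivot_ids=[0, 3])
print(index.range_query((1.2, 0.8), radius=1.0))  # only object 1 is within range
```

A KNN query can reuse the same lower bound by shrinking the search radius as closer candidates are found.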
360
Building a Production Module for a Telecommunications Company
Sansaloni Talens, Javier, January 2009
Nowadays computer science is extremely important in business: it makes it possible to automate tasks, streamline processes and obtain information. Information is one of the best resources a new company has, and with new technologies it has become essential. The employers responsible for decision-making in their companies have begun to understand that information helps their business and can be one of the critical factors that show whether the company is working successfully or not. (1) In recent years organizations have recognized the importance of managing key resources such as working hours and raw materials. ERP applications are often used to standardize business processes and unify data, and the importance of such software in companies is growing every day. In some cases, however, ERP software does not solve certain business problems, because some processes are neither standard nor common. Moreover, if we create a module for our company it has to be usable: usability is very important because, together with automation, it helps increase the performance of the coordinators. This project is a study of a particular case of a telecommunications company in which a problem with the production process was found. In this project we solve each problem that appears in the various stages of building our module. These stages include: a study of whether there are problems in the selected area of the business, the choice between buying and building a new module, heuristic techniques and methods to improve usability, and assessment of the quality of the built module using polls, usability guidelines and use cases. The results have improved the production of the company, and the system provides the necessary information to the coordinators, although the coordinators want further improvement. The study of usability helps users to be ready to use the software correctly.