41. Using the Geographical Location of Photos in Mobile Phones
Amundsen, Jon Anders, January 2008
<p>Digital cameras in mobile phones have become very popular in recent years, and it is common to store large photo collections on the phone. Organizing these photos on the phone, however, remains a challenge. This study explores different ways of utilizing the location where a photo was taken to make it easier to manage a large photo collection. Several positioning technologies that can be used to obtain this location are presented. Three of the suggested applications for using photo location information were implemented as prototypes on the Android platform. Android is a new platform for mobile phones developed by Google and the Open Handset Alliance, which has been made available as a preview release for developers. Part of this study was to investigate how suitable the platform is for developing location-based software. It was found to be very suitable, although there are still some bugs and missing features that are expected to be fixed before the final release. The three application prototypes were called “From Photo to Map”, “From Map to Photos” and “Who Lives Here?” The “From Photo to Map” application lets the user see a map where the location of a selected photo is visualized with a marker. The “From Map to Photos” application shows a map with markers at all of the locations where the user has taken photos; when one of the markers is selected, the photos taken at that location are shown. The “Who Lives Here?” application tells the user which of the persons in their contact list live where the photo was taken. A small user survey showed that the participants thought all of the applications could be useful, but they were unsure whether they would use them themselves. The survey also showed that most of the users found photos faster with map-based browsing in the “From Map to Photos” application than by browsing through a photo collection linearly, although several concerns about the implementation details and the use of an emulator make the exact efficiency gain very uncertain.</p>
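The core of the “From Map to Photos” idea, grouping photos by where they were taken so that each location becomes one map marker, can be sketched as below. This is a minimal illustration, not the thesis's implementation; the file names, coordinates and the 100-metre grouping radius are all invented.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def group_by_location(photos, radius_m=100.0):
    """Greedily assign each photo to the first group whose anchor lies
    within radius_m; otherwise start a new group. Each group would then
    become one marker on the map."""
    groups = []  # list of (anchor_lat, anchor_lon, [photo names])
    for name, lat, lon in photos:
        for alat, alon, members in groups:
            if haversine_m(lat, lon, alat, alon) <= radius_m:
                members.append(name)
                break
        else:
            groups.append((lat, lon, [name]))
    return groups

photos = [
    ("trip1.jpg", 63.4305, 10.3951),  # Trondheim centre (hypothetical)
    ("trip2.jpg", 63.4306, 10.3953),  # a few metres away -> same marker
    ("cabin.jpg", 63.0000, 10.0000),  # far away -> its own marker
]
markers = group_by_location(photos)
```

Selecting a marker would then display the photo names stored in that group's member list.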
42. Construction of Object-Oriented Queries Towards Relational Data: In View of Industrial Practices
Jodal, Stein Magnus, January 2009
<p>The focus of this work is querying relational data through an object-relational mapper (ORM). In Java projects, it is common to use the Hibernate ORM and write the queries using HQL and Criteria. These approaches have limitations with regard to readability and static analysis. The limitations are identified and explained in this thesis, and several possible solutions are discussed. One of the solutions is examined in depth and implemented in a real-world project. The described solution eases the construction of queries and makes it possible to fully utilize development support tools.</p>
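The readability and static-analysis contrast the abstract describes can be illustrated in a language-neutral way. The sketch below is a hypothetical builder API in Python, loosely in the spirit of Hibernate's Criteria, not the thesis's actual solution: a query built as objects exposes its structure to tools, whereas a query kept in a string does not.

```python
class Query:
    """Minimal composable query builder (hypothetical API): each where()
    call returns a new Query, so conditions are data that tools can
    inspect statically rather than an opaque string."""
    def __init__(self, table, conditions=()):
        self.table = table
        self.conditions = tuple(conditions)

    def where(self, column, op, value):
        return Query(self.table, self.conditions + ((column, op, value),))

    def to_sql(self):
        sql = f"SELECT * FROM {self.table}"
        if self.conditions:
            clauses = [f"{c} {op} :{c}" for c, op, _ in self.conditions]
            sql += " WHERE " + " AND ".join(clauses)
        return sql

# A plain string query offers no structure for tools to analyse:
hql_style = "SELECT * FROM orders WHERE status = :status AND total > :total"

# The builder expresses the same query as data, so an IDE or checker
# could verify column names and parameter types before runtime.
q = Query("orders").where("status", "=", "open").where("total", ">", 100)
```

The design choice mirrors the trade-off in the abstract: the string is shorter, but only the object form supports refactoring and compile-time checking.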
43. Project Management in Agile Software Development: An empirical investigation of the use of Scrum in mature teams
Andersen, Joachim Hjelmås, January 2009
<p>Coming...</p>
44. Automated Analyses of Malicious Code
Krister, Kris Mikael, January 2009
<p>Sophisticated software with malicious intentions (malware) that can easily and aggressively spread to a large set of hosts is found all over the Internet. Such software strives to evade malware analysts so that it can continue its malicious actions without interruption. It is difficult for analysts to find the locations of machines infected with new, unknown malware, and likewise hard to estimate the prevalence of an outbreak. Currently, these tasks are done using resource-demanding manual work, or simply rough guessing. Automating them is one possible way to reduce the necessary resources. This thesis presents an in-depth study of which properties such a system should have. A system design is made based on the findings, and an implementation is carried out as a proof-of-concept system. The final system runs (malicious) software while observing the network traffic originating from it. A signature for intrusion detection systems (IDSes) is generated using data from the observations. When loaded into an IDS, the signature localises hosts that are infected with the same malware type, enabling network administrators to find and repair them. The thesis also includes a thorough introductory study of the malware problem and possible countermeasures, focusing on a malware analyst's point of view.</p>
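The observe-then-generate step can be sketched as follows. This is a simplified illustration of emitting a Snort-style rule from observed sandbox traffic, not the thesis's actual generator; the port, payload marker and SID are invented, and a real rule would carry more fields.

```python
def make_signature(sid, proto, dst_port, payload_marker, msg):
    """Build a Snort-style IDS rule (simplified field set) that flags
    internal hosts contacting the malware's observed remote service."""
    return (
        f'alert {proto} $HOME_NET any -> $EXTERNAL_NET {dst_port} '
        f'(msg:"{msg}"; content:"{payload_marker}"; sid:{sid}; rev:1;)'
    )

# Hypothetical observation from running the sample in a sandbox:
observed = {"proto": "tcp", "dst_port": 6667, "marker": "JOIN #botchan"}
rule = make_signature(1000001, observed["proto"], observed["dst_port"],
                      observed["marker"], "Suspected IRC bot beacon")
```

Loaded into an IDS, a rule like this would alert on any host emitting the same traffic pattern, which is how the generated signature localises other infections.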
45. Combining Audio Fingerprints
Larsen, Vegard Andreas, January 2008
Large music collections are now more common than ever before, yet search technology for music is still in its infancy. Audio fingerprinting is one method that allows searching for music. In this thesis, several audio fingerprinting solutions are combined into a single solution, to determine whether such a combination can yield better results than any of the solutions can separately. The combined solution is used to find duplicate music files in a personal collection. The results show that applying the weighted root-mean-square (WRMS) to the problem ranked the results most effectively, notably better than the other approaches tried. The WRMS produced 61% more correct matches than the original FDMF solution, and 49% more correct matches than libFooID.
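The weighted root-mean-square combination of per-fingerprinter scores can be written out directly. The formula below is the standard WRMS; the scores and weights are invented for illustration and are not the thesis's tuned values.

```python
import math

def wrms(scores, weights):
    """Weighted root-mean-square of per-fingerprinter match scores:
    sqrt(sum(w_i * s_i^2) / sum(w_i))."""
    num = sum(w * s * s for s, w in zip(scores, weights))
    return math.sqrt(num / sum(weights))

# Match confidence of one candidate duplicate pair under three
# fingerprinters, weighted by how much each is trusted (hypothetical):
scores = [0.9, 0.7, 0.95]
weights = [2.0, 1.0, 3.0]
combined = wrms(scores, weights)
```

Squaring before averaging makes the combined score reward strong agreement: one confident fingerprinter pulls the result up more than a linear weighted mean would.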
46. Storing and Querying RDF in Mars
Bang, Ole Petter; Fjeldskår, Tormod, January 2009
As part of the Semantic Web movement, the Resource Description Framework (RDF) is gaining momentum as a format for storing data, particularly metadata. The SPARQL Protocol and RDF Query Language is a SQL-like query language, recommended by W3C for querying RDF data. FAST is exploring the possibilities of supporting storage and querying of RDF data in their Mars search engine. To facilitate this, a SPARQL parser has been created for the Microsoft .NET Framework, using the MPLex and MPPG tools from Microsoft's Managed Babel package. This thesis proposes a solution for efficiently storing and retrieving RDF data in Mars, based on decomposition and B+ Tree indexing. Further, a method for transforming SPARQL queries into Mars operator graphs is described. Finally, the implementation of a prototype is discussed. The prototype has been developed in collaboration with FAST and has required customized indexing in Mars. Some deviations from the proposed solution were made in order to create a working prototype within the available time frame. The focus has been on exploring possibilities; performance has thus not been a priority in either indexing or evaluation.
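The decomposition-plus-index idea can be sketched in miniature. Below, sorted in-memory lists stand in for the B+ Tree indexes, and two triple orderings (SPO and POS) allow prefix lookups; this is an illustration of the general technique, not the Mars design, and all identifiers are invented.

```python
import bisect

class TripleStore:
    """Tiny RDF triple store: each triple goes into two sorted indexes,
    standing in for B+ Trees, so (subject, predicate) and
    (predicate, object) prefixes can be resolved by binary search."""
    def __init__(self):
        self.spo = []  # sorted (subject, predicate, object)
        self.pos = []  # sorted (predicate, object, subject)

    def add(self, s, p, o):
        bisect.insort(self.spo, (s, p, o))
        bisect.insort(self.pos, (p, o, s))

    def objects(self, s, p):
        """All objects for a (subject, predicate) prefix."""
        lo = bisect.bisect_left(self.spo, (s, p, ""))
        out = []
        for s2, p2, o in self.spo[lo:]:
            if (s2, p2) != (s, p):
                break
            out.append(o)
        return out

store = TripleStore()
store.add("ex:bang", "ex:wrote", "ex:thesis46")
store.add("ex:bang", "ex:wrote", "ex:article1")
store.add("ex:fjeldskar", "ex:wrote", "ex:thesis46")
```

A SPARQL triple pattern with a bound subject and predicate maps directly onto one such prefix scan, which is what makes the decomposed layout efficient to query.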
47. Feature Selection for Text Categorisation
Garnes, Øystein Løhre, January 2009
Text categorization is the task of discovering the category or class that text documents belong to, in other words spotting the correct topic for text documents. While many machine learning schemes for building automatic classifiers exist today, these are typically resource-demanding and do not always achieve the best results when given the whole contents of the documents. A popular solution to these problems is feature selection. The features (e.g. terms) in a document collection are given weights based on a simple scheme and then ranked by these weights. Next, each document is represented using only the top-ranked features, typically only a few percent of the features. The classifier is then built in considerably less time, and accuracy may even improve. In situations where documents can belong to one of a series of categories, one can either build a multi-class classifier and use one feature set for all categories, or split the problem into a series of binary categorization tasks (deciding whether documents belong to a category or not) and create one ranked feature subset for each category/classifier. Many feature selection metrics have been suggested over the last decades, including supervised methods that make use of a manually pre-categorized set of training documents, and unsupervised methods that need only training documents of the same type or collection as that to be categorized. While many of these look promising, there has been a lack of large-scale comparison experiments, and several methods have been proposed in the last two years. Moreover, most evaluations are conducted on a set of binary tasks instead of a multi-class task, as this often gives better results, although multi-class categorization with a joint feature set is often used in operational environments. In this report, we present results from the comparison of 16 feature selection methods (in addition to random selection) using various feature set sizes.
Of these, 5 were unsupervised and 11 were supervised. All methods were tested on both a Naive Bayes (NB) classifier and a Support Vector Machine (SVM) classifier. We conducted multi-class experiments using a collection with 20 non-overlapping categories, and each feature selection method produced feature sets common to all the categories. We also combined feature selection methods and evaluated their joint efforts. We found that the classical supervised methods performed best, including Chi Square, Information Gain and Mutual Information. The Chi Square variant GSS coefficient was also among the top performers. Odds Ratio showed excellent performance for NB, but not for SVM. The three unsupervised methods Collection Frequency, Collection Frequency Inverse Document Frequency and Term Frequency Document Frequency all showed performance close to the best group. The Bi-Normal Separation metric produced excellent results for the smallest feature subsets. The weirdness factor performed several times better than random selection, but was not among the top performing group. Some combination experiments achieved better results than each method alone, but the majority did not. The top performers Chi Square and GSS coefficient classified more documents when used together than alone. Four of the five combinations that showed increased performance included the BNS metric.
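The best-performing metric, Chi Square, scores term-category association from a 2x2 contingency table. The sketch below uses the standard chi-square formula for such a table; the document counts are invented toy data, not figures from the experiments above.

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square association between a term and a category:
    n11 = docs in the category containing the term,
    n10 = docs outside the category containing the term,
    n01 = docs in the category without the term, n00 = the rest.
    chi2 = N * (n11*n00 - n10*n01)^2 / ((n11+n01)(n10+n00)(n11+n10)(n01+n00))"""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

# Toy counts: a term concentrated in the category should outscore a
# term spread evenly across the collection.
focused = chi_square(n11=40, n10=5, n01=10, n00=45)
spread = chi_square(n11=25, n10=25, n01=25, n00=25)
```

Ranking all terms by this score and keeping the top few percent is exactly the feature selection step the abstract describes; supervised metrics like this one need the category labels to fill in the table.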
48. Semantic Cache Investment: Adaption of Cache Investment for DASCOSA
Beiske, Konrad Giæver; Bjørndalen, Jan, January 2009
Semantic caching and distribution introduce new obstacles to how cache is used in query processing in databases. We have adapted a caching strategy called cache investment to work in a peer-to-peer database with semantic caching. Cache investment is a technique that influences the query optimizer without changing it: it suggests cache candidates based on knowledge about queries executed in the past. These queries are not limited to the local site; cache investment also detects locality by looking at queries processed on remote sites. Our implementation of semantic cache investment for distributed databases shows a great performance improvement, especially when multiple queries are active at the same time. To utilize cache investment, we have looked into how a distributed query optimizer can be extended to use cache content in planning. This allows the query optimizer to detect and include beneficial cache content on remote sites that it would otherwise have ignored. Our implementation of a cache-aware optimizer shows an improvement in performance, but its most important task is to evaluate the cache candidates provided through cache investment.
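The candidate-suggestion step can be sketched as a frequency-based ranking over past queries. This is a minimal illustration of the general idea, not the DASCOSA implementation; the 0.5 remote-site weight and the query names are invented.

```python
from collections import Counter

def cache_candidates(local_queries, remote_queries, top_k=2):
    """Rank query results worth caching locally: count how often each
    query ran here, plus (down-weighted) how often it ran at remote
    sites, since serving a peer still costs a network hop."""
    score = Counter()
    for q in local_queries:
        score[q] += 1.0
    for q in remote_queries:
        score[q] += 0.5  # illustrative remote weight
    return [q for q, _ in score.most_common(top_k)]

# Hypothetical query logs observed at this site and gossiped from peers:
local = ["q_orders", "q_orders", "q_stock"]
remote = ["q_stock", "q_stock", "q_stock", "q_prices"]
candidates = cache_candidates(local, remote)
```

Handing such candidates to the optimizer, rather than caching them unconditionally, is what lets cache investment influence planning without modifying the optimizer itself.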
49. Prototyping a Location Aware Application for UBiT: A map-based application, designed, implemented and evaluated
Olsen, Bjarne Sletten, January 2009
Through the research performed in this thesis, it has been shown how location awareness and maps can be exploited to facilitate the use of library resources, such as information on documents and objects. A prototype has been developed to demonstrate the feasibility of integrating several different information sources for this use. The prototype allows users located within the city centre of Trondheim to search for documents and to locate the library departments holding them. The user is shown a map and given directions to the nearest bus stop, as well as bus schedules for getting to the selected library department. Several information sources for the prototype have been identified and evaluated. The prototype communicates with BIBSYS for document information retrieval, Google Maps for map generation, team-trafikk.no for bus schedule querying, and Amazon.com and LibraryThing.com for book cover image downloading. To ensure data consistency, some local data sources are also maintained, such as a list of all the UBiT (NTNU library) departments in Trondheim. The prototype was implemented to satisfy a set of requirements created by applying the technique of use cases. Each requirement has been discussed and prioritised based on requests from UBiT, and the most important requirements have been incorporated into the design of the prototype. The design focuses on modularity, and it has been discussed how the external sources can best be integrated with the prototype. The prototype is implemented using a combination of programming languages; the differences between these languages have posed a challenge, and solutions to how these can be avoided are presented. The prototype has been tested according to an extensive test plan, and the results of these tests have been documented and evaluated. Each of the design decisions has been evaluated and discussed, and suggestions are given for how they could have been improved.
Finally, suggestions are presented for how the functionality of the prototype can be extended. The prototype created in this thesis allows users, familiar or unfamiliar with the city and its transportation network, to locate a document and travel to the library holding it. It demonstrates how emerging technologies such as location awareness can contribute to increased use of library services.
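The integration pipeline (document search, then department lookup, then travel hint) can be sketched with dictionaries standing in for the external services. All the data below is invented for illustration: the real prototype queries BIBSYS, Google Maps and team-trafikk.no over the network, and the titles, coordinates, stop name and bus lines here are placeholders.

```python
# Hypothetical stand-ins for the external and local data sources:
BIBSYS = {"Pippi Langstrømpe": "Gunnerusbiblioteket"}      # doc -> department
DEPARTMENTS = {"Gunnerusbiblioteket": (63.4297, 10.3880)}  # local UBiT list
BUS_STOPS = {"Gunnerusbiblioteket": ("Prinsens gate", ["5", "22"])}

def plan_visit(title):
    """Resolve a document title to the department holding it, then
    attach the map position and bus information for getting there."""
    dept = BIBSYS.get(title)
    if dept is None:
        return None  # document not found in the catalogue
    lat, lon = DEPARTMENTS[dept]
    stop, lines = BUS_STOPS[dept]
    return {"department": dept, "position": (lat, lon),
            "bus_stop": stop, "bus_lines": lines}

plan = plan_visit("Pippi Langstrømpe")
```

Keeping the department list local, as the abstract notes, means the final join in this pipeline never depends on an external service being reachable.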
50. Similarity Search in Large Databases using Metric Indexing and Standard Database Access Methods
Ottesen, Erik Bagge, January 2009
Several methods exist for performing similarity searches quickly using metric indexing. However, most of these methods are based on main-memory indexing or require specialized disk access methods. We have described and implemented a method combining standard database access methods with LAESA (Linear Approximating and Eliminating Search Algorithm) to perform both range and k-nearest-neighbour (KNN) queries using standard database access methods and relational operators. We have studied and tested various existing implementations of R-trees, and implemented the R*-tree. We also found that some of the optimizations in R*-trees were damaging to the response time at very high dimensionality, mostly because the increased CPU time removed any benefit from reducing the number of disk accesses. Further, we have performed comprehensive experiments using different access methods, join operators, pivot counts and range limits for both range and nearest-neighbour queries. We will also implement and experiment with a multi-threaded execution environment running on several processors.
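The pivot-based pruning at the heart of LAESA can be shown in a few lines. This is a sketch of the core triangle-inequality filter only; the points, pivots and radius are invented, and the real method stores the pivot-distance table in relational tables rather than building it per query.

```python
def laesa_range_search(query, radius, objects, pivots, dist):
    """LAESA-style range search: precomputed object-to-pivot distances
    let the triangle inequality discard objects without ever computing
    dist(query, object) for them."""
    table = {o: [dist(o, p) for p in pivots] for o in objects}
    q_to_p = [dist(query, p) for p in pivots]
    hits = []
    for o, row in table.items():
        # |d(q,p) - d(o,p)| is a lower bound on d(q,o) for every pivot p
        lb = max(abs(qp - op) for qp, op in zip(q_to_p, row))
        if lb > radius:
            continue  # pruned: cannot possibly be within the radius
        if dist(query, o) <= radius:
            hits.append(o)
    return hits

objects = [(0, 0), (1, 1), (5, 5), (9, 9)]
pivots = [(0, 0), (9, 9)]
euclid = lambda a, b: ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
hits = laesa_range_search((1, 0), 2.0, objects, pivots, euclid)
```

Because the filter needs only the stored distance table and simple comparisons, it maps naturally onto standard relational operators, which is the combination the thesis pursues.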