81

Verktøy for evaluering av brukergrensesnitt for bibliografisk informasjon / Tools for evaluation of user interfaces for bibliographic information

Bergheim, Erlend Klakegg January 2010
In this thesis I look at logging in general, and logging on the World Wide Web in particular, in order to build a tool that can log user activities in an ordinary web browser without the use of extra equipment.

First, a small theoretical study of logging, and of logging on the Web in particular, is carried out before the two tools are built. A technology suited to the purpose must also be chosen, one that allows the applications to be configured after completion. The main part of the thesis examines the technical possibilities for logging in a web browser, together with the implementation of the two tools.

The result is a tool that logs user activities in modern browsers without requiring any other aids. A smaller application suited to the target group has also been built so that the tool can be tested in practice.
82

Using Information Extraction and Text Classification in an Effort to Support Systematic Literature Reviews

Lazreg, Sofien January 2012
Systematic literature reviews are an important tool in Evidence-based Software Engineering, but require a large amount of effort and time from the researchers. Data extraction is an important step in these reviews, but current practice requires the researchers to manually extract large amounts of data. This thesis investigates the possibility of developing a prototype for automatic extraction, in order to reduce the time spent on manual extraction. By reviewing related research and experimenting with different features and machine learning models, two models were implemented in the prototype: Conditional Random Fields for information extraction and Maximum Entropy for text classification. The models achieved average F1 scores of 67.02% and 73.82%, respectively. These are good results, and show that it is possible to automate the data extraction process by annotating a small part of the dataset and training machine learning models to perform the extraction.
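As a rough, self-contained illustration of the classification half, the sketch below trains a maximum-entropy classifier (equivalently, multinomial logistic regression) over TF-IDF features with scikit-learn; the toy corpus, labels, and parameters are invented and are not the thesis's actual features or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: abstracts labeled by study type (illustrative only).
train_texts = [
    "we conducted a randomized controlled experiment with students",
    "a controlled experiment compared the two inspection techniques",
    "a case study was performed at a large telecom company",
    "an industrial case study of agile adoption",
    "we surveyed practitioners using an online questionnaire",
    "a web survey was distributed to project managers",
]
train_labels = ["experiment", "experiment", "case study",
                "case study", "survey", "survey"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_texts)

# Multinomial logistic regression is a maximum-entropy model.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train_labels)

test = ["an experiment with professional developers"]
print(clf.predict(vectorizer.transform(test)))  # e.g. ['experiment']
```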
83

Evaluating the use of Learning Algorithms in Categorization of Text

Sørensen, Alf Simen Nygaard January 2012
We have tested this by developing a small prototype that applied several different learning algorithms to the corpora of labeled documents, to see whether the results would be satisfactory. We conclude that while the system would indeed make it easier to classify unlabeled documents, it cannot work fully autonomously, given the relatively small number of documents and the large number of categories in the ontology.
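To make the evaluation setup concrete, here is a minimal sketch of comparing a couple of off-the-shelf learning algorithms on a small labeled corpus, in the spirit of the prototype described above; the documents, labels, and choice of algorithms are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled corpus: two documents per category (illustrative only).
docs = [
    "invoice payment ledger audit accounting",
    "ledger balance accounting audit report",
    "football season goal league referee",
    "league cup goal match referee",
    "protein gene expression cell assay",
    "cell membrane gene biology assay",
]
labels = ["finance", "finance", "sport", "sport", "biology", "biology"]

# Cross-validated accuracy for each learner over the same pipeline.
for model in (MultinomialNB(), LinearSVC()):
    pipeline = make_pipeline(TfidfVectorizer(), model)
    scores = cross_val_score(pipeline, docs, labels, cv=2)
    print(type(model).__name__, scores.mean())
```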
84

Test-Driven Conceptual Modelling : evaluation through a case study

Bernat-Casi, Isaac January 2011
The purpose of this project is to demonstrate the work cycle and feasibility of Test-Driven Conceptual Modelling (TDCM) on a real-sized system. TDCM is a novel methodology for developing conceptual schemas that can be understood as a member of Test-Driven Development (TDD), part of the wider eXtreme Programming (XP) family. Its aim is to iteratively develop conceptual schemas through automated testing at the conceptual level.

To achieve this goal, the project builds on the state of the art in conceptual schema testing, as reported in recent publications, and has been carried out in accordance with the design-science research model to ensure both rigour and relevance. We contribute to TDCM experimentation by applying it to a case study. In particular, this project focuses on the development, by reverse engineering, of the conceptual schema of Remember the Milk (RTM), a popular task-management system. As a complementary goal, some suggestions for improving the RTM system are presented, based on the knowledge gathered during the process. This document collects the results of this experience.

Our findings confirm that the methodology is promising: the validation and high semantic quality of the resulting conceptual schema paid off its relatively small additional development effort.
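TDCM operates on conceptual schemas (typically UML/OCL) rather than application code, but the test-first cycle can be illustrated with an executable analogy: write a test against a toy task model first, then grow the model until the test passes. The Task class below is invented for illustration and is not the thesis's RTM schema.

```python
import unittest
from dataclasses import dataclass, field

@dataclass
class Task:
    """Toy stand-in for a conceptual-schema entity."""
    name: str
    completed: bool = False
    tags: set = field(default_factory=set)

    def complete(self) -> None:
        self.completed = True

class TaskSchemaTest(unittest.TestCase):
    # Tests are written first; the model above is grown until they pass.
    def test_new_task_starts_incomplete(self):
        self.assertFalse(Task("buy milk").completed)

    def test_completing_a_task(self):
        task = Task("buy milk")
        task.complete()
        self.assertTrue(task.completed)

if __name__ == "__main__":
    unittest.main()
```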
85

Viable Open Source for the Consultancy Industry

Klette, Kristian Fredrik January 2012
Open source software is growing in the market and is increasingly preferred to closed software because of the flexibility free software provides. As a result, more and more businesses are trying to enter this market and profit from open source software.

Consultancy agencies targeting the public sector face demand for expertise and products released as open source. As this is a new field for many companies, studies are needed on how to approach these markets with a high chance of success, with regard to business models and the technological benefits that open source software may provide. The problem description raises two research questions:

* Is authoring of open source software a viable business idea for consultancy agencies?
* How should software be released as open source?

This thesis presents two main contributions towards answering the research questions. The first is a set of guidelines and techniques for estimating the business viability of an open source software venture. The second is a set of best practices for authoring and releasing open source software, derived from observing the successful projects that already exist.

In addition to these theoretical parts of the thesis, a system for analyzing and generating XSLT transformations for OpenFEIDE is presented.
86

Finding an Optimal Approach for Indexing DNA Sequences

Brujordet, Anders January 2010
In this thesis, the task is to find an optimal approach to indexing and retrieving DNA sequences. As part of the task, an algorithm should be developed that is fast and accurate enough to find relevant sequences. The result is evaluated based on speed, scalability, and search efficiency (e.g. precision and recall). The approach is implemented in a Java-based prototype serving as a proof of concept.
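The thesis prototype is Java-based; purely as an illustration of one common approach to indexing DNA for retrieval, the Python sketch below builds an inverted index over fixed-length k-mers and ranks sequences by the number of query k-mers they share. The value of k and the toy sequences are assumptions.

```python
from collections import defaultdict

def build_kmer_index(sequences, k=4):
    """Map each k-mer to the set of sequence ids containing it."""
    index = defaultdict(set)
    for seq_id, seq in sequences.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(seq_id)
    return index

def search(index, query, k=4):
    """Rank sequences by the number of query k-mers they contain."""
    hits = defaultdict(int)
    for i in range(len(query) - k + 1):
        for seq_id in index.get(query[i:i + k], ()):
            hits[seq_id] += 1
    return sorted(hits.items(), key=lambda item: -item[1])

sequences = {"s1": "ACGTACGTGACG", "s2": "TTGACGTTACGA"}
index = build_kmer_index(sequences)
print(search(index, "ACGTAC"))  # [('s1', 3), ('s2', 1)]
```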
87

Ranking Mechanisms for Image Retrieval based on Coordinates, Perspective, and Area

Skjønsberg, Sindre January 2010
Image retrieval is becoming increasingly relevant as image collections, and the number of image types, grow. One of these types is aerial photography, unique in that it can be represented by its spatial content and, through this, be combined with digital maps. Finding good ways of describing this image type with regard to performing queries and ranking results is therefore an important task, and is what this study is about.

Existing systems already combine maps and imagery, but do not take the spatial features found within each image into consideration. Instead, more traditional external metadata, e.g. file name, author, and date, are used when performing retrieval operations on the objects involved.

A set of requirements for an image retrieval system for aerial photography using spatial features was suggested. It describes the image and query types one can expect such a system to handle, and how the information found within these could be represented. A prototype was developed based on these requirements, evaluating the performance of single-coordinate queries and a relevance calculation using the coverage, perspective, and areas of interest found in each picture.

The prototype evaluation shows that the different characteristics found in aerial photography make it very difficult to represent and rank all these images in the same way. In particular, images taken horizontally, i.e. where the horizon is showing, have different properties from images looking straight down on an area. The evaluation also shows problems related to manual registration of spatial features for images covering large areas, where inaccuracies can have a damaging effect on ranking.

Suggestions for future work with spatial image retrieval are given, proposing alternatives to the spatial features used in the prototype, improvements for calculating relevance, and technologies that might help the feature extraction process.
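One plausible reading of the single-coordinate relevance calculation, sketched under invented assumptions: score each photo by how close the query point lies to the photo's centre, within its coverage footprint. The distance model, the footprint-as-radius simplification, and the data are illustrative, not the prototype's actual formula.

```python
import math

def relevance(query, centre, coverage_radius):
    """query and centre are (x, y) points; coverage_radius approximates
    the photo footprint. Returns a score in [0, 1], higher is better."""
    distance = math.hypot(query[0] - centre[0], query[1] - centre[1])
    if distance > coverage_radius:
        return 0.0  # query point falls outside the photo footprint
    return 1.0 - distance / (coverage_radius + 1.0)

# Centre coordinates and footprint radii are made up for the example.
photos = {"img1": ((10.4, 63.4), 5.0), "img2": ((10.6, 63.0), 2.0)}
query = (10.5, 63.3)
ranked = sorted(photos, key=lambda p: relevance(query, *photos[p]),
                reverse=True)
print(ranked)  # ['img1', 'img2']
```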
88

Searching for, and identifying Protein Information in the Literature

Klæboe, Espen January 2010
As research papers grow in volume and quantity, it remains a hassle to locate desired articles based on specific protein names and/or protein-protein interactions. This is due to the persistent problem of extracting protein names and protein-protein interactions from biomedical papers and articles. The goal of this thesis was to investigate an approach that uses the Lucene framework for storing and indexing articles found in biomedical databases, and for efficiently identifying the protein names and possible interactions that occur in them. The system, dubbed MasterPPI, locates protein names and interaction keywords with the help of two dictionaries; once these are found and labeled, it reports a protein-protein interaction when a specific interaction keyword is present in a sentence between two protein names. When tested against the test collection from the IAS subtask of the BioCreAtIvE2 challenge, the prototype achieved an F-score of 0.34, showing that the system has potential but needs a great deal of further work.
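The sentence-level rule the abstract describes can be sketched directly: match tokens against a protein dictionary and report an interaction when an interaction keyword occurs between two protein mentions. The dictionaries and the example sentence below are illustrative placeholders, not the MasterPPI dictionaries.

```python
PROTEINS = {"p53", "mdm2", "brca1"}                     # toy dictionary
INTERACTION_WORDS = {"binds", "inhibits", "activates"}  # toy dictionary

def find_interactions(sentence):
    """Return (protein, protein) pairs separated by an interaction word."""
    tokens = sentence.lower().replace(".", "").split()
    mentions = [(i, t) for i, t in enumerate(tokens) if t in PROTEINS]
    interactions = []
    for (i, a), (j, b) in zip(mentions, mentions[1:]):
        if any(w in INTERACTION_WORDS for w in tokens[i + 1:j]):
            interactions.append((a, b))
    return interactions

print(find_interactions("MDM2 binds p53 and inhibits its activity."))
# [('mdm2', 'p53')]
```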
89

Improving Performance of Biomedical Information Retrieval using Document-Level Field Boosting and BM25F Weighting

Jervidalo, Jørgen January 2010
Corpora of biomedical information typically contain large amounts of ambiguous data, as proteins and genes can be referred to by a number of different terms, making information retrieval difficult. This thesis investigates a number of methods that attempt to increase the precision and recall of searches within the biomedical domain, including using the BM25F model for scoring documents and Named Entity Recognition (NER) to identify biomedical entities in the text. We have implemented a prototype for testing the approaches, and found that a combination of several methods, including using three different NER models at once, yields a significant increase (up to 11.5%) in mean average precision (MAP) over our baseline result.
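For reference, a compact sketch of BM25F scoring over two fields (title and body): per-field term frequencies are length-normalised and weighted before the usual BM25 saturation and IDF are applied. The field weights, b, k1, and toy corpus are illustrative defaults, not values tuned in the thesis.

```python
import math

K1 = 1.2
FIELD_WEIGHTS = {"title": 2.0, "body": 1.0}  # boost title matches
FIELD_B = {"title": 0.5, "body": 0.75}       # per-field length normalisation

def avg_field_lengths(corpus):
    return {f: sum(len(d[f].split()) for d in corpus) / len(corpus)
            for f in FIELD_WEIGHTS}

def bm25f(query, doc, corpus, avg_len):
    n, score = len(corpus), 0.0
    for term in query.split():
        tf = 0.0  # field-weighted, length-normalised term frequency
        for f in FIELD_WEIGHTS:
            tokens = doc[f].split()
            norm = 1.0 + FIELD_B[f] * (len(tokens) / avg_len[f] - 1.0)
            tf += FIELD_WEIGHTS[f] * tokens.count(term) / norm
        df = sum(1 for d in corpus
                 if term in (d["title"] + " " + d["body"]).split())
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
        score += tf / (K1 + tf) * idf
    return score

corpus = [
    {"title": "p53 signalling", "body": "the p53 protein regulates apoptosis"},
    {"title": "gene expression", "body": "expression profiling of tumour cells"},
]
avg_len = avg_field_lengths(corpus)
for doc in corpus:
    print(doc["title"], round(bm25f("p53 apoptosis", doc, corpus, avg_len), 3))
```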
90

Environment re-creation methods for virtual heritage using a game engine with discernment of visual learning cues

Svånå, David January 2010
This thesis presents an analysis of visual cues and environmental hints gathered from computer games and cinematic theory. These cues can help users of interactive virtual worlds navigate and understand them in a comprehensive context, in an integrated manner, and can be applied to most interactive virtual environments. They are also viewed from the perspective of virtual heritage: reconstructions of historical locations.

There is currently little research documenting such cues. Here, the sampled cues are split into visual, environmental, and interface categories. The techniques are analyzed both from a general standpoint and for potential use in virtual heritage. Most of the analyses indicate that the cues could be very useful in virtual heritage or similar applications.

One such application, a high-fidelity re-creation of the medieval city of Nidaros, is made using the Unreal Engine 3 graphics and game engine. Construction of the environment mimics the needs of a comprehensive virtual heritage project and provides an easily extensible test case. Many technical aspects of the construction are described in detail.

Selected cues and design techniques are successfully applied to the re-created interactive environment; users of the program are able to walk freely in the city. A discussion of the results is provided, and many ideas for further expansion are suggested.

The results suggest that the presented combination of techniques constitutes a new and promising perspective for any type of virtual environment. The use of a game engine could also help cut production costs and provide a fully interactive, high-quality learning experience.
