41

Biomedical Information Retrieval based on Document-Level Term Boosting

Johannsson, Dagur Valberg January 2009 (has links)
Information retrieval on biomedical information poses several problems, and the common methods for information retrieval tend to fall short when searching in this domain. With the ever-increasing amount of information available, researchers widely agree that the ability to retrieve needed information precisely is vital for making use of all available knowledge. In an effort to increase the precision of retrieval within biomedical information, we have created an approach that gives every term in a document a context weight based on the context's domain-specific data. We include these context weights in document ranking by combining them with existing ranking models. Combining context weights with existing models gives us document-level term boosting, where the context of the queried terms within a document positively or negatively affects the document's ranking score. We have tested the approach by implementing a full search engine prototype and evaluating it on a document collection within the biomedical domain. Our work shows that this type of score boosting has little effect on overall retrieval precision. We conclude that the approach, as implemented in our prototype, is not necessarily a good means of increasing precision in biomedical retrieval systems.
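A minimal sketch of the kind of document-level term boosting described above, assuming a BM25-style base score and per-document context weights in [-1, 1]; the function names and the linear boosting formula are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of document-level term boosting: a per-document context weight for each
# query term nudges the base ranking score up or down. The base score and the
# linear boosting formula are illustrative assumptions, not the thesis's method.

def boosted_score(base_score: float,
                  query_terms: list[str],
                  context_weights: dict[str, float],
                  boost_factor: float = 0.2) -> float:
    """Adjust a base ranking score (e.g. BM25) with context weights in [-1, 1]."""
    if not query_terms:
        return base_score
    # Average the context weights of the queried terms for this document.
    avg_weight = sum(context_weights.get(t, 0.0) for t in query_terms) / len(query_terms)
    # Positive context weights boost the score, negative weights penalize it.
    return base_score * (1.0 + boost_factor * avg_weight)

# Usage: a document whose terms occur in a query-relevant context gets a higher score.
print(boosted_score(12.5, ["protein", "kinase"], {"protein": 0.8, "kinase": -0.1}))
```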
42

Personlige samlinger i distribuerte - digitale bibliotek / Personal Collections in Distributed, Digital Libraries

Joki, Sverre Magnus Elvenes January 2004 (has links)
No description available.
43

Integrasjon og bruk av gazetteers og tesauri i digitale bibliotek. Søk og gjenfinning via geografisk referert informasjon / Integration and Use of Gazetteers and Thesauri in Digital Libraries: Search and Retrieval via Geographically Referenced Information

Olsen, Marit January 2004 (has links)
-
44

Classification of Images using Color, CBIR Distance Measures and Genetic Programming : An Evolutionary Experiment

Edvardsen, Stian January 2006 (has links)
In this thesis a novel approach to image classification is presented. The thesis explores the use of color feature vectors and CBIR retrieval methods in combination with Genetic Programming to build a classification system that learns classes from training sets and determines whether an image belongs to a specific class. A test bench has been built, with methods for extracting color features, both segmented and whole, from images. CBIR distance algorithms have been implemented; the algorithms used are histogram Euclidean distance, histogram intersection distance and histogram quadratic distance. The genetic program consists of a function set for adjusting weights that correspond to the extracted feature vectors. Fitness of the individual genomes is measured using the CBIR distance algorithms, seeking to minimize the distance between the individual images in the training set. A classification routine is proposed that uses the feature vectors of the image in question and the weights generated by the genetic program to determine whether the image belongs to the trained class. A test set of images is used to determine the accuracy of the method. The results show that it is possible to classify images using this method, but that further work is required before it produces good results.
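As a rough illustration of two of the distance measures named above, the sketch below computes the histogram Euclidean distance and a histogram-intersection-based distance between two color histograms; the normalization choices and function names are assumptions for illustration, not the thesis's test bench.

```python
# Illustrative color-histogram distance measures of the kind used in CBIR.
# The normalization choices are assumptions, not the thesis's exact formulation.
import math

def euclidean_distance(h1: list[float], h2: list[float]) -> float:
    """Histogram Euclidean distance: plain L2 distance between bin values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def intersection_distance(h1: list[float], h2: list[float]) -> float:
    """Histogram intersection turned into a distance: 0 for identical histograms."""
    overlap = sum(min(a, b) for a, b in zip(h1, h2))
    return 1.0 - overlap / max(sum(h2), 1e-9)

# Usage with two toy 4-bin color histograms.
a, b = [0.4, 0.3, 0.2, 0.1], [0.1, 0.3, 0.4, 0.2]
print(euclidean_distance(a, b), intersection_distance(a, b))
```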
45

Supporting SAM: Infrastructure Development for Scalability Assessment of J2EE Systems

Bostad, Geir January 2006 (has links)
The subject of this master thesis is the exploration of scalability for large enterprise systems. The Scalability Assessment Method (SAM) is used to analyse the scalability properties of an Internet banking application built on the J2EE architecture. The report first explains the underlying concepts of SAM. A practical case study is then presented which walks through the stages of applying the method. The focus is to identify, and where possible supply, the infrastructure necessary to support SAM. The practical results include a script toolbox that automates the measurement process and some investigation of key scalability issues. A further contribution is the detailed guidance contained in the report itself on how to apply the method. Finally, conclusions are drawn with respect to the feasibility of SAM in the context of the case study, and more broadly for similar applications.
46

Identifying Duplicates : Disambiguating Bibsys

Myrhaug, Kristian January 2007 (has links)
The digital information age has brought with it the information seekers. These seekers, who are ordinary people, are one step ahead of many libraries and expect all information to be retrievable by posting a query and/or by browsing information related to their information needs. Disambiguating creators of publications (identifying and managing ambiguous entries) makes browsing information related to a specified creator feasible. This thesis proposes a framework, named iDup, for disambiguation of bibliographic information, and evaluates the original edit distance and a specially designed time-frame measure for comparing entries in a collection of BIBSYS-MARC records. The strengths of the time-frame measure and the edit distance are both shown, as is the weakness of the edit distance.
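A minimal sketch of the edit-distance comparison mentioned above, assuming plain Levenshtein distance over creator-name strings; the time-frame measure and the iDup framework itself are not reproduced here.

```python
# Levenshtein edit distance between two creator-name strings, of the kind used to
# compare bibliographic entries. This is the textbook dynamic-programming algorithm,
# not the iDup framework's actual implementation.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Usage: small distances suggest two records may refer to the same creator.
print(edit_distance("Myrhaug, Kristian", "Myrhaug, K."))
```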
47

Knowledge Transfer Between Projects

Høisæter, Anne-Lise Anastasiadou January 2008 (has links)
The practice of knowledge management in organizations is an issue that has received increasing attention during the last 20 years. This focus on knowledge management has also reached the public sector in Norway. Since 2001 the Directorate of Taxes has shown an interest in adopting methods and technologies to improve the management of knowledge, especially through the use of technology. This thesis aims to evaluate the current transfer of knowledge between projects in the Directorate of Taxes' IT and service partner. The thesis also suggests and evaluates an approach for knowledge transfer based on two tools, the post mortem analysis and the wiki. I wish to show how this approach, based on one technical tool and one non-technical, covers all stages of the knowledge transfer process and helps the organization create and retain its knowledge. To examine the current situation of knowledge transfer in the Directorate of Taxes and to evaluate the suggested approach, data was collected in six different stages. In spring 2007 I observed a meeting of project managers, which provided me with information on how knowledge transfer is done at the managerial level. Documents used in project work were studied throughout the fall of 2007 to learn more about what project work consists of and what routines surround it. In late fall 2007 I conducted eight interviews with employees at the Directorate of Taxes. I enquired about the use of the documents and meetings, and about other routines and practices concerning knowledge transfer. I also asked the employees what they expected and desired from a potential new approach to knowledge transfer and what they thought of using the two tools that constitute my approach. In spring 2008 I observed the execution of a post mortem analysis and interviewed the participants afterwards. This gave me new insight into how the tool works and how the employees of the organization respond to it. I studied documents containing previous research on organizational learning at the Directorate of Taxes, and gained insight into the organization from the perspective of others. I also used the findings from this research to evaluate the suitability of the two tools. I learnt that the project members at the Directorate of Taxes chiefly transfer knowledge directly through people by a so-called open-door policy, where people are encouraged to seek and give help when they need it, face to face. There are some problems with this method, including that it can be hard to find the right people and that it is open to constant interruptions. At the managerial level, sporadic meetings are conducted where knowledge is transferred, but problems with this method include low attendance and that the knowledge shared is not optimal. The third approach to knowledge transfer reported is the use of documents and templates. The Directorate of Taxes spends time and resources trying to transfer knowledge through the documents, but there are no routines around their use. The two interview sessions and the execution of the post mortem analysis show promising results for the suggested approach. The interviewees and participants of the post mortem analysis were very positive toward adopting the method. There are, however, some employees who are skeptical of the suitability of the post mortem analysis and of using an electronic system for knowledge transfer. The organization has to make sure it has its employees on board when putting these methods into use if they are to succeed.
48

Full-Text Search in XML Databases

Skoglund, Robin January 2009 (has links)
The Extensible Markup Language (XML) has become an increasingly popular format for representing and exchanging data. Its flexible and extensible syntax makes it suitable for representing both structured data and textual information, or a mixture of both. The popularization of XML has led to the development of a new database type. XML databases serve as repositories of large collections of XML documents, and seek to provide the same benefits for XML data as relational databases provide for relational data: indexing, transactional processing, failsafe physical storage, querying of collections, and so on. There are two standardized query languages for XML, XQuery and XPath, which are both powerful for querying and navigating the structure of XML. However, they offer limited support for full-text search, and cannot be used alone for typical Information Retrieval (IR) applications. To address IR-related issues in XML, a new standard is emerging as an extension to XPath and XQuery: XQuery and XPath Full Text 1.0 (XQFT). XQFT is carefully investigated to determine how well-known IR techniques apply to XML, and the characteristics of full-text search and indexing in existing XML databases are described in a state-of-the-art study. Based on findings from literature and source code review, the design and implementation of XQFT is discussed, first in general terms, then in the context of Oracle Berkeley DB XML (BDB XML). Experimental support for XQFT is enabled in BDB XML, and a few experiments are conducted in order to evaluate functional aspects of the XQFT implementation. A scheme for full-text indexing in BDB XML is proposed. The full-text index acts as an augmented version of an inverted list, and is implemented on top of an Oracle Berkeley DB database. Tokens are used as keys, with data tuples for each distinct (document, path) combination the token occurs in. Lookups in the index are based on keywords, and should allow answering various queries without materializing data. The investigation shows that XML-based IR with XQFT is not fundamentally different from traditional text-based IR. Full-text queries rely on linguistic tokens, which, in XQFT, are derived from nodes without considering the XML structure. Further, it is discovered that full-text indexing is crucial for query efficiency in large document collections. In summary, common issues with full-text search are present in XML-based IR, and are addressed in the same manner as in text-based IR.
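As a rough sketch of the proposed index structure, the snippet below builds an in-memory inverted list keyed by token, with an entry for each distinct (document, path) combination the token occurs in; the actual index is implemented on top of an Oracle Berkeley DB database and performs XQFT tokenization, which this toy version only approximates with whitespace splitting.

```python
# Toy inverted list for XML full-text indexing: token -> set of (document, path)
# combinations in which the token occurs. The real index described in the thesis
# lives in an Oracle Berkeley DB database; this in-memory dict is only a sketch.
from collections import defaultdict

index: dict[str, set[tuple[str, str]]] = defaultdict(set)

def add_node(document: str, path: str, text: str) -> None:
    """Tokenize a node's text naively and record each token's (document, path)."""
    for token in text.lower().split():
        index[token].add((document, path))

def lookup(keyword: str) -> set[tuple[str, str]]:
    """Keyword lookup that answers without materializing the documents themselves."""
    return index.get(keyword.lower(), set())

# Usage with two toy documents.
add_node("thesis.xml", "/thesis/abstract", "full-text search in XML databases")
add_node("paper.xml", "/paper/title", "XML indexing")
print(lookup("xml"))
```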
49

Emnekart basert på standarder / Topic Maps based on standards

Brodshaug, Marit January 2005 (has links)
Topic maps are a tool for navigating among resources, but impose no particular structure on how information should be organized. The thesis therefore examines various formats that have been used for structuring information, to see whether these can help when implementing topic maps. Many resources also already have metadata attached to them, are structured according to standardized models, or are classified by given systems, and it may then be of interest to keep making use of this information and structuring even when adopting a standard such as topic maps. To investigate this, Dublin Core, Dewey and the FRBR model have been implemented in the topic map standard. The thesis also looks at the merging of different topic maps, and at whether a standardization of topic maps can improve merging. Merging has been tested both with two topic maps based on the same structure and with merging across structures.
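A minimal sketch of topic map merging of the kind tested above, assuming the usual rule that topics sharing a subject identifier are merged; the dictionary representation and the example identifiers are illustrative assumptions, not the thesis's implementation.

```python
# Toy topic map merging: topics that share a subject identifier are merged into one,
# combining their names and occurrences. The representation and the example PSIs are
# illustrative assumptions, not the thesis's actual topic map implementation.

def merge_topic_maps(tm1: dict[str, dict], tm2: dict[str, dict]) -> dict[str, dict]:
    """Merge two topic maps keyed by subject identifier (e.g. a PSI URI)."""
    merged = {sid: {"names": set(t["names"]), "occurrences": set(t["occurrences"])}
              for sid, t in tm1.items()}
    for sid, topic in tm2.items():
        target = merged.setdefault(sid, {"names": set(), "occurrences": set()})
        target["names"] |= set(topic["names"])
        target["occurrences"] |= set(topic["occurrences"])
    return merged

# Usage: the same subject described in a Dublin Core-based and a Dewey-based map.
dc_map = {"http://example.org/psi/ibsen": {"names": {"Henrik Ibsen"},
                                           "occurrences": {"dc:creator"}}}
dewey_map = {"http://example.org/psi/ibsen": {"names": {"Ibsen, Henrik"},
                                              "occurrences": {"ddc:839.822"}}}
print(merge_topic_maps(dc_map, dewey_map))
```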
50

Presentasjon av avanserte lenkestrukturer : Xlink i Webbaserte dokumente / A Presentation on Sophisticated Linking

Ballo, Tor Åge January 2006 (has links)
Advanced link structures offer linking possibilities that are more complex than linking in HTML. The XLink linking model introduces functionality that, among other things, makes it possible to describe relationships between more than two resources, to attach rich metadata to links, and to create third-party links. The thesis examines various solutions for implementing advanced link structures so that they are supported by current web browsers. It considers which XLink functionality can be integrated on the Web with existing technology, and how such links should be presented in user interfaces. This is tested through an application intended to explore the possibilities of using XLink on the Web. The application is implemented in a prototype of BIBSYS that uses the FRBR model.
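As a hedged illustration of the XLink functionality described above, the sketch below uses Python's ElementTree to build an extended link that relates more than two resources and defines a third-party arc between them; the element names, labels and URLs are invented for the example and are not taken from the thesis's BIBSYS prototype.

```python
# Sketch of an XLink extended link: it relates more than two resources and defines a
# third-party arc between two of them. Element names and hrefs are invented examples.
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("xlink", XLINK)

link = ET.Element("relatedWorks", {f"{{{XLINK}}}type": "extended"})
for label, href in [("work", "http://example.org/works/peer-gynt"),
                    ("author", "http://example.org/persons/ibsen"),
                    ("review", "http://example.org/reviews/123")]:
    ET.SubElement(link, "resource", {f"{{{XLINK}}}type": "locator",
                                     f"{{{XLINK}}}href": href,
                                     f"{{{XLINK}}}label": label})
# A third-party arc: the link between author and work lives outside both resources.
ET.SubElement(link, "go", {f"{{{XLINK}}}type": "arc",
                           f"{{{XLINK}}}from": "author",
                           f"{{{XLINK}}}to": "work"})

print(ET.tostring(link, encoding="unicode"))
```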
