21 |
Personal collections in distributed digital libraries. Joki, Sverre Magnus Elvenes January 2004 (has links)
No description available.
|
22 |
Integration and use of gazetteers and thesauri in digital libraries: Search and retrieval via geographically referenced information. Olsen, Marit January 2004 (has links)
No description available.
|
23 |
Classification of Images using Color, CBIR Distance Measures and Genetic Programming : An Evolutionary Experiment. Edvardsen, Stian January 2006 (has links)
In this thesis a novel approach to image classification is presented. It explores the use of color feature vectors and CBIR retrieval methods in combination with Genetic Programming to build a classification system that learns classes from training sets and determines whether an image belongs to a specific class. A test bench has been built, with methods for extracting color features from images, both segmented and whole. Three CBIR distance algorithms have been implemented: histogram Euclidean distance, histogram intersection distance and histogram quadratic distance. The genetic program consists of a function set for adjusting weights that correspond to the extracted feature vectors. Fitness of the individual genomes is measured using the CBIR distance algorithms, seeking to minimize the distance between the individual images in the training set. A classification routine is proposed that uses the feature vectors of the image in question, together with the weights generated by the genetic program, to determine whether the image belongs to the trained class. A test set of images is used to determine the accuracy of the method. The results show that it is possible to classify images with this method, but that further work is required before it can deliver good results.
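The three histogram distances named above are standard CBIR measures; a minimal Python sketch follows, assuming normalized color histograms as numpy vectors. The bin-similarity matrix A for the quadratic distance and the weighted variant (showing where GP-evolved weights would plug in) are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def euclidean_distance(h1, h2):
    # L2 distance between two color histograms
    return np.sqrt(np.sum((h1 - h2) ** 2))

def intersection_distance(h1, h2):
    # Histogram intersection similarity, turned into a distance
    return 1.0 - np.sum(np.minimum(h1, h2)) / min(h1.sum(), h2.sum())

def quadratic_distance(h1, h2, A):
    # Cross-bin distance d = sqrt((h1-h2)^T A (h1-h2)), where A[i, j]
    # encodes the perceptual similarity of color bins i and j
    d = h1 - h2
    return np.sqrt(d @ A @ d)

def weighted_euclidean(h1, h2, w):
    # One way GP-evolved weights could scale each feature dimension
    return np.sqrt(np.sum(w * (h1 - h2) ** 2))
```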
|
24 |
Supporting SAM: Infrastructure Development for Scalability Assessment of J2EE Systems. Bostad, Geir January 2006 (has links)
This master's thesis explores the scalability of large enterprise systems. The Scalability Assessment Method (SAM) is used to analyse the scalability properties of an Internet banking application built on the J2EE architecture. The report first explains the underlying concepts of SAM. A practical case study is then presented which walks through the stages of applying the method. The focus is on discovering, and where possible supplying, the infrastructure necessary to support SAM. The practical results include a script toolbox to automate the measurement process and some investigation of key scalability issues. A further contribution is the detailed guidance contained in the report itself on how to apply the method. Finally, conclusions are drawn with respect to the feasibility of SAM in the context of the case study, and more broadly for similar applications.
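As a rough illustration of what such measurement automation involves, here is a hedged Python sketch that drives an HTTP endpoint at increasing client counts and records mean response time, the raw data a scalability assessment needs. The URL and load levels are hypothetical; the thesis's actual toolbox is not described in this abstract.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/bank/login"  # hypothetical J2EE endpoint

def one_request(_):
    # Time a single round trip to the application under test
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()
    return time.perf_counter() - start

def measure(clients, requests_per_client=10):
    # Issue requests from `clients` concurrent workers, return mean latency
    with ThreadPoolExecutor(max_workers=clients) as pool:
        times = list(pool.map(one_request, range(clients * requests_per_client)))
    return sum(times) / len(times)

if __name__ == "__main__":
    for clients in (1, 5, 10, 25, 50):
        print(f"{clients:3d} clients: mean response {measure(clients):.3f} s")
```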
|
25 |
Identifying Duplicates : Disambiguating Bibsys. Myrhaug, Kristian January 2007 (has links)
The digital information age has brought with it the information seekers. These seekers, ordinary people, are one step ahead of many libraries, and expect all information to be retrievable by posing a query and/or by browsing information related to their information needs. Disambiguating (identifying and managing ambiguous entries for) the creators of publications makes browsing information related to a specific creator feasible. This thesis proposes a framework, named iDup, for disambiguation of bibliographic information, and evaluates the classic edit distance and a specially designed time-frame measure for comparing entries in a collection of BIBSYS-MARC records. The strengths of both the time-frame measure and the edit distance are shown, as is the weakness of the edit distance.
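For concreteness, a minimal sketch of the two kinds of measures follows: the standard dynamic-programming edit distance for comparing name strings, and one plausible reading of a time-frame measure (whether two records' publication years fit inside a single career span). Both the interpretation and the max_gap threshold are assumptions, not the thesis's exact definitions.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def timeframes_compatible(years_a, years_b, max_gap=40):
    # Two creator records are unlikely to be the same person if their
    # combined publication years span more than a plausible career length
    span = max(years_a + years_b) - min(years_a + years_b)
    return span <= max_gap

print(edit_distance("Olsen, Marit", "Olsen, Marti"))       # -> 2
print(timeframes_compatible([1965, 1971], [2003, 2007]))   # -> False
```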
|
26 |
Knowledge Transfer Between Projects. Høisæter, Anne-Lise Anastasiadou January 2008 (has links)
The practice of knowledge management in organizations has received increasing attention during the last 20 years. This focus has also reached the public sector in Norway. Since 2001 the Directorate of Taxes has shown an interest in adopting methods and technologies to improve the management of knowledge, especially through the use of technology. This thesis evaluates the current transfer of knowledge between projects in the Directorate of Taxes' IT and service partner. It also suggests and evaluates an approach for knowledge transfer based on two tools: the post mortem analysis and the wiki. I wish to show how this approach, based on one technical tool and one non-technical, covers all stages of the knowledge transfer process and helps the organization create and retain its knowledge.

To examine the current situation of knowledge transfer in the Directorate of Taxes and to evaluate the suggested approach, data was collected in six stages. In spring 2007 I observed a meeting of project managers, which provided information on how knowledge transfer is done at the managerial level. Documents used in project work were studied throughout the fall of 2007 to learn what project work consists of and what routines surround it. In late fall 2007 I conducted eight interviews with employees at the Directorate of Taxes, asking about the use of the documents and meetings, about other routines and practices concerning knowledge transfer, and about what they expected and desired from a potential new approach to knowledge transfer, including the two tools that constitute my approach. In spring 2008 I observed the execution of a post mortem analysis and interviewed the participants afterwards, which gave new insight into how the tool works and how the employees respond to it. I also studied documents containing previous research on organizational learning at the Directorate of Taxes, gaining an outside perspective on the organization, and used those findings to evaluate the suitability of the two tools.

I learnt that project members at the Directorate of Taxes chiefly transfer knowledge directly, person to person, through a so-called open-door policy, where people are encouraged to seek and give help face-to-face when they need it. This method has problems: it can be hard to find the right people, and it is open to constant interruptions. At the managerial level, sporadic meetings are held where knowledge is transferred, but attendance is low and the knowledge shared is not optimal. The third reported means of knowledge transfer is the use of documents and templates: the Directorate of Taxes spends time and resources trying to transfer knowledge through documents, but there are no routines around their use.

The two interview sessions and the execution of the post mortem analysis show promising results for the suggested approach. The interviewees and participants were very positive toward adopting the method, although some employees are skeptical about the suitability of the post mortem analysis and about using an electronic system for knowledge transfer. The organization has to make sure it has its employees on board when putting these methods into use if they are to be successful.
|
27 |
Full-Text Search in XML Databases. Skoglund, Robin January 2009 (has links)
The Extensible Markup Language (XML) has become an increasingly popular format for representing and exchanging data. Its flexible and extensible syntax makes it suitable for representing structured data, textual information, or a mixture of both. The popularization of XML has led to the development of a new database type: XML databases serve as repositories of large collections of XML documents, and seek to provide the same benefits for XML data as relational databases do for relational data; indexing, transactional processing, failsafe physical storage, querying collections, etc. There are two standardized query languages for XML, XQuery and XPath, which are both powerful for querying and navigating the structure of XML. However, they offer limited support for full-text search, and cannot be used alone for typical Information Retrieval (IR) applications. To address IR-related issues in XML, a new standard is emerging as an extension to XPath and XQuery: XQuery and XPath Full Text 1.0 (XQFT).

XQFT is carefully investigated to determine how well-known IR techniques apply to XML, and the characteristics of full-text search and indexing in existing XML databases are described in a state-of-the-art study. Based on findings from literature and source code review, the design and implementation of XQFT is discussed; first in general terms, then in the context of Oracle Berkeley DB XML (BDB XML). Experimental support for XQFT is enabled in BDB XML, and a few experiments are conducted in order to evaluate functional aspects of the XQFT implementation.

A scheme for full-text indexing in BDB XML is proposed. The full-text index acts as an augmented inverted list, implemented on top of an Oracle Berkeley DB database. Tokens are used as keys, with data tuples for each distinct (document, path) combination the token occurs in. Lookups in the index are based on keywords, and should allow answering various queries without materializing data.

Investigation shows that XML-based IR with XQFT is not fundamentally different from traditional text-based IR. Full-text queries rely on linguistic tokens, which, in XQFT, are derived from nodes without considering the XML structure. Further, it is found that full-text indexing is crucial for query efficiency in large document collections. In summary, common issues with full-text search are present in XML-based IR, and are addressed in the same manner as in text-based IR.
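A minimal in-memory sketch of the proposed index layout follows: tokens as keys, one posting per distinct (document, path) pair the token occurs in. The real design sits on top of an Oracle Berkeley DB database; a Python dict stands in for it here, and the whitespace tokenizer is a simplifying assumption.

```python
from collections import defaultdict

index = defaultdict(set)  # token -> {(document, path), ...}

def add_text_node(document, path, text):
    # Tokenize the node's text without considering XML structure,
    # mirroring how XQFT derives linguistic tokens from nodes
    for token in text.lower().split():
        index[token].add((document, path))

def keyword_lookup(token):
    # Answer a keyword query from the index without materializing data
    return index.get(token.lower(), set())

add_text_node("thesis.xml", "/thesis/abstract", "full-text search in XML")
print(keyword_lookup("search"))  # {('thesis.xml', '/thesis/abstract')}
```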
|
28 |
Finding and Mapping Expertise Automatically Using Corporate Data. Vennesland, Audun January 2007 (has links)
In an organization, the management as well as new and experienced employees often need to get in touch with experts in a variety of situations. New staff members need to learn how to perform their jobs, the management needs, amongst other things, to staff projects and fill vacancies, and other employees often depend on others' expertise to accomplish their tasks. Traditionally this problem has been approached with computer applications using semi-automatic methods involving self-assessments of expertise stored in databases. These methods prove to be time-consuming, they do not consider the dynamics of expertise, and the self-assessed expertise is often difficult to validate. This report presents an overview of the issues involved in expertise finding, and the development of a simple yet effective prototype which tries to overcome these problems with a fully automatic approach. A study of the Urban Development area at the Municipality of Trondheim is carried out to analyze the organization's possessed and sought-after expertise, and to collect the information needed to build the expertise finder prototype. The study found that much expertise evidence resides in the formal correspondence archived in the case handling system's document repository, and that the structure and content of these documents could suit a fully automatic expertise finder well. Four alternative test cases have been evaluated during the testing and evaluation of the prototype. One of these test cases, where expert profiles are modelled on-the-fly based on employees' names occurring in formal documents, is able to compete with, and in some cases outperform, evaluation scores presented in related research.
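A hedged sketch of the on-the-fly profile idea: gather the terms of documents an employee's name occurs in, then rank employees for a query by accumulated term frequency. The names, the sample document and the plain tf scoring are illustrative assumptions, not the prototype's actual pipeline.

```python
from collections import Counter, defaultdict

profiles = defaultdict(Counter)  # employee -> term frequencies

def index_document(text, employees):
    # Build a profile on-the-fly: an employee named in a formal document
    # absorbs that document's terms as expertise evidence
    terms = text.lower().split()
    for name in employees:
        if name.lower() in text.lower():
            profiles[name].update(terms)

def find_experts(query):
    # Rank employees by how often the query terms occur in their profiles
    q = query.lower().split()
    scores = {name: sum(tf[t] for t in q) for name, tf in profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

index_document("Zoning decision on urban development, signed Kari Hansen",
               ["Kari Hansen", "Ola Berg"])
print(find_experts("zoning urban"))  # Kari Hansen ranks first
```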
|
29 |
Adaptive personalized eLearning. Takhirov, Naimdjon January 2008 (has links)
This work has found that mapping prior knowledge and learning style is important for constructing personalized learning offerings for students with different levels of knowledge and learning styles. Prior knowledge assessment and a learning style questionnaire were used to assess the knowledge level and learning style. The proposed model for automatic construction of prior knowledge assessment aims to connect questions in the assessment to specific course modules in order to identify levels on different modules, because a student may have varying levels of knowledge within different modules. We have also found that it is not easy to map students' prior knowledge with total accuracy. However, this is not required in order to achieve a tailored learning experience; an assessment of prior knowledge can still be used to decide what piece of content should be presented to a particular student.

Learning style can be simply defined as either the way people learn or an individual's preferred way of learning. The VAK learning style inventory has been found suitable to map the learning styles of students, and it is one of few learning style inventories appropriate for online learning assessment. A questionnaire consisting of 16 questions has been used to identify the learning style of students prior to commencement of the course. It is important to consider the number of questions, because the students may feel reluctant to spend too much time on the questionnaire. However, the user evaluation has shown that students willingly answer questions to allow the system to identify their learning styles.

This work also presents a comprehensive overview of the state-of-the-art pertaining to learning, learning styles, Learning Management Systems, technologies related to web-based personalization, and related standards and specifications. A brief comparison is also made of various schools that have tried to address personalization of content for web-based learning. Finally, for evaluation purposes, a course on "Designing Relational Databases" was created, and a group of fourteen users evaluated the personalized course.
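To make the questionnaire step concrete, here is a minimal sketch of scoring a VAK-style inventory: each of the 16 questions is answered with a V, A or K preference, and the most frequent answer decides the dominant style. The answer encoding and tie handling are assumptions; the thesis's actual instrument is not reproduced here.

```python
from collections import Counter

def vak_style(answers):
    # Tally the 16 single-letter answers and pick the dominant preference
    assert len(answers) == 16, "the questionnaire has 16 questions"
    style, _ = Counter(answers).most_common(1)[0]
    return {"V": "Visual", "A": "Auditory", "K": "Kinesthetic"}[style]

print(vak_style(list("VAVKVVAVKVAVVKAV")))  # -> Visual
```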
|
30 |
Ranking and clustering of search results : Analysis of Similarity graph. Shevchuk, Ksenia Alexander January 2008 (has links)
The thesis evaluates the clustering of the similarity matrix and confirms that it is high. It compares the ranking results of the eigenvector ranking and the Link Popularity ranking, and confirms that for the highly clustered graph the correlation between the two is larger than for the low-clustered graph.
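A minimal sketch of the comparison described above, assuming the similarity graph is given as a symmetric adjacency matrix: eigenvector ranking via power iteration, link popularity as weighted degree, and the correlation between the two rankings. The toy matrix is illustrative only.

```python
import numpy as np

def eigenvector_ranking(A, iters=100):
    # Power iteration converges to the principal eigenvector of A
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def link_popularity(A):
    # Sum of edge weights incident to each node (weighted degree)
    return A.sum(axis=1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

e, p = eigenvector_ranking(A), link_popularity(A)
print(f"correlation between the two rankings: {np.corrcoef(e, p)[0, 1]:.3f}")
```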
|