31

Edge and Mean Based Image Compression

Desai, Ujjaval Y., Mizuki, Marcelo M., Masaki, Ichiro, Horn, Berthold K.P. 01 November 1996
In this paper, we present a static image compression algorithm for very low bit rate applications. The algorithm reduces spatial redundancy present in images by extracting and encoding edge and mean information. Since the human visual system is highly sensitive to edges, an edge-based compression scheme can produce intelligible images at high compression ratios. We present good quality results for facial as well as textured 256 x 256 color images at 0.1 to 0.3 bpp. The algorithm described in this paper was designed for high performance, keeping hardware implementation issues in mind. In the next phase of the project, which is currently underway, this algorithm will be implemented in hardware, and new edge-based color image sequence compression algorithms will be developed to achieve compression ratios of over 100, i.e., less than 0.12 bpp from 12 bpp. Potential applications include low-power, portable video telephones.
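As a rough illustration of the edge-plus-mean idea, a minimal Python sketch follows. It is not the authors' algorithm; the gradient-based edge detector, block size, threshold, and bit estimate are assumptions made for the example.

```python
import numpy as np

def compress_edge_mean(img, block=8, edge_thresh=30.0):
    """Toy edge-plus-mean coder: keep a binary edge map and per-block means.

    An illustrative sketch of the general idea (edges + local means),
    not the paper's actual algorithm or bit allocation.
    """
    img = img.astype(float)
    # Simple gradient-magnitude edge detector (central differences).
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > edge_thresh          # binary edge map
    h, w = img.shape
    hb, wb = h // block, w // block
    # Per-block means of the (cropped) image.
    means = img[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    return edges, means

def reconstruct(edges, means, block=8):
    """Crude decoder: paint each block with its mean. In a real coder the
    edge map (unused here) would steer a smarter interpolator."""
    return np.kron(means, np.ones((block, block)))

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256)).astype(float)   # stand-in image
    edges, means = compress_edge_mean(img)
    approx = reconstruct(edges, means)
    # Rough rate estimate: 1 bit per edge pixel + 8 bits per block mean.
    bits = edges.size * 1 + means.size * 8
    print("approx bpp:", bits / img.size)
```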
32

Recognizing 3D Objects Using Photometric Invariant

Nagao, Kenji, Grimson, Eric 22 April 1995
In this paper we describe a new efficient algorithm for recognizing 3D objects by combining photometric and geometric invariants. Photometric properties are derived that are invariant to changes of illumination and to relative object motion with respect to the camera and/or the light source in 3D space. We argue that conventional color constancy algorithms cannot be used in the recognition of 3D objects. Further, we show that recognition does not require full color constancy; rather, it only needs quantities that remain unchanged under the varying lighting conditions and poses of the objects. Combining the derived color invariants with spatial constraints on the object surfaces, we identify corresponding positions in the model and data coordinate spaces, using the centroid invariance of corresponding groups of feature positions. Tests are given to show the stability and efficiency of our approach to 3D object recognition.
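To illustrate the general notion of a photometric invariant (not the specific invariants derived in this paper), the sketch below uses per-channel color ratios between two nearby surface points, which cancel out under a diagonal illumination model; the point values and scaling factors are invented.

```python
import numpy as np

def color_ratio_signature(rgb_a, rgb_b, eps=1e-6):
    """Per-channel ratio between two nearby surface points.

    Under a diagonal illumination model (each channel scaled by an unknown
    factor), these ratios cancel the illumination and stay constant. This is
    a textbook illustration of a photometric invariant, not the specific
    invariants derived by Nagao and Grimson.
    """
    return np.asarray(rgb_a, float) / (np.asarray(rgb_b, float) + eps)

if __name__ == "__main__":
    # Two neighbouring points on the same surface...
    p, q = np.array([120.0, 60.0, 30.0]), np.array([80.0, 40.0, 20.0])
    # ...seen again under different lighting (channels scaled independently).
    light = np.array([0.5, 1.3, 0.9])
    sig1 = color_ratio_signature(p, q)
    sig2 = color_ratio_signature(p * light, q * light)
    print(np.allclose(sig1, sig2, atol=1e-3))   # True: the ratio survives the change
```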
33

Direct Object Recognition Using No Higher Than Second or Third Order Statistics of the Image

Nagao, Kenji, Horn, Berthold 01 December 1995
Novel algorithms for object recognition are described that directly recover the transformations relating the image to its model. Unlike methods fitting the typical conventional framework, these new methods do not require exhaustive search for each feature correspondence in order to solve for the transformation, yet they allow simultaneous object identification and recovery of the transformation. Given hypothesized potentially corresponding regions in the model and the data (2D views), which are taken from planar surfaces of the 3D objects, these methods allow direct computation of the parameters of the transformation by which the data may be generated from the model. We propose two algorithms: one based on invariants derived from no higher than second and third order moments of the image, the other via a combination of the affine properties of geometric attributes and the differential attributes of the image. Empirical results on natural images demonstrate the effectiveness of the proposed algorithms. A sensitivity analysis of the algorithm is presented. We demonstrate in particular that the differential method is quite stable against perturbations, although not without some error, when compared with conventional methods. We also demonstrate mathematically that even a single point correspondence suffices, theoretically at least, to recover affine parameters via the differential method.
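The second-moment part of the idea can be sketched as follows: if the data is generated from the model by an affine map, the centroids transform by that map and the covariance (second-moment) matrices of model and data are related through it, which pins the linear part down up to an orthogonal factor; third-order moments or differential attributes resolve the remainder. The hedged Python sketch below shows only the second-moment step, using synthetic data and an arbitrary example transform, and is not the paper's algorithm.

```python
import numpy as np

def sym_sqrt(S):
    """Symmetric positive-definite matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

def affine_from_second_moments(model_pts, data_pts):
    """Recover an affine map data = A @ model + t from first and second moments.

    Second-order moments only determine A up to an orthogonal factor; here we
    simply take that factor to be the identity, which is enough to show the
    idea. Higher-order moments or differential attributes, as in the paper,
    are what resolve the remaining ambiguity.
    """
    mu_m, mu_d = model_pts.mean(0), data_pts.mean(0)
    S_m, S_d = np.cov(model_pts.T), np.cov(data_pts.T)
    A_hat = sym_sqrt(S_d) @ np.linalg.inv(sym_sqrt(S_m))
    t_hat = mu_d - A_hat @ mu_m
    return A_hat, t_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.normal(size=(500, 2))
    A_true = np.array([[1.4, 0.3], [0.1, 0.8]])   # an arbitrary example transform
    t_true = np.array([2.0, -1.0])
    data = model @ A_true.T + t_true
    A_hat, t_hat = affine_from_second_moments(model, data)
    # A_hat need not equal A_true (orthogonal ambiguity), but it reproduces
    # the second-moment relation between model and data exactly:
    print(np.allclose(A_hat @ np.cov(model.T) @ A_hat.T, np.cov(data.T), atol=1e-6))
```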
34

Finding and Mapping Expertise Automatically Using Corporate Data

Vennesland, Audun January 2007
In an organization, both management and new and experienced employees often need to get in touch with experts in a variety of situations. New staff members need to learn how to perform their jobs, management needs, amongst other things, to staff projects and fill vacancies, and other employees often depend on others' expertise to accomplish their tasks. Traditionally this problem has been approached with computer applications using semi-automatic methods involving self-assessments of expertise stored in databases. These methods prove to be time-consuming, they do not consider the dynamics of expertise, and the self-assessed expertise is often difficult to validate. This report presents an overview of issues involved in expertise finding and the development of a simple yet effective prototype which tries to overcome these problems by using a fully automatic approach. A study of the Urban Development area at the Municipality of Trondheim is carried out to analyze the expertise this organization possesses and seeks, and to collect the information necessary for building the expertise finder prototype. The study found that much evidence of expertise is contained in the formal correspondence archived in the case handling system's document repository, and that the structure and content of these documents could fit a fully automatic expertise finder well. Four alternative test cases have been evaluated during the testing and evaluation of the prototype. One of these test cases, where expert profiles are modelled on the fly based on employees' names occurring in formal documents, is able to compete with, and in some cases outperform, evaluation scores presented in related research.
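A heavily simplified sketch of the "profiles from names occurring in documents" idea follows; the documents, employee names, and scoring scheme are invented for illustration and do not reflect the actual prototype.

```python
from collections import Counter, defaultdict

def build_profiles(documents, employees):
    """On-the-fly expert profiles: every document in which an employee's name
    occurs contributes its terms to that employee's profile."""
    profiles = defaultdict(Counter)
    for text in documents:
        tokens = text.lower().split()
        for person in employees:
            if person.lower() in text.lower():
                profiles[person].update(tokens)
    return profiles

def find_experts(profiles, query):
    """Rank employees by how often the query terms occur in their profile."""
    terms = query.lower().split()
    scores = {p: sum(c[t] for t in terms) for p, c in profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Invented example documents and names, standing in for archived correspondence.
    docs = [
        "Kari Nordmann approved the zoning plan for the harbour area",
        "Response on building permit drafted by Ola Hansen, zoning section",
    ]
    profiles = build_profiles(docs, ["Kari Nordmann", "Ola Hansen"])
    print(find_experts(profiles, "zoning plan"))
```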
35

Adaptive personalized eLearning

Takhirov, Naimdjon January 2008
This work has found that mapping prior knowledge and learning style is important for constructing personalized learning offerings for students with different levels of knowledge and learning styles. Prior knowledge assessment and a learning style questionnaire were used to assess the knowledge level and learning style. The proposed model for automatic construction of prior knowledge assessment aims to connect questions in the assessment to specific course modules in order to identify levels for different modules, because a student may have varying levels of knowledge within different modules. We have also found that it is not easy to map students' prior knowledge with total accuracy. However, this is not required in order to achieve a tailored learning experience; an assessment of prior knowledge can still be used to decide what piece of content should be presented to a particular student. Learning style can be simply defined as either the way people learn or an individual's preferred way of learning. The VAK learning style inventory has been found suitable to map the learning styles of students, and it is one of few learning style inventories appropriate for online learning assessment. A questionnaire consisting of 16 questions has been used to identify the learning style of students prior to commencement of the course. It is important to consider the number of questions, because students may feel reluctant to spend too much time on the questionnaire. However, the user evaluation has shown that students willingly answer questions to allow the system to identify their learning styles. This work also presents a comprehensive overview of the state of the art pertaining to learning, learning styles, Learning Management Systems, technologies related to web-based personalization, and related standards and specifications. A brief comparison is also made of various schools that have tried to address personalization of content for web-based learning. Finally, for evaluation purposes, a course on "Designing Relational Databases" was created, and a group of fourteen users evaluated the personalized course.
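As an illustration of how questionnaire answers might be mapped to a VAK style, here is a minimal sketch; the mapping of answers to styles and the majority-vote rule are assumptions, and the 16-question inventory itself is not reproduced.

```python
from collections import Counter

def vak_style(answers):
    """Classify a learner as visual, auditory or kinesthetic from questionnaire
    answers, where each answer has already been mapped to 'V', 'A' or 'K'.
    The majority vote and tie-breaking behaviour are assumptions for the sketch."""
    counts = Counter(answers)
    style, _ = counts.most_common(1)[0]
    return {"V": "visual", "A": "auditory", "K": "kinesthetic"}[style]

if __name__ == "__main__":
    # 16 answers, one per question, each tagged with the style it indicates.
    sample = list("VVAKVVKAVVAVKVVA")
    print(vak_style(sample))   # -> 'visual'
```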
36

Ranking and clustering of search results : Analysis of Similarity graph

Shevchuk, Ksenia Alexander January 2008
This thesis evaluates the clustering of the similarity matrix and confirms that it is high. It then compares the results of the eigenvector ranking and the Link Popularity ranking and confirms that, for the highly clustered graph, the correlation between the two is larger than for the graph with low clustering.
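A small sketch of the kind of comparison involved is given below, assuming "clustering" means the average local clustering coefficient and "Link Popularity" is approximated by node degree; both are assumptions, as the thesis may define these measures differently.

```python
import numpy as np

def clustering_coefficient(A):
    """Average local clustering coefficient of an undirected 0/1 adjacency matrix."""
    coeffs = []
    for i in range(len(A)):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2.0   # edges among the neighbours
        coeffs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

def eigenvector_ranking(A, iters=200):
    """Eigenvector centrality by power iteration."""
    v = np.ones(len(A))
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

if __name__ == "__main__":
    # A small, densely clustered similarity graph (symmetric 0/1 matrix).
    A = np.array([[0, 1, 1, 1, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 0],
                  [1, 1, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    eig = eigenvector_ranking(A)
    popularity = A.sum(axis=1)                 # "link popularity" taken as degree
    corr = np.corrcoef(eig, popularity)[0, 1]  # correlation between the two rankings
    print(clustering_coefficient(A), corr)
```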
37

Design and use of XML formats for the FRBR model

Gjerde, Anders January 2008
This thesis investigates how XML can be used to design a bibliographical format for record storage that is better in terms of hierarchical structure and readability. It first presents introductory theory regarding the techniques that form the foundation of bibliographical formats and what has previously been in use. It also accounts for the FRBR model, which is the conceptual framework of the format presented here. Throughout the thesis, several important XML design criteria are presented, along with examples of why these are important to consider when constructing a bibliographical format with XML. Different implementation alternatives are presented, with their advantages and disadvantages thoroughly discussed in order to establish a solid foundation for the choices that have been made. Based on this study, an XSD (XML Schema Definition) has been constructed according to the best practices that were uncovered. The XSD is based on the FRBR model, although it is slightly changed to accommodate the wishes and interests of librarians. Most noteworthy of these changes is that the Manifestation element has been made the top element, with the Expression and Work elements placed hierarchically beneath Manifestation in that order. It maintains a MARC-based datatag structure, so that librarians who are already used to it will not have to readjust to another way of structuring the most common datafields. Relationships and other attributes, however, are handled efficiently in language-based elements, and the XSD accommodates new relationship types with a generic relation element. XSLT has been used to transform an existing XML database to conform to the XSD for testing purposes. Statistics have been collected from the database to support design choices. Depending on the users' needs, there are many different design choices. XML leads to more readable records but also takes up more space. When using XML to describe relational metadata, relationships can be expressed using hierarchical storage to a certain degree, but ID/IDREF will have to be used at some point to avoid infinite inclusion of new records. ID/IDREF may also be used to improve readability or save storage space. Hierarchical storage leads to many duplicated records, especially concerning Actors and Concepts. When using XML, one must choose the root element of the record structure according to which entity is the point of interest. In FRBR, there are several reasons to choose Manifestation as the root element, as it is the focal point of a library record.
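A tiny, invented example of a record following this structural choice (Manifestation as the root, Expression and Work nested beneath it, a MARC-style datatag, and a generic relation element) can be built with Python's standard library; the element and attribute names are illustrative, not the thesis' actual XSD vocabulary.

```python
import xml.etree.ElementTree as ET

# Manifestation is the root; Expression and Work are nested beneath it.
manifestation = ET.Element("manifestation", id="m1")
# MARC-style datatag kept for the most common datafields (245 = title statement).
ET.SubElement(manifestation, "datafield", {"tag": "245"}).text = "Sult"
expression = ET.SubElement(manifestation, "expression", id="e1", language="nor")
work = ET.SubElement(expression, "work", id="w1")
ET.SubElement(work, "title").text = "Sult"
# Generic relation element; the target points to a separately stored Actor
# record via an ID reference instead of nesting it (avoids duplication).
ET.SubElement(work, "relation", type="createdBy", target="actor:hamsun-knut")

print(ET.tostring(manifestation, encoding="unicode"))
```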
38

Phrase searching in text indexes

Fellinghaug, Asbjørn Alexander January 2008
This master thesis compares different approaches to phrase searching in large text indexes and considers a new approach in which bigrams are used as index terms. It focuses on the challenges of phrase searching in large text indexes and assesses alternative approaches to cope with such indexes. This goal was achieved by performing an experiment based on the theory of using bigrams containing stopwords as additional index terms. Recognizing the characteristics of inverted index structures, we used stopwords as indicators of severely long posting lists. The characteristics of stopwords proved valuable, and they were collected based on an already established index for a subset of the TREC GOV2 collection. As alternative approaches, we outline two state-of-the-art index structures specifically designed to cope with the challenges of phrase searching. The first structure, the nextword index, follows a modification of the inverted index structure. The second structure, the phrase index, uses the inverted structure with complete phrases as index terms. Our bigram index applies the same manipulation of the inverted index structure as the phrase index, using bigrams of words to drastically cut posting list lengths. This was one of our main goals, as we identified stopwords' posting list lengths as one of the primary challenges of phrase searching in inverted index structures. Using stopwords to create and select bigrams proved successful in enhancing phrase searching, as response times improved substantially. We conclude that our bigram index provides a significant performance increase in terms of query evaluation time and outperforms the standard inverted index for phrase searching.
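A bare-bones sketch of the bigram idea follows: bigrams that contain a stopword get their own posting lists, and a phrase query prefers those lists when intersecting. Tokenisation, the stopword list, and query evaluation are reduced to a minimum and are not the thesis' implementation.

```python
from collections import defaultdict

STOPWORDS = {"the", "of", "in", "to", "and", "a"}

def build_index(docs):
    """Inverted index over single terms plus bigrams that contain a stopword."""
    single, bigram = defaultdict(set), defaultdict(set)
    for doc_id, text in enumerate(docs):
        tokens = text.lower().split()
        for i, tok in enumerate(tokens):
            single[tok].add(doc_id)
            if i + 1 < len(tokens):
                pair = (tok, tokens[i + 1])
                if tok in STOPWORDS or tokens[i + 1] in STOPWORDS:
                    bigram[pair].add(doc_id)   # short-circuits long stopword lists
    return single, bigram

def phrase_candidates(phrase, single, bigram):
    """Intersect posting lists, preferring bigram lists where they exist.
    Returns candidate documents; exact positional verification is omitted."""
    tokens = phrase.lower().split()
    lists, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in bigram:
            lists.append(bigram[(tokens[i], tokens[i + 1])])
            i += 2
        else:
            lists.append(single[tokens[i]])
            i += 1
    return set.intersection(*lists) if lists else set()

if __name__ == "__main__":
    docs = ["the tower of london", "a map of norway", "london in the rain"]
    single, bigram = build_index(docs)
    print(phrase_candidates("tower of london", single, bigram))   # {0}
```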
39

A Multimedia Approach to Medical Information Retrieval

Grande, Aleksander January 2009
Since the discovery of the structure of DNA by Francis H. C. Crick and James D. Watson in 1953, a great deal of research has been conducted in the field of DNA. Over the years, technological breakthroughs have made DNA sequencing faster and more widely available, and it has gone from being a very manual task to being highly automated. In 1990 the Human Genome Project was started and research on DNA skyrocketed. DNA was sequenced faster and faster throughout the 1990s, and more projects with the goal of sequencing other species' DNA were initiated. All this research led to vast amounts of DNA sequences, but the techniques for searching through these sequences were not developed at the same pace. The need for new and improved methods of searching in DNA is becoming more and more evident. This thesis explores the possibilities of using content-based information retrieval to search through DNA sequences. This is a bold proposition but can have great benefits if successfully implemented. By transforming DNA sequences into images and indexing these images with a content-based information retrieval system, it may be possible to achieve successful DNA search. We find that this is possible, but further work has to be done to resolve some issues discovered in the transformation of DNA sequences into images.
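One simple way to turn a DNA sequence into an image is to map each nucleotide to a colour and wrap the sequence into rows; the colour assignment and image width below are arbitrary choices for the sketch, not the encoding used in the thesis.

```python
import numpy as np

# Arbitrary nucleotide-to-colour mapping for the sketch.
NUCLEOTIDE_COLOURS = {
    "A": (255, 0, 0),
    "C": (0, 255, 0),
    "G": (0, 0, 255),
    "T": (255, 255, 0),
}

def dna_to_image(sequence, width=64):
    """Map each base to an RGB pixel and wrap the sequence into rows,
    zero-padding the last row."""
    pixels = np.array([NUCLEOTIDE_COLOURS.get(base, (0, 0, 0)) for base in sequence],
                      dtype=np.uint8)
    height = -(-len(sequence) // width)             # ceiling division
    padded = np.zeros((height * width, 3), dtype=np.uint8)
    padded[:len(pixels)] = pixels
    return padded.reshape(height, width, 3)

if __name__ == "__main__":
    img = dna_to_image("ACGTACGTTTGACCA" * 40)
    print(img.shape)    # (10, 64, 3): ready to feed a CBIR feature extractor
```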
40

Redistribution of Documents across Search Engine Clusters

Høyum, Øystein January 2009
The goal of this master thesis has been to evaluate methods for redistribution of data on search engine clusters. For all of the methods, the redistribution is done when the cluster changes size. Redistribution methods specifically designed for search engines are not common, so the methods compared in this thesis are based on other distributed settings, among them distributed database systems, distributed file systems, and continuous media systems. The evaluation of the methods consists of two parts: a theoretical analysis and an implementation and testing of the methods. In the theoretical analysis the methods are compared by deriving expressions for their performance. In the practical approach the algorithms are implemented on a simplified search engine cluster of six computers. The methods have been evaluated using three criteria. The first criterion is how well the methods distribute documents across the cluster; in the theoretical analysis this also includes worst-case scenarios, while the practical evaluation compares the distribution at the end of the tests. The second criterion is efficiency of document access; the theoretical approach focuses on the number of operations required, while the practical approach calculates indexing throughput. The last criterion is the volume of documents transported during redistribution. For the final part of the comparison, some relevant scenarios are introduced. These scenarios focus on dynamic data sets with a high frequency of updates, frequent new documents, and much searching. Using the scenarios and the results from the method testing, we found some methods that performed better than others. It is worth noting that the conclusions hold for the given type of workload from the scenarios and the setting of the tests; in other situations, other methods might be more suitable. In conclusion, we found that, for the given scenarios, the best distribution method was the distributed version of linear hashing (LH*). The method using hashing/range-partitioning proved to be the least suitable as a consequence of its high transport volume.
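To illustrate why LH*-style linear hashing suits a growing cluster, the sketch below computes bucket addresses with the standard linear-hashing rule and shows that only a fraction of documents move when one node is split; the hash function and parameters are simplifications, not the thesis' implementation.

```python
import hashlib

def lh_bucket(key, level, split_pointer):
    """Linear-hashing address computation in the spirit of LH*.

    A document is first hashed with the lower-level function (mod 2**level);
    if that bucket has already been split this round, the higher-level
    function (mod 2**(level + 1)) is used instead. Client image adjustment
    and other LH* details are omitted here.
    """
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    bucket = h % (2 ** level)
    if bucket < split_pointer:          # bucket already split this round
        bucket = h % (2 ** (level + 1))
    return bucket

if __name__ == "__main__":
    docs = [f"doc-{i}" for i in range(10_000)]
    before = {d: lh_bucket(d, level=2, split_pointer=0) for d in docs}   # 4 nodes
    after = {d: lh_bucket(d, level=2, split_pointer=1) for d in docs}    # node 0 split
    moved = sum(before[d] != after[d] for d in docs)
    print(f"{moved / len(docs):.1%} of documents moved")   # roughly one eighth
```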
