1

Automatic detection of shot boundaries in digital video

Yusoff, Yusseri January 2002 (has links)
This thesis describes the implementation of automatic shot boundary detection algorithms for the detection of cuts and gradual transitions in digital video sequences. The objective was to develop a fully automatic video segmentation system as a pre-processing step for video database retrieval management systems as well as other applications that handle large video sequences. For the detection of cuts, we begin by looking at a set of baseline algorithms that measure specific features of video images and calculate the dissimilarity of those measures between frames in the video sequence. We then propose two different approaches and compare them against the set of baseline algorithms. These approaches are themselves built upon the base set of algorithms. Observing that the baseline algorithms initially use hard thresholds to determine shot boundaries, we build Receiver Operating Characteristic (ROC) curves to plot the characteristics of the algorithms as the thresholds vary. In the first approach, we look into combining the multiple algorithms in such a way that, as a collective, the detection of cuts is improved. The results of the fusion are then compared against the baseline algorithms on the ROC curve. For the second approach, we look into adaptive thresholds for the baseline algorithms. A selection of adaptive thresholding methods was applied to the data set and compared with the baseline algorithms that use hard thresholds. In the case of gradual transition detection, a filtering technique used to detect ramp edges in images is adapted for use in video sequences. The approach starts from the observation that shot boundaries represent edges in time, with cuts being sharp edges and gradual transitions closely approximating ramp edges. The methods that we propose reflect our concentration on producing a reliable and efficient shot boundary detection mechanism. In each instance, be it for cuts or gradual transitions, we tested our algorithms on a comprehensive set of video sequences containing a variety of content, and obtained highly competitive results.
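Below is a minimal, hedged sketch of the kind of baseline cut detector the abstract describes: an inter-frame dissimilarity measure (here a grey-level histogram difference, one of the classic feature measures) compared against a hard threshold. The function names, bin count, and threshold value are illustrative assumptions, not the thesis's actual implementation.

```python
# Hedged sketch: frame-difference cut detection in the spirit described above.
# Not the thesis's implementation; hist_diff, the bin count, and the threshold
# value are illustrative assumptions.
import cv2
import numpy as np

def hist_diff(frame_a, frame_b, bins=64):
    """Dissimilarity between two frames as the L1 distance of grey-level histograms."""
    ga = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    ha = cv2.calcHist([ga], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([gb], [0], None, [bins], [0, 256])
    ha, hb = ha / ha.sum(), hb / hb.sum()      # normalise so the measure is resolution-independent
    return float(np.abs(ha - hb).sum())

def detect_cuts(video_path, hard_threshold=0.5):
    """Flag a cut wherever the inter-frame dissimilarity exceeds a hard threshold."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if prev is not None and hist_diff(prev, frame) > hard_threshold:
            cuts.append(idx)                   # boundary between frame idx-1 and frame idx
        prev, idx = frame, idx + 1
    cap.release()
    return cuts
```

An adaptive-threshold variant of the same sketch would replace `hard_threshold` with a value derived from local statistics of the dissimilarity signal, for example a sliding-window mean plus a multiple of the local standard deviation.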
2

Video Indexing and Retrieval in Compressed Domain Using Fuzzy-Categorization.

Fang, H., Qahwaji, Rami S.R., Jiang, Jianmin January 2006 (has links)
There has been increased interest in video indexing and retrieval in recent years. In this work, indexing and retrieval of visual content is based on features extracted from the compressed domain. Direct processing of the compressed domain saves decoding time, which is extremely important when indexing large numbers of multimedia archives. A fuzzy-categorization structure is designed in this paper to improve retrieval performance. In our experiment, a database consisting of basketball videos was constructed for our study. This database includes three categories: full-court match, penalty, and close-up. First, spatial and temporal feature extraction is applied to train the fuzzy membership functions using a minimum-entropy optimization algorithm. Then, the max composition operation is used to generate a new fuzzy feature to represent the content of the shots. Finally, the fuzzy-based representation becomes the indexing feature for the content-based video retrieval system. The experimental results show that the proposed algorithm is quite promising for semantic-based video retrieval.
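As a rough illustration of the fuzzy-categorization step, the sketch below combines per-feature fuzzy memberships with a max composition to produce one membership degree per category for a shot. The triangular membership shapes and parameter values are assumptions for illustration; the paper trains its membership functions with a minimum-entropy procedure on real spatial and temporal features.

```python
# Hedged illustration of the fuzzy-categorization idea: per-feature fuzzy
# memberships are combined with a max composition into a single indexing
# feature per shot. Category names and membership shapes are assumptions,
# not the paper's trained functions.
import numpy as np

CATEGORIES = ["full-court match", "penalty", "close-up"]

def triangular(x, a, b, c):
    """Simple triangular membership function on [a, c] peaking at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def memberships(feature_value, params_per_category):
    """Degree of membership of one feature value in each category."""
    return np.array([triangular(feature_value, *p) for p in params_per_category])

def fuzzy_index(spatial_feature, temporal_feature, spatial_params, temporal_params):
    """Max composition across features gives the shot's fuzzy indexing vector."""
    m_spatial = memberships(spatial_feature, spatial_params)
    m_temporal = memberships(temporal_feature, temporal_params)
    return np.maximum(m_spatial, m_temporal)   # one membership degree per category

# Example with illustrative (untrained) membership parameters per category.
spatial_params  = [(0.0, 0.2, 0.5), (0.3, 0.5, 0.8), (0.6, 0.9, 1.0)]
temporal_params = [(0.0, 0.3, 0.6), (0.2, 0.5, 0.9), (0.5, 0.8, 1.0)]
vec = fuzzy_index(0.45, 0.7, spatial_params, temporal_params)
print(dict(zip(CATEGORIES, vec.round(2))))
```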
3

Exploiting Information Extraction Techniques For Automatic Semantic Annotation And Retrieval Of News Videos In Turkish

Kucuk, Dilek 01 February 2011 (has links) (PDF)
Information extraction (IE) is known to be an effective technique for automatic semantic indexing of news texts. In this study, we propose a text-based, fully automated system for the semantic annotation and retrieval of news videos in Turkish that exploits several IE techniques on the video texts. The IE techniques employed by the system include named entity recognition, automatic hyperlinking, person entity extraction with coreference resolution, and event extraction. The system utilizes the outputs of the components implementing these IE techniques as the semantic annotations for the underlying news video archives. Apart from the IE components, the proposed system comprises a news video database in addition to components for news story segmentation, sliding text recognition, and semantic video retrieval. We also propose a semi-automatic counterpart of the system in which the only manual intervention takes place during text extraction. Both systems are executed on genuine video data sets consisting of videos broadcast by the Turkish Radio and Television Corporation. The current study is significant as it proposes the first fully automated system to facilitate semantic annotation and retrieval of news videos in Turkish; yet the proposed system and its semi-automated counterpart are quite generic, and hence they could be customized to build similar systems for video archives in other languages as well. Moreover, IE research on Turkish texts is known to be rare, and in the course of this study we have proposed and implemented novel techniques for several IE tasks on Turkish texts. As an application example, we have demonstrated the use of the implemented IE components to facilitate multilingual video retrieval.
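The sketch below illustrates, under assumed data structures, the general pattern the abstract describes: IE outputs (entities, events) attached to news story segments as semantic annotations, against which retrieval is performed. The segment fields and matching rule are hypothetical, not the system's actual components.

```python
# Hedged sketch: IE outputs become semantic annotations on news story segments,
# and retrieval matches query terms against those annotations. All field names
# and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StorySegment:
    video_id: str
    start_sec: float
    end_sec: float
    persons: set = field(default_factory=set)     # e.g. from NER + coreference resolution
    locations: set = field(default_factory=set)   # e.g. from NER
    events: set = field(default_factory=set)      # e.g. from event extraction

def annotate(segment, ie_output):
    """Attach IE results (already extracted from the segment's text) as annotations."""
    segment.persons |= set(ie_output.get("persons", []))
    segment.locations |= set(ie_output.get("locations", []))
    segment.events |= set(ie_output.get("events", []))

def retrieve(segments, query_terms):
    """Rank segments by how many query terms appear among their semantic annotations."""
    def score(seg):
        annotations = seg.persons | seg.locations | seg.events
        return sum(1 for t in query_terms if t in annotations)
    return sorted((s for s in segments if score(s) > 0), key=score, reverse=True)
```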
4

CLIP-RS: A Cross-modal Remote Sensing Image Retrieval Based on CLIP, a Northern Virginia Case Study

Djoufack Basso, Larissa 21 June 2022 (has links)
Satellite imagery research used to be an expensive research topic for companies and organizations due to limited data and compute resources. As computing power and storage capacity grow exponentially, large numbers of aerial and satellite images are generated and analyzed every day for various applications. Current technological advancement and extensive data collection by numerous Internet of Things (IoT) devices and platforms have amplified the supply of labeled natural images. Such data availability catalyzed the development and performance of current state-of-the-art image classification and cross-modal models. Despite the abundance of publicly available remote sensing images, very few remote sensing (RS) images are labeled and even fewer are multi-captioned. These scarcities limit the scope of fine-tuned state-of-the-art models to at most 38 classes, based on the PatternNet data, one of the largest publicly available labeled RS datasets. Recent state-of-the-art image-to-image retrieval and detection models in RS have shown great results. Because text-to-image retrieval of RS images is still emerging, it still faces challenges, namely the inaccurate retrieval of image categories that were not present in the training dataset and the retrieval of images from descriptive input. Motivated by those shortcomings in current cross-modal remote sensing image retrieval, we propose CLIP-RS, a cross-modal remote sensing image retrieval platform. Our proposed framework CLIP-RS combines a fine-tuned implementation of a recent state-of-the-art cross-modal, text-based image retrieval model, Contrastive Language-Image Pre-training (CLIP), with FAISS (Facebook AI Similarity Search), a library for efficient similarity search. Our implementation is deployed as a web app for inference on text-to-image and image-to-image retrieval of RS images collected via the Mapbox GL JS API. We used the free tier of the Mapbox GL JS API and took advantage of its raster tiles option to locate the retrieved results on a local map assembled from the downloaded raster tiles. Other options offered on our platform are image similarity search, locating an image on the map, and viewing images' geocoordinates and addresses. In this work we also propose two remote-sensing fine-tuned models and conduct a comparative analysis of our proposed models with a different fine-tuned model as well as the zero-shot CLIP model on remote sensing data. / Master of Science /
Satellite imagery research used to be an expensive research topic for companies and organizations due to limited data and compute resources. As computing power and storage capacity grow exponentially, large numbers of aerial and satellite images are generated and analyzed every day for various applications. Current technological advancement and extensive data collection by numerous Internet of Things (IoT) devices and platforms have amplified the supply of labeled natural images. Such data availability catalyzed the development and performance of current state-of-the-art image classification and cross-modal models. Despite the abundance of publicly available remote sensing images, very few remote sensing (RS) images are labeled and even fewer are multi-captioned. These scarcities limit the scope of fine-tuned state-of-the-art models to at most 38 classes, based on the PatternNet data, one of the largest publicly available labeled RS datasets. Recent state-of-the-art image-to-image retrieval and detection models in RS have shown great results. Because text-to-image retrieval of RS images is still emerging, it still faces challenges, namely the inaccurate retrieval of image categories that were not present in the training dataset and the retrieval of images from descriptive input. Motivated by those shortcomings in current cross-modal remote sensing image retrieval, we propose CLIP-RS, a cross-modal remote sensing image retrieval platform. Cross-modal retrieval focuses on data retrieval across different modalities; in the context of this work, we focus on the textual and imagery modalities. Our proposed framework CLIP-RS combines a fine-tuned implementation of a recent state-of-the-art cross-modal, text-based image retrieval model, Contrastive Language-Image Pre-training (CLIP), with FAISS (Facebook AI Similarity Search), a library for efficient similarity search. In deep learning, fine-tuning consists of reusing the weights of a model trained on one task in a similar model for a different, domain-specific application. Our implementation is deployed as a web application for inference on text-to-image and image-to-image retrieval of RS images collected via the Mapbox GL JS API. We used the free tier of the Mapbox GL JS API and took advantage of its raster tiles option to locate the retrieved results on a local map assembled from the downloaded raster tiles. Other options offered on our platform are image similarity search, locating an image on the map, and viewing images' geocoordinates and addresses. In this work we also propose two remote-sensing fine-tuned models and conduct a comparative analysis of our proposed models with a different fine-tuned model as well as the zero-shot CLIP model on remote sensing data.
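A minimal sketch of the CLIP-plus-FAISS retrieval pipeline the abstract outlines is given below. It assumes the CLIP image embeddings for the RS tiles have already been computed and stored; the checkpoint name, file name, and helper function are illustrative, not the thesis's code.

```python
# Hedged sketch of a CLIP + FAISS text-to-image retrieval pipeline.
# Not the thesis code: the model checkpoint, the precomputed-embeddings file,
# and the function name are illustrative assumptions.
import numpy as np
import faiss                                      # pip install faiss-cpu
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")      # a public CLIP checkpoint

# Assume an (N, 512) float32 array of CLIP image features for the RS tiles.
image_embeddings = np.load("rs_image_embeddings.npy").astype("float32")
faiss.normalize_L2(image_embeddings)              # so inner product == cosine similarity

index = faiss.IndexFlatIP(image_embeddings.shape[1])
index.add(image_embeddings)

def text_to_image_search(query, k=5):
    """Return (row index, score) of the k RS images whose embeddings best match the query text."""
    q = model.encode([query]).astype("float32")
    faiss.normalize_L2(q)
    scores, idx = index.search(q, k)
    return list(zip(idx[0].tolist(), scores[0].tolist()))

print(text_to_image_search("baseball fields next to a parking lot"))
```

Image-to-image search follows the same pattern, with a CLIP image embedding of the query tile in place of the encoded text.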
5

Mehr finden durch schlaueres Suchen / Find more hits through smarter searches / Trouver plus, par la recherche intelligent

24 January 2011 (has links) (PDF)
The 34th Annual Meeting of the Gesellschaft für Klassifikation (German Classification Society) took place in Karlsruhe on 21-23 July 2010. Within this framework, the librarians met for two days, on 22-23 July, as is traditional. Over 60 participants attended from Germany, Austria, and Switzerland. Their perennial topic is subject indexing, or more simply: searching and finding. This year's motto was "Find more hits through smarter searches". The librarians' meeting was hosted by the KIT library in Karlsruhe. Fifteen contributions were presented from the fields of research, development (including the two decimal classifications), and practical experience reports. The focus throughout was on what is new.
6

Subject retrieval in web-based library catalogs / Predmetno pretraživanje u knjižničnim katalozima s web-sučeljem

Golub, Koraljka January 2003 (has links)
This thesis has been motivated by past research, problems and realizations that online library catalog users frequently perform subject searches – using keywords, subject headings and descriptors – and that these searches have yielded unsatisfactory results. Web-based catalogs or WebPACs (Web-based Online Public Access Catalogs), belonging to the so-called third generation of online catalogs and providing a wide variety of search options, remain largely underutilized despite the continuous advancement of information retrieval systems. Users still encounter a number of problems, such as translating their concepts into the language of the catalog system and using the cross-references prepared for this purpose. Subject access in online library catalogs can be provided through different access points. For that purpose, natural and controlled indexing and retrieval languages are used, each with its own advantages and drawbacks. Natural-language indexing is performed by the computer, which automatically extracts words from defined fields. Controlled indexing languages are those in which the selection of terms assigned to documents is performed manually; examples include classification systems, subject heading languages and thesauri. During the 1970s, a consensus was reached that the best retrieval results are obtained when both types of indexing languages are used together. Apart from indexing languages, it is necessary to take into account user search behavior; and when designing the user interface one has to allow for users' skills and knowledge, ensuring instruction, help and feedback at every step of the retrieval process. The aim of the research was to determine the variety and quality of subject access to information in the WebPACs of British university libraries – including searching by words or classification marks, natural and controlled languages, browsing options, and forming simple and complex queries – in order to draw conclusions about existing advancements, the models offered and the methods employed, and to compare them with the WebPACs of Croatian university libraries.
7

Mehr finden durch schlaueres Suchen: Sacherschliessung auf der 34. Jahrestagung der Gesellschaft für Klassifikation / Finding more through smarter searching: subject indexing at the 34th Annual Meeting of the Gesellschaft für Klassifikation

Hermes, Hans-Joachim, Pika, Jiri 24 January 2011 (has links)
The 34th Annual Meeting of the Gesellschaft für Klassifikation (German Classification Society) took place in Karlsruhe on 21-23 July 2010. Within this framework, the librarians met for two days, on 22-23 July, as is traditional. Over 60 participants attended from Germany, Austria, and Switzerland. Their perennial topic is subject indexing, or more simply: searching and finding. This year's motto was "Find more hits through smarter searches". The librarians' meeting was hosted by the KIT library in Karlsruhe. Fifteen contributions were presented from the fields of research, development (including the two decimal classifications), and practical experience reports. The focus throughout was on what is new.
8

Lingo – ein System zur automatischen Indexierung – Anwendung und Einsatzmöglichkeiten / Lingo – a system for automatic indexing – application and possible uses

Müller, Thomas 26 January 2011 (has links)
The heterogeneous museum holdings (texts, images, physical objects) of the Haus der Geschichte der Bundesrepublik Deutschland currently comprise more than 365,000 object descriptions of contemporary historical objects. On the basis of the open-source indexing system lingo, an automatic indexing process is being developed which, building on the existing framework conditions, generates normalized descriptive attributes and makes them available as index terms for retrieval. The goal is to realize a unified search across the object descriptions through linguistic and semantic unification of the index terms.
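As a rough illustration of dictionary-based term normalization in the spirit of the approach described above, the sketch below maps surface word forms in an object description to normalized index terms. The mini-dictionary and function are assumptions for illustration, not lingo's actual rule set or API.

```python
# Hedged sketch of dictionary-based index-term normalization. The entries below
# are illustrative assumptions, not lingo's actual configuration.
import re

# Maps surface word forms to normalized index terms (preferred terms / lemmas).
NORMALIZATION_DICT = {
    "mauerfall": "Mauerfall",
    "berliner": "Berlin",
    "plakate": "Plakat",
    "plakat": "Plakat",
    "wahlkampfes": "Wahlkampf",
    "wahlkampf": "Wahlkampf",
}

def index_terms(description):
    """Extract normalized index terms from a free-text object description."""
    words = re.findall(r"[a-zäöüß]+", description.lower())
    return {NORMALIZATION_DICT[w] for w in words if w in NORMALIZATION_DICT}

print(index_terms("Plakate des Berliner Wahlkampfes nach dem Mauerfall"))
# expected terms: Plakat, Berlin, Wahlkampf, Mauerfall
```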
9

Vyhledávání v multimodálních databázích / Multimodal Database Search

Krejčíř, Tomáš January 2009 (has links)
The field that deals with storing and effectively searching multimedia documents is called information retrieval. This work describes a solution for effective searching in collections of shots. Multimedia documents are represented as vectors in a high-dimensional space, because in such a representation it is easier to define both the semantics and the search mechanisms. The work focuses on similarity search in metric spaces, using distance functions such as the Euclidean, Chebyshev, or Mahalanobis distance for comparing global features, and cosine or binary similarity for comparing local features. Experiments on the TRECVid dataset compare the implemented distance functions. The best distance function for global features appears to be the Mahalanobis distance, and for local features the cosine similarity.
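The sketch below compares the distance functions named in the abstract on toy feature vectors; the vectors and the covariance estimate used for the Mahalanobis distance are illustrative, not the thesis's TRECVid features.

```python
# Hedged sketch comparing the distance functions named above on toy vectors.
import numpy as np
from scipy.spatial import distance

# Toy "global" feature vectors (e.g. histogram-like descriptors).
u = np.array([0.2, 0.5, 0.1, 0.2])
v = np.array([0.3, 0.4, 0.2, 0.1])

# Mahalanobis needs the inverse covariance of the feature distribution;
# here it is estimated from a small assumed sample of descriptors.
sample = np.random.default_rng(0).random((100, 4))
VI = np.linalg.inv(np.cov(sample, rowvar=False))

print("Euclidean:  ", distance.euclidean(u, v))
print("Chebyshev:  ", distance.chebyshev(u, v))
print("Mahalanobis:", distance.mahalanobis(u, v, VI))
print("Cosine sim: ", 1.0 - distance.cosine(u, v))   # SciPy returns cosine *distance*
```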
10

An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition

Tsatsaronis, George 10 October 2017 (has links)
This article provides an overview of the first BioASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BioASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies.
