About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Vulnerability in online social network profiles : a framework for measuring consequences of information disclosure in online social networks

Alim, Sophia January 2011 (has links)
The increase in online social network (OSN) usage has led to personal details, known as attributes, being readily displayed in OSN profiles. This can leave profile owners vulnerable to privacy and social engineering attacks, including identity theft, stalking and re-identification by linking. To address the need for privacy in OSNs, this thesis presents a framework to quantify the vulnerability of a user's OSN profile. Vulnerability is defined as the likelihood that the personal details displayed on an OSN profile will spread due to the actions of the profile owner and their friends with regard to information disclosure. The vulnerability measure consists of three components. The individual vulnerability is calculated by allocating weights to the profile attribute values disclosed and to neighbourhood features which may contribute towards the personal vulnerability of the profile user. The relative vulnerability is the collective vulnerability of the profile's friends. The absolute vulnerability is the overall profile vulnerability, which combines the individual and relative vulnerabilities.

The first part of the framework details a data retrieval approach to extract MySpace profile data for testing the vulnerability algorithm on real cases. The profile structure presented significant extraction problems because of the dynamic nature of the OSN. Issues surrounding the usability of a standard dataset, including ethical concerns, are discussed. Application of the vulnerability measure on the extracted data emphasised how so-called 'private profiles' are not immune to vulnerability issues, because some profile details can still be displayed on private profiles. The second part of the framework presents the normalisation of the measure in the context of a formal approach, which includes the development of axioms and validation of the measure using a larger dataset of profiles. The axioms highlight that changes in the presented list of profile attributes, and in the attributes' weights in making the profile vulnerable, affect the individual vulnerability of a profile. Validation of the measure showed that vulnerability involving OSN profiles does occur, and this provides a good basis for other researchers to build on the measure further. The novelty of this vulnerability measure is that it takes into account not just the attributes presented on each individual profile but also features of the profile's neighbourhood.
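The abstract above describes the measure's three-component structure without reproducing its formulas; the sketch below illustrates that structure in Python. The attribute weights, the neighbourhood features, and the way the individual and relative scores are combined are assumptions made for illustration only, not the definitions used in the thesis.

```python
# Illustrative sketch of a three-component profile vulnerability measure.
# Attribute weights, neighbourhood features, and the combination rule are
# assumed for illustration; they are not the weights defined in the thesis.

ATTRIBUTE_WEIGHTS = {          # hypothetical disclosure weights
    "full_name": 0.15,
    "birth_date": 0.25,
    "home_town": 0.20,
    "relationship_status": 0.10,
    "photos_public": 0.30,
}

def individual_vulnerability(disclosed_attributes, friend_count, comments_visible):
    """Weighted sum of disclosed attributes plus simple neighbourhood features."""
    attr_score = sum(ATTRIBUTE_WEIGHTS.get(a, 0.0) for a in disclosed_attributes)
    # Neighbourhood features: larger, more open neighbourhoods assumed riskier.
    neighbourhood_score = min(friend_count / 500.0, 1.0) * 0.5
    neighbourhood_score += 0.5 if comments_visible else 0.0
    return attr_score + neighbourhood_score

def relative_vulnerability(friend_profiles):
    """Collective vulnerability of the profile's friends (mean of their scores)."""
    if not friend_profiles:
        return 0.0
    return sum(individual_vulnerability(*f) for f in friend_profiles) / len(friend_profiles)

def absolute_vulnerability(own_profile, friend_profiles, alpha=0.7):
    """Overall vulnerability combining the individual and relative components."""
    return (alpha * individual_vulnerability(*own_profile)
            + (1 - alpha) * relative_vulnerability(friend_profiles))

# Example: a public profile disclosing three attributes, with two friends.
me = (["full_name", "birth_date", "photos_public"], 320, True)
friends = [(["full_name"], 50, False), (["home_town", "photos_public"], 800, True)]
print(round(absolute_vulnerability(me, friends), 3))
```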
12

Επισήμανση και ανάκτηση περιεχομένου με τεχνικές ενεργούς μάθησης / Content annotation and retrieval using active learning techniques

Φουρφουρής, Γεώργιος 15 December 2014 (has links)
Content retrieval from individual databases is of particular importance for the correct processing of data and the drawing of conclusions. At the same time, correct annotation of the individual data items (text, images, video) greatly assists accurate content retrieval and, consequently, the extraction of the necessary conclusions. In this thesis, a complete description and analysis of the above is first given, and the corresponding content annotation and retrieval system is then implemented. More specifically, the system is able to upload content to the content and data bases and to annotate it appropriately. It can also retrieve specific content from these databases in order to draw the appropriate conclusions. All of this is implemented and integrated with active learning methods and presented in a web-based application.
13

Οργάνωση βάσεων εικόνων βάσει περιγράμματος : εφαρμογή σε φύλλα / Contour-based organisation of image databases: application to leaves

Φωτοπούλου, Φωτεινή 16 June 2011 (has links)
The objective of this thesis is the organisation (classification, recognition, retrieval, etc.) of databases containing images (photographs) of tree leaves. The organisation is based on leaf shape and proceeds in several stages. The first stage is contour extraction, performed with image-processing operations that include clustering and segmentation techniques. From the leaf contour, features are extracted that allow a reliable description of each leaf. The following well-known methods were studied in this thesis: Centroid Contour Distance, Angle Code (histogram), Chain Code, and Fourier Descriptors. Two new methods were also proposed: Pecstrum (pattern spectrum) and the Multidimensional Sequence Similarity Measure (MSSM). All of the above methods were implemented; appropriate software was produced and applied to a leaf image database selected from the Internet. The methods were evaluated through their overall classification accuracy (using the confusion matrix), and the MSSM method gave the best results. A visual evaluation was also carried out on a two-dimensional representation (biplot) obtained through Multidimensional Scaling.
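As an illustration of one of the contour descriptors named above, the sketch below computes a Centroid Contour Distance signature from an ordered list of contour points. The resampling and normalisation choices are assumptions for illustration and need not match the thesis's implementation.

```python
import math

def centroid_contour_distance(contour, n_samples=64):
    """Centroid Contour Distance (CCD) signature of a closed contour.

    contour: list of (x, y) boundary points, ordered along the leaf outline.
    Returns n_samples distances from the centroid, scale-normalised so that
    the signature is comparable across leaves of different sizes.
    """
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    # Resample the contour uniformly so every leaf yields the same-length vector.
    step = len(contour) / n_samples
    sampled = [contour[int(i * step) % len(contour)] for i in range(n_samples)]
    dists = [math.hypot(x - cx, y - cy) for x, y in sampled]
    max_d = max(dists) or 1.0
    return [d / max_d for d in dists]       # scale normalisation

# Example: a coarse elliptical "leaf" outline.
outline = [(10 * math.cos(t / 50 * 2 * math.pi),
            4 * math.sin(t / 50 * 2 * math.pi)) for t in range(50)]
signature = centroid_contour_distance(outline)
print(len(signature), round(min(signature), 2), round(max(signature), 2))
```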
14

Looking for data / Information seeking behaviour of survey data users

Friedrich, Tanja 30 November 2020 (has links)
From information behaviour research we have a rich knowledge of how people look for, retrieve, and use information. We have scientific evidence for information behaviour patterns in a wide range of contexts and situations, but we do not know enough about researchers' information needs and goals regarding the use of research data. Having emerged from library user studies, information behaviour research primarily provides insight into literature-related information behaviour. This thesis is based on the assumption that these insights cannot be easily transferred to data-related information behaviour. To explore this assumption, a study of secondary data users' information-seeking behaviour was conducted, designed and evaluated in comparison with existing theories and models of information-seeking behaviour. The overall goal of the study was to produce evidence about the actual information practices of users of one particular retrieval system for social science data, in order to inform the development of research data infrastructures that facilitate data sharing. The empirical design of this study follows a mixed methods approach: a qualitative study in the form of expert interviews and, building on the results found therein, a quantitative web survey of secondary users of population and opinion survey data. The core result of this study is that community involvement plays a pivotal role in survey data seeking. The analyses show that survey data communities are an important determinant of survey data users' information seeking behaviour and that community involvement facilitates data seeking and has the capacity to reduce problems or barriers. Community involvement increases with growing experience, seniority, and data literacy. This study advances information behaviour research by modelling the specifics of data seeking behaviour. In practical terms, the study specifies data-user-oriented requirements for systems design.
15

Network Coding in Distributed, Dynamic, and Wireless Environments: Algorithms and Applications

Chaudhry, Mohammad December 2011 (has links)
Network coding is a new paradigm that has been shown to improve throughput, fault tolerance, and other quality-of-service parameters in communication networks. The basic idea of network coding techniques is to exploit the "mixing" nature of information flows, i.e., algebraic operations (e.g., addition, subtraction) can be performed over the data packets, whereas traditionally information flows are treated as physical commodities (e.g., cars) over which algebraic operations cannot be performed. In this dissertation we answer some of the important open questions related to network coding. Our work can be divided into four major parts.

Firstly, we focus on network code design for dynamic networks, i.e., networks with frequently changing topologies and frequently changing sets of users. Examples of such dynamic networks are content distribution networks, peer-to-peer networks, and mobile wireless networks. A change in the network might make a previously feasible network code infeasible, i.e., not all users might be able to receive their demands. The central problem in the design of a feasible network code is to assign local encoding coefficients for each pair of links in a way that allows every user to decode the required packets. We analyze the problem of maintaining the feasibility of a network code and provide bounds on the number of modifications required under dynamic settings. We also present distributed algorithms for network code design and propose a new path-based assignment of encoding coefficients to construct a feasible network code.

Secondly, we investigate network coding problems in wireless networks. It has been shown that network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature: each packet transmitted by a device is broadcast within a certain area and can be overheard by the neighboring devices. When a device needs to transmit packets, it employs Index Coding, which uses knowledge of what the device's neighbors have already overheard in order to reduce the number of transmissions. With Index Coding, each transmitted packet can be a linear combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. We propose an efficient exact solution and several heuristic solutions for the Index Coding problem. Noting that the Index Coding problem is NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions saved by employing coding compared to a solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.

Thirdly, we consider the problem of accessing large data files stored at multiple locations across a content distribution, peer-to-peer, or massive storage network. Parts of the data can be stored in either original or encoded form at multiple network locations. Clients access the parts of the data through simultaneous downloads from several servers across the network, and for each link used the client has to pay some cost. A client might not be able to access a subset of servers simultaneously due to network restrictions such as congestion. Furthermore, a subset of the servers might contain correlated data, and accessing such a subset might not increase the amount of information at the client. We present a novel, efficient polynomial-time solution for this problem that leverages matroid theory.

Fourthly, we explore applications of network coding for congestion mitigation and overflow avoidance in the global routing stage of Very Large Scale Integration (VLSI) physical design. Smaller and smarter devices have resulted in a significant increase in the density of on-chip components, which has made congestion and overflow critical issues in on-chip networks. We present novel techniques and algorithms for reducing congestion and minimizing overflows.
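The Index Coding idea described above can be illustrated with a small sketch: a greedy heuristic that XORs packets together whenever each requesting client already holds the other packets in the group as side information. The greedy grouping below is only one simple heuristic, not the specific algorithms proposed in the dissertation.

```python
def greedy_index_coding(demands, side_info):
    """Greedy heuristic for index coding, multiple unicast case.

    demands:   dict client -> packet the client wants
    side_info: dict client -> set of packets the client already has
    Returns a list of transmissions; each transmission is the set of packets
    XORed together (every client in the group can decode its own packet
    because it already holds all the others).
    """
    pending = dict(demands)
    transmissions = []
    while pending:
        group_clients, group_packets = [], set()
        for client, wanted in list(pending.items()):
            # The client can join if it already has all packets in the group,
            # and every client already in the group has this client's packet.
            if group_packets <= side_info[client] and all(
                    wanted in side_info[c] for c in group_clients):
                group_clients.append(client)
                group_packets.add(wanted)
        transmissions.append(group_packets)
        for c in group_clients:
            del pending[c]
    return transmissions

# Example: 3 clients; 3 transmissions without coding, 2 with coding.
demands = {"c1": "p1", "c2": "p2", "c3": "p3"}
side_info = {"c1": {"p2"}, "c2": {"p1"}, "c3": set()}
print(greedy_index_coding(demands, side_info))   # [{'p1', 'p2'}, {'p3'}]
```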
16

[pt] ABORDAGENS DE COORDENAÇÃO DE VOO PARA GRUPOS DE VANT EM COLETA DE DADOS DE WSN / [en] FLIGHT COORDINATION APPROACHES OF UAV SQUADS FOR WSN DATA COLLECTION

BRUNO JOSÉ OLIVIERI DE SOUZA 31 May 2019 (has links)
Wireless sensor networks (WSNs) are an important means of collecting data in a variety of situations, such as monitoring large or hazardous areas. The retrieval of WSN data can yield better results with the use of unmanned aerial vehicles (UAVs), for example by increasing the amount of collected data and decreasing the time between the collection and the use of the data. In particular, disaster areas may be left without communication resources and with great residual risk to humans, at which point a WSN can be quickly deployed by air to collect relevant data until other measures can be put in place. Some studies present approaches to the use of UAVs for the collection of WSN data, focusing mainly on optimizing the path to be covered by a single UAV and relying on long-range communication that is always available; these studies do not explore the possibility of using several UAVs or the limitations on the range of communication. This work describes DADCA, a distributed, scalable approach capable of coordinating groups of UAVs in WSN data collection under communication-range restrictions and without the use of optimization techniques. The results show that the amount of data collected by DADCA is similar or superior, by up to 1 percent, to path optimization approaches. In the proposed approach, the delay in receiving sensor messages is up to 46 percent shorter than in other approaches, and the required processing onboard the UAVs can reach less than 75 percent of that of optimization-based algorithms. The results indicate that DADCA can match and even surpass other approaches, while adding the advantages of a distributed approach.
17

[en] MODEL DRIVEN QUESTIONNAIRES BASED ON A DOMAIN SPECIFIC LANGUAGE / [pt] QUESTIONÁRIOS ORIENTADOS POR MODELOS BASEADOS EM DSL

LUCIANE CALIXTO DE ARAUJO 04 May 2020 (has links)
Surveys are pervasive in the modern world, with usage ranging from customer satisfaction measurement to the tracking of global economic trends. At the core of the survey process is data collection, which is usually computer-aided. The development of data collection software involves the codification of questionnaires, which vary from simple, straightforward sequences of questions to complex questionnaires in which validations, derived-data calculations, triggers used to guarantee consistency, and dynamically created objects of interest are the rule. The questionnaire specification is part of what is called survey metadata and is a key factor for the quality of the collected data and of the survey results. Survey metadata establishes most of the requirements for survey support systems, including data collection software. As the survey process is planned and executed, those requirements need to be understood, communicated, coded and deployed in a sequence of activities that demands adequate techniques in order to be efficient and effective. Model Driven Engineering enters this picture with the concept of software crafted directly from models. In this context, this dissertation proposes the use of a Domain-Specific Language (DSL) for modeling questionnaires, presents a prototype, and evaluates the DSL as a strategy to reduce the gap between survey domain experts and software developers, improve reuse, eliminate redundancy, and minimize rework.
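As an illustration of the kind of questionnaire specification such a DSL can capture, the sketch below models a questionnaire as data with a validation and a skip-rule trigger. The syntax, field names, and tiny execution engine are invented for this example; they are not the DSL proposed in the dissertation.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical internal DSL for questionnaires: questions carry validations
# and skip-rule triggers, so a generic engine can drive data collection.

@dataclass
class Question:
    code: str
    text: str
    validate: Callable[[str], bool] = lambda _: True
    ask_if: Callable[[dict], bool] = lambda _: True     # trigger / skip rule

@dataclass
class Questionnaire:
    title: str
    questions: list = field(default_factory=list)

    def run(self, answer_source: Callable[[Question], str]) -> dict:
        answers = {}
        for q in self.questions:
            if not q.ask_if(answers):            # consistency trigger
                continue
            value = answer_source(q)
            if not q.validate(value):
                raise ValueError(f"invalid answer for {q.code}: {value!r}")
            answers[q.code] = value
        return answers

# Example: age gates a follow-up question (a simple skip rule).
survey = Questionnaire("Household survey", [
    Question("AGE", "How old are you?", validate=lambda v: v.isdigit()),
    Question("JOB", "What is your occupation?",
             ask_if=lambda a: int(a.get("AGE", "0")) >= 16),
])
canned = {"AGE": "14", "JOB": "student"}
print(survey.run(lambda q: canned[q.code]))      # JOB is skipped for age < 16
```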
18

[en] ALUMNI TOOL: INFORMATION RECOVERY OF PERSONAL DATA ON THE WEB IN AUTHENTICATED SOCIAL NETWORKS / [pt] ALUMNI TOOL: RECUPERAÇÃO DE DADOS PESSOAIS NA WEB EM REDES SOCIAIS AUTENTICADAS

LUIS GUSTAVO ALMEIDA 02 August 2018 (has links)
The use of search robots to collect information for a given context has always been a challenging problem and has grown substantially in recent years. For example, search robots may be used to capture data from professional social networks. In particular, such networks make it possible to study the professional trajectories of the alumni of a university and to answer several questions, such as: How long does a former student of PUC-Rio take to reach a management position? However, a common problem in this scenario is the inability to collect information due to authentication systems, which prevent a search robot from accessing certain pages and content. This dissertation describes a solution for capturing data that circumvents the authentication problem and automates the data collection process. The proposed solution collects data from user profiles of a professional social network for storage in a database and later analysis. The dissertation also contemplates the possibility of adding several other data sources, with emphasis on a data warehouse structure.
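A minimal sketch of the general approach (authenticating a session and reusing its cookies to fetch profile pages) is shown below using the requests and BeautifulSoup libraries. The URLs, form fields, and selectors are placeholders, not the endpoints of any real social network, and the dissertation's crawler may be built quite differently.

```python
import requests
from bs4 import BeautifulSoup

# Sketch only: URLs and form field names are placeholders, not the real
# endpoints or parameters of any specific professional social network.
LOGIN_URL = "https://example-network.test/login"
PROFILE_URL = "https://example-network.test/in/{username}"

def fetch_profile(username: str, email: str, password: str) -> dict:
    """Authenticate once, then reuse the session cookies to fetch a profile."""
    with requests.Session() as session:
        session.post(LOGIN_URL, data={"email": email, "password": password},
                     timeout=30)
        resp = session.get(PROFILE_URL.format(username=username), timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Placeholder selectors; real pages require network-specific parsing.
        name_tag = soup.select_one("h1")
        headline_tag = soup.select_one(".headline")
        return {
            "name": name_tag.get_text(strip=True) if name_tag else None,
            "headline": headline_tag.get_text(strip=True) if headline_tag else None,
        }
```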
19

Anotação automática de dados geográficos baseada em bancos de dados abertos e interligados. / Automatic annotation of spatial data based on open and interconnected databases.

HENRIQUES, Hamon Barros. 07 May 2018 (has links)
Recently, Spatial Data Infrastructures (SDIs) have become popular as an important solution for easing the interoperability of geographic data offered by different organizations. An important challenge that must be overcome by such infrastructures is allowing their users to easily locate the available data and services. Presently, this task is implemented by means of catalog services. Although such services represent an important advance for the retrieval of geographic data, they still have serious limitations. Some of these limitations arise because catalog services resolve queries based on the information contained in their metadata records, which normally describe the characteristics of a service as a whole. In addition, many current catalogs solve queries with thematic restrictions based only on keywords, and have no formal means of describing the semantics of the available resources. To address this lack of semantics, this dissertation presents a solution for the automatic semantic annotation of the feature types (layers) and their attributes made available in an SDI. With this, search engines that use ontologies as input for solving their queries will find the geographic data that are semantically related to a particular topic. This research also describes an evaluation of the performance of the proposed solution on a sample of Web Feature Service (WFS) services.
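A minimal sketch of the annotation idea, matching feature-type and attribute names against ontology concept labels, is shown below. The tiny "ontology", the concept URIs, and the matching rule are simplified placeholders, not the matching strategy implemented in the dissertation.

```python
import re

# Tiny illustrative "ontology": label -> concept URI. The URIs are
# placeholders in the style of linked-open-data vocabularies.
ONTOLOGY = {
    "river": "http://example.org/onto/River",
    "road": "http://example.org/onto/Road",
    "municipality": "http://example.org/onto/Municipality",
    "population": "http://example.org/onto/Population",
}

def annotate_layer(layer_name, attribute_names):
    """Return candidate semantic annotations for a WFS feature type."""
    def match(term):
        tokens = re.split(r"[_\W]+", term.lower())
        return [uri for label, uri in ONTOLOGY.items() if label in tokens]

    return {
        "layer": layer_name,
        "layer_concepts": match(layer_name),
        "attribute_concepts": {a: match(a) for a in attribute_names},
    }

# Example: a hypothetical WFS layer with two attributes.
print(annotate_layer("br_municipality_limits", ["name", "total_population"]))
```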
20

Vyhledávání ve videu / Video Retrieval

Černý, Petr January 2012 (has links)
This thesis summarizes information retrieval theory and the basics of the relational model, and focuses on data indexing in relational database systems, with an emphasis on searching multimedia data. It includes a description of automatic extraction of multimedia content and of multimedia data indexing. The practical part discusses the design and implementation of a solution for improving the efficiency of similarity queries over the multidimensional vectors that describe multimedia data. The final part of the thesis discusses experiments with this solution.
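As a generic illustration of similarity search over feature vectors stored in a relational database, the sketch below uses a coarse SQL pre-filter on a one-dimensional summary (the vector norm) followed by exact distance re-ranking in the application. This is an assumed, simplified scheme, not the indexing solution implemented in the thesis.

```python
import sqlite3, math, json

def euclidean(a, b):
    """Exact Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, norm REAL, vec TEXT)")

vectors = {1: [0.1, 0.9, 0.2], 2: [0.8, 0.1, 0.1], 3: [0.2, 0.8, 0.3]}
for vid, v in vectors.items():
    conn.execute("INSERT INTO features VALUES (?, ?, ?)",
                 (vid, math.sqrt(sum(x * x for x in v)), json.dumps(v)))

def knn(query, k=2, norm_tolerance=0.5):
    qnorm = math.sqrt(sum(x * x for x in query))
    # Coarse SQL filter: keep only candidates whose norm is close to the query's.
    rows = conn.execute(
        "SELECT id, vec FROM features WHERE norm BETWEEN ? AND ?",
        (qnorm - norm_tolerance, qnorm + norm_tolerance)).fetchall()
    # Exact re-ranking of the surviving candidates.
    scored = [(euclidean(query, json.loads(vec)), vid) for vid, vec in rows]
    return sorted(scored)[:k]

print(knn([0.15, 0.85, 0.25]))   # nearest stored vectors to the query
```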
