231 |
Database metadata requirements for automated web development : a case study using PHP
Mgheder, Mohamed Ahmed (January 2009)
The Web has come a long way. It started as a distributed document repository and quickly became the springboard for a new type of application. Propped on top of the original HTML+HTTP architecture, this new application platform shifted the way the architecture was used, so that commands and functionality were embedded in the form data of Web requests rather than in the HTTP command conveying the request. This approach enabled Web requests to convey any type of data, not just document operations. It happened because the Web provides such a powerful platform on which to create applications, yet web development methods are still evolving toward the structure and stability required to take on this enormous new role. As the needs of developers change, the themes that arise most frequently become embedded into new environments to support those needs. Until recently, Web application programming has largely been done with a set of keywords and metaphors developed long before the Web became a popular place to program. APIs have been developed to support Web-specific features, but they are no replacement for fundamental changes in the programming environment itself. The growth of Web applications requires a new type of programming designed specifically for the needs of the Web. This thesis aims to contribute towards the development of an abstract framework to generate abstract and dynamic Web user interfaces that are not tied to a specific platform. To meet this aim, the thesis presents a general implementation of a prototype system that uses the information in database metadata in conjunction with PHP. Database metadata is a rich source of the information needed to build dynamic user interfaces. The thesis uses PHP and the abstraction library ADOdb to provide a generalised, database-metadata-based prototype; PHP places no restrictions on accessing and extracting database metadata from numerous database management systems. As a result, PHP and a relational database were used to build the proposed framework, with ADOdb linking the two technologies. The framework implemented in this thesis demonstrates that it is possible to generate different automatic Web entry forms that are not specific to any platform.
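As a hedged illustration of the underlying idea, the sketch below uses Python and SQLite rather than the thesis's PHP/ADOdb stack (where ADOdb's MetaColumns() would supply the column metadata); the demo table and the type-to-widget mapping are assumptions for the example, not the thesis's actual code.

```python
import sqlite3

# Illustrative mapping from SQL column types to HTML input types.
TYPE_TO_INPUT = {"INTEGER": "number", "REAL": "number", "TEXT": "text", "DATE": "date"}

def generate_form(conn: sqlite3.Connection, table: str) -> str:
    """Build an HTML entry form from a table's column metadata."""
    # PRAGMA table_info yields (cid, name, type, notnull, default, pk) per column.
    columns = conn.execute(f"PRAGMA table_info({table})").fetchall()
    fields = []
    for _cid, name, ctype, notnull, _default, pk in columns:
        if pk:  # primary keys are usually generated, not entered by users
            continue
        input_type = TYPE_TO_INPUT.get(ctype.upper(), "text")
        required = " required" if notnull else ""
        fields.append(f'<label>{name}: <input type="{input_type}" name="{name}"{required}></label>')
    return '<form method="post">\n  ' + "\n  ".join(fields) + "\n</form>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL, birth DATE)")
print(generate_form(conn, "person"))
```

The point of the pattern is that the form follows the schema: adding a column to the table changes the generated form with no change to application code.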
|
232 |
Žiniatinklio indeksavimo pagal jo metaduomenis tyrimas / Research of Web Indexing According to its Metadata
Orvydaitė, Indrė (02 September 2010)
Aim: having analysed the structure of web page headers, to investigate the influence of the metadata used in them on indexing.
Research object: the header described in a web page and its metadata.
Problem: a great deal of literature and many articles discuss the use and significance of metadata for building web pages and for web indexing, yet opinions differ: some claim that metadata has little influence on web indexing, others claim the opposite. Unfortunately, the available material on the influence of metadata is purely theoretical, with almost no real examples, so the research verifies the usefulness of metatags empirically and formulates a correct header description.
Research methodology. Theoretical methods: a review of how a search engine operates, a review of the factors influencing search engine optimisation, and an analysis of how metatags are described and used.
Practical methods: analysis of a web page header, collection of the metatag data placed in the header, collection of the keyword values described in the metatags, and observation of search results by keyword and page title.
Tools used: Mozilla Firefox 3.5.9, a web browser,
and Macromedia Dreamweaver 8.0, a web page authoring tool.
Research scope: indexing performance was compared by submitting search queries to several search engines at the same time.
The experiment covers the following cases: search queries by keyword... [see full text] / The purpose of this bachelor's thesis is to carry out in-depth research on how web indexing depends on page headers. The research was motivated by the abundance of information about methods that improve web indexing combined with the lack of practical examples. A high page rank in a search engine is very important, as it is the main route to success and popularity, and page rank can be increased by integrating metatags into a page's header; however, the literature offers conflicting accounts of how metatags operate and how they influence web indexing, which motivated this study of how search engines assess header metadata. The research was carried out on three search engines. Its main object is the header described in a web page and its metadata.
The theoretical part analyses how a search engine operates, the main search engine optimisation factors, and the structure and value of the metatags used in page headers. The practical part analyses header structure and the metadata used in headers, investigating how the keyword metatag and the title tag influence web indexing. The work produced a correct example of how headers should be filled in, together with recommendations for improving web indexing... [see full text]
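As a concrete companion to the research object, here is a minimal sketch (Python standard library only; the sample page and its keywords are hypothetical) that extracts the title and metatags from a page header, the same data the study collected by observation.

```python
from html.parser import HTMLParser

class HeadMetadataParser(HTMLParser):
    """Collect the <title> text and <meta> name/content pairs from a page header."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

page = """<html><head>
  <title>Example Page on Web Indexing</title>
  <meta name="keywords" content="indexing, metadata, search engines">
  <meta name="description" content="How header metadata affects indexing.">
</head><body></body></html>"""

parser = HeadMetadataParser()
parser.feed(page)
print(parser.title)             # Example Page on Web Indexing
print(parser.meta["keywords"])  # indexing, metadata, search engines
```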
|
233 |
Using natural language generation to provide access to semantic metadata
Hielkema, Feikje (January 2010)
In recent years, the use of metadata to describe and share resources has grown in importance, especially in the context of the Semantic Web. However, access to metadata is difficult for users without experience of description logics or formal languages, which currently describes most web users. There is a strong need for interfaces that provide easy access to semantic metadata, enabling novice users to browse, query and create it easily. This thesis describes a natural language generation interface to semantic metadata called LIBER (Language Interface for Browsing and Editing RDF), driven by domain ontologies that are integrated with domain-specific linguistic information. LIBER uses the linguistic information to generate fluent descriptions and search terms through syntactic aggregation. The tool contains three modules supporting metadata creation, querying and browsing, which implement the WYSIWYM (What You See Is What You Meant) natural language generation approach. Users can add and remove information by editing system-generated feedback texts. Two studies have been conducted to evaluate LIBER's usability and compare it to a different Semantic Web interface. The studies showed that subjects with no prior experience of the Semantic Web could use LIBER effectively to create, search and browse metadata, and they were a useful source of ideas for improving LIBER's usability. However, the results of these studies were less positive than we had hoped, and users actually preferred the other Semantic Web tool. This has raised questions about which user audience LIBER should aim for, and about the extent to which the underlying ontologies influence the usability of the interface. LIBER's portability to other domains is supported by a tool with which ontology developers without a background in linguistics can prepare their ontologies for use in LIBER by adding the necessary linguistic information.
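To give a feel for the generation step, the sketch below (Python; the triples, property names and templates are invented for illustration, not LIBER's actual ontology or grammar) turns RDF-style triples into an English description, folding repeated properties into a single clause in the spirit of syntactic aggregation.

```python
# Triples describing a resource (subject, property, object); terms are hypothetical.
triples = [
    ("doc1", "hasAuthor", "F. Hielkema"),
    ("doc1", "hasAuthor", "C. Mellish"),
    ("doc1", "hasTitle", "Field notes"),
]

# Per-property linguistic templates, analogous to the domain-specific
# linguistic information LIBER attaches to its ontologies.
templates = {"hasAuthor": "was written by", "hasTitle": "is titled"}

def describe(subject, triples):
    # Group objects by property so a repeated property is aggregated into
    # one clause ("written by X and Y") instead of two sentences.
    by_prop = {}
    for s, p, o in triples:
        if s == subject:
            by_prop.setdefault(p, []).append(o)
    clauses = []
    for prop, objs in by_prop.items():
        joined = objs[0] if len(objs) == 1 else ", ".join(objs[:-1]) + " and " + objs[-1]
        clauses.append(f"{templates[prop]} {joined}")
    return f"This resource {', and '.join(clauses)}."

print(describe("doc1", triples))
# This resource was written by F. Hielkema and C. Mellish, and is titled Field notes.
```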
|
234 |
Dynamic web forms development using RuleML : building a framework using metadata driven rules to control Web forms generation and appearance
Albhbah, Atia Mahmod (January 2013)
Web forms development for Web-based applications is often expensive, laborious, error-prone and time-consuming. Web forms are used by many different people with different backgrounds and many demands, and there is a very high cost associated with updating Web application systems to meet those demands. A wide range of techniques and ideas to automate the generation of Web forms exists. These techniques, however, cannot generate the most dynamic behaviour of form elements, and they make insufficient use of database metadata to control Web forms' generation and appearance. In this thesis, different techniques are proposed that use RuleML and database metadata to build rulebases that improve the automatic and dynamic generation of Web forms. First, the thesis proposes a RuleML-format rulebase using Reaction RuleML to support the development of automated Web interfaces. Database metadata can be extracted from the system catalogue tables of typical relational database systems and used in conjunction with the rulebase to produce appropriate Web form elements. Results show that this mechanism successfully insulates application logic from code and suggest that the method can be extended from generic metadata rules to more domain-specific rules. Second, it proposes common-sense rules and domain-specific rulebases in Reaction RuleML format, used in conjunction with database metadata rules, to extend support for the development of automated Web forms. Third, it proposes rules that involve code, to implement richer semantics for Web forms. Separation between the content, logic and presentation of Web applications has become an important issue for faster development and easier maintenance: just as CSS is applied on the client side to control the overall presentation of Web applications, a set of rules can give a similar consistency to the appearance and operation of any set of forms that interact with the same database. We develop rules to order Web form elements and query forms using the Reaction RuleML format in conjunction with database metadata rules. The results show the potential of RuleML formats for representing database structural and active semantics. Fourth, it proposes a RuleML-based approach to support greater semantics, for example advanced domain support, even when this is not a DBMS feature. The approach is to specify most of the semantics associated with data stored in an RDBMS, to overcome some RDBMS limitations; RuleML can then be used to represent database metadata in an external format.
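A hedged sketch of the rulebase idea follows (Python; the XML dialect below is a drastic simplification in the spirit of Reaction RuleML, not valid RuleML itself, and the rules and metadata attributes are invented): rules match conditions on column metadata and pick the form widget to render.

```python
import xml.etree.ElementTree as ET

# A simplified rulebase: each rule maps a condition on column metadata to a widget.
rulebase = ET.fromstring("""
<rulebase>
  <rule><if type="date"/><then widget="datepicker"/></rule>
  <rule><if type="varchar" max_length="gt:255"/><then widget="textarea"/></rule>
  <rule><if type="varchar"/><then widget="textbox"/></rule>
  <rule><if foreign_key="true"/><then widget="dropdown"/></rule>
</rulebase>
""")

def matches(cond, column):
    """True if every attribute condition on the rule holds for this column."""
    for attr, expected in cond.attrib.items():
        value = column.get(attr)
        if expected.startswith("gt:"):
            if not (value is not None and value > int(expected[3:])):
                return False
        elif str(value).lower() != expected:
            return False
    return True

def pick_widget(column):
    # First matching rule wins, so more specific rules are listed first.
    for rule in rulebase:
        if matches(rule.find("if"), column):
            return rule.find("then").get("widget")
    return "textbox"  # default widget

# Column metadata as it might be read from a system catalogue.
print(pick_widget({"type": "varchar", "max_length": 1000}))  # textarea
print(pick_widget({"type": "date"}))                         # datepicker
print(pick_widget({"type": "int", "foreign_key": True}))     # dropdown
```

Because the rulebase is external data rather than code, the same forms engine can be re-pointed at generic, common-sense or domain-specific rule sets, which is the separation the thesis argues for.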
|
235 |
Intelligent image cropping and scaling
Deigmoeller, Joerg (January 2011)
Nowadays there exists a huge number of end devices with different screen properties for watching television content, which is either broadcast or transmitted over the internet. To allow the best viewing conditions on each of these devices, different image formats have to be provided by the broadcaster. Producing content for every single format is, however, not practicable for the broadcaster, as it is far too laborious and costly. The most obvious solution is to produce one high-resolution format and prepare formats of lower resolution from it. One possibility is to simply scale video images to the resolution of the target image format; two significant drawbacks are the loss of image detail through downscaling and possibly unused image areas due to letterboxes or pillarboxes. A preferable solution is first to find the contextually most important region in the high-resolution format, and afterwards to crop this area with the aspect ratio of the target image format. On the other hand, defining the contextually most important region manually is very time-consuming, and applying that to live productions would be nearly impossible. Therefore, some approaches exist that define cropping areas automatically. To do so, they extract visual features, like moving areas in a video, and define regions of interest (ROIs) based on those; the ROIs are finally used to define an enclosing cropping area. The extraction of features is done without any knowledge about the type of content, so these approaches cannot distinguish between features that might be important in a given context and those that are not. The work presented in this thesis tackles the problem of extracting visual features based on prior knowledge about the content. Such knowledge is fed into the system in the form of metadata that is available from TV production environments. Based on the extracted features, ROIs are then defined and filtered depending on the analysed content. As proof of concept, the application finally adapts SDTV (Standard Definition Television) sports productions automatically to image formats of lower resolution through intelligent cropping and scaling. If no content information is available, the system can still be applied to any type of content through a default mode. The presented approach is based on the principle of a plug-in system: each plug-in represents a method for analysing video content information, either at a low level by extracting image features or at a higher level by processing extracted ROIs. The combination of plug-ins is determined by the incoming descriptive production metadata and hence can be adapted to each type of sport individually. The application has been comprehensively evaluated by comparing the results of the system against alternative cropping methods, using videos that were manually cropped by a professional video editor, statically cropped videos, and simply scaled, non-cropped videos. In addition to purely subjective evaluations, the gaze positions of subjects watching sports videos were measured and compared to the region-of-interest positions extracted by the system.
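The final geometric step, deriving a crop from a set of ROIs, can be sketched briefly (Python; the frame size, boxes and clamping policy are illustrative assumptions, and a real system like the one described would also smooth the window over time to avoid jitter).

```python
def crop_window(rois, frame_w, frame_h, target_aspect):
    """Find a crop enclosing all ROIs, padded to the target aspect ratio.

    rois: list of (x, y, w, h) boxes; target_aspect: width / height.
    """
    # Bounding box enclosing every region of interest.
    x0 = min(x for x, y, w, h in rois)
    y0 = min(y for x, y, w, h in rois)
    x1 = max(x + w for x, y, w, h in rois)
    y1 = max(y + h for x, y, w, h in rois)
    w, h = x1 - x0, y1 - y0
    # Grow the shorter side around the centre until the aspect ratio fits.
    if w / h < target_aspect:
        w = h * target_aspect
    else:
        h = w / target_aspect
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    # Clamp to the frame so the window never leaves the image.
    x = min(max(cx - w / 2, 0), frame_w - w)
    y = min(max(cy - h / 2, 0), frame_h - h)
    return x, y, w, h

# Two ROIs in an SDTV frame (720x576), cropped for a 4:3 portable target.
print(crop_window([(100, 200, 80, 60), (300, 250, 40, 40)], 720, 576, 4 / 3))
```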
|
236 |
Utilização de metadados no gerenciamento de acesso a servidores de vídeo. / Metadata utilization in the video servers access management.
Goularte, Rudinei (26 February 1998)
Experience with authoring didactic multimedia material for educational purposes exposes a major problem: how to provide an easy and efficient way to handle multimedia objects so that non-expert users (such as school teachers) are able to design and build their own presentations? Creating such presentations involves the storage, delivery, search and presentation of multimedia material (video in particular). A basic infrastructure that stores and efficiently delivers the video data is needed, but it is equally important to organise the data stored on the server so that users can access it easily. In the system that is the subject of this work, this is achieved through an interactive information management and retrieval system designed to ease access to items (or parts of items) stored on the server. The system's main feature is a metadata base containing the attributes of the videos stored on the server. Searches can be made by title, subject, length, author or content, or, most importantly in the case of didactic material, by a specific scene or frame. The system was implemented in a client/server architecture using the Java programming language. Communication between clients and servers uses Visibroker 3.0, a distributed-objects programming tool that follows the CORBA standard. Data access to the metadata base uses a PostgreSQL driver that follows the JDBC API. For evaluation purposes, a playback tool was built with the Java Media Framework (JMF). An analysis was carried out to verify the impact of the CORBA and JDBC technologies on the system; it showed that JDBC imposes a far more significant delay than CORBA. A further conclusion is that using metadata provides more interactive searching, saves time during the editing process, and saves storage space by sharing objects such as videos, scenes and frames.
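A minimal sketch of the metadata-base idea follows (Python; the record fields and sample entries are hypothetical, and the thesis's actual base sits in PostgreSQL behind JDBC): per-scene attributes make it possible to search inside a video and to share scenes between presentations.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    number: int
    start_frame: int
    end_frame: int
    description: str

@dataclass
class VideoRecord:
    title: str
    subject: str
    author: str
    length_s: int
    scenes: list = field(default_factory=list)

catalogue = [
    VideoRecord("Cell division", "biology", "R. Silva", 600, [
        Scene(1, 0, 1500, "interphase"),
        Scene(2, 1501, 4500, "mitosis"),
    ]),
]

def find_scenes(catalogue, keyword):
    """Return (title, scene number, start frame) for scenes matching a keyword."""
    hits = []
    for video in catalogue:
        for scene in video.scenes:
            if keyword.lower() in scene.description.lower():
                hits.append((video.title, scene.number, scene.start_frame))
    return hits

print(find_scenes(catalogue, "mitosis"))  # [('Cell division', 2, 1501)]
```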
|
237 |
Um modelo de navegação exploratória para a infra-estrutura da web semântica / A model for exploratory navigation in the semantic web infrastructure
Pansanato, Luciano Tadeu Esteves (21 November 2007)
A model for exploratory navigation in the Semantic Web infrastructure, called NAVE (Navigation and Exploration Model), is proposed. NAVE draws on the information searching literature, on the levels of information seeking activities, and on the orienteering strategy; its particular aim is to ease the design and development of exploratory navigation systems. It is described through a graphical representation of the stages and decisions of the search process, their respective navigation-support techniques, and a set of recommendations. As a proof of concept, and to evaluate the feasibility of using NAVE in real-life applications, a system called ENS (Exploratory Navigation System) was developed. ENS comprises a variety of navigation tools, letting users choose the appropriate tool, or the best combination of tools (that is, the best strategy), according to their level of ability and background knowledge, their preferences, and the kind of information they are looking for at the moment. It lets users prioritise their tool choices differently at each step of the orienteering strategy embedded in NAVE; these tools can offer complementary advantages within an information searching task. ENS was evaluated with both qualitative and quantitative approaches, which served to refine the research questions and to explore the NAVE model. First, a usability study was conducted that combined several methods, including questionnaires, think-aloud protocols, interviews, and user interaction logging; it provided insights about the tools and the underlying model that were taken into account in further development. Second, an experimental study compared ENS with a keyword search approach; the findings gave statistical indications that participants performed better using ENS.
|
238 |
Representação da informação dinâmica em ambientes digitais / Dynamic information representation in digital environments
Ribeiro, Camila (09 August 2013)
This is an exploratory, interdisciplinary study, since it brings together two distinct academic areas: Information Science (IS) and Computer Science. Beyond studying representation in the virtual environment, its goal is to find a way of representing non-textual (multimedia) information that meets the "new needs" and possibilities the Semantic Web brings to contexts built with XML. Given the complexity of multimodal documents, which combine texts, videos and images described in more than one format, ontologies were chosen to represent the context of these documents and make their descriptions interoperable. Through a qualitative research methodology with exploratory and descriptive analysis, ontologies are presented that allow descriptions made in conventional but interoperable description formats to cover a whole set of multimodal objects. The ontology, which describes records in two interoperable formats, MARC21 and Dublin Core, was created with the Protégé software. To validate it, three practical applications were carried out with academic videos (a lecture, an undergraduate final-project defence, and a master's dissertation defence), each containing images taken from slideshows and composed into a final document. The result is a dynamic video representation that relates the video to the other objects it carries, together with interoperability between description formats such as Dublin Core and MARC21.
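To make the interoperability concrete, here is a hedged sketch (Python) of a crosswalk from a few Dublin Core elements to MARC21 fields. The field/subfield pairs follow commonly published DC-to-MARC mappings, but real crosswalks carry many more elements plus indicator and repeatability rules, and personal-name creators may map to 700/720 rather than 100.

```python
# A tiny Dublin Core -> MARC21 crosswalk covering a handful of elements.
DC_TO_MARC = {
    "title":   ("245", "a"),
    "creator": ("100", "a"),
    "subject": ("650", "a"),
    "date":    ("260", "c"),
}

def dc_to_marc(record):
    """Render a Dublin Core dict as flat MARC-like field strings."""
    fields = []
    for element, values in record.items():
        tag, code = DC_TO_MARC[element]
        # Repeatable elements (e.g. subject) become repeated fields.
        for value in (values if isinstance(values, list) else [values]):
            fields.append(f"{tag} ${code} {value}")
    return fields

video = {
    "title": "Master's dissertation defence (video)",
    "creator": "Ribeiro, Camila",
    "subject": ["Information representation", "Multimedia"],
    "date": "2013",
}
for line in dc_to_marc(video):
    print(line)
# 245 $a Master's dissertation defence (video)
# 100 $a Ribeiro, Camila
# 650 $a Information representation
# 650 $a Multimedia
# 260 $c 2013
```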
|
239 |
Avaliação da qualidade do dado espacial digital de acordo com parâmetros estabelecidos por usuários. / Digital spatial data quality evaluation based on users parameters.
Salisso Filho, João Luiz (02 May 2013)
Spatial information is increasingly present in the everyday life of ordinary citizens, businesses and government institutions. Applications such as Google Earth, Bing Maps and GPS-based location services present spatial information as a commodity. More and more public and private organisations incorporate spatial data into their decision processes, making the quality of this kind of data ever more critical. Given its multidisciplinary nature and, above all, the volume of information made available to users, a data evaluation method supported by computational processes is needed that lets users assess how fit such data really are for the intended use. This Master's dissertation proposes a structured, computer-supported methodology for evaluating spatial data. The methodology, based on standards published by the International Organization for Standardization (ISO), lets users of spatial data assess its quality against quality parameters they establish themselves, and also compare the quality exhibited by the spatial data with the quality information provided by the data producer. In this way, the method helps users determine the real fitness of spatial data for its intended use.
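The comparison at the heart of the method can be sketched as follows (Python; the quality elements are in the spirit of the ISO 19100-series data quality standards, and all numbers and thresholds are invented for illustration): measured quality values from a producer's report are checked against the user's own thresholds to yield a fitness-for-use verdict.

```python
# Measured values as they might appear in a producer's quality report.
producer_report = {
    "positional_accuracy_m": 2.5,   # RMSE of horizontal positions
    "completeness_pct": 97.0,       # share of real-world features captured
    "thematic_accuracy_pct": 92.0,  # share of correctly classified features
}

# A user's own thresholds for one intended use, e.g. 1:10,000 urban mapping.
user_requirements = {
    "positional_accuracy_m": ("<=", 5.0),
    "completeness_pct": (">=", 95.0),
    "thematic_accuracy_pct": (">=", 95.0),
}

def fitness_for_use(report, requirements):
    """Check each measured quality element against the user's threshold."""
    results = {}
    for element, (op, threshold) in requirements.items():
        value = report[element]
        results[element] = value <= threshold if op == "<=" else value >= threshold
    return results

verdict = fitness_for_use(producer_report, user_requirements)
print(verdict)                # thematic accuracy fails the 95% threshold
print(all(verdict.values()))  # False: not fit for this particular use
```

The same report can pass for one intended use and fail for another, which is exactly why the method compares against user-established parameters rather than a single absolute grade.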
|
240 |
Arquitetura orientada a serviços para aquisição de dados de experimentos em Weblab de abelhas. / Service oriented architecture for data acquisition of experiments in bee Weblab.
Najm, Leandro Halle (17 June 2011)
Environmental experiments are fundamental to understanding the effects of climate change, such as the decline of pollinators found in nature, and such experiments should be shared under an integrated methodology. Developing and applying information technology tools in different research areas is essential to improve process control and data analysis without requiring researchers from other fields to have advanced knowledge of computing technologies. To this end, it is important to use an open hardware and software infrastructure, made available to researchers through web portals known as Weblabs, for acquiring and sharing data obtained from sensors. This work presents an information systems architecture for implementing Weblabs based on SOA concepts, addressing the problem of heterogeneity and interoperability of environments, since the data are collected by different sensor-network technologies into their own databases. This required modelling a central database capable of storing data from the different systems, accessible by consuming the services exposed by the Weblab.
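As a hedged sketch of the integration problem (Python; both payload formats and all field names are invented), the service layer can normalise heterogeneous sensor-network payloads into one common record before it reaches the central database.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    """Common record for the central database, regardless of sensor network."""
    sensor_id: str
    quantity: str
    value: float
    unit: str
    observed_at: datetime

# Adapters normalising payloads from two hypothetical sensor-network formats.
def from_network_a(payload: dict) -> Reading:
    # Network A reports temperature in tenths of a degree Celsius.
    return Reading(payload["id"], "temperature", payload["t"] / 10.0, "degC",
                   datetime.fromtimestamp(payload["ts"], tz=timezone.utc))

def from_network_b(payload: dict) -> Reading:
    # Network B already reports SI units but uses different field names.
    return Reading(payload["node"], payload["kind"], payload["reading"],
                   payload["unit"], datetime.fromisoformat(payload["time"]))

ADAPTERS = {"net_a": from_network_a, "net_b": from_network_b}

def ingest(source: str, payload: dict) -> Reading:
    """The service endpoint's core: dispatch to the adapter for the source."""
    return ADAPTERS[source](payload)

print(ingest("net_a", {"id": "hive-03", "t": 256, "ts": 1700000000}))
print(ingest("net_b", {"node": "hive-07", "kind": "humidity",
                       "reading": 61.2, "unit": "%",
                       "time": "2023-11-14T22:13:20+00:00"}))
```

Keeping the adapters behind one service interface is what lets new sensor networks join the Weblab without changes to the central database or to its consumers.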
|