531

Models and operators for extension of active multimedia documents via annotations / Modelos e operadores para extensão de documentos multimídia ativos via anotações

Martins, Diogo Santana 18 November 2013 (has links)
Multimedia production is an elaborate activity composed of multiple information management and transformation tasks that support an underlying creative goal. Examples of these activities are the structuring, organization, modification and versioning of media elements, all of which depend on the maintenance of supporting documentation and metadata. In professional productions, which can count on proper human and material resources, such documentation is maintained by the production crew and is key to securing high quality in the final content. In less resourceful settings, such as amateur-oriented productions, at least reasonable quality standards are still desirable in most cases; however, the perceived difficulty of managing and transforming content can inhibit amateurs from producing content of acceptable quality. This problem has been tackled on many fronts, for instance via annotation methods, smart browsing methods and authoring techniques, to name a few. In this dissertation, the primary objective is to take advantage of user-created annotations in order to aid amateur-oriented multimedia authoring. To support this objective, the contributions are built around an authoring approach based on structured multimedia documents. First, a custom language for Web-based multimedia documents is defined, based on SMIL (Synchronized Multimedia Integration Language). This language brings several contributions, such as the formalization of an extended graph-based temporal layout model, live editing of document elements and extended reuse features. Second, a model for document annotation and an algebra for document transformations are defined, both of which allow composition and extraction of multimedia document fragments based on annotations. Third, the previous contributions are integrated into a Web-based authoring tool, which allows a document to be manipulated while it is active. Such manipulations encompass several interaction techniques for enriching, editing, publishing and extending multimedia documents. The contributions have been instantiated with multimedia sessions obtained from synchronous collaboration tools, in scenarios of video-based lectures, meetings and video-based qualitative research. These instantiations demonstrate the applicability and utility of the contributions.
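The annotation model and transformation algebra described above suggest a simple mechanical reading: fragments of an active document are selected by matching their annotations. A minimal sketch of that idea follows; the `Element` class and the `extract_by_annotation` function are illustrative assumptions, not the operators actually defined in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A timed element of a multimedia document (e.g., a video clip or a slide)."""
    media_id: str
    begin: float                        # start time in seconds
    end: float                          # end time in seconds
    annotations: set = field(default_factory=set)

def extract_by_annotation(document, wanted_tags):
    """Return the fragments of `document` whose annotations intersect `wanted_tags`.

    Stand-in for an extraction operator of an annotation-based document algebra;
    the real operators would also handle composition and the temporal relations
    between the extracted fragments.
    """
    return [el for el in document if el.annotations & set(wanted_tags)]

# Example: keep only the segments of a recorded lecture annotated "question" or "summary".
lecture = [
    Element("video-1", 0.0, 300.0, {"introduction"}),
    Element("video-1", 300.0, 620.0, {"question", "student"}),
    Element("video-1", 620.0, 900.0, {"summary"}),
]
print(extract_by_annotation(lecture, {"question", "summary"}))
```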
532

Exploitation didactique d’un corpus pour l’enseignement de la compréhension orale du FLE en milieu universitaire chinois : didactisation de la banque de données multimédia CLAPI (Corpus de Langues Parlées en Interaction) / The research of corpus in teaching listening of French as a foreign language in Chinese universities : didactisation of multimedia database CLAPI (corpus of spoken languages in interaction)

Zhang, Chang 21 September 2017 (has links)
Listening comprehension is a key objective in the process of learning a foreign language, yet Chinese students majoring in French as a foreign language often find understanding spoken French difficult. This study draws on the CLAPI database (Corpus de langues parlées en interaction) to propose approaches for teaching listening comprehension of French as a foreign language in Chinese universities. The work first presents the educational and cultural context in order to interpret the culture of teaching and learning French in China; it then reviews theoretical work on listening comprehension in a foreign language and on the contributions of corpora; next, a field study is carried out with students and teachers of French in Chinese universities to identify the strengths and limitations of current listening comprehension courses. By confronting the theoretical background and the Chinese teaching context with the results of our survey, we offer reflections on the use of spoken corpora for teaching listening comprehension in the Chinese context, together with proposals addressed above all to our colleagues teaching French in Chinese universities.
533

Analyse et évaluation de structures orientées document / Analysis and evaluation of document-oriented structures

Gomez Barreto, Paola 13 December 2018 (has links)
Nowadays, millions of different data sources produce a huge quantity of unstructured and semi-structured data that change constantly. Information systems must manage these data while ensuring scalability and performance. As a result, they have had to adapt to support heterogeneous databases, including NoSQL databases. These databases offer schema-free data structures with great flexibility, but without a clear separation of the logical and physical layers. Data can be duplicated, fragmented and/or incomplete, and can also change as business needs evolve. The flexibility and absence of schema in document-oriented NoSQL systems such as MongoDB make it possible to explore new structuring alternatives without facing such constraints. The choice of structure nevertheless remains important and critical, because it has several impacts to consider and there are many structuring options to choose from. We therefore propose to return to a design phase in which quality aspects and the impacts of the structure are taken into account, so that the decision can be made in a more informed manner. In this context, we propose SCORUS, a system for the analysis and evaluation of document-oriented structures. It aims to facilitate the study of document-oriented semi-structuring possibilities, such as those offered by MongoDB, and to provide objective metrics that better highlight the advantages and disadvantages of each solution with respect to user needs. A sequence of three phases can compose a design process, and each phase can also be performed independently for analysis and tuning purposes. The general strategy of SCORUS consists of:
1. Generation of a set of structuring alternatives: starting from a UML model of the data, a large set of possible structuring variants for these data is produced automatically.
2. Evaluation of the alternatives using a set of structural metrics: this evaluation takes a set of structuring variants and computes the metrics with respect to the modeled data.
3. Analysis of the evaluated alternatives: the metrics are used to analyze the interest of the considered alternatives and to choose the most appropriate one(s).
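A tiny, dependency-free sketch of what phases 1 and 2 could look like in practice: every one-to-many relation of a toy model is either embedded or referenced, and each resulting alternative is scored with a single structural metric. The entity names, the embed/reference dichotomy as the only degree of freedom, and the depth metric are simplifying assumptions for illustration, not SCORUS's actual generator or metric set.

```python
import itertools

# Toy model: an Author entity has many Article entities.
RELATIONS = [("Author", "Article")]

def structuring_alternatives():
    """Phase 1 (simplified): each relation is either embedded (child documents
    nested inside the parent) or referenced (child stored in its own collection
    and linked by identifier)."""
    for choices in itertools.product(["embed", "reference"], repeat=len(RELATIONS)):
        yield dict(zip(RELATIONS, choices))

def depth_metric(alternative):
    """Phase 2 (simplified): one structural metric, the maximum nesting depth of
    the resulting documents. A real evaluation would combine several metrics
    (depth, width, redundancy, number of collections, ...)."""
    return 1 + sum(1 for choice in alternative.values() if choice == "embed")

for alt in structuring_alternatives():
    print(alt, "-> max depth:", depth_metric(alt))
```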
534

Modélisation NoSQL des entrepôts de données multidimensionnelles massives / Modeling Multidimensional Data Warehouses into NoSQL

El Malki, Mohammed 08 December 2016 (has links)
Decision support systems occupy a prominent place in companies and large organizations, enabling the analyses that underpin decision making. With the advent of big data, the volume of data to be analyzed reaches critical sizes, challenging conventional data warehousing approaches, whose current solutions are mainly based on R-OLAP databases. With the emergence of major Web platforms such as Google, Facebook, Twitter and Amazon, many solutions for processing big data have been developed, known as "Not Only SQL" (NoSQL). These new approaches are an interesting path towards building multidimensional data warehouses capable of handling large volumes of data. Questioning the R-OLAP approach requires revisiting the principles of multidimensional data warehouse modeling. In this manuscript, we propose processes for implementing multidimensional data warehouses with NoSQL models. We define four processes for each of two NoSQL models: a column-oriented model and a document-oriented model. Each of these processes fosters a specific treatment. Moreover, the NoSQL context also makes it more complex to efficiently compute the pre-aggregates that are usually set up in the R-OLAP context (the lattice). We extend our implementation processes to take the construction of the lattice into account in both retained models. Since it is difficult to choose a single NoSQL implementation that efficiently supports all applicable processing, we propose two translation processes: the first concerns intra-model processes, i.e., rules for passing from one implementation to another implementation of the same NoSQL logical model, while the second defines the transformation rules from an implementation of one logical model to an implementation of another logical model.
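To make the document-oriented option concrete, here is a small illustration (assumed, not taken from the manuscript) of two ways the same multidimensional fact can be laid out as documents, together with a naive pre-aggregation over one node of the lattice. Collection layouts, field names and the aggregation code are illustrative only.

```python
# Two document-oriented layouts for the same multidimensional fact
# (a sales amount analysed along Date, Store and Product dimensions).

# Flat layout: one document per fact, dimension attributes inlined at top level.
fact_flat = {
    "amount": 129.90,
    "date_day": "2016-12-08", "date_month": "2016-12", "date_year": 2016,
    "store_id": "S42", "store_city": "Toulouse",
    "product_id": "P7", "product_family": "Books",
}

# Nested layout: one sub-document per dimension.
fact_nested = {
    "amount": 129.90,
    "date":    {"day": "2016-12-08", "month": "2016-12", "year": 2016},
    "store":   {"id": "S42", "city": "Toulouse"},
    "product": {"id": "P7", "family": "Books"},
}

def aggregate(facts, dims):
    """Group-by style pre-aggregation over flat fact documents: one node of the
    aggregate lattice, e.g. dims = ("date_year", "store_city")."""
    totals = {}
    for f in facts:
        key = tuple(f[d] for d in dims)
        totals[key] = totals.get(key, 0.0) + f["amount"]
    return totals

print(aggregate([fact_flat], ("date_year", "store_city")))
```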
535

Fábula PXP - a técnica de Programação Exploratória (PXP): projetos de criação e desenvolvimento de jogos digitais / Fábula PXP - the Exploratory Programming (PXP) technique: digital game design and development projects

Lemes, David de Oliveira 17 April 2015 (has links)
The development of digital games is an activity that grows exponentially as the many devices that support games become popular. From a game development perspective, several tools nowadays "promise" to enable even the most inexperienced programmer to create and develop a digital game. This research presents a working model that seeks to show the importance of narrative in games and how ideas are organized to produce a digital game and, above all, a programming technique that helps the game designer understand how a calculating machine, the computer, can expand or limit the use of the original ideas and narratives of a project.
536

Le spectre du document : supports, signes et sens dans l’œuvre romanesque de Charles Dickens / The spectrum of documents : media, signs and meaning in Charles Dickens’s novels

Prest, Céline 26 November 2016 (has links)
From his birth in 1812 until his death in 1870, Charles Dickens witnessed the nineteenth-century development of the industrial era, of the consumer society and of new reading practices that transformed the uses of documents. Dickens comments on these new uses throughout his work, both in his novels and in his essays, continually exposing their ambivalence and demonstrating a persistent uncertainty concerning the power of the author, which is never taken for granted. This dissertation reflects on the role of the reader and the act of reading as they are presented in Dickens's novels. Dickens's characters are presented as concrete readers and imperfect interpreters of the various texts they encounter. Their reception of written texts depends on the one hand on their subjectivity and on the other hand on the materiality of the medium that comes between the reader and the text. Thus the sense construed by a Dickensian reader can differ from the original intent of the text's author: unlike an oral message, a written text is part of a deferred communication, opening a gap into which subjectivity and materiality insert themselves. By considering these two parameters within the process of written communication, this work adopts the perspectives of the Anglo-Saxon critical field known as Book History. The analysis of the document as a textual object also connects with the theoretical reflections on objects developed in Thing Theory. In line with these approaches, this work is interested in the "material imagination" of Dickens's œuvre, which thinks, dreams and lives in the matter of the document. We set out to understand why and how Dickens paradoxically attempts to do away with inert materials such as paper, signboards and stone as media for writing, and then to examine his dream of living texts that find their possibility in man, beyond matter.
537

Indexation et interrogation de pages web décomposées en blocs visuels / Indexing and querying Web pages decomposed into visual blocks

Faessel, Nicolas 14 June 2011 (has links)
This thesis is about indexing and querying Web pages. We propose a new model, BlockWeb, based on the decomposition of Web pages into a hierarchy of visual blocks. This model takes into account the visual importance of each block as well as the permeability of each block to the content of its neighboring blocks on the page. Splitting a page into blocks has several advantages in terms of indexing and querying. In particular, it allows the system to be queried at a finer granularity than the whole page: the blocks most similar to a query can be returned instead of the complete page. A page is modeled as a directed acyclic graph, the IP graph, in which each node is associated with a block and is labeled with the importance coefficient of that block, and each arc is labeled with the permeability coefficient of the target node's content to the source node's content. In order to build this graph from the block-tree representation of a page, we propose a new language, XIML (XML Indexing Management Language), a rule-based language in the style of XSLT. The model has been assessed on two distinct datasets: finding the best entry point in a corpus of electronic newspaper articles, and image indexing and retrieval in a corpus drawn from Web pages of the ImagEval 2006 campaign. We present the results of these experiments.
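As a rough reading of the importance/permeability idea, the sketch below (assumed, not the thesis's actual equations) indexes a block with its own terms weighted by its importance, plus the terms of neighboring blocks attenuated by the permeability of the arcs pointing to it.

```python
# Illustration only: a page as a small graph of blocks.
# Each block has an importance coefficient; each arc (source -> target) carries
# the permeability of the target block to the source block's content.

importance = {"menu": 0.2, "article": 0.9, "caption": 0.6}
permeability = {("caption", "article"): 0.8, ("menu", "article"): 0.1}
text = {"menu": ["home", "contact"], "article": ["storm", "coast"], "caption": ["storm", "photo"]}

def indexed_terms(block):
    """Terms indexed for `block`: its own text weighted by its importance, plus
    neighbours' text weighted by importance * permeability of the incoming arc."""
    weights = {}
    for term in text.get(block, []):
        weights[term] = weights.get(term, 0.0) + importance[block]
    for (src, dst), perm in permeability.items():
        if dst == block:
            for term in text.get(src, []):
                weights[term] = weights.get(term, 0.0) + importance[block] * perm
    return weights

print(indexed_terms("article"))   # "storm" counts from both the article and its caption
```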
538

Discovering and Tracking Interesting Web Services

Rocco, Daniel J. (Daniel John) 01 December 2004 (has links)
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description. To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping. Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML. Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
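The "containerized" view described above can be pictured with a few lines of code. The sketch below is only a schematic illustration of separating structure, tags, attributes and content into distinct containers; it does not reproduce the actual Page Digest encoding.

```python
from xml.etree import ElementTree as ET

def containerize(xml_text):
    """Split a document into separate containers for structure, tags, attributes
    and text content -- a simplified illustration in the spirit of a containerized
    encoding, not the real Page Digest format."""
    root = ET.fromstring(xml_text)
    tags, attrs, texts, children = [], [], [], []
    def visit(node):
        tags.append(node.tag)
        attrs.append(dict(node.attrib))
        texts.append((node.text or "").strip())
        children.append(len(node))        # child counts encode the tree shape
        for child in node:
            visit(child)
    visit(root)
    return {"structure": children, "tags": tags, "attributes": attrs, "content": texts}

doc = "<article id='1'><title>Page Digest</title><body>Tag redundancy removed</body></article>"
print(containerize(doc))
```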
539

Wrapper application generation for semantic web

Han, Wei 01 December 2003 (has links)
No description available.
540

A Document Similarity Measure and Its Applications

Gan, Zih-Dian 07 September 2011 (has links)
In this paper, we propose a novel similarity measure for document data processing and apply it to text classification and clustering. For two documents, the proposed measure takes three cases into account: (a) the feature considered appears in both documents, (b) the feature considered appears in only one document, and (c) the feature considered appears in neither document. In the first case, we give a lower bound and decrease the similarity according to the difference between the feature values of the two documents. In the second case, we give a fixed value regardless of the magnitude of the feature value. In the last case, the feature makes no contribution. We apply the measure to the similarity-based single-label classifier k-NN and to the multi-label classifier ML-KNN, and adapt these properties to measure the similarity between a document and a specific set of documents for clustering with a k-means-like algorithm, comparing its effectiveness with other measures. Experimental results show that our proposed method works more effectively than others.
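The three cases map directly onto a short scoring function. The sketch below follows that description, but the constants (the lower bound for shared features, the fixed value for one-sided features) and the normalization are assumptions for illustration, not the paper's published formula.

```python
def pairwise_similarity(a, b, lower_bound=0.5, presence_value=0.0):
    """Similarity between two feature vectors (dicts of term -> positive weight).
    - feature in both documents: start from a lower bound and reduce the
      contribution as the two feature values diverge;
    - feature in exactly one document: a fixed value, regardless of magnitude;
    - feature in neither document: ignored.
    """
    features = set(a) | set(b)
    if not features:
        return 0.0
    score = 0.0
    for f in features:
        if f in a and f in b:
            diff = abs(a[f] - b[f]) / max(a[f], b[f])          # 0 = identical, 1 = maximally different
            score += lower_bound + (1.0 - lower_bound) * (1.0 - diff)
        else:
            score += presence_value
    return score / len(features)

doc1 = {"storm": 0.7, "coast": 0.3}
doc2 = {"storm": 0.5, "rain": 0.4}
print(round(pairwise_similarity(doc1, doc2), 3))
```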
