  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Mesures de comparabilité pour la construction assistée de corpus comparables bilingues thématiques / Comparability measures for the assisted construction of thematic bilingual comparable corpora

Ke, Guiyao 26 February 2014 (has links) (PDF)
Thematic comparable corpora gather texts on the same topic written in several languages; the texts are strongly similar but are not mutual translations. Compared with parallel corpora, which consist of translation pairs, comparable corpora offer three advantages: first, they are rich and large resources, both in volume and in the period covered; second, they provide original, thematic linguistic resources; finally, they are less costly to build than parallel corpora. With the considerable growth of the Web, abundant raw material is available for constructing comparable corpora. On the other hand, the quality of comparable corpora is essential for their use in fields such as machine or computer-assisted translation, bilingual terminology extraction, cross-language information retrieval, etc. The goal of this thesis is to develop a methodological approach and software tooling to assist the on-demand construction, from the Web, of "good quality" bilingual thematic comparable corpora. We first introduce the notion of a comparability measure relating two linguistic spaces and, starting from a reference quantitative comparability measure, we propose two variants, called thematic comparability, which we evaluate with a protocol based on the progressive degradation of a parallel corpus. We then propose a new method to improve the co-clustering and co-classification of bilingual documents, as well as the alignment of comparable clusters; it merges native similarities defined within each linguistic space with similarities induced by the comparability measure used.
Finally, we propose an integrated approach based on the above contributions to assist the construction, from the Web, of quality bilingual thematic comparable corpora. This approach includes a manual validation step to guarantee the quality of the alignment of comparable clusters. By tuning the alignment comparability threshold, different comparable corpora with varying comparability levels can be delivered according to the specified needs. The experiments we conducted on RSS feeds from major international newspapers appear relevant and promising.
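As a rough illustration of the kind of quantitative comparability measure discussed above, the sketch below scores two word sets by bilingual-dictionary coverage, in the spirit of the reference measure rather than the thesis's exact formulation; the toy dictionary and word lists are assumptions.

```python
def comparability(src_words, tgt_words, bilingual_dict):
    """Dictionary-coverage comparability: fraction of words on one side with
    at least one translation present on the other side, averaged both ways."""
    def coverage(words, other, d):
        words, other = set(words), set(other)
        if not words:
            return 0.0
        hits = sum(1 for w in words if d.get(w, set()) & other)
        return hits / len(words)

    # Invert the dictionary for the target-to-source direction.
    inv = {}
    for s, ts in bilingual_dict.items():
        for t in ts:
            inv.setdefault(t, set()).add(s)
    return 0.5 * (coverage(src_words, tgt_words, bilingual_dict)
                  + coverage(tgt_words, src_words, inv))

# Toy French-English dictionary (assumed, for illustration only).
D = {"corpus": {"corpus"}, "mesure": {"measure"}, "langue": {"language"}}
score = comparability(["corpus", "mesure", "langue"],
                      ["corpus", "measure", "politics"], D)
```

Degrading one side (replacing translated words with unrelated ones) lowers the score, which is the intuition behind the parallel-corpus degradation protocol.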
552

Alignement lexical en corpus comparables : le cas des composés savants et des adjectifs relationnels / Lexical alignment in comparable corpora: the case of neoclassical compounds and relational adjectives

Harastani, Rima 10 February 2014 (has links) (PDF)
This work concerns the automatic extraction of a list of terms aligned with their translations (i.e., a specialized bilingual lexicon) from a comparable corpus in a specialized domain. A comparable corpus consists of texts written in two different languages, with no translation relation between them, but belonging to the same domain. The contributions of this thesis aim at improving the quality of a specialized bilingual lexicon extracted from a comparable corpus. We propose methods dedicated to the translation of two types of terms that either share characteristics across several languages or are inherently problematic to translate: neoclassical compounds (terms containing at least one Greco-Latin root) and terms composed of a noun and a relational adjective. We also develop a method that exploits contexts rich in domain-specific terms to re-rank, within a specialized bilingual lexicon, the candidate translations proposed for a term. Experiments are carried out using two specialized comparable corpora (in the domains of breast cancer and renewable energy), for French, English, German and Spanish.
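To make the idea of compositional translation of neoclassical compounds concrete, here is a hedged sketch: split a term into known Greco-Latin roots and translate each component. The root table and the greedy segmentation strategy are illustrative assumptions, not the thesis's actual resources or algorithm.

```python
# Toy French -> English root table (assumed, for illustration only).
ROOTS_FR_EN = {"hydro": "hydro", "thérapie": "therapy",
               "cardio": "cardio", "logie": "logy"}

def split_compound(term, roots):
    """Greedy left-to-right segmentation of a term into known roots,
    preferring the longest match. Returns None if segmentation fails."""
    parts, i = [], 0
    while i < len(term):
        for j in range(len(term), i, -1):  # longest match first
            if term[i:j] in roots:
                parts.append(term[i:j])
                i = j
                break
        else:
            return None  # no root matches at position i
    return parts

def translate_compound(term, roots):
    """Translate a neoclassical compound component by component."""
    parts = split_compound(term, roots)
    if parts is None:
        return None
    return "".join(roots[p] for p in parts)

t = translate_compound("hydrothérapie", ROOTS_FR_EN)
```

A real system would of course need a much larger root inventory and a way to validate candidates against target-language evidence.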
553

Models and operators for extension of active multimedia documents via annotations / Modelos e operadores para extensão de documentos multimídia ativos via anotações

Diogo Santana Martins 18 November 2013 (has links)
Multimedia production is an elaborate activity composed of multiple information management and transformation tasks that support an underlying creative goal. Examples of these activities are structuring, organization, modification and versioning of media elements, all of which depend on the maintenance of supporting documentation and metadata. In professional productions, which can count on proper human and material resources, such documentation is maintained by the production crew and is key to securing high quality in the final content. In less resourceful configurations, such as amateur-oriented productions, at least reasonable quality standards are still desirable in most cases; however, the perceived difficulty of managing and transforming content can inhibit amateurs from producing content of acceptable quality. This problem has been tackled on many fronts, for instance via annotation methods, smart browsing methods and authoring techniques, just to name a few. In this dissertation, the primary objective is to take advantage of user-created annotations in order to aid amateur-oriented multimedia authoring. To support this objective, the contributions are built around an authoring approach based on structured multimedia documents. First, a custom language for Web-based multimedia documents is defined, based on SMIL (Synchronized Multimedia Integration Language). This language brings several contributions, such as the formalization of an extended graph-based temporal layout model, live editing of document elements and extended reuse features. Second, a model for document annotation and an algebra for document transformations are defined, both of which allow the composition and extraction of multimedia document fragments based on annotations. Third, the previous contributions are integrated into a Web-based authoring tool, which allows manipulating a document while it is active.
Such manipulations encompass several interaction techniques for enriching, editing, publishing and extending multimedia documents. The contributions have been instantiated with multimedia sessions obtained from synchronous collaboration tools, in scenarios of video-based lectures, meetings and video-based qualitative research. These instantiations demonstrate the applicability and utility of the contributions.
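As a loose illustration of annotation-based extraction of document fragments, the sketch below filters segments of a structured document by annotation value. The element names, attributes and annotation values are invented for illustration; the thesis defines its own SMIL-based language and transformation algebra.

```python
import xml.etree.ElementTree as ET

# Hypothetical annotated document (element names are assumptions).
DOC = """
<doc>
  <seg id="s1" ann="intro">Welcome</seg>
  <seg id="s2" ann="demo">Live demo</seg>
  <seg id="s3" ann="demo">Q&amp;A on the demo</seg>
</doc>
"""

def extract(tree, annotation):
    """Build a new document holding only the segments carrying
    the given annotation (a toy stand-in for fragment extraction)."""
    frag = ET.Element("doc")
    for seg in tree.iter("seg"):
        if seg.get("ann") == annotation:
            frag.append(seg)
    return frag

tree = ET.fromstring(DOC)
demo = extract(tree, "demo")
ids = [seg.get("id") for seg in demo]
```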
554

A vontade da verdade, a informação e o arquivo / The will to truth, information and the archive

Elias, Aluf Alba V. 18 May 2012 (has links)
This work investigates a possible relationship between a will to truth, information and archives, observing its unfolding through the act of documenting, at its genesis. The discussion is grounded in Michel Foucault's philosophical framework on the question of truth and the meaning of the Archive. The problem is situated in the current debate on the opening of the archives of the Brazilian military regime, in which the will to truth confronts the documentary sources held in the custody of archives. The work brings into the debate the differing perspectives on the relationship between truth and information in Information Science, Archival Science and History, disciplines commonly engaged with documentary sources, presenting the divergences among these fields regarding the validation of information. The dissertation concludes that recent studies on a Philosophy of Information grounded in documentary practices, initiated by Bernd Frohmann within Information Science, open a promising avenue of research for enriching the debate on the subject.
555

Preservação de documentos digitais : confiabilidade de mídias de CD-ROM e CD-R / Digital document preservation: reliability of CD-ROM and CD-R media

Innarelli, Humberto Celeste 26 May 2006 (has links)
Advisor: Paulo Sollero / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Abstract: The main focus of this work is the preservation of digital documents on CD-ROM and CD-R media. It begins with a review of the literature on research and projects in the area and an analysis of the physical and logical structure of CD-ROM and CD-R media, studying the variables that degrade these media and establishing a cause-versus-effect relationship. These studies ground the development of a software tool for assessing media reliability, built on a reliability model derived from system reliability theory and from existing studies in the area. The software provides media identification, storage of media data, and reliability analysis modules. The work also includes an experimental analysis combining visual inspection, optical microscope observation, and application of the reliability software; this analysis underpins the understanding of the variables related to media reliability and durability, and grounds the proposals made for digital document preservation. Finally, results are presented and discussed, policies for preserving digital documents on CD-ROM and CD-R media are proposed, and conclusions and suggestions for future work are given. / Master's in Mechanical Engineering (Solid Mechanics and Mechanical Design)
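The dissertation's reliability model is not reproduced here; as a generic sketch of what a model built on system reliability theory can look like, the snippet below combines per-layer Weibull survival functions as a series system. The Weibull form and all parameter values are assumptions for illustration.

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)^beta): probability a component survives to time t,
    with characteristic life eta and shape parameter beta."""
    return math.exp(-((t / eta) ** beta))

def series_reliability(layer_params, t):
    """A disc is readable only if every layer (e.g. substrate, dye,
    reflective layer) survives: series-system reliability is the product."""
    r = 1.0
    for eta, beta in layer_params:
        r *= weibull_reliability(t, eta, beta)
    return r

# Hypothetical characteristic lives in years for two degradation modes.
layers = [(30.0, 2.0), (50.0, 1.5)]
r10 = series_reliability(layers, 10.0)
```

The shape of the curve, not the numbers, is the point: reliability starts at 1 and decreases monotonically, and adding failure modes can only lower it.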
556

Identificação automática de relações multidocumento / Automatic identification of multidocument relations

Erick Galani Maziero 16 January 2012 (has links)
Multi-document treatment is essential in the current scenario of electronic media, in which many documents are produced about the same topic, especially considering the explosion of information enabled by the web. Both readers and computational applications benefit from multi-document discourse analysis, through which relations among portions of the texts (for example, equivalence, contradiction or background relations) are made explicit. To achieve automatic multi-document treatment, this work adopts CST (Cross-document Structure Theory, Radev, 2000). This kind of knowledge allows (i) the appropriate treatment of phenomena such as redundancy, complementarity and contradiction of information and, consequently, (ii) the production of better text processing systems, such as smarter web search engines and automatic summarizers. This work presents a methodology for identifying these relations, exploring machine learning techniques from the traditional and hierarchical paradigms. For relations with low frequency in the corpus, handcrafted rules were developed. Finally, a parser is generated containing the classifiers and rules.
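A minimal sketch of the classifier-plus-rules idea for CST-like relations between sentence pairs: the feature (word overlap), the thresholds and the single rule are toy assumptions standing in for the trained classifiers and the handcrafted rule set of the thesis.

```python
def word_overlap(s1, s2):
    """Jaccard overlap between the word sets of two sentences."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_relation(s1, s2):
    """Toy decision procedure: a handcrafted rule for 'identity',
    threshold-based decisions (classifier stand-ins) otherwise."""
    if s1.strip().lower() == s2.strip().lower():
        return "identity"          # handcrafted rule
    ov = word_overlap(s1, s2)
    if ov > 0.5:
        return "equivalence"       # stand-in for a learned decision
    if ov > 0.2:
        return "overlap"
    return "no-relation"

rel = classify_relation("The river flooded the town",
                        "The town was flooded by the river")
```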
557

Segmentation de documents administratifs en couches couleur / Segmentation of administrative document images into color layers

Carel, Elodie 08 October 2015 (has links)
Industrial companies receive huge volumes of paper documents every day. Automation, traceability, feeding information systems, reducing costs and processing times: dematerialization has a clear economic impact. To meet industrial constraints, the traditional digitization process simplifies images by performing a background/foreground separation. However, this binarization can lead to segmentation and recognition errors. With the improvement of techniques, the document analysis community has shown a growing interest in integrating color information into the process to enhance its performance. To work within the scope set by our industrial partner in the digitization flow, an unsupervised segmentation approach was chosen. Our goal is to be able to cope with document images, even those encountered for the first time, regardless of their content, their structure, and their color properties. The first issue addressed in this work is the identification of a reasonable number of main colors observable in an image; the second is the grouping, into consistent color layers, of pixels that both have very close color properties and form a logical or semantic unit. Provided as a set of binary images, these layers can be reinjected into the digitization chain as an alternative to the conventional binarization step. They also provide additional color information that can be exploited for segmentation, element spotting, or description. To this end, we propose a spatio-colorimetric segmentation that yields a set of perceptually coherent local regions, known as superpixels, whose size adapts to the specific content of document images. These regions are then merged into global color layers by means of a multiresolution analysis.
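A very reduced sketch of the idea of splitting a document image into color layers: crude per-channel quantization stands in for the thesis's superpixel and multiresolution machinery, and the tiny synthetic "image" is an assumption.

```python
def quantize(pixel, step=128):
    """Coarse color quantization: map each RGB channel to its bucket."""
    return tuple(c // step for c in pixel)

def color_layers(image):
    """Split an image (a list of rows of RGB tuples) into binary layers,
    one per quantized color: layer[y][x] is True where that color occurs."""
    colors = {quantize(p) for row in image for p in row}
    layers = {}
    for col in sorted(colors):
        layers[col] = [[quantize(p) == col for p in row] for row in image]
    return layers

# Tiny synthetic "document": dark text pixels on a near-white background.
img = [[(255, 255, 255), (0, 0, 0)],
       [(0, 0, 0),       (250, 250, 250)]]
layers = color_layers(img)
```

Note how (255, 255, 255) and (250, 250, 250) land in the same layer: grouping perceptually close colors, rather than exact values, is the point of the real approach.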
558

Application de la sémantique intertextuelle à la modélisation de constats d'infraction de la ville de Québec / Applying intertextual semantics to the modelling of statements of offense of the City of Québec

Bastien, Isabelle 09 1900 (has links)
Made possible by a grant from the Laboratoire de cyberjustice. / In this thesis, we propose an XML modelling of a legal document, the statement of offense, using Marcoux's intertextual semantics approach to information object design. Our method combines three classical modelling approaches: Maler and El Andaloussi's method, Glushko's Document Engineering, and Salminen's RASKE method. We first analyse a selected corpus of documents and summarize the information gathered about our object of study and its context of use.
We then transpose the Règlement sur la forme des constats d'infraction (C-25.1, r. 1) from a paper medium onto an XML-based electronic medium by i) constructing a Document Type Definition (DTD) and ii) elaborating its intertextual semantics specification. The result is a prototype that enables the authoring of a born-digital statement of offense in XML and allows the automatic rendering of the document's intended meaning in natural language via a web browser. By designing the DTD so that its content models are as sequential as possible, and by distributing the themes covered by the Règlement sur la forme des constats d'infraction in accordance with the syntax of the French language, we simplify the rendering of the intertextual semantics of the XML statement and possibly improve its idiomaticity.
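To illustrate the flavor of rendering an XML statement's meaning in natural language, here is a hedged sketch; the element names, the document instance and the French template are all invented for illustration and do not reflect the actual DTD, which follows the Règlement.

```python
import xml.etree.ElementTree as ET

# Hypothetical statement-of-offense instance (element names are assumptions).
STATEMENT = """
<constat>
  <defendeur>Jean Tremblay</defendeur>
  <infraction>stationnement interdit</infraction>
  <montant>75</montant>
</constat>
"""

def render(xml_text):
    """Render the XML statement as a natural-language sentence, mirroring
    how an intertextual semantics specification pairs markup with prose."""
    c = ET.fromstring(xml_text)
    return ("Le défendeur {d} est accusé de {i}; amende : {m} $."
            .format(d=c.findtext("defendeur"),
                    i=c.findtext("infraction"),
                    m=c.findtext("montant")))

sentence = render(STATEMENT)
```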
559

Conception de formes de relecture dans les chaînes éditoriales numériques / Designing proofreading views in digital publishing chains

Dumas Milne Edwards, Léonard 25 January 2016 (has links)
Documentary production in a professional context often involves a revision process in which documents need to be proofread before validation and publication. This important task faces new challenges when dealing with digital documents.
Indeed, three features of digital writing are problematic: documents evolve very frequently and cannot be proofread in full at each version; the interactions provided by hypertext make the task laborious or even impossible; and document repurposing increases the number of views of content to proofread. As an advanced digital writing technology, XML publishing chains are a relevant framework for studying the proofreading of digital documents. Observing that the views of content proposed by publishing chains, namely the generative views (XML sources that can be modified through a WYSIWYM editor) and the published views (documents obtained by transformation of the XML sources), are not adapted to proofreading, we consider designing new views of content dedicated to this activity, based on two approaches: linearization, which consists in restoring some material linearity to the content to ease exhaustive proofreading; and tabulation, which aims at parallelizing the different repurposing contexts of a document so that they can be better compared. Part of the contribution presented here has led to the development of prototypes that have been experimented with in the use of Scenari publishing chains in an educational context. These prototypes rely on linear proofreading views allowing, in particular, the comparison of two versions of a document based on a diff algorithm.
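The diff-based comparison of two document versions can be sketched with the standard library: only the regions where the versions differ need rereading. The line-level granularity and the toy versions are simplifications of the linear proofreading views described above.

```python
import difflib

def changed_regions(old_lines, new_lines):
    """Return only what a proofreader needs to reread: the opcode spans
    where the two versions differ, with both sides of each change."""
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    regions = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            regions.append((tag, old_lines[i1:i2], new_lines[j1:j2]))
    return regions

v1 = ["Intro.", "The method uses a diff.", "Conclusion."]
v2 = ["Intro.", "The method uses a differencing algorithm.", "Conclusion."]
regions = changed_regions(v1, v2)
```

Here only the middle line is flagged, so a reviewer of the new version can skip the unchanged introduction and conclusion.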
560

Near-Duplicate Detection Using Instance Level Constraints

Patel, Vishal 08 1900 (has links) (PDF)
For the task of near-duplicate document detection, comparison approaches based on the bag-of-words representations used in the information retrieval community are not sufficiently accurate. This work presents a novel approach for the setting where instance-level constraints are given for a set of documents, and those documents must be retrieved for a new query document in near-duplicate detection. The framework incorporates the instance-level constraints and clusters documents into groups using a novel clustering approach, Grouped Latent Dirichlet Allocation (gLDA). A distance metric is then learned for each cluster using the large margin nearest neighbor algorithm, and finally documents are ranked for a given new, unseen document using the learned distance metrics. Experimental results on various datasets demonstrate that our clustering method (gLDA with side constraints) performs better than other clustering methods, and that the overall approach outperforms other near-duplicate detection algorithms.
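A toy sketch of ranking candidates under per-cluster learned metrics: here each cluster's "metric" is just a diagonal feature weighting, a stand-in for the LMNN-learned metrics, and the document vectors and cluster assignments are invented for illustration.

```python
def weighted_dist(x, y, w):
    """Diagonal-metric squared distance: sum_i w[i] * (x[i] - y[i])^2."""
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

def rank(query, docs, cluster_of, metrics):
    """Rank candidate documents for a query, scoring each under the metric
    of its own cluster (stand-in for gLDA clusters + per-cluster LMNN)."""
    scored = [(weighted_dist(query, d, metrics[cluster_of[i]]), i)
              for i, d in enumerate(docs)]
    return [i for _, i in sorted(scored)]

# Hypothetical 2-D document vectors and cluster assignments.
docs = [(1.0, 0.0), (0.0, 1.0), (0.9, 0.1)]
cluster_of = {0: "A", 1: "B", 2: "A"}
metrics = {"A": (1.0, 1.0), "B": (1.0, 1.0)}
order = rank((1.0, 0.0), docs, cluster_of, metrics)
```

With non-uniform weights per cluster, the same candidate can rank differently depending on which cluster's metric judges it, which is the core idea of the approach.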
