  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
741

Active and deep learning for multimedia / Apprentissage actif et profond pour le multimédia

Budnik, Mateusz 24 February 2017 (has links)
The main topics of this thesis are the use of active learning methods and of deep learning in the context of multimodal document retrieval. The contributions proposed address both topics. An active learning framework was introduced that allows for more efficient annotation of broadcast TV videos through label propagation, the use of multimodal data, and effective selection strategies. Several scenarios and experiments were considered in the context of person identification in videos, including the use of different modalities (such as faces, speech segments, and overlaid text) and different selection strategies. The whole system was additionally validated in a dry run involving real human annotators. A second major contribution was the investigation and use of deep learning (in particular convolutional neural networks) for video retrieval. A comprehensive study was carried out using different neural network architectures and training techniques, such as fine-tuning, alongside more classical classifiers such as SVMs. A comparison was made between learned features (the output of neural networks) and engineered features. Despite the lower performance of the engineered features, fusing the two types of features increases overall performance. Finally, the use of a convolutional neural network for speaker identification from spectrograms was explored. The results were compared to those of other state-of-the-art speaker identification systems, and different fusion approaches were tested. The proposed approach obtained results comparable to some of the other tested systems and improved performance when fused with the output of the best system.
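The score-level fusion of system outputs mentioned in the abstract can be sketched as a weighted average of per-speaker scores. This is a minimal illustration only: the speaker names, scores, and fusion weight below are invented, and the thesis may use a different fusion scheme.

```python
# Hypothetical sketch of score-level (late) fusion of two speaker-ID
# systems. Speaker names, scores, and the fusion weight are invented
# for illustration, not values from the thesis.

def fuse_scores(scores_a, scores_b, weight=0.5):
    """Weighted average of two per-speaker score dictionaries."""
    speakers = set(scores_a) | set(scores_b)
    return {s: weight * scores_a.get(s, 0.0)
               + (1 - weight) * scores_b.get(s, 0.0)
            for s in speakers}

def identify(scores):
    """Return the speaker with the highest fused score."""
    return max(scores, key=scores.get)

cnn_scores = {"alice": 0.70, "bob": 0.20, "carol": 0.10}   # CNN on spectrograms
ivec_scores = {"alice": 0.40, "bob": 0.50, "carol": 0.10}  # baseline system
fused = fuse_scores(cnn_scores, ivec_scores, weight=0.6)
print(identify(fused))  # alice: 0.6*0.70 + 0.4*0.40 = 0.58 vs bob at 0.32
```

Weighted averaging is only one of many fusion strategies (rank-level and decision-level fusion are common alternatives); the abstract does not specify which variants were tested.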
742

Atribuição de perfis de autoria / Author profiling

Weren, Edson Roberto Duarte January 2014 (has links)
Authorship profiling aims at classifying texts based on the stylistic choices of their authors. The idea is to discover characteristics of the authors of the texts. This task has growing importance in forensics, security, and marketing. In this work, we focus on discovering the age and gender of blog authors. With this goal in mind, we analyzed a large number of features, ranging from information retrieval to sentiment analysis. This dissertation reports on the usefulness of these features. An experimental evaluation on a corpus of over 236K blog posts showed that a classifier using the features explored here outperforms the state of the art. More importantly, the experiments show that the information-retrieval features proposed in this work are the most discriminative and yield the best class predictions.
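One way to picture an information-retrieval-style feature for author profiling is the cosine similarity between a post and a per-class "profile" built from training texts. The sketch below is a hedged illustration under that assumption; the tiny corpora and class labels are invented and are not the thesis's feature set.

```python
# Hedged illustration of one IR-style feature for author profiling:
# cosine similarity between a post's term vector and a per-class
# profile vector. The classes and training texts are invented.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profiles = {  # term counts aggregated from hypothetical training posts
    "teen":  Counter("lol school game game friends".split()),
    "adult": Counter("work mortgage meeting report school".split()),
}

post = Counter("game with friends after school".split())
features = {cls: cosine(post, prof) for cls, prof in profiles.items()}
prediction = max(features, key=features.get)
print(prediction)  # → teen
```

In practice such similarities would be one group of features among many (the dissertation also mentions sentiment-analysis features), feeding a trained classifier rather than a simple argmax.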
743

Adaptace XML dokumentů a integritní omezení v XML / XML Document Adaptation and Integrity Constraints in XML

Malý, Jakub January 2013 (has links)
This work examines XML data management and consistency, specifically the problem of document adaptation and the use of integrity constraints. Changes in user requirements cause changes in the schemas used in a system, and schema changes in turn render existing documents invalid. In this thesis, we introduce a formal framework for detecting changes between two versions of a schema and for generating a transformation from the source to the target schema. Large-scale information systems depend on integrity constraints being preserved and valid. In this work, we show how OCL can be used to define constraints on XML data at an abstract level, and how such constraints can be automatically translated into XPath expressions and Schematron schemas and verified against XML documents.
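To give a flavor of what verifying such a constraint against a document looks like, here is a hedged sketch of checking an OCL-like uniqueness constraint on XML data. The schema, element names, and the constraint itself are invented for illustration; the thesis's actual pipeline emits XPath/Schematron and runs a proper validator.

```python
# Hedged sketch of checking a Schematron-style integrity constraint
# against an XML document. The document and constraint are invented;
# a real pipeline would generate Schematron from OCL and validate it.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book isbn="111"><title>XML Basics</title></book>
  <book isbn="222"><title>Schema Evolution</title></book>
  <book isbn="111"><title>Duplicate Key</title></book>
</library>
""")

# Constraint (OCL-like): Library.books->isUnique(isbn)
isbns = [b.get("isbn") for b in doc.findall("./book")]
violations = sorted({i for i in isbns if isbns.count(i) > 1})
print(violations)  # → ['111']
```

A generated Schematron rule would express the same check declaratively as an assert over an XPath expression, reporting each violating element instead of collecting keys in code.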
744

Content selection for timeline generation from single history articles

Bauer, Sandro Mario January 2017 (has links)
This thesis investigates the problem of content selection for timeline generation from single history articles. While the task of timeline generation has been addressed before, most previous approaches assume the existence of a large corpus of history articles from the same era. They exploit the fact that salient information is likely to be mentioned multiple times in such corpora. However, large resources of this kind are only available for historical events that happened in the most recent decades. In this thesis, I present approaches which can be used to create history timelines for any historical period, even for eras such as the Middle Ages, for which no large corpora of supplementary text exist. The thesis first presents a system that selects relevant historical figures in a given article, a task which is substantially easier than full timeline generation. I show that a supervised approach which uses linguistic, structural and semantic features outperforms a competitive baseline on this task. Based on the observations made in this initial study, I then develop approaches for timeline generation. I find that an unsupervised approach that takes into account the article's subject area outperforms several supervised and unsupervised baselines. A main focus of this thesis is the development of evaluation methodologies and resources, as no suitable corpora existed when work began. For the initial experiment on important historical figures, I construct a corpus of existing timelines and textual articles, and devise a method for evaluating algorithms based on this resource. For timeline generation, I present a comprehensive evaluation methodology which is based on the interpretation of the task as a special form of single-document summarisation. This methodology scores algorithms based on meaning units rather than surface similarity. 
Unlike previous semantic-units-based evaluation methods for summarisation, my evaluation method does not require any manual annotation of system timelines. Once an evaluation resource has been created, which involves only annotation of the input texts, new timeline generation algorithms can be tested at no cost. This crucial advantage should make my new evaluation methodology attractive for the evaluation of general single-document summaries beyond timelines. I also present an evaluation resource which is based on this methodology. It was constructed using gold-standard timelines elicited from 30 human timeline writers, and has been made publicly available. This thesis concentrates on the content selection stage of timeline generation, and leaves the surface realisation step for future work. However, my evaluation methodology is designed in such a way that it can in principle also quantify the degree to which surface realisation is successful.
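The meaning-unit scoring idea above can be sketched as recall over annotated units. The mapping from input sentences to meaning-unit identifiers below is an invented stand-in for the thesis's annotated resource, shown only to make the evaluation mechanics concrete.

```python
# Hedged sketch of scoring a timeline by meaning units rather than
# surface overlap. Sentence IDs and unit identifiers are invented.

# Annotation of the *input* text: each candidate sentence is labelled
# with the meaning units it expresses.
sentence_units = {
    "s1": {"coronation", "date_1066"},
    "s2": {"battle_hastings"},
    "s3": {"weather"},          # expresses no gold-standard unit
}
gold_units = {"coronation", "date_1066", "battle_hastings"}

def score(selected_sentences):
    """Recall of gold meaning units covered by the selected sentences."""
    covered = set().union(*(sentence_units[s] for s in selected_sentences))
    return len(covered & gold_units) / len(gold_units)

print(score(["s1"]))        # 2 of 3 units covered
print(score(["s1", "s2"]))  # all units covered → 1.0
```

Because only the input sentences carry annotation, any system timeline built from them can be scored automatically, which is the "no cost" property described above.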
745

Trust and Profit Sensitive Ranking for the Deep Web and On-line Advertisements

January 2012 (has links)
abstract: Ranking is of definitive importance to both the usability and profitability of web information systems. While the ranking of results is crucial for making information accessible to the user, the ranking of online ads increases the profitability of the search provider. The scope of my thesis includes both search and ad ranking. I consider the emerging problem of ranking deep web data with respect to trustworthiness and relevance. I address end-to-end deep web ranking by focusing on: (i) ranking and selection of deep web databases, (ii) topic-sensitive ranking of the sources, and (iii) ranking the result tuples returned by the selected databases. In particular, assessing the trustworthiness and relevance of results for ranking is hard because the commonly used link analysis is inapplicable: deep web records do not have links. I formulated a method, SourceRank, to assess the trustworthiness and relevance of sources based on inter-source agreement. Secondly, I extended SourceRank to consider the topics of the agreeing sources in multi-topic environments. Further, I formulated a ranking sensitive to trustworthiness and relevance for the individual results returned by the selected sources. For ad ranking, I formulated a generalized ranking function, Click Efficiency (CE), based on a realistic user click model for ads and documents. CE ranking considers the hitherto ignored parameters of perceived relevance and user dissatisfaction, and guarantees optimal utilities under the click model. Interestingly, I show that existing ad and document ranking functions are reduced forms of CE ranking under restrictive assumptions. Subsequently, I extended CE ranking with a pricing mechanism, yielding a complete auction mechanism. My analysis proves several desirable properties, including revenue dominance over the popular Vickrey-Clarke-Groves (VCG) auction for the same bid vector and the existence of a Nash equilibrium in pure strategies. The equilibrium is socially optimal and revenue-equivalent to the truthful VCG equilibrium. Further, I relax the independence assumption in CE ranking and analyze the diversity ranking problem. I show that optimal diversity ranking is NP-hard in general, and that a constant-time approximation algorithm is unlikely. / Dissertation/Thesis / Ph.D. Computer Science 2012
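For readers unfamiliar with the VCG baseline mentioned above, here is a hedged sketch of VCG payments in a position (ad-slot) auction. The bids and click-through rates are illustrative, and this is the standard textbook mechanism, not the thesis's CE auction: with bidders sorted by bid and slot CTRs c1 > c2 > ..., the bidder in slot i pays sum over j >= i of (c_j - c_{j+1}) * b_{j+1}, taking the CTR beyond the last slot as 0.

```python
# Hedged sketch of total expected VCG payments in a position auction.
# Bids and CTRs (here in clicks per 100 impressions) are invented.

def vcg_payments(bids, ctrs):
    """Total expected VCG payment per slot; bids are sorted descending."""
    bids = sorted(bids, reverse=True)
    n = min(len(ctrs), len(bids))
    payments = []
    for i in range(n):
        p = 0.0
        for j in range(i, n):
            c_next = ctrs[j + 1] if j + 1 < len(ctrs) else 0.0
            b_next = bids[j + 1] if j + 1 < len(bids) else 0.0
            p += (ctrs[j] - c_next) * b_next  # externality on lower slots
        payments.append(p)
    return payments

print(vcg_payments(bids=[10, 6, 4], ctrs=[20, 10]))  # → [100.0, 40.0]
```

The revenue-dominance claim above compares the CE auction's revenue against such VCG payments for the same bid vector.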
746

Optimering av en intern datorplattform : En utredning åt Byggtec Gävleborg AB / Optimization of an internal computer platform: An investigation for Byggtec Gävleborg AB

Jörnelind, Johan January 2017 (has links)
The purpose of this thesis is to develop a template for internal document management and project control for the construction company Byggtec Gävleborg AB. To arrive at the result, I primarily used interviews with various officials at Byggtec. I also carried out literature studies of four trade magazines focused on the construction industry and studied similar systems already on the market. In total, three construction site managers, the financial officer, and the CEO were interviewed. This gave a good picture of which areas the platform should cover and of how they want the work with the platform to be carried out. The interviews covered how the construction site managers work today, how they want the platform to be structured, and how they want it to be used. The report is limited to meeting the needs of Byggtec Gävleborg AB, since the study was carried out for them; even so, the layout would suit other construction companies equally well. The result is a template well suited for storing the most important and most useful documents produced during a construction project. The template also contains the most important documents and records that may be needed for future use.
747

A ação da CODEMAT na colonização oficial de Mato Grosso: revisitando o Projeto Juina (1978 – 1997) / CODEMAT's role in the official colonization of Mato Grosso: revisiting the Juina Project (1978 – 1997)

Santi, Rejane Pereira 30 November 2015 (has links)
CODEMAT (Companhia de Desenvolvimento do Estado de Mato Grosso S/A), created by the Mato Grosso state government in 1968 to promote economic development, positioned itself on the Mato Grosso political scene as the state's official colonization company from the late 1970s on. At the time, the country lived under the aegis of an authoritarian state regime, the military dictatorship. "Economic development" was understood by the State and its agencies, and even more so by CODEMAT, as the power to sell and market the vacant public lands belonging to the state of Mato Grosso within the limits of its Amazon frontier, which included the state's Northwest. Most of these lands passed to the tutelage of the Union and the National Security Council. It was in this way, following the regime's ideal of security and national integration, that development poles such as POLOAMAZONIA, POLOCENTRO, and POLONOROESTE were implemented in Mato Grosso; they channeled tax incentives toward projects aimed not at settling rural workers or even small landowners, but rather at the interests of large agricultural enterprises and the exploitation of mineral resources. The agrarian actions under the Union's responsibility were carried out by INCRA, and those under the state government's responsibility were handed to CODEMAT for the development of the Juina Project. In order to be endorsed by the State and to count on the resources and subsidies arising from federal programs for the colonization of the Amazon, CODEMAT prepared and presented to the Mato Grosso state government the Juina State Colonization Project, Volume I. As CODEMAT's actions are our object of research, the investigation proceeds by problematizing the State Colonization Program, here understood as the act of "historicizing about" it. That is, it is the work of turning the problem into historiographical analysis, starting from the investigation of CODEMAT's actions as a colonizing institution and asking to what extent this document fulfilled its original settlement proposals, or whether it was merely propaganda for CODEMAT's actions regarding the colonization project actually intended, the Juina Project.
748

Documentos eletrônicos on-line : análise das referências das teses e dissertações de Programas de Pós-Graduação em Comunicação do Rio Grande do Sul / On-line electronic documents: an analysis of the references in theses and dissertations from Graduate Programs in Communication in Rio Grande do Sul

Mesquita, Rosa Maria Apel January 2006 (has links)
This research analyzes the changes in scientific communication resulting from the use of the Internet to publish scientific documents. The development of science largely depends on the recording of research results, which allows their evaluation and use by the scientific community. One problem brought by the growth of publications on the network is the ease with which information is altered, updated, removed, or transferred elsewhere. The study analyzed the characteristics of references to on-line electronic documents in the theses and dissertations defended in the Graduate Programs in Communication of the Universidade Federal do Rio Grande do Sul, the Pontifícia Universidade Católica do Rio Grande do Sul, and the Universidade do Vale do Rio dos Sinos between 1997 and 2004, as information sources that allow the retrieval of scientific documents. It is a bibliometric study that uses reference analysis to characterize the on-line electronic references examined. The quantitative analysis was complemented by a qualitative one, consisting of interviews with students and professors of the graduate programs studied. Of the 390 theses and dissertations defended in the programs, 191 contained at least one reference to an on-line electronic document. The 1,616 on-line electronic references studied revealed that: commercial sites are the most used (54.28%); Portuguese is the predominant language of the references (73.4%); 93.53% of the publication dates fall between 1995 and 2005; retrieval by typing the URL succeeded in 24.8% of cases, and retrieval through the Google search engine in 45.3%. The results indicate that referenced on-line electronic documents cannot always be retrieved, owing to URLs being removed from the network, electronic addresses changing, or the document no longer being found on the page or site. The problems encountered in retrieving documents through electronic references point to the fragility of the on-line medium for meeting the needs of the scientific communication process.
749

Extração de metadados utilizando uma ontologia de domínio / Metadata extraction using a domain ontology

Oliveira, Luis Henrique Gonçalves de January 2009 (has links)
The goal of the Semantic Web is to provide semantic descriptions of resources through machine-processable metadata. This semantic layer extends the existing Web, making it easier to search, filter, summarize, and exchange more complex knowledge. In this context, digital libraries are among the first applications to add semantic annotations to the information available on the Web. A digital library can be defined as a collection of digital resources selected according to given criteria, with some logical organization, and accessible for distributed retrieval over a network. To ease the retrieval process, metadata are used to describe the stored content. However, manual metadata generation is a complex, time-consuming, and error-prone task, so automatic or semi-automatic extraction of this metadata would greatly help authors by removing one step from the document publishing process. The research in this dissertation addresses this problem by developing a metadata extractor that populates a document ontology and classifies the document according to a predefined hierarchy. The document ontology OntoDoc was created to store and make available the extracted metadata, as well as the classification obtained for the document. The implementation focused on Computer Science papers and used the ACM Computing Classification System for the document classification task. A sample set extracted from the ACM Digital Library was built for training and for experiments on the implementation. The main contributions of this research are the integrated model for metadata extraction and document classification, and the description of documents through metadata stored in an ontology, OntoDoc.
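A toy version of the extract-then-classify pipeline described above might look as follows. The layout heuristics and the category keyword lists are invented assumptions; the real system populates the OntoDoc ontology and classifies against the full ACM hierarchy rather than keyword counts.

```python
# Hedged sketch of rule-based metadata extraction plus keyword-based
# classification for a paper. Heuristics and keyword lists are invented.
import re

CATEGORIES = {  # tiny stand-in for an ACM-style hierarchy
    "Information Systems": {"retrieval", "metadata", "ontology"},
    "Computing Methodologies": {"learning", "neural", "classifier"},
}

def extract_metadata(text):
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    title, authors = lines[0], lines[1]          # heuristic: first two lines
    abstract = " ".join(lines[2:]).lower()
    words = set(re.findall(r"[a-z]+", abstract))
    scores = {c: len(words & kw) for c, kw in CATEGORIES.items()}
    category = max(scores, key=scores.get)
    return {"title": title, "authors": authors, "category": category}

paper = """Metadata Extraction Using a Domain Ontology
L. H. G. de Oliveira
We present an extractor that populates an ontology with metadata
to support retrieval in digital libraries."""
print(extract_metadata(paper)["category"])  # → Information Systems
```

In the thesis the extracted fields and the resulting class are stored as ontology instances, so downstream applications can query them semantically rather than reading flat records.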
750

Documentarte : experiências de pensamento entre educação, filosofia da diferença e a história / Documentarte: experiences of thought between education, the philosophy of difference, and history

Helbich, Luciane January 2013 (has links)
This work presents an experiment on the use of the document as a constitutive part of the history teacher's thinking. In this dissertation, thought is seen as the production of an artistic practice. The encounter with documents can produce an artistic creation of poetics and narratives invented out of the problematic of that encounter. A problem-history, in this perspective, is presented as a movement of creation resulting from the contact between documents and teacher. This work seeks new ways of looking at experiences with documents and understands them as practices of the present, as the actualization of virtualities that the power of documents makes available. In this research, documents are like monuments composed together with the body of the teacher who encounters them, giving an artistic and singular form to the event. From this perspective, the aim is to experience encounters with documents as works of art and to follow paths not laid down by conventional models of approaching the document. Inscribed in the philosophy of difference, this research seeks empirical engagement with the body-matter of documents that give consistency to the plane of thought of the teacher who writes. The writing of this text is built mainly on the philosophical thought of Spinoza, Nietzsche, Michel Foucault, and Gilles Deleuze, and on the historian Durval Muniz de Albuquerque Junior, seeking lines of flight beyond the existing planes of theoretical discussion on the production of narrative and art, historiography, and the approach to historical documents.
