  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

DESENVOLVIMENTO DE UM SISTEMA DE DOCUMENTAÇÃO PARA O PROJETO DE MÁQUINAS AGRÍCOLAS: MODELO DE REFERÊNCIA. / DEVELOPMENT OF A DOCUMENTATION SYSTEM FOR AGRICULTURAL MACHINERY DESIGN: A REFERENCE MODEL.

Gassen, José Renê Freitas 14 February 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The early phases of the agricultural machinery design process are characterized by the handling of a large amount of abstract information, which becomes more concrete as the final phases approach and takes the form of physical models. This complexity requires a systematized design process that allows the knowledge, technologies and decisions generated during its execution to be recorded, managed and stored through an adequate project documentation system. In this context, this work presents the development of a Documentation System for Agricultural Machinery Design (SDP-MA) spanning the planning and design macro-phases of the Reference Model for the Agricultural Machinery Development Process (MR-PDMA), so that it can be used as a standard model in the execution of such projects. The methodology adopted was exploratory and descriptive research on subjects pertinent to the theme of the work, notably: reference models for the machinery development process; information modeling; knowledge management; project documentation systems; and communications management. The specific objectives achieved were: the definition and classification of the document types; the establishment of the presentation form and content of each document; the definition of the technical means of presenting the documents; the definition of the architecture of the Design Documentation System; the modeling of the project documents; and the integration of the SDP-MA with the MR-PDMA. The results obtained allowed the definition of three types of base models: (i) document models for alphanumeric elements, spreadsheets and charts; (ii) document models for representative diagrams; and (iii) document models for the illustration of technical drawings. The structure of the documentation system developed in this work can be decomposed into two packages, one containing the management documents and the other the technical documents, together with a project charter and a tutorial. Definition, classification, relation to the knowledge domains and specific content were established for one hundred and thirty-six documents, of which fifty-four are of a managerial nature and eighty-two of a technical nature.
152

Mécanismes et outils pour sécurisation de systèmes à accès distants : application aux systèmes de gestion électronique de documents / Mechanisms and tools to secure remote access systems

Risterucci, Gabriel 31 March 2016 (has links)
This thesis aims to improve the security of remotely accessed systems through the use of cryptographic tools. It applies in particular to digital document management applications and their issues of communication, authentication and rights management. Unlike common approaches that rely on individual, point protections, we propose a set of tools designed to work together to strengthen the system's security. Communications are secured through the design of a secure communication protocol suited to distributed applications. The authentication issues led to solutions that provide cryptographic support for any authentication method. Rights management is handled through a specific development that associates access rights with specific cryptographic applications. A key point of this work is the accessibility of these security tools for the system's users: this concern guided the proposals so that they disturb the user experience as little as possible. The result is the integration, into one global system, of different tools and mechanisms providing complete security for a digital document management system. This security is based on cryptographic algorithms in order to obtain provable and verifiable security properties. As a supporting layer for these mechanisms, a software security platform was designed to provide the cryptographic tools in a portable way.
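The abstract above does not include code, but the idea of tying an access right to a cryptographic operation can be illustrated with a minimal sketch. The Python example below assumes a hypothetical scheme in which each right (read, annotate, export) gets its own key derived from a master secret, and an access token for a document is only valid under the key of the required right; the function names and the token format are invented for illustration and are not the constructions developed in the thesis.

```python
import hmac
import hashlib
import secrets

MASTER_SECRET = secrets.token_bytes(32)  # in practice this would come from the security platform

def right_key(right: str) -> bytes:
    """Derive a per-right key from the master secret (illustrative KDF, not the thesis' construction)."""
    return hmac.new(MASTER_SECRET, b"right:" + right.encode(), hashlib.sha256).digest()

def issue_token(document_id: str, right: str) -> bytes:
    """Issue a token proving that the holder may exercise `right` on `document_id`."""
    key = right_key(right)
    return hmac.new(key, document_id.encode(), hashlib.sha256).digest()

def check_token(document_id: str, right: str, token: bytes) -> bool:
    """Server-side check: recompute the MAC and compare in constant time."""
    expected = issue_token(document_id, right)
    return hmac.compare_digest(expected, token)

# Example: a client holding a "read" token can open the document,
# but the same token does not grant the "export" right.
t = issue_token("doc-42", "read")
assert check_token("doc-42", "read", t)
assert not check_token("doc-42", "export", t)
```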
153

Koncepce umění Georgese Bataille / Georges Bataille's conception of art

Kaisrová, Martina January 2011 (has links)
The aim of the study is to show Bataille's aesthetic thought, dispersed across several of his works, as a consistent theory of art. Through his fascination with cruelty and evil, Bataille turns his interest to matter, and in the review Documents he describes the appeal of low forms owing to their proximity to formlessness. Beyond this frontier lies a sphere of "not-knowing" that logical reason cannot penetrate. This sphere of excess stands not only outside logical concepts but also outside our life, i.e., before our birth and after our death. In an organized society this sphere is forbidden. The taboo that guards it protects us from the destructive forces of chaos, which could break through in death rage, violent sexual passion, cruelty and evil. The human desire to see what lies in this sphere comes from a nostalgia for the naturalness that was lost when we changed from animals into human beings. The animal obeys no rule, and in ancient societies it used to represent divinity. The lost intimacy of the present instant, as explained in Lascaux and La Littérature et le Mal, is mediated to us through the religious transgression of law (sacrifice), as fiction in art (transgression in an "as if" mode), or through the destruction of academic forms. Such moments are always accompanied by...
154

Approches numériques pour le filtrage de documents centrés sur une entité : un modèle diachronique et des méta critères / Entity centric document filtering using numerical approaches : a diachronical model and meta criteria

Bouvier, Vincent 16 December 2015 (has links)
[...] Our main contributions can be summarized in three points: 1. We propose an entity-centric classification system that helps find documents related to an entity based on its profile and a set of meta-criteria, and we use the classification result to filter out unrelated documents. The approach is entity-independent and uses transfer-learning principles: the system is trained on a set of annotated documents concerning a pool of entities and can then categorize documents concerning entities for which no annotated data was provided. 2. We introduce a diachronic language model that extends our definition of the entity profile so that the profile can be updated. Tracking a named entity implies being able to distinguish already-known information from new information; the diachronic language model enables automatic updating of the entity profile while minimizing the noise introduced. 3. We propose a method to detect the popularity of an entity in order to improve the coherence of a classification model with respect to the temporal aspects of an entity. To detect the importance of a document with regard to an entity, we propose to use, among other things, temporal indicators that may vary from one entity to another. We cluster entities sharing the same level of popularity on the Web at each instant to improve the coherence of the models and thus increase classifier performance. [...]
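As a rough illustration of the second contribution, the Python sketch below maintains a unigram profile per entity, scores an incoming document by cosine similarity with the profile, and folds the document's vocabulary into the profile only when the score clears a threshold, so that unrelated documents add as little noise as possible. The representation (raw term counts), the threshold rule and all names are assumptions made for the example; they are not the diachronic language model defined in the thesis.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class DiachronicProfile:
    """Entity profile kept as cumulative term counts, updated only with documents judged related."""

    def __init__(self, seed_text: str, threshold: float = 0.15):
        self.counts = Counter(seed_text.lower().split())
        self.threshold = threshold

    def relatedness(self, document: str) -> float:
        return cosine(self.counts, Counter(document.lower().split()))

    def maybe_update(self, document: str) -> bool:
        """Fold the document into the profile only if it looks related (noise control)."""
        if self.relatedness(document) >= self.threshold:
            self.counts.update(document.lower().split())
            return True
        return False

profile = DiachronicProfile("vincent bouvier entity filtering diachronic language model")
print(profile.maybe_update("a new paper on entity filtering and language models"))  # related -> True
print(profile.maybe_update("recipe for lemon tart with meringue"))                   # unrelated -> False
```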
155

Modélisation, conception, fabrication et reproduction à grande échelle d'éléments optiques diffractants profonds pour les applications anti-fraude / Modeling, Design, Manufacture and Large Scale Replication of Deep Diffractive Optical Elements for Anti-Fraud Applications

Chikha, Khalil 19 December 2016 (has links)
Diffractive micro- and nanostructures have been used for many years to secure sensitive documents such as identity cards, travel documents (passports, visas) and banknotes. However, the development of reprographic techniques and the growing involvement of criminal organizations are making the counterfeiting of printed documents increasingly accessible. Until now, the manufacturing and, above all, the replication techniques for the diffractive structures used in anti-fraud protection imposed a limit on the depth of the structures, which typically could not exceed a few hundred nanometers. My PhD work is part of the research and development of advanced mastering and replication techniques capable of producing structures with a vertical relief of up to several microns. The availability of this type of deep structure opens many possibilities for new optical functions and thus raises a major new barrier against the falsification of security documents.
156

Oběh dokladů a vnitřní kontrolní systém / The workflow of accounting documents and the internal control system

Polcrová, Lucie January 2009 (has links)
The thesis topic is the accounting workflow of the company Siemens Enterprise Communications, s.r.o. The first, theoretical part of the thesis focuses on the general characteristics of the accounting system, including the company's rules and the need for such a system. The second part of the thesis deals with the workflow of accounting documents (invoices, credit notes, travel orders and other accounting documents). The thesis closes with a comparison of two workflow systems.
157

La production de la métropole : quel rôle jouent les documents de planification ? : les cas de la Région Métropolitaine de Belo Horizonte au Brésil et de la métropole lyonnaise en France / The production of the metropolis : what role for the planning documents ? : the case of the Metropolitan Region of Belo Horizonte in Brazil and the city of Lyon in France

Aguiar Mol, Natalia 06 February 2015 (has links)
Metropolitan governance poses challenges in different countries around the world: difficulties in establishing metropolitan authorities, sharing of competences, shifts of political power... There are many obstacles to overcome in the process of establishing metropolitan governance. In Brazil, recent changes to the metropolitan management systems of various federal states have been implemented since the early 2000s. The Metropolitan Region of Belo Horizonte (RMBH) is among the first of these experiments in innovation on the subject. Renewed almost a decade ago, the RMBH system relies on several governance bodies and management instruments, of which the Integrated Development Master Plan (PDDI) is the main one. Although the system has existed since the 1970s, its implementation presents, in addition to the problems mentioned above, difficulties related to a shared construction of, and an awareness of, the metropolitan scale (or reality?) among the population, technicians and elected officials. It was in pursuing these questions that it became relevant to study the experience of metropolitan cooperation in France. The urban communities (communautés urbaines) that have existed since the 1970s did much to advance inter-municipal cooperation, notably in the case of the Lyon agglomeration, chosen as the second case study. Just as public action is transformed by the emergence of metropolises, the practice of urban planning faces new challenges in dealing with the multiplicity of arenas, bodies and actors present at this scale. In this sense, rather than being considered an element external (and subsequent) to this process, planning should be seen as a device that contributes to the construction of the metropolis. Based on two case studies, the PDDI of the RMBH in Brazil and the SCoT and Inter-SCoT of the Lyon agglomeration in France, this study examines planning documents and their role in the production of the metropolis. By interviewing numerous actors in each case, the thesis analyzes their views of the process, seeking to identify traces of a possible collective learning process and of the construction of the notion of a metropolitan collective interest.
158

Universalisme moral. Deux perspectives : Jean-Paul II et Jürgen Habermas / Moral universalism. Two perspectives: John Paul II and Jürgen Habermas

Geoffroy, Carl 07 May 2021 (has links)
This thesis addresses the question of ethical universalism. The author analyzes the positions of Jürgen Habermas and John Paul II. After highlighting Habermas's concerns, his conception of rationality, and more precisely of practical rationality, as well as his theory of communicative action, are examined. This step is the prerequisite for presenting Habermasian discourse ethics, i.e., the discursive theory of morality. The part on John Paul II focuses on a single work: the encyclical Veritatis Splendor. The analysis dwells on the conception of practical reason and on the concept of truth at work in the exposition of the Magisterium's position on moral universalism. The concept of natural law is also examined insofar as natural law is the Catholic figure of moral universalism. Finally, the thesis identifies the divergences and convergences of the two authors regarding the possibility of a universalist morality.
159

Évaluation de la qualité des documents anciens numérisés / Quality evaluation of digitized historical documents

Rabeux, Vincent 06 March 2013 (has links) (PDF)
The research presented in this manuscript describes several contributions to the topic of quality evaluation of digitized document images. We propose new descriptors that quantify the degradations most commonly encountered in digitized document images. We also propose a methodology, based on the computation of these descriptors, for predicting the performance of document image processing and analysis algorithms. The descriptors are defined by analyzing the influence of degradations on the performance of different algorithms, and are then used to build prediction models with statistical regressors. The relevance of the proposed descriptors and of the prediction methodology is validated in several ways: first, by predicting the performance of eleven binarization algorithms; second, by creating an automatic process that selects the best-performing binarization algorithm for each image; and finally, by predicting the performance of two OCR engines as a function of the severity of the show-through defect (diffusion of ink from the recto onto the verso of a document). This work on predicting algorithm performance is also an opportunity to address the scientific problems involved in creating ground truths and evaluating performance.
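The general recipe sketched in the abstract (compute degradation descriptors per image, then train a statistical regressor to predict an algorithm's score) can be illustrated in a few lines of Python. The example below uses an ordinary least-squares fit as the regressor and two invented descriptors; the actual descriptors and prediction models of the thesis are more elaborate.

```python
import numpy as np

# Hypothetical degradation descriptors for 6 document images
# (e.g., a show-through index and a contrast measure), with the
# measured F-score of one binarization algorithm on each image.
descriptors = np.array([
    [0.05, 0.90],
    [0.10, 0.85],
    [0.30, 0.70],
    [0.45, 0.60],
    [0.60, 0.55],
    [0.80, 0.40],
])
fscores = np.array([0.95, 0.93, 0.82, 0.74, 0.69, 0.52])

# Linear prediction model: fscore ~ w0 + w1*show_through + w2*contrast
X = np.hstack([np.ones((len(descriptors), 1)), descriptors])
w, *_ = np.linalg.lstsq(X, fscores, rcond=None)

def predict_fscore(show_through: float, contrast: float) -> float:
    """Predicted binarization quality for a new image, from its descriptors only."""
    return float(w @ np.array([1.0, show_through, contrast]))

# Selecting the 'best' algorithm per image would fit one such model per
# algorithm and pick the one with the highest predicted score.
print(round(predict_fscore(0.20, 0.75), 3))
```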
160

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo 22 May 2017 (has links) (PDF)
Since the proposal of hypertext by Tim Berners-Lee to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (which is the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge base. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results, and hence most Web users expect only Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural language queries with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to and the combination of knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, addressing the Semantic Gap between the unstructured Document Web and the structured Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. That data preparation is such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability to overcome the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as illustrated by current mobile applications, requires systems to deeply understand the underlying user information need. In conclusion, a natural-language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA based on combining structured RDF and unstructured full-text data sources.
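To make the hybrid idea concrete, the following Python sketch scores candidate answers by combining evidence from an unstructured full-text collection with facts from a structured knowledge base. The toy data, the weighting and the function names are invented for the illustration and do not reproduce HAWK's actual pipeline.

```python
# Toy hybrid question answering: combine full-text evidence with KB facts.
FULL_TEXT = {
    "doc1": "Leipzig is a city in Saxony known for its university and book fairs.",
    "doc2": "The University of Leipzig was founded in 1409.",
}
KB_TRIPLES = [
    ("University_of_Leipzig", "foundingYear", "1409"),
    ("University_of_Leipzig", "locatedIn", "Leipzig"),
]

def text_evidence(candidate: str) -> float:
    """Fraction of documents that mention the candidate (crude full-text signal)."""
    hits = sum(candidate.lower() in doc.lower() for doc in FULL_TEXT.values())
    return hits / len(FULL_TEXT)

def kb_evidence(candidate: str, relation: str) -> float:
    """1.0 if the KB contains a triple with the candidate as object of the relation."""
    return float(any(p == relation and o == candidate for _, p, o in KB_TRIPLES))

def hybrid_score(candidate: str, relation: str, alpha: float = 0.5) -> float:
    """Weighted combination of unstructured and structured evidence."""
    return alpha * text_evidence(candidate) + (1 - alpha) * kb_evidence(candidate, relation)

# "When was the University of Leipzig founded?" -> rank candidate years
for year in ["1409", "1650"]:
    print(year, hybrid_score(year, "foundingYear"))
```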
