  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Applications of Contact Length Models in Grinding Processes

Qi, Hong Sheng, Mills, B., Xu, X.P. January 2009 (has links)
The nature of the contact behaviour between a grinding wheel and a workpiece in the grinding process has a great effect on the grinding temperature and the occurrence of thermally induced damage on the ground workpiece. The measured contact length le in grinding is found to be considerably longer than both the geometric contact length lg and the contact length due to wheel-workpiece deflection lf. The orthogonal relationship among the contact lengths, i.e. lc² = (Rr·lf)² + lg², reveals how the grinding force and the grinding depth of cut affect the overall contact length between a grinding wheel and a workpiece in grinding processes. To make the orthogonal contact length model easier to use, the present study modifies the model by replacing its input variable, Fn', with a well-established empirical formula and the specific grinding power. Applying the modified model, the contributions of the individual factors, i.e. the wheel/workpiece deformation and the grinding depth of cut, to the overall contact length are analysed over a wide range of grinding applications, from precise/shallow grinding to deep/creep-feed grinding. Finally, using a case study, the criterion for using the geometric contact length lg to represent the real contact length lc, in terms of convenience versus accuracy, is discussed.
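The orthogonal relation combines the deflection and geometric contributions like the legs of a right triangle. A minimal numeric sketch (the parameter values below are illustrative assumptions, not figures from the paper; the classical lg = sqrt(a·de) formula is used for the geometric term):

```python
import math

def geometric_contact_length(depth_of_cut, wheel_diameter):
    # Classical geometric contact length: lg = sqrt(a * de), where a is the
    # depth of cut and de the equivalent wheel diameter (same length units).
    return math.sqrt(depth_of_cut * wheel_diameter)

def orthogonal_contact_length(lf, lg, Rr=1.0):
    # Orthogonal model from the abstract: lc^2 = (Rr*lf)^2 + lg^2, combining
    # the deflection contact length lf and the geometric contact length lg.
    return math.sqrt((Rr * lf) ** 2 + lg ** 2)
```

By construction lc is never shorter than either contribution, consistent with measured contact lengths exceeding lg.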
12

Depending on VR : Rule-based Text Simplification Based on Dependency Relations

Johansson, Vida January 2017 (has links)
The amount of text that is written and made available increases all the time. However, it is not readily accessible to everyone. The goal of the research presented in this thesis was to develop a system for automatic text simplification based on dependency relations, develop a set of simplification rules for the system, and evaluate the performance of the system. The system was built on a previous tool and extended so that it could perform the operations required by the rules in the rule set. The rule set was developed by manually adapting the rules to a set of training texts. The evaluation method used was a classification task with both objective measures (precision and recall) and a subjective measure (correctness). The performance of the system was compared to that of a system based on constituency relations. The results showed that the current system scored higher on both precision (96% compared to 82%) and recall (86% compared to 53%), indicating that the syntactic information dependency relations provide is sufficient to perform text simplification. Further evaluation should account for how helpful the text simplification produced by the current system is for target readers.
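The objective measures reported above are the standard set-based definitions. A small sketch of how precision and recall are typically computed over predicted versus gold simplification operations (the toy data is hypothetical):

```python
def precision_recall(predicted, gold):
    # Precision: share of predicted items that are correct.
    # Recall: share of gold items that were predicted.
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```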
13

Chaine 3D interactive / Interactive 3D chain

Daval, Vincent 09 December 2014 (has links)
Avec l'évolution constante des technologies, les scanners 3D fournissent de plus en plus de données avec une précision toujours plus grande. Cependant, l'augmentation considérable de la taille des données pose des problèmes, les fichiers deviennent très lourds et il peut en découler des difficultés de transmission ou de stockage. C'est pourquoi, la plupart du temps les données obtenues par les scanners vont être analysées, traitées puis simplifiées ; on parle alors de chaine d'acquisition 3D. Ce manuscrit présente une approche qui permet de numériser les objets de manière dynamique, en adaptant la densité de points dès l'acquisition en fonction de la complexité de l'objet à numériser, et ce sans a priori sur la forme de l'objet. Ce système permet d'éviter de passer par la chaine 3D classique, en ne calculant pas un nuage de points dense qu'il faudra simplifier par la suite, mais en adaptant la densité de points au niveau de l'acquisition afin d'obtenir des données simplifiées directement à la sortie de l'acquisition, permettant ainsi de réduire considérablement le temps de traitement des données. / With the constant evolution of technology, 3D scanners provide more and more data with ever greater precision. However, the substantial increase in data size is problematic: files become very heavy and can cause difficulties in data transmission or storage. Therefore, most of the time, the data obtained by 3D scanners are analyzed, processed and then simplified; this is called the 3D acquisition chain. This manuscript presents an approach that digitizes objects dynamically, adapting the point density during acquisition according to the complexity of the object being scanned, without prior knowledge of its shape. This system bypasses the classic 3D chain: instead of computing a dense point cloud that must be simplified afterwards, it adapts the point density at acquisition time so that simplified data are obtained directly at the output, significantly reducing processing time.
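The core idea, adapting sampling density to local surface complexity, can be caricatured in a few lines (a hedged sketch under the assumption that a local curvature estimate is available; the thesis works on real scanner data, not this toy rule):

```python
def adaptive_step(curvature, base_step=1.0, min_step=0.05):
    # Flat regions (curvature ~ 0) keep the coarse base step; highly curved
    # regions get a smaller step, i.e. a denser sampling, clamped at min_step.
    return max(base_step / (1.0 + curvature), min_step)
```

Driving the scanner with such a step directly yields a simplified point set, avoiding a dense-then-simplify pipeline.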
14

Zjednodušování textu v češtině / Text simplification in Czech

Burešová, Karolína January 2017 (has links)
This thesis deals with text simplification in Czech, in particular with lexical simplification. Several strategies of complex word identification, substitution generation and substitution ranking are implemented and evaluated. Substitution generation is attempted both in a dictionary-based manner and in an embedding-based manner. Some experiments involving people are also presented; they aim at gaining an insight into perceived simplicity/complexity and its factors. The experiments conducted and evaluated include sentence pair comparison and manual text simplification. Both the evaluation results of the various strategies and the outcomes of the experiments involving humans are described, and some future work is suggested.
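A common baseline for the substitution-ranking step (a sketch of the general technique, not necessarily this thesis's exact method) keeps only candidates more frequent than the target word, frequency being a rough proxy for simplicity, and orders them by corpus frequency:

```python
def rank_substitutions(word, candidates, freq):
    # Keep candidates strictly more frequent than the target word and
    # return them most-frequent first; freq maps word -> corpus count.
    simpler = [c for c in candidates if freq.get(c, 0) > freq.get(word, 0)]
    return sorted(simpler, key=lambda c: freq[c], reverse=True)
```

The frequency table here is a hypothetical toy; in practice it would come from a large corpus or from embedding-based similarity scores.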
15

La sécurisation des autorisations d’urbanisme / Securing urban planning permits

Martin, Pierre-Antoine 20 December 2013 (has links)
Le régime des autorisations d’urbanisme était l’objet de nombreuses critiques en raison de sa complexité, de l’incertitude du délai d’instruction et de l’imprévisibilité de la décision administrative. Cette situation résultait de l’accumulation des modifications sans vision d’ensemble. Les acteurs du droit de l’urbanisme n’étaient pas en mesure de prévoir aisément un résultat et de compter sur celui-ci.L’ordonnance du 8 décembre 2005 et la loi du 13 juillet 2006 réforment ce régime afin d’améliorer la sécurité juridique des acteurs du droit de l’urbanisme. Pendant de la loi du 13 décembre 2000 pour les documents d’urbanisme, cette réforme réécrit le Livre IV du Code de l’urbanisme.La réforme a intégré la sécurité juridique dans le droit de l’utilisation et de l’occupation des sols. La réforme a pour objectifs de clarifier le champ d’application des autorisations d’urbanisme en regroupant les travaux, de simplifier la procédure d’instruction, de garantir la prévisibilité de la décision administrative. Ces objectifs correspondent aux prescriptions techniques de la sécurité juridique, à savoir : la stabilité et la prévisibilité du droit.Entrée en vigueur depuis le 1er octobre 2007, le bilan de la réforme peut désormais être établi. Présentée comme un renforcement de la sécurité juridique du constructeur ou de l’aménageur, la réforme améliore l’efficacité de l’action administrative. La sécurité juridique de l’opérateur s’en trouve renforcée par ricochet.Le processus décisionnel a été aménagé pour sécuriser la délivrance des autorisations d’urbanisme. La réforme du contentieux de l’urbanisme vise aujourd’hui à renforcer la sécurisation des autorisations et la réalisation des constructions et des opérations d’aménagement. / The system of planning permissions was the subject of a number of criticisms because of its complexity, uncertainty regarding the length of the process and the unpredictability of administrative decisions. 
This situation was the result of piecemeal amendments being made without being considered as a whole. Those using the planning law were not able to easily foresee the outcome or be able to rely on it. The Ordonnance of 8 December 2005 and the Law of 13 July 2006 reformed this system in order to improve the legal certainty of those using the planning law. As a counterpart to the Law of 13 December 2000 relating to planning documents, this reform rewrote Book IV of the Planning Code. The reform integrated legal certainty into the law relating to the use and occupation of land. The aim of the reform is to clarify the circumstances in which planning permission is required by regrouping works, to simplify the application procedure and to improve the foreseeability of administrative decisions. These objectives correspond to the technical requirements of legal certainty, namely stability and predictability of the law. In force since 1 October 2007, the impact of the reform can now be assessed. Presented as strengthening the legal certainty of builders and developers, the reform improved the efficiency of the administrative process; as a result, the legal certainty of users has been strengthened. The decision-making process has been arranged to bring certainty to the granting of planning permissions. The reform of planning disputes currently aims to further secure permissions and the carrying out of building and development projects.
16

Lexical simplification : optimising the pipeline

Shardlow, Matthew January 2015 (has links)
Introduction: This thesis was submitted by Matthew Shardlow to the University of Manchester for the degree of Doctor of Philosophy (PhD) in the year 2015. Lexical simplification is the practice of automatically increasing the readability and understandability of a text by identifying problematic vocabulary and substituting easy to understand synonyms. This work describes the research undertaken during the course of a 4-year PhD. We have focused on the pipeline of operations which string together to produce lexical simplifications. We have identified key areas for research and allowed our results to influence the direction of our research. We have suggested new methods and ideas where appropriate. Objectives: We seek to further the field of lexical simplification as an assistive technology. Although the concept of fully-automated error-free lexical simplification is some way off, we seek to bring this dream closer to reality. Technology is ubiquitous in our information-based society. Ever-increasingly we consume news, correspondence and literature through an electronic device. E-reading gives us the opportunity to intervene when a text is too difficult. Simplification can act as an augmentative communication tool for those who find a text is above their reading level. Texts which would otherwise go unread would become accessible via simplification. Contributions: This PhD has focused on the lexical simplification pipeline. We have identified common sources of errors as well as the detrimental effects of these errors. We have looked at techniques to mitigate the errors at each stage of the pipeline. We have created the CW Corpus, a resource for evaluating the task of identifying complex words. We have also compared machine learning strategies for identifying complex words. We propose a new preprocessing step which yields a significant increase in identification performance. 
We have also tackled the related fields of word sense disambiguation and substitution generation. We evaluate the current state of the field and make recommendations for best practice in lexical simplification. Finally, we focus our attention on evaluating the effect of lexical simplification on the reading ability of people with aphasia. We find that in our small-scale preliminary study, lexical simplification has a negative effect, causing reading time to increase. We evaluate this result and use it to motivate further work into lexical simplification for people with aphasia.
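The pipeline this thesis studies strings together identification, generation and ranking stages. A skeletal sketch of that composition (the stage functions passed in are placeholders standing in for the thesis's learned models):

```python
def simplify(sentence, is_complex, generate, rank):
    # Lexical simplification pipeline: identify complex words, generate
    # candidate substitutions, rank them, and substitute the top candidate.
    out = []
    for token in sentence.split():
        if is_complex(token):
            ranked = rank(token, generate(token))
            out.append(ranked[0] if ranked else token)
        else:
            out.append(token)
    return " ".join(out)
```

Errors at any stage propagate downstream, which is why the thesis analyses and mitigates them stage by stage.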
17

Approximation robuste de surfaces avec garanties / Robust shape approximation and mapping between surfaces

Mandad, Manish 29 November 2016 (has links)
Cette thèse comprend deux parties indépendantes. Dans la première partie nous contribuons une nouvelle méthode qui, étant donnée un volume de tolérance, génère un maillage triangulaire surfacique garanti d'être dans le volume de tolérance, sans auto-intersection et topologiquement correct. Un algorithme flexible est conçu pour capturer la topologie et découvrir l'anisotropie dans le volume de tolérance dans le but de générer un maillage de faible complexité. Dans la seconde partie nous contribuons une nouvelle approche pour calculer une fonction de correspondance entre deux surfaces. Tandis que la plupart des approches précédentes procède par composition de correspondance avec un domaine simple planaire, nous calculons une fonction de correspondance en optimisant directement une fonction de sorte à minimiser la variance d'un plan de transport entre les surfaces / This thesis is divided into two independent parts. In the first part, we introduce a method that, given an input tolerance volume, generates a surface triangle mesh guaranteed to be within the tolerance, intersection free and topologically correct. A pliant meshing algorithm is used to capture the topology and discover the anisotropy in the input tolerance volume in order to generate a concise output. We first refine a 3D Delaunay triangulation over the tolerance volume while maintaining a piecewise-linear function on this triangulation, until an isosurface of this function matches the topology sought after. We then embed the isosurface into the 3D triangulation via mutual tessellation, and simplify it while preserving the topology. Our approach extends to surfaces with boundaries and to non-manifold surfaces. We demonstrate the versatility and efficacy of our approach on a variety of data sets and tolerance volumes. In the second part we introduce a new approach for creating a homeomorphic map between two discrete surfaces.
While most previous approaches compose maps over intermediate domains which result in suboptimal inter-surface mapping, we directly optimize a map by computing a variance-minimizing mass transport plan between two surfaces. This non-linear problem, which amounts to minimizing the Dirichlet energy of both the map and its inverse, is solved using two alternating convex optimization problems in a coarse-to-fine fashion. Computational efficiency is further improved through the use of Sinkhorn iterations (modified to handle minimal regularization and unbalanced transport plans) and diffusion distances. The resulting inter-surface mapping algorithm applies to arbitrary shapes robustly and efficiently, with little to no user interaction.
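The Sinkhorn iterations mentioned above alternate row and column rescalings of a Gibbs kernel until both marginals are matched. A minimal, dense, pure-Python sketch of the standard balanced version (without the thesis's modifications for minimal regularization or unbalanced plans):

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=500):
    # Entropy-regularized optimal transport: returns a plan P whose row sums
    # approach a and column sums approach b, from the kernel K = exp(-cost/eps).
    n, m = len(a), len(b)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

Smaller eps approximates the unregularized transport more closely but needs more (and numerically stabler) iterations, which is what the thesis's modified iterations address.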
18

Modélisation et calcul parallèle pour le Web SIG 3D / Modeling and Parallel Computation for 3D WebGIS

Cellier, Fabien 31 January 2014 (has links)
Cette thèse est centrée sur l'affichage et la manipulation en temps interactif au sein d'un navigateur Internet de modèles 3D issus de Systèmes d'Informations Géographiques (SIG). Ses principales contributions sont la visualisation de terrains 3D haute résolution, la simplification de maillages irréguliers sur GPU, et la création d'une nouvelle API navigateur permettant de réaliser des traitements lourds et efficaces (parallélisme GP/GPU) sans compromettre la sécurité. La première approche proposée pour la visualisation de modèles de terrain s'appuie sur les récents efforts des navigateurs pour devenir une plateforme versatile. Grâce aux nouvelles API 3D sans plugin, nous avons pu créer un client de visualisation de terrains "streamés" à travers HTTP. Celui-ci s'intègre parfaitement dans les écosystèmes Web-SIG actuels (desktop et mobile) par l'utilisation des protocoles standards du domaine (fournis par l'OGC, Open Geospatial Consortium). Ce prototype s'inscrit dans le cadre des partenariats industriels entre ATOS Worldline et ses clients SIG, et notamment l'IGN (institut national de l'information géographique et forestière) avec le Géoportail (http://www.geoportail.gouv.fr) et ses API cartographiques. La 3D dans les navigateurs possède ses propres défis, qui sont différents de ce que l'on connaît des applications lourdes : aux problèmes de transfert de données s'ajoutent les restrictions et contraintes du JavaScript. Ces contraintes, détaillées dans le paragraphe suivant, nous ont poussé à repenser les algorithmes de référence de visualisation de terrain afin de prendre en compte les spécificités dues aux navigateurs. Ainsi, nous avons su profiter de la latence du réseau pour gérer dynamiquement les liaisons entre les parties du maillage sans impacter significativement la vitesse du rendu. 
Au-delà de la visualisation 3D, et bien que le langage JavaScript autorise le parallélisme de tâches, le parallélisme de données reste quasi inexistant au sein des navigateurs Web. Ce constat, couplé à la faiblesse de traitement du JavaScript, constituait un frein majeur dans notre objectif de définir une plateforme SIG complète et performante intégrée au navigateur. C'est pour cette raison que nous avons conçu et développé, à travers les WebCLWorkers, une API Web de calcul GP/GPU haute performance répondant aux critères de simplicité et de sécurité inhérents au Web. Contrairement à l'existant, qui se base sur des codes déjà précompilés ou met de côté les performances, nous avons tenté de trouver le bon compromis pour avoir un langage proche du script mais sécurisé et performant, en utilisant les API OpenCL comme moteur d'exécution. Notre proposition d'API a intéressé la fondation Mozilla qui nous a ensuite demandé de participer à l'élaboration du standard WebCL dans la cadre du groupe Khronos, (aux côtés de Mozilla mais aussi de Samsung, Nokia, Google, AMD, etc.). Grâce aux nouvelles ressources de calcul ainsi obtenues, nous avons alors proposé un algorithme de simplification parallèle de maillages irréguliers. Alors que l'état de l'art repose essentiellement sur des grilles régulières pour le parallélisme (hors Web) ou sur la simplification via clusterisation et kd-tree, aucune solution ne permettait d'avoir à la fois une simplification parallèle et des modèles intermédiaires utilisables pour la visualisation progressive en utilisant des grilles irrégulières. 
Notre solution repose sur un algorithme en trois étapes utilisant des priorités implicites et des minima locaux afin de réaliser la simplification, et dont le degré de parallélisme est linéairement lié au nombre de points et de triangles du maillage à traiter [etc...] / This thesis focuses on displaying and manipulating 3D models from Geographic Information Systems (GIS) in interactive time directly in a web browser. Its main contributions are the visualization of high resolution 3D terrains, the simplification of irregular meshes on the GPU, and the creation of a new API for performing heavy and effective computing in the browser (GP/GPU parallelism) without compromising safety. The first approach proposed for the visualization of terrain models builds on recent browser efforts to become a versatile platform. With the new pluginless 3D APIs, we have created a visualization client for terrain models streamed over HTTP. It fits perfectly into the current Web-GIS ecosystem (desktop and mobile) through the use of the standard protocols provided by the OGC (Open Geospatial Consortium). This prototype is part of an industrial partnership between ATOS Worldline and its GIS customers, in particular the IGN (French National Geographic Institute) with the Geoportail application (http://www.geoportail.gouv.fr) and its mapping APIs. 3D in the browser brings its own challenges, different from what we know from heavy desktop applications: the restrictions and constraints of JavaScript as well as data-transfer problems. These constraints, detailed in the next paragraph, led us to rethink the standard terrain-visualization algorithms to take browser specificities into account. Thus, we have taken advantage of network latency to dynamically manage the connections between the different parts of the mesh without significantly impacting rendering speed.
Beyond 3D visualization, and even though the JavaScript language allows task parallelism, data parallelism remains absent from Web browsers. This observation, added to the slowness of JavaScript processing, constituted a major obstacle to our goal of defining a complete and powerful GIS platform integrated in the browser. That is why we designed and developed the WebCLWorkers, a GP/GPU Web API for high-performance computing that meets the criteria of simplicity and security inherent to the Web. Unlike existing approaches, which either rely on precompiled code or disregard performance, we sought a trade-off: a language close to script, yet secure and efficient, using the OpenCL API as the execution engine. Our API proposal interested the Mozilla Foundation, which asked us to participate in the development of the WebCL standard within the Khronos Group (alongside Mozilla, Samsung, Nokia, Google, AMD, and others). Exploiting these new computing resources, we then proposed an algorithm for the parallel simplification of irregular meshes. While the state of the art was mainly based on regular grids for parallelism (without taking Web browser restrictions into account) or on simplification via clustering and kd-trees, no solution allowed both parallel simplification and intermediate models usable for progressive visualization on irregular grids. Our solution is based on a three-step algorithm using implicit priorities and local minima to achieve simplification, and its degree of parallelism is linearly related to the number of points and triangles in the mesh to process. In this thesis we have proposed an innovative approach to pluginless 3D WebGIS visualization, offered tools that bring comfortable GP/GPU computing power to the browser, and designed a method for the parallel simplification of irregular meshes that allows levels of detail to be visualized directly in Web browsers.
Based on these initial results, it becomes possible to bring all the rich functionality of desktop GIS clients to Web browsers, on PCs as well as mobile phones and tablets.
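The local-minima selection underlying the parallel simplification can be sketched as follows: an element is collapsed in a given pass only if its cost is a strict local minimum among its neighbours, so no two simultaneously processed elements are adjacent (a toy illustration on a small graph, not the thesis's GPU implementation):

```python
def local_minima(errors, neighbors):
    # errors: element -> collapse cost; neighbors: element -> adjacent elements.
    # The selected elements form an independent set, safe to process in parallel.
    return [v for v, e in errors.items()
            if all(e < errors[n] for n in neighbors.get(v, []))]
```

Repeating this selection over successive passes simplifies the whole mesh while each pass stays conflict-free, which is what makes the degree of parallelism scale with mesh size.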
19

Simplification de modèles mathématiques représentant des cultures cellulaires / Simplification of mathematical models representing cell cultures

Cardin-Bernier, Guillaume January 2015 (has links)
The use of living cells in an industrial process takes advantage of the inherent complexity of living organisms to accomplish complex tasks that are sometimes only partially understood. Whether for the production of biomass, the production of molecules of interest or the breakdown of undesirable molecules, these processes rely on the many reactions that make up cellular metabolism. To describe the evolution of such systems, mathematical models composed of a set of differential equations are used. As knowledge of metabolism has grown, the mathematical models representing it have become more complex. The level of complexity required to explain the phenomena at play in a specific process is difficult to define. Thus, when attempting to model a new process, selecting the model to use can be problematic. One attractive option is to select a model from the literature and adapt it to the process at hand. The information contained in the model must then be evaluated against the phenomena observable under the operating conditions. Models from the literature are often over-parameterized for use under the operating conditions of the targeted processes, which causes parameter identifiability problems. Moreover, not all of the state variables used in a model are necessarily measured under normal operating conditions. The objective of this project is to isolate the usable information contained in these models through their methodical simplification; indeed, simplifying a model yields a better understanding of the dynamics at work in the process. This project defined and evaluated three methods for simplifying mathematical models describing a cell culture process. The first method is based on applying criteria to the various elements of the model, the second on an Akaike-type information criterion, and the third considers model-order reduction by removing state variables. The results of these simplification methods are presented using four cell models from the literature.
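The second method's Akaike-type criterion trades goodness of fit against parameter count, so an over-parameterized model loses to a simpler one that fits almost as well. A generic least-squares form of the criterion (the numbers in the usage are illustrative assumptions, not the thesis's models):

```python
import math

def aic(rss, n, k):
    # Akaike information criterion for a least-squares fit with n data points,
    # k parameters and residual sum of squares rss (lower is better).
    return n * math.log(rss / n) + 2 * k

def best_model(models, n):
    # models: name -> (rss, n_params); returns the lowest-AIC model name.
    return min(models, key=lambda name: aic(models[name][0], n, models[name][1]))
```

With 50 data points, a 4-parameter model with RSS 11.0 beats a 12-parameter model with RSS 10.0: the marginal gain in fit does not justify eight extra parameters.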
20

Análise bidirecional da língua na simplificação sintática em textos do português voltada à acessibilidade digital / Bidirectional language analysis in syntactic simplification of Portuguese texts focused on digital accessibility

Candido Junior, Arnaldo 28 March 2013 (has links)
O Processamento de Línguas Naturais é uma área interdisciplinar cujas pesquisas podem ser divididas em duas grandes linhas: análise e síntese da língua. Esta pesquisa de doutorado traz contribuições para ambas. Para a análise da língua, um modelo integrativo capaz de unir diferentes níveis linguísticos é apresentado e avaliado em relação aos níveis morfológico, (incluindo subníveis léxico e morfossintático), sintático e semântico. Enquanto análises tradicionais são feitas dos níveis mais baixos da língua para os mais altos, em uma estratégia em cascata, na qual erros dos níveis mais baixos são propagados para os níveis mais altos, o modelo de análise proposto é capaz de unificar a análise de diferentes níveis a partir de uma abordagem bidirecional. O modelo é baseado em uma grande rede neural, treinada em córpus, cujos padrões de treinamento são extraídos de tokens presentes nas orações. Um tipo de recorrência denominado coativação é aplicado no modelo para permitir que a análise de um padrão modifique e seja modificada pela análise de outros padrões em um mesmo contexto. O modelo de análise permite investigações para as quais não foi originalmente planejado, além de apresentar resultados considerados satisfatórios em lematização e análise morfossintática, porém ainda demandando aprimoramento para a tarefa de análise sintática. A ferramenta associada a esse modelo permitiu investigar a recorrência proposta e a interação bidirecional entre níveis da língua, incluindo seus subníveis. Experimentos para coativação e bidirecionalidade foram realizados e considerados satisfatórios. Para a área de síntese da língua, um modelo de simplificação sintática, tarefa considerada como adaptação de texto para texto, baseado em regras manuais é aplicado em textos analisados sintaticamente, tendo como objetivo tornar os textos sintaticamente mais simples para leitores com letramento rudimentar ou básico. 
A ferramenta associada a esse modelo permitiu realizar simplificação sintática com medida-f de 77,2%, simplificando aproximadamente 16% de orações em textos do gênero enciclopédico / Natural Language Processing is an interdisciplinary research area that encompasses two large research avenues: language analysis and language synthesis. This thesis contributes to both of them. Regarding language analysis, it presents an integrative model that links different levels of linguistic analysis. The evaluation of this model takes several levels into consideration: morphological (including lexical and morphosyntactic sub-levels), syntactic and semantic. Whereas traditional analyses are undertaken from the lower levels to the upper ones, propagating errors in that direction, the model proposed herein is able to unify different levels of analysis using a bidirectional approach. The model is based on a large-scale neural network trained on a corpus, which extracts its training features from tokens within the sentences. A type of recurrence denominated co-activation is applied to the model to make the analysis of a pattern able to modify (and be modified by) the analysis of other patterns in the same context. This model may be used for purposes different from those for which it was conceived and yields satisfactory results in lemmatization and part-of-speech analysis, but still needs work on syntactic analysis. The tool associated with this model makes it possible to investigate the proposed recurrence and the bidirectional influence of different levels on each other, including sub-level interaction. Experiments on both co-activation and bidirectional level integration were performed, and the results were considered satisfactory.
On the other hand, in what concerns language synthesis, this thesis presents a rule-based model of syntactic simplification (one of several text adaptation techniques), applicable to syntactically parsed texts in order to render them simpler for low-literacy readers. The tool associated with this model makes it possible to carry out syntactic simplification in the Portuguese language. It achieved an f-measure of 77.2% in a task that simplified approximately 16% of the sentences of an encyclopedic text.
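A rule in the spirit of such syntactic simplification (a toy illustration on UD-style dependency arcs, not this thesis's actual rule set) splits clause coordination into two sentences, copying the subject into the second clause:

```python
def subtree(tokens, head_id):
    # Collect head_id and all of its dependency descendants.
    ids = {head_id}
    changed = True
    while changed:
        changed = False
        for tid, _, head, _ in tokens:
            if head in ids and tid not in ids:
                ids.add(tid)
                changed = True
    return ids

def split_conjoined_clauses(tokens):
    # tokens: (id, word, head_id, deprel) in surface order. If a verb is
    # conjoined to the root, emit two sentences, the second one inheriting
    # the root's subject (a deliberately simplistic toy rule).
    deprel = {t[0]: t[3] for t in tokens}
    root = next(t[0] for t in tokens if t[3] == "root")
    conjs = [t[0] for t in tokens if t[3] == "conj" and t[2] == root]
    if not conjs:
        return [" ".join(t[1] for t in tokens)]
    subj_ids = set()
    for t in tokens:
        if t[3] == "nsubj" and t[2] == root:
            subj_ids |= subtree(tokens, t[0])
    second = subtree(tokens, conjs[0])
    conj_ids = {i for i in second if deprel[i] != "cc"}
    first_ids = {t[0] for t in tokens} - second

    def render(ids):
        return " ".join(t[1] for t in tokens if t[0] in ids)

    return [render(first_ids), render(subj_ids | conj_ids)]
```

For "She sang and danced" this yields "She sang" and "She danced"; a production rule set would also handle relative clauses, appositions and other constructions, with ordering and agreement repairs.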
