401 |
Consolidation des poudres métalliques par des déformations plastiques extrêmes : torsion sous haute pression : expériences et modélisations / Consolidation of Metal Powders through Severe Plastic Deformation: High Pressure Torsion: Experiments and Modeling. Zhao, Yajun, 29 February 2016 (has links)
Severe plastic deformation (SPD) processes can impose extremely large strains on a metal, transforming its metallurgical state by introducing a high dislocation density and a high degree of microstructure refinement. In the present thesis work, High Pressure Torsion (HPT) experiments were performed to consolidate different powders, including nano- and micro-scale iron powders. The experiments were carried out successfully at room temperature, achieving both a low level of residual porosity and significant grain refinement, thanks to the intense shear strain and hydrostatic pressure applied in HPT. The compaction was done in two steps: first axial compaction, then shear deformation by rotating the bottom part of the HPT die while maintaining the axial force constant. The homogeneity of the shear strain across the thickness of the disk was examined by local strain measurement, which showed a gradient distribution. X-ray diffraction analysis carried out on the consolidated samples revealed no significant proportion of oxides. The effect of shear deformation on the microstructure and texture was investigated by scanning electron microscopy and electron backscatter diffraction (EBSD). The micro-hardness and average porosity of the samples as a function of shear strain at constant hydrostatic pressure were also measured. A modeling framework implemented in the Taylor model was developed to simulate the effect of grain boundary sliding (GBS) on the evolution of crystallographic texture. The main effect found is a shift of the ideal orientations under simple shear conditions, which was verified experimentally. The consolidation process by HPT was simulated numerically using the finite element method together with a powder plasticity model. The simulation confirmed the experimentally observed average residual porosity and the gradients in plastic strain. The local density distribution was also modeled.
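The radial strain gradient noted in the abstract follows from the usual first-order estimate of shear strain in HPT, γ(r) = 2πNr/h for N anvil turns and disk thickness h. This is a textbook idealization (no slippage, constant thickness), not the thesis's own measurement model, and the disk dimensions below are hypothetical.

```python
import math

def hpt_shear_strain(r_mm: float, n_turns: float, thickness_mm: float) -> float:
    """Nominal shear strain in high pressure torsion: gamma = 2*pi*N*r / h.

    Assumes ideal torsional flow (no slippage between anvil and sample)
    and constant disk thickness -- the usual first-order HPT estimate.
    """
    return 2.0 * math.pi * n_turns * r_mm / thickness_mm

# Hypothetical disk: 10 mm diameter, 0.8 mm thick, 5 anvil turns.
for r in (0.0, 1.0, 2.5, 5.0):
    print(f"r = {r:.1f} mm -> gamma = {hpt_shear_strain(r, 5, 0.8):.1f}")
```

The center of the disk is nominally undeformed while the rim reaches a shear strain of roughly 200 in this example, which is consistent with the radial microstructure and hardness gradients that HPT studies report.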
|
402 |
Le Théâtre épuré et sans concessions de Ludwik Margules / Ludwik Margules' purified and unyielding theater / El teatro depurado y sin concesiones de Ludwik Margules. Paulin Rios, Maria Teresa, 15 April 2016 (has links)
There is a great deal of controversy around the Polish-Mexican director Ludwik Margules (1933-2006), responsible for around forty productions in Mexico since the 1950s. Famous for his exigency and rigor during rehearsals, Ludwik Margules was known for his provocations and intimidations towards performers. His method, based on transgressing the actors' limits, has been called into question many times. The objective of this research is to understand how Margules managed to create a purified and unyielding theater that drove him toward the essential and led him to elaborate a language characterized by its minimalism. Our hypothesis is that he achieved this through a working method based on transgression, in search of the essence of being. Besides being a stage director, Margules was a professor and director at various institutions in Mexico City, where he renewed and enriched theater pedagogy. His small independent school, the Foro Teatro Contemporáneo (Contemporary Theater Forum), was recognized for its quality in spite of its precariousness. His vast knowledge of theatrical language and his enormous pedagogical work enriched Mexican theater. His art of staging, aimed at an intellectual audience, was characterized by a poetic language. Through a syncretism of his Polish and Mexican cultures, Margules fought the old, petrified Spanish theater that pervaded Mexican stages. During his tenure as director of the Centro Universitario de Teatro at the National Autonomous University of Mexico, he created programs in stage directing and stage design that trained several artists, even though these programs lasted only five years.
|
403 |
Geração e refinamento de malhas segmentadas a partir de imagens com textura / Generating and refining segmented meshes from textured images. Mario Augusto de Souza Lizier, 23 November 2009 (has links)
With the spread of traditional image-capturing devices such as digital cameras, and the technological advancement of more specialized imaging devices such as CT and MRI, the need for numerical methods to simulate physical phenomena in domains defined by images has also increased. One of the prerequisites for applying such numerical methods is the discretization of the corresponding domain, in a process called mesh generation. Although several mesh generation methods have been proposed to discretize domains defined by geometric primitives, little has been done to generate a decomposition directly from images. We present an approach to generate quality meshes from domains defined by textured images. More specifically, the research described in this thesis contributes to the improvement of the Imesh algorithm by removing three of its main limitations: handling of textured images, control of the level of mesh refinement, and support for other (non-simplicial) element types. These contributions make the mesh generation process more flexible and extend the range of applications of Imesh, by both handling textured images and enabling the use of numerical methods for non-simplicial elements. The mesh quality improvement algorithm uses a new remeshing approach based on templates and guided by Bézier patches.
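The template-based remeshing mentioned at the end can be illustrated with the simplest such template, the 1-to-4 "red" split at edge midpoints. This is a generic sketch of template refinement, not Imesh's actual template set, and it omits the Bézier-patch guidance.

```python
def red_refine(tris, verts):
    """Uniform 1-to-4 (red) refinement: split every triangle at its edge
    midpoints. Midpoints are cached per edge so shared edges stay conforming."""
    verts = list(verts)
    midpoint = {}

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint:
            (xa, ya), (xb, yb) = verts[a], verts[b]
            verts.append(((xa + xb) / 2.0, (ya + yb) / 2.0))
            midpoint[key] = len(verts) - 1
        return midpoint[key]

    out = []
    for a, b, c in tris:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out, verts

tris, verts = red_refine([(0, 1, 2)], [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(len(tris), len(verts))  # 4 triangles, 6 vertices
```

Each application quadruples the triangle count while preserving the covered area, which is why such templates are convenient for controlling the level of refinement locally.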
|
404 |
Réponses manquantes : Débogage et Réparation de requêtes / Query Debugging and Fixing to Recover Missing Query Results. Tzompanaki, Aikaterini, 14 December 2015 (has links)
With the increasing amount of available data and of data transformations, typically specified by queries, the need to understand them also increases: "Why are there medicine books in my sales report?" or "Why are there no database books?" The first question asks for the origins, or provenance, of the result tuples in the source data, a problem studied intensively for about twenty years. Reasoning about missing query results, expressed by Why-Not questions such as the second one, has only recently received the attention it deserves. Why-Not questions can be answered by providing explanations for the missing tuples. These explanations identify why and how data pertinent to the missing tuples were not properly combined by the query. Essentially, the causes lie either in the input data (e.g., erroneous or incomplete data) or at the query level (e.g., a query operator such as a join). Assuming that the source data contain all the necessary relevant information, we can identify the responsible query operators, forming query-based explanations. This information can then be used to propose query refinements that modify the responsible operators of the initial query so that the refined query result contains the expected data. This thesis proposes a framework for SQL query debugging and fixing to recover missing query results, based on query-based explanations and query refinements. Our contribution to query debugging consists of two approaches. The first is tree-based. We first provide the formal framework around Why-Not questions, missing from the state of the art. We then review the state of the art in detail, showing how it can lead to inaccurate explanations or fail to provide an explanation at all. We further propose the NedExplain algorithm, which computes correct explanations for SPJA queries and unions thereof, thus considering more operators (aggregation) than the state of the art. Finally, we show experimentally that NedExplain outperforms the state of the art in both time performance and explanation quality. However, this approach yields explanations that differ for equivalent query trees, thus providing incomplete information about what is wrong with the query. We address this issue by introducing a more general notion of explanation in the form of a polynomial, which captures all the combinations in which the query conditions should be fixed in order for the missing tuples to appear in the result. This method targets conjunctive queries with inequalities. We propose two algorithms: Ted, which naively interprets the definitions of polynomial explanations, and the optimized Ted++. We show that Ted does not scale well with the size of the database, whereas Ted++ efficiently computes the polynomial by relying on schema and data partitioning and by advantageously replacing expensive database evaluations (including iterated subqueries with Cartesian products) with mathematical calculations. Finally, we experimentally evaluate the quality of the polynomial explanations and the efficiency of Ted++, including a comparative evaluation. For query fixing, we propose a new approach that refines a query by leveraging polynomial explanations. Based on the input data, we propose how to change the query conditions pinpointed by the explanations by adjusting the constant values of the selection conditions. For joins, we introduce a novel type of query refinement using outer joins. These techniques are implemented in the FixTed algorithm, enabling a performance study and a comparative evaluation, and we discuss how our method has the potential to be more efficient and effective than related work. Finally, both Ted++ and FixTed have been implemented in a system prototype. This query debugging and fixing platform, EFQ for short, allows users to interactively debug and fix their queries when posing Why-Not questions.
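As a toy illustration of the query-based explanations described above (not the Ted/Ted++ algorithms, and restricted to a single-table selection query), one can record which conditions a missing tuple fails: the set of failed conditions says what must change for the tuple to appear. A polynomial explanation generalizes this to combinations of conditions across joins and inequalities. All names and data below are hypothetical.

```python
# Toy query: SELECT * FROM books WHERE price < 30 AND topic = 'databases'.
conditions = {
    "price < 30":        lambda t: t["price"] < 30,
    "topic = databases": lambda t: t["topic"] == "databases",
}

def why_not(tuple_):
    """Return the set of conditions the tuple fails: a (single-tuple,
    selection-only) query-based explanation for its absence."""
    return {name for name, pred in conditions.items() if not pred(tuple_)}

missing = {"title": "Readings in Databases", "price": 45, "topic": "databases"}
print(why_not(missing))  # {'price < 30'}
```

Here the explanation pinpoints the selection condition to relax (e.g., raise the price threshold); an empty set would mean the tuple is not actually pruned by any selection, so the cause lies elsewhere in the query or in the data.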
|
405 |
Anisotropic mesh refinement in stabilized Galerkin methods. Apel, Thomas; Lube, Gert, 30 October 1998 (has links)
The numerical solution of the convection-diffusion-reaction problem is considered in two and three dimensions. A stabilized finite element method of Galerkin/least-squares type accommodates diffusion-dominated as well as convection- and/or reaction-dominated situations. The resolution of boundary layers occurring in the singularly perturbed case is accomplished using anisotropic mesh refinement in boundary layer regions. In this paper, the standard analysis of the stabilized Galerkin method on isotropic meshes is extended to more general meshes with boundary layer refinement. Simplicial Lagrangian elements of arbitrary order are used.
|
406 |
Anisotropic mesh refinement for singularly perturbed reaction diffusion problems. Apel, Th.; Lube, G., 30 October 1998 (has links)
The paper is concerned with the finite element resolution of layers appearing in singularly perturbed problems. A special anisotropic grid of Shishkin type is constructed for reaction-diffusion problems. Estimates of the finite element error in the energy norm are derived for two methods, namely the standard Galerkin method and a stabilized Galerkin method. The estimates are uniformly valid with respect to the (small) diffusion parameter. One ingredient is a pointwise description of derivatives of the continuous solution. A numerical example supports the result.
Another key ingredient for the error analysis is a refined estimate for
(higher) derivatives of the interpolation error. The assumptions on admissible
anisotropic finite elements are formulated in terms of geometrical conditions
for triangles and tetrahedra. The application of these estimates is not
restricted to the special problem considered in this paper.
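The Shishkin-type grids mentioned above are piecewise uniform: a fine uniform mesh inside each boundary layer, a coarse uniform mesh in the interior, glued at a transition point τ. A minimal 1D sketch, assuming layers of width O(ε) at both endpoints and the common transition-point choice τ = min(1/4, σ ε ln n); the constant σ depends on the method and element order and is taken as 2 here for illustration.

```python
import math

def shishkin_mesh(n: int, eps: float, sigma: float = 2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with boundary layers at
    both endpoints. tau = min(1/4, sigma*eps*ln n) is the transition point;
    n (number of intervals) must be divisible by 4. Each layer [0, tau]
    and [1 - tau, 1] gets n/4 intervals; the interior gets n/2."""
    assert n % 4 == 0
    tau = min(0.25, sigma * eps * math.log(n))
    pts = []
    pts += [i * tau / (n // 4) for i in range(n // 4)]                 # left layer
    pts += [tau + i * (1 - 2 * tau) / (n // 2) for i in range(n // 2)] # interior
    pts += [1 - tau + i * tau / (n // 4) for i in range(n // 4 + 1)]   # right layer
    return pts

mesh = shishkin_mesh(16, 1e-3)
print(len(mesh))  # 17 mesh points spanning [0, 1]
```

For small ε the layer intervals have width O(ε ln n / n) while the interior intervals have width O(1/n), which is precisely the anisotropy that makes the error estimates uniform in the perturbation parameter.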
|
407 |
FEM auf irregulären hierarchischen Dreiecksnetzen / FEM on irregular hierarchical triangular meshes. Groh, U., 30 October 1998 (has links)
From the viewpoint of the adaptive solution of partial differential equations, a finite element method on hierarchical triangular meshes is developed that permits hanging nodes arising from nonuniform hierarchical refinement. The construction, extension, and restriction of the nonuniform hierarchical basis and the accompanying mesh are described by graphs. The corresponding FE basis is generated by hierarchical transformation. The characteristic feature of this generalizable concept is the combination of a conforming hierarchical basis, which makes it easy to define and change the FE space, with an accompanying nonconforming FE basis, which makes it easy to assemble the FE system of equations. For an elliptic model problem, the conforming FEM problem is solved by an iterative method applied to the nonconforming FEM system and modified by projection into the subspace of conforming basis functions. The iterative method used is the Yserentant- or BPX-preconditioned conjugate gradient algorithm. On a MIMD computer system, parallelization by domain decomposition is easy and efficient to organize, both for the generation and solution of the system of equations and for changes of basis and mesh.
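The projection into the conforming subspace hinges on interpolation constraints at the hanging nodes: for piecewise-linear elements, a conforming function's value at a hanging node is the mean of the values at the endpoints of the coarse edge it sits on. A minimal sketch of such a projection follows; the node indices and values are hypothetical, and this is not the paper's actual hierarchical transformation.

```python
def project_conforming(u, hanging):
    """Project nonconforming nodal values onto the conforming subspace by
    overwriting each hanging node with the mean of its parent edge's
    endpoint values (exact for piecewise-linear elements).

    u       : list of nodal values
    hanging : dict mapping hanging_node_index -> (endpoint_a, endpoint_b)
    """
    u = list(u)
    for h, (a, b) in hanging.items():
        u[h] = 0.5 * (u[a] + u[b])
    return u

# Node 4 hangs at the midpoint of the edge between nodes 0 and 1.
u = project_conforming([1.0, 3.0, 0.0, 5.0, 9.9], {4: (0, 1)})
print(u[4])  # 2.0
```

Applying such a projection after each iteration is what lets an iterative solver work on the easily assembled nonconforming system while converging to the conforming solution.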
|
408 |
Behandlung gekrümmter Oberflächen in einem 3D-FEM-Programm für Parallelrechner / Treatment of curved surfaces in a 3D FEM program for parallel computers. Pester, M., 30 October 1998 (has links)
The paper presents a method for generating curved surfaces of 3D finite element meshes by mesh refinement, starting from a very coarse grid. This is useful for parallel implementations, where the finest meshes should be computed rather than read from large files. The paper deals with simple geometries such as spheres, cylinders, and cones, but the method may be extended to more complicated geometries. (With 45 figures.)
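The idea for the sphere case can be sketched as follows, assuming a surface given by a sphere centered at the origin: each refinement step inserts edge midpoints and projects them radially back onto the surface, so an arbitrarily fine boundary mesh is computed from a tiny coarse grid instead of being stored. This is a generic illustration, not the paper's implementation.

```python
import math

def refine_on_sphere(tris, verts, radius=1.0):
    """One step of midpoint refinement where every new vertex is projected
    radially onto the sphere of the given radius (centered at the origin)."""
    verts = list(verts)
    cache = {}

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            p = [(verts[a][i] + verts[b][i]) / 2.0 for i in range(3)]
            s = radius / math.sqrt(sum(c * c for c in p))
            verts.append(tuple(c * s for c in p))   # push midpoint to surface
            cache[key] = len(verts) - 1
        return cache[key]

    out = []
    for a, b, c in tris:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out, verts

# Coarse grid: one octant face of an octahedron, refined twice.
tris, verts = [(0, 1, 2)], [(1.0, 0, 0), (0, 1.0, 0), (0, 0, 1.0)]
for _ in range(2):
    tris, verts = refine_on_sphere(tris, verts)
print(len(tris))  # 16 triangles, all vertices on the unit sphere
```

Cylinders and cones work the same way with the projection replaced by the corresponding surface mapping; only the coarse grid and the projection function need to be communicated to each processor.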
|
409 |
Grain refinement in hypoeutectic Al-Si alloy driven by electric currents. Zhang, Yunhu, 19 February 2016 (has links)
The present thesis investigates grain refinement in solidifying hypoeutectic Al-7wt%Si alloy driven by electric currents. The grain size reduction generated by electric currents during solidification has been intensively investigated. However, since several effects of electric currents have the potential to generate finer equiaxed grains, it is still debated which effect plays the key role in the grain refinement process. In addition, knowledge about the grain refinement mechanism under the application of electric currents remains fragmentary and inconsistent. Hence, the research objectives of the present thesis focus on the role of the individual electric current effects and on the grain refinement mechanism under the application of electric currents.
Chapter 1 introduces the subject of grain refinement in alloys driven by electric currents during solidification, covering the research objectives, the research motivation, a brief review of the research history, a short introduction to the effects of electric currents, and a review of the state of research on the grain refinement mechanism.
Chapter 2 describes the research methods: the experimental materials, the experimental setup, the experimental procedure, the analysis methods for the solidified samples, and the numerical method.
Chapter 3 focuses on the role of the electric current effects in the grain refinement process. A series of solidification experiments is performed for various effective electric currents, for both electric current pulses and direct current. Corresponding temperature and flow measurements are carried out as the effective current intensity increases. In parallel, numerical simulations provide details of the flow structure and of the distribution of electric current density and electromagnetic force. Finally, the role of the electric current effects is discussed in order to identify the key effect in current-driven grain refinement.
Chapter 4 investigates the grain refinement mechanism driven by electric currents. It focuses on the origin of the finer equiaxed grains, since this origin is central to understanding the refinement mechanism. A series of solidification experiments is carried out on the Al-7wt%Si alloy and on high-purity aluminum, and the main origin of the equiaxed grains is deduced from the experimental results.
Chapter 5 presents three further investigations based on the insights gained in chapters 3 and 4. Building on the key electric current effect identified in chapter 3, it presents a potential approach to promote grain refinement. In addition, the solute distribution under the influence of electric current is examined. Moreover, the grain refinement mechanism under an applied travelling magnetic field is investigated through a series of solidification experiments, compared with the current-driven experiments of chapter 4.
Chapter 6 summarizes the main conclusions of the presented work.
Table of contents:
Abstract
Contents
List of figures
List of tables
1. Introduction
1.1 Research objectives
1.2 Research motivation
1.3 Research history
1.4 Electric currents effects
1.4.1 Some fundamentals
1.4.2 Role of electric currents effects in grain refinement
1.5 Grain refinement mechanism
1.5.1 Nucleation theory
1.5.2 Equiaxed grain formation without the application of external fields
1.5.3 Grain refinement mechanism under the application of electric currents
1.5.4 Grain refinement mechanism under the application of magnetic field
2. Research methods
2.1 Introduction
2.2 Experimental materials
2.2.1 Solidification
2.2.2 Similarity of GaInSn liquid metal and Al-Si melt
2.3 Experimental setup
2.3.1 Solidification
2.3.2 Flow measurements
2.3.3 External energy fields
2.4 Experimental procedure
2.4.1 Solidification
2.4.2 Flow measurements
2.5 Metallography
2.6 Numerical method
2.6.1 Numerical model
2.6.2 Numerical domain and boundary conditions
3. Role of electric currents effects in the grain refinement
3.1 Introduction
3.2 Experimental parameter
3.3 Results
3.3.1 Solidified structure
3.3.2 Forced melt flow
3.3.3 Temperature distribution
3.4 Discussion
3.5 Conclusions
4. Grain refinement mechanism driven by electric currents
4.1 Introduction
4.2 Experimental parameter
4.3 Results
4.3.1 Solidified structure of Al-Si alloy
4.3.2 Cooling curves of Al-Si alloy
4.3.3 Solidified structure of high purity aluminum
4.4 Discussion
4.5 Conclusions
5. Supplemental investigations
5.1 A potential approach to improve the grain refinement
5.1.1 Introduction
5.1.2 Experimental parameter
5.1.3 Results and discussion
5.2 Macrosegregation formation
5.2.1 Introduction
5.2.2 Experimental parameter
5.2.3 Results and discussion
5.3 Grain refinement driven by TMF
5.3.1 Introduction
5.3.2 Experimental parameter
5.3.3 Results and discussion
5.4 Conclusions
6. Summary
Bibliography
|
410 |
Interactive mapping specification and repairing in the presence of policy views / Spécification et réparation interactive de mappings en présence de polices de sécurité. Comignani, Ugo, 19 September 2019 (has links)
Data exchange between sources over heterogeneous schemas is an ever-growing field of study, with the increased availability of data, oftentimes in open access, and the pooling of such data for data mining or learning purposes. However, describing the data exchange process from a source instance to a target instance defined over a different schema is a cumbersome task, even for users acquainted with data exchange. In this thesis, we address the problem of allowing a non-expert user to specify a source-to-target mapping, and the problem of ensuring that the specified mapping does not leak information forbidden by the security policies defined over the source. To do so, we first provide an interactive process in which users give small examples of their data and answer simple boolean questions in order to specify their intended mapping. We then provide another process to rewrite this mapping so as to ensure its safety with respect to the source policy views. The first main contribution of this thesis is thus a formal definition of the problem of interactive mapping specification, together with a formal resolution process for which desirable properties are proved. Based on this formal resolution process, practical algorithms are then provided. The approach behind these algorithms aims at reducing the number of boolean questions users have to answer by using quasi-lattice structures to order the set of possible mappings to explore, allowing an efficient pruning of the space of explored mappings. To improve this pruning, an extension of the approach to the use of integrity constraints is also provided. The second main contribution is a repairing process ensuring that a mapping is "safe" with respect to a set of policy views defined on its source schema, i.e., that it does not leak sensitive information. A privacy-preservation protocol is provided to visualize the information leaks of a mapping, along with a process to rewrite an input mapping into a safe one with respect to a set of policy views. As with the first contribution, this process comes with proofs of desirable properties. To reduce the number of interactions needed with the user, the interactive part of the repairing process is also enriched with the ability to learn which rewritings users prefer, making a completely automatic process possible. Last but not least, all the algorithms described in this thesis have been prototyped, and the experiments performed on these open-source prototypes are presented in this thesis.
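The pruning idea behind ordering candidate mappings can be sketched in miniature: if candidates are ordered by inclusion and one is rejected, its entire up-set can be discarded without asking the user any further question about it. This is a hypothetical toy (candidates as sets of letters, the user replaced by a fixed oracle), not the thesis's actual quasi-lattice algorithm.

```python
from itertools import combinations

def prune_candidates(atoms, reject):
    """Explore candidate mappings (sets of atoms) in increasing size.
    A rejected candidate prunes its entire up-set: any superset would
    produce the same unwanted tuples, so it is never asked about.
    `reject` stands in for the user's boolean answer."""
    rejected = []
    survivors = []
    for k in range(1, len(atoms) + 1):
        for cand in map(frozenset, combinations(atoms, k)):
            if any(bad <= cand for bad in rejected):
                continue  # pruned without asking a question
            if reject(cand):
                rejected.append(cand)
            else:
                survivors.append(cand)
    return survivors

# Hypothetical oracle: any mapping using atom "c" produces an unwanted tuple.
ok = prune_candidates("abc", lambda s: "c" in s)
print(sorted("".join(sorted(s)) for s in ok))  # ['a', 'ab', 'b']
```

Of the seven non-empty candidates, only four ever reach the oracle here; the three supersets of the rejected {c} are pruned silently, which is the kind of saving the inclusion ordering is meant to deliver.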
|