121

Modélisation de scènes urbaines à partir de données aériennes / Urban scene modeling from airborne data

Verdie, Yannick 15 October 2013 (has links)
L'analyse et la reconstruction automatique de scène urbaine 3D est un problème fondamental dans le domaine de la vision par ordinateur et du traitement numérique de la géométrie. Cette thèse présente des méthodologies pour résoudre le problème complexe de la reconstruction d'éléments urbains en 3D à partir de données aériennes Lidar ou bien de maillages générés par imagerie Multi-View Stereo (MVS). Nos approches génèrent une représentation précise et compacte sous la forme d'un maillage 3D comportant une sémantique de l'espace urbain. Deux étapes sont nécessaires : une identification des différents éléments de la scène urbaine, et une modélisation des éléments sous la forme d'un maillage 3D. Le Chapitre 2 présente deux méthodes de classification des éléments urbains en classes d'intérêt permettant d'obtenir une compréhension approfondie de la scène urbaine, et d'élaborer différentes stratégies de reconstruction suivant le type d'éléments urbains. Cette idée, consistant à insérer à la fois une information sémantique et géométrique dans les scènes urbaines, est présentée en détail et validée à travers des expériences. Le Chapitre 3 présente une approche pour détecter les éléments de 'Végétation' inclus dans des données Lidar, reposant sur les processus ponctuels marqués combinés avec une nouvelle méthode d'optimisation. Le Chapitre 4 décrit des approches de maillage 3D pour les 'Bâtiments', à partir de données Lidar comme de données MVS. Des expériences sur des structures urbaines vastes et complexes montrent les bonnes performances de nos systèmes. / Analysis and 3D reconstruction of urban scenes from physical measurements is a fundamental problem in computer vision and geometry processing. Over the last decades, an important demand has arisen for automatic methods generating representations of urban scenes.
This thesis investigates the design of pipelines for solving the complex problem of reconstructing 3D urban elements from either aerial Lidar data or Multi-View Stereo (MVS) meshes. Our approaches generate accurate and compact mesh representations enriched with urban-related semantic labeling. In urban scene reconstruction, two important steps are necessary: an identification of the different elements of the scenes, and a representation of these elements with 3D meshes. Chapter 2 presents two classification methods which yield a segmentation of the scene into semantic classes of interest. The benefit is twofold. First, this brings awareness of the scene for better understanding. Second, different reconstruction strategies are adopted for each type of urban element. Our idea of inserting both semantic and structural information within urban scenes is discussed and validated through experiments. In Chapter 3, a top-down approach to detect 'Vegetation' elements from Lidar data is proposed, using Marked Point Processes and a novel optimization method. In Chapter 4, bottom-up approaches are presented, reconstructing 'Building' elements from Lidar data and from MVS meshes. Experiments on complex urban structures illustrate the robustness and scalability of our systems.
122

A New AC-Radio Frequency Heating Calorimetry Technique for Complex Fluids

Barjami, Saimir 28 April 2005 (has links)
We have developed a new modulation calorimetry technique using RF-field heating. This technique eliminates temperature gradients across the sample, leading to higher precision in evaluating the heat capacity than previous techniques. A frequency scan was carried out on an 8CB+aerosil sample, showing a wide plateau that indicates the region of frequency-independent heat capacity. A temperature scan was then performed through the first-order nematic to isotropic and second-order smectic-A to nematic transitions and was shown to be consistent with previous work. The amplitude of the RF heating power applied to the sample depends on the permittivity and the loss factor of the sample. Since the permittivity of a dielectric material such as a liquid crystal has a strong temperature dependence, new information is obtained. The heat capacity measurements, with a relative resolution better than 0.06%, and the phase-shift measurements, with a resolution of 0.03%, represent significant improvements over traditional heating methods. We then applied this new RF calorimetry to bulk 8CB and 8CB+aerosil dispersions. For bulk 8CB, the step-like character of the smectic-A to nematic transition and the first-order nematic to isotropic transition indicated the strong dominance of the permittivity and loss factor of the material. For the 8CB+aerosil samples at different silica densities, our data were consistent with previous work and provide clear evidence for the coupling between the smectic-A and nematic phases. We have undertaken a combined temperature-dependent optical and calorimetric investigation of CCN47+aerosil samples through the I-N transition over a range of silica densities displaying the double I-N transition peak. This work offers compelling evidence that the I-N transition with weak quenched random disorder proceeds via a two-step process in which random-dilution is followed by random-field interactions on cooling from the isotropic phase, a previously unrecognized phenomenon.
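The basic relation behind AC calorimetry can be sketched in a few lines, assuming the standard result that on the frequency-independent plateau the temperature-oscillation amplitude is inversely proportional to the heat capacity, |T_ac| = P0/(2·omega·C) in one common convention (the prefactor depends on how the modulated heating power is defined). All numerical values below are hypothetical and only illustrate the inversion of that relation.

```python
import math

def heat_capacity(p0, omega, t_ac):
    """Invert the AC-calorimetry plateau relation |T_ac| = P0 / (2*omega*C)
    to estimate the heat capacity C from the measured oscillation amplitude.
    The factor 2 is convention-dependent (illustrative choice here)."""
    return p0 / (2.0 * omega * t_ac)

# Hypothetical numbers for illustration only.
p0 = 1.0e-3               # W, amplitude of the modulated RF heating power
omega = 2 * math.pi * 0.5  # rad/s, modulation angular frequency
t_ac = 5.0e-3             # K, measured temperature-oscillation amplitude
C = heat_capacity(p0, omega, t_ac)  # J/K, estimated heat capacity
```

In an actual frequency scan, one would verify that the product omega·|T_ac| is constant over the plateau before trusting this inversion.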
123

Localization and quality enhancement for automatic recognition of vehicle license plates in video sequences / Localisation et amélioration de qualité pour reconnaissance automatique de plaques d'immatriculation de véhicules dans les séquences vidéo.

Nguyen, Chu Duc 29 June 2011 (has links)
La lecture automatique de plaques d’immatriculation de véhicule est considérée comme une approche de surveillance de masse. Elle permet, grâce à la détection/localisation ainsi qu’à la reconnaissance optique, d’identifier un véhicule dans les images ou les séquences d’images. De nombreuses applications comme le suivi du trafic, la détection de véhicules volés, le télépéage ou la gestion d’entrées/sorties des parkings utilisent ce procédé. Or, malgré d’importants progrès enregistrés depuis l’apparition des premiers prototypes en 1979, accompagnés de taux de reconnaissance parfois impressionnants, notamment grâce aux avancées en recherche scientifique et en technologie des capteurs, les contraintes imposées pour le bon fonctionnement de tels systèmes en limitent la portée. En effet, l’utilisation optimale des techniques de localisation et de reconnaissance de plaque d’immatriculation dans les scénarii opérationnels nécessite des conditions d’éclairage contrôlées ainsi qu’une limitation de la pose, de la vitesse ou tout simplement du type de plaque. La lecture automatique de plaques d’immatriculation reste alors un problème de recherche ouvert. La contribution majeure de cette thèse est triple. D’abord, une nouvelle approche robuste de localisation de plaque d’immatriculation dans des images ou des séquences d’images est proposée. Puis, l’amélioration de la qualité des plaques localisées est traitée par une adaptation de technique de super-résolution. Finalement, un modèle unifié de localisation et de super-résolution est proposé, permettant de diminuer la complexité temporelle des deux approches combinées. / Automatic reading of vehicle license plates is considered an approach to mass surveillance. Through detection/localization and optical recognition, it allows a vehicle to be identified in images or video sequences. Many applications such as traffic monitoring, detection of stolen vehicles, electronic toll collection or the management of parking entrances and exits use this method.
Yet in spite of important progress made since the appearance of the first prototypes in 1979, with sometimes impressive recognition rates thanks to advances in research and sensor technology, the constraints imposed for the proper operation of such systems limit their scope. Indeed, the optimal use of license plate localization and recognition techniques in operational scenarios requires controlled lighting conditions and limits on pose, velocity, or simply plate type. Automatic reading of vehicle license plates therefore remains an open research problem. The major contribution of this thesis is threefold. First, a new robust approach to license plate localization in images or image sequences is proposed. Then, the quality enhancement of the localized plates is addressed by adapting a super-resolution technique. Finally, a unified model of localization and super-resolution is proposed to reduce the time complexity of the two combined approaches.
124

Non-deterministic analysis of slope stability based on numerical simulation

Shen, Hong 02 October 2012 (has links) (PDF)
In geotechnical engineering, uncertainties such as the variability inherent in geotechnical properties have attracted more and more attention from researchers and engineers. They have found that a single “Factor of Safety” calculated by traditional deterministic analysis methods cannot represent slope stability exactly. Recently, in order to provide a more rational mathematical framework for incorporating different types of uncertainties in slope stability estimation, reliability analyses and non-deterministic methods, which include probabilistic and non-probabilistic (imprecise) methods, have been applied widely. In short, slope non-deterministic analysis combines probabilistic or non-probabilistic analysis with deterministic slope stability analysis. It cannot be regarded as a completely new slope stability analysis method, but rather as an extension of deterministic analysis, and the failure probability it yields complements the safety factor. Therefore, the accuracy of a non-deterministic analysis depends not only on selecting a suitable probabilistic or non-probabilistic analysis method, but also on adopting a rigorous deterministic analysis method and geological model. In this thesis, reliability concepts are reviewed first, and some typical non-deterministic methods, including Monte Carlo Simulation (MCS), the First Order Reliability Method (FORM), the Point Estimate Method (PEM) and Random Set Theory (RST), are described and successfully applied to slope stability analysis based on a numerical simulation method, the Strength Reduction Method (SRM). All of the computations were performed in the commercial finite difference code FLAC and the distinct element code UDEC.
First, as the foundation of slope reliability analysis, the deterministic numerical simulation method was improved. This method has higher accuracy than conventional limit equilibrium methods, because the constitutive relationship of the soil is considered and fewer assumptions on the boundary conditions of the slope model are necessary. However, the construction of slope numerical models, particularly large and complicated ones, has always been very difficult and has become an obstacle to the application of numerical simulation. In this study, the powerful spatial analysis functions of Geographic Information System (GIS) software were introduced to assist numerical modeling of the slope. In the modeling process, the topographic map of the slope was gridded using GIS software, and the GIS data were then transferred into FLAC through the program's built-in language FISH. Finally, the feasibility and high efficiency of this technique were illustrated through a case study, the Xuecheng slope, for which both 2D and 3D models were investigated.
Subsequently, the three most widely used probabilistic analysis methods, Monte Carlo Simulation, the First Order Reliability Method and the Point Estimate Method, were studied in combination with the Strength Reduction Method. Monte Carlo Simulation, which requires thousands of repeated deterministic analyses, is the most accurate probabilistic method. However, it is too time-consuming for practical applications, especially when combined with numerical simulation. To reduce the computational effort, a simplified Monte Carlo Simulation-Strength Reduction Method (MCS-SRM) was developed in this study. This method first estimates the probable failure of the slope and calculates the mean value of the safety factor from the soil parameters, and then calculates the variance of the safety factor and the reliability of the slope according to an assumed probability density function of the safety factor. Case studies have confirmed that this method can reduce the computation time by about 4/5 compared with the traditional MCS-SRM while maintaining almost the same accuracy. The First Order Reliability Method is an approximate method based on the Taylor series expansion of the performance function. A closed-form solution of the partial derivatives of the performance function is needed to calculate the mean and standard deviation of the safety factor. However, there is no explicit performance function in a numerical simulation, so in this study the derivatives were approximated by equivalent difference quotients. The Point Estimate Method is also an approximate method, involving even fewer calculations than FORM; in the present study it was integrated with the Strength Reduction Method directly.
Another important observation concerns the correlation between the soil parameters cohesion and friction angle. Some authors have found a negative correlation between the cohesion and friction angle of soil on the basis of experimental data, yet few slope probabilistic studies in the literature consider this negative correlation. In this thesis, the influence of this correlation on the slope probability of failure was investigated based on numerical simulation. It was found that considering a negative correlation between the cohesion and friction angle of the soil reduces the variability of the safety factor and the failure probability of the slope, thus increasing the reliability of the results. Besides being inter-correlated, soil parameters are also auto-correlated in space, which is described as spatial variability. Because knowledge of this characteristic is rather limited in the literature, it is ignored by most researchers and engineers in geotechnical engineering. In this thesis, the random field method was introduced into the slope numerical simulation to represent the spatial variability structure, and a numerical procedure for probabilistic slope stability analysis based on Monte Carlo simulation was presented. Soil properties such as cohesion and friction angle were discretized to continuous random fields based on the local averaging method. In the case study, both stationary and non-stationary random fields were investigated, and the influence of the spatial variability and of the averaging domain on the convergence of the numerical simulation and on the probability of failure was studied.
In a rock medium, the structural discontinuities have a very important influence on slope stability, and the rock mass can be modeled as a combination of rigid or deformable blocks with joints in the distinct element method. Therefore, many more input parameters, such as the strength of the joints, are required for the rock slope model, which increases the uncertainty of the numerical results. Furthermore, because of the limitations of current laboratory and in-situ tests, exact values of the geotechnical parameters of rock material, and even the probability distributions of these variables, are often lacking. Most of the time, engineers can only estimate intervals for these variables from limited tests or expert experience. In this study, to assess the reliability of rock slopes, a Random Set Distinct Element Method (RS-DEM) was developed by coupling Random Set Theory with the Distinct Element Method, and applied to a rock slope in Sichuan Province, China.
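The Monte Carlo loop described above can be sketched in a few lines. This is an illustrative stand-in, not the thesis's FLAC/SRM pipeline: the strength-reduction run is replaced by a hypothetical closed-form infinite-slope safety factor, and the negative cohesion-friction correlation is drawn from an assumed bivariate normal distribution; all parameter values are invented for the example.

```python
import math
import random

def safety_factor(cohesion, tan_phi):
    # Stand-in for the SRM/FLAC computation: infinite-slope formula
    # F = (c + gamma*h*cos^2(beta)*tan(phi)) / (gamma*h*sin(beta)*cos(beta)).
    gamma, h, beta = 18.0, 5.0, math.radians(30)  # hypothetical slope data
    num = cohesion + gamma * h * math.cos(beta) ** 2 * tan_phi
    den = gamma * h * math.sin(beta) * math.cos(beta)
    return num / den

def correlated_normals(mu1, sd1, mu2, sd2, rho, rng):
    # Draw (c, tan(phi)) with correlation rho via a Cholesky-style transform.
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1 = mu1 + sd1 * z1
    x2 = mu2 + sd2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return x1, x2

def failure_probability(n=20000, rho=-0.5, seed=1):
    # Classic MCS estimator: fraction of samples with F < 1.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c, tan_phi = correlated_normals(10.0, 2.0,
                                        math.tan(math.radians(25)), 0.08,
                                        rho, rng)
        if safety_factor(c, tan_phi) < 1.0:
            failures += 1
    return failures / n

pf = failure_probability(n=5000)  # estimated probability of failure
```

In the thesis, each call to the deterministic model is a full numerical strength-reduction run, which is exactly why the simplified MCS-SRM that avoids most of those runs matters.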
125

Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie / Contribution to protein design tools : counting methods and algorithms

Viricel, Clement 18 December 2017 (has links)
Cette thèse porte sur deux sujets intrinsèquement liés : le calcul de la constante de normalisation d’un champ de Markov et l’estimation de l’affinité de liaison d’un complexe de protéines. Premièrement, afin d’aborder ce problème de comptage #P-complet, nous avons développé Z*, basé sur un élagage des quantités de potentiels négligeables. Il s’est montré plus performant que des méthodes de l’état de l’art sur des instances issues d’interactions protéine-protéine. Par la suite, nous avons développé #HBFS, un algorithme avec une garantie anytime, qui s’est révélé plus performant que son prédécesseur. Enfin, nous avons développé BTDZ, un algorithme exact basé sur une décomposition arborescente, qui a fait ses preuves sur des instances issues d’interactions intermoléculaires appelées « superhélices ». Ces algorithmes s’appuient sur des méthodes issues des modèles graphiques : cohérences locales, élimination de variables et décompositions arborescentes. À l’aide de méthodes d’optimisation existantes, de Z* et des fonctions d’énergie de Rosetta, nous avons développé un logiciel open source estimant la constante d’affinité d’un complexe protéine-protéine sur une librairie de mutants. Nous avons analysé nos estimations sur un jeu de données de complexes de protéines et nous les avons confrontées à deux approches de l’état de l’art. Il en est ressorti que notre outil était qualitativement meilleur que ces méthodes. / This thesis is focused on two intrinsically related subjects: the computation of the normalizing constant of a Markov random field and the estimation of the binding affinity of protein-protein interactions. First, to tackle this #P-complete counting problem, we developed Z*, based on the pruning of negligible potential quantities. It has been shown to be more efficient than various state-of-the-art methods on instances derived from protein-protein interaction models.
Then, we developed #HBFS, an anytime guaranteed counting algorithm which proved to be even better than its predecessor. Finally, we developed BTDZ, an exact algorithm based on tree decomposition. BTDZ has already proven its efficiency on instances from coiled-coil protein interactions. These algorithms all rely on methods stemming from graphical models: local consistencies, variable elimination and tree decomposition. With the help of existing optimization algorithms, Z* and Rosetta energy functions, we developed a package that estimates the binding affinity of a set of mutants in a protein-protein interaction. We statistically analyzed our estimation on a database of binding affinities and compared it with state-of-the-art methods. It appears that our software is qualitatively better than these methods.
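What Z*, #HBFS and BTDZ compute can be illustrated on a toy case. The sketch below evaluates the normalizing constant Z of a tiny, entirely hypothetical pairwise Markov random field by exhaustive enumeration; the thesis's algorithms exist precisely because this brute-force sum is intractable (#P-complete) for realistic instances.

```python
import itertools
import math

# A tiny pairwise Markov random field: 3 binary variables, hypothetical
# unary and pairwise energies. Z = sum over all assignments x of exp(-E(x)).
unary = {0: [0.0, 1.2], 1: [0.3, 0.0], 2: [0.5, 0.5]}
pair = {(0, 1): [[0.0, 1.0], [1.0, 0.0]],
        (1, 2): [[0.2, 0.9], [0.9, 0.2]]}

def energy(x):
    """Total energy of assignment x: sum of unary and pairwise potentials."""
    e = sum(unary[i][x[i]] for i in unary)
    e += sum(pot[x[i]][x[j]] for (i, j), pot in pair.items())
    return e

def partition_function():
    # Exhaustive enumeration over all 2^3 assignments: exponential in the
    # number of variables, which is why pruning (Z*) and tree
    # decomposition (BTDZ) are needed on real protein instances.
    return sum(math.exp(-energy(x)) for x in itertools.product([0, 1], repeat=3))

Z = partition_function()
```

With Z in hand, any assignment's probability is exp(-E(x))/Z, which is what makes the constant central to affinity estimation over an ensemble of conformations.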
126

Non-parametric synthesis of volumetric textures from a 2D sample / Méthodes non-paramétriques pour la synthèse de textures volumiques à partir d’un exemple 2D

Urs, Radu Dragos 29 March 2013 (has links)
Ce mémoire traite de la synthèse de textures volumiques anisotropes à partir d’une observation 2D unique. Nous présentons différentes variantes d’algorithmes non paramétriques et multi-échelles. Leur principale particularité réside dans le fait que le processus de synthèse 3D s’appuie sur l’échantillonnage d’une seule image 2D d’entrée, en garantissant la cohérence selon les différentes vues de la texture 3D. Deux catégories d’approches sont abordées, toutes deux multi-échelles et basées sur une hypothèse markovienne. La première catégorie regroupe un ensemble d’algorithmes dits de recherche de voisinages fixes, adaptés d’algorithmes existants de synthèse de textures volumiques à partir de sources 2D multiples. Le principe consiste, à partir d’une initialisation aléatoire, à modifier les voxels un par un, de façon déterministe, en s’assurant que les configurations locales de niveaux de gris sur des tranches orthogonales contenant le voxel sont semblables à des configurations présentes sur l’image d’entrée. La deuxième catégorie relève d’une approche probabiliste originale dont l’objectif est de reproduire, sur le volume texturé, les interactions entre pixels estimées sur l’image d’entrée. L’estimation est réalisée de façon non paramétrique par fenêtrage de Parzen. L’optimisation est gérée voxel par voxel, par un algorithme déterministe de type ICM. Différentes variantes sont proposées, relatives aux stratégies de gestion simultanée des tranches orthogonales contenant le voxel. Ces différentes méthodes sont d’abord mises en œuvre pour la synthèse d’un jeu de textures structurées, de régularité et d’anisotropie variées. Une analyse comparée et une étude de sensibilité sont menées, mettant en évidence les atouts et faiblesses des différentes approches. Enfin, elles sont appliquées à la simulation de textures volumiques de matériaux composites carbonés, à partir de clichés obtenus à l’échelle nanométrique par microscopie électronique à transmission.
Le schéma expérimental proposé permet d’évaluer quantitativement et de façon objective les performances des différentes méthodes. / This thesis deals with the synthesis of anisotropic volumetric textures from a single 2D observation. We present variants of non-parametric and multi-scale algorithms. Their main specificity lies in the fact that the 3D synthesis process relies on the sampling of a single 2D input sample, ensuring consistency across the different views of the 3D texture. Two types of approaches are investigated, both multi-scale and based on a Markovian hypothesis. The first category brings together a set of algorithms based on fixed-neighbourhood search, adapted from existing algorithms for texture synthesis from multiple 2D sources. The principle is that, starting from a random initialisation, the 3D texture is modified voxel by voxel, in a deterministic manner, ensuring that the local grey-level configurations on orthogonal slices containing the voxel are similar to configurations present in the input image. The second category is an original probabilistic approach which aims at reproducing in the textured volume the interactions between pixels learned from the input image. The learning is done by non-parametric Parzen windowing. Optimization is handled voxel by voxel by a deterministic ICM-type algorithm. Several variants are proposed regarding the strategies used for the simultaneous handling of the orthogonal slices containing the voxel. These synthesis methods are first applied to a set of structured textures of varied regularity and anisotropy. A comparative study and a sensitivity analysis are carried out, highlighting the strengths and weaknesses of the different algorithms. Finally, they are applied to the simulation of volumetric textures of carbon composite materials, from nanometric-scale snapshots obtained by transmission electron microscopy.
The proposed experimental benchmark allows the performances of the different methods to be evaluated quantitatively and objectively.
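The non-parametric Parzen windowing mentioned above can be sketched in one dimension. This is a generic Gaussian-kernel density estimate over hypothetical grey-level samples, not the thesis's multi-dimensional pixel-interaction model; the bandwidth h and the sample values are assumptions for the example.

```python
import math

def parzen_density(samples, x, h=0.1):
    """Parzen-window (kernel) density estimate with a Gaussian kernel:
    p(x) = (1/n) * sum_i N(x; x_i, h^2)."""
    n = len(samples)
    norm = 1.0 / (h * math.sqrt(2 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in samples) / n

# Grey-level samples from a hypothetical input slice, normalized to [0, 1].
samples = [0.12, 0.15, 0.14, 0.80, 0.82, 0.79]
p_low = parzen_density(samples, 0.13)  # near the first (dark) mode
p_mid = parzen_density(samples, 0.5)   # between the two modes
```

The bimodal samples yield a much higher density near either mode than between them; in the texture-synthesis setting, an ICM step would move each voxel toward grey-level configurations of high estimated density.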
127

DSA Image Registration And Respiratory Motion Tracking Using Probabilistic Graphical Models

Sundarapandian, Manivannan January 2016 (has links) (PDF)
This thesis addresses three problems related to image registration, prediction and tracking, applied to angiography and oncology. Various probabilistic models are employed to characterize the image deformations, target motions and state estimations. (i) In Digital Subtraction Angiography (DSA), a high-quality visualization of the blood motion in the vessels is essential in both diagnostic and interventional applications. In order to reduce the inherent movement artifacts in DSA, non-rigid image registration is used before subtracting the mask from the contrast image. DSA image registration is a challenging problem, as it requires non-rigid matching across spatially non-uniform control points at high speed. We model the problem of sub-pixel matching as a labeling problem on a non-uniform Markov Random Field (MRF). We use quad-trees in a novel way to generate the non-uniform grid structure and optimize the registration cost using the graph-cuts technique. The MRF formulation produces a smooth displacement field which results in better artifact reduction than the conventional approach of independently registering the control points. The above approach is further improved using two models. First, we introduce the concept of pivotal and non-pivotal control points. 'Pivotal control points' are nodes in the Markov network that are close to the edges in the mask image, while 'non-pivotal control points' are identified in soft tissue regions. This model leads to a novel MRF framework and energy formulation. Next, we propose a Gaussian MRF model and solve the energy minimization problem for sub-pixel DSA registration using Random Walker (RW). An incremental registration approach is developed using the quad-tree based MRF structure and RW, wherein the density of control points is hierarchically increased at each level M depending on the features to be used and the required accuracy.
A novel numbering scheme of the control points allows us to reuse the computations done at level M at level M + 1. Both models result in accelerated performance without compromising on artifact reduction. We have also provided a CUDA-based design of the algorithm and shown performance acceleration on a GPU. We have tested the approach using 25 clinical data sets and presented the results of quantitative analysis and clinical assessment. (ii) In External Beam Radiation Therapy (EBRT), in order to monitor the intra-fraction motion of thoracic and abdominal tumors, the lung diaphragm apex can be used as an internal marker. However, tracking the position of the apex from image-based observations is a challenging problem, as it undergoes both position and shape variation. We propose a novel approach for tracking the ipsilateral hemidiaphragm apex (IHDA) position on CBCT projection images. We model the diaphragm state as a spatiotemporal MRF and obtain the trace of the apex by solving an energy minimization problem through graph-cuts. We have tested the approach using 15 clinical data sets and found that it outperforms the conventional full-search method in terms of accuracy. We have provided a GPU-based heterogeneous implementation of the algorithm using CUDA to increase the viability of the approach for clinical use. (iii) In an adaptive radiotherapy system, irrespective of the methods used for target observation, there is an inherent latency in the beam control, as it involves mechanical movement and processing delays. Hence, predicting the target position while the beam is on target is essential to increase the control precision. We propose a novel prediction model (called the offset sine model) for the breathing pattern. We use IHDA positions (from CBCT images) as measurements and an Unscented Kalman Filter (UKF) for state estimation.
The results based on 15 clinical datasets show that the offset sine model outperforms the state-of-the-art LCM model in terms of prediction accuracy.
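The idea of an offset-sine breathing model with latency compensation can be sketched as follows. The functional form x(t) = offset + A·sin(2πt/T + φ) and every parameter value below are illustrative assumptions; the thesis estimates the model state online with an Unscented Kalman Filter rather than evaluating fixed parameters.

```python
import math

def offset_sine(t, offset, amplitude, period, phase):
    """Offset-sine breathing model (illustrative form):
    x(t) = offset + A * sin(2*pi*t/period + phase)."""
    return offset + amplitude * math.sin(2 * math.pi * t / period + phase)

def predict_ahead(t_now, latency, params):
    # Compensate the beam-control latency by evaluating the model at
    # t_now + latency instead of t_now.
    return offset_sine(t_now + latency, *params)

# Hypothetical parameters: mm offset, mm amplitude, s period, rad phase.
params = (10.0, 4.0, 4.5, 0.0)
x_now = offset_sine(2.0, *params)           # current apex position
x_pred = predict_ahead(2.0, 0.3, params)    # position 300 ms ahead
```

In a filtering setup, the UKF would continuously re-estimate (offset, amplitude, period, phase) from noisy IHDA measurements before each such look-ahead evaluation.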
128

Non-deterministic analysis of slope stability based on numerical simulation

Shen, Hong 29 June 2012 (has links)
In geotechnical engineering, the uncertainties such as the variability and uncertainty inherent in the geotechnical properties have caught more and more attentions from researchers and engineers. They have found that a single “Factor of Safety” calculated by traditional deterministic analyses methods can not represent the slope stability exactly. Recently in order to provide a more rational mathematical framework to incorporate different types of uncertainties in the slope stability estimation, reliability analyses and non-deterministic methods, which include probabilistic and non probabilistic (imprecise methods) methods, have been applied widely. In short, the slope non-deterministic analysis is to combine the probabilistic analysis or non probabilistic analysis with the deterministic slope stability analysis. It cannot be regarded as a completely new slope stability analysis method, but just an extension of the slope deterministic analysis. The slope failure probability calculated by slope non-deterministic analysis is a kind of complement of safety factor. Therefore, the accuracy of non deterministic analysis is not only depended on a suitable probabilistic or non probabilistic analysis method selected, but also on a more rigorous deterministic analysis method or geological model adopted. In this thesis, reliability concepts have been reviewed first, and some typical non-deterministic methods, including Monte Carlo Simulation (MCS), First Order Reliability Method (FORM), Point Estimate Method (PEM) and Random Set Theory (RSM), have been described and successfully applied to the slope stability analysis based on a numerical simulation method-Strength Reduction Method (SRM). All of the processes have been performed in a commercial finite difference code FLAC and a distinct element code UDEC. First of all, as the fundamental of slope reliability analysis, the deterministic numerical simulation method has been improved. 
This method has a higher accuracy than the conventional limit equilibrium methods, because of the reason that the constitutive relationship of soil is considered, and fewer assumptions on boundary conditions of slope model are necessary. However, the construction of slope numerical models, particularly for the large and complicated models has always been very difficult and it has become an obstacle for application of numerical simulation method. In this study, the excellent spatial analysis function of Geographic Information System (GIS) technique has been introduced to help numerical modeling of the slope. In the process of modeling, the topographic map of slope has been gridded using GIS software, and then the GIS data was transformed into FLAC smoothly through the program built-in language FISH. At last, the feasibility and high efficiency of this technique has been illustrated through a case study-Xuecheng slope, and both 2D and 3D models have been investigated. Subsequently, three most widely used probabilistic analyses methods, Monte Carlo Simulation, First Order Reliability Method and Point Estimate Method applied with Strength Reduction Method have been studied. Monte Carlo Simulation which needs to repeat thousands of deterministic analysis is the most accurate probabilistic method. However it is too time consuming for practical applications, especially when it is combined with numerical simulation method. For reducing the computation effort, a simplified Monte Carlo Simulation-Strength Reduction Method (MCS-SRM) has been developed in this study. This method has estimated the probable failure of slope and calculated the mean value of safety factor by means of soil parameters first, and then calculated the variance of safety factor and reliability of slope according to the assumed probability density function of safety factor. 
Case studies have confirmed that this method can reduce about 4/5 of time compared with traditional MCS-SRM, and maintain almost the same accuracy. First Order Reliability Method is an approximate method which is based on the Taylor\'s series expansion of performance function. The closed form solution of the partial derivatives of the performance function is necessary to calculate the mean and standard deviation of safety factor. However, there is no explicit performance function in numerical simulation method, so the derivative expressions have been replaced with equivalent difference quotients to solve the differential quotients approximately in this study. Point Estimate Method is also an approximate method involved even fewer calculations than FORM. In the present study, it has been integrated with Strength Reduction Method directly. Another important observation referred to the correlation between the soil parameters cohesion and friction angle. Some authors have found a negative correlation between cohesion and friction angle of soil on the basis of experimental data. However, few slope probabilistic studies are found to consider this negative correlation between soil parameters in literatures. In this thesis, the influence of this correlation on slope probability of failure has been investigated based on numerical simulation method. It was found that a negative correlation considered in the cohesion and friction angle of soil can reduce the variability of safety factor and failure probability of slope, thus increasing the reliability of results. Besides inter-correlation of soil parameters, these are always auto-correlated in space, which is described as spatial variability. For the reason that knowledge on this character is rather limited in literature, it is ignored in geotechnical engineering by most researchers and engineers. 
In this thesis, the random field method was introduced into slope numerical simulation to represent this spatial variability structure, and a numerical procedure for probabilistic slope stability analysis based on Monte Carlo simulation was presented. Soil properties such as cohesion and friction angle were discretized into continuous random fields based on the local averaging method. In the case study, both stationary and non-stationary random fields were investigated, and the influence of spatial variability and of the averaging domain on the convergence of the numerical simulation and on the probability of failure was studied. In rock media, structural planes have a very important influence on slope stability, and in the distinct element method the rock mass can be modelled as a combination of rigid or deformable blocks with joints. Therefore, many more input parameters, such as joint strengths, are required for a rock slope model, which increases the uncertainty of the numerical results. Furthermore, because of the limitations of current laboratory and in-situ tests, exact values of the geotechnical parameters of rock material, and even the probability distributions of these variables, are often lacking. Most of the time, engineers can only estimate intervals for these variables from limited tests or expert experience. In this study, to assess the reliability of rock slopes, a Random Set Distinct Element Method (RS-DEM) was developed by coupling Random Set Theory with the Distinct Element Method, and it was applied to a rock slope in Sichuan province, China.
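A spatially auto-correlated soil property can be sampled by factorising an assumed autocorrelation matrix, as sketched below for a one-dimensional stationary Gaussian field of cohesion. The Markov (exponential) correlation model, the scale of fluctuation and the cohesion statistics are illustrative assumptions, not values from the thesis, which uses the local averaging method rather than direct Cholesky sampling.

```python
import math
import random

def exponential_correlation(n, dx, theta):
    # Markov model: R_ij = exp(-2*|x_i - x_j| / theta), with theta the
    # scale of fluctuation -- a common autocorrelation choice for soils.
    return [[math.exp(-2.0 * abs(i - j) * dx / theta)
             for j in range(n)] for i in range(n)]

def cholesky(a):
    # Plain Cholesky factorisation a = L L^T, stdlib only (small matrices).
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def cohesion_field(n=20, dx=1.0, theta=5.0, mu=10.0, sd=2.0, seed=1):
    # One realisation of a stationary Gaussian random field: correlate
    # independent standard normals through the Cholesky factor.
    rng = random.Random(seed)
    L = cholesky(exponential_correlation(n, dx, theta))
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [mu + sd * sum(L[i][k] * z[k] for k in range(i + 1))
            for i in range(n)]

field = cohesion_field()
```

Feeding one such realisation per Monte Carlo sample into the deterministic slope model is the basic loop of the probabilistic procedure; neighbouring grid values vary smoothly rather than independently, which is exactly the effect spatial variability has on the computed probability of failure.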
129

Numerical investigations on the uniaxial tensile behaviour of Textile Reinforced Concrete / Numerische Untersuchungen zum einaxialen Zugtragverhalten von Textilbeton

Hartig, Jens 25 March 2011 (has links) (PDF)
In the present work, the load-bearing behaviour of Textile Reinforced Concrete (TRC), a composite of a fine-grained concrete matrix and a reinforcement of high-performance fibres processed into textiles, under uniaxial tensile loading was investigated based on numerical simulations. The investigations focus on reinforcement of multi-filament yarns of alkali-resistant glass. When embedded in concrete, these yarns are not entirely penetrated by the cementitious matrix, which, together with the heterogeneity of the concrete and the yarns, leads to a complex load-bearing and failure behaviour of the composite. The main objective of the work was the theoretical investigation of effects in the load-bearing behaviour of TRC that cannot be explained solely by the available experimental results. Therefore, a model was developed that can describe the tensile behaviour of TRC in different experimental test setups with a unified approach. Neglecting Poisson's effects, a one-dimensional model was established and implemented within the framework of the Finite Element Method. Nevertheless, the model also takes transverse effects into account through a subdivision of the reinforcement yarns into so-called segments. The model incorporates two types of finite elements: bar and bond elements. In the longitudinal direction, the bar elements are arranged in series to represent the load-bearing behaviour of the matrix or the reinforcement. In the transverse direction, these bar element chains are connected with bond elements. The model gains most of its complexity from non-linearities arising from the constitutive relations, e.g., the limited tensile strength of concrete and reinforcement, tension softening of the concrete, waviness of the reinforcement and non-linear bond laws. Besides a deterministic description of the material behaviour, a stochastic formulation based on a random field approach was also introduced in the model. 
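The bar-and-bond-element idea can be sketched for the simplest configuration, a single filament pulled out of a matrix. The sketch below simplifies further than the thesis model: the matrix is taken as rigid (so only one bar-element chain remains), bond behaviour is linear rather than governed by a non-linear bond law, and all stiffness and load values are arbitrary illustrative numbers.

```python
def solve(a, b):
    # Naive Gaussian elimination with partial pivoting (small systems only).
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / a[r][r]
    return x

def filament_pullout(n_el=10, length=10.0, EA=5.0, k_bond=0.8, load=1.0):
    # Chain of linear bar elements for the filament; each node is tied to
    # a rigid matrix by a linear bond spring (shear-lag simplification).
    n = n_el + 1
    h = length / n_el
    k_bar = EA / h
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_el):  # assemble bar element stiffness contributions
        K[e][e] += k_bar
        K[e + 1][e + 1] += k_bar
        K[e][e + 1] -= k_bar
        K[e + 1][e] -= k_bar
    for i in range(n):
        K[i][i] += k_bond * h  # lumped bond stiffness per node
    f = [0.0] * n
    f[-1] = load               # pull-out force at the loaded end
    return solve(K, f)

u = filament_pullout()
```

The solution shows the classic shear-lag picture: the displacement is largest at the loaded end and decays along the embedded length. The full model of the thesis replaces the rigid matrix with a second, deformable bar-element chain, adds further chains for the yarn segments, and makes both the bar and bond constitutive relations non-linear.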
The model has a number of advantageous features, which are provided in this combination only by a few of the existing models for TRC. It provides stress distributions in the reinforcement and the concrete as well as properties of concrete crack development, such as crack spacing and crack widths, which in some existing models are input parameters rather than simulation results. Moreover, the successive failure of the reinforcement can be studied with the model. The model was applied to three types of tests: the filament pull-out test, the yarn pull-out test and tensile tests with multiple concrete cracking. The simulation results for the filament pull-out tests showed good correspondence with experimental data. Parametric studies were performed to investigate the influence of geometrical properties in these tests, such as the embedded and free lengths of the filament, as well as the bond properties between filament and matrix. The presented simulations of yarn pull-out tests demonstrated the applicability of the model to this type of test. It was shown that a relatively fine subdivision of the reinforcement is necessary to represent the successive failure of the reinforcement yarns appropriately. The results showed that the model can provide the distribution of failure positions in the reinforcement and the development of yarn degradation during loading. One of the main objectives of the work was to investigate effects in the tensile material behaviour of TRC that hitherto could not be explained solely on the basis of experimental results. Hence, a large number of parametric studies was performed on tensile tests with multiple concrete cracking, which reflect the tensile behaviour of TRC as it occurs in practice. 
The results of the simulations showed that the model is able to reproduce the typical tripartite stress-strain response of TRC consisting of the uncracked state, the state of multiple matrix cracking and the post-cracking state as known from experimental investigations. The best agreement between simulated and experimental results was achieved considering scatter in the material properties of concrete as well as concrete tension softening and reinforcement waviness. / Die vorliegende Arbeit beschäftigt sich mit Untersuchungen zum einaxialen Zugtragverhalten von Textilbeton. Textilbeton ist ein Verbundwerkstoff bestehend aus einer Matrix aus Feinbeton und einer Bewehrung aus Multifilamentgarnen aus Hochleistungsfasern, welche zu textilen Strukturen verarbeitet sind. Die Untersuchungen konzentrieren sich auf Bewehrungen aus alkali-resistentem Glas. Das Tragverhalten des Verbundwerkstoffs ist komplex, was aus der Heterogenität der Matrix und der Garne sowie der unvollständigen Durchdringung der Garne mit Matrix resultiert. Das Hauptziel der Arbeit ist die theoretische Untersuchung von Effekten und Mechanismen innerhalb des Lastabtragverhaltens von Textilbeton, welche nicht vollständig anhand verfügbarer experimenteller Ergebnisse erklärt werden können. Das entsprechende Modell zur Beschreibung des Zugtragverhaltens von Textilbeton soll verschiedene experimentelle Versuchstypen mit einem einheitlichen Modell abbilden können. Unter Vernachlässigung von Querdehneffekten wurde ein eindimensionales Modell entwickelt und im Rahmen der Finite-Elemente-Methode numerisch implementiert. Es werden jedoch auch Lastabtragmechanismen in Querrichtung durch eine Unterteilung der Bewehrungsgarne in sogenannte Segmente berücksichtigt. Das Modell enthält zwei Typen von finiten Elementen: Stabelemente und Verbundelemente. In Längsrichtung werden Stabelemente kettenförmig angeordnet, um das Tragverhalten von Matrix und Bewehrung abzubilden. 
In Querrichtung sind die Stabelementketten mit Verbundelementen gekoppelt. Das Modell erhält seine Komplexität hauptsächlich aus Nichtlinearitäten in der Materialbeschreibung, z.B. durch begrenzte Zugfestigkeiten von Matrix und Bewehrung, Zugentfestigung der Matrix, Welligkeit der Bewehrung und nichtlineare Verbundgesetze. Neben einer deterministischen Beschreibung des Materialverhaltens beinhaltet das Modell auch eine stochastische Beschreibung auf Grundlage eines Zufallsfeldansatzes. Mit dem Modell können Spannungsverteilungen im Verbundwerkstoff und Eigenschaften der Betonrissentwicklung, z.B. in Form von Rissbreiten und Rissabständen untersucht werden, was in dieser Kombination nur mit wenigen der existierenden Modelle für Textilbeton möglich ist. In vielen der vorhandenen Modelle sind diese Eigenschaften Eingangsgrößen für die Berechnungen und keine Ergebnisse. Darüber hinaus kann anhand des Modells auch das sukzessive Versagen der Bewehrungsgarne studiert werden. Das Modell wurde auf drei verschiedene Versuchstypen angewendet: den Filamentauszugversuch, den Garnauszugversuch und Dehnkörperversuche. Die Berechnungsergebnisse zu den Filamentauszugversuchen zeigten eine gute Übereinstimmung mit experimentellen Resultaten. Zudem wurden Parameterstudien durchgeführt, um Einflüsse aus Geometrieeigenschaften wie der eingebetteten und freien Filamentlänge sowie Materialeigenschaften wie dem Verbund zwischen Matrix und Filament zu untersuchen. Die Berechnungsergebnisse zum Garnauszugversuch demonstrierten die Anwendbarkeit des Modells auf diesen Versuchstyp. Es wurde gezeigt, dass für eine realitätsnahe Abbildung des Versagensverhaltens der Bewehrungsgarne eine relativ feine Auflösung der Bewehrung notwendig ist. Die Berechnungen lieferten die Verteilung von Versagenspositionen in der Bewehrung und die Entwicklung der Degradation der Garne im Belastungsverlauf. 
Ein Hauptziel der Arbeit war die Untersuchung von Effekten im Zugtragverhalten von Textilbeton, die bisher nicht durch experimentelle Untersuchungen erklärt werden konnten. Daher wurde eine Vielzahl von Parameterstudien zu Dehnkörpern mit mehrfacher Matrixrissbildung, welche das Zugtragverhalten von Textilbeton ähnlich praktischen Anwendungen abbilden, durchgeführt. Die Berechnungsergebnisse zeigten, dass der experimentell beobachtete dreigeteilte Verlauf der Spannungs-Dehnungs-Beziehung von Textilbeton bestehend aus dem ungerissenen Zustand, dem Zustand der Matrixrissbildung und dem Zustand der abgeschlossenen Rissbildung vom Modell wiedergegeben wird. Die beste Übereinstimmung zwischen berechneten und experimentellen Ergebnissen ergab sich unter Einbeziehung von Streuungen in den Materialeigenschaften der Matrix, der Zugentfestigung der Matrix und der Welligkeit der Bewehrung.
130

Numerical investigations on the uniaxial tensile behaviour of Textile Reinforced Concrete

Hartig, Jens 27 January 2011 (has links)
