221

Modelos constitutivos para materiais hiperelásticos: estudo e implementação computacional / Constitutive models for hyperelastic materials: study and computational implementation

João Paulo Pascon 01 April 2008
The main objective of this work is to implement nonlinear hyperelastic constitutive models in a computational code for geometrically nonlinear analysis of shells. For this purpose, concepts from linear and tensor algebra, kinematics, strain, stress, balance laws, variational principles, numerical methods and hyperelasticity are needed. The program uses the positional Lagrangian formulation, the finite element method, the principle of virtual work and the iterative Newton-Raphson method to solve the nonlinear equations. The shell finite element has ten nodes, seven parameters per node and a linear variation of the strain across the thickness. To derive the new constitutive models, the multiplicative decomposition of the deformation gradient, the Green-Lagrange strain tensor and the second Piola-Kirchhoff stress tensor are used. The developed code was tested on several examples and showed good accuracy in the mechanical analysis of highly deformable natural rubber. The locking phenomenon did not appear in the analyses performed. The present research confirms other works, corroborates the need to use nonlinear hyperelastic models to simulate the mechanical behavior of natural rubber, and presents results consistent with experimental data from the scientific literature and with the corresponding analytical solutions.
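For orientation, the strain and stress measures named in this abstract are standard; a minimal sketch follows, with the neo-Hookean energy included only as an illustrative example (the abstract does not say which strain-energy functions were actually implemented).

```latex
% Kinematic measures named in the abstract
\[ \mathbf{C} = \mathbf{F}^{\mathsf{T}}\mathbf{F}, \qquad
   \mathbf{E} = \tfrac{1}{2}\,(\mathbf{C}-\mathbf{I}), \qquad J = \det\mathbf{F} \]
% Second Piola-Kirchhoff stress from a strain-energy density \psi
\[ \mathbf{S} = \frac{\partial\psi}{\partial\mathbf{E}} = 2\,\frac{\partial\psi}{\partial\mathbf{C}} \]
% Illustrative example only: compressible neo-Hookean energy
\[ \psi = \frac{\mu}{2}\,\bigl(\operatorname{tr}\mathbf{C}-3\bigr) - \mu\ln J + \frac{\lambda}{2}\,(\ln J)^{2} \]
```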
222

  Alinhamento do modelo de forma ativa com máquinas de vetores de suporte aplicado na deteção de veículos / Active shape model alignment with support vector machines applied to vehicle detection

Aragão, Maria Géssica dos Santos 13 May 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Many digital image processing applications use object detection techniques. Detecting an object usually means locating the area around it, whereas shape detection means finding, precisely, the set of points that constitutes its shape. When the problem involves detecting shapes with predictable variations, deformable models prove to be an effective solution. The approach developed in this work detects the shape of vehicles in frontal view using methods organized in two levels: the first level is a cascade of support vector machines and the second is a deformable model. The use of deformable models favors detection of the vehicle shape even when its image is occluded by objects such as trees.
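As a rough illustration of the two-level scheme described above, the sketch below chains a cascade of linear SVMs that passes surviving candidates down the stages; the feature extraction, training data and the active-shape-model refinement stage are omitted, and all names and parameters are assumptions rather than the thesis's implementation.

```python
# Hedged sketch: a cascade of linear SVMs as a first-level detector. The second
# level (deformable/active shape model refinement) is not shown.
import numpy as np
from sklearn.svm import LinearSVC

class SVMCascade:
    def __init__(self, n_stages=3):
        # Each stage is a linear SVM; later stages see harder examples.
        self.stages = [LinearSVC(C=1.0) for _ in range(n_stages)]

    def fit(self, X, y):
        X_cur, y_cur = X, y
        for clf in self.stages:
            if len(np.unique(y_cur)) < 2:
                break                      # no hard negatives left to train on
            clf.fit(X_cur, y_cur)
            # Keep positives plus the negatives this stage still accepts
            # (false positives), so the next stage focuses on them.
            keep = (y_cur == 1) | (clf.decision_function(X_cur) > 0)
            X_cur, y_cur = X_cur[keep], y_cur[keep]
        return self

    def predict(self, X):
        # A candidate window survives only if every stage accepts it.
        accepted = np.ones(len(X), dtype=bool)
        for clf in self.stages:
            accepted &= clf.decision_function(X) > 0
        return accepted
```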
223

Development of a numerical model of single particle impact with adhesion for simulation of the Cold Spray process / Développement d'un modèle numérique d'impact à une seule particule avec adhérence pour la simulation du processus de pulvérisation à froid

Profizi, Paul 20 September 2016
In the context of the Cold Spray coating process, a numerical model of a single particle impacting a substrate at high velocity is developed. The point of interest is the adhesion of the particle to the substrate, so an adhesive interaction model is also created, within the CEA explicit dynamics code Europlexus. The impact model uses the Smoothed Particle Hydrodynamics (SPH) meshless method and/or the Finite Element method, with a Johnson-Cook material law, commonly used for metals at high strain rates, which accounts for strain hardening, strain-rate hardening and thermal softening. The adhesive interaction is a Griffith and Dugdale-Barenblatt cohesive model with energy dissipation and a limit on the cohesive stress. Using this model it is shown that, in the case of fast dynamics and deformable bodies, not only the adhesion parameters but also the type of cohesive model has a pronounced influence on the results. The adhesion model is also, contrary to previous works, linked with an actual physical mechanism known to induce adhesion in Cold Spray: a shear instability at the contact interface. This is done by adding an activation criterion to the cohesive model, defined as a local drop in yield strength on either element in contact. Only when this criterion is locally met are the cohesive stresses applied and cohesive energy dissipated. The result is the emergence of a critical velocity below which adhesion cannot occur, due either to insufficient initial kinetic energy to create an instability at the interface, or to insufficient adhesive surface created to keep the particle from rebounding.
For the model to localize and undergo shear banding (shear instability), a damage variable is added to the material law. An erosion criterion is then implemented in the cohesive model to remove the cohesive stresses from highly damaged parts of the adhesive surface. At high impact speeds this yields a maximum velocity above which the interfacial material is too damaged to sustain adhesion and prevent the particle from rebounding. A deposition behavior similar to the Cold Spray process is thus observed: a range of low velocities without any adhesion of the particle, then a critical speed initiating a velocity range in which the particle adheres, and finally a maximum speed above which the interface is too damaged to sustain adhesion. A set of experimental observations, including hardness tests and EBSD images, is also carried out to better understand the microstructural dynamics and changes at the interface of 1 mm copper particles impacted on copper. The results are compared to simulations and the use of the macroscopic Johnson-Cook law at a microscopic level is validated.
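The Johnson-Cook flow stress and damage laws referred to above have the standard forms below; material constants for copper are not reproduced, and whether the thesis uses exactly this damage form is not stated in the abstract.

```latex
% Johnson-Cook flow stress (strain hardening, strain-rate hardening, thermal softening)
\[ \sigma_y = \bigl(A + B\,\varepsilon_p^{\,n}\bigr)
              \bigl(1 + C \ln \dot{\varepsilon}^{*}\bigr)
              \bigl(1 - T^{*m}\bigr),
   \qquad
   \dot{\varepsilon}^{*} = \frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_0},
   \quad
   T^{*} = \frac{T - T_{\mathrm{ref}}}{T_{\mathrm{melt}} - T_{\mathrm{ref}}} \]
% Companion Johnson-Cook damage model (accumulated plastic strain vs. failure strain)
\[ D = \sum \frac{\Delta\varepsilon_p}{\varepsilon_f},
   \qquad
   \varepsilon_f = \bigl(D_1 + D_2\,e^{D_3 \sigma^{*}}\bigr)
                   \bigl(1 + D_4 \ln \dot{\varepsilon}^{*}\bigr)
                   \bigl(1 + D_5 T^{*}\bigr) \]
```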
224

Recalage déformable de projections de scanner X à faisceau conique / Deformable registration of cone-beam projections

Delmon, Vivien 29 November 2013
Motion estimation is a challenge in radiotherapy: anatomical motion and deformation introduce a ballistic uncertainty that requires safety margins around the tumor, which may prevent delivery of a sufficient dose to the tumor region. In this thesis, we address the problem of estimating this motion directly in the treatment room using the cone-beam projections acquired just before treatment, by means of deformable image registration (DIR) algorithms. Firstly, we proposed a new breathing motion model that takes into account the sliding discontinuity between the rib cage and the lungs while preserving a coherent deformation field. The method uses a segmentation of the inner part of the rib cage, obtained by an algorithm that requires segmentations of the lungs and of the rib cage; the algorithms segmenting these two regions were not robust enough, which led us to improve them.
Compared to previous methods using this mask, our motion model is more robust to segmentation inconsistencies because it only requires a single mask instead of two consistent masks; moreover, in the case of 2D/3D registration, the computation of the second mask is usually not possible. The proposed model restricts the transformation to physiologically plausible motions while relying on a single segmentation of the 3D image. Secondly, we implemented a 2D/3D registration algorithm that uses this breathing model to extract the respiratory motion from the cone-beam projections. The algorithm was validated on simulated data with known deformations, and then applied to real data to reconstruct motion-compensated images, removing respiratory blur from cone-beam CT. The proposed approach gives access to detailed knowledge of the patient's anatomy and respiratory motion on the day of treatment, which opens perspectives such as daily treatment adaptation, dose computation accounting for respiratory motion, and treatment re-planning. The approach of registering a 3D image to 2D projections can be generalized to other motions and other anatomic regions.
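Schematically, the 2D/3D registration problem described above can be written as fitting motion-model parameters directly to the cone-beam projections; the sketch below is a generic formulation under assumed notation, not the cost function actually used in the thesis.

```latex
% Generic 2D/3D registration: p_i are measured cone-beam projections at gantry
% angle \phi_i and respiratory phase t_i, R_{\phi} is the forward projection
% operator, T_{\theta} the mask-constrained motion model, D a similarity measure.
\[ \hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{N}
     D\!\left( p_i,\; \mathcal{R}_{\phi_i}\!\bigl( I_{\mathrm{3D}} \circ T_{\theta}(\,\cdot\,, t_i) \bigr) \right) \]
```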
225

Unsteady Two Dimensional Jet with Flexible Flaps at the Exit

Das, Prashant January 2016
The present thesis studies the effect of introducing passive exit flexibility in a two-dimensional starting jet. This is relevant to various biological flows such as the propulsion of aquatic creatures (jellyfish, squid, etc.) and flow in the human heart. In the present study we introduce exit flexibility in two ways: by hinging rigid plates at the channel exit, and by attaching deformable flaps at the exit. In the hinged-flap cases, the experimental arrangement closely approximates the limiting case of a free-to-rotate rigid flap with negligible structural stiffness, damping and flap inertia, these limiting structural properties permitting the largest flap openings. In the deformable-flap cases, the flap stiffness (its flexural rigidity EI) becomes an important parameter. In both cases, the initial condition was such that the flaps were parallel to the channel walls. With this arrangement, a piston was pushed in a controlled manner to form the starting jet, and we visualized the flap kinematics and made flow field measurements. A number of parameters were varied, including the piston speed, the flap length and the flap stiffness (in the case of the deformable flaps). In the hinged rigid flap cases, the typical motion of the flaps involves a rapid opening at flow initiation and a subsequent, more gradual return to the initial position, which occurs while the piston is still moving. The initial opening of the flaps can be attributed to an excess pressure that develops in the channel when the flow starts, due to the acceleration that has to be imparted to the fluid slug between the flaps. In the cases with flaps, additional pairs of vortices are formed because of the motion of the flaps, and a complete redistribution of vorticity is observed. The length of the flaps is found to significantly affect flap kinematics when plotted using the conventional time scale L/d. However, with a newly defined time scale based on the flap length (L/Lf), we find a good collapse of all the measured flap motions irrespective of flap length and piston velocity for an impulsively started piston motion. The maximum opening angle in all these impulsive velocity-program cases, irrespective of the flap length, is found to be close to 15 degrees. Even though the flap kinematics collapses well with L/Lf, there are differences in the distribution of the ejected vorticity even for the same L/Lf. In the deformable flap cases, the initial excess pressure in the flap region causes the flaps to bulge outwards. The bulge grows and moves outwards as the flow develops, and the flaps open out to reach their maximum opening. Thereafter, the flaps start returning to their initial straight position and remain there as long as the piston is in motion. Once the piston stops, the flaps collapse inwards and the two flap tips touch each other. The flap flexural rigidity was found to play an important role in the kinematics. We define a new time scale (t*) based on the flexural rigidity of the flaps (EI) and the flap length (Lf). Using this new time scale, we find that the time taken to reach the maximum bulge (t* ≈ 0.03) and the time taken to reach the maximum opening (t* ≈ 0.1) are approximately the same across the various flap stiffness and flap length cases. The motion of the flaps results in the formation of additional pairs of vortices.
Interestingly, the total final circulation remains almost the same as that of the rigid-exit case for all the flap stiffnesses and flap lengths studied. However, the final fluid impulse (after all the fluid has come out of the flap region) is always higher in the flap cases than in the rigid-exit case because of the vorticity redistribution. The rate at which the impulse increases is also higher in most flap cases, and the final impulse values are as large as 1.8 times the rigid-exit case. Since the time rate of change of impulse is linked with force, the measurements suggest that the introduction of flexible flaps at the exit could result in better propulsion performance for a system using starting jets. The work carried out in this thesis has shown that dramatic changes can be made to the flow field by attaching flexible flaps at the exit of an unsteady starting jet. The coupling of the flap kinematics with the flow dynamics led to desirable changes in the flow. Although the flaps introduced in this work are idealized and may not represent the kind of flexibility encountered in biological systems, they give us a better understanding of the importance of exit flexibility in these kinds of flows.
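For reference, the circulation and (2D) hydrodynamic impulse discussed above can be computed from a measured vorticity field as in the sketch below; this is a minimal illustration under the usual definitions and sign conventions, not the thesis's processing code.

```python
# Hedged sketch: circulation and 2D hydrodynamic impulse (per unit span) from a
# vorticity field on a uniform grid. Sign conventions vary between references.
import numpy as np

def circulation_and_impulse(omega, x, y, rho=1000.0):
    """omega[j, i]: vorticity at (y[j], x[i]); x, y: 1D uniform grids [m]."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y)
    gamma = omega.sum() * dx * dy                   # circulation: integral of omega dA
    impulse_x = rho * (Y * omega).sum() * dx * dy   # components of rho * integral of (x cross omega) dA
    impulse_y = -rho * (X * omega).sum() * dx * dy
    return gamma, (impulse_x, impulse_y)
```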
226

Simulations numériques d’écoulements incompressibles interagissant avec un corps déformable : application à la nage des poissons / Numerical simulation of incompressible flows interacting with forced deformable bodies : Application to fish swimming

Ghaffari Dehkharghani, Seyed Amin 15 December 2014
We present an efficient and accurate algorithm for the simulation of deformable bodies interacting with two-dimensional incompressible flows. The temporal and spatial discretizations of the Navier-Stokes equations in vorticity-stream function formulation are based on a classical fourth-order Runge-Kutta scheme and compact finite differences, respectively. Using a uniform Cartesian grid, we take advantage of a new fourth-order direct solver for the Poisson equation to enforce the incompressibility constraint down to machine zero over an optimal grid. To introduce a deformable body into the fluid flow, the volume penalization method is used. A Lagrangian structured grid with prescribed motion covers the deformable body, which interacts with the surrounding fluid through the hydrodynamic forces and torque calculated on the Eulerian reference grid. An efficient law for controlling the curvature of an anguilliform fish swimming toward a prescribed goal is proposed, based on the geometrically exact theory of nonlinear beams and quaternions. Validation of the developed method shows the efficiency and expected accuracy of the algorithm for fish-like swimming and for a variety of fluid/solid interaction problems.
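Schematically, the vorticity-stream function formulation with volume penalization described above takes the following form, where χ is the body mask, η ≪ 1 the penalization parameter and u_s the prescribed solid velocity; this is only a summary of the standard equations, not the exact discretized system of the thesis.

```latex
% 2D vorticity-stream function equations with volume penalization (schematic)
\[ \frac{\partial \omega}{\partial t} + (\mathbf{u}\cdot\nabla)\,\omega
     = \nu\,\nabla^{2}\omega
     + \left( \nabla \times \left[ \frac{\chi}{\eta}\,(\mathbf{u}_s - \mathbf{u}) \right] \right)\cdot \hat{\mathbf{e}}_z \]
% Stream function and velocity recovery
\[ \nabla^{2}\psi = -\,\omega, \qquad
   u = \frac{\partial \psi}{\partial y}, \qquad
   v = -\,\frac{\partial \psi}{\partial x} \]
```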
227

Mathematical approaches to modelling healing of full thickness circular skin wounds

Bowden, Lucie Grace January 2015
Wound healing is a complex process, in which a sequence of interrelated events at both the cell and tissue levels interact and contribute to the reduction in wound size. For diabetic patients, many of these processes are compromised, so that wound healing slows down and in some cases halts. In this thesis we develop a series of increasingly detailed mathematical models to describe and investigate healing of full thickness skin wounds. We begin by developing a time-dependent ordinary differential equation model. This phenomenological model focusses on the main processes contributing to closure of a full thickness wound: proliferation in the epidermis and growth and contraction in the dermis. Model simulations suggest that the relative contributions of growth and contraction to healing of the dermis are altered in diabetic wounds. We investigate further the balance between growth and contraction by developing a more detailed, spatially-resolved model using continuum mechanics. Due to the initial large retraction of the wound edge upon injury, we adopt a non-linear elastic framework. Morphoelasticity theory is applied, with the total deformation of the material decomposed into an addition of mass and an elastic response. We use the model to investigate how interactions between growth and stress influence dermal wound healing. The model reveals that contraction alone generates unrealistically high tension in the dermal tissue and, hence, volumetric growth must contribute to healing. We show that, in the simplified case of homogeneous growth, the tissue must grow anisotropically in order to reduce the size of the wound and we postulate mechanosensitive growth laws consistent with this result. After closure the surrounding tissue remodels, returning to its residually stressed state. We identify the steady state growth profile associated with this remodelled state. The model is used to predict the outcome of rewounding experiments as a method of quantifying the amount of stress in the tissue and the application of pressure treatments to control tissue synthesis. The thesis concludes with an extension to the spatially-resolved mechanical model to account for the effects of the biochemical environment. Partial differential equations describing the dynamics of fibroblasts and a regulating growth factor are coupled to equations for the tissue mechanics, described in the morphoelastic framework. By accounting for biomechanical and biochemical stimuli the model allows us to formulate mechanistic laws for growth and contraction. We explore how disruption of mechanical and chemical feedback can lead to abnormal wound healing and use the model to identify specific treatments for normalising healing in these cases.
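The morphoelastic decomposition referred to above, with the total deformation split into growth (addition of mass) and an elastic response, is conventionally written as follows; the specific growth and contraction laws postulated in the thesis are not reproduced here.

```latex
% Multiplicative morphoelastic decomposition: growth F_g followed by elastic response F_e
\[ \mathbf{F} = \mathbf{F}_e\,\mathbf{F}_g, \qquad J = J_e\,J_g \]
% Stress is generated by the elastic part only, e.g. for a strain energy \psi(\mathbf{C}_e)
\[ \mathbf{C}_e = \mathbf{F}_e^{\mathsf{T}}\mathbf{F}_e, \qquad
   \mathbf{S}_e = 2\,\frac{\partial \psi}{\partial \mathbf{C}_e} \]
```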
228

Un modèle de poutre à section mince flexible : Application aux pliages 3D de mètres-rubans / A rod model with flexible thin-walled cross-section : Application to the folding of tape springs in 3D

Picault, Elia 21 November 2013
This work was carried out within the framework of a collaboration between the LMA and Thales Alenia Space. We focus on the behaviour of flexible structures, and more specifically of tape springs, whose particularity lies in their capacity to coil up or to form localized folds through the flattening of their cross-section. A first thesis led to the development of, on the one hand, a new type of tape spring whose uncoiling is controlled thermally and, on the other hand, a planar rod model with a flexible thin-walled cross-section. In the present thesis, we propose an extended version of this model dedicated to simulating the three-dimensional dynamic behaviour of tape springs undergoing large displacements and large rotations. The model is derived from shell theory and is based on the introduction of suitable kinematic and sthenic hypotheses. The deformation of the cross-section is characterized by that of its middle line, which can deform in its own plane by bending and twisting but not by extension, as well as out of its plane through torsional warping. The large changes of the cross-section shape in its plane can then be described by an Elastica kinematics, whereas a Vlassov kinematics is used to define the warping in the local frame attached to the section. The one-dimensional model is obtained by integrating the shell-theory expressions over the cross-section; an energetic approach then allows the associated problem to be formulated, and it is solved with the finite element modelling software COMSOL.
229

Mid-level representations for modeling objects / Représentations de niveau intermédiaire pour la modélisation d'objets

Tsogkas, Stavros 15 January 2016
In this thesis we propose the use of mid-level representations, in particular i) medial axes, ii) object parts, and iii) convolutional features, for modelling objects. The first part of the thesis deals with detecting medial axes in natural RGB images. We adopt a learning approach, utilizing colour, texture and spectral clustering features to build a classifier that produces a dense probability map for symmetry. Multiple Instance Learning (MIL) allows us to treat scale and orientation as latent variables during training, while a variant based on random forests offers significant gains in running time. In the second part of the thesis we focus on object part modelling using both hand-crafted and learned feature representations.
We develop a coarse-to-fine, hierarchical approach that uses probabilistic bounds on part scores to decrease the computational cost of mixture models with a large number of HOG-based templates. These efficiently computed probabilistic bounds allow us to quickly discard large parts of the image and evaluate the exact convolution scores only at promising locations. Our approach achieves a 4x-5x speedup over the naive approach with minimal loss in performance. We also employ convolutional features to improve object detection: we use a popular CNN architecture to extract responses from an intermediate convolutional layer and integrate these responses into the classic DPM pipeline, replacing hand-crafted HOG features, observing a significant boost in detection performance (~14.5% increase in mAP). In the last part of the thesis we experiment with fully convolutional neural networks for the segmentation of object parts. We re-purpose a state-of-the-art CNN to perform fine-grained semantic segmentation of object parts and use a fully-connected CRF as a post-processing step to obtain sharp boundaries. We also inject prior shape information into our model through a Restricted Boltzmann Machine trained on ground-truth segmentations. Finally, we train a new fully convolutional architecture from a random initialization to segment different parts of the human brain in magnetic resonance image data. Our methods achieve state-of-the-art results on both types of data.
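The coarse-to-fine pruning idea summarized above can be illustrated with the toy sketch below: a cheap low-resolution proxy score stands in for an upper bound, and the exact template score is evaluated only where the bound survives a threshold. The bound construction here is a deliberate placeholder, not the probabilistic bounds derived in the thesis.

```python
# Hedged sketch of coarse-to-fine template scoring with pruning.
import numpy as np

def coarse_to_fine_scores(feat, template, stride=4, slack=1.0, thresh=0.0):
    th, tw = template.shape
    H, W = feat.shape[0] - th + 1, feat.shape[1] - tw + 1
    coarse_t = template[::stride, ::stride]            # cheap sub-sampled proxy
    scores = np.full((H, W), -np.inf)
    for y in range(H):
        for x in range(W):
            patch = feat[y:y + th, x:x + tw]
            # Proxy score plus slack plays the role of an upper bound here.
            bound = float(np.sum(patch[::stride, ::stride] * coarse_t)) * stride ** 2 + slack
            if bound < thresh:
                continue                               # pruned: exact score never computed
            scores[y, x] = float(np.sum(patch * template))
    return scores
```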
230

  Méthodes de génération et de validation de champs de déformation pour la recombinaison de distribution de dose à l’aide d’images 4DCT dans le cadre d’une planification de traitement de cancers pulmonaires / Methods for generating and validating deformation fields for dose distribution recombination using 4DCT images in lung cancer treatment planning

Labine, Alexandre 12 1900
Purpose: To provide a reliable deformable image registration (DIR) method for dose calculation in radiation therapy, and to investigate an automatic vessel bifurcation detection algorithm for DIR assessment, in order to improve lung cancer radiation treatment.
Methods: 15 4DCT datasets were acquired and the deep-exhale respiratory phases were exported to the Varian Eclipse™ treatment planning system (TPS) for contouring. Voxelized contours were smoothed by a Gaussian filter and then transformed into a surface mesh representation. This mesh was adapted by rigid and elastic deformations, based on hierarchical surface deformation, to match each subsequent lung volume. The segmentation efficiency was assessed by comparing the segmented lung contour with the TPS contour using two volume metrics, the Volumetric Overlap Error (VOE) [%] and the Relative Volume Difference (RVD) [%], and three surface metrics, the Average Symmetric Surface Distance (ASSD) [mm], the Root Mean Square Symmetric Surface Distance (RMSSD) [mm] and the Maximum Symmetric Surface Distance (MSSD) [mm]. A vesselness filter was applied within the segmented lung volumes to identify blood vessels and airways, which were then skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: termination, continuation or branching.
Results: The volume metrics obtained are a VOE of 7.6 ± 2.5 [%] / 6.8 ± 2.1 [%] and an RVD of 6.8 ± 2.4 [%] / 5.9 ± 1.9 [%] for the left and right lung, respectively. The surface metrics are an ASSD of 0.8 ± 0.2 [mm] / 0.8 ± 0.2 [mm], an RMSSD of 1.2 ± 0.2 [mm] / 1.3 ± 0.3 [mm] and an MSSD of 7.7 ± 2.4 [mm] / 10.2 ± 5.2 [mm] for the left and right lung, respectively. 320 ± 51 bifurcations were detected in the right lung of a patient over the 10 breathing phases: 92 ± 10 in the upper half of the lung and 228 ± 45 in the lower half. Discrepancies between the ten vessel trees were mainly ascribed to the segmentation method.
Conclusions: This study shows that the morphological segmentation algorithm provides an automatic method to capture organ motion from 4DCT scans and translate it into the volume deformation grid needed by the DIR method for dose distribution recombination. We also established an automatic method for DIR assessment using the morphological information of the patient's anatomy. This approach provides a description of the lung's internal structure movement, which is needed to validate the DIR deformation fields.
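For reference, the volume metrics quoted above (VOE, RVD) and the average surface distance (ASSD) can be computed from binary masks as in the sketch below; definitions follow common conventions, and this is not the evaluation code used in the thesis.

```python
# Hedged sketch: overlap and surface-distance metrics from 3D binary masks.
import numpy as np
from scipy import ndimage

def voe_rvd(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    voe = 100.0 * (1.0 - inter / union)                    # Volumetric Overlap Error [%]
    rvd = 100.0 * abs(seg.sum() - ref.sum()) / ref.sum()   # Relative Volume Difference [%]
    return voe, rvd

def assd(seg, ref, spacing=(1.0, 1.0, 1.0)):
    # Average Symmetric Surface Distance [mm]: mean distance between the two
    # mask surfaces, taken in both directions.
    seg, ref = seg.astype(bool), ref.astype(bool)
    seg_surf = seg ^ ndimage.binary_erosion(seg)
    ref_surf = ref ^ ndimage.binary_erosion(ref)
    d_to_ref = ndimage.distance_transform_edt(~ref_surf, sampling=spacing)
    d_to_seg = ndimage.distance_transform_edt(~seg_surf, sampling=spacing)
    dists = np.concatenate([d_to_ref[seg_surf], d_to_seg[ref_surf]])
    return float(dists.mean())
```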
