About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Orientation, Microstructure and Pile-Up Effects on Nanoindentation Measurements of FCC and BCC Metals

Srivastava, Ashish Kumar 05 1900 (has links)
This study deals with the effect of crystal orientation, along with the effects of microstructure, on the pile-ups that affect nanoindentation measurements. Two metal classes, face centered cubic (FCC) and body centered cubic (BCC), are dealt with in the present study. The objective of this study was to find out the degree of inaccuracy induced in nanoindentation measurements by the inherent pile-ups and sink-ins, and how the formation of pile-ups depends upon the crystal structure and the orientation of the plane of indentation. Nanoindentation, Nanovision, scanning electron microscopy, energy dispersive spectroscopy and electron backscatter diffraction techniques were used to determine the sample composition and crystal orientation. Surface topographical features like indentation pile-ups and sink-ins were measured and the effect of crystal orientation on them was studied. The results show that pile-up formation is not a random phenomenon, but is quite characteristic of the material: it depends on the type of stress imposed by a specific indenter, the depth of penetration, the microstructure and the orientation of the plane of indentation. Pile-ups form along specific directions on a plane, and their formation, height and contact radii with the indenter are dependent on the aforesaid parameters. These pile-ups affect mechanical properties such as elastic modulus and hardness, which are pivotal variables for specific applications in micro- and nano-scale devices.
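To make the pile-up bias described above concrete: in depth-sensing indentation, hardness is computed as H = P/A(h_c), so pile-up that enlarges the true contact area beyond what the area function predicts inflates the reported hardness (and similarly the modulus). A minimal sketch, with load, depth and pile-up height assumed for illustration rather than taken from the thesis:

```python
# Illustrative numbers only: how ignoring pile-up inflates a hardness estimate.
P = 10e-3            # peak load [N]
hc_op = 500e-9       # Oliver-Pharr contact depth [m]
pileup = 60e-9       # extra contact depth contributed by pile-up [m] (assumed)

area = lambda hc: 24.5 * hc**2        # ideal Berkovich area function [m^2]
H_op   = P / area(hc_op)              # hardness ignoring pile-up
H_true = P / area(hc_op + pileup)     # hardness with pile-up included

print(f"H (Oliver-Pharr): {H_op/1e9:.2f} GPa")         # ~1.63 GPa
print(f"H (pile-up corrected): {H_true/1e9:.2f} GPa")  # ~1.30 GPa: larger true area
```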
2

A study of missing transverse energy in minimum bias events with in-time pile-up at the Large Hadron Collider using the ATLAS Detector and √s=7 TeV proton-proton collision data

Wang, Kuhan 03 August 2011 (has links)
A sample of $\int L\,\mathrm{d}t = 3.67\ \mathrm{pb}^{-1}$ of minimum bias events observed using the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=7$ TeV is analyzed for Missing Transverse Energy (MET) response in the presence of in-time pile-up. We find that the MET resolution ($\sigma_\text{X,Y}$) is consistent with a simple model of the detector response for minimum bias events, scaling with respect to the scalar sum of transverse energy ($\sum E_\text{T}$) as $\sigma_\text{X,Y}=A\sqrt{\sum E_\text{T}}$. This behavior is observed in the presence of in-time pile-up and does not vary with global calibration schemes. We find a bias in the mean ($\mu_\text{X,Y}$) of the MET that is linear with respect to $\sum E_\text{T}$, leading to an asymmetry in the $\phi$ distribution of the MET. We propose an explanation for this problem in terms of a misalignment of the nominal center of the ATLAS detector with respect to its real center. We contrast the data with a Monte Carlo sample produced using PYTHIA. We find that the resolution, bias and asymmetry are all approximately reproduced in simulation. / Graduate
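The scaling law quoted in this abstract, $\sigma_\text{X,Y}=A\sqrt{\sum E_\text{T}}$, is a one-parameter model that is linear in $\sqrt{\sum E_\text{T}}$, so the constant $A$ follows from a simple least-squares fit. A minimal sketch on synthetic resolution points (the values, and hence the fitted constant, are illustrative only):

```python
import numpy as np

# Synthetic (sum E_T [GeV], MET x-resolution [GeV]) pairs -- illustrative only.
sum_et  = np.array([20.0, 50.0, 100.0, 200.0, 400.0])
sigma_x = np.array([2.2, 3.6, 5.1, 7.0, 10.1])

# Least-squares fit of sigma = A * sqrt(sum E_T):
# minimizing sum_i (sigma_i - A*sqrt(s_i))^2 gives A = sum(sqrt(s)*sigma)/sum(s).
A = np.sum(np.sqrt(sum_et) * sigma_x) / np.sum(sum_et)
print(f"fitted A = {A:.3f} sqrt(GeV)")

print("residuals [GeV]:", sigma_x - A * np.sqrt(sum_et))
```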
3

Maximum Likelihood Estimators for ARMA and ARFIMA Models. A Monte Carlo Study.

Hauser, Michael A. January 1998 (has links) (PDF)
We analyze by simulation the properties of two time domain and two frequency domain estimators for low order autoregressive fractionally integrated moving average Gaussian models, ARFIMA(p,d,q). The estimators considered are the exact maximum likelihood for demeaned data (EML), the associated modified profile likelihood (MPL), and the Whittle estimator with (WLT) and without (WL) tapered data. The length of each series is 100. The estimators are compared in terms of pile-up effect, mean square error, bias, and empirical confidence level. The tapered version of the Whittle likelihood turns out to be a reliable estimator for ARMA and ARFIMA models: its small losses in performance for "well-behaved" models are compensated sufficiently in more "difficult" models. The modified profile likelihood is an alternative to the WLT but is computationally more demanding; it is either equivalent to or more favorable than the EML, and for fractionally integrated models in particular it clearly dominates the EML. The WL has serious deficiencies for large ranges of parameters and so cannot be recommended in general. The EML, on the other hand, should be used only with care for fractionally integrated models, due to a potentially large negative bias in the fractional integration parameter. In general, one should proceed with caution for ARMA(1,1) models with almost canceling roots and, in particular, in the case of the EML and the MPL, for inference in the vicinity of a moving average root of +1. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
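For orientation on the estimators compared above: the Whittle estimator minimizes a frequency-domain approximation of the Gaussian likelihood built from the periodogram. A minimal untapered sketch for the simplest member of the model class, ARFIMA(0,d,0), with the innovation variance profiled out; the test series is synthetic white noise, so the estimate should land near d = 0:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Whittle estimate of d for an ARFIMA(0,d,0) from the periodogram."""
    n = len(x)
    x = x - x.mean()
    j = np.arange(1, (n - 1) // 2 + 1)            # Fourier frequency indices
    lam = 2.0 * np.pi * j / n
    I = np.abs(np.fft.fft(x)[j]) ** 2 / (2.0 * np.pi * n)   # periodogram
    def obj(d):
        # Spectral shape of ARFIMA(0,d,0): f(lam) ~ |2 sin(lam/2)|^(-2d)
        g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)
        s2 = np.mean(I / g)                        # profiled innovation variance
        return np.log(s2) + np.mean(np.log(g))     # concentrated Whittle objective
    return minimize_scalar(obj, bounds=(-0.49, 0.49), method="bounded").x

rng = np.random.default_rng(0)
print(whittle_d(rng.standard_normal(100)))         # series length 100, as in the study
```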
4

La caractérisation mécanique de systèmes film-substrat par indentation instrumentée (nanoindentation) en géométrie sphère-plan / Mechanical characterization of film-substrate systems by instrumented indentation (nanoindentation) on sphere-plane geometry

Oumarou, Noura 06 January 2009 (has links)
Instrumented indentation (nanoindentation) is an experimental technique increasingly used to assess the mechanical properties of materials (hardness H, Young's modulus E) for which conventional mechanical tests are difficult or impossible to perform. These mechanical parameters are extracted from the experimental load-unload curve alone (the plot of load versus penetration depth during both loading and unloading), whose analysis relies on numerous models reported in the literature (Oliver and Pharr, Field and Swain, Doerner and Nix, Loubet et al.) that treat the unloading as purely elastic. Our experiments on various bulk materials (stainless steels AISI304, AISI316, AISI430; high-speed steel HSS652; silica glass SiO2) and on substrates coated with thin TiN and TiO2 films showed that the mechanical properties E and H deduced from the Oliver and Pharr method depend on the percentage of the unloading curve considered, the applied load and the tip radius. Moreover, for a film-substrate system the technique is generally used to reach the in-situ properties of the film or the substrate, whereas the analysis method delivers composite parameters that must then be deconvolved. In search of a simple strategy for obtaining the elastic modulus of a "hard" film for mechanical applications, we turned to numerical simulation, using a code based on the boundary element method. Our numerical investigations of spherical indentation yielded several results useful for the analysis of experimental data. We first showed that, for a homogeneous elastoplastic bulk material as well as for a hard-film/elastoplastic-substrate system, the elastic relation $\delta = a^2/R$ between the relative approach $\delta$, the projected contact radius $a$ and the punch radius $R$ remains valid in the plastic range. This allows spherical indentation data to be represented as a curve of mean pressure $F/\pi a^2$ versus indentation strain $a/R$. The initial slope of the loading part of this curve is proportional to the Young's modulus of the film, while the initial slope of the unloading curve is proportional to the elastic modulus of the substrate. A relation between the indenter displacement and $\delta$ was established, followed by an indentation analysis method. Finally, the procedure was successfully validated, both numerically and experimentally, on data from the indentation of various film-substrate combinations (TiN/AISI430, TiN/HSS652 and TiO2/HSS652).
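The representation described above follows directly from $\delta = a^2/R$: each point of a spherical load-displacement curve maps to a mean pressure $F/\pi a^2$ and an indentation strain $a/R$. A minimal sketch on a synthetic Hertzian (purely elastic) loading curve, with modulus and punch radius assumed for illustration; in this elastic limit the curve is exactly linear with slope $4E^*/3\pi$:

```python
import numpy as np

R = 10e-6                                   # punch radius [m] (assumed)
E_star = 70e9                               # reduced modulus [Pa] (assumed)
delta = np.linspace(5e-9, 500e-9, 50)       # relative approach [m]
F = 4/3 * E_star * np.sqrt(R) * delta**1.5  # Hertzian loads [N] (synthetic data)

a = np.sqrt(delta * R)         # contact radius from delta = a^2 / R
p_mean = F / (np.pi * a**2)    # mean pressure F / (pi a^2)
strain = a / R                 # indentation strain a / R

# Initial slope of the p_mean vs strain curve, proportional to the modulus:
slope = np.polyfit(strain[:10], p_mean[:10], 1)[0]
print(f"initial slope ~ {slope/1e9:.1f} GPa (expected 4E*/3pi ~ 29.7 GPa)")
```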
5

Measurement of Z boson production in association with b-jets in proton-proton collisions at 13 TeV and studies of an electron trigger system for high luminosity in the ATLAS experiment / Medidas da produção de bósons Z associados a jatos b em colisões próton-próton a 13 TeV e estudos de um sistema de trigger de elétrons para alta luminosidade no experimento ATLAS

Navarro, Jose Luis La Rosa 11 October 2017 (has links)
This thesis presents results on the measurement of Z boson production in the electron channel in association with b-jets in proton-proton collisions at 13 TeV, a measurement of fundamental interest for precision studies and physics searches. It provides important precision tests of perturbative QCD and information on the b-quark content of the proton, and this process is also one of the main sources of background in top quark production studies, Higgs precision measurements and searches for supersymmetric particles. The cross section measurements and kinematic distributions presented in this work have been unfolded to particle level and are compared to the four- and five-flavor schemes in Monte Carlo generators, showing that the predictions are consistent within the experimental uncertainties. This thesis also presents studies for the new event selection system proposed for high luminosity measurements, to be installed during the second LHC long shutdown (2019-2020). These studies concern the reconstruction of electrons using the concept of supercells, in which the fine granularity of the cells of the ATLAS Liquid Argon calorimeter is exploited to mitigate the problems expected under high luminosity conditions. One of these problems is the increase of pile-up, caused by additional proton collisions in the same event, which generates a large number of low-pT jets that can be wrongly identified as electrons. The studies in this work introduce new discriminants based on the electromagnetic shower shape, showing that it is possible to further reduce the rate of low-pT jets while maintaining good electron reconstruction performance.
6

Détermination du rapport d’embranchement de la transition super-permise du carbone 10 et développement et intégration de la ligne de faisceau PIPERADE au CENBG / Determination of the branching ratio of the superallowed transition of carbon 10 and development of the beam line PIPERADE at CENBG

Aouadi, Mehdi 15 December 2017 (has links)
Studies of beta radioactivity in nuclei contribute to the determination of one of the parameters describing the weak interaction, the vector coupling constant. Numerous measurements already reach high precision on this parameter for a large number of parent nuclei of superallowed beta transitions. For carbon-10, however, the relative uncertainty on the branching ratio is still high compared to the other parent nuclei, at about 0.13%. This is due to the energy of the photon emitted by the 0+ state of the daughter nucleus, 1021.6 keV, which lies close to the summed energy of two piled-up 511 keV photon signals. In May 2015, our group carried out an experiment at ISOLDE (CERN) to measure this branching ratio very precisely. The nuclear reactions used to produce carbon-10 yielded mostly the nuclei of interest, but also same-mass contaminants that are also beta+ emitters. To reduce the pile-up, it would have been necessary to separate the elements better, or to estimate the pile-up from equivalent data taken with neon-19. We thus calculated a pile-up constant, dependent on the shaping time, of the order of 0.1 μs. The analysis of our carbon-10 data then gave a branching ratio of 1.500(4)%, whereas the average of the literature values is 1.4645(19)%. To produce more nuclear species and increase beam intensities, GANIL (Grand Accélérateur National d'Ions Lourds) is currently developing a new accelerator together with a set of targets based on the ISOL method. To reduce the deposit of contaminants at the measurement points, as occurred in the carbon-10 measurement at ISOLDE, the physics community also wants to develop a set of separation tools. In this context, our group has participated since 2011 in the development of two of them: a high-resolution separator (HRS), for nuclei requiring a mass resolving power (m/Δm) of 20,000, and a double Penning trap (PIPERADE), for nuclei requiring a mass resolving power of up to 100,000. A test beam line comprising a FEBIAD ion source, the GPIB RFQ cooler-buncher, an electrostatic switchyard and the double Penning trap (PIPERADE) is therefore under development at CENBG. In tests of these devices we observed a transmission efficiency of about 80% through the GPIB, and measured a transverse emittance of 3 π·mm·mrad downstream of the GPIB, compared with 26 π·mm·mrad at its entrance. Simulations of the injection line into the Penning trap then defined a deceleration scheme that injects 98% of the ions extracted from the GPIB. This thesis thus consists of two parts: the determination of the carbon-10 branching ratio, and the development and integration of the PIPERADE beam line at CENBG.
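The summing pile-up at issue above follows standard counting statistics: two 511 keV annihilation photons arriving within the shaping time are recorded as one signal near the 1021.6 keV line of interest, and for a Poisson-distributed rate R the piled-up fraction is approximately Rτ. A back-of-envelope sketch in which the counting rate is an assumed value, not one from the thesis:

```python
# Two 511 keV photons summing within the shaping time mimic the 1021.6 keV line.
tau = 0.1e-6       # pile-up (shaping-time) constant from the abstract [s]
rate_511 = 5e4     # assumed 511 keV counting rate [1/s] -- illustrative
pileup_fraction = rate_511 * tau   # probability of a second hit within tau
print(f"fraction of 511 keV counts summed near 1022 keV: {pileup_fraction:.1%}")
```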
7

Estimação de energia para calorimetria em física de altas energias baseada em representação esparsa / Energy estimation for high-energy physics calorimetry based on sparse representation

Barbosa, Davis Pereira 17 March 2017 (has links)
This thesis proposes a new approach based on sparse representation for the energy estimation problem in high-energy calorimetry operating in pile-up scenarios. The work was mainly motivated by the progressive increase of the LHC luminosity and its consequences for energy estimation in the channels of the ATLAS electromagnetic calorimeter (LArg), in the context of the ATLAS upgrade program. Two estimation methods were proposed, named SPARSE and SPARSE-COF, both using linear programming in the search for sparsity. These methods were evaluated in several simulations and compared with the classical method used in the ATLAS calorimeters, called OF, and with DM-COF, a recently developed method for the ATLAS hadronic calorimeter that addresses the pile-up problem in its formulation. In the various simulations performed, the SPARSE and SPARSE-COF methods outperformed the others, especially when the observation window used for energy estimation does not contain all samples of the typical calorimeter pulse, operating in pile-up scenarios. In addition, using LArg Monte Carlo simulation data, the sparse-representation methods were evaluated with linear programming and also with sparse methods of lower computational complexity, such as IRLS, OMP and LS-OMP. The results showed that the LS-OMP method performs equivalently to SPARSE and SPARSE-COF, qualifying it as a candidate for online energy estimation in LArg.
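As context for the greedy methods named above, here is a generic Orthogonal Matching Pursuit applied to a toy version of the problem: overlapping deposits are modeled as a sparse combination of time-shifted pulse shapes, and OMP alternates between selecting the best-matching shift and re-fitting the selected amplitudes by least squares. This is a minimal sketch under invented pulse-shape and deposit assumptions, not the thesis implementation:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: sparse x with y ~ D @ x, at most k nonzeros."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))      # best-matching column
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef                          # update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy dictionary: one pulse shape shifted to each possible deposit time.
pulse = np.array([0.0, 0.3, 1.0, 0.7, 0.3, 0.1])
n = 12
D = np.zeros((n, n))
for s in range(n):
    seg = pulse[: n - s]
    D[s : s + len(seg), s] = seg

true = np.zeros(n); true[2], true[5] = 50.0, 20.0   # two overlapping deposits
y = D @ true + 0.5 * np.random.default_rng(1).standard_normal(n)
print(np.round(omp(D, y, 2), 1))   # recovers energies near indices 2 and 5
```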
8

Imagerie par rayons X résolue en énergie : Méthodes de décomposition en base de matériaux adaptées à des détecteurs spectrométriques / Energy-resolved X-ray Imaging : Material decomposition methods adapted for spectrometric detectors

Potop, Alexandra-Iulia 02 October 2014 (has links)
Conventional X-ray imaging systems use scintillator-based detectors operating in energy-integrating mode. The new generation of semiconductor detectors based on CdTe/CdZnTe makes it possible to count the number of photons incident on the detector and to measure their energy. The LDET laboratory (CEA LETI) has developed pixelated CdTe-based spectrometric detectors for X-ray imaging, associated with a fast readout circuit that allows operation at high count rates while maintaining good energy resolution. This thesis contributes to the processing of data acquired with these energy-resolved detectors for the quantification of material constituents in radiography and tomography, with osteodensitometry as the chosen medical application. Radiographic simulations were performed that take into account the imperfections of the detection system, such as charge sharing and pile-up. The data-processing methods studied are based on basis material decomposition, a data-reduction technique that models the linear attenuation coefficient of a material as a linear combination of the attenuation functions of two basis materials. Two approaches, both relying on a calibration step, were adapted for our application. The first is an adaptation of the standard polynomial approach, applied to two and three energy channels; the channel thresholds were optimized to find the best configuration of energy bins, and a study of the number of channels established the limits of the polynomial formulation. To exploit the potential of the new detectors further, a statistical approach developed in our laboratory, which generalizes to a large number of channels (100, for example), was adapted for basis material decomposition. The two approaches were compared using performance criteria such as noise and accuracy in the estimation of the traversed material lengths. Validation of both approaches on experimental radiographic data, acquired in our laboratory with spectrometric detectors, showed good quantification of the material constituents, in agreement with the simulation results.
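Basis material decomposition as summarized above reduces, after linearizing the Beer-Lambert law, to solving for equivalent basis-material thicknesses from the per-bin log-attenuations. A minimal noiseless sketch with illustrative attenuation coefficients; this plain least-squares solver stands in for the calibration-based polynomial and statistical approaches actually studied:

```python
import numpy as np

# Per-bin linear attenuation coefficients of two basis materials [1/cm] (assumed).
mu1 = np.array([0.60, 0.40, 0.30, 0.25])
mu2 = np.array([1.80, 0.90, 0.50, 0.35])

t_true = np.array([2.0, 0.5])   # traversed thicknesses of each basis material [cm]
I0 = 1e5                        # incident counts per bin
counts = I0 * np.exp(-(t_true[0] * mu1 + t_true[1] * mu2))   # Beer-Lambert

# Linearize: -log(counts/I0) = t1*mu1 + t2*mu2, then least squares over the bins.
A = np.column_stack([mu1, mu2])
b = -np.log(counts / I0)
t_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(t_hat, 3))       # recovers [2.0, 0.5]
```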
9

Small Scale Plasticity With Confinement and Interfacial Effects

Habibzadeh, Pouya 15 February 2016 (has links)
The mechanical properties of crystalline metals are strongly affected when the sample size is limited to the micron or sub-micron scale: at these scales the mechanical properties are enhanced far beyond classical predictions, and the surface-to-volume ratio increases significantly, so surfaces and interfaces play a major role in the mechanical behavior of such micro-samples. The effect of different interfaces on the mechanical properties of micro-samples is not yet well understood. The aim of this project is to characterize, understand and predict the effect of confinement on deformation mechanisms at the micro-scale. In this study, micro-pillars were fabricated by Focused Ion Beam (FIB) milling and homogeneously coated with thin films by magnetron sputtering and cathodic arc deposition. The mechanical properties of carbon-coated, chromium-coated, uncoated, annealed and non-annealed micro-pillars were measured. The results of micro-compression tests and Automated Crystal Orientation Mapping in transmission electron microscopy (ACOM-TEM) were then compared, leading to some surprising new findings. In the deformed samples, dislocations are blocked by amorphous and even crystalline coatings. Parallel slip systems were detected in the chromium layer and the copper micro-pillar; even though the chromium layer has parallel slip systems, dislocation pile-up at the interface was found after deformation. The most significant finding of this study concerns the back stress of the dislocation pile-up, which acts on the dislocation sources and raises the flow stress required to generate new dislocations from them. Thermal annealing increases the strength and flow stress of FIB-fabricated micro-samples: the annealing treatment restores the lattice damaged by the FIB fabrication process, and a higher stress is required to initiate dislocation nucleation in a pristine lattice. Techniques of fabrication and investigation were developed to study the role of confinement and interfaces in the mechanical properties of materials at the micro-scale; the deformation mechanisms were unraveled and a better understanding of the key parameters was reached. / Doctorat en Sciences de l'ingénieur et technologie
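The back-stress finding above can be put in numbers with the classical single-ended pile-up model, in which n dislocations pressed against an interface raise the stress at the head of the pile-up to roughly n times the applied resolved shear stress. A minimal sketch with typical values for copper; this textbook estimate is an illustration, not the thesis's analysis:

```python
import numpy as np

G   = 48e9      # shear modulus of copper [Pa]
b   = 0.256e-9  # Burgers vector [m]
nu  = 0.34      # Poisson ratio
tau = 50e6      # applied resolved shear stress [Pa] (assumed)
L   = 1e-6      # pile-up length, e.g. source-to-interface distance [m] (assumed)

# Eshelby-Frank-Nabarro: number of edge dislocations in a single-ended pile-up.
n = np.pi * (1 - nu) * tau * L / (G * b)
print(f"n ~ {n:.1f} dislocations")
print(f"stress at pile-up head ~ {n * tau / 1e6:.0f} MPa")   # amplified n-fold
```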
