212

Essays on Birnbaum-Saunders models

Santos, Helton Saulo Bezerra dos January 2013 (has links)
In this thesis, we present three different applications of Birnbaum-Saunders models. In Chapter 2, we introduce a new nonparametric kernel method for estimating asymmetric densities, based on generalized skew-Birnbaum-Saunders distributions. Kernels based on these distributions have the advantage of providing flexibility in the asymmetry and kurtosis levels. In addition, the generalized skew-Birnbaum-Saunders kernel density estimators are free of boundary bias and achieve the optimal rate of convergence for the mean integrated squared error of nonnegative asymmetric kernel density estimators. We carry out a data analysis consisting of two parts. First, we conduct a Monte Carlo simulation study to evaluate the performance of the proposed method. Second, we use this method to estimate the density of three real air-pollutant concentration data sets; the numerical results favor the proposed nonparametric estimators. In Chapter 3, we propose a new family of autoregressive conditional duration models based on scale-mixture Birnbaum-Saunders (SBS) distributions. The Birnbaum-Saunders (BS) distribution has recently received considerable attention due to its good properties. An extension of this distribution is the class of SBS distributions, which (i) inherits several of the good properties of the BS distribution, (ii) allows maximum likelihood estimation to be carried out efficiently via the EM algorithm, and (iii) yields a robust estimation procedure, among other properties. The autoregressive conditional duration model is the primary family of models for analyzing high-frequency financial transaction duration data. The methodology studied here includes parameter estimation by the EM algorithm, inference on these parameters, a predictive model, and a residual analysis. We carry out Monte Carlo simulations to evaluate the performance of the proposed methodology, and we assess its practical usefulness using real transaction data from the New York Stock Exchange. Chapter 4 deals with process capability indices (PCIs), tools widely used by companies to determine the quality of a product and to evaluate the performance of their production processes. These indices were developed for processes whose quality characteristic follows a normal distribution; in practice, many such characteristics do not. In that case, the PCIs must be modified to account for the non-normality, since unmodified PCIs can lead to inadequate results. To establish quality policies that resolve this inadequacy, data transformations have been proposed, as well as the use of quantiles of non-normal distributions. An asymmetric non-normal distribution that has become very popular in recent times is the BS distribution. We propose, develop, implement, and apply a methodology based on PCIs for the BS distribution, and we carry out a simulation study to evaluate its performance. The methodology is implemented in the noncommercial, open-source statistical software R. We apply it to a real data set to illustrate its flexibility and potential.
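As a concrete illustration of the Chapter 2 construction, the sketch below implements an asymmetric kernel density estimator whose kernel is the ordinary (symmetric-case) Birnbaum-Saunders density, with the shape parameter playing the role of the bandwidth and the scale parameter tied to the evaluation point. This is a minimal version of the idea: the thesis's generalized skew-BS kernels are more flexible, and the sample data here are synthetic.

```python
import numpy as np

def bs_pdf(t, alpha, beta):
    """Birnbaum-Saunders density with shape alpha and scale beta (t > 0)."""
    t = np.asarray(t, dtype=float)
    z = (np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha        # standardizing transform
    jac = (t + beta) / (2.0 * alpha * np.sqrt(beta) * t**1.5)  # dz/dt
    return jac * np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def bs_kernel_density(x_grid, sample, h):
    """Asymmetric-kernel estimate: at each x, average BS densities whose
    scale is tied to x, so the kernel support is (0, inf) and no mass
    leaks across the boundary at zero."""
    return np.array([bs_pdf(sample, alpha=np.sqrt(h), beta=x).mean()
                     for x in np.asarray(x_grid, dtype=float)])

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=500)  # synthetic positive, skewed data
grid = np.linspace(0.05, 5.0, 200)
fhat = bs_kernel_density(grid, data, h=0.05)
print(f"mass on grid ~ {fhat.sum() * (grid[1] - grid[0]):.3f}")  # should be near 1
```

Tying the kernel's scale to the evaluation point is what makes the estimator boundary-bias free on (0, ∞), the property the abstract highlights.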
213

Modelling and simulation of physics processes for in-beam imaging in hadrontherapy

Pinto, Marco 19 December 2014 (has links)
Hadrontherapy is taking an increasingly important role in radiotherapy thanks to the ballistic properties of ions and, for ions heavier than protons, an enhanced relative biological effectiveness in the tumour region. These features allow a higher conformality of the delivered dose to the tumour volume and make it possible to treat radioresistant tumours. However, they also make the ion range highly sensitive to treatment uncertainties, notably to morphological changes along the beam path. In view of this, the detection of secondary radiation emitted after nuclear interactions between the incoming ions and the patient has long been proposed as an ion-range probe; positron emitters and prompt gammas, in particular, have been the subject of intensive research in recent years. The European training network ENTERVISION, supported by the ENLIGHT community, was created at the end of 2009 to develop such imaging techniques and, more generally, to address treatment uncertainties in hadrontherapy. The present work, entitled "Modelling and simulation of physics processes for in-beam imaging in hadrontherapy", is one of many resulting from this project. Despite the breadth of the topic, the guiding thread was a systematic study towards the clinical implementation of a prompt-gamma imaging device usable for both proton and carbon-ion treatments.
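The range sensitivity that motivates in-beam imaging can be illustrated with a deliberately crude continuous-slowing-down calculation. The 1/E stopping-power shape and every number below are toy assumptions chosen for illustration, not values from the thesis:

```python
import numpy as np

def csda_range(e0_mev, k):
    """Continuous-slowing-down range R = integral_0^E0 dE / S(E) for a toy
    stopping power S(E) = k / E (Bethe-like growth at low energy); in this
    toy the integral equals E0**2 / (2k), evaluated here numerically."""
    e = np.linspace(0.0, e0_mev, 10_000)
    return np.sum(e / k) * (e[1] - e[0])   # 1/S(E) = E/k, simple Riemann sum

k_nominal = 500.0                           # toy medium constant, MeV^2/cm
for shift in (-0.03, 0.0, 0.03):            # +/-3% density-like uncertainty
    r = csda_range(160.0, k_nominal * (1.0 + shift))
    print(f"stopping shift {shift:+.0%}: range = {r:.2f} cm")
```

Even in this toy, a few percent of uncertainty on the stopping medium moves the range by several millimetres, which is why an independent range probe such as prompt-gamma imaging is attractive.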
214

Context-aware radiation protection for the hybrid operating room

Loy Rodas, Nicolas 19 February 2018 (has links)
The use of X-ray imaging during minimally invasive procedures exposes both patients and medical staff to ionizing radiation. Even if the dose absorbed during a single procedure can be low, long-term exposure can lead to noxious effects (e.g. cancer). In this thesis, we propose methods to improve overall radiation safety in the hybrid operating room along two complementary directions. First, we propose approaches to make clinicians more aware of exposure by providing in-situ visual feedback on the ongoing radiation dose by means of augmented reality. Second, we act on the X-ray device positioning with an optimization approach that recommends an angulation reducing the dose delivered to both patient and staff while maintaining the clinical quality of the resulting image. Both applications rely on new approaches for perceiving the room with RGBD cameras and for simulating the propagation of radiation and the deposited dose in real time.
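A minimal sketch of the second contribution's logic — choosing an imager pose that trades staff dose against image quality — is given below. Both scoring functions are invented placeholders standing in for the thesis's simulated dose maps and clinical quality model:

```python
import numpy as np

def staff_dose(angle_deg):
    """Toy dose proxy: pretend staff dose peaks when the tube faces the operator."""
    return 1.0 + np.cos(np.radians(angle_deg - 40.0))

def image_quality(angle_deg):
    """Toy quality score: pretend quality peaks at an ideal angulation of 20 deg."""
    return np.exp(-((angle_deg - 20.0) / 30.0) ** 2)

angles = np.arange(-90.0, 91.0, 1.0)
quality_floor = 0.8                    # keep only clinically acceptable poses
feasible = angles[image_quality(angles) >= quality_floor]
best = feasible[np.argmin(staff_dose(feasible))]
print(f"recommended angulation: {best:.0f} deg, "
      f"dose proxy {staff_dose(best):.2f}, quality {image_quality(best):.2f}")
```

Constraining quality first and then minimizing dose mirrors the abstract's requirement that the recommended pose "maintain the clinical quality of the outcome image".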
215

Study of correlations of heavy quarks in heavy ion collisions and their role in understanding the mechanisms of energy loss in the quark gluon plasma

Rohrmoser, Martin 05 April 2017 (has links)
Context: Quantum chromodynamics (QCD), the theory of the strong interaction, predicts a new state of matter, the quark-gluon plasma (QGP), in which its fundamental degrees of freedom, the quarks and gluons, behave quasi-freely. The required high temperatures and/or particle densities are expected in the early universe and in neutron stars, but have lately become accessible through highly energetic collisions of heavy nuclei. These experiments commonly study the QGP by detecting hard probes, i.e. highly energetic particles, most notably heavy quarks, that traverse the medium. The mechanisms of their energy loss in the QGP are not yet completely understood; in particular, the loss is attributed to medium-induced radiation, to 2-to-2 particle scattering, or to combinations thereof.
Methods: In a theoretical, phenomenological approach to the search for new observables that can discriminate between these collisional and radiative energy-loss mechanisms, a Monte Carlo algorithm was implemented that simulates the formation of particle cascades from an initial particle. To treat the medium, different types of QGP-jet interactions, corresponding to collisional and/or radiative energy loss, were introduced. Correlations between pairs of final cascade particles, one of which represents a heavy trigger quark, were investigated as a means to differentiate between these models.
Findings: The dependence of the angular opening of two-particle correlations on particle energy may provide a means to disentangle the collisional and radiative mechanisms of in-medium energy loss.
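The kind of observable studied can be sketched with a toy in-vacuum splitting cascade (invented kinematics, no medium model): softer final particles accumulate larger angular deviations from the hard trigger, which is the qualitative energy dependence the analysis exploits.

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade(energy, angle, e_min=2.0):
    """Toy 2D shower: every particle above e_min splits into two daughters
    that share its energy and deviate by a small random angle that grows
    as the energy degrades.  Purely illustrative kinematics."""
    if energy < e_min:
        return [(energy, angle)]
    z = rng.uniform(0.2, 0.8)                         # energy-sharing fraction
    d_theta = rng.normal(0.0, 0.1) / np.sqrt(energy)  # harder -> more collinear
    return (cascade(z * energy, angle + d_theta)
            + cascade((1.0 - z) * energy, angle - d_theta))

openings, partner_energy = [], []
for _ in range(2000):
    finals = cascade(100.0, 0.0)
    trigger = max(finals)                             # hardest particle = trigger
    openings += [abs(th - trigger[1]) for e, th in finals if (e, th) != trigger]
    partner_energy += [e for e, th in finals if (e, th) != trigger]

openings = np.array(openings)
soft = np.array(partner_energy) < np.median(partner_energy)
print(f"mean opening angle, soft partners: {openings[soft].mean():.4f} rad")
print(f"mean opening angle, hard partners: {openings[~soft].mean():.4f} rad")
```

In the thesis the interesting question is how medium interactions (collisional vs. radiative) modify this energy dependence; the toy above only sets up the vacuum baseline.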
216

Study of the ordering trends in metallic nanoalloys from their electronic structure

Andriamiharintsoa, Tsiky Hasiniaina 14 December 2016 (has links)
The purpose of this thesis is to determine, using the tight-binding formalism, the link between the atomic, chemical and electronic structures of nanoalloys, focusing on two archetypal systems: one with a strong ordering tendency (cobalt-platinum, CoPt) and one with a strong tendency to phase separation (iridium-palladium, IrPd). For both CoPt and IrPd, the evolution of the local densities of states (LDOS) as a function of site coordination (structural effect), chemical environment (alloying effect) and system size has been analyzed in detail. CoPt and IrPd behave in the same way with respect to the d-band shifts, which is explained by a rule of charge conservation per species, per site and per orbital between the mixed systems and the corresponding pure systems. In pure Ir and Pd nanoparticles, the d-band centers vary linearly with the site coordination, independently of size. The same behavior is observed for IrPd nanoalloys, the corresponding line being only rigidly shifted with respect to that of the pure particles. This decoupling between structural and chemical effects, already observed for CoPt nanoalloys, is generalized here, since it applies regardless of the system's tendency to order or to phase separate.
Concerning the chemical tendency, CoPt remains a system with an ordering tendency whatever the atomic configuration. Likewise, IrPd retains its tendency to phase separation over the whole range of studied configurations and sizes, although less clearly so in the dilute limit. We therefore investigated dilute systems based on AuNi more closely. In this case, a change of trend is observed, going from phase separation for concentrated systems to an ordering tendency for dilute systems, including thin surface layers. Complementary structural studies were performed using Monte Carlo simulations, first on a rigid lattice and then including atomic displacements. They show that IrPd nanoparticles adopt a core-shell structure with strong Pd segregation at the surface and a generally off-centered core, despite the very small atomic-size mismatch between Pd and Ir atoms.
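The reported linear-like variation of the d-band center with coordination can be illustrated in the rectangular-band, second-moment approximation with local charge neutrality. The sketch below uses one orbital per site and schematic parameters, unlike the full spd tight-binding model of the thesis:

```python
import numpy as np

def d_band_center(z, n_d=8.0, t=1.0, e_fermi=0.0):
    """d-band center in a rectangular-band, second-moment picture: the
    local bandwidth is W = 2*t*sqrt(Z), and local charge neutrality pins
    the center so the site keeps n_d of its 10 d electrons below E_F."""
    w = 2.0 * t * np.sqrt(z)
    return e_fermi + w * (0.5 - n_d / 10.0)

for z in (4, 6, 8, 9, 12):   # vertex, edge, (100) face, (111) face, bulk sites
    print(f"Z = {z:2d}:  eps_d = {d_band_center(z):+.3f} (arb. units)")
```

Under-coordinated sites have a narrower local band, so for a more-than-half-filled d band the center moves up toward the Fermi level; over the coordination range of a nanoparticle this variation is close to linear, consistent with the trend the abstract reports.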
217

Modeling and characterization of a SPECT system with pinhole collimation for the imaging of small animals

Auer, Benjamin 07 March 2017 (has links)
This thesis centers on the development of several quantitative reconstruction methods dedicated to small-animal single-photon emission computed tomography (SPECT). To this end, a rigorous Monte Carlo model of the acquisition process of the available four-head pinhole SPECT system was built and validated. The system-matrix modeling, combined with the OS-EM iterative reconstruction algorithm, enabled the system's performance to be characterized and compared with the state of the art: a sensitivity of about 0.027% at the center of the field of view and a tomographic spatial resolution of 0.87 mm were obtained. The major limitations of Monte Carlo methods led us to develop an efficient, simplified matrix generation of the physical effects occurring in the subject. My approach, based on a decomposition of the system matrix associated with a pre-calculated scatter database, achieved a computation time acceptable for daily follow-up (1 h), leading to personalized image reconstruction. The approximations inherent in this approach have a moderate impact on the recovery coefficients, for which a correction of about 10% was achieved.
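For reference, the OS-EM update applied on top of the Monte Carlo-built system matrix has a compact form. The sketch below runs it on a small synthetic system matrix; the real matrix, geometry and data are of course those of the modeled pinhole system:

```python
import numpy as np

rng = np.random.default_rng(0)

def os_em(a, y, n_iter=20, n_subsets=4, eps=1e-12):
    """Ordered-subsets EM for emission data y ~ Poisson(A x): each
    sub-iteration applies the multiplicative ML-EM update using one
    interleaved subset of projection bins (rows of the system matrix)."""
    x = np.ones(a.shape[1])
    for _ in range(n_iter):
        for s in range(n_subsets):
            a_s = a[s::n_subsets]                      # this subset's rows
            ratio = y[s::n_subsets] / (a_s @ x + eps)  # measured / modelled
            x *= (a_s.T @ ratio) / (a_s.sum(axis=0) + eps)
    return x

# Tiny synthetic problem standing in for the Monte Carlo-built matrix.
n_bins, n_voxels = 240, 60
a = rng.uniform(size=(n_bins, n_voxels)) * (rng.random((n_bins, n_voxels)) < 0.1)
x_true = rng.uniform(0.0, 5.0, n_voxels)
y = rng.poisson(a @ x_true).astype(float)
x_hat = os_em(a, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.2f}")
```

Cycling through subsets gives OS-EM its well-known speed-up over plain ML-EM: each full pass over the data applies several multiplicative updates instead of one.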
218

Interactions and structures in highly concentrated solutions of globular proteins: study of lysozyme and ovalbumin

Pasquier, Coralie 16 December 2014 (has links)
Concentrated protein phases are the subject of numerous studies aiming to identify and characterize the interactions and phase transitions at play, drawing on the large body of knowledge acquired on concentrated colloidal phases. These concentrated protein phases are, moreover, of great importance in fields as varied as the food industry, the pharmaceutical industry and medicine. Establishing equations of state relating the osmotic pressure (Π) to the volume fraction (Φ) is an efficient way of characterizing the interactions between the components of a system. We applied this method to solutions of two globular proteins, lysozyme and ovalbumin, spanning volume fractions ranging from a dilute phase (Φ < 0.01) to a concentrated, solid phase (Φ > 0.62). The resulting equations of state, coupled with other techniques (SAXS, numerical simulations), revealed very different behaviors of the two proteins upon concentration and showed their complexity in comparison with model colloids. Relating the equations of state to the interfacial behavior of these two proteins showed points of convergence and allowed us to formulate a new hypothesis explaining some observations on protein adsorption at the air-water interface.
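As a worked baseline, the hard-sphere (Carnahan-Starling) equation of state gives the Π(Φ) curve that model colloids follow and against which protein data can be contrasted. The lysozyme-like radius below is a commonly quoted literature value, used here as an assumption:

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def osmotic_pressure_hs(phi, radius_m, temperature=293.15):
    """Carnahan-Starling osmotic pressure of hard spheres:
    Pi = rho * kT * (1 + phi + phi**2 - phi**3) / (1 - phi)**3,
    the standard colloid reference for Pi(phi) comparisons."""
    v_particle = 4.0 / 3.0 * np.pi * radius_m**3
    rho = phi / v_particle                            # number density, m^-3
    z = (1.0 + phi + phi**2 - phi**3) / (1.0 - phi) ** 3
    return rho * K_B * temperature * z                # Pa

for phi in (0.01, 0.1, 0.3, 0.5):
    print(f"phi = {phi:.2f}:  Pi ~ {osmotic_pressure_hs(phi, 1.9e-9):.3e} Pa")
```

Deviations of measured protein Π(Φ) curves from such a hard-sphere reference are what signal attractive or repulsive protein-protein interactions.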
219

Charged particle diagnostics for PETAL, calibration of the detectors and development of the demonstrator

Rabhi, Nesrine 06 December 2016 (has links)
In order to protect their detection systems against the giant electromagnetic pulse generated by the interaction of the PETAL laser with its target, the PETAL diagnostics will be equipped with passive detectors. For the SESAME and SEPAGE systems, a combination of imaging-plate (IP) detectors with protection layers of high-Z materials will be used, which 1) ensures a detector response that is independent of the nearby mechanical environment in the diagnostics, and hence homogeneous over the whole detection surface, and 2) shields the detectors against high-energy photons produced in the PETAL target. In this work, calibration experiments of such IP-based detectors were performed at electron, proton and ion facilities, with the goal of covering the kinetic-energy range of charged-particle detection at PETAL, from 0.1 to 200 MeV. The introduction presents the methods and tools used in this study. The second chapter presents the results of two experiments performed with electrons in the kinetic-energy range from 5 to 180 MeV. The third chapter describes an experiment, and its results, in which protons between 80 and 200 MeV were sent onto our detectors. The fourth chapter is dedicated to an experiment with protons and ions at proton energies from 1 to 22 MeV, which aimed at studying the detector responses and testing the demonstrator of the SEPAGE diagnostic. We used the GEANT4 toolkit to analyze our data and to compute the detector responses over the whole energy range from 0.1 to 1000 MeV.
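Calibration shots like these are typically condensed into a response curve. The sketch below fits a power law to PSL-per-particle values on a log-log scale; the functional form and all numbers are synthetic stand-ins for illustration, not the measured PETAL responses:

```python
import numpy as np

# Synthetic calibration points standing in for measured imaging-plate
# responses (PSL per incident particle) at known beam energies.
rng = np.random.default_rng(2)
energy_mev = np.array([5.0, 20.0, 60.0, 120.0, 180.0])
psl_per_particle = 0.08 * energy_mev**-0.7 * (1.0 + 0.05 * rng.normal(size=5))

# Power-law fit: log(PSL) = slope * log(E) + intercept.
slope, intercept = np.polyfit(np.log(energy_mev), np.log(psl_per_particle), 1)

def response(e_mev):
    """Interpolated/extrapolated IP response from the log-log linear fit."""
    return np.exp(intercept) * e_mev**slope

print(f"fitted power law: PSL = {np.exp(intercept):.3f} * E^{slope:.2f}")
print(f"predicted response at 100 MeV: {response(100.0):.4f} PSL per particle")
```

A fitted curve of this kind is what lets a simulation toolkit such as GEANT4 extend a handful of calibration points to the full 0.1-1000 MeV detection range.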
220

Decision-support methodology for a sustainable management of natural hazard risk under uncertainty

Edjossan-Sossou, Abla Mimi 14 December 2015 (has links)
Natural hazard risk management is a major strategic challenge for territorial authorities because of the potential adverse effects such risks can have on their development. With a view to managing these risks sustainably, developing multicriteria decision-support methods and tools to evaluate the sustainability of risk management strategies is an interesting and topical research subject. The main underlying scientific challenges are to define a theoretical framework for assessing this sustainability and to take into account the inherent uncertainties arising from various sources (input data, methodological choices, dynamics of the context, etc.) that can influence the relevance of the assessment results and hence the decision. There is therefore a need for a methodology for handling uncertainties in the decision-making process, so as to provide decision-makers with the most relevant possible results.
This research introduces an overall decision-support methodology for assessing the sustainability of risk management strategies. It relies on the concept of sustainable development and includes a set of criteria and indicators covering the technical, economic, societal, environmental and institutional outcomes of the strategies. Uncertainties are quantified using a probabilistic (Monte Carlo simulation) or possibilistic (possibility theory) approach and propagated along the evaluation process through interval arithmetic. In addition, a computational tool was designed to simulate, in either a deterministic or an uncertain way, various types of flood damage at the municipality scale. These contributions were applied to a case study on flood risk management in Dieulouard, comparing three management strategies: respecting the constructive constraints imposed on new buildings in hazard-prone areas by the flood risk prevention plan; building a dyke as a collective defence infrastructure; and implementing individual protective measures for all buildings in hazard-prone areas. This application demonstrates the practicality of the methodology and highlights prospects for future work.
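A minimal sketch of the two propagation routes the methodology combines — interval arithmetic for bounds and Monte Carlo sampling for distributions — applied to a toy damage expression. The three-factor damage model and its bounds are invented for illustration, not the thesis's indicators:

```python
import numpy as np

class Interval:
    """Closed interval with the arithmetic needed to propagate bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.1f}, {self.hi:.1f}]"

# Toy damage model: damage = depth_factor * exposed_value * vulnerability.
depth = Interval(0.4, 0.7)
value = Interval(800.0, 1200.0)
vuln = Interval(0.2, 0.5)
print("interval propagation:", depth * value * vuln)

# Monte Carlo route: sample the same bounds uniformly and read off quantiles.
rng = np.random.default_rng(3)
n = 100_000
samples = (rng.uniform(0.4, 0.7, n) * rng.uniform(800.0, 1200.0, n)
           * rng.uniform(0.2, 0.5, n))
print(f"Monte Carlo 5th-95th percentiles: "
      f"[{np.percentile(samples, 5):.1f}, {np.percentile(samples, 95):.1f}]")
```

The interval result brackets the worst cases, while the Monte Carlo quantiles describe what is likely; presenting both is precisely the kind of uncertainty reporting the methodology aims to give decision-makers.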
