  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Imagerie multi-paramètres et multi-résolutions pour l'observation et la caractérisation des mécanismes de glissements-coulées / Multi-parameter and multi-resolution imaging for observing and characterizing the mechanisms of flow-like landslides

Travelletti, Julien 18 October 2011 (has links) (PDF)
The controlling factors (predisposition, triggering) of landslides, their mechanisms and their kinematic behaviour can be extremely heterogeneous and interact over very different time constants. Many gaps remain in assessing the hazard associated with these processes, in particular for slow-moving landslides that can evolve into rapid flows. Slide-flows indeed exhibit highly variable behaviour in time and space, characterized by a wide range of displacement velocities (from less than one centimetre per day to several metres per day) and by discontinuities and strong heterogeneities in the distribution of their petrophysical properties. Conventional observation and investigation techniques provide point-wise information that is not sufficient to quantify this spatial and temporal variability. One consequence is that landslide numerical models are often calibrated and validated against only a limited number of in-situ observations. Recent developments in multi-parameter geophysical imaging have advanced the direct and indirect acquisition of data on deformation (photogrammetry, image correlation, laser scanning) and on petrophysical parameters (electrical resistivity tomography and seismic refraction). Although their accuracy is often lower than that of conventional techniques, they have the advantage of providing multi-scale, spatially distributed information. Combining this spatial information with that obtained by conventional techniques (GPS, extensometry, piezometry, geotechnics) and integrating it into a coherent conceptual model for numerical modelling remains a major difficulty. 
Scientific work was undertaken combining several approaches based on field observations (geomorphology, geology) and instrumental data (hydrogeophysics, photogrammetry, laser scanning). The objective is to develop methodologies for spatially determining the major characteristics of slide-flows (internal structure, hydrological behaviour, kinematic behaviour, deformation mechanism). A conceptual model of their functioning and numerical models of the hydro-mechanical behaviour of slide-flows, built with the Z-Soil and Slow-Mov codes, are proposed. The study sites selected for this analysis are the marly slide-flows of Super-Sauze and La Valette in the Barcelonnette Basin (Alpes-de-Haute-Provence). The research is presented in four parts whose objectives are: 1. To characterize the internal structure and 3D geometry of slide-flows using geophysical, geotechnical and geomorphological data; 2. To propose a conceptual hydrological model of the unsaturated zone through temporal and spatial monitoring of electrical resistivities and analysis of piezometric data; 3. To characterize the kinematics of slide-flows from terrestrial remote-sensing platforms (photogrammetry and laser scanning) combined with airborne platforms and differential-GPS surveys; 4. To identify thresholds of changes in hydrological and kinematic regimes through numerical modelling. The results highlight the broad range of applications of the main imaging techniques for investigating slide-flows and help define their limits of use. Future studies should extend these methodologies towards operational monitoring and improved hazard-prediction capabilities.
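The image-correlation step mentioned in the abstract (tracking surface displacement between repeated acquisitions) can be sketched as a normalized cross-correlation search. This is an illustrative minimal version, not the thesis' actual processing chain; all names and parameters are hypothetical.

```python
import numpy as np

def estimate_shift(ref, cur, patch, search):
    """Estimate the 2D displacement of a patch between two images by
    normalized cross-correlation (a minimal stand-in for the image
    correlation used to track landslide surface motion).
    ref, cur : 2D arrays; patch : (r0, r1, c0, c1) bounds in ref;
    search   : maximum displacement tested, in pixels."""
    r0, r1, c0, c1 = patch
    t = ref[r0:r1, c0:c1].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)       # zero-mean, unit-variance template
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            w = cur[r0 + dr:r1 + dr, c0 + dc:c1 + dc].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = (t * w).mean()               # correlation coefficient
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc
```

In practice such a search is run for many patches over the landslide surface, yielding a displacement field between two acquisition dates.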
362

Efficient ranging-sensor navigation methods for indoor aircraft

Sobers, David Michael, Jr. 09 July 2010 (has links)
Unmanned Aerial Vehicles are often used for reconnaissance, search and rescue, damage assessment, exploration, and other tasks that are dangerous or prohibitively difficult for humans to perform. Often, these tasks include traversing indoor environments where radio links are unreliable, hindering the use of remote pilot links or ground-based control, and effectively eliminating Global Positioning System (GPS) signals as a potential localization method. As a result, any vehicle capable of indoor flight must be able to stabilize itself and perform all guidance, navigation, and control tasks without dependence on a radio link, which may be available only intermittently. Since the availability of GPS signals in unknown environments is not assured, other sensors must be used to provide position information relative to the environment. This research covers a description of different ranging sensors and methods for incorporating them into the overall guidance, navigation, and control system of a flying vehicle. Various sensors are analyzed to determine their performance characteristics and suitability for indoor navigation, including sonar, infrared range sensors, and a scanning laser rangefinder. Each type of range sensor tested has its own unique characteristics and contributes in a slightly different way to effectively eliminate the dependence on GPS. The use of low-cost range sensors on an inexpensive passively stabilized coaxial helicopter for drift-tolerant indoor navigation is demonstrated through simulation and flight test. In addition, a higher fidelity scanning laser rangefinder is simulated with an Inertial Measurement Unit (IMU) onboard a quadrotor helicopter to enable active stabilization and position control. Two different navigation algorithms that utilize a scanning laser and techniques borrowed from Simultaneous Localization and Mapping (SLAM) are evaluated for use with an IMU-stabilized flying vehicle. 
Simulation and experimental results are presented for each of the navigation systems.
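The state-estimation core of range-based indoor navigation (fusing noisy range readings into a stable position estimate for control) can be illustrated by a minimal 1D Kalman filter. This is a generic sketch with hypothetical noise parameters, not the thesis' navigation algorithm.

```python
import numpy as np

def kalman_1d(z, dt=0.02, q=0.5, r=0.04):
    """Minimal 1D constant-velocity Kalman filter over noisy range
    readings z, returning the filtered position estimates.
    dt : sample period; q, r : process/measurement noise (illustrative)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity transition
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise covariance
    R = np.array([[r]])                            # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)
    out = []
    for zk in z:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        y = zk - H @ x                             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + (K @ y).ravel()                    # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

A real implementation would run one such filter per axis, replacing the position measurement with the SLAM-derived pose when the scanning laser is available.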
363

Design, Fabrication And Testing Of A Versatile And Low-Cost Diffuse Optical Tomographic Imaging System

Padmaram, R 05 1900 (has links)
This thesis reports work done towards the design and fabrication of a versatile, low-cost frequency-domain DOT (Diffuse Optical Tomography) imager. A design that uses only a single fiber for the source and a single fiber bundle for the detector is reported. From near the source to diametrically opposite the source, the detected intensity of scattered light varies by three to four orders of magnitude, depending on the tissue/phantom absorption and scattering properties. The photomultiplier tube's (PMT's) gain is controlled to operate it in its linear range, thus increasing the dynamic range of detection. Increasing the dynamic range through multi-channel data acquisition is also presented. Arresting the oscillations of the stepper motor by negative-torque braking is also adopted in this application to increase the speed of data acquisition. The finite element method (FEM) for obtaining the photon-density solution of the transport equation and the model-based iterative image reconstruction (MoBIIR) algorithm are developed for verifying the experimental prototype. Simulation studies presented towards the end of the thesis provide insight into the nature of the measurements. The optical-absorption images reconstructed from simulation verified the implementation of the reconstruction method for further reconstructions from data gathered with the developed imager. A single iteration of MoBIIR to segment the region of interest (ROI) using a homogeneous measurement estimate is presented. Using this single-iteration MoBIIR to obtain a more accurate starting value for the optical absorption coefficient, reconstruction results for data from a tissue-mimicking solid epoxy-resin phantom with a single inhomogeneity inclusion are also presented to demonstrate the imager prototype.
364

Scanner data and the construction of price indices.

Ivancic, Lorraine, Economics, Australian School of Business, UNSW January 2007 (has links)
This thesis explores whether scanner data can be used to inform Consumer Price Index (CPI) construction, with particular reference to substitution bias and the choice of aggregation dimensions. The potential costs and benefits of using scanner data are reviewed. Existing estimates of substitution bias are found to show considerable variation. An Australian scanner data set is used to estimate substitution bias for six different aggregation methods and for fixed-base and superlative indexes. Direct and chained indexes are also calculated. Estimates of substitution bias are found to be highly sensitive both to the aggregation method and to whether direct or chained indexes are used. The ILO (2004) recommends the use of dissimilarity indexes to decide when to chain. This thesis provides the first empirical study of dissimilarity indexes in this context; the results indicate that they may not be sufficient to resolve the issue. A Constant Elasticity of Substitution (CES) index provides an approximate estimate of substitution-bias-free price change without the need for current-period expenditure weights; however, an elasticity parameter is needed. Two methods, referred to as the algebraic and econometric methods, were used to estimate this parameter. The econometric approach involved estimating a system of equations proposed by Diewert (2002a), which had not been estimated previously. The results show a relatively high level of substitution at the elementary aggregate level, which supports the use of a Jevons index, rather than the Carli or Dutot indexes, at this level. Elasticity parameter estimates were found to vary considerably across time, and statistical testing showed that they differed significantly across estimation methods. Aggregation is an extremely important issue in the compilation of the CPI. 
However, little information exists about 'appropriate' aggregation methods. Aggregation is typically recommended over 'homogeneous' units. A hedonic framework is used to test for item homogeneity across four supermarket chains and across all stores within each chain; this is a novel approach. The results suggest that treating the same good as homogeneous across stores belonging to the same chain may be recommended.
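The Jevons, Carli and Dutot elementary indexes compared in the abstract differ only in how matched price relatives are averaged. A minimal sketch (illustrative only, not the thesis' code; real CPI aggregation adds weights and chaining):

```python
import numpy as np

def elementary_indexes(p0, p1):
    """Three elementary price indexes over matched items.
    p0, p1 : base- and current-period prices of the same items."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    r = p1 / p0                            # price relatives
    jevons = np.exp(np.mean(np.log(r)))    # geometric mean of relatives
    carli = np.mean(r)                     # arithmetic mean of relatives
    dutot = np.mean(p1) / np.mean(p0)      # ratio of average prices
    return jevons, carli, dutot
```

By the AM-GM inequality the Carli index is never below the Jevons index, which is one reason the Jevons index is preferred when substitution is high.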
365

Evaluating garment size and fit for petite women using 3D body scanned anthropometric data

Phasha, Masejeng Marion 05 1900 (has links)
Research suggests that there is a plethora of information on the size and shape of average and plus-sized women in South Africa (Winks, 1990; Pandarum, 2009; Muthambi, 2012; Afolayan & Mastamet-Mason, 2013; Makhanya, 2015). However, there is very little information on petite women's body shapes, body measurements and shopping behaviour, especially in South Africa, for manufacturing ready-to-wear garments. The purpose of this study was to investigate the shapes and sizes of a sample of petite South African women and to develop size charts for the upper and lower body dimensions. The study used a mixed-method approach with purposive, non-probability sampling. A (TC)² NX16 3D full-body scanner and an Adam's® medical scale were used to collect body measurement data from 200 petite South African women, aged 20-54 years with an average height of 157 cm, residing in Gauteng (Pretoria and Johannesburg). Other data-collection instruments included a demographic questionnaire covering age, height, weight, etc., and a psychographic questionnaire gathering the subjects' perceptions of and preferences for currently available ready-to-wear shirt and trouser garments. Of the 200 subjects initially recruited, on the basis of a body height of 5'4" (163 cm) or below, the most prevalent body-shape profile that emerged from the dataset was the pear shape, evident in 180 of the 3D full-body-scanned subjects. The anthropometric data for these 180 subjects were therefore used to develop the experimental upper- and lower-body size charts and as the basis for the fit-test garments developed in this study. 
The collected data were analysed and interpreted in Microsoft Excel and IBM SPSS Statistics 24 (2016), using principal component analysis (PCA) to produce the experimental size charts for the upper and lower body dimensions needed to create prototype shirt and trouser garments. Regression analysis was used to establish the primary and secondary body dimensions for the size charts and to determine the size ranges. The experimental size charts cover sizes 6/30 to 26/50. The accuracy of the size charts was then evaluated by a panel of experts, who analysed the fit of prototype shirt and trouser garments, manufactured to the size 10/34 measurements from the size chart, on a sample of the petite subjects. The fit of these garments was also compared with that of garments manufactured using the 3D full-body-scanned measurements of a size 10/34 petite tailoring mannequin currently commercially available for producing petite women's garments in South Africa. The prototype garments developed from the size 10/34 size-chart measurements had, overall, a better quality of fit than the garments made to fit the commercially available size 10/34 mannequin. These findings confirm that the data extracted from the (TC)² NX16 3D full-body scanner, and the size charts developed from them, can provide better fit in garments for petite South African women than data published hitherto. On the evidence of this study, it is recommended that the South African garment-manufacturing industry revise the current sizing system for petite women to accommodate the body dimensions and shape variations that currently prevail among consumers. 
South African garment manufacturers and retailers also need to familiarise themselves with the needs, challenges and preferences of the petite consumer target market that purchases ready-to-wear shirt and trouser garments in South Africa. / Life and Consumer Sciences / M.ConSci. (Department of Life and Consumer Science)
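The PCA step used above to derive key body dimensions from the scan data can be sketched as a covariance eigendecomposition. This is a generic outline with hypothetical inputs, not the SPSS procedure used in the study.

```python
import numpy as np

def principal_components(X, k=2):
    """PCA via eigendecomposition of the covariance matrix.
    X : (n_subjects, n_measurements) array of body measurements.
    Returns the top-k component scores and their explained-variance
    fractions."""
    Xc = X - X.mean(axis=0)                  # centre each measurement
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]           # re-sort by descending variance
    vals, vecs = vals[order], vecs[:, order]
    scores = Xc @ vecs[:, :k]                # project subjects onto components
    explained = vals[:k] / vals.sum()
    return scores, explained
```

In a sizing study the leading components typically separate overall body size from shape, and the component scores can then drive clustering into size groups.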
366

Dysplasie ectodermique hypohidrotique : mise en évidence de nouveaux marqueurs phénotypiques crâniens et post-crâniens chez le mutant Tabby / Hypohidrotic ectodermal dysplasia : new phenotypic cranial and post-cranial skeletal markers in Tabby mice

Gros, Catherine-Isabelle 16 September 2013 (has links)
X-linked Hypohidrotic Ectodermal Dysplasia (XLHED) is a genetic disorder caused by a mutation of the EDA gene. The phenotype expressed by Tabby mice, the murine model of XLHED, is equivalent to that observed in humans, including dental anomalies, craniofacial and vertebral anomalies, and trabecular bone defects. In this context, a mapping of these anomalies in Tabby mice was needed, and the impact of the EdaTa mutation on cranial and post-cranial skeletal growth was studied. A two-year (112-week) longitudinal μCT follow-up of a cohort of Tabby mice (5 hemizygous EdaTa/Y males, 6 heterozygous EdaTa/+ females) and a wild-type group (n = 12) was performed. 
The observation of growth patterns and parameters showed relative cranial hypo-development, abnormal growth of the craniofacial complex and cranial base, and a relative growth deficit of the appendicular skeleton (femur and humerus) in hemizygous EdaTa/Y mice. These results highlight, for the first time, developmental abnormalities of the long bones and confirm the role of EDA-A in normal skeleton formation. Beyond enriching the phenotypic picture of this syndrome, these data are an essential prerequisite for testing, in a therapeutic perspective, the efficacy of attempts at phenotypic reversion using recombinant proteins.
367

Underwater 3D Surface Scanning using Structured Light

Törnblom, Nils January 2010 (has links)
In this thesis project, an underwater 3D scanner based on structured light has been constructed and developed. Two other scanners, based on stereoscopy and a line-swept laser, were also tested. The target application is examining objects inside the water-filled reactor vessel of nuclear power plants. Structured light systems (SLS) use a projector to illuminate the surface of the scanned object and a camera to capture the surface's reflection. By projecting a series of specific line patterns, the pixel columns of the digital projector can be identified on the scanned surface. 3D points can then be triangulated by ray-plane intersection, and these points form the basis of the final 3D model. To construct an accurate 3D model of the scanned surface, both the projector and the camera need to be calibrated; in the implemented scanner this was done using the Camera Calibration Toolbox for Matlab. The codebase of this scanner comes from the Matlab implementation by Lanman & Taubin at Brown University; the code has been modified and extended to meet the needs of this project. The effects of the underwater environment have been examined both theoretically and experimentally, the performance of the scanner has been analysed, and different 3D-model visualization methods have been tested. The constructed scanner uses a small pico projector together with a high-pixel-count DSLR camera. Because both are consumer-level products, the cost of the system is just a fraction of that of commercial counterparts, which use professional components. Yet, thanks to the high-pixel-count camera, the measurement resolution of the scanner is comparable to the high end of industrial structured-light scanners.
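The ray-plane intersection at the heart of structured-light triangulation admits a compact sketch. This is an illustrative version, not the project's Matlab code; the plane is written as n·x + d = 0.

```python
import numpy as np

def ray_plane_intersect(origin, direction, plane_n, plane_d):
    """Triangulate a 3D point as the intersection of a camera ray
    (origin + t * direction) with a projector light plane
    (plane_n . x + plane_d = 0)."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    plane_n = np.asarray(plane_n, float)
    denom = plane_n @ direction
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the plane")
    t = -(plane_n @ origin + plane_d) / denom
    return origin + t * direction            # reconstructed surface point
```

Each decoded projector column defines one such plane; intersecting it with the camera ray through the corresponding pixel yields one point of the cloud.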
368

Automation of Microscopic Tests for Cyto-diagnostics Using Custom-built Slide Scanner

Swetha, M January 2017 (has links) (PDF)
Optical microscopy is the simplest and gold-standard method for the screening and subsequent diagnosis of various haematological and infectious diseases such as malaria, sickle cell disease and tuberculosis. In addition to infectious-disease diagnosis, its applications range from routine blood tests to more sophisticated cancer-biopsy analysis. Microscopy Tests (MTs) follow a common procedural workflow: (1) a technician prepares a smear of the given sample on a glass slide in a specific manner depending on the sample and the disease to be diagnosed; (2) the smeared slide is subsequently exposed to fixative agents and different histochemical stains specific to the diagnosis to be performed; and (3) the prepared slide is then observed under a high-quality bright-field bench-top microscope. An expert pathologist/cytologist is required to manually examine multiple fields of view of the prepared slide under appropriate magnification. The repeated re-adjustments of focus and magnification make microscopic examination time-consuming and tedious. Further, the manual intervention required in all the aforementioned steps makes a typical MT inaccessible in rural or resource-limited settings and restricts the diagnostics to trained personnel in laboratory conditions. To overcome these limitations, there has been considerable research interest in developing cost-effective systems that help automate MTs. The work in this thesis addresses these issues and proposes a two-step solution to the problem of affordable automation of MTs for cellular imaging and subsequent diagnostic assessment. The first step is the development of a low-cost portable system that employs a custom-built microscopy setup using off-the-shelf optical components, a low-cost motorized stage and camera modules to facilitate slide scanning and digital image acquisition. 
It incorporates a novel computational approach to generate good-quality in-focus images without the need for high-end precision translation stages, thereby reducing the overall system cost. The process of slide analysis for result generation is further automated using image-analysis and classification algorithms. The application of the developed platform to automated, slide-based quantitative detection of malaria is reported in this thesis. The second aspect of the thesis addresses the automation of slide preparation. A major factor that can influence the analysis results is the quality of the prepared smears. The feasibility of automating and standardizing slide preparation using microfluidics with appropriate surface functionalization is explored and demonstrated in the context of automated semen analysis. As an alternative to fixing the spermatozoa to the glass slide by smearing and chemical treatment with fixative, microfluidic chips pre-coated with an adhesive protein are employed to capture and immobilize the cells; the subsequent histochemical staining is achieved by pumping the stains through the microfluidic device. The proof-of-principle experiments performed in this thesis demonstrate the feasibility of the developed system as an end-to-end, cost-effective alternative to conventional MTs. It can further serve as an assistive tool for the pathologist or, in some cases, completely eliminate the manual intervention required in MTs, enabling repeatability and reliability in diagnosis for clinical decision making.
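One common way to obtain in-focus images without precision stages is to score each frame of a focal stack with a sharpness metric and keep the best. The variance-of-Laplacian measure below is a generic sketch and is not claimed to be the thesis' computational approach.

```python
import numpy as np

def focus_measure(img):
    """Variance-of-Laplacian sharpness score for a grayscale image:
    higher values indicate a sharper (better focused) frame."""
    img = np.asarray(img, float)
    # discrete 4-neighbour Laplacian over the interior pixels
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def best_focus(stack):
    """Index of the sharpest image in a focal stack."""
    return int(np.argmax([focus_measure(im) for im in stack]))
```

Scanning the stage through a small focus range at each field of view and selecting (or fusing) the sharpest frames replaces a costly closed-loop autofocus mechanism.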
369

Érosion des falaises de la région Provence-Alpes-Côte d’Azur : évolution et origine de la morphologie côtière en Méditerranée : télédétection, géochronologie, géomorphologie / Rocky cliff erosion in the region Provence-Alpes-Côte d’Azur : evolution and origin of the coastal morphology in the Mediterranean : remote sensing, geochronology, geomorphology

Giuliano, Jérémy 16 December 2015 (has links)
Increasing interest in the morphogenesis of cliffed coasts has improved understanding of meso- and macrotidal environments, while microtidal environments have been left aside. We therefore study, through a multi-scale exploratory approach, the erosional dynamics of Mediterranean cliffed coasts, using the example of the Provence-Alpes-Côte d'Azur region. 
This doctoral work was funded by the Conseil Régional PACA, which identified a coastal risk-management problem on this territory. The main objectives are to characterize, on the one hand, how the temporal variability of meteorological and climatic forcings affects the relative magnitude of erosion events and, on the other hand, the degree of geological control on coastal morphology. The central difficulty of the study lies in optimizing the observation windows so as to discriminate erosive behaviours. We therefore assess the contribution of four methods for determining whether erosion occurs (1) continuously at the annual scale (boat-borne laser scanning surveys), (2) episodically at the secular scale (aerial orthophotographs), or (3) exceptionally or even (4) catastrophically over Holocene and Quaternary timescales (in-situ cosmogenic 36Cl exposure dating and morphometric analyses). The measured and interpreted results show that the gravitational activity produced during the 20th century (mean erosion on the order of cm/yr) is very low compared with meso- and macrotidal environments. However, at timescales ranging from multi-century (0.29 ka BP) to multi-millennial (6.8 ka BP), exceptional storm surges of +3 m NGF could initiate an erosion process resulting in the formation of horizontal shore platforms.
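The cosmogenic exposure dating cited above rests on a textbook production/decay balance: under the simplifying assumptions of constant production and no erosion, N(t) = (P/λ)(1 − e^(−λt)), which can be inverted for the age. The sketch below uses illustrative numbers, not the thesis' calibration.

```python
import numpy as np

CL36_DECAY = np.log(2) / 3.01e5   # 36Cl decay constant, /yr (half-life ~301 kyr)

def exposure_age(n_atoms, prod_rate, decay=CL36_DECAY):
    """Cosmic-ray exposure age from a 36Cl concentration, assuming
    constant production and no erosion.
    n_atoms   : measured concentration (atoms/g)
    prod_rate : local production rate (atoms/g/yr)"""
    # N(t) = (P / lambda) * (1 - exp(-lambda * t))  =>  solve for t
    sat = prod_rate / decay                    # saturation concentration
    if n_atoms >= sat:
        raise ValueError("concentration at or above saturation")
    return -np.log(1.0 - n_atoms / sat) / decay
```

Real 36Cl dating additionally corrects for erosion rate, sample geometry, shielding and multiple production pathways, which is why dedicated calculators are used in practice.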
370

Simulation d’un accélérateur linéaire d’électrons à l’aide du code Monte-Carlo PENELOPE : métrologie des traitements stéréotaxiques par cônes et évaluation de l’algorithme eMC / Simulation of a linear accelerator with PENELOPE Monte Carlo code : stereotactic treatments metrology by cones and eMC algorithm assessment

Garnier, Nicolas 19 December 2018 (has links)
The linear electron accelerator of the Princess Grace Hospital Centre was simulated with the PenEasy Monte Carlo code. After validating the set of techniques used to reduce computation time (variance reduction, parallelization, etc.), the characteristics of the initial electron beams were determined for one photon energy and four electron energies in order to study two clinical problems. The first is a comparative study of the responses of eight dosimeters for measuring basic beam data in small fields, using stereotactic cones with diameters from 30 mm down to 4 mm. These photon beams are characterized by strong dose gradients and a pronounced lack of lateral charged-particle equilibrium, which makes conventional dosimetric techniques unsuitable. 
Output factor (OF), dose-profile and depth-dose measurements were performed with seven active detectors (diodes, ionization chambers and a MicroDiamond) and one passive detector (radiochromic film), and compared with the results of the Monte Carlo simulation taken as the reference. For the OF measurement, only the radiochromic film agrees with the simulation, with differences below 1%. The MicroDiamond appears to be the best active detector, with a maximum deviation of 3.7% for the 5 mm cone. For the dose-profile measurements, the best results were obtained with the radiochromic film and with the diodes, shielded or not (penumbra differences below 0.2 mm). For depth doses, all the detectors tested are satisfactory (absorbed-dose differences below 1%). The second application is the evaluation of the eMC electron dose-deposition algorithm on CT slices. For this purpose, a "voxelisation" program was developed in MATLAB to convert the Hounsfield numbers from the CT scanner into material properties (density and chemical composition) usable by the PenEasy Monte Carlo code. A three-way comparison between radiochromic-film measurements, the eMC algorithm and the PenEasy Monte Carlo simulation was carried out in several configurations: simple heterogeneous phantoms (stacks of slabs of different densities), a complex heterogeneous phantom (an anthropomorphic phantom) and a patient case. The results showed that a wrong material assignment causes a local absorbed-dose error (up to 16%), and also errors downstream in the simulation, because the modification of the electron spectrum is then not correctly taken into account. The comparison of absorbed-dose distributions on the patient plan showed very good agreement between the eMC algorithm and the PenEasy code (deviation < 3%).
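The "voxelisation" step described above maps CT Hounsfield numbers to material properties for the Monte Carlo code. A minimal piecewise-linear density calibration is sketched below; the breakpoints and densities are hypothetical illustrations, not the thesis' calibration.

```python
def hu_to_density(hu):
    """Piecewise-linear mapping from CT Hounsfield units to mass
    density (g/cm^3). Breakpoints are illustrative: air at -1000 HU,
    water at 0 HU, dense bone around +1500 HU."""
    if hu <= -1000:
        return 0.001                                    # air
    if hu <= 0:
        # interpolate air (-1000 HU) -> water (0 HU)
        return 0.001 + (hu + 1000) * (1.0 - 0.001) / 1000.0
    # interpolate water (0 HU) -> dense bone (+1500 HU)
    return 1.0 + hu * (1.85 - 1.0) / 1500.0
```

A full voxeliser would additionally bin the density range into material classes (lung, soft tissue, bone, ...) carrying the elemental compositions the transport code needs.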
