
Empirical essays on macro-financial linkages

Melander, Ola January 2009
How do financial variables, such as firms’ cash flow and banks’ capital, affect macroeconomic variables, such as investment and GDP growth? What are the macroeconomic effects of exchange rate depreciation in countries where firms and households have extensive foreign-currency liabilities? The doctoral thesis Empirical Essays on Macro-Financial Linkages consists of four separate papers in the field of empirical macroeconomics. The first three papers investigate the macroeconomic implications of financial-market imperfections. Imperfect information between borrowers and lenders makes it more costly for firms to finance investments with external funds than with internal funds. The external finance risk premium depends on the strength of firm balance sheets, which hence affects firm investment. The first paper, The Effect of Cash Flow on Investment: An Empirical Test of the Balance Sheet Channel, examines the importance of financial constraints for investment using a large Swedish firm-level data set which includes many smaller firms (where balance sheet effects are likely to be especially important). I find a positive effect of cash flow on investment, controlling for fundamental determinants of investment and any information in cash flow about investment opportunities. As predicted by the balance sheet channel, the estimated effect of cash flow on investment is especially large for firms which, a priori, are more likely to be financially constrained (low-dividend, small and non-group firms). Moreover, the investment-cash flow sensitivity is significantly larger and more persistent during the first half of the sample period, which includes a severe banking crisis and recession. The second paper, Credit Matters: Empirical Evidence on U.S. 
Macro-Financial Linkages, written jointly with Tamim Bayoumi, estimates the impact of an adverse shock to bank capital on credit availability and spending in the United States, allowing for feedback from spending and income through the balance sheets of banks, firms and households. We find that an exogenous fall in the bank capital/asset ratio by one percentage point reduces real GDP by some 1.5 percent through its effects on credit availability, while an exogenous fall in demand of 1 percent of GDP is gradually magnified to around 2 percent through financial feedback effects. The third paper, The Effects of Real Exchange Rate Shocks in an Economy with Extreme Liability Dollarization, studies the effects of real exchange rate depreciation in Bolivia, where over 95 percent of bank credit is denominated in dollars. Currency depreciation increases the domestic-currency value of foreign-currency liabilities and the debt service burden, thus adversely affecting firm balance sheets. A key issue for policymakers in countries with widespread foreign-currency borrowing is whether depreciation would have the standard, expansionary effect on output, or whether an adverse balance-sheet effect would dominate. I find that real exchange rate depreciation has negligible effects on output, since a contractionary balance-sheet effect on investment is counteracted by the standard expansionary effect on net exports. The fourth paper, Uncovered Interest Parity in a Partially Dollarized Developing Country: Does UIP Hold in Bolivia? (And If Not, Why Not?), studies another aspect of macro-financial linkages. The so-called uncovered interest parity (UIP) condition states that interest rate differentials compensate for expected exchange rate changes, equalizing the expected returns on assets which differ only in terms of currency denomination. Because of data availability problems, there is a lack of empirical tests of UIP for developing countries.
The paper studies the case of Bolivia, where there are bank accounts which differ only in their currency denomination (bolivianos or U.S. dollars). I find that UIP does not hold in Bolivia, but that the deviations are smaller than in most other studies of developed and emerging economies. / Diss. Stockholm: Handelshögskolan, 2009. Summary together with 4 papers.
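For reference, the UIP condition tested in the fourth paper is conventionally written as follows (standard notation: s_t is the log exchange rate, i_t and i_t* the domestic and foreign interest rates):

```latex
% Uncovered interest parity: the interest differential offsets expected depreciation
i_t - i_t^{*} = \mathbb{E}_t\!\left[s_{t+1}\right] - s_t
% The usual test regresses realized depreciation on the differential;
% UIP implies \alpha = 0 and \beta = 1
s_{t+1} - s_t = \alpha + \beta \left(i_t - i_t^{*}\right) + \varepsilon_{t+1}
```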

Développement et optimisation des diagnostiques des faisceaux du LHC et du SPS basé sur le suivi de la lumière synchrotron / Development and Optimization of the LHC and the SPS Beam Diagnostics Based on Synchrotron Radiation Monitoring

Trad, Georges 22 January 2015
Measuring the transverse beam emittance is fundamental in every accelerator, and in particular in colliders, where its precise determination is essential to maximize the luminosity and thus the performance of the colliding beams. Synchrotron radiation (SR) is a versatile tool for non-destructive beam diagnostics, since its characteristics are closely related to those of the source beam. At CERN, SR monitors are the only diagnostics available at high beam intensity and energy, and they are exploited to measure the proton beam size in the two highest-energy machines of the complex, the Super Proton Synchrotron (SPS) and the Large Hadron Collider (LHC). The thesis work documented in this report focused on the design, development, characterization and optimization of these beam size monitors, based on a comprehensive set of theoretical calculations, numerical simulations and experiments carried out in the CERN laboratories and accelerators. A powerful simulation tool was developed that combines conventional software for SR simulation and optical propagation, allowing an SR monitor to be characterized from the source up to the detector. The SR source was fully characterized in this way, and the results were validated by direct observation, by cross-calibration at low intensity against the wire scanners (WS), which serve as the reference beam size measurement, and by direct comparison with beam sizes deconvoluted from the instantaneous LHC luminosity. With the increase of the LHC beam energy to 7 TeV in 2015, the beam sizes to be measured will decrease to about 190 μm, at the limit of applicability of the SR imaging technique, since the error on the beam size determination is proportional to the ratio of the system resolution to the measured beam size. Various solutions were therefore investigated to improve the system performance: selecting one of the two SR polarizations, reducing depth-of-field effects with optical slits, and reducing the imaging wavelength to 250 nm. In parallel with reducing the diffraction contribution to the resolution broadening, the SR extraction mirror, found to be the main source of aberrations in the system, was entirely redesigned. Its deterioration had been caused by electromagnetic coupling with the beam fields, which led to overheating and degradation of the coating. A new mirror and support geometry featuring a smoother transition in the beam pipe was defined and qualified in terms of longitudinal coupling impedance via the stretched-wire technique; compared with the old system, it reduces the total power dissipated in the extraction system by at least a factor of four. As an alternative to direct imaging, a new SR monitor based on double-slit interferometry, which is not diffraction limited, was also developed. Its principle rests on the direct relation between the visibility of the interference fringes and the beam size. Since the beam emittance is the physical quantity of interest for LHC performance, determining the optical functions at the SR source is as important as measuring the beam size. To this end, the K-modulation method was applied for the first time in the LHC at IR4, where most of the profile monitors are located. The β functions were measured at all the quadrupoles and propagated to the BSRT and the WS via two different algorithms, significantly reducing the uncertainty at the monitor locations.
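As a minimal illustration of the interferometric principle described above: for a Gaussian beam, the van Cittert-Zernike theorem relates the fringe visibility V to the RMS source size. The numbers below are illustrative only, not the actual LHC monitor geometry:

```python
import math

def beam_size_from_visibility(visibility, wavelength, distance, slit_sep):
    """RMS transverse beam size from double-slit fringe visibility,
    assuming a Gaussian source (van Cittert-Zernike theorem):
    sigma = (lambda * L / (pi * d)) * sqrt(0.5 * ln(1/V))."""
    return (wavelength * distance / (math.pi * slit_sep)) * math.sqrt(
        0.5 * math.log(1.0 / visibility))

# Illustrative numbers only, not the actual LHC monitor parameters:
sigma = beam_size_from_visibility(visibility=0.5, wavelength=400e-9,
                                  distance=27.0, slit_sep=0.015)
```

Lower visibility means a larger (less coherent) source, which is why the method, unlike imaging, gains rather than loses sensitivity as diffraction effects grow.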

Détermination de sections efficaces pour la production de champs neutroniques monoénergétiques de basse énergie / Determination of cross sections for the production of low-energy monoenergetic neutron fields

Lamirand, Vincent 18 November 2011
The response of a neutron detector, defined as the reading of the device per unit of incident fluence or dose, varies with neutron energy. The experimental determination of this variation, i.e. of the response function of the instrument, has to be performed in facilities producing monoenergetic neutron fields. These neutrons are commonly produced by the interaction of accelerated ions (protons or deuterons) with a thin target composed of a reactive layer deposited on a metallic backing. Using the 7Li(p,n), 3H(p,n), 2H(d,n) and 3H(d,n) reactions, monoenergetic neutrons can be obtained between 120 keV and 20 MeV in the ion beam direction (0°). To reach lower neutron energies, the angle of the measuring point with respect to the ion beam direction can be increased. However, this method suffers from inhomogeneities in neutron energy and fluence over the detector surface, as well as a significant increase in the scattered-neutron contribution. An alternative is to use other nuclear reactions, notably 45Sc(p,n), which extends the neutron energy range down to 8 keV at 0°. A complete study of this reaction and its cross section was undertaken within a scientific cooperation between the laboratory of neutron metrology and dosimetry (LMDN) of IRSN (France), two European national metrology institutes, the National Physical Laboratory (NPL, UK) and the Physikalisch-Technische Bundesanstalt (PTB, Germany), and the Institute for Reference Materials and Measurements (IRMM, EC). In parallel, other candidate reactions were investigated: 65Cu(p,n), 51V(p,n), 57Fe(p,n), 49Ti(p,n), 53Cr(p,n) and 37Cl(p,n). They were compared in terms of neutron yield and minimum energy of the produced neutrons.
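As a side note on why such (p,n) reactions produce neutrons only above a certain proton energy, the usual non-relativistic two-body threshold relation can be sketched as follows (the 7Li(p,n)7Be Q-value of −1.644 MeV is the textbook figure; the helper name is ours):

```python
def pn_threshold_energy(q_value_mev, m_projectile, m_target):
    """Non-relativistic lab-frame threshold for an endothermic two-body
    reaction such as (p,n): E_th = -Q * (m_p + m_T) / m_T.
    Masses only enter through their ratio, so mass numbers suffice here."""
    if q_value_mev >= 0.0:
        return 0.0  # exothermic reaction: no threshold
    return -q_value_mev * (m_projectile + m_target) / m_target

# 7Li(p,n)7Be with Q = -1.644 MeV reproduces the well-known ~1.880 MeV threshold:
e_th = pn_threshold_energy(-1.644, m_projectile=1.0, m_target=7.0)
```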

Mesure de la section efficace de production de paires de quarks top dans le canal tau+jets dans l'expérience CMS auprès du LHC / Measurement of the tt production cross section in the tau+jets channel in pp collisions at √s = 7 TeV

Ferro, Cristina 14 May 2012
In this thesis we present the first measurement in the CMS experiment of the top-antitop production cross section in the tau+jets final state, in which one W boson from the top decays into a hadronic tau and a neutrino while the other decays into a quark-antiquark pair. To perform this measurement we designed a specific trigger requiring the presence of four jets, one of them identified as a hadronic tau; the configuration and efficiency of this trigger were studied in this thesis. A dataset of 3.9 fb−1 was collected with this trigger and analyzed. Offline, the tau jets were identified with a sophisticated technique based on the reconstruction of the intermediate resonances of the hadronic tau decay modes. Another crucial point was b-jet identification, both to tag the b-jets from the top decays in the final state and to model the background with a data-driven technique. The b-jets were tagged with the "Jet Probability" algorithm, which uses the impact parameters of charged-particle tracks to compute their probability of originating from the primary vertex, and for which I have carried out the calibration since 2009; performance studies of this algorithm are also presented in this thesis. A neural network based on topological variables (aplanarity, HT, M(τ, jets), missing transverse energy, ...) was developed to separate the signal from the dominant W+jets and QCD multijet backgrounds. The cross section was extracted with a binned maximum-likelihood fit to the neural network output distribution, and the systematic uncertainties were studied in detail. The measured cross section, σ(tt) = 156 ± 12 (stat.) ± 33 (syst.) ± 3 (lumi) pb, is in agreement with the standard model expectation.
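The extraction step described above, a binned maximum-likelihood fit to the neural-network output, can be sketched on toy data as follows (template shapes, yields and the grid-scan minimizer are illustrative stand-ins, not the analysis's actual inputs):

```python
import numpy as np

# Hypothetical unit-area templates for the neural-network output (10 bins):
sig = np.array([0.02, 0.03, 0.05, 0.08, 0.12, 0.15, 0.18, 0.17, 0.12, 0.08])
bkg = np.array([0.20, 0.18, 0.15, 0.12, 0.10, 0.08, 0.07, 0.05, 0.03, 0.02])

# Toy "data": Poisson fluctuations around known yields
rng = np.random.default_rng(0)
true_nsig, true_nbkg = 400.0, 1600.0
data = rng.poisson(true_nsig * sig + true_nbkg * bkg)

def nll(n_sig, n_bkg):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    mu = n_sig * sig + n_bkg * bkg
    return float(np.sum(mu - data * np.log(mu)))

# Coarse grid scan in place of a real minimizer, for illustration only
grid_s = np.linspace(100.0, 800.0, 141)    # step 5 in signal yield
grid_b = np.linspace(1000.0, 2200.0, 121)  # step 10 in background yield
best = min((nll(s, b), s, b) for s in grid_s for b in grid_b)
n_sig_hat, n_bkg_hat = best[1], best[2]
```

Dividing the fitted signal yield by efficiency, branching fraction and integrated luminosity would then give the cross section.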

The effect of the financial accelerator over Brazilian firms: an investment analysis with evidence from 1Q2005 to 3Q2017

Silva, Nathalie dos Santos 24 July 2018
This dissertation examines the impact of monetary shocks on the investment decisions of firms located in Brazil, in order to test for the presence of the financial accelerator. After a brief review of the literature, with the scope limited to transmission via the balance sheet channel, a panel of Brazilian firms from 2005 to 2017 was estimated. The regression control variables include a control for the effect of BNDES lending on the financial leverage of firms. Under reasonable significance levels, the resources generated internally by firms' operations were the only variable with an impact on investment. The literature discussing the impact of the BNDES on the Brazilian capital market is also revisited in order to corroborate the results obtained in the tests.
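As a stylized sketch of the kind of fixed-effects panel regression behind such a test (simulated data and a hypothetical coefficient, not the thesis's actual panel or specification):

```python
import numpy as np

# Simulated firm-level panel: investment responds to internally generated
# cash flow, with unobserved firm heterogeneity (all numbers hypothetical)
rng = np.random.default_rng(1)
n_firms, n_periods = 50, 40
firm_effect = rng.normal(0.0, 0.5, size=(n_firms, 1))
cash_flow = rng.normal(1.0, 0.3, size=(n_firms, n_periods)) + firm_effect
true_beta = 0.25
investment = (0.1 + true_beta * cash_flow + firm_effect
              + rng.normal(0.0, 0.2, size=(n_firms, n_periods)))

# Within (fixed-effects) estimator: demean each firm's series, then OLS.
# The demeaning removes the firm effect, isolating the cash-flow slope.
y = investment - investment.mean(axis=1, keepdims=True)
x = cash_flow - cash_flow.mean(axis=1, keepdims=True)
beta_fe = float((x * y).sum() / (x ** 2).sum())
```

A significant positive slope on internally generated funds, after controlling for fundamentals, is the signature of financial constraints that the financial-accelerator test looks for.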

A Runtime Framework for Regular and Irregular Message-Driven Parallel Applications on GPU Systems

Rengasamy, Vasudevan January 2014
The effective use of GPUs for accelerating applications depends on a number of factors, including effective asynchronous use of heterogeneous resources, reducing data transfer between CPU and GPU, increasing the occupancy of GPU kernels, overlapping data transfers with computations, reducing GPU idling, and kernel optimizations. Overcoming these challenges requires considerable effort on the part of application developers, and most optimization strategies are proposed and tuned for individual applications. Message-driven execution with over-decomposition of tasks is an important model for parallel programming, providing multiple benefits including communication-computation overlap and reduced idling of resources. Charm++ is one such message-driven language, employing over-decomposition of tasks, computation-communication overlap and a measurement-based load balancer to achieve high CPU utilization. This research has developed an adaptive runtime framework for efficient execution of Charm++ message-driven parallel applications on GPU systems. In the first part of our research, we developed a runtime framework, G-Charm, focused primarily on optimizing regular applications. At runtime, G-Charm automatically combines multiple small GPU tasks into a single larger kernel, which reduces the number of kernel invocations while improving CUDA occupancy. G-Charm also enables reuse of data already present in GPU global memory, performs GPU memory management, and dynamically schedules tasks across CPU and GPU to reduce idle time. To combine the partial results of computations performed on the CPU and GPU, G-Charm lets the user specify an operator with which the partial results are combined at runtime. We also perform compile-time code generation to reduce programming overhead. For Cholesky factorization, a regular parallel application, G-Charm provides a 14% improvement over a highly tuned implementation.
In the second part of our research, we extended the runtime to address the challenges posed by irregular applications: aperiodic generation of tasks, irregular memory access patterns and workloads that vary during execution. We developed models for deciding the number of tasks to combine into a kernel based on the rate of task generation and the GPU occupancy of the tasks. For irregular applications, data reuse results in uncoalesced GPU memory access; we evaluated the effect of altering the global memory access pattern to improve coalescing. We also developed adaptive methods for hybrid execution on CPU and GPU that take the varying workloads into account when scheduling tasks. We demonstrate that our dynamic strategies yield an 8-38% reduction in execution time for an N-body simulation application and a molecular dynamics application over the corresponding static strategies suited to regular applications.
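A greatly simplified sketch of the task-batching idea described above: small GPU tasks are packed into one combined launch until a thread-count target, standing in for "good occupancy", is met. The policy and names are hypothetical illustrations, not G-Charm's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class GpuTask:
    name: str
    threads: int  # threads this task's kernel would launch on its own

def batch_tasks(tasks, target_threads):
    """Greedily pack small tasks into combined launches until each batch
    reaches a thread-count target (a crude stand-in for 'good occupancy')."""
    batches, current, count = [], [], 0
    for task in tasks:
        current.append(task)
        count += task.threads
        if count >= target_threads:
            batches.append(current)
            current, count = [], 0
    if current:  # leftover tasks form a final, smaller batch
        batches.append(current)
    return batches

tasks = [GpuTask(f"t{i}", 256) for i in range(10)]
batches = batch_tasks(tasks, target_threads=1024)  # -> batches of 4, 4 and 2 tasks
```

Each combined batch would then be launched as a single kernel, trading many small launches for fewer, better-occupied ones.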

Avaliação de materiais usando a radiografia computadorizada (CR) empregando um acelerador linear e cobalto-60 como fontes de altas energias / Evaluation of materials using computed radiography (CR) employing a linear accelerator and cobalt-60 as high-energy sources

Heleno Ribeiro Simões 15 December 2012
The construction of power boilers, pressure vessels and other equipment for various industrial segments has demanded from materials engineering the technological development of better processes for obtaining cast, forged, rolled and other materials. Developing technological resources that minimize the presence of imperfections that could compromise the structural integrity of pressure equipment has been a constant pursuit, both in power plants and in the capital goods industries, during the construction phases. A construction involves materials selection, design, fabrication, examination, inspection, testing, certification and relief devices that meet the requirements of codes and standards. These requirements are ever more demanding, and the limits they set on such imperfections call for non-destructive testing methods that always offer the best probability of detection. Quality control processes have sought, through new technologies, to increase their sensitivity in order to detect the discontinuities currently found by conventional methods. In terms of non-destructive testing, the requirements for conventional radiographic testing are at the limit of sensitivity of the available industrial radiographic films; in addition, the pursuit of shorter exposure times is always an important factor for quality, safety and productivity, both in the factory and in the field. The aim of this work was to study and evaluate the computed radiography (CR) technique relative to conventional radiography for the inspection of materials, using evaluation parameters applicable to digital images such as signal-to-noise ratio, spatial resolution, detectability tools, contrast sensitivity and gray levels. For the evaluation, a test specimen manufactured by the casting process, 75 to 150 mm thick and containing defects typical of the process, was radiographed with both the conventional and the digital technique. For the conventional technique, industrial radiographic films of types I and II per ASTM E 1815 were used, together with a 4 MeV Varian Linatron 400 linear accelerator and two cobalt-60 sources of different activities. For the computed technique, the same radiation sources were used with an IPX phosphor plate and a CR-50P scanner, both from GE IT. The results show that digital radiography with the evaluated equipment satisfactorily meets the codes and standards used in the evaluation of castings. The technique proved more informative in the evaluation of discontinuities located in critical sections, because the CR system provides a line-profile tool that gives the gray-level values along a linear path marked across the image of the discontinuity. Thus, even with few experiments and a single CR system, it can be concluded that the technique is quite advantageous for detecting discontinuities arising in manufacturing processes, and that it met the requirements of ASTM E 272 for copper as well as ASME Section VIII Division 1, Appendix 7, which reference the radiographic standards ASTM E 186 and ASTM E 280 for steel castings.
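The line-profile tool mentioned above can be illustrated on a synthetic image: gray values are read along a horizontal path crossing a simulated discontinuity, and a simple contrast and SNR figure is derived from them (all gray levels and noise figures are invented for illustration, not real CR data):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic radiograph: uniform plate with a darker band standing in
# for a discontinuity
img = rng.normal(40000.0, 500.0, size=(100, 100))
img[:, 45:55] -= 10000.0  # lower gray level inside the "discontinuity"

def line_profile(image, row):
    """Gray values along a horizontal path, like a CR line-profile tool."""
    return image[row, :]

profile = line_profile(img, row=50)
background = profile[:40]
indication = profile[45:55]
contrast = background.mean() - indication.mean()  # depth of the indication
snr = background.mean() / background.std()        # a simple SNR figure
```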
398

Experimentální studium pole neutronů v podkritickém urychlovačem řízeném jaderném reaktoru / Experimental Investigation of the Neutron Field in an Accelerator Driven Subcritical Reactor

Zeman, Miroslav January 2020 (has links)
This dissertation focuses on irradiations of a spallation set-up, consisting of more than half a ton of natural uranium, carried out with a 660 MeV proton beam at the Joint Institute for Nuclear Research in Dubna. Two types of irradiations were arranged: with and without lead shielding. In both types, threshold activation detectors (Al-27, Mn-55, Co-59, and nat-In) were placed throughout the whole set-up, in both horizontal and vertical positions, and were activated by secondary neutrons produced by the spallation reaction. The threshold activation detectors were analysed by gamma-ray spectroscopy. The radionuclides found in the threshold detectors were identified and reaction rates were determined for each radionuclide. Ratios of the reaction rates were determined between the irradiations of the set-up with and without lead shielding. Subsequently, the neutron spectra generated inside the spallation target at different positions were calculated using the Co-59 detector. The experimental results were compared with Monte Carlo simulations performed with MCNPX 2.7.0.
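The reaction-rate determination from the gamma-ray spectroscopy of the activation detectors can be sketched with the standard activation formula. This is an illustrative simplification with hypothetical numbers, assuming constant beam intensity; it does not reproduce the thesis's full correction chain:

```python
import math

def reaction_rate(n_peak, eff, i_gamma, half_life,
                  t_irr, t_cool, t_meas, n_atoms):
    """Reaction rate per target nucleus from the net peak area of a
    gamma line measured after irradiation (standard activation formula)."""
    lam = math.log(2) / half_life
    # Corrections for decay during irradiation, cooling, and measurement.
    saturation = 1.0 - math.exp(-lam * t_irr)
    cooling = math.exp(-lam * t_cool)
    counting = 1.0 - math.exp(-lam * t_meas)
    # Activity at end of irradiation, then rate per nucleus.
    activity_eoi = n_peak * lam / (eff * i_gamma * cooling * counting)
    return activity_eoi / (n_atoms * saturation)

# Hypothetical example: 1000 net counts, 10 % efficiency, 1 h half-life.
rate = reaction_rate(n_peak=1000, eff=0.1, i_gamma=0.99, half_life=3600,
                     t_irr=7200, t_cool=600, t_meas=300, n_atoms=1e20)
```

Ratios of such rates between the shielded and unshielded irradiations cancel most detector-dependent factors, which is why the thesis reports them directly.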
399

Technické úpravy a aplikace zařízení pro ozařování MeV ionty při tandemovém urychlovači v Uppsale / MeV ion irradiation beamline at the Uppsala Tandem Accelerator: Improvements and applications

Sekula, Filip January 2021 (has links)
This thesis presents the MeV ion irradiation beamline at the tandem accelerator at Uppsala University. The basics of the theory of ion-solid interaction and of material modification by high-energy ions are given. The tandem accelerator facility is described, from ion generation to the impact of the ions on the sample in the main chamber of the ion irradiation beamline. The modifications of the sample-transfer system are then characterized in detail and the principle of its operation is described. A pilot application of the modified system in the field of material modification is presented on the example of irradiation of Ge quantum dots. The homogeneity of the ion distribution over the sample during irradiation is tested by means of a simulation of the electrostatic deflector.
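The homogeneity test of the ion distribution under the electrostatic deflector can be illustrated with a toy Monte Carlo sketch. This is not the simulation used in the thesis; a triangular sweep voltage and a linear deflection sensitivity are assumed purely for illustration:

```python
import numpy as np

def scan_positions(n_ions, v_max, sensitivity):
    """Transverse impact positions of ions swept by a triangular deflector
    voltage. Ions arrive at random phases of the sweep, so the instantaneous
    voltage is uniformly distributed in [-v_max, v_max], which should yield
    a uniform fluence (sensitivity = mm of deflection per volt)."""
    rng = np.random.default_rng(0)
    voltages = rng.uniform(-v_max, v_max, n_ions)
    return voltages * sensitivity

x = scan_positions(100_000, v_max=2000.0, sensitivity=0.005)  # +-10 mm scan
counts, _ = np.histogram(x, bins=20, range=(-10, 10))
homogeneity = counts.std() / counts.mean()  # small value -> uniform fluence
```

A real deflector simulation would also account for the field geometry and beam spot size, but the same histogram-based homogeneity figure of merit applies.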
400

Monitoring a simulace chování experimentálních terčů pro ADS, vývinu tepla a úniku neutronů / Monitoring and Simulation of ADS Experimental Target Behaviour, Heat Generation, and Neutron Leakage

Svoboda, Josef January 2021 (has links)
Accelerator-driven subcritical systems (ADS), with their ability to transmute long-lived radionuclides, could solve the problem of spent nuclear fuel from current nuclear reactors, as well as the potential shortage of the currently used fuel, U-235, since they can exploit U-238 or the abundant thorium isotope Th-232 for energy production. Within the scope of basic ADS research, this dissertation deals with spallation reactions and heat generation in various experimental targets. The experimental measurements were carried out at the Joint Institute for Nuclear Research in Dubna, Russian Federation. During the doctoral studies, 13 experiments were performed in the years 2015-2019. In this research, various targets were irradiated with 660 MeV protons at the Phasotron accelerator: first the QUINTA spallation target, composed of 512 kg of natural uranium, and subsequently experimental targets made of lead and carbon, as well as a target assembled from lead bricks. A special experiment was also performed, focused on a detailed investigation of two proton-irradiated uranium cylinders of the type from which the QUINTA spallation target is assembled. The research concentrated primarily on monitoring the heat released by the slowing-down of protons, by spallation reactions, and by fission induced by spallation neutrons; pions and photons also contributed to the heat release. The temperature was measured experimentally with precise, specially calibrated thermocouples, and temperature differences were monitored both on the surface and inside the targets. Further research focused on monitoring the neutrons leaking from the target by a comparative method using two detectors: the first contained a small amount of fissile material with a temperature sensor, while the second was made of a non-fissile material (W or Ta) with similar material properties and identical dimensions. The neutron leakage (i.e., the neutron flux outside the experimental target) was detected through the energy released by fission reactions. This thesis deals with the precise measurement of temperature changes by thermocouples, using National Instruments electronics and LabVIEW software for data acquisition.
Python 3.7 (with several libraries) was used for data processing, analysis, and visualization. Particle transport was simulated with MCNPX 2.7.0, and finally the heat-transfer simulation and the determination of the surface temperature of the simulated model were performed in ANSYS Fluent (ANSYS Transient Thermal for simpler calculations).
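The calorimetric core of such an analysis, inferring deposited power from a thermocouple record, can be sketched in Python, the language used for the data work. This is a simplified illustration assuming a linear temperature rise and neglecting heat losses; the mass, heat capacity, and slope are hypothetical:

```python
import numpy as np

def deposited_power(times, temps, mass, specific_heat):
    """Estimate beam-deposited power in a target from the slope of its
    temperature record during irradiation (simple calorimetry, heat
    losses neglected): P = m * c * dT/dt."""
    slope, _ = np.polyfit(times, temps, 1)  # K per second
    return mass * specific_heat * slope     # watts

# Hypothetical thermocouple record: 0.02 K/s rise on a 5 kg uranium
# cylinder (c ~ 116 J/(kg*K)), sampled every 10 s for 10 minutes.
t = np.arange(0.0, 600.0, 10.0)
temp = 295.0 + 0.02 * t
power = deposited_power(t, temp, mass=5.0, specific_heat=116.0)  # ~11.6 W
```

In practice the cooling term matters, so the measured heating and cooling curves are fitted together, and the result is cross-checked against the ANSYS heat-transfer model as described above.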
