271 |
O que determina o padrão de contribuição previdenciária das famílias nos estados brasileiros? / What determines the pattern of social security contributions of families in the Brazilian states? / José Milson de Oliveira Lima Filho 30 March 2015 (has links)
O presente trabalho contribui na discussão do modelo de previdência social brasileiro com foco no comportamento das famílias brasileiras em relação às contribuições destinadas à previdência complementar; mais especificamente, na modalidade do Plano Gerador de Benefício Livre (PGBL), em face de sua importância na tentativa de garantir o poder de compra e a manutenção do padrão de qualidade de vida dos indivíduos no futuro, quando da redução de sua capacidade laboral, e de sua contribuição à economia brasileira, enquanto poupança privada nacional. O estudo buscou identificar como as variáveis econômicas nível de renda per capita, índice de pobreza da população, índice de concentração de renda, índice de pobreza e quantidade de anos de estudo influenciam na captação per capita média de contribuições para a previdência complementar, modalidade PGBL, utilizando-se de análise de dados em painel. / This work contributes to the discussion of the Brazilian social security model, focusing on the behaviour of Brazilian families with respect to contributions to private pension plans, more specifically the free benefit generator plan (PGBL), given its importance both in the attempt to preserve the purchasing power and standard of living of individuals in the future, when their working capacity is reduced, and as a contribution to the Brazilian economy in the form of national private savings. The study aims to identify how economic variables such as per capita income level, the poverty index of the population, the income concentration index, the poverty rate and the number of years of schooling influence the average per capita intake of private pension contributions in the PGBL mode, using panel data analysis.
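As an illustration of the panel-data estimation mentioned in the abstract above, the following sketch fits a fixed-effects regression of average per capita PGBL contributions on the economic variables listed there. It is only a minimal sketch: the synthetic data, the variable names and the use of statsmodels with state and year dummies are assumptions made for the example, not the author's actual model or dataset.

```python
# Minimal fixed-effects panel regression sketch (not the author's model).
# All data below are synthetic; variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = [f"state_{i:02d}" for i in range(27)]        # 27 Brazilian states, illustrative
years = range(2005, 2013)

rows = []
for s in states:
    for y in years:
        income = rng.normal(20_000, 5_000)            # per capita income
        poverty = rng.uniform(5, 40)                  # poverty rate (%)
        gini = rng.uniform(0.45, 0.60)                # income concentration index
        schooling = rng.uniform(5, 11)                # average years of study
        # hypothetical data-generating process for per capita PGBL contributions
        pgbl = 0.01 * income - 2.0 * poverty + 300.0 * schooling + rng.normal(0, 100)
        rows.append(dict(state=s, year=y, pgbl=pgbl, income=income,
                         poverty=poverty, gini=gini, schooling=schooling))

df = pd.DataFrame(rows)

# State and year fixed effects via dummy variables; standard errors clustered by state.
model = smf.ols("pgbl ~ income + poverty + gini + schooling + C(state) + C(year)",
                data=df).fit(cov_type="cluster",
                             cov_kwds={"groups": df["state"].astype("category").cat.codes})
print(model.params[["income", "poverty", "gini", "schooling"]])
```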
|
272 |
Biodiversidade de fungos aflatoxigênicos e aflatoxinas em castanha do Brasil / Biodiversity of aflatoxigenic fungi and aflatoxins in Brazil nuts / Calderari, Thaiane Ortolan, 1986- 19 August 2018 (has links)
Orientadores: José Luiz Pereira, Marta Hiromi Taniwaki / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos
Previous issue date: 2011 / Resumo: A castanha do Brasil (Bertholletia excelsa) é uma das mais importantes espécies de exploração extrativista da floresta Amazônica, sendo exportada para diversos países devido ao seu alto valor nutritivo. No entanto, os baixos níveis tecnológicos característicos de sua cadeia produtiva, considerada ainda extrativista, e as condições inadequadas de manejo da matéria-prima favorecem o aparecimento de contaminação por fungos produtores de aflatoxinas, compostos tóxicos considerados cancerígenos para humanos. Este problema é um entrave para a comercialização do produto, principalmente no mercado externo, dado o rigoroso controle de países europeus e dos Estados Unidos em relação aos níveis de toxinas presentes nos alimentos. Nestas condições, o presente trabalho teve por objetivo investigar a incidência de fungos em castanhas do Brasil e avaliar o potencial toxigênico dos isolados de Aspergillus seção Flavi para a produção de aflatoxinas, bem como analisar a presença de aflatoxinas nesta matriz. Um total de 143 amostras provenientes dos Estados do Pará, Amazonas e São Paulo, em diferentes etapas da cadeia produtiva da castanha, foi analisado. A técnica utilizada para análise da infecção fúngica foi o plaqueamento direto em meio Dicloran 18% Glicerol. Os resultados foram expressos em porcentagem de infecção fúngica. Os isolados suspeitos foram purificados em meio Czapek extrato de levedura ágar e incubados a 25ºC/7 dias em diferentes temperaturas para a identificação das espécies. Para a análise do potencial toxigênico de cada isolado da seção Flavi foi utilizada a técnica do ágar plug. Para a análise de aflatoxinas foi utilizada coluna de imunoafinidade para extração e limpeza das amostras e Cromatografia Líquida de Alta Eficiência com detector de fluorescência acoplado ao sistema de derivatização Kobracell para detecção e quantificação das aflatoxinas. Dentre o total de amostras coletadas, aquelas provenientes das florestas foram as que apresentaram maior valor médio de atividade de água, assim como maior porcentagem de infecção fúngica quantificada e biodiversidade de espécies. Considerando todas as amostras avaliadas, foram obtidos no total 13.421 isolados de fungos filamentosos, sendo que as espécies mais incidentes foram Aspergillus flavus, Aspergillus nomius, Penicillium citrinum, Aspergillus tamarii, Syncephalastrum racemosum e Penicillium sp. Dentre as espécies encontradas, 450 isolados de Aspergillus nomius e 9 de Aspergillus parasiticus foram identificados e 100% apresentaram capacidade de produção das aflatoxinas AFB1, AFB2, AFG1 e AFG2. Dos 703 isolados de Aspergillus flavus, 63,5% apresentaram a capacidade de produzir as aflatoxinas AFB1 e AFB2. A média de contaminação por aflatoxinas totais obtida foi de 7,17 µg/kg (ND-104,2 µg/kg), 1,13 µg/kg (ND-7,44 µg/kg) e 0,47 µg/kg (ND-0,2 µg/kg) para as amostras dos Estados do Pará, Amazonas e de São Paulo, respectivamente. Das 143 amostras coletadas, apenas 5 excederam o limite máximo de aflatoxinas totais estabelecido pela União Europeia e pela ANVISA (10,0 µg/kg para castanhas do Brasil sem casca destinadas ao consumo humano direto). / Abstract: The Brazil nut (Bertholletia excelsa) is one of the most important species extracted from the Amazon forest and is exported to several countries due to its high nutritional value. However, the low technological level of its productive chain and the inadequate handling of the raw material favour contamination by fungi that produce aflatoxins, toxic compounds considered carcinogenic to humans.
The presence of aflatoxins in Brazil nuts has been a barrier to their marketing, mainly in the export market, due to the rigorous control exercised by European countries and the United States. Therefore, the present work had the objective of investigating the incidence of fungi in Brazil nuts and evaluating the toxigenic potential of Aspergillus section Flavi isolates to produce aflatoxins, as well as analyzing the presence of aflatoxins in this product. A total of 143 samples from three different states, at different stages of the Brazil nut chain, was analyzed. The technique used for the fungal infection analysis was direct plating on DG18 medium. The results were expressed as percentage of fungal infection. The suspected isolates were purified on Czapek yeast extract agar (CYA) and incubated at different temperatures for species identification. For the toxin production analysis of each Aspergillus section Flavi isolate, the agar plug technique was used. For aflatoxin analysis, an immunoaffinity column was used for extraction and clean-up of the samples, and high performance liquid chromatography (HPLC) with a fluorescence detector, coupled with the Kobracell derivatization system, was used for aflatoxin detection and quantification in Brazil nuts. Among the analyzed samples, the ones collected directly from the forests had the highest water activity, the highest fungal infection and the greatest biodiversity of species. A total of 13,421 filamentous fungi were quantified from all the samples, the most common species being Aspergillus flavus, Aspergillus nomius, Penicillium citrinum, Aspergillus tamarii, Syncephalastrum racemosum and Penicillium spp. All 450 strains of Aspergillus nomius and 9 strains of Aspergillus parasiticus showed the capacity to produce aflatoxins B1, B2, G1 and G2. Out of 703 isolates of Aspergillus flavus, 63.5% showed the capacity to produce aflatoxins B1 and B2. The average total aflatoxin contamination was 7.17 µg/kg (ND-104.2 µg/kg), 1.13 µg/kg (ND-7.44 µg/kg) and 0.47 µg/kg (ND-0.2 µg/kg) for the samples from Pará, Amazonas and São Paulo, respectively. Out of the 143 analyzed samples, only 5 exceeded the maximum level for total aflatoxins established by the European Union and ANVISA of 10 µg/kg for shelled Brazil nuts intended for direct human consumption / Mestrado / Ciência de Alimentos / Mestre em Ciência de Alimentos
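As a small illustration of the final comparison against the regulatory limit quoted in the abstract (10 µg/kg of total aflatoxins for shelled Brazil nuts, EU/ANVISA), the sketch below flags samples above the limit and computes a mean. The sample identifiers and values are invented, and treating "ND" (not detected) as 0 µg/kg is only one possible convention.

```python
# Illustrative check of total-aflatoxin results against the 10 µg/kg limit cited
# in the abstract (EU / ANVISA, shelled Brazil nuts for direct consumption).
# The sample values below are made up; "ND" (not detected) is treated as 0 µg/kg.
LIMIT_UG_PER_KG = 10.0

samples = {
    "PA-001": 104.2, "PA-002": 0.0, "PA-003": 3.5,   # Pará
    "AM-001": 7.44, "AM-002": 0.0,                   # Amazonas
    "SP-001": 0.2, "SP-002": 0.0,                    # São Paulo
}

total = len(samples)
exceeding = {k: v for k, v in samples.items() if v > LIMIT_UG_PER_KG}
mean = sum(samples.values()) / total

print(f"mean total aflatoxins: {mean:.2f} ug/kg over {total} samples")
print(f"{len(exceeding)} of {total} samples exceed {LIMIT_UG_PER_KG} ug/kg: {sorted(exceeding)}")
```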
|
273 |
Physique du quark top dans l'expérience CMS au démarrage du LHC / Physics of the top quark in the CMS experiment at the start-up of the LHC / Le grand, Thomas 28 September 2010 (has links)
La première partie de cette thèse porte sur l'amélioration de l'algorithme de l'étape d'initiation de reconstruction des traces de hadrons et de muons au sein du trajectographe au silicium de l'expérience CMS. Les différentes étapes de mise au point et de tests, qui ont permis d'aboutir à la qualification de ce nouvel algorithme en tant que méthode standard d'initiation de la reconstruction des traces, sont présentées dans ce document. La deuxième partie concerne la mise en place d'une méthode alternative de mesure de la section efficace de production des paires top-antitop dans l'expérience CMS lors du démarrage du LHC. Cette analyse est effectuée à partir du canal de désintégration semi-muonique avec au moins un muon supplémentaire provenant d'un des quarks bottom et a été réalisée en simulation complète, démontrant ainsi la possibilité d'une “redécouverte” du quark top avec 5 pb-1. Les 2.4 pb-1 de données réelles obtenues à la fin du mois d'août m'ont permis d'observer les premières paires top-antitop et d'effectuer une première mesure de section efficace : 171±77(stat.)±27(syst.) pb / The first part of this thesis deals with the improvement of the seeding algorithm for the reconstruction of hadron and muon tracks in the silicon tracker of the CMS experiment. The different stages of development and testing, which led to the qualification of this new algorithm as the standard seeding method for track reconstruction, are presented in this document. The second part is dedicated to the development of an alternative method to measure the top-antitop pair production cross-section in the CMS experiment at the start-up of the LHC. This analysis uses the semi-muonic decay channel with at least one additional muon coming from one of the bottom quarks, and was studied in full simulation, demonstrating the feasibility of “re-discovering” the top quark with 5 pb-1. The 2.4 pb-1 of data collected by the end of August allowed me to observe the first top-antitop pairs and to perform a first cross-section measurement: 171±77(stat.)±27(syst.) pb.
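The cross-section quoted above comes from a counting measurement, which in its simplest form is sigma = (N_obs - N_bkg) / (efficiency x integrated luminosity). The sketch below only illustrates that arithmetic: the integrated luminosity (2.4 pb-1) is taken from the abstract, while the event counts and efficiency are invented numbers that do not reproduce the quoted uncertainties.

```python
# Sketch of a counting-experiment cross-section estimate,
#   sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity).
# The integrated luminosity matches the abstract; N_obs, N_bkg and the
# efficiency are invented numbers chosen only to illustrate the formula.
import math

lumi_pb = 2.4          # integrated luminosity [pb^-1], from the abstract
n_obs = 35.0           # observed events in the semi-muonic selection (assumed)
n_bkg = 14.5           # estimated background events (assumed)
eff = 0.05             # selection efficiency x acceptance (assumed)

sigma = (n_obs - n_bkg) / (eff * lumi_pb)

# Statistical uncertainty from Poisson fluctuations of the observed count only.
sigma_stat = math.sqrt(n_obs) / (eff * lumi_pb)

print(f"sigma(ttbar) ~ {sigma:.0f} +/- {sigma_stat:.0f} (stat) pb")
```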
|
274 |
Behaviour of welded tubular structures in fire / Ozyurt, Emre January 2015 (has links)
This thesis presents the results of a research project to develop methods to carry out fire safety design of welded steel tubular trusses at elevated temperatures due to fire exposure. It deals with three subjects: resistance of welded tubular joints at elevated temperatures, effects of large truss deflection in fire on member design, and effects of localised heating. The objectives of the project are achieved through numerical finite element modelling at elevated temperatures using the commercial Finite Element software ABAQUS v6.10-1 (2011). Validation of the simulation model for joints is based on comparison against the test results of Nguyen et al. (2010) and Kurobane et al. (1986). Validation of the simulation model for trusses is through checking against the test results of Edwards (2004) and Liu et al. (2010). For welded tubular joints, extensive numerical simulations have been conducted on T-, Y-, X-, N- and non-overlapped K-joints subjected to brace axial compression or tension, considering a wide range of geometrical parameters. A uniform temperature distribution was assumed for both the chord and brace members. Results of the numerical simulations indicate that for gap K- and N-joints (two brace members, one in tension and the other in compression) and for T-, Y- and X-joints with the brace member under axial tensile load (one brace member only, in tension), it is suitable to use the same ambient temperature calculation equation as in the CIDECT (2010) or EN 1993-1-8 (CEN, 2005a) design guides and simply replace the ambient temperature strength of steel with the elevated temperature value. However, for T-, Y- and X-joints under brace compression load (one brace member only, in compression), the effect of large chord deformation should be considered. Large chord deformation changes the chord geometry and invalidates the yield line mechanism assumed at ambient temperature. As an approximation, the results of this research indicate that it is acceptable to modify the ambient temperature joint strength by a reduction factor for the elastic modulus of steel at elevated temperatures. In the current fire safety design method for steel trusses, a member-based approach is used. In this approach, the truss member forces are calculated at ambient temperature based on linear elastic analysis. These forces are then used to calculate the truss member limiting temperatures. An extensive parametric study has been carried out to investigate whether this method is appropriate. The parametric study encompasses different design parameters over a wide range of values, including truss type, joint type, truss span-to-depth ratio, critical member slenderness, applied load ratio, number of brace members, initial imperfection and thermal elongation. The results of this research show that, due to a truss undergoing large displacements at elevated temperatures, some truss members (compression brace members near the truss centre) experience large increases in member forces. Therefore, using the ambient temperature member force, as in the current truss fire safety design method, may overestimate the truss member critical temperature by 100 °C. A method has been proposed to analytically calculate the increase in brace compressive force due to large truss deformation. In this method, the maximum truss displacement is assumed to be span/30.
A comparison of the results calculated using the proposed method against the truss parametric study results has shown good agreement between the two sets of results, with the calculation results generally being slightly on the safe side. When different members of a truss are heated to different temperatures due to localised fire exposure, the brace members in compression experience increased compression due to restrained thermal expansion. To calculate the critical temperature of a brace member in a locally heated truss, it is necessary to consider this effect of restrained thermal expansion. It is also necessary to consider the beneficial effects of the adjacent members being heated, which tends to reduce the increase in compressive force in the critical member under consideration. Again, an extensive set of parametric studies has been conducted for different load ratios, slendernesses and axial restraint ratios. The results of this parametric study suggest that, to calculate the critical temperature of a brace member, it is not necessary to consider the effects of the third or further adjacent members being heated. For the remainder of the heated members, this thesis has proposed a linear elastic, static analysis method at ambient temperature to calculate the additional compressive force (some negative, indicating tension) in the critical member caused by the heated members (including the critical member itself and the adjacent members). The additional compressive force is then used to calculate the limiting temperature of the critical member. For this purpose, the approximate analytical equation of Wang et al. (2010) has been demonstrated to be suitable.
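A simplified reading of the joint-resistance rule summarised above is sketched below: the ambient-temperature joint resistance is scaled by the yield-strength reduction factor for tension-loaded and gap K/N joints, and by the elastic-modulus reduction factor for T-, Y- and X-joints with the brace in compression. The reduction-factor values are reproduced from memory of EN 1993-1-2 Table 3.1 and should be checked against the code; this is an illustrative sketch, not the thesis's validated design method.

```python
# Sketch of the joint-resistance reduction described in the abstract: the
# ambient-temperature joint strength is multiplied by an elevated-temperature
# reduction factor, k_y (yield strength) for tension-loaded and K/N gap joints,
# k_E (elastic modulus) for T/Y/X joints with the brace in compression.
# Factor values recalled from EN 1993-1-2 Table 3.1 - verify before use.

K_Y = {20: 1.00, 400: 1.00, 500: 0.78, 600: 0.47, 700: 0.23, 800: 0.11}
K_E = {20: 1.00, 400: 0.70, 500: 0.60, 600: 0.31, 700: 0.13, 800: 0.09}

def interp(table, temp_c):
    """Linear interpolation in a sparse reduction-factor table."""
    temps = sorted(table)
    if temp_c <= temps[0]:
        return table[temps[0]]
    if temp_c >= temps[-1]:
        return table[temps[-1]]
    for lo, hi in zip(temps, temps[1:]):
        if lo <= temp_c <= hi:
            f = (temp_c - lo) / (hi - lo)
            return table[lo] + f * (table[hi] - table[lo])

def joint_resistance(n_rd_ambient_kn, temp_c, brace_in_compression_txy):
    """Elevated-temperature joint resistance per the simplified rule above."""
    table = K_E if brace_in_compression_txy else K_Y
    return n_rd_ambient_kn * interp(table, temp_c)

# Example: a T-joint with 250 kN ambient resistance, brace in compression, at 600 C.
print(joint_resistance(250.0, 600.0, brace_in_compression_txy=True))   # ~77.5 kN
```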
|
275 |
Measurement of the WW production cross-section in Proton-Proton Collisions at sqrt(s) = 8 TeV with the ATLAS detector / Mesure de la section efficace de production WW dans les collisions proton-proton à sqrt(s) = 8 TeV avec le détecteur ATLAS / Gao, Jun 30 October 2015 (has links)
Le Modèle Standard (MS), actuelle théorie fondamentale de la physique des particules, fournit une description des particules élémentaires et de plusieurs interactions fondamentales : les forces électromagnétique, forte et faible. Au Centre Européen pour la Recherche Nucléaire (CERN), des scientifiques du monde entier cherchent à comprendre les lois fondamentales régissant l'Univers. Le LHC (Large Hadron Collider) y accélère des particules pour les faire entrer en collision au centre des détecteurs et obtenir des indications quant à la manière dont les particules interagissent, et ainsi appréhender les lois fondamentales de la nature. L'expérience ATLAS (A Toroidal LHC ApparatuS) couvre un large spectre de mesures physiques, incluant des mesures de précision du MS, la recherche du boson de Higgs, ou de traces de nouvelle physique. L'expérience CMS a un programme similaire. Les événements W+W− sont sélectionnés à partir de trois états finaux : ee, eµ et µµ. Afin de réduire le bruit de fond, constitué principalement de processus Drell-Yan ou de paires t¯t, une coupure est appliquée sur l'énergie transverse manquante, et les événements contenant des jets hadroniques satisfaisant certains critères de sélection sont rejetés. Les principaux bruits de fond résiduels, essentiellement des processus W+jets, top, Z+jets, sont estimés à l'aide de modèles établis à partir des données observées (méthodes data driven). Ces méthodes d'estimation sont validées en les comparant à d'autres méthodes indépendantes. La section efficace mesurée est 71.0+1.1−1.1(stat)+5.7−5.0(syst)+2.1−2.0(lumi) pb, en accord avec la prédiction NNLO du MS de 63.2+2.0−1.8 pb. / The Standard Model (SM), the current fundamental theory of particle physics, provides a description of the elementary particles and of the fundamental interactions: the electromagnetic, weak and strong forces. At the European Organization for Nuclear Research (CERN), physicists and engineers from all over the world are seeking to understand the fundamental laws of the universe. It is at CERN that the world's largest and most sophisticated experimental instruments have been built, to accelerate particles to energies of 3.5-4 TeV with the Large Hadron Collider (LHC). A Toroidal LHC ApparatuS (ATLAS) is one of the four main detectors at the LHC. In ATLAS, di-boson production is one of the most important electroweak processes. The electroweak sector of the SM, as well as the strong interactions, can be tested through precision measurements of the W+W− production cross section. A measurement of the W+W− production cross section in proton-proton collisions at a centre-of-mass energy of 8 TeV is presented here, using data collected with the ATLAS detector at the LHC corresponding to a total integrated luminosity of 20.3 fb^-1. The W+W− events are selected in three final states: ee, eµ and µµ. In order to suppress the background contamination, mainly from the Drell-Yan and ttbar processes, a cut on missing transverse energy is applied and events with hadronic jets satisfying appropriate selection criteria are rejected. The major backgrounds, mainly W+jets, top and Z+jets, are estimated with data-driven techniques. The measured cross section is 71.0+1.1−1.1(stat)+5.7−5.0(syst)+2.1−2.0(lumi) pb, which is consistent with the SM next-to-next-to-leading-order (NNLO) prediction of 63.2+2.0−1.8 pb.
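A quick way to see that the quoted measurement is consistent with the NNLO prediction is to combine the uncertainties in quadrature and express the difference in standard deviations, as sketched below. Adding the slightly asymmetric stat/syst/lumi errors in quadrature and ignoring correlations is a deliberate simplification for illustration only; the central values and uncertainties are those quoted in the abstract.

```python
# Rough compatibility check between the measured W+W- cross section and the
# NNLO prediction quoted in the abstract. Symmetric quadrature treatment of
# the asymmetric uncertainties is a simplification for illustration only.
import math

measured = 71.0
meas_up = math.sqrt(1.1**2 + 5.7**2 + 2.1**2)   # stat + syst + lumi, upward
meas_dn = math.sqrt(1.1**2 + 5.0**2 + 2.0**2)   # downward

predicted = 63.2
pred_up, pred_dn = 2.0, 1.8

# The measurement sits above the prediction, so combine the downward measurement
# error with the upward prediction error.
combined = math.sqrt(meas_dn**2 + pred_up**2)
pull = (measured - predicted) / combined

print(f"measured  = {measured:.1f} +{meas_up:.1f} -{meas_dn:.1f} pb")
print(f"predicted = {predicted:.1f} +{pred_up:.1f} -{pred_dn:.1f} pb")
print(f"difference ~ {pull:.1f} standard deviations")
```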
|
276 |
Modellering av tvärsnitt i betongbro med avseende på egenskaper som platta och balk / Wäster, Malin January 2013 (has links)
Examensarbetet behandlar ett brotvärsnitt som inte entydigt kan betraktas som ett balktvärsnitt eller plattvärsnitt. Med de måttdefinitioner som används vid broprojektering ska en plattkonstruktion ha en bredd som är fem gånger höjden, annars ska konstruktionen ses som en balk, där även balkens längd definieras att vara större än tre gånger höjden. Brotvärsnittet som studeras i detta examensarbete kan alltså definieras både som ett plattvärsnitt och som ett balktvärsnitt. Målet med arbetet är att undersöka om det är möjligt att finna en metod att konstruera denna typ av tvärsnitt som befinner sig i gränslandet mellan två definitioner. Skillnaderna mellan en plattas och en balks verkningssätt ligger i att plattan antas bära last i två riktningar medan en balk enbart bär last i en riktning. Examensarbetet är genomfört i samarbete med WSP Bro- och vattenbyggnad i Örebro, som konstruerade en bro med just detta tvärsnitt. Bro 344 över parkstråk i trafikplats Rinkeby å ramp mot Ärvinge är en 181 m lång bro i 9 spann och är belägen vid trafikplats Rinkeby, som är en del utav Trafikverkets projekt E18 Hjulsta – Kista. Lasterna som används i analyserna är betongens egentyngd, utbredd last av beläggning och vertikala trafiklaster. I ett första skede i arbetet analyseras modellerna med rörliga trafiklaster. Det framkom dock under arbetets gång att förenklingar vad gäller trafiklasterna måste göras, då arbetet skulle bli för omfattande annars. En statisk boggilast placeras ut i ett spann mitt i mellan dess tredjedelspunkt och halva spannlängden. Beräkningar utförs i en mjukvara där modellen både byggs upp av skalelement som en långsträckt platta, där snittkrafter kommer ut som enhet per meter, och med balkelement som en halvinspänd balk, där snittkrafter kommer ut i enhet per balk. Mjukvaran som används är ett tredimensionellt finit element-program, SOFISTIK, som likt många andra FE-program erbjuder användarvänliga modelleringsmiljöer, hanterar rörliga laster och har en mängd inbyggda moduler och funktioner. Beräkningarna som sedan utvärderas och jämförs är dels SOFISTIKs olika resultat för skalmodellen och balkmodellen, där dimensionerande armeringsmängder beräknas för max fältmoment och max stödmoment. Dessa resultat från SOFISTIKs skalmodell respektive balkmodell jämförs också med resultat från de mjukvaror som användes vid dimensioneringen från början, vilket var för skalmodellanalysen Brigade Standard och för balkanalysen Strip Step 3. Armeringsmängderna jämförs slutligen genom att studera tre fall:
• Skalmodell SOFISTIK - Brigade Standard
• Balkmodell SOFISTIK - Strip Step 3
• SOFISTIK skalmodell – balkmodell
Jämförelserna visar att både skalmodellerna från de olika programmen (SOFISTIK – Brigade Standard) och balkmodellerna från de olika programmen (SOFISTIK – Strip Step 3) ger likvärdiga armeringsmängder, vilket ger en trygg verifiering av modellerna. Vidare visar jämförelsen mellan skal- och balkmodell i SOFISTIK att balkmodellen ger avsevärt högre armeringsmängder, både i fält och över stöd. / The aim of this Master thesis is to study a cross-section of a bridge that cannot be unambiguously defined as either a beam cross-section or a slab cross-section. With the definitions used in bridge engineering, a slab construction has to have a width greater than five times the height; otherwise the construction should be regarded as a beam, whose length is also defined to be greater than three times the height.
The cross-section in this thesis can thus be treated both as a slab cross-section and as a beam cross-section. The aim of this work is to investigate whether it is possible to find a method to design this type of cross-section, which falls within both of these definitions. The difference in mode of action between a slab and a beam is that the slab is assumed to carry load in two directions while a beam only carries load in one direction. The work in this report has been performed in cooperation with the consulting company WSP Bridge & Hydraulic Design in Örebro, which designed a bridge with the studied section, Bro 344 över parkstråk i trafikplats Rinkeby å ramp mot Ärvinge. This bridge is 181 m long in 9 spans and is located at the Rinkeby traffic interchange, which is part of the Swedish Transport Administration project E18 Hjulsta - Kista. The loads discussed and analyzed are the dead weight of the concrete, the distributed load of the road surface and vertical traffic loads. In the first stage of the work the models were analyzed with moving traffic loads; it became apparent during the process, however, that the moving traffic loads had to be simplified, as the work would otherwise have become too extensive. A static bogie load is placed in one of the spans, halfway between its third point and mid-span. Calculations are performed in a computer software in which the bridge is modeled both with shell elements and with beam elements. The shell model is created as an elongated slab section in which the section forces come out per metre of width. The beam model is treated as a semi-restrained beam in which the section forces come out for the whole beam. The software used is a three-dimensional finite element program, SOFISTIK. Like many other FE programs, SOFISTIK provides a user-friendly modeling workspace, handles variable and moving loads and has a variety of built-in modules and functions. The calculations that are evaluated and compared are, firstly, the different results for the shell model and the beam model from the models made in SOFISTIK, where the design reinforcement amounts are calculated for the maximum span moment and the maximum support moment. These results from the SOFISTIK shell model and beam model are also compared with results from the software originally used in the design: Brigade Standard for the shell analysis and Strip Step 3 for the beam analysis. The amounts of design reinforcement are finally compared by studying three cases:
• Shell model: SOFISTIK - Brigade Standard
• Beam model: SOFISTIK - Strip Step 3
• SOFISTIK: shell model - beam model
The comparisons show that both the shell models from the two different programs (SOFISTIK - Brigade Standard) and the beam models from the two different programs (SOFISTIK - Strip Step 3) give equivalent amounts of reinforcement, which provides a sound verification of the models. Furthermore, the comparison between the shell model and the beam model in SOFISTIK shows that the beam model gives significantly higher amounts of reinforcement, both in the span and over the supports.
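The width/height rule quoted in the abstract can be written as a small helper, as sketched below, which also shows how a section can satisfy both the slab and the beam definition at once. The rule thresholds follow the abstract; the bridge dimensions in the example are invented.

```python
# Sketch of the slab/beam definitions quoted in the abstract: a slab should have
# a width greater than five times its height, while a beam is a member whose
# length exceeds three times its height. The dimensions below are invented; the
# point is only to show how a section can match both definitions at once.
def classify(width_m, height_m, length_m):
    slab_like = width_m > 5.0 * height_m
    beam_like = length_m > 3.0 * height_m
    if slab_like and beam_like:
        return "ambiguous: matches both the slab and the beam definition"
    if slab_like:
        return "slab"
    if beam_like:
        return "beam"
    return "neither definition applies"

# Hypothetical deck section of one span of the bridge.
print(classify(width_m=8.5, height_m=1.6, length_m=20.0))
```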
|
277 |
Optimization of Monte Carlo Neutron Transport Simulations with Emerging Architectures / Optimisation du code Monte Carlo neutronique à l’aide d’accélérateurs de calculs / Wang, Yunsong 14 December 2017 (has links)
L’accès aux données de base, que sont les sections efficaces, constitue le principal goulot d’étranglement aux performances dans la résolution des équations du transport neutronique par méthode Monte Carlo (MC). Ces sections efficaces caractérisent les probabilités de collisions des neutrons avec les nucléides qui composent le matériau traversé. Elles sont propres à chaque nucléide et dépendent de l’énergie du neutron incident et de la température du matériau. Les codes de référence en MC chargent ces données en mémoire pour l’ensemble des températures intervenant dans le système et utilisent un algorithme de recherche binaire dans les tables stockant les sections. Sur les architectures many-coeurs (typiquement Intel MIC), ces méthodes sont dramatiquement inefficaces du fait des accès aléatoires à la mémoire, qui ne permettent pas de profiter des différents niveaux de cache mémoire, et du manque de vectorisation de ces algorithmes. Tout le travail de la thèse a consisté, dans une première partie, à trouver des alternatives à cet algorithme de base en proposant le meilleur compromis performances/occupation mémoire qui tire parti des spécificités du MIC (multithreading et vectorisation). Dans un deuxième temps, nous sommes partis sur une approche radicalement opposée, approche dans laquelle les données ne sont pas stockées en mémoire, mais calculées à la volée. Toute une série d’optimisations de l’algorithme, des structures de données, vectorisation, déroulement de boucles et influence de la précision de représentation des données, ont permis d’obtenir des gains considérables par rapport à l’implémentation initiale. En fin de compte, une comparaison a été effectuée entre les deux approches (données en mémoire et données calculées à la volée) pour finalement proposer le meilleur compromis en termes de performance/occupation mémoire. Au-delà de l'application ciblée (le transport MC), le travail réalisé est également une étude qui peut se généraliser sur la façon de transformer un problème initialement limité par la latence mémoire (« memory latency bound ») en un problème qui sature le processeur (« CPU-bound ») et permet de tirer parti des architectures many-coeurs. / Monte Carlo (MC) neutron transport simulations are widely used in the nuclear community to perform reference calculations with minimal approximations. The conventional MC method has a slow convergence according to the law of large numbers, which makes simulations computationally expensive. Cross section computation has been identified as the major performance bottleneck for MC neutron codes. Typically, cross section data are precalculated and stored in memory before the simulation for each nuclide; thus, during the simulation, only table lookups are required to retrieve data from memory and the compute cost is trivial. We implemented and optimized a large collection of lookup algorithms in order to accelerate this data-retrieval process. Results show that significant speedup can be achieved over the conventional binary search on both CPU and MIC in unit tests other than real case simulations. Using vectorization instructions has proved effective on the many-core architecture due to its 512-bit vector units; on CPU this improvement is limited by a smaller register size. Further optimization such as memory reduction turns out to be very important since it largely improves computing performance. As can be imagined, all proposals of energy lookup are totally memory-bound, where computing units do little but wait for data.
In other words, the computing capability of modern architectures is largely wasted. Another major issue of energy lookup is that the memory requirement is huge: cross section data at one temperature for the up to 400 nuclides involved in a real case simulation require nearly 1 GB of memory, which makes simulations with several thousand temperatures infeasible on current computer systems. In order to solve the problems related to energy lookup, we began to investigate another on-the-fly cross section proposal called reconstruction. The basic idea behind the reconstruction is to do the Doppler broadening (performing a convolution integral) computation of cross sections on the fly, each time a cross section is needed, with a formulation close to standard neutron cross section libraries and based on the same amount of data. The reconstruction converts the problem from memory-bound to compute-bound: only several variables for each resonance are required instead of the conventional pointwise table covering the entire resolved resonance region. Though memory space is largely reduced, this method is really time-consuming. After a series of optimizations, results show that the reconstruction kernel benefits well from vectorization and can achieve 1806 GFLOPS (single precision) on a Knights Landing 7250, which represents 67% of its effective peak performance. Even if optimization efforts on reconstruction significantly improve the FLOP usage, this on-the-fly calculation is still slower than the conventional lookup method. In this situation, we began to port the code to GPGPUs to exploit potentially higher performance as well as higher FLOP usage. On the other hand, another evaluation has been planned to compare lookup and reconstruction in terms of power consumption: with the help of hardware and software energy measurement support, we expect to find a compromise solution between performance and energy consumption in order to face the "power wall" challenge along with hardware evolution.
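For reference, the conventional pre-tabulated energy lookup that the thesis identifies as memory-latency bound boils down to a binary search in a per-nuclide energy grid followed by linear interpolation, as sketched below. The grid and cross-section values are synthetic placeholders, and numpy's searchsorted stands in for the hand-written binary search discussed in the thesis.

```python
# Minimal sketch of the conventional energy lookup the thesis optimises:
# cross sections are pre-tabulated on a per-nuclide energy grid, and each
# retrieval is a binary search plus linear interpolation. The grid and the
# cross-section values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)

# Pointwise table for one nuclide at one temperature (energies in eV, xs in barns).
energy_grid = np.sort(rng.uniform(1e-5, 2e7, size=100_000))
xs_table = rng.uniform(0.1, 1_000.0, size=energy_grid.size)

def lookup_xs(energy_ev: float) -> float:
    """Binary search (np.searchsorted) + linear interpolation in the table."""
    i = np.searchsorted(energy_grid, energy_ev)
    i = min(max(i, 1), energy_grid.size - 1)
    e0, e1 = energy_grid[i - 1], energy_grid[i]
    frac = (energy_ev - e0) / (e1 - e0)
    return float(xs_table[i - 1] + frac * (xs_table[i] - xs_table[i - 1]))

# Each particle history performs many such lookups; the random, cache-unfriendly
# access pattern is what makes this memory-latency bound on many-core hardware.
print(lookup_xs(2.53e-2))   # thermal energy, 0.0253 eV
```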
|
278 |
The Effects of Shear Deformation in Rectangular and Wide Flange Sections / Iyer, Hariharan 16 March 2005 (has links)
Shear deformations are generally not considered in the structural analysis of beams and frames. But shear deformations in members with a low clear span-to-member depth ratio will be higher than normally expected, thus adversely affecting the stiffness of these members. Inclusion of shear deformation in analysis requires the values of the shear modulus (modulus of rigidity, G) and the shear area of the member. The shear area of the member is a cross-sectional property and is defined as the area of the section which is effective in resisting shear deformation. This value is always less than the gross area of the section. The ratio of the gross area of the section to its shear area is referred to as the form factor. There are a number of expressions available in the literature for the form factors of rectangular and wide flange sections. However, preliminary analysis revealed a high variation in the values given by them. The variation was attributed to the different assumptions made regarding the stress distribution and section behavior. This necessitated the use of three-dimensional finite element analysis of rectangular and wide flange sections to resolve the issue.
The purpose of finite element analysis was to determine which, if any, of the expressions in the literature provided correct answers. A new method of finite element analysis based on the principle of virtual work is used for analyzing rectangular and wide flange sections. The validity of the new method was established by analyzing rectangular sections for which closed form solutions for form factor were available. The form factors of various wide flange sections in the AISC database were calculated from finite element analysis and an empirical relationship was formulated for easy calculation of the form factor. It was also found that an empirical formula provided good results for form factors of wide flange sections.
Beam-column joint sub-assemblies were modeled and analyzed to understand the contribution of various components to the total drift. This was not very successful since the values obtained from the finite element analysis did not match the values calculated using virtual work. This discrepancy points to inaccuracies in modeling and, possibly, analysis of beam-column joints. This issue needs to be resolved before proceeding further with the analysis. / Master of Science
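As an illustration of why the form factor matters for members with low span-to-depth ratios, the sketch below adds the shear term to the bending deflection of a cantilever, delta = P*L^3/(3*E*I) + k*P*L/(G*A), with k = A/A_shear. It uses the classical textbook value k = 6/5 for a rectangular section and invented dimensions, not the FE-derived factors or the empirical formula developed in the thesis.

```python
# Cantilever tip deflection with bending and shear terms; the form factor k
# enters the shear term. Textbook value k = 6/5 for a rectangle is assumed,
# and the member dimensions are invented for illustration only.
def cantilever_tip_deflection(P, L, E, G, I, A, k):
    bending = P * L**3 / (3.0 * E * I)
    shear = k * P * L / (G * A)
    return bending, shear

# Rectangular section 300 mm x 600 mm, steel, 1.2 m clear span (deep member).
b, d, L = 0.3, 0.6, 1.2                   # m
E, G = 200e9, 77e9                        # Pa
A, I = b * d, b * d**3 / 12.0             # m^2, m^4
P = 500e3                                 # N

bend, shear = cantilever_tip_deflection(P, L, E, G, I, A, k=6.0 / 5.0)
print(f"bending {bend*1e3:.2f} mm, shear {shear*1e3:.2f} mm "
      f"({shear / (bend + shear):.0%} of total)")
```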
|
279 |
Cross-section measurements of top-quark pair production in association with a hard photon at 13 TeV with the ATLAS detector / Zoch, Knut 06 July 2020 (has links)
No description available.
|
280 |
Reduced stress method for steel in class 4 cross-sections : Evaluation of the reduced stress method for a railway bridge / Reducerad spänning för stål i tvärsnittsklass 4 : Utvärdering av metoden reducerad spänning för en järnvägsbro / Badrous, Therese, Lund, Ebba January 2021 (has links)
The effective cross-section method, also called the reduced cross-section method, is generally used for steel in class 4 cross-sections to account for local buckling. This method is somewhat complicated and time-consuming, which often leads to engineers not using profiles in class 4 cross-sections. The reduced stress method is an alternative method for handling slender steel cross-sections. These two methods are described in the Eurocode, the latter only briefly. The national annex states that the reduced stress method should not be used, however, without justification for the general recommendation. This study is a comparison of the two methods and is intended to provide a better understanding of the reduced stress method. The calculation process and design of steel profiles in class 4 cross-sections can in this way become more efficient. This is done by determining when it is most profitable to use the reduced stress method instead of the effective cross-section method. Thus, the use of profiles in class 4 cross-sections can become a more obvious choice in the industry. This study considered a simply supported I-beam in an open railway bridge subjected to bending moment, where the same conditions were investigated for each method. The effective cross-section method is implemented by reducing the cross-sectional area and was calculated manually. In the reduced stress method, it is the yield stress that is reduced. The reduced stress method was analyzed both through FEM and manual calculations in this study. The results showed that the reduced stress method performed through FEM gave a result similar to that of the effective cross-section method, which makes it an appealing method. The reduced stress method with manual calculation, however, gave a more conservative result. These methods are relatively different and recommendations for each method are presented in this report. / Idag behandlas ståltvärsnitt i tvärsnittsklass 4 generellt med hjälp av metoden effektivt tvärsnitt för att beakta lokal buckling. Metoden är en aning komplicerad och tidskrävande, vilket leder till att konstruktörer överlag inte använder profiler i tvärsnittsklass 4. Reducerad spänning är en alternativ metod för hantering av slanka ståltvärsnitt. Dessa två metoder beskrivs i Eurokoden, varav den sistnämnda mer kortfattat. I den nationella bilagan står det att metoden reducerad spänning ej bör användas, dock utan motivering till det allmänna rådet. Studien är en jämförelse av de två olika metoderna och är ämnad att ge en bättre förståelse av metoden reducerad spänning. Således kan beräkningsgången samt projektering för stålprofiler i tvärsnittsklass 4 effektiviseras. Detta genom att avgöra när det är mest lönsamt att använda reducerad spänning framför effektivt tvärsnitt. Följaktligen kan användning av profiler i tvärsnittsklass 4 bli ett mer självklart val i branschen. Denna studie omfattade en fritt upplagd I-balk i en öppen järnvägsbro utsatt för böjande moment, där samma förutsättningar har undersökts för respektive metod. Effektivt tvärsnitt går ut på att reducera en tvärsnittsarea och har utförts via handberäkningar. I metoden reducerad spänning är det sträckgränsen som reduceras. I denna studie undersöktes metoden reducerad spänning via FEM samt handberäkningar. Resultatet påvisade att metoden reducerad spänning utförd via FEM gav ett liknande resultat som metoden effektivt tvärsnitt, vilket gör det till en attraktiv metod. Reducerad spänning via handberäkning gav dock ett mer konservativt resultat.
Metoderna är relativt olika och rekommendationer för tillämpning av respektive metod presenteras i denna rapport.
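To make the contrast between the two methods concrete, the sketch below computes a Winter-type plate reduction factor rho for an internal compression element and then uses it in both ways: as an area reduction (effective cross-section method) and as a stress cap rho*f_y (the idea behind the reduced stress method). The formula and constants are quoted from memory of EN 1993-1-5 (eq. 4.2) and should be verified; the actual reduced stress method in section 10 of that code works with a system slenderness based on alpha_ult and alpha_cr, so this panel-level sketch only illustrates the principle. The plate dimensions are invented.

```python
# Sketch of the plate-buckling reduction factor rho for an internal compression
# element (Winter-type formula, recalled from EN 1993-1-5 eq. 4.2 - verify
# against the code), and the two ways it can be applied: reducing the area
# (effective cross-section method) or capping the stress at rho*f_y (the idea
# behind the reduced stress method).
import math

def rho_internal(b, t, f_y, psi=1.0, k_sigma=4.0):
    """Reduction factor for an internal element in compression (psi = stress ratio)."""
    eps = math.sqrt(235.0 / f_y)
    lam_p = (b / t) / (28.4 * eps * math.sqrt(k_sigma))
    if lam_p <= 0.673:
        return 1.0
    return min(1.0, (lam_p - 0.055 * (3.0 + psi)) / lam_p**2)

# Slender web plate of an I-girder (illustrative dimensions): 1200 x 12 mm, S355.
b, t, f_y = 1200.0, 12.0, 355.0            # mm, mm, MPa
rho = rho_internal(b, t, f_y)

A_gross = b * t                            # mm^2 (web only, for illustration)
A_eff = rho * A_gross                      # effective cross-section method
sigma_limit = rho * f_y                    # reduced stress method (stress cap)

print(f"rho = {rho:.3f}")
print(f"effective web area:   {A_eff:.0f} mm^2 of {A_gross:.0f} mm^2")
print(f"reduced stress limit: {sigma_limit:.0f} MPa instead of {f_y:.0f} MPa")
```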
|