Instrumentation development for magneto-transport and neutron scattering measurements at high pressure and low temperature

Wang, Weiwei, January 2013
High-pressure, high-magnetic-field and low-temperature techniques are required to investigate magnetic transitions and quantum critical behaviour in different ferromagnetic materials, and to elucidate how novel forms of superconductivity and other new states are brought about. In this project, several instruments for magneto-transport and neutron scattering measurements have been designed and built. They include inserts for a dilution refrigerator and pressure cells for resistivity, magnetic susceptibility and inelastic neutron scattering measurements. The technical drawings of the low-temperature inserts and pressure cells were produced with the Solid Edge computer-aided design software, and the performance and safety assessments were carried out with the ANSYS finite element analysis package. The pressure cells developed include diamond anvil cells, piston cylinder cells and some auxiliary equipment. Pressure effects on physical properties such as the electrical resistivity and magnetic ordering of some ferromagnetic materials were studied with the equipment developed. A two-axis rotating stage was developed and deployed with a dilution refrigerator combined with a superconducting magnet to measure various physical properties as a function of the orientation of the sample with respect to the applied field at sub-Kelvin temperatures. The rotating stage is made of beryllium copper (BeCu) alloy. To avoid entanglement of the wires, custom-designed “flexi cables” (copper tracks printed on Kapton foil, manufactured with a yield of nearly 100%) were produced to work with the rotating stage. The performance of the rotating stage has been demonstrated by a study of quantum oscillations in the electrical resistivity of the high-field ferromagnetic superconductor URhGe. A miniature diamond anvil cell based on the turnbuckle principle has been designed. The cell, made of BeCu alloy, is 7 mm in length and 7 mm in diameter.
It has been shown to reach a maximum pressure of 10 GPa with diamond anvils with 800 μm culets. The small dimensions of the cell allow it to fit into existing sample environments, such as the Physical Properties Measurement System (PPMS) and the Magnetic Properties Measurement System (MPMS) from Quantum Design (USA), and onto the customized two-axis rotating stage built for the dilution fridge. It also thermalizes quickly, allowing rapid cooling and heating during experiments. The cell can be used for both resistivity and magnetic susceptibility measurements. To ensure the hydrostaticity of the pressure around the sample in the turnbuckle cell, a gearbox was designed for cryogenic loading of liquid argon and for room-temperature gas loading of either helium or argon at a loading pressure of up to 0.3 GPa. Pressure effects on the Curie temperature of the ferromagnet PrNi were studied in a diamond anvil cell. Four-probe resistance measurements under pressures of up to 9 GPa were carried out in a PPMS. The possibility of tuning the physical properties of the material by altering the pressure has been demonstrated. By analysing the results of the electrical resistivity measurements under pressure, it was concluded that the Curie temperature of PrNi increases with pressure at a rate of 0.85 K per GPa. The quantity Δ(∂ρ/∂T), which reflects part of the entropy change, also increases with pressure. The expected quantum critical point has not been observed in this material up to 9 GPa. A large-volume high-pressure piston cell for inelastic neutron scattering measurements has been designed; it can reach a pressure of up to 1.8 GPa with a sample volume in excess of 400 mm³. The dimensions of the part of the cell exposed to the neutron beam have been optimized to minimize attenuation of the beam. The novel design of the piston seal also eliminates the need for a sample container, which makes it possible to accommodate larger samples and reduces absorption.
The pressure in the cell is measured by a manganin pressure gauge placed next to the sample. The performance of the cell was illustrated by an inelastic neutron scattering study of UGe2.
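The reported pressure dependence of the Curie temperature can be extracted from such resistivity data by a simple linear fit. The sketch below is illustrative only: the pressure points and Curie temperatures are hypothetical values chosen to reproduce the ~0.85 K/GPa slope quoted above, not measured data.

```python
import numpy as np

# Hypothetical pressure points (GPa) and Curie temperatures (K);
# illustrative values only, constructed to mimic the reported slope.
pressure = np.array([0.0, 2.0, 4.0, 6.0, 9.0])
t_curie = np.array([20.0, 21.7, 23.4, 25.1, 27.65])

# Least-squares linear fit: Tc(P) ≈ slope * P + Tc(0)
slope, intercept = np.polyfit(pressure, t_curie, 1)
print(f"dTc/dP = {slope:.2f} K/GPa, Tc(0) = {intercept:.2f} K")
```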

Optimization of Monte Carlo Neutron Transport Simulations with Emerging Architectures

Wang, Yunsong, 14 December 2017
Monte Carlo (MC) neutron transport simulations are widely used in the nuclear community to perform reference calculations with minimal approximations. The conventional MC method converges slowly, following the law of large numbers, which makes simulations computationally expensive. Cross-section computation has been identified as the major performance bottleneck of MC neutron codes. Typically, cross-section data are precalculated for each nuclide and stored in memory before the simulation; during the simulation, only table lookups are required to retrieve the data, and the compute cost is trivial. We implemented and optimized a large collection of lookup algorithms in order to accelerate this data-retrieval process. Results show that a significant speedup over the conventional binary search can be achieved on both CPU and MIC in unit tests, as opposed to real-case simulations. Using vectorization instructions proved effective on the many-core architecture thanks to its 512-bit vector units; on the CPU this improvement is limited by the smaller register size. Further optimizations such as memory reduction turned out to be very important, since they largely improve computing performance. As can be imagined, all the energy-lookup proposals are entirely memory-bound: the computing units do little but wait for data. In other words, the computing capability of modern architectures is largely wasted.
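The conventional energy-lookup scheme described above can be sketched as a binary search over a pointwise cross-section table followed by linear interpolation. The energy grid, cross-section values, and the `lookup_xs` helper below are all hypothetical, for illustration only:

```python
import bisect

# Hypothetical pointwise cross-section table for one nuclide:
# energy grid (eV, ascending) with corresponding cross sections (barns).
energy_grid = [1.0, 10.0, 100.0, 1e3, 1e4, 1e5, 1e6]
xs_table = [50.0, 20.0, 8.0, 4.0, 3.0, 2.5, 2.0]

def lookup_xs(e):
    """Binary-search lookup with linear interpolation between grid points."""
    i = bisect.bisect_right(energy_grid, e) - 1
    i = min(max(i, 0), len(energy_grid) - 2)
    frac = (e - energy_grid[i]) / (energy_grid[i + 1] - energy_grid[i])
    return xs_table[i] + frac * (xs_table[i + 1] - xs_table[i])

print(lookup_xs(55.0))  # 14.0 barns: halfway between the 10 eV and 100 eV points
```

Each call walks an unpredictable path through the table, so caches and vector units sit largely idle; this is the memory-latency-bound behaviour the thesis sets out to mitigate.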
Another major issue with energy lookup is its huge memory requirement: cross-section data at a single temperature for the up to 400 nuclides involved in a real-case simulation require nearly 1 GB of memory, which makes simulations with several thousand temperatures infeasible on current computer systems. To address this problem, we investigated another, on-the-fly cross-section proposal called reconstruction. The basic idea behind reconstruction is to perform the Doppler-broadening computation of cross sections (a convolution integral) on the fly, each time a cross section is needed, with a formulation close to standard neutron cross-section libraries and based on the same amount of data. Reconstruction converts the problem from memory-bound to compute-bound: only a few variables per resonance are required, instead of the conventional pointwise table covering the entire resolved resonance region. Though the memory footprint is largely reduced, this method is very time-consuming. After a series of optimizations, results show that the reconstruction kernel benefits well from vectorization and can achieve 1806 GFLOPS (single precision) on a Knights Landing 7250, which represents 67% of its effective peak performance. Even though these optimization efforts significantly improve the FLOP usage, the on-the-fly calculation is still slower than the conventional lookup method. We have therefore begun porting the code to GPGPUs to exploit potentially higher performance as well as higher FLOP usage. In parallel, a further evaluation has been planned to compare lookup and reconstruction in terms of power consumption: with the help of hardware and software energy-measurement support, we expect to find a good compromise between performance and energy consumption in order to face the "power wall" challenge that accompanies hardware evolution.
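The reconstruction idea can be illustrated with a single-level Breit-Wigner resonance kernel evaluated at 0 K. This is a simplified sketch: the resonance parameters below are invented, and the actual method additionally performs Doppler broadening (the convolution integral mentioned above), which is omitted here.

```python
import math

def slbw_capture(e, e0, gamma, gamma_g, sigma0):
    """0 K single-level Breit-Wigner capture cross section (barns).

    e: incident neutron energy (eV); e0: resonance energy (eV);
    gamma: total width (eV); gamma_g: radiative width (eV);
    sigma0: peak cross section (barns). All parameter values hypothetical."""
    x = 2.0 * (e - e0) / gamma
    return sigma0 * (gamma_g / gamma) * math.sqrt(e0 / e) / (1.0 + x * x)

# A handful of parameters per resonance replaces the full pointwise table:
# this is what turns the kernel from memory-bound into compute-bound.
resonances = [(6.67, 0.026, 0.023, 2.4e4), (20.9, 0.034, 0.023, 8.0e3)]

def reconstruct_capture(e):
    # Sum the contribution of every resonance at the requested energy.
    return sum(slbw_capture(e, *r) for r in resonances)
```

Each evaluation is pure floating-point arithmetic with no table traversal, which is why this formulation vectorizes well but costs many more FLOPs per cross-section query than a lookup.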
