  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Designing Microstructure through Reverse Peritectoid Phase Transformation in Ni₃Mo Alloy

Khalfallah, Ibrahim, 03 February 2017
High-energy ball milling and powder metallurgy methods were used to produce a partially alloyed nickel-molybdenum powder of γ-Ni₃Mo composition (Ni-25 at.% Mo). Milled powders were cold-compacted, sintered and solutionized at 1300°C for 100 h, and then quenched. Three transformation studies were performed. First, the intermetallic γ-Ni₃Mo was formed from the supersaturated solid solution at temperatures between 600°C and 900°C for up to 100 h. A fully transformed, stable γ-Ni₃Mo phase formed at 600°C after 100 h, while aging between 650°C and 850°C for 25 h was not sufficient to complete the transformation. The δ-NiMo phase was observed only at 900°C, as cellular and basket-strand precipitates. Second, the reverse peritectoid transformation from γ-Ni₃Mo to α-Ni and δ-NiMo was performed. Supersaturated solid-solution samples were first aged at 600°C for 100 h and quenched to form the equilibrium γ-Ni₃Mo phase; they were then heat-treated between 910°C and 1050°C for up to 10 h and quenched. Regardless of heat-treatment temperature, samples heat-treated for shorter times exhibited small δ-NiMo precipitates along and within the grain boundaries of the α-Ni phase, and these coarsened with time. Third, the transformation from the supersaturated α-Ni solid solution to the peritectoid two-phase region was performed. The samples were aged between 910°C and 1050°C for up to 10 h and quenched; δ-NiMo precipitates appeared in the α-Ni matrix as small particles and coarsened with aging time. In all three cases, hardness increased and peaked much as in traditional aging, except that the peak occurred far more rapidly in the second and third cases. In the first case, hardness increased by about 113.6% due to the development of the new phases, while it increased by 90.5% and 77.2% in the second and third cases, respectively. / Master of Science
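The isothermal results above (full transformation at 600°C after 100 h, incomplete transformation after 25 h at 650-850°C) are the kind of behavior usually summarized by Avrami (JMAK) kinetics. A minimal sketch, with illustrative rate constants rather than values fitted to this alloy:

```python
import math

def jmak_fraction(t_hours: float, k: float, n: float = 1.5) -> float:
    """Fraction transformed under isothermal JMAK kinetics: X = 1 - exp(-(k*t)^n)."""
    return 1.0 - math.exp(-((k * t_hours) ** n))

# Hypothetical rate constants (1/h), chosen only to illustrate the trend above.
for label, k in [("600 C", 0.05), ("750 C", 0.02)]:
    for t in (25, 100):
        print(f"{label}, t = {t:>3} h: X = {jmak_fraction(t, k):.2f}")
```

With these placeholder constants the 100 h hold approaches X = 1 while the 25 h hold does not, mirroring the incomplete transformations reported at intermediate aging temperatures.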
102

Quality and Durability of Rubberized Asphalt Cement and Warm Rubberized Asphalt Cement

ADHIKARI, THAM, 25 April 2013
This thesis discusses and documents findings from an investigation of performance-based testing of asphalt cement (AC), warm mixed asphalt cement, asphalt rubber (AR), and warm asphalt rubber. A number of control, warm, and asphalt rubber binders from Ontario construction contracts were investigated for their compliance with conventional Superpave® test methods such as the rolling thin film oven (RTFO), pressure aging vessel (PAV), dynamic shear rheometer (DSR), and bending beam rheometer (BBR) tests, as well as additional specification tests such as the extended BBR and double-edge-notched tension tests. The quality and durability of those binders were determined. Quality means the ability of an asphalt binder to reach a set of specified properties, whereas durability measures how well the asphalt retains its original characteristics when exposed to normal weathering and aging. One warm AC and two field-blended asphalt rubber samples showed high levels of physical hardening, which can lead to premature cracking. The warm asphalt cement lost 8 °C when stored isothermally for three days at low temperatures according to Ontario's extended bending beam rheometer (BBR) protocol (LS-308); the two asphalt rubber samples lost 10 °C and 12 °C following the same conditioning. Many of the studied asphalt samples showed deficient strain tolerance as measured in Ontario's double-edge-notched tension (DENT) test (LS-299). In a study of warm rubberized asphalt cement with improved properties, a number of compositions were prepared with soft Cold Lake AC and a small quantity of naphthenic oil. These binders showed little chemical and physical hardening and reasonable critical crack tip opening displacements (CTOD). Strain tolerance was much improved by co-blending with a high-vinyl styrene-butadiene-styrene (SBS) polymer and a small amount of sulfur. / Thesis (Master, Chemistry) -- Queen's University, 2013-04-24 22:54:20.07
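The grade losses quoted above come from the extended BBR protocol, in which the binder's low-temperature limiting grade is re-measured after three days of isothermal conditioning. A minimal sketch of that arithmetic, using hypothetical limiting temperatures (the actual LS-308 procedure interpolates them from creep stiffness and m-value measurements):

```python
def continuous_low_grade(t_s300: float, t_m300: float) -> float:
    """Low-temperature continuous grade (deg C): the warmer (more restrictive) of the
    stiffness-limited (S = 300 MPa) and m-value-limited (m = 0.300) BBR temperatures,
    shifted by the 10 C lab-to-pavement offset used in Superpave grading."""
    return max(t_s300, t_m300) - 10.0

# Hypothetical BBR limiting temperatures for one binder, fresh vs. 3-day conditioned.
fresh = continuous_low_grade(t_s300=-26.0, t_m300=-25.0)        # -35.0 C
conditioned = continuous_low_grade(t_s300=-20.0, t_m300=-17.0)  # -27.0 C
print(f"grade loss after conditioning: {conditioned - fresh:+.1f} C")  # +8.0 C
```

A binder that "loses 8 °C" in this sense must be graded 8 °C warmer after conditioning, which is why strong physical hardening points to premature cracking risk.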
103

Hardening strategies for HPC applications / Estratégias de enrobustecimento para aplicações PAD

Oliveira, Daniel Alfonso Gonçalves de, January 2017
HPC device reliability is one of the major concerns for supercomputers today and for the next generation. In fact, the high number of devices in large data centers makes the probability of having at least one corrupted device very high. In this work, we first evaluate the problem by performing radiation experiments. The data from the experiments give us realistic error rates for HPC devices. Moreover, we evaluate a representative set of algorithms, deriving general insights into the reliability of parallel algorithms and programming approaches. To understand the problem better, we propose a novel methodology that goes beyond quantifying it. We qualify the error by evaluating the criticality of each corrupted execution through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis quantifies and qualifies radiation effects on applications' output, correlating the number of corrupted elements with their spatial locality. We also provide the mean relative error (dataset-wise) to evaluate radiation-induced error magnitude. Furthermore, we designed a fault injector, CAROL-FI, to understand the problem further by collecting information through fault-injection campaigns that is not accessible through radiation experiments. We inject different fault models to analyze the sensitivity of given applications. We show that portions of applications can be graded by different criticalities; mitigation techniques can then be relaxed or hardened based on the criticality of the particular portions. This work also evaluates the reliability behavior of six different architectures, ranging from HPC devices to embedded ones, with the aim of isolating code- and architecture-dependent behaviors. For this evaluation, we present and discuss radiation experiments that cover a total of more than 352,000 years of natural exposure and fault-injection analysis based on a total of more than 120,000 injections. Finally, Error-Correcting Code (ECC), Algorithm-Based Fault Tolerance (ABFT), and Duplication With Comparison hardening strategies are presented and evaluated on HPC devices through radiation experiments. We present and compare both the reliability improvement and the imposed overhead of the selected hardening solutions. Then, we propose and analyze the impact of selective hardening for HPC algorithms. We perform fault-injection campaigns to identify the most critical source-code variables and show how to select the best candidates to maximize the reliability/overhead ratio.
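A short sketch of the two output-quality metrics named above: the dataset-wise mean relative error and a simple spatial-locality count of corrupted elements. These definitions are a plausible reading of the abstract, not CAROL-FI's actual implementation:

```python
import numpy as np

def mean_relative_error(golden: np.ndarray, faulty: np.ndarray, eps: float = 1e-12) -> float:
    """Dataset-wise mean relative error between a golden run and a corrupted run."""
    return float(np.mean(np.abs(faulty - golden) / (np.abs(golden) + eps)))

def corrupted_clusters(golden: np.ndarray, faulty: np.ndarray, tol: float = 0.0) -> int:
    """Count contiguous runs of corrupted elements (1D) as a crude locality measure:
    a few large clusters indicate spatially correlated corruption."""
    mism = np.abs(faulty - golden) > tol
    return int(np.sum(mism[1:] & ~mism[:-1])) + int(mism[0])

golden = np.ones(1000)
faulty = golden.copy()
faulty[100:110] += 0.5                        # one spatially local burst of corruption
print(mean_relative_error(golden, faulty))    # ~0.005
print(corrupted_clusters(golden, faulty))     # 1 cluster (10 corrupted elements)
```

Two runs with identical mismatch counts can thus still be told apart by error magnitude and locality, which is the point of qualifying rather than merely counting corrupted executions.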
104

Mechanical properties of polymer glasses: theory and simulation / Propriétés mécaniques des polymères vitreux : théorie et simulation

Conca, Luca, 02 May 2016
This manuscript presents recent extensions to the PFVD model, which is based on the heterogeneity of the dynamics of glassy polymers at the scale of a few nanometers and is solved by 3D numerical simulation, with the aim of providing a unified physical description of the mechanical and dynamical properties of glassy polymers under plastic deformation. Three main topics are treated. (1) Plasticization: under applied deformation, polymers yield at strains of a few percent and stresses of a few tens of MPa. We propose that the elastic energy stored at the scale of dynamical heterogeneities accelerates the local dynamics; yield stresses of a few tens of MPa are obtained at a few percent deformation, and yielding is found to be due to a relatively small number of local plastic events. (2) Homogenization of the dynamics: it has been observed that the dynamics becomes faster and more homogeneous in the plastic regime and that the average mobility reaches a stationary value, linear in the strain rate. We propose that stress-induced acceleration of the dynamics enhances the diffusion of monomers from slow domains to fast ones (a facilitation mechanism), further accelerating the local dynamics; this reproduces the homogenization of the dynamics, with the same features observed in experiments. (3) Strain hardening, in highly entangled and cross-linked polymers: at large strain, stress increases with increasing strain, with a characteristic slope (the hardening modulus) of order 10-100 MPa well below the glass transition. Analogously to a recent theory, we propose that local deformation orients monomers in the drawing direction and slows down the dynamics, as a consequence of the intensification of local interactions. The measured hardening moduli and the effects of cross-linking and strain rate are comparable with experimental data. In addition, strain hardening is found to have a stabilizing effect on strain localization and shear banding.
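For orientation, the hardening modulus mentioned above plays the role of G_R in the textbook Gaussian-network expression σ(λ) = σ_Y + G_R(λ² − 1/λ); this standard form is shown only as a reference point and is not necessarily the constitutive relation produced by the PFVD model:

```python
def true_stress(lam: float, sigma_y: float = 40.0, g_r: float = 30.0) -> float:
    """Post-yield true stress (MPa) with Gaussian strain hardening:
    sigma = sigma_y + G_R * (lam**2 - 1/lam), lam being the draw ratio.
    sigma_y of a few tens of MPa and G_R in the 10-100 MPa range, as in the text."""
    return sigma_y + g_r * (lam ** 2 - 1.0 / lam)

for lam in (1.0, 1.5, 2.0):
    print(f"lambda = {lam:.1f}: sigma = {true_stress(lam):6.1f} MPa")
```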
105

Observação direta da interação de discordâncias com defeitos em nióbio irradiado por meio de microscopia eletrônica de transmissão de alta voltagem / Direct observation of the interaction of dislocations with defects in irradiated niobium by high-voltage transmission electron microscopy

OTERO, MAURO P., 09 October 2014
Doctoral thesis (Tese de Doutoramento) / Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP)
106

Developing Precipitation Hardenable High Entropy Alloys

Gwalani, Bharat, 08 1900
High entropy alloys (HEAs) are alloys constructed with five or more elements mixed in near-equal proportions; they are also known as multi-principal-element alloys (MPEAs) or complex concentrated alloys (CCAs). This PhD dissertation presents research conducted to develop precipitation-hardenable high entropy alloys based on a much-studied fcc equiatomic quaternary alloy (CoCrFeNi). Minor additions of aluminum make the alloy amenable to precipitating ordered intermetallic phases in the fcc matrix; aluminum also affects grain-growth kinetics and Hall-Petch hardenability. A combinatorial approach was used to assess composition-microstructure-property relationships in high entropy alloys, or more broadly in complex concentrated alloys, using laser-deposited compositionally graded AlxCrCuFeNi2 (0 < x < 1.5) alloys as a candidate system. The composition gradient was achieved from CrCuFeNi2 to Al1.5CrCuFeNi2 over a length of ~25 mm, deposited using the laser engineered net shaping (LENS) process from a blend of elemental powders. With increasing Al content, there was a gradual change from an fcc-based microstructure (including the ordered L12 phase) to a bcc-based microstructure (including the ordered B2 phase), accompanied by a progressive increase in microhardness. Based on this combinatorial assessment, two promising fcc-based precipitation-strengthened systems were identified, Al0.3CrCuFeNi2 and Al0.3CoCrFeNi, and both compositions were subsequently thermo-mechanically processed via conventional techniques. The phase stability and mechanical properties of these alloys were investigated and are presented. Additionally, the activation energy for grain growth as a function of Al content was investigated: the apparent activation energy for fcc grain growth increases by about three times going from Al0.1CoCrFeNi (3 at.% Al) to Al0.3CoCrFeNi (7 at.% Al). Furthermore, Al addition leads to the precipitation of highly refined ordered L12 (γ′) and B2 precipitates in Al0.3CoCrFeNi. A detailed investigation of the precipitation of the ordered phases in Al0.3CoCrFeNi and their thermal stability was carried out using atom probe tomography (APT), transmission electron microscopy (TEM), and in situ and ex situ synchrotron X-ray analyses. The alloy, strengthened via grain-boundary strengthening following the Hall-Petch relationship, offers a large increment of strength for a small variation in grain size. The tensile strength of Al0.3CoCrFeNi is increased by 50% upon precipitation of fine-scale γ′ precipitates, and precipitation of the bcc-based ordered B2 phase can strengthen the alloy further. Fine-tuning the microstructure by thermo-mechanical treatments achieved a wide range of mechanical properties in the same alloy: the Al0.3CoCrFeNi HEA exhibited an ultimate tensile strength (UTS) of ~250 MPa with ductility of ~65%, a UTS of ~1100 MPa with ductility of ~30%, and a UTS of 1850 MPa with ductility of 5% after various thermo-mechanical treatments. The grain sizes and the precipitate types and size scales manipulated in the alloy result in different strength-ductility combinations. The alloy thus presents fertile ground for development by grain-boundary and precipitation strengthening, and its very high activation energy for grain growth makes it aptly suited for high-temperature applications.
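The Hall-Petch relation invoked above is σ_y = σ₀ + k_y/√d. A minimal sketch with illustrative constants (not values fitted to Al0.3CoCrFeNi) showing why a modest grain-size refinement produces a large strength increment:

```python
import math

def hall_petch(d_um: float, sigma0: float = 125.0, k_y: float = 0.8) -> float:
    """Yield strength (MPa) from sigma_y = sigma0 + k_y / sqrt(d).
    d_um: grain diameter in micrometres; sigma0 in MPa; k_y in MPa*m^0.5.
    Constants are illustrative, not fitted to this alloy."""
    return sigma0 + k_y / math.sqrt(d_um * 1e-6)

for d in (100.0, 10.0, 1.0):
    print(f"d = {d:6.1f} um -> sigma_y = {hall_petch(d):5.0f} MPa")
```

Shrinking grains from 100 µm to 1 µm raises the sketch's yield strength from roughly 200 MPa to above 900 MPa, the same order as the strength spread reported for the differently processed conditions.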
107

Rotační bleskové kalení s pomocí elektronového svazku a laserového paprsku / Rotary flash hardening using an electron beam and a laser beam

Klusáček, Martin, January 2019
This master's thesis deals with the surface hardening of steels, especially the rotary flash hardening of 42CrMo4 steel (WNr 1.3563, ČSN 15142). In this method, homogeneous heating of the whole surface occurs during very fast rotation of the component. The theoretical part of the thesis describes the most common surface-hardening methods, with a focus on laser and electron beam technologies. In the experimental part, a special device for this application was constructed, and rotary flash hardening was performed with different radiation sources and device parameters. With the laser beam, a hardened surface layer with a maximum thickness of 0.7 mm and a width of 5.6 mm was achieved. Results with the electron beam were considerably better, because that technology allows the power distribution along the beam width to be controlled in order to improve the width/thickness ratio of the hardened layer; a maximum layer thickness of 1.4 mm and a width of 13.4 mm were achieved.
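The electron beam's advantage can be read directly off the reported layer dimensions; a one-line check of the width/thickness ratios using the figures from the abstract:

```python
# Hardened-layer dimensions (width, thickness) in mm, as reported above.
layers = {"laser beam": (5.6, 0.7), "electron beam": (13.4, 1.4)}
for source, (width, thickness) in layers.items():
    print(f"{source}: width/thickness = {width / thickness:.1f}")
# laser beam: 8.0; electron beam: 9.6 -- and ~4.8x the hardened cross-section overall
```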
