271 |
Análise de escalabilidade de uma implementação paralela do simulated annealing acoplado. Silva, Kayo Gonçalves e. 25 March 2013.
Previous issue date: 2013-03-25 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events during software execution. It arises mainly from the demand for high computational performance and from the difficulty of increasing the speed of a single processing core. Although multicore processors are now easy to find, several algorithms are still not suited to running on parallel architectures. The CSA algorithm consists of a group of Simulated Annealing (SA) optimizers working together to refine the solution. Each SA optimizer runs in its own thread, and the threads are executed by different processors. In the analysis of parallel performance and scalability, the following metrics were investigated: the execution time; the speedup of the algorithm as the number of processors increases; and the efficiency in the use of processing elements as the size of the treated problem increases. In addition, the quality of the final solution was verified. For the study, this paper proposes a parallel version of CSA and its equivalent serial version. Both algorithms were analyzed on 14 benchmark functions. For each of these functions, CSA was evaluated using 2 to 24 optimizers. The results are presented and discussed in light of these metrics. The paper concludes that CSA is a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency. / O presente trabalho analisa o desempenho paralelo de uma implementação do Simulated Annealing
Acoplado (CSA, do inglês Coupled Simulated Annealing) para otimização de variáveis contínuas sem restrições. O processamento paralelo é uma forma eficiente de processamento de informação com ênfase na exploração de eventos simultâneos na execução de um software. Ele surge principalmente devido às elevadas exigências de desempenho computacional e à dificuldade em aumentar a velocidade de um único núcleo de processamento. Apesar das CPUs multiprocessadas, ou processadores multicore, serem facilmente encontrados atualmente, diversos algoritmos ainda não são adequados para executar em arquiteturas paralelas. O algoritmo do CSA é caracterizado por um grupo de otimizadores Simulated Annealing (SA) trabalhando em conjunto no refinamento da solução. Cada otimizador SA é executado em uma única thread, e essas são executadas por diferentes processadores. Na análise de desempenho e escalabilidade paralela, as métricas investigadas foram: o tempo de execução; o speedup do algoritmo com respeito ao aumento do número de processadores; e a eficiência na utilização de elementos de processamento com relação ao aumento da instância do problema tratado. Além disso, foi verificada a qualidade da solução final. Para o estudo, esse trabalho analisa uma versão paralela do CSA e sua versão serial equivalente. Ambos os algoritmos foram analisados sobre 14 funções de referência. Para cada uma dessas funções, o CSA é avaliado utilizando de 2 a 24 otimizadores. Os resultados obtidos são exibidos e comentados observando-se as métricas de análise. As conclusões do trabalho caracterizam o CSA como um bom algoritmo paralelo, tanto na qualidade das soluções quanto na escalabilidade e eficiência paralela.
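As a brief reader's aid, the scalability metrics named above have their usual textbook definitions (assumed here; the dissertation may use refinements such as scaled speedup):

\[ S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p}, \]

where \(T_1\) is the execution time of the serial version, \(T_p\) the execution time with \(p\) processors (here, 2 to 24 SA optimizers), and the efficiency \(E(p)\) is tracked as the problem size grows in order to judge scalability.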
|
272 |
Otimização multimodal através de novas técnicas baseadas em clusterização nebulosa / Multimodal optimization by new techniques based on fuzzy clustering. Ana Carolina Rios Coelho. 04 July 2011.
Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Neste trabalho, é proposta uma nova família de métodos a ser aplicada à otimização de problemas multimodais. Nestas técnicas, primeiramente são geradas soluções iniciais com o intuito de explorar o espaço de busca. Em seguida, com a finalidade de encontrar mais de um ótimo, estas soluções são agrupadas em subespaços utilizando um algoritmo de clusterização nebulosa. Finalmente, são feitas buscas locais através de métodos determinísticos de otimização dentro de cada subespaço gerado na fase anterior com a finalidade de encontrar-se o ótimo local. A família de métodos é formada por seis variantes, combinando três esquemas de inicialização das soluções na primeira fase e dois algoritmos de busca local na terceira. A fim de que esta nova família de métodos possa ser avaliada, seus constituintes são comparados com outras metodologias utilizando problemas da literatura e os resultados alcançados são promissores. / In this thesis, a new family of methods designed for multimodal optimization is introduced. In these techniques, initial solutions are first generated in order to explore the search space. These solutions are then grouped into clusters by a fuzzy-clustering algorithm so that multiple optima can be found. Finally, a deterministic local optimization method is run within each cluster to reach the corresponding local optimum. The family is formed by six variants, combining three initialization schemes in the first phase with two local search algorithms in the third. These methods are compared against other techniques from the literature on benchmark problems, with promising results.
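A minimal sketch of the three-phase scheme described above (exploration, fuzzy clustering, deterministic local search in each cluster) is given below; the fuzzy c-means implementation, the Nelder-Mead local search, the test function and all parameter values are illustrative assumptions rather than the thesis's actual variants.

```python
import numpy as np
from scipy.optimize import minimize

def fuzzy_cmeans(X, c, m=2.0, iters=200, tol=1e-7, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

def multimodal_optimize(f, bounds, n_samples=200, n_clusters=5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))   # phase 1: explore the search space
    _, U = fuzzy_cmeans(X, n_clusters)                    # phase 2: fuzzy clustering
    optima = []
    for k in range(n_clusters):                           # phase 3: local search per cluster
        members = X[U.argmax(axis=1) == k]
        if len(members) == 0:
            continue
        x0 = min(members, key=f)                          # best sampled point in the cluster
        optima.append(minimize(f, x0, method="Nelder-Mead").x)
    return optima

# Toy multimodal example (illustrative only): several minima of sin^2(3x) + 0.1 x^2.
f = lambda x: np.sin(3.0 * x[0]) ** 2 + 0.1 * x[0] ** 2
print(multimodal_optimize(f, bounds=[(-3.0, 3.0)]))
```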
|
273 |
Meta-heurística baseada em simulated annealing para programação da produção em máquinas paralelas com diferentes datas de liberação e tempos de setup / Metaheuristic based on simulated annealing for production scheduling on parallel machines with different release dates and setup times. Mesquita, Fernanda Neiva. 15 December 2015.
Previous issue date: 2015-12-15 / This study deals with scheduling problems on identical parallel machines with sequence-independent setup times, different release dates, and makespan minimization. This production environment is common in the automotive industry, where new identical machines or equipment may be added to workstations along the production line to expand capacity. Any production process requires effective management through Production Planning and Control (PPC). This activity includes production scheduling, that is, the allocation of resources to the execution of tasks on a time basis. Scheduling is one of the most complex tasks in production management because of the need to deal with several different types of resources and concurrent activities. Furthermore, the number of solutions grows exponentially in several dimensions, according to the number of tasks, operations or machines, which gives the problem a combinatorial nature. In the environment treated in this work, each task has the same processing time on any machine. The only restrictions considered are setup times that depend solely on the task waiting for processing and release dates different from zero, characteristics that are very common in industry. Since no work was found in the literature that deals with this environment, let alone one using the Simulated Annealing metaheuristic, a method was developed for the problem, together with the initial solution, its perturbation schemes, and the definition of lower bounds for the makespan. / Este estudo trata de problemas de máquinas paralelas com tempos de setup independentes,
diferentes datas de liberação e minimização do makespan. Este ambiente de produção é comum na
indústria automobilística que pode haver postos de trabalho em meio à linha de produção, em que
são adicionadas novas máquinas ou equipamentos iguais para ampliar a capacidade produtiva.
Qualquer processo produtivo requer um gerenciamento eficaz por meio do Planejamento e
Controle da Produção (PCP). Esta atividade inclui a programação da produção, ou seja, a alocação
de recursos para execução de tarefas em uma base de tempo. A atividade de programação é uma
das tarefas mais complexas no gerenciamento da produção, pois a necessidade de lidar com
diversos tipos diferentes de recursos e atividades simultâneas. Além disso, o número de soluções
cresce exponencialmente em várias dimensões, de acordo com a quantidade de tarefas, operações
ou máquinas, gerando assim uma natureza combinatória ao problema. O ambiente tratado neste
trabalho cada tarefa tem o mesmo tempo de processamento em qualquer máquina. Considerando a
restrição de tempos de setup independente apenas da tarefa que espera por processamento e a
presença de datas de liberação diferentes de zero características muito práticas nas indústrias.
Como não foram encontrados na literatura trabalhos que tratassem desse ambiente, ainda menos que utilizassem a meta-heurística Simulated Annealing, foi então desenvolvido o método para o problema, juntamente com a solução inicial, os respectivos esquemas de perturbação e a definição de limitantes inferiores para o makespan.
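A minimal sketch of a simulated-annealing heuristic for the problem described above (identical parallel machines, per-job release dates, setup times that depend only on the waiting job, makespan objective) follows; the neighbourhood move, cooling schedule and all numbers are illustrative assumptions and not the dissertation's actual perturbation schemes or parameters.

```python
import math
import random

def makespan(schedule, p, r, s):
    """schedule: one job list per machine; p, r, s: processing, release and setup times."""
    cmax = 0.0
    for jobs in schedule:
        t = 0.0
        for j in jobs:
            t = max(t, r[j]) + s[j] + p[j]   # wait for release, then setup, then process
        cmax = max(cmax, t)
    return cmax

def neighbour(schedule, rng):
    """Move one random job to a random position on a random machine."""
    new = [list(m) for m in schedule]
    src = rng.choice([i for i, m in enumerate(new) if m])
    job = new[src].pop(rng.randrange(len(new[src])))
    dst = rng.randrange(len(new))
    new[dst].insert(rng.randint(0, len(new[dst])), job)
    return new

def sa_schedule(p, r, s, n_machines, t0=50.0, alpha=0.995, iters=20000, seed=1):
    rng = random.Random(seed)
    jobs = sorted(range(len(p)), key=lambda j: r[j])        # earliest-release-date initial order
    cur = [jobs[i::n_machines] for i in range(n_machines)]  # round-robin assignment
    cur_cost = makespan(cur, p, r, s)
    best, best_cost, temp = cur, cur_cost, t0
    for _ in range(iters):
        cand = neighbour(cur, rng)
        cost = makespan(cand, p, r, s)
        if cost <= cur_cost or rng.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= alpha                                        # geometric cooling
    return best, best_cost

# Tiny illustrative instance: 6 jobs on 2 machines.
p = [4, 3, 5, 2, 6, 3]; r = [0, 1, 0, 4, 2, 5]; s = [1, 1, 2, 1, 2, 1]
print(sa_schedule(p, r, s, n_machines=2))
```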
|
274 |
Projeto de amplificadores de baixo ruído usando algoritmos metaheurísticos / Low-noise amplifier design using metaheuristic algorithms. César William Vera Casañas. 27 May 2013.
O projeto de amplificadores de baixo ruído (LNA) aparenta ser um trabalho simples pelos poucos componentes ativos e passivos que o compõem, porém a alta correlação entre os seus parâmetros de projeto dificulta muito esse trabalho. Esta dissertação apresenta uma proposta para contornar essa dificuldade: o uso de algoritmos metaheurísticos, em particular algoritmos genéticos e simulated annealing. Algoritmos metaheurísticos são técnicas avançadas que emulam princípios físicos ou naturais para resolver problemas com alto grau de complexidade. Esses algoritmos estão emergindo nos últimos anos porque têm mostrado eficiência e eficácia. São feitos neste trabalho os projetos de três LNAs, dois (LNA1 e LNA2) para sistemas com arquitetura homódina (LNA com carga capacitiva) e um (LNA3) para sistemas com arquitetura heteródina (LNA com carga resistiva), utilizando-se algoritmos genéticos e simulated annealing (recozimento simulado). Apresenta-se inicialmente a análise detalhada da configuração escolhida para os projetos (fonte comum cascode com degeneração indutiva FCCDI). A frequência de operação dos LNAs é 1,8 GHz e a fonte de alimentação de 2,0 V. Para o LNA1 e o LNA2 se atingiu uma figura de ruído de 2,8 dB e 3,2 dB, consumo de potência de 6,8 mW e 2,7 mW e ganho de tensão de 22 dB e 24 dB, respectivamente. Para o LNA3 se atingiu uma figura de ruído de 3,5 dB, consumo de potência de 7,8 mW e ganho de tensão de 15,5 dB. Os resultados obtidos e as comparações feitas com LNAs da literatura demonstram a viabilidade e a eficácia da aplicação de algoritmos metaheurísticos no projeto de LNA. Neste trabalho utilizaram-se as ferramentas ELDO (simulador de circuitos elétricos), versão 2009.1 patch1 64 bits, ASITIC (para projetar e simular os indutores), versão 03.19.00.0.0.0, e MATLAB (o toolbox fornece os algoritmos metaheurísticos), versão 7.9.0.529 R2009b. Além disso, os projetos foram desenvolvidos na tecnologia CMOS 0,35 µm da AMS (Austria Micro Systems). / The design of low noise amplifiers (LNAs) seems to be a simple task because of the small number of active and passive devices involved; however, the strong correlation among the LNA design parameters makes it considerably harder. This research presents a proposal to work around this difficulty: the use of metaheuristic algorithms, in particular genetic algorithms and simulated annealing. Metaheuristic algorithms are advanced techniques that emulate physical or natural principles to solve problems of high complexity. They have emerged in recent years because they have shown both effectiveness and efficiency. In this dissertation three LNAs were designed using genetic algorithms and simulated annealing: two (LNA1 and LNA2) for a homodyne architecture (LNA with capacitive load) and one (LNA3) for a heterodyne architecture (LNA with resistive load). First, a detailed analysis of the configuration chosen for the designs (common-source cascode with inductive degeneration) is presented. The operating frequency is 1.8 GHz and the power supply is 2.0 V for all LNAs. LNA1 and LNA2 reached noise figures of 2.8 dB and 3.2 dB, power dissipations of 6.8 mW and 2.7 mW, and voltage gains of 22 dB and 24 dB, respectively. LNA3 reached a noise figure of 3.5 dB, a power dissipation of 7.8 mW, and a voltage gain of 15.5 dB. The results obtained and the comparisons with LNAs from the literature demonstrate the feasibility and effectiveness of applying metaheuristic algorithms to LNA design.
This study was developed with the help of the tools ELDO (electric circuit simulator), version 2009.1 patch1 64 bits, ASITIC (to design and simulate the inductors), version 03.19.00.0.0.0, and MATLAB (whose toolbox provides the metaheuristic algorithms), version 7.9.0.529 R2009b. Furthermore, the designs were developed in the 0.35 µm CMOS technology of AMS (Austria Micro Systems).
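Purely as an illustration of the kind of parameter search the dissertation describes, the sketch below runs SciPy's dual_annealing over a handful of LNA design variables; in the actual work the cost was evaluated with the ELDO circuit simulator and the metaheuristics came from the MATLAB toolbox, so the variables, bounds, targets and the crude analytic cost model here are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import dual_annealing

NF_TARGET_DB, GAIN_TARGET_DB, POWER_BUDGET_MW = 3.0, 20.0, 8.0   # assumed design targets

def lna_cost(x):
    """x = [W_um, Ibias_mA, Lg_nH, Ls_nH]; crude surrogate model, not a real LNA model."""
    w, ibias, lg, ls = x
    nf_db = 1.5 + 30.0 / (w * np.sqrt(ibias)) + 0.05 * ls         # noise-figure surrogate
    gain_db = 10.0 * np.log10(w * ibias * lg / (1.0 + ls))        # voltage-gain surrogate
    power_mw = 2.0 * ibias                                        # 2.0 V supply assumed
    return (max(0.0, nf_db - NF_TARGET_DB) ** 2                   # penalize missed targets
            + max(0.0, GAIN_TARGET_DB - gain_db) ** 2
            + max(0.0, power_mw - POWER_BUDGET_MW) ** 2)

bounds = [(50.0, 600.0), (1.0, 6.0), (1.0, 20.0), (0.1, 3.0)]     # hypothetical variable ranges
result = dual_annealing(lna_cost, bounds, seed=7, maxiter=500)
print(result.x, result.fun)
```

In practice each cost evaluation would call the circuit simulator rather than an analytic surrogate, which is what makes metaheuristics attractive here: they need only cost values, not gradients.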
|
275 |
Determinação dos parâmetros de convecção-dispersão-transferência de massa em meio poroso usando tomografia computadorizada / Determination of convection-dispersion-mass transfer parameters in porous media using computed tomography. Vidal Vargas, Janeth Alina (1983-). 27 August 2018.
Orientador: Osvair Vidal Trevisan / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Previous issue date: 2015 / Resumo: O conhecimento dos fenômenos físicos envolvidos no transporte de fluidos no meio poroso é muito importante para o projeto e o sucesso dos processos de recuperação melhorada de petróleo. O deslocamento miscível é um dos métodos mais eficientes de recuperação melhorada de petróleo. O parâmetro mais relevante na eficiência do deslocamento miscível é a dispersão, que controla a evolução da zona de mistura dos dois fluidos e a propagação do fluido injetado. Neste trabalho é desenvolvido e avaliado um modelo matemático para o deslocamento miscível 1-D em meios heterogêneos. O modelo, referido como modelo de concentração total (MCT) é desenvolvido com base na equação de convecção-dispersão (ECD) considerando a interação entre a rocha e os fluidos. Os parâmetros fenomenológicos envolvidos no MCT são o coeficiente de dispersão, o coeficiente de transferência de massa, a porosidade efetiva do meio poroso no momento de deslocamento e a fração de soluto que é depositada ou retirada do meio poroso. Estes parâmetros podem ser determinados por meio de ajustes multiparâmétricos do modelo aos dados obtidos em laboratório. Para avaliar a aplicação do modelo MCT foram realizados dois experimentos A e B, cada um formado por 4 e 5 testes de deslocamento respectivamente. Os testes de deslocamento utilizaram duas salmouras e foram realizados empregando-se uma rocha carbonática. A evolução das concentrações ao longo do meio poroso foi medida por Tomografia Computadorizada de Raios-X (TC). A grande quantidade de dados dos perfis de concentração determinados a partir das imagens da TC do Experimento A foi analisada e ajustada utilizando-se o modelo MCT por meio do método metaheurístico de recozimento simulado (Simulated Annealing, SA). O procedimento de ajuste global, considerando todas as curvas do histórico de concentração, foi utilizado para a determinação dos parâmetros governantes dos fenômenos envolvidos. A quantidade de dados utilizados e a robustez do método permitiu um ajuste muito bom do modelo aos dados experimentais. Determinou-se um coeficiente de dispersão de aproximadamente 0,01cm2/s para vazão de 1 cm3/min e 0,05 cm2/s para vazão de 5 cm3/min. Foram avaliados também os parâmetros de transferência de massa e interação do fluido com o meio poroso. O Experimento B foi realizado com a finalidade de comprovar a deposição de soluto enquanto o fluido se deslocava através da amostra de rocha. No modelo MCT, este fenômeno foi quantificado por meio do parâmetro fr. Os perfis de concentração do Experimento B foram medidos na entrada, ao longo da amostra (rocha) e na saída. A partir desses perfis, foi realizado um balanço de massa para avaliar a fração de deposição de soluto (fr) formulada e determinada a partir do MCT. Os valores de fr obtidos foram de 0,2 a 0,4, que são valores coerentes com os resultados obtidos com o modelo MCT / Abstract: The knowledge of the physical phenomena involved in fluid transport in porous medium is very important for the design and successful execution of oil enhanced recovery processes. Miscible displacement is one of the most efficient recovery methods. Dispersion is a key phenomenon in miscible displacement. It controls the evolution of the mixing zone of both fluids and the propagation of injected fluid. The present study focuses on the development and evaluation of a mathematical model for the 1-D miscible and active displacement in an intrinsically heterogeneous porous media. 
The model, referred to as the total concentration model (TCM), is developed based on the convection-dispersion equation (CDE), considering the interaction between rock and fluids. The phenomenological parameters involved in the TCM are the dispersion coefficient, the mass transfer coefficient, the effective porosity of the porous medium at the time of the displacement, and the fraction of solute that is deposited in or removed from the porous medium. These parameters may be determined through multiparametric matching of the model to data obtained in the laboratory. In order to evaluate the application of the TCM, two sets of experiments (A and B), totaling 9 tests, were carried out. The tests were conducted with two brines displaced in carbonate rock samples. The concentration evolution along the porous medium was measured by X-ray computed tomography (CT). The vast amount of data from the concentration profiles determined from the CT images of set A was analyzed and matched to the TCM using the simulated annealing (SA) metaheuristic. The global matching procedure, considering all curves in the concentration history, was used to determine the governing parameters of the phenomena involved. The amount of data used and the robustness of the method allowed a very good match of the model to the experimental data. Dispersion coefficients of approximately 0.01 cm2/s for a flow rate of 1 cm3/min and 0.05 cm2/s for a flow rate of 5 cm3/min were determined. The parameters of mass transfer and of the fluid interaction with the rock porous structure were also evaluated. Experiment B was carried out in order to confirm solute deposition while the fluid flowed through the rock sample. In the TCM, this phenomenon is quantified by the fr parameter. The concentration profiles of Experiment B were measured at the inlet, along the rock sample, and at the outlet. From these profiles a mass balance was carried out to evaluate the fraction of solute deposited (fr) during the experiment. The determined values for fr were 0.2 to 0.4, figures that are consistent with the results obtained with the TCM matching procedure / Doutorado / Reservatórios e Gestão / Doutora em Ciências e Engenharia de Petróleo
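For reference, one common way to write the kind of model the TCM builds on, a 1-D convection-dispersion equation coupled to first-order mass transfer with a deposited/immobile fraction, is shown below; this is a generic textbook form assumed for illustration, not necessarily the exact formulation derived in the thesis:

\[ \frac{\partial C}{\partial t} + \beta\,\frac{\partial C^{*}}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}} - v\,\frac{\partial C}{\partial x}, \qquad \frac{\partial C^{*}}{\partial t} = k_{m}\left(C - C^{*}\right), \]

where \(C\) is the flowing (mobile) concentration, \(C^{*}\) the deposited or immobile concentration, \(v\) the interstitial velocity, \(D\) the dispersion coefficient, \(k_{m}\) the mass-transfer coefficient, and \(\beta\) a retention/capacity factor; the multiparametric matching then adjusts \(D\), \(k_{m}\), the effective porosity and the deposited fraction until the simulated profiles reproduce the CT-measured ones.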
|
276 |
Planering av stränggjutningsproduktion : En heuristisk metod. Äng, Oscar; Trygg, Alexander. January 2017.
Detta arbete syftar till att undersöka om det är möjligt att med en heuristisk metod skapa giltiga lösningar till ett problem vid planering av stränggjutningsproduktion på SSAB. Planeringsproblemet uppstår när stål av olika sorter ska gjutas under samma dag. Beroende på i vilken ordning olika kundordrar av stål gjuts uppstår spill av olika storlek. Detta spill ska minimeras och tidigare arbete har genomförts på detta problem och resulterat i en matematisk modell för att skapa lösningar till problemet. Det tar i praktiken lång tid att hitta bra lösningar med modellen och frågeställningen är om det går att göra detta med en heuristisk metod för att kunna generera bra lösningar snabbare. Med inspiration från Variable Neighbourhood Search, Simulated Annealing och tabusökning har heuristiker skapats, implementerats och utvärderats mot den matematiska modellen. En av heuristikerna presterar bättre än den matematiska modellen gör på 10 minuter. Matematiska modellens resultat efter 60 minuter körtid är bättre än den utvecklade heuristiken, men resultaten är nära varandra. Körtiden för heuristiken tar signifikant mindre tid än 10 minuter. / This study aims to investigate whether it is possible to use a heuristic method to create feasible solutions to a Cast Batching Problem at SSAB. The problem occurs when different kinds of steel are to be cast during the same day. Depending on the order in which the groups of different steel are placed, different amounts of waste are produced, and the goal is to minimize this waste. Earlier work on this problem resulted in a mathematical model for creating feasible solutions. In practice, the time it takes to create good solutions is long, and the question is whether a heuristic method can generate good solutions in a shorter amount of time. Drawing on inspiration from metaheuristics such as Variable Neighbourhood Search, Simulated Annealing and Tabu Search, multiple heuristics have been created, implemented and evaluated against the mathematical model. One of the heuristics performs better than the mathematical model does in 10 minutes. The result from the mathematical model after 60 minutes of running time is slightly better than the heuristic's, but the results are similar. With regard to running time, the heuristic takes considerably less than 10 minutes.
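A minimal tabu-search sketch for the kind of sequencing problem described above (ordering casting groups so that waste is minimized) is shown below; the waste() model, move type, tabu tenure and instance are illustrative assumptions, not SSAB's actual cost structure or the heuristics developed in the thesis.

```python
import random

def waste(seq, grade):
    """Hypothetical cost: one unit of waste whenever two adjacent orders differ in steel grade."""
    return sum(1 for a, b in zip(seq, seq[1:]) if grade[a] != grade[b])

def tabu_search(grade, iters=2000, tenure=15, n_moves=50, seed=3):
    rng = random.Random(seed)
    n = len(grade)
    cur = list(range(n))
    rng.shuffle(cur)
    best, best_cost = cur[:], waste(cur, grade)
    tabu = {}                                   # move -> iteration until which it stays tabu
    for it in range(iters):
        candidates = []
        for _ in range(n_moves):                # sample a set of swap moves
            i, j = sorted(rng.sample(range(n), 2))
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            cost = waste(cand, grade)
            if tabu.get((i, j), -1) < it or cost < best_cost:   # tabu check with aspiration
                candidates.append((cost, (i, j), cand))
        if not candidates:
            continue
        cost, move, cur = min(candidates, key=lambda c: c[0])
        tabu[move] = it + tenure
        if cost < best_cost:
            best, best_cost = cur[:], cost
    return best, best_cost

# Toy instance: 12 orders of 3 steel grades (illustrative only).
grades = [0, 1, 2, 0, 1, 2, 0, 0, 1, 2, 2, 1]
print(tabu_search(grades))
```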
|
277 |
Increase the capacity of continuous annealing furnaces at Ovako. Dahlqvist, V. January 2012.
The capacity for soft annealing of low-alloyed tubes in Ovako's continuous annealing furnaces has been evaluated by comparing how it is done today with information from published and internal articles on the subject. It was found that it is possible to reduce the cycle time by 30 % for one furnace, 55 % for another and 72 % for two furnaces. Two separate full-scale tests were made to assess whether the faster soft-annealing procedure was feasible. The tests were performed without any reconstruction of the furnace and were made by continuously varying the speed of the batch inside the furnace. The temperature in the batch was measured and compared with results from computer simulations of the heating/cooling sequences. The computer simulations were performed in COMSOL. The soft annealing was evaluated according to the SEP-520 standard, which means evaluating the microstructure and hardness. The results show that the faster heat treatment could yield lower grades than today but still meet its requirements. To achieve this increase, a reconstruction of the furnaces is needed; the reconstruction is briefly treated in the report. Ideas to further increase the speed of the soft-annealing procedure are also presented.
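The heating/cooling simulations mentioned above were carried out in COMSOL on the real furnace and batch geometry; purely to illustrate the type of calculation involved, a minimal 1-D explicit finite-difference sketch of transient heat conduction into a steel wall is given below, with generic material data and an assumed furnace temperature rather than Ovako's actual values.

```python
import numpy as np

k, rho, cp = 40.0, 7800.0, 500.0        # W/(m K), kg/m3, J/(kg K): generic steel data
alpha = k / (rho * cp)                  # thermal diffusivity, m2/s
L, nx = 0.01, 51                        # 10 mm wall discretized into 51 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                # explicit scheme is stable for dt <= dx^2 / (2 alpha)

T = np.full(nx, 20.0)                   # initial temperature, degC
T_furnace = 720.0                       # assumed soft-annealing furnace temperature, degC
t, t_end = 0.0, 300.0                   # simulate 5 minutes of heating

while t < t_end:
    T[0] = T_furnace                    # heated surface held at furnace temperature
    T[-1] = T[-2]                       # insulated back face
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt

print(f"mid-wall temperature after {t_end:.0f} s: {T[nx // 2]:.1f} degC")
```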
|
278 |
Hyperdoping Si with deep-level impurities by ion implantation and sub-second annealing. Liu, Fang. 11 October 2018.
Intermediate band (IB) materials have attracted considerable research interest since they can dramatically enhance near-infrared light absorption, leading to applications in so-called intermediate band solar cells and infrared photodetectors. Hyperdoping Si with deep-level impurities is one of the most effective approaches to forming an IB inside Si.
In this thesis, titanium- (Ti) or chalcogen-doped Si with concentrations far exceeding the Mott transition limits (~ 5×10^19 cm-3 for Ti) is fabricated by ion implantation followed by pulsed laser annealing (PLA) or flash lamp annealing (FLA). The structural and electrical properties of the implanted layer are investigated by channeling Rutherford backscattering spectrometry (cRBS) and Hall measurements.
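For orientation, the critical concentration quoted above is of the kind estimated with the standard Mott criterion (a textbook estimate assumed here for illustration, not a derivation from the thesis):

\[ n_{c}^{1/3}\, a_{H}^{*} \approx 0.26 \quad\Longrightarrow\quad n_{c} \approx \left(\frac{0.26}{a_{H}^{*}}\right)^{3}, \]

where \(a_{H}^{*}\) is the effective Bohr radius of the impurity state; working backwards, the quoted limit of ~5×10^19 cm-3 for Ti would correspond to an effective radius of roughly 0.7 nm, consistent with the strongly localized wavefunction of a deep-level impurity.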
For Si supersaturated with Ti, it is shown that Ti-implanted Si subjected to liquid phase epitaxy exhibits cellular breakdown at high doping concentrations during the rapid solidification, preventing Ti incorporation into the Si matrix. However, the out-diffusion and the cellular breakdown can be effectively suppressed by solid phase epitaxy during FLA, leading to a much higher Ti incorporation. In addition, the cellular-breakdown microstructure that forms also complicates the interpretation of the electrical properties. After FLA, the samples remain insulating even at the highest Ti implantation fluence, whereas after PLA the sheet resistance decreases with increasing Ti concentration. According to the results from conductive atomic force microscopy (C-AFM), the decrease of the sheet resistance after PLA is attributed to percolation of the Ti-rich cellular walls rather than to an insulator-to-metal transition caused by Ti doping.
Se-hyperdoped Si samples with different Se concentrations are fabricated by ion implantation followed by FLA. The study of the structural properties of the implanted layer reveals that most Se atoms are located at substitutional lattice sites. Temperature-dependent sheet resistance shows that the insulator-to-metal transition occurs at a Se peak concentration of around 6.3 × 10^20 cm-3, demonstrating the formation of an IB in the host semiconductor. The correlation between the structural and electrical properties under different annealing processes is also investigated. The results indicate that the degree of crystalline lattice recovery of the implanted layers and the Se substitutional fraction depend on the pulse duration and energy density of the flash. The sample annealed with a short pulse duration (1.3 ms) shows better conductivity than the one annealed with a long pulse duration (20 ms). The electrical properties of the hyperdoped layers correlate well with the structural properties resulting from the different annealing processes.
Chapter 1 Introduction 1
1.1 Shallow and Deep level impurities in semiconductors 1
1.2 Challenges for hyperdoping semiconductors with deep level Impurities 2
1.3 Solid vs. liquid phase epitaxy 5
1.4 Previous work 7
1.4.1 Transition metal in Si 7
1.4.2 Chalcogens in Si 10
1.5 The organization of this thesis 15
Chapter 2 Experimental methods 18
2.1 Ion implantation 18
2.1.1 Basic principle of ion implantation 18
2.1.2 Ion implantation equipment 19
2.1.3 Energy loss 20
2.2 Pulsed laser annealing (PLA) 23
2.3 Flash lamp annealing (FLA) 24
2.4 Rutherford backscattering and channeling spectrometry (RBS/C) 27
2.4.1 Basic principles 27
2.4.2 Analysis of the elements in the target 28
2.4.3 Channeling and RBS/C 29
2.4.4 Analysis of the impurity lattice location 31
2.5 Hall measurements 31
2.5.1 Sample preparation 32
2.5.2 Resistivity 32
2.5.3 Hall measurements 33
Chapter 3 Suppressing the cellular breakdown in silicon supersaturated with titanium 34
3.1 Introduction 34
3.2 Experimental 35
3.3 Results 36
3.4 Conclusions 42
Chapter 4 Titanium-implanted silicon: does the insulator-to-metal transition really happen? 44
4.1 Introduction 44
4.2 Experimental section 45
4.3 Results 47
4.3.1 Recrystallization of Ti-implanted Si 47
4.3.2 Lattice location of Ti impurities 48
4.3.3 Electrical conduction 50
4.3.4 Surface morphology 52
4.3.5 Spatially resolved conduction 53
4.4 Discussion 55
4.5 Conclusion 56
Chapter 5 Realizing the insulator-to-metal transition in Se hyperdoped Si via non-equilibrium material processing 57
5.1 Introduction 57
5.2 Experimental 59
5.3 Results 60
5.4 Conclusions 65
Chapter 6 Structural and electrical properties of Se-hyperdoped Si via ion implantation and flash lamp annealing 67
6.1 Introduction 67
6.2 Experimental 68
6.3 Results 69
6.4 Conclusions 76
Chapter 7 Summary and outlook 78
7.1 Summary 78
7.2 Outlook 81
References 83
Publications 89
|
279 |
Formation of Supersaturated Alloys by Ion Implantation and Pulsed-Laser Annealing. Wilson, Syd Robert. 08 1900.
Supersaturated substitutional alloys formed by ion implantation and rapid liquid-phase epitaxial regrowth induced by pulsed-laser annealing have been studied using Rutherford-backscattering and ion-channeling analysis. A series of impurities (As, Sb, Bi, Ga, In, Fe, Sn, Cu) have been implanted into single-crystal (001)-orientation silicon at doses ranging from 1 x 10^15/cm2 to 1 x 10^17/cm2. The samples were subsequently annealed with a Q-switched ruby laser (energy density ~1.5 J/cm2, pulse duration 15 x 10^-9 sec). Ion-channeling analysis shows that laser annealing incorporates the Group III (Ga, In) and Group V (As, Sb, Bi) impurities into substitutional lattice sites at concentrations far in excess of the equilibrium solid solubility. Channeling measurements indicate the silicon crystal is essentially defect-free after laser annealing. The maximum Group III and Group V dopant concentrations that can be incorporated into substitutional lattice sites are determined for the present laser-annealing conditions. Dopant profiles have been measured before and after annealing using Rutherford backscattering. These experimental profiles are compared to theoretical model calculations which incorporate both dopant diffusion in liquid silicon and a distribution coefficient (k') from the liquid. It is seen that a distribution coefficient (k') far greater than the equilibrium value (k0) is required for the calculation to fit the experimental data. In the cases of Fe, Zn, and Cu, laser annealing causes the impurities to segregate toward the surface. After annealing, none of these impurities are observed to be substitutional in detectable concentrations. The systematics of these alloy systems are discussed.
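The model calculations referred to above are of the standard form for solute redistribution during rapid solidification; in one common formulation (assumed here for illustration, not necessarily the exact model used in this work), the dopant concentration in the liquid obeys a diffusion equation in the frame of the moving liquid/solid interface, with interfacial segregation set by the distribution coefficient:

\[ \frac{\partial C_{l}}{\partial t} = D_{l}\,\frac{\partial^{2} C_{l}}{\partial x^{2}} + v\,\frac{\partial C_{l}}{\partial x}, \qquad C_{s}\big|_{\mathrm{interface}} = k'\,C_{l}\big|_{\mathrm{interface}}, \]

where \(D_{l}\) is the dopant diffusivity in liquid silicon, \(v\) the regrowth (interface) velocity, and \(C_{s}\), \(C_{l}\) the solid and liquid concentrations at the interface; \(k' = k_{0}\) recovers equilibrium segregation, while \(k' \rightarrow 1\) corresponds to complete solute trapping, which is why the rapid regrowth after pulsed-laser annealing can incorporate dopants far beyond the equilibrium solubility.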
|
280 |
Directed Self-Assembly of Nanostructured Block Copolymer Thin Films via Dynamic Thermal Annealing. Basutkar, Monali N. 21 September 2018.
No description available.
|