471

RESOURCE-AWARE OPTIMIZATION TECHNIQUES FOR MACHINE LEARNING INFERENCE ON HETEROGENEOUS EMBEDDED SYSTEMS

Spantidi, Ourania 01 May 2023 (has links) (PDF)
With the increasing adoption of Deep Neural Networks (DNNs) in modern applications, there has been a proliferation of computationally intensive and power-hungry workloads, which has necessitated the use of embedded systems with more sophisticated, heterogeneous approaches to accommodate these requirements. One of the major solutions to tackle these challenges has been the development of domain-specific accelerators, which are highly optimized for the computationally intensive tasks associated with DNNs. These accelerators are designed to take advantage of the unique properties of DNNs, such as parallelism and data locality, to achieve high throughput and energy efficiency. Domain-specific accelerators have been shown to provide significant improvements in performance and energy efficiency compared to traditional general-purpose processors and are becoming increasingly popular in a range of applications such as computer vision and speech recognition. However, designing these architectures and managing their resources can be challenging, as it requires a deep understanding of the workload and the system's unique properties. Achieving a favorable balance between performance and power consumption is not always straightforward and requires careful design decisions to fully exploit the benefits of the underlying hardware. This dissertation aims to address these challenges by presenting solutions that enable low energy consumption without compromising performance for heterogeneous embedded systems. Specifically, this dissertation focuses on three topics: (i) the utilization of approximate computing concepts and approximate accelerators for energy-efficient DNN inference, (ii) the integration of formal properties in the systematic employment of approximate computing concepts, and (iii) resource management techniques on heterogeneous embedded systems. In summary, this dissertation provides a comprehensive study of solutions that can improve the energy efficiency of heterogeneous embedded systems, enabling them to perform the computationally intensive tasks associated with modern DNN-based applications without compromising on performance. The results of this dissertation demonstrate the effectiveness of the proposed solutions and their potential for wide-ranging practical applications.
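A minimal sketch of the approximate-computing idea referenced in topic (i), assuming a toy fixed-point multiplier that truncates low-order product bits; the truncation width, layer shape, and error metric are illustrative placeholders, not the accelerator designs studied in the dissertation.

```python
import numpy as np

def approx_multiply(a, b, drop_bits=4):
    """Integer multiply that zeroes the lowest `drop_bits` bits of the product,
    mimicking a cheaper, less accurate hardware multiplier."""
    exact = a.astype(np.int64) * b.astype(np.int64)
    return (exact >> drop_bits) << drop_bits

def approx_dense_layer(x_q, w_q, drop_bits=4):
    """Quantized fully-connected layer built from the approximate multiplier.
    x_q: (n,) int8 activations; w_q: (m, n) int8 weights."""
    return np.array([approx_multiply(row, x_q, drop_bits).sum() for row in w_q])

rng = np.random.default_rng(0)
x = rng.integers(-128, 127, size=64, dtype=np.int8)
W = rng.integers(-128, 127, size=(10, 64), dtype=np.int8)

exact = W.astype(np.int64) @ x.astype(np.int64)
approx = approx_dense_layer(x, W)
# The mean relative deviation hints at the accuracy cost of the cheaper multiplier.
print(np.abs(exact - approx).mean() / np.abs(exact).mean())
```

In a real accelerator the same trade-off is realized in hardware (simpler multiplier circuits), and the contribution lies in deciding where and how aggressively such approximations can be applied without hurting DNN accuracy.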
472

OPTIMIZATIONS ON FINITE THREE DIMENSIONAL LARGE EDDY SIMULATIONS

Phadke, Nandan Neelkanth 17 August 2015 (has links)
No description available.
473

Automated Runtime Analysis and Adaptation for Scalable Heterogeneous Computing

Helal, Ahmed Elmohamadi Mohamed 29 January 2020 (has links)
In the last decade, there have been tectonic shifts in computer hardware because sequential CPU performance has reached its physical limits. As a consequence, current high-performance computing (HPC) systems integrate a wide variety of compute resources with different capabilities and execution models, ranging from multi-core CPUs to many-core accelerators. While such heterogeneous systems can enable dramatic acceleration of user applications, extracting optimal performance via manual analysis and optimization is a complicated and time-consuming process. This dissertation presents graph-structured program representations to reason about the performance bottlenecks on modern HPC systems and to guide novel automation frameworks for performance analysis, modeling, and runtime adaptation. The proposed program representations exploit domain knowledge and capture the inherent computation and communication patterns in user applications, at multiple levels of computational granularity, via compiler analysis and dynamic instrumentation. The empirical results demonstrate that the introduced modeling frameworks accurately estimate the realizable parallel performance and scalability of a given sequential code when ported to heterogeneous HPC systems. As a result, these frameworks enable efficient workload distribution schemes that utilize all the available compute resources in a performance-proportional way. In addition, the proposed runtime adaptation frameworks significantly improve the end-to-end performance of important real-world applications that suffer from limited parallelism and fine-grained data dependencies. Specifically, compared to the state-of-the-art methods, such an adaptive parallel execution achieves up to an order-of-magnitude speedup on the target HPC systems while preserving the inherent data dependencies of user applications. / Doctor of Philosophy / Current supercomputers integrate a massive number of heterogeneous compute units with varying speed, computational throughput, memory bandwidth, and memory access latency. This trend represents a major challenge to end users, as their applications have been designed from the ground up to primarily exploit homogeneous CPUs. While heterogeneous systems can deliver several orders of magnitude speedup compared to traditional CPU-based systems, end users need extensive software and hardware expertise as well as significant time and effort to efficiently utilize all the available compute resources. To streamline such a daunting process, this dissertation presents automated frameworks for analyzing and modeling the performance on parallel architectures and for transforming the execution of user applications at runtime. The proposed frameworks incorporate domain knowledge and adapt to the input data and the underlying hardware using novel static and dynamic analyses. The experimental results show the efficacy of the introduced frameworks across many important application domains, such as computational fluid dynamics (CFD) and computer-aided design (CAD). In particular, the adaptive execution approach on heterogeneous systems achieves up to an order-of-magnitude speedup over the optimized parallel implementations.
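A hedged sketch of the performance-proportional workload distribution described above: work is split across devices in proportion to their measured throughput so that all devices finish at roughly the same time. The device names and throughput figures are made-up placeholders, not measurements from the dissertation.

```python
def proportional_split(n_items, throughputs):
    """Split n_items across devices in proportion to measured throughput
    (items/second), so every device takes roughly the same wall-clock time."""
    total = sum(throughputs.values())
    shares = {dev: int(n_items * t / total) for dev, t in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += n_items - sum(shares.values())
    return shares

# Illustrative throughputs from a short profiling run (invented numbers).
measured = {"cpu": 1.2e6, "gpu0": 9.5e6, "gpu1": 9.1e6}
print(proportional_split(10_000_000, measured))
# -> {'cpu': 606060, 'gpu0': 4797981, 'gpu1': 4595959}
```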
474

Programmable Address Generation Unit for Deep Neural Network Accelerators

Khan, Muhammad Jazib January 2020 (has links)
Convolutional Neural Networks are becoming more and more popular due to their applications in revolutionary technologies like Autonomous Driving, Biomedical Imaging, and Natural Language Processing. With this increase in adoption, the complexity of the underlying algorithms is also increasing. This trend entails implications for the computation platforms as well, i.e. GPU-, FPGA-, or ASIC-based accelerators, and especially for the Address Generation Unit (AGU), which is responsible for memory access. Existing accelerators typically have Parametrizable Datapath AGUs, which have minimal adaptability towards evolution in algorithms. Hence, new hardware is required for new algorithms, which is a very inefficient approach in terms of time, resources, and reusability. In this research, six algorithms with different implications for hardware are evaluated for address generation, and a fully Programmable AGU (PAGU) is presented which can adapt to these algorithms. These algorithms are Standard, Strided, Dilated, Upsampled, and Padded convolution, and MaxPooling. The proposed AGU architecture is a Very Long Instruction Word based Application-Specific Instruction Processor with specialized components such as hardware counters and zero-overhead loops, and a powerful Instruction Set Architecture (ISA) that can model static and dynamic constraints as well as affine and non-affine address equations. The goal has been to minimize the flexibility versus area, power, and performance trade-off. For a working test network for semantic segmentation, results show that PAGU achieves close to ideal performance, one cycle per address, for all the algorithms under consideration except Upsampled Convolution, for which it takes 1.7 cycles per address. The area of PAGU is approximately 4.6 times larger than that of the Parametrizable Datapath approach, which is still reasonable considering the high flexibility benefits. The potential of PAGU is not limited to neural network applications; it also extends to more general digital signal processing areas, which can be explored in the future.
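To make the address-generation problem concrete, here is a small sketch of the kind of affine address equation such an AGU must evaluate, covering the standard, strided, and dilated convolution cases named above. The row-major CHW layout, parameter names, and base address are illustrative assumptions; this is not the PAGU ISA itself.

```python
def input_address(base, c, y, x, H, W, ky, kx, stride=1, dilation=1):
    """Flat address of the input element consumed when computing output (y, x)
    with kernel tap (ky, kx), for a row-major CHW tensor of height H, width W.
    Standard convolution uses stride=1, dilation=1; the strided and dilated
    variants only change the affine coefficients."""
    iy = y * stride + ky * dilation
    ix = x * stride + kx * dilation
    return base + (c * H + iy) * W + ix

# Addresses touched by a 3x3 dilation-2 convolution at output position (0, 0):
addrs = [input_address(0x1000, c=0, y=0, x=0, H=16, W=16, ky=ky, kx=kx, dilation=2)
         for ky in range(3) for kx in range(3)]
print([hex(a) for a in addrs])
```

A parametrizable-datapath AGU hard-wires one such equation and exposes only its coefficients, whereas a programmable AGU can also express non-affine patterns (for example, padding boundary checks or upsampling skips) in its instruction stream.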
475

Towards a free-electron laser driven by electrons from a laser-wakefield accelerator : simulations and bunch diagnostics

Bajlekov, Svetoslav January 2011 (has links)
This thesis presents results from two strands of work towards realizing a free-electron laser (FEL) driven by electron bunches generated by a laser-wakefield accelerator (LWFA). The first strand focuses on selecting operating parameters for such a light source, on the basis of currently achievable bunch parameters as well as near-term projections. The viability of LWFA-driven incoherent undulator sources producing nanojoule-level pulses of femtosecond duration at wavelengths of 5 nm and 0.5 nm is demonstrated. A study on the prospective operation of an FEL at 32 nm is carried out, on the basis of scaling laws and full 3-D time-dependent simulations. A working point is selected, based on realistic bunch parameters. At that working point saturation is expected to occur within a length of 1.6 m with peak power at the 0.1 GW-level. This level, as well as the stability of the amplification process, can be improved significantly by seeding the FEL with an external radiation source. In the context of FEL seeding, we study the ability of conventional simulation codes to correctly handle seeds from high-harmonic generation (HHG) sources, which have a broad bandwidth and temporal structure on the attosecond scale. Namely, they violate the slowly-varying envelope approximation (SVEA) that underpins the governing equations in conventional codes. For this purpose we develop a 1-D simulation code that works outside the SVEA. We carry out a set of benchmarks that lead us to conclude that conventional codes are adequately capable of simulating seeding with broadband radiation, which is in line with an analytical treatment of the interaction. The second strand of work is experimental, and focuses on the use of coherent transition radiation (CTR) as an electron bunch diagnostic. The thesis presents results from two experimental campaigns at the MPI für Quantenoptik in Garching, Germany. We present the first set of single-shot measurements of CTR over a continuous wavelength range from 420 nm to 7 μm. Data over such a broad spectral range allows for the first reconstruction of the longitudinal profiles of electron bunches from a laser-wakefield accelerator, indicating full-width at half-maximum bunch lengths around 1.4 μm (4.7 fs), corresponding to peak currents of several kiloampères. The bunch profiles are reconstructed through the application of phase reconstruction algorithms that were initially developed for studying x-ray diffraction data, and are adapted here for the first time to the analysis of CTR data. The measurements allow for an analysis of acceleration dynamics, and suggest that upon depletion of the driving laser the accelerated bunch can itself drive a wake in which electrons are injected. High levels of coherence at optical wavelengths indicate the presence of an interaction between the bunch and the driving laser pulse.
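As a quick consistency check on the quoted numbers (an order-of-magnitude estimate, not a calculation taken from the thesis data): an ultrarelativistic bunch of length $l \approx 1.4\,\mu\mathrm{m}$ travels at essentially the speed of light, so its duration is

$$ \tau \approx \frac{l}{c} = \frac{1.4\times10^{-6}\ \mathrm{m}}{3.0\times10^{8}\ \mathrm{m/s}} \approx 4.7\ \mathrm{fs}, $$

matching the 4.7 fs stated above. A peak current of, say, 3 kA sustained over this duration corresponds to a bunch charge of order $I\tau \approx 3\,\mathrm{kA}\times 4.7\,\mathrm{fs} \approx 14\,\mathrm{pC}$, which illustrates why micrometre-scale LWFA bunches reach kiloampère-level peak currents.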
476

Hardening strategies for HPC applications / Estratégias de enrobustecimento para aplicações PAD

Oliveira, Daniel Alfonso Gonçalves de January 2017 (has links)
The reliability of HPC devices is one of the major concerns for supercomputers today and for the next generation. In fact, the high number of devices in large data centers makes the probability of having at least one corrupted device very high. In this work, we first evaluate the problem by performing radiation experiments. The data from the experiments give us realistic error rates for HPC devices. Moreover, we evaluate a representative set of algorithms, deriving general insights into the reliability of parallel algorithms and programming approaches. To understand the problem better, we propose a novel methodology that goes beyond quantifying it. We qualify the error by evaluating the criticality of each corrupted execution through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis quantifies and qualifies radiation effects on applications' output, correlating the number of corrupted elements with their spatial locality. We also provide the mean relative error (dataset-wise) to evaluate the magnitude of radiation-induced errors. Furthermore, we designed a fault injector, CAROL-FI, to investigate the problem further by collecting information through fault-injection campaigns that cannot be obtained through radiation experiments. We inject different fault models to analyze the sensitivity of given applications. We show that portions of applications can be graded with different criticality levels. Mitigation techniques can then be relaxed or hardened based on the criticality of particular portions. This work also evaluates the reliability behavior of six different architectures, ranging from HPC devices to embedded ones, with the aim of isolating code- and architecture-dependent behaviors. For this evaluation, we present and discuss radiation experiments that cover a total of more than 352,000 years of natural exposure and a fault-injection analysis based on a total of more than 120,000 injections. Finally, Error-Correcting Code (ECC), Algorithm-Based Fault Tolerance (ABFT), and Duplication With Comparison hardening strategies are presented and evaluated on HPC devices through radiation experiments. We present and compare both the reliability improvement and the imposed overhead of the selected hardening solutions. Then, we propose and analyze the impact of selective hardening for HPC algorithms. We perform fault-injection campaigns to identify the most critical source-code variables and present how to select the best candidates to maximize the reliability/overhead ratio.
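A minimal sketch of one plausible reading of the dataset-wise mean relative error metric mentioned above (the exact definition used in the dissertation may differ, for example in how zero-valued golden elements are handled):

```python
import numpy as np

def mean_relative_error(golden, corrupted, eps=1e-12):
    """Dataset-wise mean relative error between a fault-free (golden) output
    and a corrupted one; eps guards against division by zero."""
    golden = np.asarray(golden, dtype=np.float64)
    corrupted = np.asarray(corrupted, dtype=np.float64)
    rel = np.abs(corrupted - golden) / np.maximum(np.abs(golden), eps)
    return rel.mean()

golden = np.array([1.0, 2.0, 4.0, 8.0])
corrupted = np.array([1.0, 2.5, 4.0, 8.0])   # one element hit by a fault
print(mean_relative_error(golden, corrupted))  # 0.0625
```

A metric like this complements simple mismatch counting: two runs with the same number of corrupted elements can differ widely in how far those elements deviate from the correct values.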
477

Incubar ou acelerar? análise sobre o valor entregue para as startups pelas incubadoras e aceleradoras de negócios. / Incubate or accelerate? analysis of the value delivered to startups by business incubators and business accelerators.

Maruyama, Felipe Massami 11 December 2017 (has links)
Both incubators and accelerators are specialized organizations that support early-stage ventures, especially the innovation-intensive ones known as startups. Despite the wide spread of these organizations, there is little information in the literature showing their differences and their contributions to the journey of innovative entrepreneurship. The main objective of this study is to compare the value propositions of accelerators and incubators from the perception of startups that have been both incubated and accelerated. The specific objectives are: to discuss possible relationships between accelerators and incubators; to present the evolution of incubators and the factors that led to the emergence of accelerators, describing the different accelerator archetypes and their implications for the entrepreneurship ecosystem; and to present the national scenario of acceleration and incubation. Data collection had two stages: documentary analysis of secondary data sources, and case studies using interviews based on a semi-structured questionnaire. The documentary analysis drew on databases of scientific articles, official data from governments and foundations, specialized journals and web pages, and the incubators' and accelerators' own calls for proposals. It provided a picture of how incubators and accelerators promote themselves to the ecosystem and to startups. Then, through a descriptive and qualitative exploratory approach, interviews with semi-structured scripts were conducted with founders of startups that had been both incubated and accelerated, to understand the value that each of these processes provided to the development of these companies. It was concluded that there is a dissonance between what the incubators and accelerators promote and the value perceived by the startups, which are not able to identify enough characteristics that distinguish the two. The reason is the diversity of startups' needs and demands: different models of accelerators and incubators are formulated that, in many cases, overlap in the benefits offered. It was also possible to identify that the search for resources by startups does not tend to follow a linear process; they capture the best opportunities available in the ecosystem through a minimally tactical and selective approach. To contribute to the understanding of the growing formation of organizations supporting startups, such as incubators and accelerators, and considering the findings of this research, a tool was suggested to classify the types of these organizations, loosely termed "startup guiders". This tool analyzes three basic dimensions: business model, value proposition, and stage of intervention in the development of early-stage ventures. Finally, this research is an exploratory step toward bringing new evidence on the phenomenon of startups and the different instruments that build them. Directions are suggested to fill gaps in the literature regarding these phenomena, indicating the need for future studies that deepen the knowledge of this phenomenon.
478

Performance evaluation of the SPS scraping system in view of the high luminosity LHC

Mereghetti, Alessio January 2015 (has links)
Injection in the LHC is a delicate moment, since the LHC collimation system cannot offer adequate protection during beam transfer. For this reason, a complex chain of injection protection devices has been put in place. Among them, the SPS scrapers are the multi-turn cleaning system installed in the SPS aimed at halo removal immediately before injection in the LHC. The upgrade in luminosity of the LHC foresees beams brighter than those currently available in the machine, posing serious problems to the performance of the existing injection protection systems. In particular, the integrity of beam-intercepting devices is challenged by unprecedented beam parameters, leading to potentially destructive interactions. In this context, a new design of scrapers has been proposed, aimed at improved robustness and performance. This thesis compares the two scraping systems, i.e. the existing one and the one proposed for upgrade. Unlike any other collimation system for regular halo cleaning, both are "fast" systems, characterised by the variation of the relative distance between the beam and the absorbing medium during cleaning, which enhances the challenge on energy deposition values. Assets and liabilities of the two systems are highlighted by means of numerical simulations and discussed, with particular emphasis on energy deposition in the absorbing medium, time evolution of the beam current during scraping, and losses in the machine. Advantages of the system proposed for upgrade over the existing one are highlighted. The analysis of the existing system takes into account present operational conditions and addresses the sensitivity to settings previously not considered, updating and extending past studies. The work carried out on the upgraded system represents the first extensive characterisation of a multi-turn cleaning system based on a magnetic bump. Results have been obtained with the Fluka-SixTrack coupling, developed during this PhD activity from its initial version to being a state-of-the-art tracking tool for cleaning studies in circular machines. Relevant contributions to the development involve the handling of time-varying impact conditions. An extensive benchmark against a test of the scraper blades with beam has been carried out, to verify the reliability of results. Effects induced in the tested blades confirm the high values of energy deposition predicted by the simulation. Moreover, the comparison with the time profile of the beam intensity measured during scraping allowed the reconstruction of the actual settings of the blades during the test. Finally, the good agreement of the quantitative benchmark against readouts of beam loss monitors proves the quality of the analyses and the maturity of the coupling.
479

An intra-pulse fast feedback system for a future linear collider

Jolly, Simon January 2003 (has links)
An intra-pulse Interaction Point fast feedback system (IPFB) has been designed for the Next Linear Collider (NLC), to correct relative beam-beam misalignments at the Interaction Point (IP). This system will utilise the large beam-beam kick that results from the beam-beam interaction and apply a rapid correction to the beam misalignment at the IP within a single bunch train. A detailed examination of the IPFB system is given, including a discussion of the necessary electronics, and the results of extensive simulations based on the IPFB concept for fast beam correction are presented. A recovery of the nominal luminosity of the NLC is predicted well within the NLC bunch train of 266 ns. The FONT experiment - Feedback On Nanosecond Timescales - was proposed as a direct test of the IPFB concept and was realised at the NLC Test Accelerator at SLAC. As part of FONT, a novel X-band BPM was designed and tested at the NLCTA. The results of these tests with the NLCTA short and long-pulse beam are presented, demonstrating a linear response to the position of the 180 ns long-pulse beam: measurements show a time constant of ~1.5 ns and a precision of better than 20 microns. A novel BPM processor for use at X-band, making use of the difference-over-sum processing technique, is also presented in detail, with results given for both short and long-pulse beams. The FONT design concepts and modification of the IPFB system for use at the NLCTA are described. The design of a fast charge normalisation circuit, to process the difference and sum signals produced by the BPM processor, forming part of the FONT feedback circuit, is detailed extensively. Bench tests of the feedback electronics demonstrate the effectiveness of the normalisation and feedback stages, for which a signal latency of 11 ns was measured. These bench tests also show the correct operation of the normalisation and feedback principles. Finally, the results of a full beam test of the FONT system are presented, during which a system latency of 70 ns was measured. These rigorous tests establish the soundness of the IPFB scheme and show correction of a mis-steered bunch train within the full NLCTA pulse length of 180 ns.
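For background on the difference-over-sum technique mentioned above (stated here as the generic BPM processing relation rather than taken from the thesis): with signals $V_A$ and $V_B$ from two opposing pickoff electrodes, the beam position is estimated as

$$ x \approx k\,\frac{V_A - V_B}{V_A + V_B}, $$

where $k$ is a geometry-dependent calibration constant. Dividing the difference by the sum removes the dependence on bunch charge, which is why a fast charge-normalisation stage is needed in the analogue feedback path.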
480

Intercomparacao de colimadores de multiplas laminas para implementacao de terapia de feixes de intensidade modulada / Intercomparison of multileaf collimators for the implementation of intensity-modulated beam therapy

VITERI, JUAN F.D. 09 October 2014 (has links)
No description available. / Master's dissertation / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
