21

Bacterial Leaching of Chalcopyrite Ore

Canfell, Anthony John Unknown Date (has links)
Bacterial leaching utilises bacteria ubiquitous to sulphide mining environments to oxidise sulphide ores. The sulphide mineral chalcopyrite is the most common copper mineral in the world, comprising the bulk of known copper reserves. Chalcopyrite is resistant to bacterial leaching and, despite research over the last 20-30 years, has not yet been economically bioleached. Attempts have been made to use silver to catalyse the bacterial leaching of chalcopyrite since the early seventies. The majority of reported testwork had been performed on finely ground ore and concentrates in agitated batch reactors. This project used silver to catalyse the bioleaching of chalcopyrite in shake flasks, small columns and large columns. The catalytic effect was extensively studied and experimental parameters were varied to maximise copper recovery. Silver was also used to catalyse the ferric leaching of chalcopyrite at elevated temperatures.

It was noted that the leaching performance of chalcopyrite in shake flasks compared to columns was markedly different. The specific differences between shake flasks and columns were qualified and separately tested to determine which parameter(s) affected the bioleaching of chalcopyrite. It was found that the ore-to-solution ratio, aeration, addition of carbon dioxide, solution distribution and small variations in the leaching temperature did not significantly affect the bioleaching of chalcopyrite ore in columns. The method of silver addition to columns did significantly affect the overall copper extraction. The ore in shake flasks was subjected to abrasion between ore particles and with the base of the flask. A test was designed to mimic the shake flask conditions without the abrasion. The low-abrasion test performed similarly to a column operated with optimum silver addition. This indicated that the inherent equipment difference between shake flask and column operation largely accounted for the difference in leaching performance.

Chalcopyrite ore was biologically leached in large columns. The ore crush size and other conditions were typical of those used in the field. The biological leach achieved 65% copper extraction in 160 days. This level of copper extraction is significantly higher than any previously reported results (typically below 10% copper extraction) and represents a significant advance in the bacterial leaching of chalcopyrite ore. Due to the inherently high temperature within underground stopes, it was decided to investigate the possibility of separating the leaching and bacterial oxidation stages. The concept of separate bacterial and ferric leaching had been suggested previously; however, the application to a stope, with heat exchange between the process streams, was a novel approach. Large column ferric leaches at 70 °C illustrated the technical feasibility of this process. Copper extraction was rapid and high (70% in 100 days of leaching), even when a reduced level of silver catalysis was used. After leaching in large columns, samples of ore were taken for analysis by optical mineralogy. The analysis gave valuable insights into the nature of reaction passivation on chalcopyrite ore. In particular, it was discovered that the precipitation of goethite was a major limiting factor in the bioleaching and ferric leaching of chalcopyrite in columns. In addition, reduced sulphide species were detected on the surface of residual chalcopyrite, giving an indication of the sequential nature of the chalcopyrite reaction chemistry.
The bacterial population was characterised using DNA techniques developed during the project. Qualitative speciation was carried out and compared between columns, down the columns and over time within a column. Comparison of these populations enabled a greater mechanistic understanding of the role of bacteria in the leaching of chalcopyrite. This work was the most comprehensive attempt to date to delineate the complex microbiological/mineral interactions using analysis of population dynamics from a mixed inoculum. It was found that the iron oxidiser Thiobacillus ferrooxidans dominated within the columns and leach solutions. The sulphur oxidiser Sulfobacillus thermosulfidooxidans was also prevalent in the columns, particularly during the period of rapid chalcopyrite oxidation.

The high-temperature ferric leaching of chalcopyrite was unexpectedly poor in the first round of large columns. The low extraction was attributed to an increase in pH down the column, resulting in excessive goethite precipitation. The solution flowrate (velocity) was increased tenfold in subsequent columns, with no operational problems (e.g. break-up of ore agglomerates), and the increase resulted in a high yield of copper. The kinetics of extraction were faster than in a corresponding bacterial leach, confirming the potential advantage of a high-temperature leach.

The small column studies highlighted the importance of an even distribution of silver down the stope for maximum catalytic effect. If the ore were agglomerated, silver would be added with acid at that point. However, it may not always be possible to agglomerate the ore: for example, the process may be used in situ on a fractured ore body, or on an ore with a low fines content that does not require agglomeration. Various complexing agents were tested for their ability to distribute silver at the start of the leach and to recover silver at the end of the leach. For instance, when silver was complexed with thiourea and then trickled through the ore, an even distribution of silver was achieved. After leaching was completed, a thiourea wash recovered a significant amount of the silver. These two techniques minimised the amount of silver required and thus significantly added to the economic viability of the process. The success of the technical work led to an evaluation of the process in the field. A flowsheet was developed for the high-temperature, in-stope ferric leach of chalcopyrite. An economic analysis illustrated that the process would be viable in certain situations, and an engineering study considered issues such as acid consumption, aeration, silver distribution, silver recovery and a heat balance of the stope.
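The abstract does not spell out the underlying chemistry; as a hedged sketch, the reactions commonly cited in the literature for silver-catalysed, bacterially assisted ferric leaching of chalcopyrite are:

\[ \mathrm{CuFeS_2 + 4\,Fe^{3+} \rightarrow Cu^{2+} + 5\,Fe^{2+} + 2\,S^0} \]
\[ \mathrm{CuFeS_2 + 4\,Ag^+ \rightarrow 2\,Ag_2S + Cu^{2+} + Fe^{2+}} \qquad\qquad \mathrm{Ag_2S + 2\,Fe^{3+} \rightarrow 2\,Ag^+ + 2\,Fe^{2+} + S^0} \]
\[ \mathrm{4\,Fe^{2+} + O_2 + 4\,H^+ \xrightarrow{\text{bacteria}} 4\,Fe^{3+} + 2\,H_2O} \]

The silver route is generally held to replace the passivating sulphur layer on chalcopyrite with a porous Ag2S film, which would be consistent with the observation above that the method of silver addition strongly affected copper extraction in columns.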
22

Optimizing Heap Data Management on Software Managed Manycore Architectures

January 2017 (has links)
abstract: Caches pose a serious limitation in scaling many-core architectures, since the area and power demands of maintaining cache coherence increase rapidly with the number of cores. Scratch-Pad Memories (SPMs) provide a cheaper and lower-power alternative that can be used to build a more scalable many-core architecture. The trade-off of substituting SPMs for caches, however, is that the data must be explicitly managed in software. Heap management on SPM poses a major challenge due to the highly dynamic nature of heap data access. Most existing heap management techniques implement a software caching scheme on SPM, emulating the behavior of hardware caches. The state-of-the-art heap management scheme implements a 4-way set-associative software cache on SPM for a single program running with one thread on one core. While the technique works correctly, it suffers from significant performance overhead. This paper presents a series of compiler-based, efficient heap management approaches that reduce heap management overhead through several optimization techniques. Experimental results on benchmarks from MiBench (Guthaus et al., 2001) executed on an SMM processor modeled in gem5 (Binkert et al., 2011) demonstrate that our approach (implemented in LLVM v3.8; Lattner and Adve, 2004) can improve execution time by 80% on average compared to the previous state of the art. / Dissertation/Thesis / Masters Thesis Computer Science 2017
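To make the baseline concrete, here is a hedged C sketch of a 4-way set-associative software cache for heap data on SPM — the kind of scheme the thesis optimizes, not its actual code. The block geometry, the global_mem array, and the dma_get/dma_put helpers are invented placeholders for off-chip memory and the platform's DMA interface.

    /* Hedged sketch: 4-way set-associative software cache on SPM.
     * Every heap access goes through g2l(), which translates a global
     * address to an SPM location, fetching the enclosing block on a miss. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define WAYS        4
    #define SETS        64
    #define BLOCK_BYTES 32

    typedef struct {
        uint32_t tag;      /* global-address tag */
        uint8_t  valid;
        uint8_t  dirty;
        uint32_t lru;      /* larger = more recently used */
    } way_t;

    static way_t    meta[SETS][WAYS];
    static uint8_t  spm_data[SETS][WAYS][BLOCK_BYTES];  /* stands in for SPM */
    static uint8_t  global_mem[1 << 16];                /* stands in for off-chip memory */
    static uint32_t lru_clock;

    /* placeholders for the platform's DMA engine */
    static void dma_get(void *spm, uint32_t g, uint32_t n) { memcpy(spm, &global_mem[g], n); }
    static void dma_put(uint32_t g, const void *spm, uint32_t n) { memcpy(&global_mem[g], spm, n); }

    /* Translate a global heap address into an SPM pointer. */
    static void *g2l(uint32_t gaddr)
    {
        uint32_t blk = gaddr / BLOCK_BYTES;
        uint32_t set = blk % SETS;
        uint32_t tag = blk / SETS;
        uint32_t off = gaddr % BLOCK_BYTES;
        int w, victim = 0;

        for (w = 0; w < WAYS; w++)                     /* hit check */
            if (meta[set][w].valid && meta[set][w].tag == tag)
                goto hit;

        for (w = 1; w < WAYS; w++)                     /* pick LRU victim */
            if (meta[set][w].lru < meta[set][victim].lru)
                victim = w;
        w = victim;
        if (meta[set][w].valid && meta[set][w].dirty)  /* write back old block */
            dma_put(meta[set][w].tag * SETS * BLOCK_BYTES + set * BLOCK_BYTES,
                    spm_data[set][w], BLOCK_BYTES);
        dma_get(spm_data[set][w], blk * BLOCK_BYTES, BLOCK_BYTES);
        meta[set][w].tag   = tag;
        meta[set][w].valid = 1;
    hit:
        meta[set][w].dirty = 1;  /* sketch: conservatively treat every access as a store */
        meta[set][w].lru   = ++lru_clock;
        return &spm_data[set][w][off];
    }

    int main(void)
    {
        *(uint8_t *)g2l(0x1234) = 42;               /* store through the software cache */
        printf("%u\n", *(uint8_t *)g2l(0x1234));    /* second access hits: prints 42 */
        return 0;
    }

Every heap load and store paying this translate-check-maybe-DMA cost is precisely the overhead that compiler optimizations of the kind described above try to remove.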
23

Critical analysis of the management system for wastes from uranium mining and milling: a case study of the uranium concentrate unit/INB

Araújo, Valeska Peres de, Instituto de Engenharia Nuclear, March 2005 (has links)
The world uranium market has faced, in recent decades, a depreciation of this commodity. With the reduction of secondary stocks (represented by enriched-uranium stocks from the former Soviet Union), prices are projected to rise and the market will again depend on primary production. To meet this new demand, new plants will have to come into operation or the output of existing ones will have to be increased. Environmental issues have been, and will certainly continue to be, decisive for the operational viability of this type of facility. In uranium mining, radiological risks add to non-radiological ones, and the large volumes of waste generated are among the main environmental aspects; they must therefore be managed properly so as to minimise the associated impacts. In Brazil, all uranium production comes from the Uranium Concentrate Unit (URA), located in the municipality of Caetité, in the state of Bahia, and operated by Indústrias Nucleares do Brasil (INB). The unit consists of an open-pit mine and an ore processing facility. Acid heap leaching with H2SO4 is used to extract the uranium, and the production capacity of the unit is around 400 t/year of U3O8. The objective of this work was to evaluate the unit's waste management system, analysing its effectiveness in mitigating potential impacts in both the operational and post-operational phases. The wastes were divided into mining wastes and process wastes. The first group includes drainage waters and waste rock from mining activities. The second comprises the spent ore (from the leaching process), which is deposited in piles together with waste rock, and the process wastes, which are stored in ponds equipped with subaerial drains. Impacts on the atmosphere were found not to be relevant. Mathematical simulations did not indicate a significant potential for groundwater contamination from the process-waste ponds. However, simulations with the spent ore indicate that such sources cannot be ruled out as important sources of long-term contamination. It was not possible to quantitatively characterise the contribution of the water stored in the mine pit to the uranium content of groundwater in its area of influence, so this aspect needs further investigation. Finally, the management of drainage waters is not satisfactory; other management strategies for these waters should be evaluated, including treatment for subsequent controlled release to the environment. The adoption of an Environmental Management System is also recommended, with a view to achieving a more satisfactory environmental performance of the enterprise.
24

RTOS with 1.5K RAM?

Chahine, Sandy, Chowdhury, Selma January 2018 (has links)
The Internet of Things (IoT) is becoming more common in today's society, with more and more everyday devices connected to wireless networks. This requires cost-effective computing power, which makes it worthwhile to investigate microcontrollers and how they would cope with this task: small, compact computers which despite their size offer considerable performance. This study aims to determine whether any existing operating system can work with the microcontroller PIC18F452, and how many processes can run in parallel given the MCU's limited memory. Different methods were investigated and discussed to determine which would generate the best results, and a survey and several experiments were conducted to answer these questions. The experiments required a special development environment to be installed, and the generic FreeRTOS distribution was ported to both the target processor and the experimental board. The port succeeded, and the experiments showed that the research question could be answered with a yes: a real-time operating system can run on an MCU with only 1.5 kB of RAM. During the work, the project also found that Amazon has built its IoT offering on FreeRTOS, although on a more powerful MCU, which the project took as emphasizing FreeRTOS as a future-proof direction.
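For context, a hedged C sketch of the kind of minimal FreeRTOS application that has to fit in such a footprint — two tasks plus the scheduler. The stack depths, priorities, and pin-level work are illustrative assumptions, not the configuration used in the thesis; on a 1.5 kB part, every word of task stack and kernel heap (configTOTAL_HEAP_SIZE) must be budgeted by hand.

    /* Minimal FreeRTOS sketch: two tiny tasks and the scheduler.
     * Requires the FreeRTOS kernel sources and a port for the target MCU. */
    #include "FreeRTOS.h"
    #include "task.h"

    static void vBlinkTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* toggle an LED pin here (hardware-specific) */
            vTaskDelay(pdMS_TO_TICKS(500));    /* sleep 500 ms, yielding the CPU */
        }
    }

    static void vSenseTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* sample an input here (hardware-specific) */
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    int main(void)
    {
        /* configMINIMAL_STACK_SIZE on small 8-bit ports is on the order of
         * 100 words; two such stacks already claim a large share of 1.5 kB. */
        xTaskCreate(vBlinkTask, "blk", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(vSenseTask, "sns", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();                 /* never returns if memory sufficed */
        for (;;);                              /* reached only if startup failed */
    }

Counting how many such tasks can be created before xTaskCreate fails is essentially the experiment the study describes for answering "how many processes can run in parallel".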
25

Implementation of operations in double-ended heaps

Bardiovský, Vojtech January 2012 (has links)
There are several approaches to creating double-ended heaps from single-ended heaps. We build on one of them, the leaf correspondence heap, to create a generic double-ended heap scheme called the L-correspondence heap. This broadens the class of eligible base single-ended heaps (e.g. to the Fibonacci heap or the rank-pairing heap) and makes the operations Decrease and Increase possible. We show this approach on specific examples for three different single-ended base heaps and give time complexity bounds for all operations. Another result is that for these three examples, the expected amortized time for the Decrease and Increase operations in the L-correspondence heap is bounded by a constant.
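For orientation, here is a hedged C sketch of the simplest correspondence idea that such schemes generalise: keep every element in both a min-heap and a max-heap, in *total* correspondence, with cross-pointers so a deletion on one side can also remove its twin on the other. The leaf/L-correspondence constructions studied in the thesis pair only some nodes, but the twin-pointer bookkeeping shown here is representative. All names are invented for illustration.

    /* Double-ended priority queue from two binary heaps in total
     * correspondence. Each element lives in both heaps; node.twin is
     * its index in the other heap and is kept up to date on every move. */
    #include <stdio.h>

    #define CAP 1024

    typedef struct { int key; int twin; } node_t;
    typedef struct { node_t a[CAP]; int n; int ismax; } heap_t;

    static heap_t hmin = { .ismax = 0 }, hmax = { .ismax = 1 };

    static int before(heap_t *h, int x, int y) { return h->ismax ? x > y : x < y; }

    /* place node v at index i in h and update its twin's back-pointer in o */
    static void put(heap_t *h, heap_t *o, int i, node_t v) {
        h->a[i] = v;
        o->a[v.twin].twin = i;
    }
    static void siftup(heap_t *h, heap_t *o, int i) {
        node_t v = h->a[i];
        while (i > 0 && before(h, v.key, h->a[(i-1)/2].key)) {
            put(h, o, i, h->a[(i-1)/2]);
            i = (i-1)/2;
        }
        put(h, o, i, v);
    }
    static void siftdown(heap_t *h, heap_t *o, int i) {
        node_t v = h->a[i];
        for (;;) {
            int c = 2*i + 1;
            if (c >= h->n) break;
            if (c+1 < h->n && before(h, h->a[c+1].key, h->a[c].key)) c++;
            if (!before(h, h->a[c].key, v.key)) break;
            put(h, o, i, h->a[c]);
            i = c;
        }
        put(h, o, i, v);
    }
    void depq_insert(int key) {
        hmin.a[hmin.n] = (node_t){ key, hmax.n };
        hmax.a[hmax.n] = (node_t){ key, hmin.n };
        hmin.n++; hmax.n++;
        siftup(&hmin, &hmax, hmin.n - 1);
        siftup(&hmax, &hmin, hmax.n - 1);
    }
    /* remove index i from h, fixing back-pointers into o as nodes move */
    static void erase(heap_t *h, heap_t *o, int i) {
        h->n--;
        if (i < h->n) {
            put(h, o, i, h->a[h->n]);
            siftdown(h, o, i);
            siftup(h, o, i);
        }
    }
    int depq_delete_min(void) {
        int key = hmin.a[0].key, t = hmin.a[0].twin;
        erase(&hmin, &hmax, 0);   /* remove from min side... */
        erase(&hmax, &hmin, t);   /* ...and its twin from the max side */
        return key;
    }
    int depq_delete_max(void) {
        int key = hmax.a[0].key, t = hmax.a[0].twin;
        erase(&hmax, &hmin, 0);
        erase(&hmin, &hmax, t);
        return key;
    }
    int main(void) {
        int xs[] = { 5, 1, 9, 3, 7 };
        for (int i = 0; i < 5; i++) depq_insert(xs[i]);
        int lo = depq_delete_min(), hi = depq_delete_max();
        printf("%d %d\n", lo, hi);   /* prints: 1 9 */
        return 0;
    }

Storing every element twice is exactly the cost that leaf correspondence avoids by pairing only leaves, which is what makes the choice of base heap (Fibonacci, rank-pairing, etc.) interesting.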
26

The Asynchronous t-Step Approximation for Scheduling Batch Flow Systems

Grimsman, David R. 01 June 2016 (has links)
Heap models in the max-plus algebra are interesting dynamical systems that can be used to model a variety of Tetris-like systems, such as batch flow shops in manufacturing. Each heap in the model can be identified with a single product to manufacture. The objective is to manufacture a group of products in an order that minimizes the total manufacturing time. Because this scheduling problem reduces to a variation of the Traveling Salesman Problem (known to be NP-complete), the optimal solution is computationally infeasible for many real-world systems, so a feasible approximation method is needed. This work builds on and expands the existing heap model in order to solve the scheduling problems more effectively. Specifically, this work:

1. further characterizes the admissible products of these systems;
2. further characterizes sets of admissible products;
3. presents a novel algorithm, the asynchronous t-step approximation, to approximate these systems;
4. proves error bounds for the approximation, and shows why these error bounds are better than those of the existing approximation.
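To make the model concrete, here is a hedged C sketch (with invented product data) of the heaps-of-pieces computation underlying such systems: each product is a "piece" with lower and upper contours over the machines, dropping a piece onto the current resource contour is a max-plus update, and the makespan of a schedule is the final heap height.

    /* Heaps-of-pieces makespan sketch: drop() is the max-plus update
     * x'[c] = g + hi[c], where g = max over occupied columns of (x[c] - lo[c]). */
    #include <stdio.h>

    #define COLS 3              /* machines in the batch flow shop */
    #define NEG  (-1e9)         /* max-plus "minus infinity" */

    typedef struct {
        double lo[COLS];        /* lower contour; NEG where the piece is absent */
        double hi[COLS];        /* upper contour; NEG where the piece is absent */
    } piece_t;

    static void drop(double x[COLS], const piece_t *p)
    {
        double g = NEG;                          /* landing level of the piece */
        for (int c = 0; c < COLS; c++)
            if (p->lo[c] > NEG/2 && x[c] - p->lo[c] > g)
                g = x[c] - p->lo[c];
        for (int c = 0; c < COLS; c++)
            if (p->hi[c] > NEG/2)
                x[c] = g + p->hi[c];             /* raise contour on occupied columns */
    }

    int main(void)
    {
        /* two hypothetical products: A occupies machines 0-1, B machines 1-2 */
        piece_t A = { { 0, 0, NEG }, { 2, 3, NEG } };
        piece_t B = { { NEG, 0, 0 }, { NEG, 2, 4 } };
        double x[COLS] = { 0, 0, 0 };

        drop(x, &A);                             /* schedule A then B */
        drop(x, &B);
        double makespan = 0;
        for (int c = 0; c < COLS; c++)
            if (x[c] > makespan) makespan = x[c];
        printf("makespan(A,B) = %g\n", makespan);  /* 3 (after A) + 4 = 7 */
        return 0;
    }

Evaluating a candidate schedule is cheap this way; the hard part, as the abstract notes, is searching over the orderings, which is where the t-step approximation comes in.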
27

An Evaluation of Shortest Path Algorithms on Real Metropolitan Area Networks

Johansson, David January 2008 (has links)
This thesis examines some of the best known algorithms for solving the shortest point-to-point path problem, and evaluates their performance on real metropolitan area networks. The focus has mainly been on Dijkstra's algorithm and different variations of it, and the algorithms have been implemented in C# for the practical tests. The size of the networks used in this study varied between 358 and 2464 nodes, and both running time and representative operation counts were measured.

The results show that many different factors besides the network size affect the running time of an algorithm, such as arc-to-node ratio, path length and network structure. The queue implementation of Dijkstra's algorithm showed the worst performance and suffered heavily when the problem size increased. Two techniques for increasing the performance were examined: optimizing the management of labelled nodes and reducing the search space. A bidirectional Dijkstra's algorithm using a binary heap to store temporarily labelled nodes combines both of these techniques, and it was the algorithm that performed best of all the tested algorithms in the practical tests.

This project was initiated by Netadmin Systems i Sverige AB who needed a new path finding module for their network management system NETadmin. While this study is primarily of interest for researchers dealing with path finding problems in computer networks, it may also be useful in evaluations of path finding algorithms for road networks since the two networks share some common characteristics.
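The thesis implemented its algorithms in C#; as a hedged, language-neutral illustration, here is a C sketch of the heap-based configuration in the spirit of the study's best performer — Dijkstra's algorithm with a binary heap for temporarily labelled nodes (shown unidirectional for brevity; the winning variant was bidirectional). The graph and weights are invented.

    /* Point-to-point Dijkstra with a binary min-heap. Uses lazy deletion:
     * stale heap entries are skipped on pop instead of decreased in place. */
    #include <stdio.h>
    #include <limits.h>

    #define MAXN 1024
    #define MAXE 8192
    #define MAXH (MAXE + 8)

    /* adjacency list in flat arrays */
    static int head[MAXN], nxt[MAXE], to[MAXE], w[MAXE], ne;
    static void add_edge(int u, int v, int wt) {
        to[ne] = v; w[ne] = wt; nxt[ne] = head[u]; head[u] = ne++;
    }

    /* binary min-heap of (dist, node) pairs */
    static long hd[MAXH]; static int hv[MAXH], hn;
    static void push(long d, int v) {
        int i = hn++;
        while (i > 0 && hd[(i-1)/2] > d) {           /* sift up */
            hd[i] = hd[(i-1)/2]; hv[i] = hv[(i-1)/2]; i = (i-1)/2;
        }
        hd[i] = d; hv[i] = v;
    }
    static int pop(long *d) {
        long vd = hd[0]; int vv = hv[0];
        long ld = hd[--hn]; int lv = hv[hn];
        int i = 0, c;
        while ((c = 2*i+1) < hn) {                   /* sift last element down */
            if (c+1 < hn && hd[c+1] < hd[c]) c++;
            if (hd[c] >= ld) break;
            hd[i] = hd[c]; hv[i] = hv[c]; i = c;
        }
        if (hn) { hd[i] = ld; hv[i] = lv; }
        *d = vd; return vv;
    }

    long dijkstra(int n, int s, int t) {
        static long dist[MAXN];
        for (int i = 0; i < n; i++) dist[i] = LONG_MAX;
        dist[s] = 0; hn = 0; push(0, s);
        while (hn) {
            long d; int u = pop(&d);
            if (d > dist[u]) continue;               /* stale entry: skip */
            if (u == t) return d;                    /* point-to-point: stop early */
            for (int e = head[u]; e != -1; e = nxt[e])
                if (dist[u] + w[e] < dist[to[e]]) {
                    dist[to[e]] = dist[u] + w[e];
                    push(dist[to[e]], to[e]);        /* relabel: push a fresh entry */
                }
        }
        return -1;                                   /* t unreachable from s */
    }

    int main(void) {
        for (int i = 0; i < MAXN; i++) head[i] = -1;
        add_edge(0, 1, 4); add_edge(0, 2, 1);
        add_edge(2, 1, 2); add_edge(1, 3, 5);
        printf("%ld\n", dijkstra(4, 0, 3));          /* 0->2->1->3 = 8 */
        return 0;
    }

A bidirectional variant runs one such search from s and one (over reversed arcs) from t, stopping when the frontiers meet — the search-space reduction the study credits for the best results.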
28

Single and Twin-Heaps as Natural Data Structures for Percentile Point Simulation Algorithms

Hatzinger, Reinhold, Panny, Wolfgang January 1993 (has links) (PDF)
Sometimes percentile points cannot be determined analytically. In such cases one has to resort to Monte Carlo techniques. In order to provide reliable and accurate results it is usually necessary to generate rather large samples. Thus the proper organization of the relevant data is of crucial importance. In this paper we investigate the appropriateness of heap-based data structures for the percentile point estimation problem. Theoretical considerations and empirical results give evidence of the good performance of these structures regarding their time and space complexity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
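As an illustration of why a heap is a natural fit here, the following hedged C sketch (not the authors' code; sample size and RNG are illustrative) estimates a percentile point from a large Monte Carlo sample while retaining only the k = ceil(p*N) smallest values in a max-heap, so space is O(k) rather than O(N).

    /* Single-heap percentile estimation: after N draws, the root of a
     * max-heap holding the k smallest values is the k-th order statistic,
     * i.e. the estimated p-th percentile point. Link with -lm for ceil(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double heap[1 << 20];
    static int hn;

    static void push(double x) {                 /* max-heap insert */
        int i = hn++;
        while (i > 0 && heap[(i-1)/2] < x) {
            heap[i] = heap[(i-1)/2]; i = (i-1)/2;
        }
        heap[i] = x;
    }
    static void replace_root(double x) {         /* pop max, then insert x (x < root) */
        int i = 0, c;
        while ((c = 2*i+1) < hn) {
            if (c+1 < hn && heap[c+1] > heap[c]) c++;
            if (heap[c] <= x) break;
            heap[i] = heap[c]; i = c;
        }
        heap[i] = x;
    }

    int main(void) {
        const long N = 1000000;
        const double p = 0.95;
        const long k = (long)ceil(p * N);
        srand(12345);
        for (long i = 0; i < N; i++) {
            double x = (double)rand() / RAND_MAX;   /* stand-in for the simulated statistic */
            if (hn < k)           push(x);
            else if (x < heap[0]) replace_root(x);  /* x displaces the current k-th smallest */
        }
        /* for U(0,1) the 95% point is ~0.95 */
        printf("estimated %.0f%% point: %f\n", 100 * p, heap[0]);
        return 0;
    }

The paper's twin-heap variant, roughly, brackets the target order statistic with two heaps instead of one; the single-heap version above conveys the space/time trade-off that motivates both.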
29

Comparing performance between plain JavaScript and popular JavaScript frameworks

Ladan, Zlatko January 2015 (has links)
JavaScript is used on the web together with HTML and CSS, in many cases via frameworks such as jQuery and Backbone.js. This project compares the speed and memory allocation of plain JavaScript and its two most used frameworks. JavaScript is not very fast and has missing features, or features that differ from browser to browser; frameworks solve these problems, but at a cost in speed and memory allocation. The aim is to find out how well plain JavaScript and the two frameworks jQuery and Backbone.js perform on Google Chrome Canary. The results varied (mostly) between the implementations and show that a to-do application is a good enough example to use when comparing heap allocation and CPU time of methods. The results were compared using their mean values and ANOVA. Plain JavaScript was the fastest, but that might not be enough for a developer to stop using frameworks entirely; based on the results of this project, a developer can choose to create a custom framework or use an existing one.
30

Finding the needle in the heap: combining binary analysis techniques to trigger use-after-free

Feist, Josselin 29 March 2017 (has links)
Security is becoming a major concern in software development, for software editors, end-users, and government agencies alike. A typical problem is vulnerability detection: finding bugs in code that could let an attacker gain unforeseen privileges, such as reading or writing sensitive data, or even hijacking the program's execution. This thesis proposes a practical approach to detecting a specific kind of vulnerability, called use-after-free, which occurs when a heap memory block is accessed after being freed. Such vulnerabilities have led to numerous exploits (in particular against web browsers), and they are difficult to detect since they may involve several distant events in the code (allocating, freeing and accessing a memory block). The proposed approach consists of two steps. First, a coarse-grained and unsound binary-level static analysis, called GUEB, tracks heap memory block operations (allocation, free and use). This produces a program slice containing potential use-after-free. Then, a dedicated guided dynamic symbolic execution, developed within the Binsec platform, is used to derive concrete program inputs that trigger these use-after-free. This combination proved effective in practice and detected several previously unknown vulnerabilities in real-life code. The implementation is available as an open-source tool-chain operating on x86 binary code.
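To ground the terminology, here is a hedged, contrived C example of the bug class (not code from the thesis): the three events of a use-after-free — allocation, free and use — spread across functions, which is exactly why a whole-program analysis like GUEB tracks block states and a symbolic executor then searches for inputs that reach the use while the block is freed.

    /* Contrived use-after-free: the three events live in different
     * functions, so no single function looks wrong in isolation. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct session {
        char name[16];
        void (*on_close)(void);
    };

    static struct session *current;

    static void say_bye(void) { puts("bye"); }

    static void open_session(void) {          /* event 1: allocation */
        current = malloc(sizeof *current);
        strcpy(current->name, "user");
        current->on_close = say_bye;
    }

    static void close_session(void) {         /* event 2: free */
        free(current);
        /* bug: 'current' is not nulled, so a dangling pointer survives */
    }

    static void shutdown_all(int keep_open) {
        if (!keep_open)
            close_session();
        /* event 3: use -- an indirect call through a possibly freed
         * block, the classic use-after-free exploitation primitive */
        current->on_close();
    }

    int main(int argc, char **argv) {
        (void)argv;
        open_session();
        /* the symbolic-execution step searches for inputs reaching the
         * 'use' with the block in state FREED; here running with no
         * arguments (argc == 1) follows exactly that path */
        shutdown_all(argc > 1);
        return 0;
    }

Static analysis flags the slice {open_session, close_session, shutdown_all} as a potential use-after-free; symbolic execution then confirms it by producing the concrete input (an empty argument list) that triggers it.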
