571

Thermal processing of graphene and its synthesis by the molecular beam epitaxy technique

Rolim, Guilherme Koszeniewski January 2018 (has links)
572

Use of the computational characteristics of OpenMP parallel regions to reduce energy consumption

Moro, Gabriel Bronzatti January 2018 (has links)
Performance and energy consumption are fundamental requirements in computer systems. A common challenge is to reconcile the two, maintaining the same performance while consuming ever less energy. Many techniques can reduce the energy consumption of parallel applications, but most rely on features found only in modern processors or on deep knowledge of the application and the target platform. In this work we propose a two-phase workflow. In the first phase, the behaviour of the parallel application is investigated through hardware counters that reflect CPU and memory usage, yielding a per-region computing signature; the result is a configuration file describing the duration, hardware counters and source-code identification of each parallel region. In the second phase, the parallel application is run with different processor frequencies (minimum or maximum) per region, according to the characterisation obtained in the first phase. The workflow is implemented as a dynamic library so that it can be used with any OpenMP application. Using the Lulesh benchmark, the best result is a 1.89% reduction in energy consumption at the cost of a 0.09% increase in execution time, compared against the Ondemand governor of the Linux operating system. The 1.89% gain is significant for this benchmark because it contains short-lived parallel regions, which magnify the overhead of frequency-switching operations.
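The two-phase idea above — characterise each parallel region once, then pin the processor frequency per region on later runs — can be sketched roughly as follows. This is not the author's library: the sysfs path is the standard Linux cpufreq interface (it requires the userspace governor and write permission), and the signature-file format, region name and frequency values are invented for illustration.

```python
# Minimal sketch of the abstract's two-phase DVFS workflow (assumptions:
# Linux "userspace" cpufreq governor active, write access to sysfs, and a
# phase-1 profile saved as {region_name: "min" | "max"}).
import json

CPUFREQ = "/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"

def set_frequency(cpus, khz):
    """Pin each CPU to a fixed frequency (kHz) via the userspace governor."""
    for cpu in cpus:
        with open(CPUFREQ.format(cpu=cpu), "w") as f:
            f.write(str(khz))

def run_region(region_name, signatures, cpus, f_min_khz, f_max_khz, body):
    """Phase 2: before a parallel region runs, apply the frequency chosen
    for it during the phase-1 characterisation."""
    target = f_min_khz if signatures.get(region_name) == "min" else f_max_khz
    set_frequency(cpus, target)
    body()  # the actual parallel work

if __name__ == "__main__":
    with open("region_signatures.json") as f:   # produced by phase 1
        signatures = json.load(f)
    run_region("stress_update", signatures, cpus=range(4),
               f_min_khz=1_200_000, f_max_khz=3_000_000,
               body=lambda: sum(i * i for i in range(10**6)))
```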
573

Intelligent agents for electronic commerce in tourism

Ng, Faria Yuen-yi January 1999 (has links)
The current state of electronic commerce in tourism shows that locating and integrating disparate information has become an increasingly complicated task for travellers, as a result of the rapid growth in the number of online travel sites. New means of automating the searching and decision-making tasks are therefore needed. A review of the literature shows that software agents are deemed highly suitable for delivering solutions to these problems; however, agents have so far failed to penetrate the electronic marketplace. An analysis of the reasons for this failure led the author to conclude that a new type of architecture is required, allowing a simple and useful first-wave product to accelerate the adoption of agents. For this purpose, a proof-of-concept multi-agent prototype, the Personal Travel Assistant (PTA), was developed. First, user requirements were compared against what existing network and agent technologies could deliver; a number of obstacles were then identified and used as guidelines to derive the prototype architecture. To overcome the main obstacles in the design, PTA uses existing HTTP servers to tackle the interoperability problem and keep development costs low. A multi-agent collaborative learning strategy was designed to speed up knowledge acquisition by transferring and adapting rules encoded in the Java language. The construction of PTA demonstrates that an open multi-agent system can be deployed in a short time by standardising a small but adaptable set of communication protocols instead of going through a complex and lengthy standardisation process. PTA's structure also enables fully distributed computing, minimising the necessary changes to existing hardware and software infrastructure. The major contribution of PTA to this research area is its architecture, which it is hoped will lay the first step on the roadmap toward the next stage in the evolution of agents.
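The architectural point of PTA — agents that interoperate through ordinary HTTP servers rather than a heavyweight agent platform — can be illustrated with a toy sketch. The endpoint URL, the JSON response shape and the filtering rule below are all hypothetical, standing in for the rules PTA encodes in Java.

```python
# Toy illustration of agents interoperating over plain HTTP rather than
# a dedicated agent platform. Endpoint, JSON shape and rule are invented.
import json
import urllib.request

def fetch_offers(url):
    """One agent's 'perception': query another agent's HTTP interface."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def choose(offers, max_price):
    """Stand-in for the learned rules: keep affordable offers, cheapest first."""
    affordable = [o for o in offers if o["price"] <= max_price]
    return sorted(affordable, key=lambda o: o["price"])

if __name__ == "__main__":
    offers = fetch_offers("http://localhost:8000/offers")  # hypothetical peer agent
    for offer in choose(offers, max_price=500):
        print(offer["destination"], offer["price"])
```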
574

All-optical manipulation of photonic membranes

Kirkpatrick, Blair Connell January 2017 (has links)
Optical tweezers have allowed us to harness the momentum of light to trap, move, and manipulate microscopic particles with Angstrom-level precision. Position and force feedback systems grant us the ability to feel the microscopic world. As a tool, optical tweezers have allowed us to study a variety of biological systems, from the mechanical properties of red blood cells to the quantised motion of motor molecules such as kinesin. They have been applied, with similar impact, to the manipulation of gases, atoms, and Bose-Einstein condensates. There are, however, limits to their applicability: historically, optical tweezers have only been used to trap relatively simple structures such as spheres or cylinders. This thesis is concerned with the development of a fabrication and optical-manipulation protocol that allows holographic optical tweezers to trap photonic membranes. Photonic membranes are thin, flexible membranes that are capable of supporting nanoplasmonic features. These features can be patterned to function as metamaterials, granting the photonic membrane the ability to function as almost any optical device. It is highly desirable to take advantage of these tools in a microfluidic environment; however, their extreme aspect ratios mean that they are not traditionally compatible with the primary technology of microfluidic manipulation: optical tweezers. In line with recent developments in optical manipulation, a holistic approach to optical trapping is used to overcome these limitations. Full six-degree-of-freedom control over a photonic membrane is demonstrated through the use of holographic optical tweezers. Furthermore, a photonic membrane (PM)-based surface-enhanced Raman spectroscopy sensor is presented that is capable of detecting rhodamine dye from a topologically undulating sample. This work moves towards marrying these technologies such that photonic membranes, designed for bespoke applications, can be readily deployed into a microfluidic environment. Extending the range of tools available in the microfluidic setting helps pave the way toward the next set of advances in the field of optical manipulation.
575

Novel nitric oxide delivery systems for biomedical applications

Cattaneo, Damiano January 2015 (has links)
The aim of the research presented in this thesis is to investigate and develop novel nitric oxide (NO) delivery systems specifically designed for application in medical areas. The initial work focused on utilising metal organic frameworks (MOFs) as a delivery system for this radical gas. Owing to their high porosity, high thermal stability and the presence of coordinatively unsaturated metal sites (CUSs) when fully activated, the CPO-27 (Coordination Polymer of Oslo) family of MOFs was selected as a suitable host framework. CPO-27 (Ni), CPO-27 (Mg) and CPO-27 (Zn) were prepared using reflux and room-temperature processes without recourse to any toxic or harmful solvents. The resulting products are characterised by powder X-ray diffraction (XRD) and scanning electron microscopy (SEM), and their NO adsorption, storage and release properties are reported. The results indicate that the crystallinity, particle size and NO adsorption, storage and release performance are comparable to those of equivalent samples synthesised via traditional solvothermal methods, paving the way for a more easily scalable and environmentally friendly synthetic procedure for these types of MOF. Depending on which metal is employed, the NO uptake, storage and release vary: the more toxic nickel-based framework shows enhanced performance, in terms of the concentration and duration of NO released, over its magnesium and zinc counterparts. Therefore, to reduce the risk of toxicity whilst retaining good performance, Ni(II) ions were doped into the 3D frameworks of CPO-27 (Mg) and CPO-27 (Zn) using novel water-based reflux and room-temperature crystallisation methods. Several characterisation techniques strongly support the effective incorporation of Ni(II) ions into the 3D framework. NO adsorption/release data, as well as in vitro tests, demonstrate that NO dosage and biological response can be tuned via the Ni-doping process, allowing enhanced performance without the high toxicity of pure Ni MOFs. Such materials would be extremely advantageous and more applicable for use in medical fields. NONOates and other NO complexes have also been investigated as alternative NO delivery systems. This study focused on developing NO-drug complexes using a variety of compounds commonly used by clinicians, namely an antiseptic (chlorhexidine, CHx), an antibiotic (ciprofloxacin) and a diuretic (furosemide). A unique high-pressure NO-loading methodology was developed to coordinate nitric oxide to these drug molecules, and their NO release performance was evaluated. The resulting NO-drug complexes are characterised using a series of spectroscopic techniques, and the collected data highlight that the radical gas coordinates with the secondary amine groups present in the drug molecules. The interaction between the amine group and the gas is reversible; the release of NO from these complexes can be triggered using water (11% RH) and/or UV light. In addition, chlorhexidine has been incorporated into the pores of the CPO-27 framework, and the amount of antiseptic incorporated was determined using a variety of characterisation techniques. The controlled release of significant concentrations of CHx from the CPO-27 materials is achieved by exposing each CHx-loaded sample to an aqueous solution, thereby simulating topical conditions.
The CHx-loaded samples were also activated and NO-loaded following the novel high-pressure procedure developed during this research. The resulting NO-loaded material released the radical gas in the presence of water and/or UV light. By incorporating CHx into the MOF and NO-loading this complex, the duration and release of NO were greatly enhanced over those of either component alone. On formulating the CHx-loaded 3D frameworks into pellets, or even into a polyurethane polymer film, their ability to release the antiseptic under simulated topical conditions was maintained. The NO-CHx-CPO-27 composite film prepared here has proven able to simultaneously store and release both NO and CHx; each component of the complex has more than one function, and the quantity and duration of NO release are again greater than with either of the two moieties alone. The release of these two antibacterial agents from a MOF is novel and very exciting, as it opens up the possibility of engineering products with multiple actions to fight infection. Owing to their high stability and shape persistence, the CC3 cage series (CC3, RCC3, FT-RCC3 and AT-RCC3) was chosen as the basis of an investigation into the potential use of porous organic cages as delivery systems for nitric oxide. NO has been stored in these porous materials through coordination to amine groups, forming N-nitrosamine groups. Release of NO from these compounds can be triggered by various mechanisms, including water and UV light, the amine group being regenerated after the release of NO. The release performance increased significantly when the materials were exposed to UV light and/or suspended in water. As a result of this investigation, these covalent organic molecular cages can now be added to the existing list of NO-based therapies available to medical professionals.
576

Monitoring and diagnosis for control of an intelligent machining process

Van Niekerk, Theo January 2001 (has links)
A multi-level modular control scheme to realize integrated process monitoring, diagnosis and control for intelligent machining is proposed and implemented. A PC-based hardware architecture is presented that manipulates machining-process cutting parameters through a PMAC interface card and samples and processes process-performance parameters through DSP interface cards. Controller hardware interfacing the PC-based PMAC card to a machining process for direct control of speed, feed and depth of cut is described. Sensors that directly measure on-line process performance parameters, including cutting forces, cutting sound, tool-workpiece vibration, cutting temperature and spindle current, are described. The indirect measurement of the performance parameters surface roughness and tool wear, through the use of NF sensor-fusion modeling, is described and verified. An object-based software architecture with corresponding user interfaces (using Microsoft Visual C++ Foundation Classes, with C++ classes implemented for sending motion-control commands to the PMAC and receiving processed on-line sensor data from the DSP) is explained. The software structure indicates all the components necessary for integrating the monitoring, diagnosis and control scheme. C-based software executed on the DSP for real-time sampling, filtering and FFT processing of sensor signals is explained. Making use of experimental data and regression analysis, analytical relationships between the cutting parameters (independent) and each of the performance parameters (dependent) are obtained and used to simulate the machining process. A fuzzy relation containing values determined from statistical data, indicating the strength of connection between the independent and dependent variables, is proposed. The fuzzy relation forms the basis of a diagnostic scheme that intelligently determines which independent variable to change when a machining performance parameter exceeds its control limits. The intelligent diagnosis scheme is extensively tested using the machining-process simulation.
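The diagnostic core described here — a fuzzy relation grading the connection between each cutting parameter and each performance parameter, consulted when a performance parameter drifts out of its control limits — can be sketched as a simple lookup. All of the relation's values below are invented for illustration; the thesis derives them from statistical (regression) data.

```python
# Toy sketch of the fuzzy-relation diagnosis: entries grade how strongly
# each cutting parameter (speed, feed, depth of cut) is connected to each
# performance parameter. All numbers are invented.
RELATION = {
    #               speed  feed   depth
    "cutting_force": (0.3,  0.7,  0.9),
    "vibration":     (0.8,  0.5,  0.4),
    "temperature":   (0.9,  0.4,  0.3),
}
CUTTING_PARAMS = ("speed", "feed", "depth_of_cut")

def diagnose(performance_param):
    """Pick the cutting parameter most strongly connected to the
    performance parameter that exceeded its control limits."""
    strengths = RELATION[performance_param]
    best = max(range(len(strengths)), key=lambda i: strengths[i])
    return CUTTING_PARAMS[best]

print(diagnose("vibration"))  # -> 'speed' with these illustrative values
```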
577

Graphene oxide derivatives for biomedical applications

Jasim, Dhifaf January 2016 (has links)
Graphene-based materials (GBMs) have recently generated great interest due to their unique two-dimensional (2D) carbon geometry, which confers exceptional physicochemical properties that hold great promise in many fields, including biomedicine. An understanding of how these novel 2D materials interact with the biological milieu is therefore fundamental for their development and use. Graphene oxide (GO) has proven more biologically friendly than highly hydrophobic pristine graphene. The main aim of this study was therefore to prepare well-characterised GO derivatives and test the hypothesis of their possible use for biomedical applications. GO was prepared reproducibly by a modified Hummers' method and further functionalised with a radio-metal chelating agent, 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA), to form GO-DOTA. The constructs were extensively studied using structural, optical and surface characterisation techniques. GO prepared from different forms of graphite showed differences mainly in structure and production yield. However, all GO constructs were found biocompatible with the mammalian cell cultures tested; furthermore, the biocompatibility of GO prepared as papers was retained when these were used as substrates for cell growth. Radiolabelling of GO-DOTA yielded highly stable radiolabelled constructs, both in vitro and in vivo, which were used for whole-body imaging and biodistribution studies in mice after intravenous administration. Extensive urinary excretion and accumulation mainly in the reticuloendothelial system (RES), including the spleen, liver and lungs, was the main fate of all the GO derivatives used in this thesis. The physicochemical characteristics were found to play a central role in their preferential fate and accumulation: while the thicker sheets tended to accumulate mainly in the RES, the thinner ones were mostly excreted via the kidneys. Finally, it was crucial to perform safety investigations of the structure and function of organs at high risk of injury (mainly the kidney and spleen). Our results revealed no severe structural damage and no histopathological or functional abnormality in these vital organs, although some preliminary inflammatory responses were detected that require further investigation. In summary, this study helped gain a better understanding of how thin 2D materials interact with biological barriers, and the results indicate that these materials could be candidates for biological applications. Nevertheless, further investigations are necessary to confirm our findings.
578

Chronus: a new add-in for the reduction of U-Pb data obtained by LA-MC-ICPMS

Oliveira, Felipe Valença de 29 June 2015 (has links)
Master's dissertation, Universidade de Brasília, Instituto de Geociências, Pós-Graduação em Geologia, 2015. / The analysis of U-Pb isotopes by Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICPMS) has become popular in the geosciences owing to its relatively low cost, reasonable precision and the speed with which data can be generated. For studies that require a large number of ages, such as provenance analysis of sedimentary basins, the method is very advantageous. However, this analysis speed is accompanied by a large volume of data to be reduced. The project described here applies computational methods to automate the data-reduction process. Using the Visual Basic for Applications (VBA) programming language, which is intrinsically tied to Microsoft Excel, all reduction steps were combined in a single program: Chronus. With this program it is possible to choose the settings relevant to the reduction (type of detectors, standards analysed, uncertainty-propagation method, etc.), import the raw data automatically, correct for the procedural blank, correct the sample ratios using the standards and, finally, calculate the uncertainties. Chronus creates an Excel file with separate sheets storing the chosen settings, the information from each reduction step and the results. Chronus's capacity for reducing U-Pb data obtained by LA-ICPMS was tested using analyses of the 91500 (1065 Ma, Wiedenbeck et al., 1995) and Plešovice (337 Ma, Sláma et al., 2008) zircon standards, with the GJ-1 zircon (608 Ma, Jackson et al., 2004) as the primary standard. Propagation of the GJ-1 uncertainties into the analyses was done in two ways: considering the uncertainties of the GJ-1 analyses before and after each sample, or using the Mean Square of Weighted Deviates (MSWD) of the standard's ratios of interest. Reducing a large number of samples revealed unexpected intensities at mass 202; this phenomenon was also observed in the analyses of the standards cited above. There appears to be a relationship between the Rare Earth Element (REE) content of the zircon grains and the mass-202 intensity, possibly due to the formation of REE oxides during transport of material from the ablation chamber to the detectors.
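As a rough illustration of the standard-based ratio correction and uncertainty propagation the abstract describes — not Chronus's exact algorithm — the sketch below normalises a blank-corrected sample 206Pb/238U ratio by the bias of the bracketing GJ-1 analyses and combines relative uncertainties in quadrature. All numerical values are illustrative.

```python
# Sketch of standard-sample bracketing for U-Pb data reduction: a sample
# 206Pb/238U ratio is corrected by the measured bias of the bracketing
# GJ-1 standard analyses. Values and the quadrature propagation are
# illustrative, not Chronus's exact algorithm.
import math

GJ1_TRUE = 0.0989  # accepted 206Pb/238U of GJ-1 (~608 Ma); illustrative value

def correct(sample_ratio, sample_rel_err, std_before, std_after, std_rel_err):
    """Normalise a blank-corrected sample ratio using the mean measured
    ratio of the bracketing standard analyses."""
    bias = ((std_before + std_after) / 2) / GJ1_TRUE
    corrected = sample_ratio / bias
    # combine sample and standard relative uncertainties in quadrature
    rel_err = math.sqrt(sample_rel_err**2 + std_rel_err**2)
    return corrected, corrected * rel_err

ratio, abs_err = correct(0.0512, 0.01, 0.0975, 0.0991, 0.008)
print(f"corrected 206Pb/238U = {ratio:.5f} +/- {abs_err:.5f}")
```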
579

Research and development of an intelligent AGV-based material handling system for industrial applications

Ferreira, Tremaine Pierre January 2015 (has links)
The use of autonomous robots in industrial applications is growing in popularity, offering cost effectiveness, job efficiency and safety benefits. Despite these advantages, the major drawback of autonomous robots is the cost of acquiring them. It is the aim of GMSA to develop a low-cost AGV capable of performing material handling in an industrial environment. Autonomous robots are often used collectively, with more than one robot working together to achieve a common goal; the intelligent controller responsible for coordinating the individual robots plays a key role in managing the tasks of each robot towards that goal. This dissertation addresses the development of an AGV capable of such functionality. Key research areas include the development of an autonomous coupling system, the integration of key safety devices, and the development of an intelligent control strategy to govern the operation of multiple AGVs in an area.
580

Performance-aware resource management of multi-threaded applications for many-core systems

Olsen, Daniel 01 August 2016 (has links)
Future integrated systems will contain billions of transistors, composing tens to hundreds of IP cores. Modern computing platforms take advantage of this manufacturing advance and are moving from Multi-Processor Systems-on-Chip (MPSoC) towards many-core architectures employing large numbers of processing cores. These hardware changes are driven by application changes as well: modern applications are increasingly parallel and place growing demands on data storage and transfer. Resource management is a key technology for the successful use of such many-core platforms. Thread-to-core mapping can deal with the run-time dynamics of applications and platforms; efficient resource management thus enables efficient use of platform resources, maximizing platform utilization while minimizing interconnection-network communication load and energy budget. In this thesis, we present a performance-aware resource-management scheme for many-core architectures. In particular, the developed framework takes parallel applications as input and profiles them. Based on the profile information, a thread-to-core mapping algorithm finds (i) the number of threads that maximizes the utilization of the system and (ii) the best mapping for maximizing the application's performance at the selected thread count. To validate the proposed algorithm, we used and extended Sniper, a state-of-the-art many-core simulator. Finally, we developed a discrete-event simulator on top of Sniper to test and validate multiple scenarios faster. The results show that the proposed methodology achieves an average gain of 23% compared to a previously presented performance-oriented mapping, and each application completes its workload 18% faster on average.
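The framework's two decisions — choose a thread count from profiled scaling behaviour, then map the threads onto cores — can be sketched in miniature. The profile numbers, the diminishing-returns threshold and the row-major mesh placement below are invented stand-ins for the Sniper-derived profiling and mapping the thesis actually uses.

```python
# Toy sketch of the two decisions the abstract describes: pick a thread
# count from profiled scaling data, then map threads onto mesh cores.
# Profile numbers and the nearest-neighbour heuristic are invented.
PROFILE = {1: 100.0, 2: 52.0, 4: 28.0, 8: 20.0, 16: 19.5}  # threads -> runtime (s)

def pick_thread_count(profile, min_gain=0.10):
    """Stop adding threads once doubling them no longer buys min_gain speedup."""
    counts = sorted(profile)
    best = counts[0]
    for prev, cur in zip(counts, counts[1:]):
        if (profile[prev] - profile[cur]) / profile[prev] < min_gain:
            break
        best = cur
    return best

def map_threads(n_threads, mesh_width):
    """Place threads on adjacent cores of a mesh NoC (row-major) to keep
    interconnect traffic local."""
    return {t: (t % mesh_width, t // mesh_width) for t in range(n_threads)}

n = pick_thread_count(PROFILE)        # -> 8 with these illustrative numbers
print(n, map_threads(n, mesh_width=4))
```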
