31

Evaluating technologies and techniques for transitioning hydrodynamics applications to future generations of supercomputers

Mallinson, A. C. January 2016 (has links)
Current supercomputer development trends present severe challenges for scientific codebases. Moore's law continues to hold; however, power constraints have brought an end to Dennard scaling, forcing significant increases in overall concurrency. The performance imbalance between the processor and memory sub-systems is also increasing, and architectures are becoming significantly more complex. Scientific computing centres need to harness more computational resources in order to facilitate new scientific insights, and maintaining their codebases requires significant investments. Centres therefore have to decide how best to develop their applications to take advantage of future architectures. To prevent vendor "lock-in" and maximise investments, achieving portable-performance across multiple architectures is also a significant concern. Efficiently scaling applications will be essential for achieving improvements in science, and the MPI (Message Passing Interface) only model is reaching its scalability limits. Hybrid approaches which utilise shared memory programming models are a promising approach for improving scalability. Additionally, PGAS (Partitioned Global Address Space) models have the potential to address productivity and scalability concerns. Furthermore, OpenCL has been developed with the aim of enabling applications to achieve portable-performance across a range of heterogeneous architectures. This research examines approaches for achieving greater levels of performance for hydrodynamics applications on future supercomputer architectures. The development of a Lagrangian-Eulerian hydrodynamics application is presented together with its utility for conducting such research. Strategies for improving application performance, including PGAS- and hybrid-based approaches, are evaluated at large node-counts on several state-of-the-art architectures. Techniques to maximise the performance and scalability of OpenMP-based hybrid implementations are presented together with an assessment of how these constructs should be combined with existing approaches. OpenCL is evaluated as an additional technology for implementing a hybrid programming model and improving performance-portability. To enhance productivity, several tools for automatically hybridising applications and improving process-to-topology mappings are evaluated. Power constraints are starting to limit supercomputer deployments, potentially necessitating the use of more energy efficient technologies. Advanced processor architectures are therefore evaluated as future candidate technologies, together with several application optimisations which will likely be necessary. An FPGA-based solution is examined, including an analysis of how effectively it can be utilised via a high-level programming model, as an alternative to the specialist approaches which currently limit the applicability of this technology.
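As a rough illustration of the hybrid MPI+OpenMP model discussed in this abstract, the sketch below combines message passing between nodes with shared-memory threading within a node on a toy one-dimensional stencil. The array size, update rule and communication pattern are placeholder choices for illustration only and are not taken from the thesis or its hydrodynamics benchmark.

/* Illustrative hybrid MPI+OpenMP sketch: distributed ranks exchange halo
 * cells while OpenMP threads update the local sub-domain. All sizes and the
 * toy diffusion update are hypothetical. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024  /* local cells per MPI rank (hypothetical) */

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request threaded MPI so OpenMP threads can coexist with MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *u  = calloc(N + 2, sizeof(double));  /* field plus two halo cells */
    double *un = calloc(N + 2, sizeof(double));
    u[N / 2] = 1.0;                              /* arbitrary initial condition */

    for (int step = 0; step < 100; ++step) {
        /* Halo exchange with neighbouring ranks (periodic for simplicity). */
        int up = (rank + 1) % size, down = (rank - 1 + size) % size;
        MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, up,   0,
                     &u[0], 1, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, down, 1,
                     &u[N + 1], 1, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Shared-memory parallelism inside the node. */
        #pragma omp parallel for
        for (int i = 1; i <= N; ++i)
            un[i] = 0.5 * (u[i - 1] + u[i + 1]);  /* toy diffusion update */

        double *tmp = u; u = un; un = tmp;
    }

    if (rank == 0) printf("done on %d ranks\n", size);
    free(u); free(un);
    MPI_Finalize();
    return 0;
}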
32

Fullerene based systems for optical spin readout

Rahman, Rizvi January 2012 (has links)
Optical spin readout (OSR) in fullerene-based systems has the potential to solve the spin readout and scalability challenges in solid-state quantum information processing. While the rich variety of chemical groups that can be linked (covalently or not) to the fullerenes opens the possibility of making large and controlled arrays of qubits, optical methods can be used to measure EPR down to a single spin thanks to the large energy of optical photons compared to microwave ones. After reviewing the state of the art of OSR, for which the diamond NV centers constitute the benchmark, we undertake the study of fullerene-based species for OSR. An optically detected magnetic resonance (ODMR) setup was implemented in a commercial EPR spectrometer for this purpose. Each experimental chapter focuses on one of the molecular systems in question: a functionalised C<sub>60</sub> fullerene with a phosphonate group (C<sub>60</sub>-phosphine), porphyrin-fullerene architectures (weakly, strongly and moderately coupled) and finally erbium-doped trimetallic nitride template (TNT) fullerenes (focusing on ErSc<sub>2</sub>N@C<sub>80</sub>). In the C<sub>60</sub>-phosphine system, coherent optically detected magnetic resonance (ODMR) in the triplet state has been achieved. Since a large variety of organic and organometallic molecules can be attached to it both via the fullerene cage and the phosphonate group, this result makes it a very useful template to study OSR molecules chemically linked to a qubit. In the porphyrin-based structures, an intermediate coupling case in the form of a trimer-fullerene host-guest complex is found to allow detection of both the porphyrin and fullerene triplet states by CW ODMR, which makes organometallic complexes a possible coupling route for a qubit to an OSR component. In the TNT fullerene, crystal field mixing makes the Er<sup>3+</sup> inaccessible by ODMR. However, optical photons cause a mechanical rearrangement of the endohedral cluster which in turn impacts on the observed EPR. In particular, the dynamics of this process have been studied for the first time and hint towards diffusion kinetics at low pump power. An orientational selectivity has been discovered by using a polarised pump, and the time dynamics indicate the rearrangement of the matrix via diffusion on a free volume around the fullerenes. This shows that the endohedral Er<sup>3+</sup> in ErSc<sub>2</sub>N@C<sub>80</sub> can probe the environment outside the cage.
33

Επεξεργασία και μεταφορά πληροφορίας σε νανοδομές με εφαρμογές σε κβαντικούς υπολογιστές και σε οπτικές επικοινωνίες / Processing and transfer of information in nanostructures with applications to quantum computers and optical communications

Φουντουλάκης, Αντώνιος 09 October 2009 (has links)
Στην παρούσα διατριβή μελετάται η σύμφωνη αλληλεπίδραση ημιαγώγιμων νανοδομών με ηλεκτρομαγνητικά πεδία. Κατά την αλληλεπίδραση αυτή μπορούν να προκύψουν ενδιαφέροντα φαινόμενα, με αρκετές τεχνολογικές εφαρμογές τόσο στο άμεσο όσο και στο προσεχές μέλλον. Οι σημαντικότερες από αυτές παρατηρούνται στη κβαντική τεχνολογία, στους κβαντικούς υπολογιστές και στις οπτικές επικοινωνίες δεδομένων. / This thesis studies the coherent interaction of semiconductor nanostructures with electromagnetic fields. This interaction can give rise to several interesting phenomena that are potentially useful in several areas of modern optical and quantum technology, such as quantum computers and optical communications.
34

High speed simulation of microprocessor systems using LTU dynamic binary translation

Jones, Daniel January 2010 (has links)
This thesis presents new simulation techniques designed to speed up the simulation of microprocessor systems. The advanced simulation techniques may be applied to the simulator class which employs dynamic binary translation as its underlying technology. This research supports the hypothesis that faster simulation speeds can be realized by translating larger sections of the target program at runtime. The primary motivation for this research was to help facilitate comprehensive design-space exploration and hardware/software co-design of novel processor architectures by reducing the time required to run simulations. Instruction set simulators are used to design and to verify new system architectures, and to develop software in parallel with hardware. However, compromises must often be made when performing these tasks due to time constraints. This is particularly true in the embedded systems domain where there is a short time-to-market. The processing demands placed on simulation platforms are exacerbated further by the need to simulate the increasingly complex, multi-core processors of tomorrow. High speed simulators are therefore essential to reducing the time required to design and test advanced microprocessors, enabling new systems to be released ahead of the competition. Dynamic binary translation based simulators typically translate small sections of the target program at runtime. This research considers the translation of larger units of code in order to increase simulation speed. The new simulation techniques identify large sections of program code suitable for translation after analyzing a profile of the target program’s execution path built up during simulation. The average instruction level simulation speed for the EEMBC benchmark suite is shown to be at least 63% faster for the new simulation techniques than for basic block dynamic binary translation based simulation and 14.8 times faster than interpretive simulation. The average cycle-approximate simulation speed is shown to be at least 32% faster for the new simulation techniques than for basic block dynamic binary translation based simulation and 8.37 times faster than cycle-accurate interpretive simulation.
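To make the idea of profile-driven translation of larger units concrete, here is a highly simplified, hypothetical sketch of a simulator main loop: blocks are interpreted while execution counts are gathered, and once a region becomes hot it is treated as one larger translated unit. The block granularity, threshold and stub functions are invented for illustration and do not reflect the simulator described in the thesis, which emits real translated code rather than the stubs used here.

/* Conceptual sketch of profile-driven dynamic binary translation: blocks are
 * interpreted until a hotness threshold is reached, after which a larger
 * translation unit covering the hot region would be compiled and reused.
 * All names and thresholds are hypothetical. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_BLOCKS    256
#define HOT_THRESHOLD 50   /* executions before a region is considered hot */

typedef struct {
    unsigned exec_count;    /* profile data gathered while interpreting */
    bool     translated;    /* part of a compiled large translation unit? */
} block_info;

static block_info profile[NUM_BLOCKS];

/* Stand-ins for the interpreter and the generated native code. */
static int interpret_block(int pc)       { return (pc + 1) % NUM_BLOCKS; }
static int run_translated_region(int pc) { return (pc + 1) % NUM_BLOCKS; }

/* Group the hot block and its already-hot neighbours into one larger unit. */
static void translate_region(int pc)
{
    int lo = pc, hi = pc;
    while (lo > 0 && profile[lo - 1].exec_count >= HOT_THRESHOLD) lo--;
    while (hi < NUM_BLOCKS - 1 && profile[hi + 1].exec_count >= HOT_THRESHOLD) hi++;
    for (int b = lo; b <= hi; ++b)
        profile[b].translated = true;   /* a real system would emit native code here */
    printf("translated blocks %d..%d as one unit\n", lo, hi);
}

int main(void)
{
    int pc = 0;
    for (long step = 0; step < 100000; ++step) {
        if (profile[pc].translated) {
            pc = run_translated_region(pc);      /* fast path */
        } else {
            profile[pc].exec_count++;            /* profile while interpreting */
            if (profile[pc].exec_count == HOT_THRESHOLD)
                translate_region(pc);
            pc = interpret_block(pc);            /* slow path */
        }
    }
    return 0;
}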
35

Protection des systèmes informatiques vis-à-vis des malveillances : un hyperviseur de sécurité assisté par le matériel / Protecting computer systems against malicious attacks: a hardware-assisted security hypervisor

Morgan, Benoît 06 December 2016 (has links)
L'utilisation des systèmes informatiques est aujourd'hui en pleine évolution. Le modèle classique qui consiste à associer à chaque utilisateur une machine physique qu'il possède et dont il va exploiter les ressources devient de plus en plus obsolète. Aujourd'hui, les ressources informatiques que l'on utilise peuvent être distribuées n'importe où dans l'Internet et les postes de travail du quotidien ne sont plus systématiquement des machines réelles. Cette constatation met en avant deux phénomènes importants qui sont à l'origine de l'évolution de notre utilisation de l'informatique : le Cloud computing et la virtualisation. Le Cloud computing (ou informatique en nuage en français) permet à un utilisateur d'exploiter des ressources informatiques, de granularités potentiellement très différentes, pendant une durée variable, qui sont à disposition dans un nuage de ressources. L'utilisation de ces ressources est ensuite facturée à l'utilisateur. Ce modèle peut être bien sûr avantageux pour une entreprise qui peut s'appuyer sur des ressources informatiques potentiellement illimitées, qu'elle n'a pas nécessairement à administrer et gérer elle-même. Elle peut ainsi en tirer un gain de productivité et un gain financier. Du point de vue du propriétaire des machines physiques, le gain financier lié à la location des puissances de calcul est accentué par une maximisation de l'exploitation de ces machines par différents clients.L'informatique en nuage doit donc pouvoir s'adapter à la demande et facilement se reconfigurer. Une manière d'atteindre ces objectifs nécessite notamment l'utilisation de machines virtuelles et des techniques de virtualisation associées. Même si la virtualisation de ressources informatiques n'est pas née avec le Cloud, l'avènement du Cloud a considérablement augmenté son utilisation. L'ensemble des fournisseurs d'informatique en nuage s'appuient aujourd'hui sur des machines virtuelles, qui sont beaucoup plus facilement déployables et migrables que des machines réelles.Ainsi, s'il est indéniable que l'utilisation de la virtualisation apporte un véritable intérêt pour l'informatique d'aujourd'hui, il est par ailleurs évident que sa mise en œuvre ajoute une complexité aux systèmes informatiques, complexité à la fois logicielle (gestionnaire de machines virtuelles) et matérielle (nouveaux mécanismes d'assistance à la virtualisation intégrés dans les processeurs). A partir de ce constat, il est légitime de se poser la question de la sécurité informatique dans ce contexte où l'architecture des processeurs devient de plus en plus complexe, avec des modes de plus en plus privilégiés. Etant donné la complexité des systèmes informatiques, l'exploitation de vulnérabilités présentes dans les couches privilégiées ne risque-t-elle pas d'être très sérieuse pour le système global ? Étant donné la présence de plusieurs machines virtuelles, qui ne se font pas mutuellement confiance, au sein d'une même machine physique, est-il possible que l'exploitation d'une vulnérabilité soit réalisée par une machine virtuelle compromise ? N'est-il pas nécessaire d'envisager de nouvelles architectures de sécurité prenant en compte ces risques ?C'est à ces questions que cette thèse propose de répondre. En particulier, nous présentons un panorama des différents problèmes de sécurité dans des environnements virtualisés et des architectures matérielles actuelles. 
A partir de ce panorama, nous proposons dans nos travaux une architecture originale permettant de s'assurer de l'intégrité d'un logiciel s'exécutant sur un système informatique, quel que soit son niveau de privilège. Cette architecture est basée sur une utilisation mixte de logiciel (un hyperviseur de sécurité développé par nos soins, s'exécutant sur le processeur) et de matériel (un périphérique de confiance, autonome, que nous avons également développé). / Computer systems are evolving quickly nowadays. The classical model, which associates a physical machine with each user, is becoming obsolete. Today, the computing resources we use can be distributed anywhere on the Internet, and everyday workstations are no longer systematically physical machines. This observation highlights two important phenomena driving the evolution of how we use computers: cloud computing and hardware virtualization. Cloud computing enables users to exploit computing resources, at potentially very different granularities and for a variable amount of time, made available in a cloud of resources. The resource usage is then billed to the user. This model can obviously be profitable for a company that wants to rely on a potentially unlimited amount of resources without having to administer and manage them itself. A company can thereby increase its productivity and save money. From the point of view of the physical machine owner, the financial gain from leasing computing power is amplified by maximising the utilisation of these machines across different clients. Cloud computing must therefore be able to adapt quickly to fluctuating demand and to reconfigure itself easily. One way to reach these goals relies on the use of virtual machines and the associated virtualization techniques. Even though the virtualization of computing resources was not born with the cloud, the advent of the cloud has substantially increased its use. Nowadays, every cloud provider relies on virtual machines, which are much easier to deploy and migrate than physical machines. Virtualization of computing resources was previously based essentially on software techniques, but the increasing use of virtual machines, particularly in cloud computing, has led microprocessor manufacturers to include hardware virtualization assistance mechanisms. These hardware extensions make the virtualization process easier on the one hand and improve performance on the other. Technologies such as Intel VT-x and VT-d, AMD-V from AMD, and the virtualization extensions from ARM have thus been created. Moreover, the virtualization process requires extra functionality to manage the different virtual machines, schedule them, isolate them and share hardware resources such as memory and peripherals. These functions are generally handled by a virtual machine manager, whose work can be more or less eased by the characteristics of the processor on which it executes. In general, these technologies introduce new execution modes on processors, which are increasingly privileged and complex. Thus, even though virtualization clearly brings real benefits to modern computing, it is equally clear that its implementation adds complexity to computer systems, in both software and hardware.
Given this observation, it is legitimate to ask about computer security in a context where processor architectures are becoming more and more complex, with more and more privileged execution modes. Given the presence of multiple virtual machines, which do not trust each other, within the same physical machine, is it possible that a vulnerability could be exploited by a compromised virtual machine? Is it not necessary to consider new security architectures that take these risks into account? This thesis attempts to answer these questions. In particular, we present a survey of the security issues of virtualized environments and modern hardware architectures. Building on this survey, we propose an original architecture that ensures the integrity of software executing on a computer system, regardless of its privilege level. This architecture combines software, a security hypervisor, and hardware, a trusted peripheral, both of which we have designed and implemented.
36

Schematic calculi for the analysis of decision procedures / Calculs schématiques pour l'analyse de procédures de décision

Tushkanova, Elena 19 July 2013 (has links)
Dans cette thèse, on étudie des problèmes liés à la vérification de systèmes (logiciels). On s'intéresse plus particulièrement à la conception sûre de procédures de décision utilisées en vérification. De plus, on considère également un problème de modularité pour un langage de modélisation utilisé dans la plateforme de vérification Why. De nombreux problèmes de vérification peuvent se réduire à un problème de satisfaisabilité modulo des théories (SMT). Pour construire des procédures de satisfaisabilité, Armando et al. ont proposé en 2001 une approche basée sur la réécriture. Cette approche utilise un calcul général pour le raisonnement équationnel appelé paramodulation. En général, une application équitable et exhaustive des règles du calcul de paramodulation (PC) conduit à une procédure de semi-décision qui termine sur les entrées insatisfaisables (la clause vide est alors engendrée), mais qui peut diverger sur les entrées satisfaisables. Mais ce calcul peut aussi terminer pour des théories intéressantes en vérification, et devient ainsi une procédure de décision. Pour raisonner sur ce calcul, un calcul de paramodulation schématique (SPC) a été étudié, en particulier pour prouver automatiquement la décidabilité de théories particulières et de leurs combinaisons. L'avantage de ce calcul SPC est que s'il termine sur une seule entrée abstraite, alors PC termine pour toutes les entrées concrètes correspondantes. Plus généralement, SPC est un outil automatique pour vérifier des propriétés de PC telles que la terminaison, la stable infinité et la complétude de déduction. Une contribution majeure de cette thèse est un environnement de prototypage pour la conception et la vérification de procédures de décision. Cet environnement, basé sur des fondements théoriques, est la première implantation du calcul de paramodulation schématique. Il a été complètement implanté sur la base solide fournie par le système Maude mettant en oeuvre la logique de réécriture. Nous montrons que ce prototype est très utile pour dériver la décidabilité et la combinabilité de théories intéressantes en pratique pour la vérification. Cet environnement est appliqué à la conception d'un calcul de paramodulation schématique dédié à une arithmétique de comptage. Cette contribution est la première extension de la notion de paramodulation schématique à une théorie prédéfinie. Cette étude a conduit à de nouvelles techniques de preuve automatique qui sont différentes de celles utilisées manuellement dans la littérature. Les hypothèses permettant d'appliquer nos techniques de preuves sont faciles à satisfaire pour les théories équationnelles avec opérateurs de comptage. Nous illustrons notre contribution théorique sur des théories représentant des extensions de structures de données classiques comme les listes ou les enregistrements. Nous avons également contribué au problème de la spécification modulaire pour les classes et méthodes Java génériques. Nous proposons des extensions du langage de modélisation Krakatoa, faisant partie de la plateforme Why qui permet de prouver qu'un programme C ou Java est correct par rapport à sa spécification. Les caractéristiques essentielles de notre apport sont l'introduction de la paramétricité à la fois pour les types et les théories, ainsi qu'une relation d'instantiation entre les théories. Les extensions proposées sont illustrées sur deux exemples significatifs : tri de tableaux et fonctions de hachage. Les deux problèmes traités dans cette thèse ont pour point commun les solveurs SMT. Les procédures de décision sont les moteurs des solveurs SMT, et la plateforme Why engendre des conditions de vérification dérivées d'un programme source annoté, qu'elle transmet aux solveurs SMT (ou assistants de preuve) pour vérifier la correction du programme. Mots-clés : / In this thesis we address problems related to the verification of software-based systems. We are mostly interested in the (safe) design of decision procedures used in verification. In addition, we also consider a modularity problem for a modeling language used in the Why verification platform. Many verification problems can be reduced to a satisfiability problem modulo theories (SMT). In order to build satisfiability procedures, Armando et al. have proposed in 2001 an approach based on rewriting. This approach uses a general calculus for equational reasoning named paramodulation. In general, a fair and exhaustive application of the rules of paramodulation calculus (PC) leads to a semi-decision procedure that halts on unsatisfiable inputs (the empty clause is then generated) but may diverge on satisfiable ones. Fortunately, it may also terminate for some theories of interest in verification, and thus it becomes a decision procedure. To reason on the paramodulation calculus, a schematic paramodulation calculus (SPC) has been studied, notably to automatically prove decidability of single theories and of their combinations. The advantage of SPC is that if it halts for one given abstract input, then PC halts for all the corresponding concrete inputs. More generally, SPC is an automated tool to check properties of PC like termination, stable infiniteness and deduction completeness. A major contribution of this thesis is a prototyping environment for designing and verifying decision procedures. This environment, based on the theoretical studies, is the first implementation of the schematic paramodulation calculus. It has been implemented from scratch on the firm basis provided by the Maude system based on rewriting logic. We show that this prototype is very useful to derive decidability and combinability of theories of practical interest in verification. It helps testing new saturation strategies and experimenting with new extensions of the original (schematic) paramodulation calculus. This environment has been applied for the design of a schematic paramodulation calculus dedicated to the theory of Integer Offsets. This contribution is the first extension of the notion of schematic paramodulation to a built-in theory. This study has led to new automatic proof techniques that are different from those performed manually in the literature. The assumptions to apply our proof techniques are easy to satisfy for equational theories with counting operators. We illustrate our theoretical contribution on theories representing extensions of classical data structures such as lists and records. We have also addressed the problem of modular specification of generic Java classes and methods. We propose extensions to the Krakatoa Modeling Language, a part of the Why platform for proving that a Java or C program is a correct implementation of some specification. The key features are the introduction of parametricity both for types and for theories, and an instantiation relation between theories. The proposed extensions are illustrated on two significant examples: the specification of the generic method for sorting arrays and for a generic hash map. Both problems considered in this thesis are related to SMT solvers. Firstly, decision procedures are at the core of SMT solvers. Secondly, the Why platform extracts verification conditions from a source program annotated by specifications, and then transmits them to SMT solvers or proof assistants to check the program correctness.
37

Discrete quantum walks and quantum image processing

Venegas-Andraca, Salvador Elías January 2005 (has links)
In this thesis we have focused on two topics: Discrete Quantum Walks and Quantum Image Processing. Our work is a contribution within the field of quantum computation from the perspective of a computer scientist. With the purpose of finding new techniques to develop quantum algorithms, there has been an increasing interest in studying Quantum Walks, the quantum counterparts of classical random walks. Our work in quantum walks begins with a critical and comprehensive assessment of those elements of classical random walks and discrete quantum walks on undirected graphs relevant to algorithm development. We propose a model of discrete quantum walks on an infinite line using pairs of quantum coins under different degrees of entanglement, as well as quantum walkers in different initial state configurations, including superpositions of corresponding basis states. We have found that the probability distributions of such quantum walks have particular forms which are different from the probability distributions of classical random walks. Also, our numerical results show that the symmetry properties of quantum walks with entangled coins have a non-trivial relationship with corresponding initial states and evolution operators. In addition, we have studied the properties of the entanglement generated between walkers, in a family of discrete Hadamard quantum walks on an infinite line with one coin and two walkers. We have found that there is indeed a relation between the amount of entanglement available in each step of the quantum walk and the symmetry of the initial coin state. However, as we show with our numerical simulations, such a relation is not straightforward and, in fact, it can be counterintuitive. Quantum Image Processing is a blend of two fields: quantum computation and image processing. Our aim has been to promote cross-fertilisation and to explore how ideas from quantum computation could be used to develop image processing algorithms. Firstly, we propose methods for storing and retrieving images using non-entangled and entangled qubits. Secondly, we study a case in which 4 different values are randomly stored in a single qubit, and show that quantum mechanical properties can, in certain cases, allow better reproduction of original stored values compared with classical methods. Finally, we briefly note that entanglement may be used as a computational resource to perform hardware-based pattern recognition of geometrical shapes that would otherwise require classical hardware and software.
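For readers unfamiliar with coined quantum walks, the short sketch below simulates a single-coin discrete Hadamard walk on a line and prints its characteristic two-peaked probability distribution; it is intended only as background for the abstract above. The initial coin state, number of steps and output sampling are arbitrary choices, and the sketch does not cover the entangled-coin or two-walker models studied in the thesis.

/* Minimal simulation of a discrete-time Hadamard quantum walk on a line:
 * a Hadamard coin flip followed by a coin-conditioned shift, repeated for
 * STEPS steps. Parameters are arbitrary illustrative choices. */
#include <complex.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

#define STEPS 100
#define SIZE  (2 * STEPS + 1)   /* positions -STEPS .. +STEPS */

int main(void)
{
    static double complex amp[SIZE][2], next[SIZE][2];

    /* Symmetric initial coin state (|0> + i|1>)/sqrt(2) at the origin. */
    amp[STEPS][0] = 1.0 / sqrt(2.0);
    amp[STEPS][1] = I / sqrt(2.0);

    for (int t = 0; t < STEPS; ++t) {
        memset(next, 0, sizeof next);
        for (int x = 0; x < SIZE; ++x) {
            /* Hadamard coin followed by the conditional shift. */
            double complex c0 = (amp[x][0] + amp[x][1]) / sqrt(2.0);
            double complex c1 = (amp[x][0] - amp[x][1]) / sqrt(2.0);
            if (x > 0)        next[x - 1][0] += c0;  /* coin 0 steps left  */
            if (x < SIZE - 1) next[x + 1][1] += c1;  /* coin 1 steps right */
        }
        memcpy(amp, next, sizeof amp);
    }

    /* Print the characteristic two-peaked probability distribution. */
    for (int x = 0; x < SIZE; x += 10) {
        double p = cabs(amp[x][0]) * cabs(amp[x][0])
                 + cabs(amp[x][1]) * cabs(amp[x][1]);
        printf("position %4d  probability %.5f\n", x - STEPS, p);
    }
    return 0;
}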
38

Designing a quantum computer based on pulsed electron spin resonance

Morley, Gavin W. January 2005 (has links)
Electron spin resonance (ESR) experiments are used to assess the possibilities for processing quantum information in the electronic and nuclear spins of endohedral fullerenes. It is shown that ¹⁵N@C₆₀ can be used for universal two-qubit quantum computing. The first step in this scheme is to initialize the nuclear and electron spins that each store one qubit. This was achieved with a magnetic field of 8.6 T at 3 K, by applying resonant RF and microwave radiation. This dynamic nuclear polarization technique made it possible to show that the nuclear T₁ time of ¹⁵N@C₆₀ is on the order of twelve hours at 4.2 K. The electronic T₂ is the limiting decoherence time for the system. At 3.7 K, this can be extended to 215 μs by using amorphous sulphur as the solvent. Pulse sequences are described that could perform all single-qubit gates to the two qubits independently, as well as CNOT gates. After these manipulations, the value of the qubits should be measured. Two techniques are demonstrated for this, by measuring the nuclear spin. Sc@C₈₂ could also be useful for quantum computation. By comparing ESR measurements with density functional theory calculations, it is shown how the orientation of a Sc@C₈₂ molecule in an applied magnetic field affects the molecule's Zeeman and hyperfine coupling. Hence the g- and A-tensors are written in the coordinate frame of the molecule. Pulsed ESR measurements show that the decoherence time at 20 K is 13 μs, which is 20 times longer than had been previously reported. Carbon nanotubes have been filled with endohedral fullerenes, forming 1D arrays that could lead to a scalable quantum computer. N@C₆₀ and Sc@C₈₂ have been used for this filling in various concentrations. ESR measurements of these samples are consistent with simulations of the dipolar coupling.
39

Functionalization of endohedral fullerenes and their application in quantum information processing

Liu, Guoquan January 2011 (has links)
Quantum information processing (QIP), which inherently utilizes quantum mechanical phenomena to perform information processing, may outperform its classical counterpart at certain tasks. As one of the physical implementations of QIP, the electron-spin based architecture has recently attracted great interest. Endohedral fullerenes with unpaired electrons, such as N@C<sub>60</sub>, are promising candidates to embody the qubits because of their long spin decoherence time. This thesis addresses several fundamental aspects of the strategy of engineering the N@C<sub>60</sub> molecules for applications in QIP. Chemical functionalization of N@C<sub>60</sub> is investigated and several different derivatives of N@C<sub>60</sub> are synthesized. These N@C<sub>60</sub> derivatives exhibit different stability when they are exposed to ambient light in a degassed solution. The cyclopropane derivative of N@C<sub>60</sub> shows comparable stability to pristine N@C<sub>60</sub>, whereas the pyrrolidine derivatives demonstrate much lower stability. To elucidate the effect of the functional groups on the stability, an escape mechanism of the encapsulated nitrogen atom is proposed based on DFT calculations. The escape of nitrogen is facilitated by a 6-membered ring formed in the decomposition of the pyrrolidine derivatives of N@C<sub>60</sub>. In contrast, the 4-membered ring formed in the cyclopropane derivative of N@C<sub>60</sub> prohibits such an escape through the addends. Two N@C<sub>60</sub>-porphyrin dyads are synthesized. The dyad with free base porphyrin exhibits typical zero-field splitting (ZFS) features due to functionalization in the solid-state electron spin resonance (ESR) spectrum. However, the nitrogen ESR signal in the second dyad of N@C<sub>60</sub> and copper porphyrin is completely suppressed at a wide range of sample concentrations. The dipolar coupling between the copper spin and the nitrogen spins is calculated to be 27.0 MHz. To prove the presence of the encapsulated nitrogen atom in the second dyad, demetallation of the copper porphyrin moiety is carried out. The recovery of approximately 82% of the signal intensity confirms that the dipolar coupling suppresses the ESR signal of N@C<sub>60</sub>. To prepare ordered structures of N@C<sub>60</sub>, the nematic matrix MBBA is employed to align the pyrrolidine derivatives of N@C<sub>60</sub>. Orientations of these derivatives are investigated through simulation of their ESR spectra. The derivatives with a –CH₃ or phenyl group derived directly from the N-substituent of the pyrrolidine ring are preferentially oriented, based on their powder-like ESR spectra in the MBBA matrix. An angle of about is also found between the directors of fullerene derivatives and MBBA. In contrast, the derivatives with a –CH₂ group inserted between the phenyl group and the pyrrolidine ring are nearly randomly distributed in MBBA. These results illustrate the applicability of liquid crystal as a matrix to align N@C<sub>60</sub> derivatives for QIP applications.
40

Building and operating large-scale SpiNNaker machines

Heathcote, Jonathan David January 2016 (has links)
SpiNNaker is an unconventional supercomputer architecture designed to simulate up to one billion biologically realistic neurons in real-time. To achieve this goal, SpiNNaker employs a novel network architecture which poses a number of practical problems in scaling up from desktop prototypes to machine-room-filling installations. SpiNNaker's hexagonal torus network topology has received mostly theoretical treatment in the literature. This thesis tackles some of the challenges encountered when building 'real-world' systems. Firstly, a scheme is devised for physically laying out hexagonal torus topologies in machine rooms which avoids long cables; this is demonstrated on a half-million core SpiNNaker prototype. Secondly, to improve the performance of existing routing algorithms, a more efficient process is proposed for finding (logically) short paths through hexagonal torus topologies. This is complemented by a formula which provides routing algorithms with greater flexibility when finding paths, potentially resulting in a more balanced network utilisation. The scale of SpiNNaker's network and the models intended for it also present their own challenges. Placement and routing algorithms are developed which assign processes to nodes and generate paths through SpiNNaker's network. These algorithms minimise congestion and tolerate network faults. The proposed placement algorithm is inspired by techniques used in chip design and is shown to enable larger applications to run on SpiNNaker than the previous state-of-the-art. Likewise, the routing algorithm developed is able to tolerate network faults, inevitably present in large-scale systems, with little performance overhead.
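As a rough companion to the discussion of short paths on hexagonal torus topologies, the sketch below shows one well-known way to shorten a displacement on a hexagonal torus: express it as a three-axis vector, subtract the median component, and compare the wrap-around alternatives. The coordinate convention, function names and torus dimensions are illustrative assumptions; this is not claimed to be the exact formula or the routing algorithms contributed by the thesis.

/* Illustrative sketch (not the thesis's algorithm): shortening a displacement
 * on a W x H hexagonal torus by writing it as a three-axis vector (x, y, z)
 * with dx = x - z and dy = y - z, minimising |x|+|y|+|z| via median
 * subtraction, and then comparing the wrap-around alternatives. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { int x, y, z; } hex_vec;

static int median3(int a, int b, int c)
{
    if ((a >= b && a <= c) || (a <= b && a >= c)) return a;
    if ((b >= a && b <= c) || (b <= a && b >= c)) return b;
    return c;
}

/* Minimal three-axis form of a plain (dx, dy) displacement. */
static hex_vec minimise(int dx, int dy)
{
    int m = median3(dx, dy, 0);
    hex_vec v = { dx - m, dy - m, -m };  /* dx = x - z and dy = y - z are preserved */
    return v;
}

static int hops(hex_vec v) { return abs(v.x) + abs(v.y) + abs(v.z); }

/* Consider the four wrap-around alternatives on the torus and keep the
 * shortest resulting vector. */
static hex_vec shortest_on_torus(int dx, int dy, int W, int H)
{
    dx = ((dx % W) + W) % W;            /* canonical displacement in [0, W) */
    dy = ((dy % H) + H) % H;            /* canonical displacement in [0, H) */
    int cx[2] = { dx, dx - W }, cy[2] = { dy, dy - H };
    hex_vec best = minimise(cx[0], cy[0]);
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) {
            hex_vec v = minimise(cx[i], cy[j]);
            if (hops(v) < hops(best)) best = v;
        }
    return best;
}

int main(void)
{
    hex_vec v = shortest_on_torus(10, 1, 12, 12);  /* arbitrary example */
    printf("shortest vector (%d, %d, %d): %d hops\n", v.x, v.y, v.z, hops(v));
    return 0;
}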
