1

Scaling Moore’s Wall: Existing Institutions and the End of a Technology Paradigm

Khan, Hassan N. 01 December 2017 (has links)
This dissertation is a historical and evaluative study of the semiconductor industry as it approaches the end of the silicon integrated-circuit paradigm. For nearly 60 years, semiconductor technology has been defined by rapid rates of progress and concomitant decreases in cost per function made possible by the extendibility of the silicon integrated circuit. The stability of this technological paradigm has not only driven the transformation of the global economy but also deeply shaped scholars' understanding of technological change. This study addresses the nature of technological change at the end of a paradigm and examines the role and capability of different institutions in shaping directions and responding to challenges during this period. The study first provides theoretical and historical context for the phenomenon under consideration. In order to place the dynamics of an industry at the end of a technology paradigm into proper context, particular attention is given to the semiconductor industry's history of failed proclamations of impending limits. The examination of previous episodes of technological uncertainty, and of the institutions developed to respond to those episodes, is used to illustrate the industry's departure from previous modes of technological and institutional evolution. The overall findings suggest that existing institutions may not be capable of addressing the semiconductor industry's looming technological discontinuity. Despite the creation of an entirely new institution, the Nanoelectronics Research Initiative, specifically oriented toward the end of Moore's Law, the industry, government agencies, and the scientific community writ large have so far been unable to find a successor to the silicon CMOS transistor. At the terminus of this dissertation, research toward new computing technologies remains ongoing, with considerable scientific, technological, and market uncertainty over future technology directions.
2

Robust and Scalable Silicon Photonic Interconnects and Devices

Novick, Asher January 2023 (has links)
At the same time as Moore's law is reaching its limits, there has been exponential growth in required computational power, most notably driven by the widespread deployment of artificial intelligence (AI) and deep learning (DL) models, such as ChatGPT. The unprecedented current and projected bandwidth density requirements of high-performance computing (HPC) and data center (DC) applications lead directly to an equally unprecedented need to supply and dissipate extreme amounts of power in ever smaller volumes. While at smaller scales this becomes a question of power dissipation limits for discrete components, in aggregate the power consumed across the full system quickly adds up to an environmentally significant energy drain. Traditional electronic interconnects have failed to keep pace, both in supporting bandwidth density and in achieving sufficient energy-per-bit efficiency, and optical interconnects have therefore become the dominant form of high-bandwidth communication between nodes at shorter and shorter reaches. Co-packaged silicon photonics (SiPh) has been proposed as a promising solution for driving these optical interconnects. In fact, SiPh engines have already become widely accepted in the commercial ecosystem, specifically for network switches and pluggable optical modules for mid-reach (10 m - 500 m) and long-haul (≥2 km) applications. The work in this thesis proposes novel integrated SiPh interconnect architectures, as well as the novel devices that enable them, in order to push SiPh-driven interconnects down to the inter-chip scale, inside compute and memory nodes (sub-centimeter), as well as all the way out to the low-earth-orbit (LEO) inter-satellite scale (> 1000 km). In the case of the former, recent advances in chip-scale Kerr frequency comb sources have allowed for fully integrated ultra-broadband dense wavelength-division multiplexing (DWDM). To take full advantage of these integrated DWDM sources, similar advances must be made at both the architecture and device levels. In the latter case, interest in inter-constellation connectivity is growing as LEO becomes saturated with satellites owned by a variety of private and public entities. For these constellations to communicate directly, a new class of satellite must join the sky, with adaptive communication capabilities to translate Baud rate and modulation format between otherwise incompatible constellations. Supporting each of these applications with an integrated photonics solution requires advances in both SiPh architectures and the devices that comprise them. This work first presents an overview of the system-level challenges associated with such links, including novel proposed integrated interconnect architectures, and then explores novel photonic devices designed to enable critical functionality and overcome system-level limitations. The advances demonstrated in this thesis provide a clear direction toward realizing a future fully permeated by ultra-efficient optical connectivity, supporting resource disaggregation and all-to-all connectivity from green hyper-scale data centers all the way to LEO.
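As a rough illustration of why comb-driven DWDM is attractive (the figures below are illustrative assumptions, not numbers taken from this thesis), the aggregate bandwidth of a link is the product of channel count and per-channel rate:

\[
B_{\text{agg}} = N_\lambda \times R_{\text{ch}}, \qquad \text{e.g. } N_\lambda = 64,\; R_{\text{ch}} = 25\ \text{Gb/s} \;\Rightarrow\; B_{\text{agg}} = 1.6\ \text{Tb/s per fiber},
\]

so adding wavelengths multiplies capacity without raising the per-channel symbol rate, which is what makes a single broadband Kerr comb source so valuable.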
3

Exploiting heterogeneous many cores on sequential code / Exploiter des multi-coeurs hétérogènes dans le cadre de codes séquentiels

Narasimha Swamy, Bharath 05 March 2015 (has links)
Heterogeneous Many-Core (HMC) architectures that mix many simple/small cores with a few complex/large cores are emerging as a design alternative that can provide both fast sequential performance for single-threaded workloads and power-efficient execution for throughput-oriented parallel workloads. The availability of many small cores in an HMC presents an opportunity to use them as low-power helper cores to accelerate memory-intensive sequential programs mapped to a large core. However, the latency overhead of accessing small cores in a loosely coupled system limits their utility as helper cores, and it is not clear whether small cores can execute helper threads sufficiently far in advance to benefit applications running on a larger, much more powerful core. In this thesis, we present a hardware/software framework called core-tethering to support efficient helper threading on heterogeneous many-cores. Core-tethering provides a co-processor-like interface to the small cores that (a) enables a large core to directly initiate and control helper execution on a helper core and (b) allows efficient transfer of execution context between the cores, thereby reducing the performance overhead of accessing small cores for helper execution. Our evaluation on a set of memory-intensive programs chosen from standard benchmark suites shows that helper threads running on moderately sized small cores can significantly accelerate a larger core compared to using a hardware prefetcher alone.
We also find that a small core provides a good trade-off against using an equivalent large core to run helper threads in an HMC. In summary, despite the latency overhead of accessing prefetched cache lines from the shared L3 cache, helper-thread-based prefetching on small cores appears to be a promising way to improve single-thread performance on memory-intensive workloads in HMC architectures.
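To make the helper-threading idea concrete, here is a minimal, software-only sketch of a helper thread that runs ahead of a main thread and prefetches linked-list nodes. It illustrates the general technique on a stock multicore using POSIX threads and GCC's __builtin_prefetch; it does not model the core-tethering hardware interface or an actual HMC, and all names in it are hypothetical.

```c
/* Minimal helper-thread prefetching sketch (illustrative only):
 * a "helper" thread walks a linked list ahead of the "main" thread,
 * issuing prefetches so the main thread's loads are more likely to hit. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    struct node *next;
    long payload;
    char pad[48];            /* spread nodes out so the walk actually misses */
} node_t;

static node_t *build_list(size_t n) {
    node_t *head = NULL;
    for (size_t i = 0; i < n; i++) {
        node_t *nd = malloc(sizeof *nd);
        nd->payload = (long)i;
        nd->next = head;
        head = nd;
    }
    return head;
}

static void *helper(void *arg) {
    /* Run ahead of the main thread, touching each node before it is needed. */
    for (node_t *p = arg; p != NULL; p = p->next)
        __builtin_prefetch(p->next, 0 /* read */, 1 /* low temporal locality */);
    return NULL;
}

int main(void) {
    node_t *list = build_list(1 << 20);

    pthread_t h;
    pthread_create(&h, NULL, helper, list);   /* helper starts prefetching */

    long sum = 0;                             /* main "compute" loop */
    for (node_t *p = list; p != NULL; p = p->next)
        sum += p->payload;

    pthread_join(h, NULL);
    printf("sum = %ld\n", sum);
    return 0;
}
```

Built with something like gcc -O2 -pthread, this only warms the shared cache from another core; core-tethering, as described in the thesis, would instead let the large core launch and steer such a helper on a small core with far lower overhead.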
4

Analysis and optimization of global interconnects for many-core architectures

Balakrishnan, Anant 02 December 2010 (has links)
The objective of this thesis is to develop circuit-aware interconnect technology optimization for network-on-chip-based many-core architectures. The dimensions of global interconnects in many-core chips are optimized for maximum bandwidth density and minimum delay, taking into account network-on-chip router latency and the size effects of copper. The optimal dimensions thus obtained are used to characterize different network-on-chip topologies based on wiring-area utilization, maximum core-to-core channel width, aggregate chip bandwidth, and worst-case latency. Finally, the advantages of many-core, many-tier chips are evaluated for different network-on-chip topologies. The area occupied by a router within a core is shown to be the bottleneck to achieving higher performance in network-on-chip-based architectures.
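The wire-sizing trade-off at issue here can be seen from a standard first-order model (stated as general background, not as the specific model used in the thesis): the distributed-RC delay of an unrepeated global wire of length L grows quadratically with length, while bandwidth density falls as wires are made wider and spaced further apart:

\[
t_{\text{wire}} \approx 0.38\, r\, c\, L^{2}, \qquad \text{bandwidth density} \approx \frac{\text{bits/s per wire}}{\text{wire pitch}},
\]

where r and c are the per-unit-length resistance and capacitance. Widening a wire lowers r (reducing delay) but increases the pitch (reducing bandwidth density), which is why the optimization must balance the two, and why router latency and copper size effects shift the optimum.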
5

Highly Parallel Silicon Photonic Links with Integrated Kerr Frequency Combs

Rizzo, Anthony January 2022 (has links)
The rapid growth of data-intensive workloads such as deep learning and artificial intelligence has placed significant strain on the interconnects of high performance computing systems, presenting a looming bottleneck of significant societal concern. Furthermore, with the impending end of Moore's Law, continued reliance on transistor density scaling in compute nodes to compensate for this bottleneck will come to an abrupt halt in the coming decade. Optical interconnects provide an appealing path to mitigating this communication bottleneck by leveraging the favorable physical properties of light to increase bandwidth while simultaneously reducing energy consumption with distance-agnostic performance, in stark contrast to electrical signaling. In particular, silicon photonics presents an ideal platform for optical interconnects for a variety of economic, fundamental scientific, and engineering reasons; namely, (i) the chips are fabricated using the same mature complementary metal-oxide-semiconductor (CMOS) infrastructure used for microelectronic chips; (ii) the high index contrast between silicon and silicon dioxide permits micron-scale devices at telecommunication wavelengths; and (iii) decades of engineering effort have resulted in state-of-the-art devices comparable to discrete components in other material platforms, including low-loss (< 0.5 dB/cm) waveguides, high-speed (> 100 Gb/s) modulators and photodetectors, and low-loss (< 1 dB) fiber-to-chip interfaces. By leveraging these favorable properties of the platform, silicon photonic chips can be directly co-packaged with CMOS electronics to yield unprecedented interconnect bandwidth at length scales ranging from millimeters to kilometers while simultaneously achieving substantial reductions in energy consumption relative to currently deployed solutions. The work in this thesis aims to address the fundamental scalability of silicon photonic interconnects to orders of magnitude beyond the current state of the art, enabling extreme channel counts in the frequency domain by leveraging advances in chip-scale Kerr frequency combs. While the current co-packaged optics roadmap includes silicon photonics as an enabling technology for ~5 pJ/bit terabit-scale interconnects, this work examines the foundational challenges that must be overcome to realize forward-looking sub-pJ/bit petabit-scale optical I/O. First, an overview of the system-level challenges associated with such links is presented, motivating the following chapters focused on device innovations that address these challenges. Leveraging these advances, a novel link architecture capable of scaling to hundreds of wavelength channels is proposed and experimentally demonstrated, providing an appealing path to future petabit/s photonic interconnects with sub-pJ/bit energy consumption. Such photonic interconnects with ultra-high bandwidth, ultra-low energy consumption, and low latency have the potential to revolutionize future data center and high performance computing systems by removing the strong constraint of data locality, permitting drastically new architectures through resource disaggregation. The advances demonstrated in this thesis provide a clear direction towards realizing future green hyper-scale data centers and high performance computers with environmentally conscious scaling, providing an energy-efficient and massively scalable platform capable of keeping pace with ever-growing bandwidth demands through the next quarter-century and beyond.
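The scale of the energy argument is easy to make explicit (the numbers below are illustrative, taken only from the targets quoted in the abstract, not from measured results): the electrical power consumed by an interconnect is the energy per bit times the aggregate data rate,

\[
P = E_{\text{bit}} \times B, \qquad 5\ \text{pJ/bit} \times 1\ \text{Pb/s} = 5\ \text{kW}, \qquad 0.5\ \text{pJ/bit} \times 1\ \text{Pb/s} = 500\ \text{W},
\]

so moving from today's few-pJ/bit roadmap targets to sub-pJ/bit operation is what keeps petabit-scale I/O inside a realistic package power budget.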
6

Histories, Tech, and a New Central Planning

Glickman, Susannah Elizabeth January 2023 (has links)
My research seeks to uncover how imagined futures and technological promises--in this case, the promise of quantum computers--became so tangible in the present. How could such a significant industry be built and maintained around mere potential existence? My project locates the answer to this question in the broader politico-economic category of ‘tech’—by which users typically mean information technology—through the history of quantum computing and information (QC). A category articulated by actors in this history, ‘tech’ emerges in its current form in the mid-1980s and relies on the conflation of economic and national security in the flesh of high-tech products like semiconductors. Since the field has yet to deliver on any of its promises, it cannot activate an after-the-fact teleology of “discovery”. For this reason, combined with its high visibility and institutional maturity, QC provides a particularly rich view into how actors construct institutions, histories, narratives and ideologies in real time, as well as how these narratives shift according to the needs of an audience, field, or other factors. Not only products of changing institutions, these narratives also reciprocally produce institutions—they mediate between material reality and ideology. For example, I look at the role of Moore’s Law in the reconstruction of the semiconductor industry and in the production of institutions for QC. My project uses new archival research and extensive oral interviews with more than 90 researchers and other important figures from academia, government and industry in the US, Japan, Europe, China, Singapore, and Israel to analyze the development of QC and the infrastructure that made it possible over the past 50 years. This project would constitute the first history of QC and would contribute a unique and incisive perspective on the rise of ‘tech’ in statecraft and power.
7

Memory Subsystem Optimization Techniques for Modern High-Performance General-Purpose Processors

January 2018 (has links)
General-purpose processors propel the advances and innovations that are the subject of humanity's many endeavors. Catering to this demand, chip multiprocessors (CMPs) and general-purpose graphics processing units (GPGPUs) have seen many high-performance innovations in their architectures. With these advances, the memory subsystem has become the performance- and energy-limiting aspect of CMPs and GPGPUs alike. This dissertation identifies and mitigates the key performance and energy-efficiency bottlenecks in the memory subsystem of general-purpose processors via novel, practical, microarchitecture and system-architecture solutions. Addressing the important Last Level Cache (LLC) management problem in CMPs, I observe that LLC management decisions made in isolation, as in prior proposals, often lead to sub-optimal system performance. I demonstrate that in order to maximize system performance, it is essential to manage the LLC while being cognizant of its interaction with the system main memory. I propose ReMAP, which reduces the net memory access cost by evicting cache lines that either have no reuse or have low memory access cost. ReMAP improves the performance of the CMP system by as much as 13%, and by an average of 6.5%. Rather than the LLC, the L1 data cache has a pronounced impact on GPGPU performance by acting as the bandwidth filter for the rest of the memory subsystem. Prior work has shown that the severely constrained data cache capacity in GPGPUs leads to sub-optimal performance. In this thesis, I propose two novel techniques that address the GPGPU data cache capacity problem. I propose ID-Cache, which performs effective cache bypassing and cache line size selection to improve cache capacity utilization. Next, I propose LATTE-CC, which exploits the GPU's latency tolerance and adaptively compresses the data stored in the data cache, thereby increasing its effective capacity. ID-Cache and LATTE-CC are shown to achieve 71% and 19.2% speedups, respectively, over a wide variety of GPGPU applications. Complementing the aforementioned microarchitecture techniques, I identify the need for system-architecture innovations to sustain the performance scalability of GPGPUs in the face of a slowing Moore's Law. I propose a novel GPU architecture called the Multi-Chip-Module GPU (MCM-GPU) that integrates multiple GPU modules to form a single logical GPU. With intelligent memory subsystem optimizations tailored for MCM-GPUs, it achieves within 7% of the performance of a similar but hypothetical monolithic-die GPU. Taking a step further, I present an in-depth study of the energy-efficiency characteristics of future MCM-GPUs and demonstrate that the inherent non-uniform memory access side-effects form the key energy-efficiency bottleneck in the future. In summary, this thesis offers key insights into the performance and energy-efficiency bottlenecks in CMPs and GPGPUs, which can guide future architects towards developing high-performance and energy-efficient general-purpose processors. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
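The LLC idea described above, preferring to evict lines that either will not be reused or would be cheap to re-fetch, can be sketched as a simple victim-selection routine. This is a generic illustration of cost-aware replacement under assumed metadata (a reuse prediction bit and a per-line memory-cost estimate); it is not the actual ReMAP algorithm, whose details are in the dissertation.

```c
/* Generic cost-aware LLC victim selection (illustrative sketch, not ReMAP).
 * Prefer lines predicted to have no reuse; among the rest, prefer the line
 * that would be cheapest to fetch again from main memory. */
#include <stdio.h>

#define WAYS 8

typedef struct {
    int valid;
    int reuse_predicted;   /* 1 if the line is expected to be reused */
    int mem_cost;          /* estimated cost (e.g., cycles) to re-fetch it */
} line_meta_t;

static int pick_victim(const line_meta_t set[WAYS]) {
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid)                     /* empty way: use it immediately */
            return w;
        /* A no-reuse line always beats a reused line; otherwise the lower
         * re-fetch cost wins. */
        int better =
            (set[w].reuse_predicted < set[victim].reuse_predicted) ||
            (set[w].reuse_predicted == set[victim].reuse_predicted &&
             set[w].mem_cost < set[victim].mem_cost);
        if (better)
            victim = w;
    }
    return victim;
}

int main(void) {
    line_meta_t set[WAYS] = {
        {1, 1, 200}, {1, 1,  80}, {1, 0, 300}, {1, 1, 120},
        {1, 0,  90}, {1, 1, 400}, {1, 1,  60}, {1, 1, 250},
    };
    printf("evict way %d\n", pick_victim(set));  /* way 4: no reuse, cheapest */
    return 0;
}
```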
8

Synthesis of top coat surface treatments for the orientation of thin film block copolymers

Chen, Christopher Hancheng 08 October 2013 (has links)
Block copolymer self-assembly has demonstrated sub-optical lithographic resolution. High values of chi, the block copolymer interaction parameter, are required to achieve next-generation lithographic resolution. Unfortunately, high values of chi can lead to difficulties in thin-film orientation control, which are believed to be caused by large differences in the surface energy of each block relative to the substrate and the free surface. The substrate-block interface can be modified to achieve a surface energy intermediate to that of each individual block; the air-polymer interface, however, presents additional complications. This thesis describes the synthesis of polymers for top coat surface treatments, which are designed to modify the surface energy of the air-block copolymer interface and enable block copolymer orientation control upon thermal annealing. Polymers with β-keto acid functionality were synthesized to allow polarity switching upon decarboxylation. Syntheses of anhydride-containing polymers were established that provide another class of polarity-switching materials. / text
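For context on why high chi is the lever for resolution (these are standard block copolymer scaling relations from the literature, not results of this thesis): a symmetric diblock copolymer only microphase-separates when the segregation strength exceeds the mean-field order-disorder threshold, and in the strong-segregation limit the domain period scales as

\[
\chi N \gtrsim 10.5, \qquad L_{0} \sim a\, N^{2/3} \chi^{1/6},
\]

where N is the degree of polymerization and a the statistical segment length. Raising chi therefore allows ordering at smaller N, and hence a smaller pitch, which is exactly what makes orientation control of high-chi films both necessary and difficult.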
9

Validation of individual consciousness in strong artificial intelligence : an African theological contribution

Forster, Dion Angus 30 June 2006 (has links)
The notion of identity has always been central to the human person's understanding of self. The question "who am I?" is fundamental to human being. Answers to this question have come from a wide range of academic disciplines. Philosophers, theologians, scientists, sociologists and anthropologists have all sought to offer some insight. The question of individual identity has traditionally been answered from two broad perspectives. The objectivist approach has sought to answer the question through empirical observation - you are a mammal, you are a Homo sapiens, you are male, you are African, etc. The subjectivist approach has sought to answer the question through phenomenological exploration - I understand myself to be sentient, I remember my past, I feel love, etc. A recent development in the field of computer science has, however, shown a shortcoming in both of these approaches. Ray Kurzweil, a theorist in strong artificial intelligence, suggests the possibility of an interesting identity crisis. He suggests that if a machine could be programmed and built to accurately and effectively emulate a person's conscious experience of being 'self', it could lead to a crisis of identity. In an instance where the machine and the person it is emulating can neither be objectively distinguished (i.e., both display the same characteristics of the person in question) nor subjectively distinguish themselves (i.e., both believe themselves to be the 'person in question', since both have an experience of being that person, an experience that could be based on memory, emotion, understanding and other subjective realities), how is the true identity of the individual validated? What approach can be employed in order to distinguish which of the two truly is the 'person in question' and which is the 'emulation of that person'? This research investigates this problem and presents a suggested solution to it. The research begins with an investigation of the claims of strong artificial intelligence and discusses Ray Kurzweil's hypothetical identity crisis. It also discusses various approaches to consciousness and identity, showing both their value and shortfall within the scope of this identity conundrum. In laying the groundwork for the solution offered in this thesis, the integrative theory of Ken Wilber is presented as a model that draws on the strengths of the objectivist and subjectivist approaches to consciousness, yet also emphasises the need for an approach which is not only based on individual data (i.e., the objectivist - you are, or subjectivist - I am). Rather, it requires an intersubjective knowing of self in relation to others. The outcome of this research project is an African Theological approach to self-validating consciousness in strong artificial intelligence. This takes the form of an African Theology of relational ontology. The contribution falls within the ambit of Christian anthropology and Trinitarian theology - stressing the Christian belief that true identity is both shaped by, and discovered in, relationship with others. The clearest expression of this reality is to be found in the African saying Umuntu ngumuntu ngabantu (A person is a person through other persons). / Systematic Theology / D. Th.
