  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Digital Control of Half-Bridge DC-DC Converters with Current Doubler Rectification

Yao, Liangbin 01 January 2005 (has links)
DC-DC power converters play an important role in powering telecom and computing systems. Complex systems, including power electronics systems, increasingly use digital controllers because of major advancements in digital control and DSP technology, as well as their ability to implement sophisticated and enhanced control schemes. In this thesis, digital control is investigated for DC-DC converters in high-current, low-voltage applications. An optimal design of a regulated DC-DC converter requires a valid model. The current doubler rectified half bridge (CDRHB) DC-DC converter is well suited to high-current, low-voltage applications. In this thesis, the topology's operation is analyzed, and the unified state-space model, analog small-signal model, and digital small-signal model are derived. The digital compensator design is then discussed, along with design rules for the analog-to-digital converter (ADC) and the digital pulse-width modulator (DPWM). In addition, voltage driving optimization is proposed to benefit the digital controller. Finally, experimental results based on the CDRHB are presented and analyzed.
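The digital compensator described in this abstract can be illustrated with a minimal sketch. The 2-pole/2-zero difference equation below is a common form for digitally controlled DC-DC voltage loops; the class name and all coefficients are illustrative placeholders, not values from the thesis:

```python
# Hedged sketch of a discrete 2-pole/2-zero compensator for a DC-DC voltage
# loop. Coefficients are illustrative, not taken from the thesis.
class DigitalCompensator:
    def __init__(self, b, a):
        self.b = b           # numerator (zero) coefficients [b0, b1, b2]
        self.a = a           # denominator (pole) coefficients [a1, a2]
        self.e = [0.0, 0.0]  # previous voltage errors e[n-1], e[n-2]
        self.d = [0.0, 0.0]  # previous duty cycles d[n-1], d[n-2]

    def step(self, error):
        # d[n] = b0*e[n] + b1*e[n-1] + b2*e[n-2] - a1*d[n-1] - a2*d[n-2]
        d = (self.b[0] * error + self.b[1] * self.e[0] + self.b[2] * self.e[1]
             - self.a[0] * self.d[0] - self.a[1] * self.d[1])
        d = min(max(d, 0.0), 1.0)  # duty cycle saturated to [0, 1] for the DPWM
        self.e = [error, self.e[0]]
        self.d = [d, self.d[0]]
        return d
```

In a real loop the error would come from the ADC sample of the output voltage, and the returned duty cycle would drive the DPWM each switching period.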
222

Improvement of Data-Intensive Applications Running on Cloud Computing Clusters

Ibrahim, Ibrahim Adel 01 January 2019 (has links)
MapReduce, designed by Google, is the most widely used distributed programming model in cloud environments. Hadoop, an open-source implementation of MapReduce, is a data management framework that runs on large clusters of commodity machines to handle data-intensive applications. Many prominent enterprises, including Facebook, Twitter, and Adobe, use Hadoop for their data-intensive processing needs. Task stragglers in MapReduce jobs dramatically impede job execution on massive datasets in cloud computing systems. This impedance is due to the uneven distribution of input data and computation load among cluster nodes, heterogeneous data nodes, data skew in the reduce phase, resource contention, and network configurations. All of these factors can cause delays, failures, and violations of job completion time. One of the key issues that can significantly affect the performance of cloud computing is computation load balancing among cluster nodes. Replica placement in the Hadoop Distributed File System (HDFS) plays a significant role in data availability and the balanced utilization of clusters. Under the current replica placement policy (RPP) of HDFS, the replicas of data blocks are not evenly distributed across the cluster's nodes, so HDFS must rely on a load balancing utility to balance the distribution of replicas, which incurs extra overhead in time and resources. This dissertation addresses the data load balancing problem and presents an innovative replica placement policy for HDFS that evenly balances the data load among the cluster's nodes. The heterogeneity of cluster nodes exacerbates the issue of computational load balancing; therefore, a second replica placement algorithm is proposed for heterogeneous cluster environments. The timing of identifying a straggler map task is critical for straggler mitigation in data-intensive cloud computing. To mitigate straggler map tasks, this dissertation proposes the Present progress and Feedback based Speculative Execution (PFSE) algorithm, a new straggler identification scheme that identifies straggler map tasks based on feedback information received from completed tasks alongside the progress of the currently running task. Straggler reduce tasks aggravate violations of MapReduce job completion time and are typically the result of poor data partitioning before the reduce phase. The hash partitioner employed by Hadoop can cause intermediate data skew, which results in straggler reduce tasks. This dissertation also proposes a new partitioning scheme, named Balanced Data Clusters Partitioner (BDCP), to mitigate straggler reduce tasks. BDCP is based on sampling the input data and feedback information about the currently processing task; it assists in straggler mitigation during the reduce phase and minimizes job completion time in MapReduce jobs. Extensive experiments corroborate that the algorithms and policies proposed in this dissertation improve the performance of data-intensive applications running on cloud platforms.
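The idea behind a sampling-based partitioner like BDCP can be sketched in a few lines. The snippet below is only illustrative of the general approach (estimate key frequencies from a sample, then assign heavy keys to the least-loaded reducer); it is not the thesis's actual algorithm, and the function name is invented:

```python
from collections import Counter

def balanced_partition(sample_keys, num_reducers):
    """Hedged sketch of sampling-based reduce-side partitioning: estimate key
    frequencies from a sample of the map output, then assign the heaviest keys
    first to whichever reducer currently has the least estimated load."""
    freq = Counter(sample_keys)
    loads = [0] * num_reducers
    assignment = {}
    for key, count in freq.most_common():   # heaviest keys first
        target = loads.index(min(loads))    # least-loaded reducer so far
        assignment[key] = target
        loads[target] += count
    return assignment
```

Contrast this with Hadoop's default hash partitioner, which ignores key frequencies entirely and can therefore send several hot keys to the same reducer.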
223

Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures

Alghareb, Faris 01 January 2019 (has links)
The revolution in chip manufacturing processes spanning five decades has proliferated high-performance, energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually amplified the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improving system reliability. Herein, we first propose optimized soft-error-resilient designs to improve the robustness of sub-micron computing systems. The proposed approaches deliver energy efficiency and tolerate double/multiple errors simultaneously while incurring acceptable speed degradation compared to prior work. Second, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems is investigated to identify the approach that realizes favorable attributes, such as reduced critical-datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV latching circuits that achieve attractive features such as low energy consumption, high computing performance, and superior soft-error tolerance, i.e., the ability to tolerate Multiple Node Upsets (MNUs) concurrently, positioning them as a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives increase the energy efficiency and soft-error resilience of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable, energy-efficient CMOS logic and spintronic memory architectures with deeply scaled devices operating at low voltages.
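The redundancy-based SE-mitigation approaches the abstract refers to build on majority voting. As a minimal software illustration (not circuitry from the dissertation), a triple modular redundancy (TMR) voter masks any single upset, since the two unaffected replicas outvote the corrupted one:

```python
def tmr_vote(a, b, c):
    """Bitwise majority voter for triple modular redundancy (TMR): each output
    bit takes the value held by at least two of the three replicas, so a
    single-event upset in any one replica is masked."""
    return (a & b) | (b & c) | (a & c)
```

Double errors in the same bit position defeat plain TMR, which is why the dissertation pursues designs that tolerate double/multiple errors.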
224

Value-of-Information based Data Collection in Underwater Sensor Networks

Khan, Fahad 01 May 2019 (has links)
Underwater sensor networks are deployed in marine environments, where they face challenges distinct from those of terrestrial sensor networks. Chief among these are the limitations of the communication medium, which result in low bandwidth and long latency; this creates problems when the network must transmit large amounts of data over long distances. One possible solution is to use mobile sinks such as autonomous underwater vehicles (AUVs), called data mules, to offload these large quantities of data. Often a sensor network is deployed to report events that require immediate attention, and delays in reporting such events can have catastrophic consequences. In this dissertation, we present path planning algorithms that prioritize data retrieval from sensor nodes so that the nodes requiring the most immediate attention are served earliest; in other words, the goal is to improve the Quality of Information (QoI) retrieved. The proposed path planning algorithms are based on heuristics meant to improve the Value of Information (VoI) retrieved from the system. Value of information is a construct that encodes the valuation of an information segment, i.e., the price an optimal player would pay to obtain that segment in a game-theoretic setting; quality of information and value of information are complementary concepts. We formulate a value of information model for sensor networks, incorporate the constraints that arise in underwater settings, develop a VoI-based path planning problem statement, and propose heuristics that solve the path planning problem. We show through simulation studies that the proposed strategies improve the value, and hence quality, of the information retrieved. Notably, these path planning strategies apply equally well in terrestrial settings that deploy mobile sinks for data collection.
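A VoI-based path planning heuristic of the kind described can be sketched as a greedy tour. The snippet below assumes a simple exponential decay of information value over time and a constant mule speed; the decay model, parameters, and function name are illustrative, not the dissertation's actual heuristics:

```python
import math

def plan_path(nodes, speed=1.0, decay=0.1, start=(0.0, 0.0)):
    """Hedged sketch of a greedy VoI heuristic: each node has a position and an
    initial information value that decays exponentially with time; the data
    mule repeatedly visits the node offering the best time-discounted value
    per unit of travel time."""
    pos, t, path, total = start, 0.0, [], 0.0
    remaining = dict(nodes)  # {node_id: ((x, y), initial_value)}
    while remaining:
        def score(item):
            (x, y), v = item[1]
            travel = math.dist(pos, (x, y)) / speed
            return v * math.exp(-decay * (t + travel)) / max(travel, 1e-9)
        best_id, ((x, y), v) = max(remaining.items(), key=score)
        t += math.dist(pos, (x, y)) / speed
        total += v * math.exp(-decay * t)     # value actually collected
        pos = (x, y)
        path.append(best_id)
        del remaining[best_id]
    return path, total
```

High-value (urgent) nodes are visited first even when they are slightly farther away, which is the prioritization behavior the abstract describes.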
225

Autonomous Discovery and Maintenance of Mobile Free-Space-Optical Links

Khan, Mahmudur 01 August 2018 (has links)
Free-Space-Optical (FSO) communication has the potential to play a significant role in future-generation wireless networks. It offers improved spectrum utilization, higher data transfer rates, and a lower probability of interception by unwanted sources. FSO communication can provide optical-level wireless communication speeds and can help solve the wireless capacity problem experienced by traditional RF-based technologies. Despite these advantages, communication using FSO transceivers requires establishing and maintaining line-of-sight (LOS). We consider autonomous mobile nodes (Unmanned Ground Vehicles or Unmanned Aerial Vehicles), each with one FSO transceiver mounted on a movable head capable of scanning in the horizontal and vertical planes. We propose novel schemes for the automatic discovery, establishment, and maintenance of LOS alignment between these nodes via mechanical steering of the directional FSO transceivers in 2-D and 3-D scenarios. Extensive simulations show the effectiveness of the proposed methods for both neighbor discovery and LOS maintenance. We also present a prototype implementation of such mobile nodes with FSO transceivers. The effectiveness of the neighbor discovery and LOS alignment protocols is evaluated by analyzing results from both simulations and experiments conducted with the prototype. The results show that, with such mechanically steerable directional transceivers and the proposed methods, optical wireless links can be established within practical discovery times and maintained in a mobile setting with minimal disruption.
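The LOS-alignment condition underlying FSO neighbor discovery can be stated geometrically: a link exists only while each node's beam heading points at the other within the beam's divergence. The 2-D check below is an illustrative model only, not the thesis's protocol; all names and the angle convention are assumptions:

```python
import math

def aligned(pos_a, heading_a, pos_b, heading_b, beamwidth):
    """Hedged sketch of the mutual LOS-alignment test for two FSO transceivers
    in 2-D: each node's beam heading must point at the other node to within
    half the beam divergence angle (all angles in radians)."""
    def points_at(src, heading, dst):
        bearing = math.atan2(dst[1] - src[1], dst[0] - src[0])
        diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= beamwidth / 2
    return points_at(pos_a, heading_a, pos_b) and points_at(pos_b, heading_b, pos_a)
```

A discovery scheme sweeps the movable heads until this mutual condition first holds, and a maintenance scheme steers the heads to keep it holding as the nodes move.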
226

Managing IO Resource for Co-running Data Intensive Applications in Virtual Clusters

Huang, Dan 01 January 2018 (has links)
Today, Big Data computing platforms employ resource management systems such as YARN, Torque, Mesos, and Google Borg to share physical computing resources among many users or applications. With virtualization and resource management systems, users can launch their applications on the same node with low mutual interference and low management overhead for CPU and memory. However, challenges remain before these systems can fully manage IO resources in Big Data File Systems (BDFS) and shared network facilities. In this study, we systematically examine three IO management problems: proportional sharing of block IO in container-based virtualization, network IO contention in MPI-based HPC applications, and data migration overhead in HPC workflows. To improve proportional sharing, we develop a prototype system called BDFS-Container, which containerizes BDFS at the Linux block IO level. Central to BDFS-Container is a proactive IOPS-throttling mechanism named IOPS Regulator, which improves proportional IO sharing under the BDFS IO pattern by 74.4% on average. For network IO resource management, we exploit virtual switches to facilitate network traffic manipulation and reduce mutual interference on the network for in-situ applications, and we adopt SARIMA-based techniques to analyze and predict the MPI traffic issued by simulations so that network bandwidth can be allocated dynamically when it is needed. Third, to address the data migration problem in small-to-medium-sized HPC clusters, we construct a sided IO path, named SideIO, that explicitly directs analysis data to a BDFS that co-locates computation with data. In experiments with two real-world scientific workflows, SideIO completely avoids the most expensive data movement overhead and achieves up to 3x speedups compared with current solutions.
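Proactive IOPS throttling of the kind attributed to the IOPS Regulator is commonly built on a token bucket: each container's bucket refills at its assigned share of the total IOPS budget, and a request is admitted only if a token is available. The class below is a hedged sketch of that general mechanism, not the dissertation's implementation; the name and parameters are illustrative:

```python
import time

class IOPSRegulator:
    """Hedged sketch of proportional IOPS throttling via a token bucket:
    tokens refill at the container's assigned IOPS rate, up to a burst cap,
    and each block IO request consumes one token."""
    def __init__(self, iops_limit, burst=None):
        self.rate = float(iops_limit)
        self.capacity = float(burst if burst is not None else iops_limit)
        self.tokens = self.capacity
        self.last = time.monotonic()

    def try_io(self, cost=1.0):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost   # admit the block IO request
            return True
        return False              # throttle: caller should defer the request
```

Giving each co-running container a bucket whose rate matches its weight yields proportional sharing of the device's aggregate IOPS.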
227

A Game-theoretic Model for Regulating Freeriding in Subsidy-Based Pervasive Spectrum Sharing Markets

Rahman, Mostafizur 01 January 2018 (has links)
Cellular spectrum is a limited natural resource becoming scarcer at a worrisome rate. To satisfy users' expectations of wireless data services, researchers and practitioners have recognized the need for greater utilization and more pervasive sharing of the spectrum. Though scarce, spectrum is underutilized in some areas or during certain operating hours because of inadequate regulatory policies, static allocation, and emerging business challenges. Thus, finding ways to improve the utilization of this resource and make sharing more pervasive is of great importance. A number of solutions already exist to increase spectrum utilization through increased sharing. Dynamic Spectrum Access (DSA) enables a cellular operator to participate in spectrum sharing in several ways, such as geolocation databases and cognitive radios, but these systems perform spectrum sharing at the secondary level (i.e., bands are shared if and only if the primary/licensed user is idle), and it is questionable whether they will suffice to meet future expectations of spectral efficiency. Alongside secondary sharing, spectrum sharing among primary users is emerging as a new domain of pervasive sharing; we call this "pervasive spectrum sharing" (PSS). However, spectrum sharing among primary users requires strong incentives to share and a freeriding-free cellular market. Freeriding in pervasively shared spectrum markets (whether enabled by government subsidies/regulations or by self-motivated coalitions among cellular operators) is a real techno-economic challenge. In a PSS market, operators share their resources with the primary users of other operators and may sometimes have to block their own primary users in order to attain sharing goals. Small operators with lower-quality service may freeride on large operators' infrastructure in such pervasively shared markets. Even worse, since small operators' users may perceive higher-than-expected service quality for a lower fee, freeriding can cost large operators customers and motivate small operators to keep freeriding with the additional earnings from those stolen customers. Thus, freeriding can drive a shared spectrum market to an unhealthy and unstable equilibrium. In this work, we model freeriding by small operators in shared spectrum markets via a game-theoretic framework. We focus on a performance-based government incentive scheme and aim to minimize the freeriding that emerges in such PSS markets. We present insights from the model and discuss policy and regulatory challenges.
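The incentive logic at the heart of such a model can be illustrated with a toy best-response calculation. This is emphatically not the dissertation's game; it is a hedged sketch with invented parameters, showing only why a performance-based subsidy with enforcement can flip a small operator's rational choice from freeriding to compliance:

```python
def best_response(subsidy, penalty, detection_prob, freeride_gain):
    """Toy model: a small operator freerides when the expected gain from
    serving its users on others' infrastructure without reciprocating exceeds
    the expected loss of the performance-based subsidy plus any fine."""
    expected_freeride_payoff = freeride_gain - detection_prob * (subsidy + penalty)
    return "freeride" if expected_freeride_payoff > 0 else "comply"
```

Raising the subsidy at stake, the fine, or the detection probability each shrinks the freeriding region, which is the regulatory lever the abstract alludes to.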
228

Guided Autonomy for Quadcopter Photography

Alabachi, Saif 01 January 2019 (has links)
Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directs the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flies to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models that ensure higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; 4) learning robust control policies via deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
229

Leveraging the Intrinsic Switching Behaviors of Spintronic Devices for Digital and Neuromorphic Circuits

Pyle, Steven 01 May 2019 (has links)
With semiconductor technology scaling approaching atomic limits, novel approaches employing new memory and computation elements are sought to realize increased density, enhanced functionality, and new computational paradigms. Spintronic devices offer intriguing avenues for improving digital circuits, leveraging non-volatility to reduce static power dissipation and vertical integration to increase density. Novel hybrid spintronic-CMOS digital circuits are developed herein that deliver enhanced functionality at reduced static power consumption and area cost. The developed spin-CMOS D flip-flop enables improved power-gating strategies, achieving instant store/restore capability while using 10 fewer transistors than typical CMOS-only implementations. The spin-CMOS Muller C-element developed herein improves asynchronous pipelines by reducing area overhead while adding features such as instant data store/restore and delay-element-free bundled-data asynchronous pipelines. Spintronic devices also improve the scaling of neuromorphic circuits by enabling compact, low-power neuron and non-volatile synapse implementations. Moreover, the stochastic switching behavior of spintronic devices can be leveraged to realize stochastic spiking neurons, which are more akin to biological neurons and commensurate with theories from computational neuroscience and probabilistic learning rules. Spintronic-based Probabilistic Activation Function circuits are utilized herein to provide a compact, low-power neuron for Binarized Neural Networks, and two implementations of stochastic spiking neurons with alternative speed, power, and area benefits are realized. Finally, a comprehensive neuromorphic architecture comprising stochastic spiking neurons, low-precision synapses with Probabilistic Hebbian Plasticity, and a novel non-volatile homeostasis mechanism is realized for subthreshold, ultra-low-power unsupervised learning that is robust to process variations. Along with several case studies, implications for future spintronic digital and neuromorphic circuits are presented.
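The probabilistic-activation-function neuron the abstract describes can be caricatured in software: the device's input-dependent switching probability is approximated by a sigmoid, and the neuron fires a binary spike stochastically. This is an illustrative analogue, not a device model from the dissertation:

```python
import math
import random

def stochastic_spike(membrane_potential, rng=random.random):
    """Hedged sketch of a probabilistic activation function neuron: the
    spintronic device's switching probability is approximated by a sigmoid of
    the input, and the neuron emits a binary spike with that probability."""
    p = 1.0 / (1.0 + math.exp(-membrane_potential))  # switching probability
    return 1 if rng() < p else 0
```

Averaged over many trials, the spike rate recovers the sigmoid, which is what makes such neurons usable in Binarized Neural Networks and probabilistic learning rules.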
230

Heterogeneous Reconfigurable Fabrics for In-circuit Training and Evaluation of Neuromorphic Architectures

Mohammadizand, Ramtin 01 May 2019 (has links)
A heterogeneous-device-technology reconfigurable logic fabric is proposed that leverages the complementary advantages of magnetic random access memory (MRAM)-based look-up tables (LUTs), used to realize sequential logic circuits, and conventional SRAM-based LUTs, used to realize combinational logic paths. The resulting Hybrid Spin/Charge FPGA (HSC-FPGA), built with magnetic tunnel junction (MTJ) devices within this topology, demonstrates commensurate reductions in area and power consumption over fabrics whose LUTs are constructed with either technology alone. Herein, a hierarchical top-down design approach is used to develop the HSC-FPGA, from the configurable logic block (CLB) and slice structures down to the LUT circuits and the corresponding device fabrication paradigms. This facilitates a novel architectural approach that reduces leakage energy, minimizes communication occurrence and energy cost by eliminating unnecessary data transfers, and supports auto-tuning for resilience. Furthermore, the HSC-FPGA enables new advantages of technology co-design: it trades off alternative mappings between emerging devices and transistors at runtime, allowing dynamic remapping to adaptively leverage the intrinsic computing features of each device technology. The HSC-FPGA offers a platform for fine-grained Logic-In-Memory architectures and runtime-adaptive hardware. An orthogonal dimension of fabric heterogeneity is non-determinism, enabled by either low-voltage CMOS or probabilistic emerging devices; it can be realized by embedding probabilistic devices within a reconfigurable network to blend deterministic and probabilistic computational models. Herein, we consider the probabilistic spin logic p-bit device as a fabric element within a crossbar-structured weighted array. The programmability of the resistive network interconnecting p-bit devices is achieved by modifying the resistive states of the array's weighted connections. Thus, the programmable weighted array forms a CLB-scale macro co-processing element with bitstream programmability, providing field programmability for a wide range of classification and recognition tasks and fluid mappings between probabilistic and deterministic computing approaches. In particular, a Deep Belief Network (DBN) is implemented in the field using recurrent layers of co-processing elements to form an n × m1 × m2 × ... × mi weighted array, a configurable hardware circuit with an n-input layer followed by i ≥ 1 hidden layers. As neuromorphic architectures using post-CMOS devices grow in capability and network size, the utility and benefits of reconfigurable fabrics of neuromorphic modules can be expected to continue to accelerate.
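The p-bit element referenced above follows the standard probabilistic spin logic update, m_i = sgn(r + tanh(I_i)) with r drawn uniformly from (-1, 1), where I_i is the weighted input the device receives from the crossbar array. The sketch below is a software illustration of that well-known update rule; the weight values in the test are illustrative only:

```python
import math
import random

def pbit_update(inputs, weights, bias=0.0, rng=random.uniform):
    """Standard probabilistic spin logic (p-bit) update: the output state is
    m = sgn(r + tanh(I)), with r ~ Uniform(-1, 1) and I the bias plus the
    weighted sum of neighboring p-bit states from the crossbar."""
    I = bias + sum(w * m for w, m in zip(weights, inputs))
    r = rng(-1.0, 1.0)
    return 1 if math.tanh(I) + r >= 0 else -1
```

For strong inputs the device pins to ±1 almost deterministically; for weak inputs it fluctuates, which is exactly the blend of deterministic and probabilistic computation the fabric exploits.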
