
Extracting Data-Level Parallelism in High-Level Synthesis for Reconfigurable Architectures

Escobedo Contreras, Juan Andres 01 January 2020 (has links)
High-Level Synthesis (HLS) tools are a set of algorithms that allow programmers to obtain implementable Hardware Description Language (HDL) code from specifications written in high-level, sequential languages such as C, C++, or Java. HLS allows programmers to code in their preferred language while still obtaining all the benefits of hardware acceleration, without needing to be intimately familiar with the accelerator's hardware platform. In this work we summarize and expand upon several of our approaches to improving the automatic memory banking capabilities of HLS tools targeting reconfigurable architectures, namely Field-Programmable Gate Arrays (FPGAs). We explored several approaches to automatically find the optimal partition factor and a usable banking scheme for stencil kernels, including a tessellation-based approach that uses multiple families of hyperplanes to perform the partitioning, which found better banking factors than current state-of-the-art methods, and a graph-theoretic methodology that allowed us to mathematically prove the optimality of our banking solutions. For non-stencil kernels, we relaxed some of the conditions in our graph-based model to propose a best-effort solution that arbitrarily reduces memory access conflicts (simultaneous accesses to the same memory bank). We also proposed a non-linear transformation, based on prime factorization, that converts a small subset of non-stencil kernels into stencil memory accesses, allowing us to apply all of our previous work on memory partitioning to them. Our approaches obtained better results than commercial tools and state-of-the-art algorithms in terms of reduced resource utilization and increased frequency of operation. We also obtained better partition factors for some stencil kernels, and usable banking schemes for non-stencil kernels with better performance than any applicable existing algorithm.
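The hyperplane-banking idea can be made concrete with a minimal sketch. The example below is illustrative only, not the dissertation's actual algorithm: it checks whether a linear banking function of the form bank(i, j) = (a*i + b*j) mod N lets every access of a 2-D 5-point stencil land in a distinct bank in the same cycle (the stencil shape, coefficients, and bank count are assumptions):

```python
def bank(i, j, a=1, b=2, N=5):
    """Linear-hyperplane banking: address (i, j) -> bank (a*i + b*j) mod N."""
    return (a * i + b * j) % N

# Offsets touched by one iteration of a 2-D 5-point stencil
OFFSETS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def conflict_free(a, b, N, grid=8):
    """Check that every iteration touches len(OFFSETS) distinct banks."""
    for i in range(1, grid - 1):
        for j in range(1, grid - 1):
            banks = {bank(i + di, j + dj, a, b, N) for di, dj in OFFSETS}
            if len(banks) != len(OFFSETS):
                return False  # two accesses collide in the same bank
    return True

print(conflict_free(1, 2, 5))  # (i + 2j) mod 5 separates all five accesses
print(conflict_free(1, 1, 5))  # (i + j) mod 5 collides, e.g. (1,0) and (0,1)
```

A search over (a, b, N) in this style is what a banking tool automates; the dissertation's methods additionally prove optimality of the chosen factor rather than enumerating candidates.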

Data-Driven Nonlinear Control Designs for Constrained Systems

Harvey, Roland 01 January 2020 (has links)
Systems with nonlinear dynamics are theoretically constrained to the realm of nonlinear analysis and design, while explicit constraints are expressed as equalities or inequalities of state, input, and output vectors of differential equations. Few control designs exist for systems with such explicit constraints, and no generalized solution has been provided. This dissertation presents general techniques to design stabilizing controls for a specific class of nonlinear systems with constraints on input and output, and verifies that such designs are straightforward to implement in selected applications. Additionally, a closed-form technique for an open-loop problem with unsolvable dynamic equations is developed. Typical optimal control methods cannot be readily applied to nonlinear systems without heavy modification. However, by embedding a novel control framework based on barrier functions and feedback linearization, well-established optimal control techniques become applicable when constraints are imposed by the design in real-time. Applications in power systems and aircraft control often have safety, performance, and hardware restrictions that are combinations of input and output constraints, while cryogenic memory applications have design restrictions and unknown analytic solutions. Most applications fall into a broad class of systems known as passivity-short, in which certain properties are utilized to form a structural framework for system interconnection with existing general stabilizing control techniques. Previous theoretical contributions are extended to include constraints, which can be readily applied to the development of scalable system networks in practical systems, even in the presence of unknown dynamics. In cases such as these, model identification techniques are used to obtain estimated system models which are guaranteed to be at least passivity-short. 
With these analytic tools accessible, a data-driven nonlinear control design framework is developed in which model identification yields passivity-short system models and input and output saturations are handled. Simulations are presented that demonstrate effective control and stabilization of practical example systems.
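The barrier-function idea can be illustrated with a toy example. The sketch below is a minimal illustration under assumed scalar dynamics x' = u with output constraint |x| < c, not the dissertation's design: the control grows unboundedly as the state approaches the constraint, so trajectories starting inside the safe set never leave it.

```python
def barrier_control(x, k=2.0, c=1.0):
    """Log-barrier-style feedback: u = -k*x / (c^2 - x^2).
    The denominator vanishes as |x| -> c, so the control pushes the state
    away from the constraint boundary with unbounded authority."""
    return -k * x / (c**2 - x**2)

def simulate(x0, dt=1e-3, steps=5000):
    """Forward-Euler simulation of the assumed plant x' = u."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = x + dt * barrier_control(x)
        traj.append(x)
    return traj

traj = simulate(0.9)
print(max(abs(v) for v in traj) < 1.0)  # constraint |x| < 1 never violated
print(abs(traj[-1]) < 0.05)             # state driven toward the origin
```

In the full framework this barrier is embedded alongside feedback linearization so that standard optimal-control machinery applies despite the constraint.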

Improving Usability of Genetic Algorithms through Self Adaptation on Static and Dynamic Environments

Norat, Reamonn 01 January 2020 (has links)
We propose a self-adaptive genetic algorithm, called SAGA, for the purpose of improving the usability of genetic algorithms on both static and dynamic problems. Self-adaptation can improve usability by automating some of the parameter tuning for the algorithm, a difficult and time-consuming process for canonical genetic algorithms. Reducing or simplifying the need for parameter tuning will help make genetic algorithms a more attractive tool for those who are not experts in the field of evolutionary algorithms, allowing more people to take advantage of their problem-solving capabilities on real-world problems. We test SAGA and analyze its behavior on a variety of problems. First, we test on static problems, where our focus is on usability improvements as measured by the number of parameter configurations to tune and the number of fitness evaluations conducted; on these problems, SAGA is compared to a canonical genetic algorithm. Next, we test on dynamic problems, where the fitness landscape varies over the course of the problem's execution. The dynamic problems allow us to examine whether self-adaptation can react effectively to ever-changing and unpredictable problems. On the dynamic problems, we compare SAGA to a canonical genetic algorithm as well as to other genetic algorithm methods designed or used specifically for dynamic problems. Finally, we test on a real-world problem pertaining to Medicare Fee-For-Service payments in order to validate the real-world usefulness of SAGA. For this problem, we compare SAGA to both a canonical genetic algorithm and logistic regression, the standard method for this problem in healthcare informatics. We find that this self-adaptive genetic algorithm successfully improves usability through a large reduction in parameter tuning while maintaining equal or superior results on a majority of the problems tested. The large reduction in parameter tuning translates to large time savings for users of SAGA. Furthermore, self-adaptation proves to be a very capable mechanism for dealing with the difficulties of dynamic environments, as observed in the changes to parameters in response to changes in the problem's fitness landscape.
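The self-adaptation mechanism can be sketched in miniature. The toy below is not SAGA itself: each individual carries its own mutation rate, which is inherited and perturbed along with the genome, so the rate tunes itself during the run instead of being hand-set (the OneMax fitness, truncation selection, and rate-update rule are all assumptions for illustration):

```python
import random

def evolve(bits=30, pop_size=40, gens=60, seed=1):
    """Self-adaptive GA sketch: individual = (bitstring, own mutation rate)."""
    rng = random.Random(seed)
    pop = [([rng.randint(0, 1) for _ in range(bits)], rng.uniform(0.01, 0.2))
           for _ in range(pop_size)]
    fitness = lambda ind: sum(ind[0])  # OneMax: count of 1-bits
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            genes, rate = rng.choice(parents)
            # self-adaptation: the rate itself mutates and is inherited
            rate = min(0.5, max(1e-3, rate * rng.uniform(0.8, 1.25)))
            child = [1 - g if rng.random() < rate else g for g in genes]
            children.append((child, rate))
        pop = parents + children
    return max(fitness(ind) for ind in pop)

print(evolve())  # best OneMax fitness found, out of a maximum of 30
```

Because high-rate individuals dominate early exploration and low-rate individuals dominate late refinement, the single user-facing knob replaces several hand-tuned ones, which is the usability gain the abstract describes.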

Energy-Efficient Signal Conversion and In-Memory Computing using Emerging Spin-based Devices

Salehi Mobarakeh, Soheil 01 January 2020 (has links)
New approaches are sought to maximize the signal sensing and reconstruction performance of Internet-of-Things (IoT) devices while reducing their dynamic and leakage energy consumption. Recently, Compressive Sensing (CS) has been proposed as a technique aimed at reducing the number of samples taken per frame to decrease energy, storage, and data transmission overheads. CS can be used to sample spectrally-sparse wide-band signals close to the information rate rather than the Nyquist rate, which can alleviate the high cost of hardware performing sampling in low-duty IoT applications. In my dissertation, I am focusing mainly on the adaptive signal acquisition and conversion circuits utilizing spin-based devices to achieve a highly-favorable range of accuracy, bandwidth, miniaturization, and energy trade-offs while co-designing the CS algorithms. The use of such approaches specifically targets new classes of Analog to Digital Converter (ADC) designs providing Sampling Rate (SR) and Quantization Resolution (QR) adapted during the acquisition by a cross-layer strategy considering both signal and hardware-specific constraints. Extending CS and Non-uniform CS (NCS) methods using emerging devices is highly desirable. Among promising devices, the 2014 ITRS Magnetism Roadmap identifies nanomagnetic devices as capable post-CMOS candidates, of which Magnetic Tunnel Junctions (MTJs) are reaching broader commercialization. Thus, my doctoral research topic is well-motivated by the established aims of academia and industry. Furthermore, the benefits of alternatives to von-Neumann architectures are sought for emerging applications such as IoT and hardware-aware intelligent edge devices, as well as the application of spintronics for neuromorphic processing. Thus, in my doctoral research, I have also focused on realizing post-fabrication adaptation, which is ubiquitous in post-Moore approaches, as well as mission-critical, IoT, and neuromorphic applications.
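The core Compressive Sensing step of sampling below the Nyquist rate and recovering a sparse signal can be sketched generically. This is a standard software illustration (Gaussian sensing matrix plus Orthogonal Matching Pursuit), not the adaptive spin-based acquisition hardware described above; all dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3                 # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0  # sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                     # sensing matrix
y = Phi @ x                          # m << n sub-Nyquist measurements

# Orthogonal Matching Pursuit: greedily pick the atom most correlated with
# the residual, then re-fit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(err)  # relative reconstruction error (near zero on successful recovery)
```

Non-uniform CS and the hardware-aware sampling-rate/quantization adaptation discussed above change how y is acquired, but the reconstruction burden sketched here is what motivates co-designing the acquisition circuits with the algorithm.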

MFPA: Mixed-Signal Field Programmable Array for Energy-Aware Compressive Signal Processing

Tatulian, Adrian 01 January 2020 (has links)
Compressive Sensing (CS) is a signal processing technique that reduces the number of samples taken per frame to decrease energy, storage, and data-transmission overheads, as well as the time taken for data acquisition in time-critical applications. The tradeoff in such an approach is the increased complexity of signal reconstruction. While several algorithms have been developed for CS signal reconstruction, hardware implementation of these algorithms is still an area of active research. Prior work has sought to exploit the parallelism available in reconstruction algorithms to minimize hardware overheads; however, such approaches are limited by the underlying limitations of CMOS technology. Herein, the MFPA (Mixed-signal Field Programmable Array) approach is presented as a hybrid spin-CMOS reconfigurable fabric specifically designed for implementing CS data sampling and signal reconstruction. The resulting fabric consists of 1) slice-organized analog blocks providing amplifiers, transistors, capacitors, and Magnetic Tunnel Junctions (MTJs), which are configurable to achieve the square/square-root operations required for calculating vector norms, 2) digital functional blocks featuring 6-input clockless lookup tables for computation of the matrix inverse, and 3) an MRAM-based nonvolatile crossbar array for carrying out low-energy matrix-vector multiplication operations. The various functional blocks are connected via a global interconnect and spin-based analog-to-digital converters. Simulation results demonstrate significant energy and area benefits compared to equivalent CMOS digital implementations for each of the functional blocks used: an 80% reduction in energy and a 97% reduction in transistor count for the nonvolatile crossbar array, an 80% standby-power reduction and a 25% smaller area footprint for the clockless lookup tables, and a roughly 97% reduction in transistor count for a multiplier built from components of the analog blocks.
Moreover, the proposed fabric yields 77% energy reduction compared to CMOS when used to implement CS reconstruction, in addition to latency improvements.
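The crossbar's low-energy matrix-vector multiplication can be understood from a simple behavioral model: wordline voltages drive currents through programmable conductances, and each bitline sums them by Kirchhoff's current law. The sketch below is an idealized model with an assumed differential (positive/negative) conductance mapping for signed weights, not the MFPA circuit itself:

```python
def crossbar_mvm(G_pos, G_neg, v):
    """Idealized crossbar: bitline j carries I_j = sum_i v[i] * G[i][j]
    (Ohm's law + KCL), so one analog step computes a matrix-vector product.
    Signed results are sensed as the difference of two conductance arrays."""
    cols = len(G_pos[0])
    out = []
    for j in range(cols):
        i_pos = sum(v[i] * G_pos[i][j] for i in range(len(v)))
        i_neg = sum(v[i] * G_neg[i][j] for i in range(len(v)))
        out.append(i_pos - i_neg)
    return out

def to_conductances(W):
    """Split a signed weight matrix into two non-negative conductance arrays."""
    G_pos = [[max(w, 0.0) for w in row] for row in W]
    G_neg = [[max(-w, 0.0) for w in row] for row in W]
    return G_pos, G_neg

W = [[0.5, -1.0], [2.0, 0.25]]   # weights (rows: inputs, cols: outputs)
v = [1.0, 2.0]                   # wordline input voltages
print(crossbar_mvm(*to_conductances(W), v))  # -> [4.5, -0.5]
```

The energy advantage reported above comes from performing this entire summation in the analog domain, rather than cycling a digital multiply-accumulate unit.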

Enabling Recovery of Secure Non-Volatile Memories

Ye, Mao 01 January 2020 (has links)
Emerging non-volatile memories (NVMs), such as phase change memory (PCM), spin-transfer torque RAM (STT-RAM), and resistive RAM (ReRAM), have dual memory-storage characteristics and are therefore strong candidates to replace or augment current DRAM and secondary storage devices. The newly released Intel 3D XPoint persistent memory and Optane SSD series have shown promising features. However, when these new devices are exposed to events such as power loss, many issues arise when data recovery is expected. In this dissertation, I devised multiple schemes to enable secure data recovery for emerging NVM technologies when memory encryption is used. Because the data remanence of NVMs makes physical attacks easier, emerging NVMs are typically paired with encryption. In particular, counter-mode encryption is commonly used due to its performance and security advantages over other schemes (e.g., electronic codebook encryption). However, enabling data recovery after power-failure events requires recovering the security metadata associated with each data block. Naively writing security-metadata updates along with the data for each operation further exacerbates the write-endurance problem of NVMs, which have limited write endurance and very slow write operations. Therefore, it is necessary to enable the recovery of data and security metadata (encryption counters) without incurring a significant number of extra writes. The first work of this dissertation presents Osiris, a novel mechanism that repurposes the error-correcting code (ECC) co-located with data to enable recovery of encryption counters, by having the ECC additionally serve as a sanity check on the counter used. By using a stop-loss mechanism with a limited number of trials, the ECC can identify which encryption counter was most recently used to encrypt the data and hence allow correct decryption and recovery. This work also explores how different stop-loss parameters, along with optimizations of Osiris, can reduce the number of writes. Overall, Osiris enables the recovery of encryption counters while achieving better performance and fewer writes than a conventional write-back counter-caching scheme, which cannot recover encryption counters. In the second work, the Osiris implementation is extended to other counter-mode memory encryption schemes, using an epoch-based approach that periodically persists updated counters; after a crash, counters are recovered through test-and-verification, identifying the correct counter within an epoch. The resulting scheme, Osiris-Global, incurs minimal performance and write overheads while enabling the recovery of encryption counters. In summary, the findings of this PhD work enable the recovery of secure NVM systems and hence allow persistent applications to leverage the persistency features of NVMs, while minimizing the number of writes required to meet the crash-consistency requirements of secure NVM systems.
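The test-and-verification recovery idea can be sketched with a toy model. Everything here is a stand-in: a hash-based keystream replaces AES counter mode, a short plaintext hash stands in for the co-located ECC, and the key, counters, and stop-loss limit are illustrative values, not the real design:

```python
import hashlib

def pad(key, counter):
    """Counter-mode keystream: H(key || counter) (stand-in for AES-CTR)."""
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()[:8]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy integrity check standing in for the ECC co-located with the data block.
ecc = lambda block: hashlib.sha256(block).digest()[:4]

# Write path: encrypt with the current counter; ECC covers the plaintext.
key, data, true_counter = b"secret", b"8bytes!!", 41
stored_cipher = xor(data, pad(key, true_counter))
stored_ecc = ecc(data)

# Crash: only a stale persisted counter (40) survives. Osiris-style recovery
# tries counters forward from the stale value until the decrypted block
# passes the ECC sanity check, bounded by a stop-loss limit on trials.
def recover(stale_counter, limit=8):
    for c in range(stale_counter, stale_counter + limit):
        plain = xor(stored_cipher, pad(key, c))
        if ecc(plain) == stored_ecc:
            return c, plain
    return None, None

counter, plain = recover(40)
print(counter, plain)
```

A wrong counter yields pseudorandom plaintext that fails the check, so the most recently used counter identifies itself without ever having been written back, which is exactly the write saving the abstract describes.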

Control Strategies for Multi-Controller Multi-Objective Systems

Al-Azzawi, Raaed 01 January 2020 (has links)
This dissertation focuses on control systems governed by multiple controllers, each with its own objective function. The control of such systems is important in many practical applications, such as economic systems, the smart grid, military systems, and robotic systems. To reap the benefits of feedback, we consider and discuss the advantages of implementing both the Nash and the leader-follower Stackelberg controls in closed-loop form. However, closed-loop controls require continuous measurement of the system's state vector, which may be expensive or even impossible in many cases. As an alternative, we consider a sampled closed-loop implementation. Such an implementation requires state-vector measurements only at pre-specified instants of time and hence is much more practical and cost-effective than the continuous closed-loop implementation. The necessary conditions for the existence of such controls are derived for the general linear-quadratic system, and the solutions for the Nash and Stackelberg controls are developed in detail for the scalar case. To illustrate the results, we present an example of a control system with two controllers and state measurements available at integer multiples of 10% of the total control interval. While both the Nash and Stackelberg strategies are important approaches to developing the controls, we then consider the advantages of the leader-follower Stackelberg strategy. This strategy is appropriate for control systems governed by two independent controllers whose roles and objectives, in terms of the system's performance and the implementation of the controls, are generally different. In such systems, one controller has an advantage over the other: it can design and implement its control first, before the other controller. With such a control hierarchy, this controller is designated the leader and the other the follower. To take advantage of its primary role, the leader's control is designed by anticipating and accounting for the follower's control. The follower becomes the sole controller in the system after the leader's control has been implemented. In this study, we describe such systems and derive in detail the controls of both the leader and the follower. In systems where the roles of leader and follower are negotiated, it is important to consider each controller's leadership property: whether it is preferable to be the leader and let the other controller be the follower, or to be the follower and let the other controller lead. In this dissertation, we address this question by considering two models, one static and the other dynamic, and illustrating the results with an example in each case. The final chapter of the dissertation considers an application in microeconomics: a dynamic duopoly problem, for which we derive the necessary conditions of the Stackelberg solution with one firm acting as a leader controlling the market price.
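The leader's advantage of designing its control while anticipating the follower's reaction can be illustrated with the classic static Stackelberg duopoly, a much simpler relative of the dynamic duopoly treated in the final chapter. The demand and cost parameters below are assumptions for illustration:

```python
# Static Stackelberg duopoly sketch: market price p = a - (q1 + q2),
# both firms have unit cost c.  The follower best-responds to the leader's
# output; the leader optimizes *anticipating* that reaction.
a, c = 10.0, 2.0

def follower_best_response(q1):
    # argmax_q2 of (a - q1 - q2 - c) * q2  gives  q2 = (a - c - q1) / 2
    return max(0.0, (a - c - q1) / 2)

def leader_profit(q1):
    # The follower's reaction is substituted into the leader's objective.
    return (a - q1 - follower_best_response(q1) - c) * q1

# Leader optimizes over a fine grid while anticipating the follower.
q1_star = max((i / 1000 * (a - c) for i in range(1001)), key=leader_profit)
q2_star = follower_best_response(q1_star)
print(q1_star, q2_star)  # analytic optimum: (a-c)/2 = 4.0 and (a-c)/4 = 2.0
```

The leader commits to a larger quantity than in the simultaneous-move Nash outcome precisely because it knows the follower will yield, which is the leadership property examined in the dissertation.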

Statistical and Stochastic Learning Algorithms for Distributed and Intelligent Systems

Bian, Jiang 01 January 2020 (has links)
In the big data era, statistical and stochastic learning for distributed and intelligent systems focuses on enhancing the robustness of learning models that have become pervasive and are being deployed for decision-making in real-life applications, including general classification, prediction, and sparse sensing. The growing use of statistical learning approaches such as Linear Discriminant Analysis and of distributed learning (e.g., community sensing) has raised concerns about the robustness of algorithm design. Recent work on anomaly detection has shown that such learning models can succumb to so-called 'edge cases', in which the real-life operational situation presents data that are not well represented in the training data set. Such cases have been the primary cause of several recent misclassification problems. Although initial research has begun to address scenarios with specific learning models, there remains a significant knowledge gap regarding the detection of edge cases and the adaptation of learning models to them, and to extremely ill-posed settings, in the context of distributed and intelligent systems. With this motivation, this dissertation explores these issues in several typical applications and their associated algorithms, in order to detect and mitigate the underlying uncertainty and thereby substantially reduce the risk of using statistical and stochastic learning algorithms in distributed and intelligent systems.

Detecting Small Moving Targets in Infrared Imagery

Cuellar, Adam 01 January 2020 (has links)
Deep convolutional neural networks have achieved remarkable results for detecting large and medium-sized objects in images. However, the ability to detect small objects has yet to reach the same level of performance. Our focus is on applications that require the accurate detection and localization of small moving objects that are distant from the sensor. We first examine the ability of several state-of-the-art object detection networks (YOLOv3 and Mask R-CNN) to find small moving targets in infrared imagery, using a publicly released dataset from the US Army Night Vision and Electronic Sensors Directorate. We then introduce a novel Moving Target Indicator Network (MTINet) and repurpose a hyperspectral-imagery anomaly detection algorithm, the Reed-Xiaoli detector, for detecting small moving targets. We analyze the robustness and applicability of these methods by introducing simulated sensor movement to the data. Compared with other state-of-the-art methods, MTINet and the ARX algorithm achieve a higher probability of detection at lower false alarm rates. Specifically, the combination of the algorithms results in a probability of detection of approximately 70% at a low false alarm rate of 0.1, which is about 50% greater than that of YOLOv3 and 65% greater than that of Mask R-CNN.
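At its core, the Reed-Xiaoli detector mentioned above scores each pixel by its Mahalanobis distance from the background statistics, so a small target that differs spectrally from the clutter stands out. A minimal sketch on synthetic data follows; the image size, band count, and planted target are assumptions, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(7)
H, W, B = 32, 32, 4                          # height, width, spectral bands
img = rng.normal(0.0, 1.0, size=(H, W, B))   # background clutter
img[16, 16] += 6.0                           # plant one small bright target

# Background statistics estimated from the whole scene.
pixels = img.reshape(-1, B)
mu = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))

# RX score per pixel: squared Mahalanobis distance (x - mu)^T C^-1 (x - mu).
d = pixels - mu
rx_score = np.einsum("nb,bc,nc->n", d, cov_inv, d).reshape(H, W)

peak = np.unravel_index(np.argmax(rx_score), rx_score.shape)
print(peak)  # the planted target at (16, 16) receives the highest RX score
```

Thresholding this score gives the detection/false-alarm trade-off that the reported probability-of-detection figures quantify.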

Distributed Multi-agent Optimization and Control with Applications in Smart Grid

Rahman, Towfiq 01 January 2020 (has links)
With recent advancements in network technologies such as 5G and the Internet of Things (IoT), the size and complexity of networks of interconnected agents have increased rapidly. Although centralized schemes allow simpler algorithm design, in practice they incur high computational complexity and require high bandwidth for centralized data pooling. In this dissertation, the Alternating Direction Method of Multipliers (ADMM) is investigated for distributed optimization of networked multi-agent architectures. In particular, a new adaptive-gain ADMM algorithm is derived in closed form, under the standard convexity assumption, to greatly speed up the convergence of ADMM-based distributed optimization. Using the Lyapunov direct approach, the proposed solution embeds control gains into a weighted network matrix among the agents and uses those weights as adaptive penalty gains in the augmented Lagrangian. For applications in a smart grid, where system parameters are strongly affected by intermittent distributed energy resources such as Electric Vehicles (EVs) and photovoltaic (PV) panels, it is necessary to implement the algorithm in real time, since the accuracy of the optimal solution relies heavily on the sampling time of discrete-time iterative methods. Thus, the algorithm is further extended to the continuous-time domain for real-time applications, and its convergence is likewise proved through the Lyapunov direct approach. The algorithm is implemented on a distribution grid with high EV penetration, where each agent exchanges relevant information with its neighboring nodes through the communication network and optimizes a combined convex objective of EV welfare and voltage regulation with the power equations as constraints. This formulation falls short when dynamic equations, such as the EVs' state of charge, are taken into account. Thus, the algorithm is further developed to incorporate dynamic constraints, and the convergence, along with the control law, is established using the Lyapunov direct approach. An alternative convergence proof using passivity-short properties is also given. Simulation results are included to demonstrate the effectiveness of the proposed schemes.
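The consensus form of ADMM underlying this line of work can be sketched on a toy problem. Here N agents each hold a private convex cost f_i(x) = 0.5*(x - a_i)^2 and must agree on a common x; the residual-balancing rule for the penalty gain rho is a standard stand-in for the dissertation's adaptive-gain scheme, not the proposed algorithm itself:

```python
a = [1.0, 4.0, 7.0, 10.0]   # private data; the shared optimum is the mean, 5.5
N = len(a)
x = [0.0] * N               # local primal variables
u = [0.0] * N               # scaled dual variables
z, rho = 0.0, 1.0           # consensus variable and penalty gain

for _ in range(100):
    # local x-updates: argmin_x f_i(x) + (rho/2)*(x - z + u_i)^2
    x = [(a[i] + rho * (z - u[i])) / (1 + rho) for i in range(N)]
    z_old = z
    z = sum(x[i] + u[i] for i in range(N)) / N   # consensus (averaging) step
    u = [u[i] + x[i] - z for i in range(N)]      # dual ascent
    # residual balancing: raise rho when the primal residual dominates,
    # lower it when the dual residual dominates (rescaling the scaled duals).
    r = sum((xi - z) ** 2 for xi in x) ** 0.5
    s = rho * N ** 0.5 * abs(z - z_old)
    if r > 10 * s:
        rho *= 2; u = [ui / 2 for ui in u]
    elif s > 10 * r:
        rho /= 2; u = [ui * 2 for ui in u]

print(round(z, 6))  # -> 5.5, the minimizer of the summed private costs
```

Each agent's update uses only its own data and the shared consensus variable, which is the property that lets the smart-grid implementation above run with neighbor-to-neighbor communication instead of centralized data pooling.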
