About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Enabling Recovery of Secure Non-Volatile Memories

Ye, Mao 01 January 2020 (has links)
Emerging non-volatile memories (NVMs), such as phase change memory (PCM), spin-transfer torque RAM (STT-RAM) and resistive RAM (ReRAM), have dual memory-storage characteristics and, therefore, are strong candidates to replace or augment current DRAM and secondary storage devices. The newly released Intel 3D XPoint persistent memory and Optane SSD series have shown promising features. However, when these new devices are exposed to events such as power loss, many issues arise when data recovery is expected. In this dissertation, I devised multiple schemes to enable secure data recovery for emerging NVM technologies when memory encryption is used. With the data-remanence feature of NVMs, physical attacks become easier; hence, emerging NVMs are typically paired with encryption. In particular, counter-mode encryption is commonly used due to its performance and security advantages over other schemes (e.g., electronic codebook encryption). However, enabling data recovery in power failure events requires the recovery of the security metadata associated with data blocks. Naively writing security metadata updates along with data for each operation can further exacerbate the write endurance problem of NVMs, which already suffer from limited endurance and slow write operations. Therefore, it is necessary to enable the recovery of data and security metadata (encryption counters) without incurring a significant number of writes. The first work of this dissertation presents Osiris, a novel mechanism that repurposes the error correcting code (ECC) co-located with data to enable recovery of encryption counters by additionally serving as a sanity check on the counters used. Thus, by using a stop-loss mechanism with a limited number of trials, the ECC can be used to identify which encryption counter was used most recently to encrypt the data and, hence, allow correct decryption and recovery.
This first work also explores how different stop-loss parameters and optimizations of Osiris can reduce the number of writes. Overall, Osiris enables the recovery of encryption counters while achieving better performance and fewer writes than a conventional write-back caching scheme of encryption counters, which lacks the ability to recover them. In the second work, the Osiris implementation is extended to work with different counter-mode memory encryption schemes, using an epoch-based approach to periodically persist updated counters. When a crash occurs, counters are recovered through test-and-verification, searching within one epoch for the correct value. Our proposed scheme, Osiris-Global, incurs minimal performance and write overheads in enabling the recovery of encryption counters. In summary, the findings of this PhD work enable the recovery of secure NVM systems and, hence, allow persistent applications to leverage the persistency features of NVMs, while minimizing the number of writes required to meet this crash consistency requirement of secure NVM systems.
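The test-and-verification idea behind Osiris can be sketched in a few lines. This is a toy model, not the dissertation's implementation: the hash-based keystream and checksum-style "ECC" stand in for the real AES counter-mode pipeline and the ECC bits co-located with each memory line.

```python
import hashlib

def keystream(key: bytes, counter: int, length: int) -> bytes:
    # Toy counter-mode keystream: hash(key || counter), truncated.
    # (A real design would run AES on the counter; this is illustrative only.)
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()[:length]

def ecc_value(data: bytes) -> bytes:
    # Stand-in for the ECC bits co-located with the data block, reused here
    # as a sanity check on trial decryptions.
    return hashlib.sha256(data).digest()[:4]

def encrypt_block(key: bytes, counter: int, plaintext: bytes):
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, counter, len(plaintext))))
    return ct, ecc_value(plaintext)

def recover_counter(key, ciphertext, stored_ecc, persisted_counter, max_trials=32):
    # Osiris-style stop-loss recovery: trial-decrypt with candidate counters,
    # accepting the first one whose plaintext passes the ECC sanity check.
    for c in range(persisted_counter, persisted_counter + max_trials):
        pt = bytes(x ^ k for x, k in zip(ciphertext, keystream(key, c, len(ciphertext))))
        if ecc_value(pt) == stored_ecc:
            return c, pt
    return None, None  # stop-loss reached: counter not recoverable
```

The stop-loss bound (`max_trials`) is what bounds both the recovery time and how often counters must be persisted, which is the trade-off the abstract describes.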
202

Control Strategies for Multi-Controller Multi-Objective Systems

Al-Azzawi, Raaed 01 January 2020 (has links)
This dissertation focuses on control systems operated by multiple controllers, each having its own objective function. The control of such systems is important in many practical applications such as economic systems, the smart grid, military systems, robotic systems, and others. To reap the benefits of feedback, we consider and discuss the advantages of implementing both the Nash and the Leader-Follower Stackelberg controls in closed-loop form. However, closed-loop controls require continuous measurement of the system's state vector, which may be expensive or even impossible in many cases. As an alternative, we consider a sampled closed-loop implementation. Such an implementation requires state vector measurements only at pre-specified instants of time and hence is much more practical and cost-effective than a continuous closed-loop implementation. The necessary conditions for the existence of such controls are derived for the general linear-quadratic system, and the solutions are developed in detail for the Nash and Stackelberg controls in the scalar case. To illustrate the results, we present an example of a control system with two controllers and state measurements available at integer multiples of 10% of the total control interval. While both Nash and Stackelberg are important approaches to developing the controls, we then consider the advantages of the Leader-Follower Stackelberg strategy. This strategy is appropriate for control systems operated by two independent controllers whose roles and objectives, in terms of the system's performance and implementation of the controls, are generally different. In such systems, one controller has an advantage over the other in that it can design and implement its control first, before the other controller. With such a control hierarchy, this controller is designated the leader while the other is the follower.
To take advantage of its primary role, the leader's control is designed by anticipating and accounting for the follower's control. The follower becomes the sole controller in the system after the leader's control has been implemented. In this study, we describe such systems and derive in detail the controls of both the leader and the follower. In systems where the roles of leader and follower are negotiated, it is important to consider each controller's leadership property. This property addresses the question, for each controller, of whether it is preferable to be the leader and let the other controller be the follower, or to be the follower and let the other controller be the leader. In this dissertation, we answer this question by considering two models, one static and the other dynamic, and illustrating the results with an example in each case. The final chapter of the dissertation considers an application in microeconomics: a dynamic duopoly problem, for which we derive the necessary conditions for the Stackelberg solution with one firm as a leader controlling the price in the market.
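The leadership advantage described above can be illustrated with a toy static quadratic game. The cost functions here are invented for illustration, not taken from the dissertation: the leader, by committing first along the follower's reaction curve, achieves a lower cost than at the Nash equilibrium.

```python
def follower_best_response(u1):
    # Follower minimizes J2 = (u2 - u1)^2, so its reaction curve is u2* = u1.
    return u1

def leader_cost(u1):
    # Leader cost J1 = (u1 - 3)^2 + u2^2, evaluated on the reaction curve.
    u2 = follower_best_response(u1)
    return (u1 - 3.0) ** 2 + u2 ** 2

def grid_argmin(f, lo=-10.0, hi=10.0, steps=40001):
    # Brute-force scan; a stand-in for solving dJ1/du1 = 0 analytically.
    xs = (lo + (hi - lo) * i / (steps - 1) for i in range(steps))
    return min(xs, key=f)

# Stackelberg: the leader optimizes while anticipating the follower.
u1_s = grid_argmin(leader_cost)       # 1.5 on this grid
u2_s = follower_best_response(u1_s)

# Nash (simultaneous play): each controller optimizes holding the other fixed,
# giving u1 = 3 (from dJ1/du1 = 0) and u2 = u1 = 3, so J1 = 9 at Nash.
j1_nash = (3.0 - 3.0) ** 2 + 3.0 ** 2
```

Committing first halves the leader's cost in this example (4.5 versus 9 at Nash), which is the kind of leadership property the dissertation's static and dynamic models examine.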
203

Statistical and Stochastic Learning Algorithms for Distributed and Intelligent Systems

Bian, Jiang 01 January 2020 (has links)
In the big data era, statistical and stochastic learning for distributed and intelligent systems focuses on enhancing and improving the robustness of learning models that have become pervasive and are being deployed for decision-making in real-life applications, including general classification, prediction, and sparse sensing. The growing use of statistical learning approaches such as linear discriminant analysis and distributed learning (e.g., community sensing) has raised concerns about the robustness of algorithm design. Recent work on anomaly detection has shown that such learning models can succumb to so-called 'edge cases', where the real-life operational situation presents data that are not well represented in the training data set. Such cases have been the primary reason for several recent misclassification bottlenecks. Although initial research has begun to address scenarios with specific learning models, there remains a significant knowledge gap regarding the detection of 'edge cases' and the adaptation of learning models to extreme ill-posed settings in the context of distributed and intelligent systems. With this motivation, this dissertation explores these complex scenarios in several typical applications, together with associated algorithms to detect and mitigate the uncertainty, substantially reducing the risk of using statistical and stochastic learning algorithms for distributed and intelligent systems.
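One common way to flag such 'edge cases' before trusting a classifier is an out-of-distribution check on the input. This is a generic sketch, not necessarily the dissertation's method: score each input by its Mahalanobis distance from the training distribution and refuse to trust predictions on inputs that fall too far out.

```python
import numpy as np

def fit_background(X):
    # Gaussian model of the training data; a small ridge keeps the
    # covariance invertible.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def is_edge_case(x, mu, cov_inv, threshold=4.0):
    # Inputs far from the training distribution are flagged before the
    # downstream classifier's answer is trusted.
    return mahalanobis(x, mu, cov_inv) > threshold
```

The threshold is an assumed tuning knob; in practice it would be calibrated on held-out data against an acceptable false-rejection rate.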
204

Detecting Small Moving Targets in Infrared Imagery

Cuellar, Adam 01 January 2020 (has links)
Deep convolutional neural networks have achieved remarkable results for detecting large and medium-sized objects in images. However, the ability to detect small objects has yet to reach the same level of performance. Our focus is on applications that require the accurate detection and localization of small moving objects that are distant from the sensor. We first examine the ability of several state-of-the-art object detection networks (YOLOv3 and Mask R-CNN) to find small moving targets in infrared imagery using a publicly released dataset from the US Army Night Vision and Electronic Sensors Directorate. We then introduce a novel Moving Target Indicator Network (MTINet) and repurpose a hyperspectral imagery anomaly detection algorithm, the Reed-Xiaoli detector, for detecting small moving targets. We analyze the robustness and applicability of these methods by introducing simulated sensor movement to the data. Compared with other state-of-the-art methods, MTINet and the ARX algorithm achieve a higher probability of detection at lower false alarm rates. Specifically, the combination of the algorithms results in a probability of detection of approximately 70% at a low false alarm rate of 0.1, which is about 50% greater than that of YOLOv3 and 65% greater than that of Mask R-CNN.
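The Reed-Xiaoli (RX) detector itself is compact: each pixel is scored by its Mahalanobis distance from the scene's background statistics. A minimal NumPy sketch of the global-statistics variant follows (windowed and adaptive variants, as presumably used in the dissertation, differ in how the statistics are estimated):

```python
import numpy as np

def rx_scores(cube):
    # Reed-Xiaoli detector (global variant): the Mahalanobis distance of
    # every pixel spectrum from the scene-wide mean and covariance.
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(bands))
    d = X - mu
    # (x - mu)^T C^{-1} (x - mu) for every pixel at once.
    return np.einsum("ij,jk,ik->i", d, cov_inv, d).reshape(h, w)
```

Thresholding the score map yields candidate anomalies; pixels whose spectra deviate strongly from the background rise to the top regardless of target shape, which is why the detector transfers from hyperspectral imagery to small-target infrared detection.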
205

Distributed Multi-agent Optimization and Control with Applications in Smart Grid

Rahman, Towfiq 01 January 2020 (has links)
With recent advancements in network technologies like 5G and the Internet of Things (IoT), the size and complexity of networked interconnected agents have increased rapidly. Although centralized schemes allow simpler algorithm design, in practice they create high computational complexity and require high bandwidth for centralized data pooling. In this dissertation, the Alternating Direction Method of Multipliers (ADMM) is investigated for distributed optimization of networked multi-agent architectures. In particular, a new adaptive-gain ADMM algorithm is derived in closed form under the standard convexity assumption to greatly speed up the convergence of ADMM-based distributed optimization. Using the Lyapunov direct approach, the proposed solution embeds control gains into a weighted network matrix among the agents and uses those weights as adaptive penalty gains in the augmented Lagrangian. For applications in a smart grid, where system parameters are greatly affected by intermittent distributed energy resources like electric vehicles (EVs) and photovoltaic (PV) panels, it is necessary to implement the algorithm in real time, since the accuracy of the optimal solution relies heavily on the sampling time of discrete-time iterative methods. Thus, the algorithm is further extended to the continuous domain for real-time applications, and its convergence is again proved through the Lyapunov direct approach. The algorithm is implemented on a distribution grid with high EV penetration, where each agent exchanges relevant information with neighboring nodes through the communication network and optimizes a combined convex objective of EV welfare and voltage regulation with power equations as constraints. The algorithm falls short when dynamic equations such as the EVs' state of charge are taken into account. Thus, the algorithm is further developed to incorporate dynamic constraints, and the convergence proof along with the control law is developed using the Lyapunov direct approach.
An alternative approach for convergence using passivity-short properties is also shown. Simulation results are included to demonstrate the effectiveness of proposed schemes.
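A minimal sketch of consensus ADMM shows the kind of distributed iteration the dissertation builds on. The scalar objectives and the fixed penalty gain here are illustrative; the dissertation's contribution is precisely to make those penalty gains adaptive.

```python
def admm_consensus(local_values, rho=1.0, iters=100):
    # Consensus ADMM for the separable problem
    #   minimize sum_i (x_i - a_i)^2   subject to  x_i = z for all agents i,
    # whose solution is the network-wide average of the a_i.
    # Each agent updates locally; only x_i + u_i is shared for the z-update.
    n = len(local_values)
    u = [0.0] * n          # scaled dual variables, one per agent
    z = 0.0                # consensus variable
    for _ in range(iters):
        # Local x-updates (closed form for the quadratic objective).
        x = [(2.0 * a + rho * (z - ui)) / (2.0 + rho)
             for a, ui in zip(local_values, u)]
        # Global averaging step: the only coordination required.
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # Dual updates.
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

In the adaptive-gain variant, `rho` would no longer be a shared constant but would come from the weighted network matrix described in the abstract, tightening convergence without any agent needing a global view.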
206

Investigations on the Use of Hyperthermia for Breast Cancer Treatment

Suseela, Sreekala 01 January 2020 (has links)
Hyperthermia using electromagnetic energy has been proven to be an effective method in the treatment of cancer. Hyperthermia is a therapeutic procedure in which the temperature in the tumor tissue is raised above 42°C without causing any damage to the surrounding healthy tissue. This method has been shown to increase the effectiveness of radiotherapy and chemotherapy. Radio frequencies, microwave frequencies, or focused ultrasound can be used to deliver energy to the tumor tissue to attain higher temperatures in the tumor region for hyperthermia application. In this dissertation, the use of a near-field focused (NFF) microstrip antenna array for the treatment of stage 1 cancer tumors in the breast is proposed. The antenna array was designed to operate at a resonant frequency of 2.45 GHz. A hemispherical two-layer model of the breast, consisting of fat and glandular tissue layers, was considered. A tumor of the size typical of stage 1 cancer was considered at different locations within the breast tissue. The permittivity and conductivity of the breast and tumor tissue were obtained from the literature. For a specific location of the tumor, the NFF array is positioned outside the breast in front of the tumor. The radiation from the array is focused onto the tumor and raises its temperature. Regardless of the position of the tumor, when placed at the right distance, the array produced a focused spot at the tumor without heating the surrounding healthy tissue. Different placement locations of the antenna array were studied to analyze the depth of the focused radiation region. The antenna array can be placed on a rotating arm, allowing it to be moved around the breast based on the tumor location. Results for the power density distribution, specific absorption rate, and temperature increase in the tumor and surrounding breast region are presented.
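The dosimetry quantities named at the end of the abstract follow from two textbook relations: SAR = σ|E|²/(2ρ) for a peak E-field amplitude, and, neglecting blood perfusion and heat conduction, ΔT ≈ SAR·t/c. The tissue parameter values below are illustrative placeholders, not values from the dissertation:

```python
def sar_w_per_kg(sigma_s_per_m, e_peak_v_per_m, density_kg_per_m3):
    # Specific absorption rate for a peak (amplitude) E-field:
    #   SAR = sigma * |E|^2 / (2 * rho)
    return sigma_s_per_m * e_peak_v_per_m ** 2 / (2.0 * density_kg_per_m3)

def adiabatic_temp_rise_c(sar, specific_heat_j_per_kg_c, seconds):
    # First-order estimate ignoring perfusion and conduction:
    #   dT = SAR * t / c
    return sar * seconds / specific_heat_j_per_kg_c

# Illustrative (assumed) tumor-tissue values at 2.45 GHz, NOT from the
# dissertation: sigma = 2 S/m, |E| = 200 V/m, rho = 1050 kg/m^3, c = 3600 J/(kg C).
sar_tumor = sar_w_per_kg(2.0, 200.0, 1050.0)
dt_per_min = adiabatic_temp_rise_c(sar_tumor, 3600.0, 60.0)
```

In a real treatment-planning simulation, the full bioheat equation (with perfusion) replaces the adiabatic estimate, which is why focusing the field tightly on the tumor matters so much.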
207

Target Acquisition Performance Improvement with Boost and Restoration Filtering Using Deep-Electron-Well Infrared Detectors

Short, Robert 01 January 2020 (has links)
Recent advances in infrared focal plane fabrication have allowed for the production of sensors with small detector size (small pitch) and long integration time (deep electron wells) in large-format arrays. Individually, these are all welcome developments, but we raise the question of whether it is possible to utilize all of these technologies in concert to optimize performance. If so, a key part of such a system will be digital boost filtering, to recover the performance lost to diffraction blur. We describe a system design concept called PWP (Pitch-Well-Processing) that uses each of these features along with Wiener filtering to optimize range performance. Current targeting performance models, chiefly the Targeting Task Performance (TTP) metric, predict a significant increase in range performance due to boost filtering. We present these calculations and compare the results to observer perception experiments conducted on simulated target imagery (formed from close-range thermal signatures artificially degraded by blur and noise). Initially, we used Triangle Orientation Discrimination (TOD) targets for basic experiments, followed by experiments using a set of 12 military vehicles. In both types of test, the range at which observers could reliably identify the target was measured with and without digital filtering. This dissertation focuses on the following problems: integrating boost filtering into a system design, measuring the effect of boost filtering through perception experiments, and modeling the same experiments using the TTP metric.
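The boost/restoration step can be sketched as a classical frequency-domain Wiener filter. This is the generic formulation; the dissertation's filters would be tuned to the measured sensor blur and noise:

```python
import numpy as np

def wiener_restore(blurred, psf, snr=1e4):
    # Frequency-domain Wiener boost/restoration filter:
    #   G = H* / (|H|^2 + 1/SNR)
    # Amplifies the mid/high frequencies attenuated by the blur without
    # letting noise blow up where |H| is small.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

With the true PSF known and a large SNR, a blur-then-restore round trip recovers the image almost exactly; at realistic SNRs the 1/SNR term tames the boost at frequencies the blur nearly destroyed, which is exactly the trade-off that the TTP-metric predictions and the perception experiments probe.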
208

Provably Trustworthy and Secure Hardware Design with Low Overhead

Alasad, Qutaiba 01 January 2020 (has links)
Due to the globalization of IC design in the semiconductor industry and the outsourcing of chip manufacturing, third-party intellectual properties (3PIPs) have become vulnerable to IP piracy, reverse engineering, IC counterfeiting, and hardware Trojans. To thwart such attacks, ICs can be protected using logic encryption techniques. However, strongly resilient techniques incur significant overheads. Side-channel attacks (SCAs) further complicate matters by introducing potential attacks post-fabrication. One of the most severe SCAs is the power analysis (PA) attack, in which an attacker observes the power variations of the device and analyzes them to extract the secret key. PA attacks can be mitigated by adding large amounts of extra hardware; however, the overheads of such solutions can render them impractical, especially under power and area constraints. In our first approach, we present two techniques to prevent conventional attacks. The first is based on inserting multiplexers (MUXes) equal in number to half or all of the output bits. In the second technique, we first design polymorphic logic gates (PLGs) using silicon nanowire (SiNW) FETs and then replace some logic gates in the original design with their SiNW FET-based PLG counterparts. In our second approach, we use SiNW FETs to produce obfuscated ICs that are resistant to advanced reverse engineering attacks. Our method is based on designing a small block whose output is untraceable, namely URSAT. Since URSAT may not offer very strong resilience against the combined AppSAT-removal attack, S-URSAT is achieved using only CMOS logic gates, which increases the security level of the design to robustly thwart all existing attacks. In our third topic, we present the usage of ASLD to produce secure and resilient circuits that withstand IC attacks (during fabrication) and PA attacks (after fabrication). First, we show that ASLD has unique features that can be used to prevent PA and IC attacks. In all three topics, we evaluate each design based on performance overheads and security guarantees.
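The MUX-insertion idea can be illustrated with a toy locked netlist (an invented example, not the dissertation's circuits): key-controlled MUXes select between the true wire and a decoy, so only the correct key restores the intended function.

```python
def mux(sel, d0, d1):
    # 2:1 multiplexer primitive: returns d1 when sel is true, else d0.
    return d1 if sel else d0

def locked_and(a, b, key):
    # Toy locked implementation of f(a, b) = a AND b with two MUX key gates.
    # Only key == (1, 1) selects the true wires; every other key yields a
    # corrupted function (NAND, OR, or NOR) on some input pattern.
    k0, k1 = key
    w1 = mux(k0, a | b, a & b)   # decoy OR wire vs. true AND wire
    return mux(k1, w1 ^ 1, w1)   # decoy inverter vs. pass-through
```

Without the key, an attacker reverse-engineering the netlist sees only MUXes and both candidate wires; which wire is functional is exactly the secret, and the resilience of a real scheme depends on how these key gates resist SAT-based key-recovery attacks.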
209

Scalable Communication Frameworks for Multi-Agency Data Sharing

Chaudhry, Shafaq 01 January 2020 (has links)
With the rise in frequency and magnitude of natural disasters, there is a need to break down monolithic organizational barriers and engage with community volunteers. This calls for systems interoperability to facilitate the communication, data sharing, and scalability of real-time response essential for crisis communications. We propose two scalable frameworks that enable multi-agency interoperability and real-time data sharing. The first framework harnesses the power of social media, artificial intelligence, and community volunteers to form an extended rescue-and-response network that alleviates call center burden and augments the finite capacity of dispatch units. Through an "online 9-1-1" service, affected people can request help and be automatically triaged and routed to the closest response unit registered within the system. By connecting first responders, dispatchers, victims, and volunteers, this approach can enable communities to respond effectively to large-scale disasters by making humanitarian organizations a proactive and reactive part of the public safety network. Delay analysis shows that the online 9-1-1 system has an expected response time comparable to the traditional system, with the added benefit of call center and dispatch scalability. The second framework enables data sharing between different agencies by allowing on-demand access to data protected by institutional policies. This is achieved through a custom, reactive Software-Defined Networking module in the Floodlight controller that communicates with an external server to get information about registered agencies and then pushes traffic paths automatically to the flow tables of the respective domain's network devices. This approach eliminates the need for a global, consistent view of the network topology and for the resulting controller-to-controller communication and coordination, which can be especially challenging in large networks.
This framework has applicability in many areas, including scientific data sharing among universities or research institutions, patient data sharing among hospitals, and first responders quickly accessing critical medical information on demand at a disaster site.
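The routing step of the first framework, dispatching a request to the closest available registered unit, can be sketched as follows (the field names and coordinates are made up for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def route_request(request_loc, units):
    # Dispatch sketch: pick the closest registered unit that is available.
    available = [u for u in units if u["available"]]
    if not available:
        return None
    lat, lon = request_loc
    return min(available, key=lambda u: haversine_km(lat, lon, u["lat"], u["lon"]))
```

A production triage pipeline would fold in severity, unit capability, and estimated travel time rather than raw distance, but the routing primitive is the same.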
210

Multi-Element Multi-Datastream Visible Light Communication Networks

Ibne Mushfique, Sifat 01 January 2020 (has links)
Because of the exponentially increasing demand for wireless data, the radio frequency (RF) spectrum crunch is worsening rapidly. Available RF spectrum is shrinking quickly, and spectrum management is becoming more difficult. Visible Light Communication (VLC) is a promising recent technology, complementary to the RF spectrum, that operates in the visible light band (400 THz to 780 THz) and offers roughly 10,000 times more bandwidth than radio waves (3 kHz to 300 GHz). Due to this tremendous potential, VLC has captured a lot of interest recently, as energy-efficient Light Emitting Diodes (LEDs) are already extensively deployed. The advancements in LED technology, with fast nanosecond switching times, are also very encouraging. One of the biggest advantages of VLC over other communication systems is that it can provide illumination and data communication simultaneously without any extra deployment. Although it is essential to provide high data rates to all users, maintaining a satisfactory distribution of lighting is also important. In this work, we present a multi-element multi-datastream (MEMD) VLC architecture capable of simultaneously providing lighting uniformity and communication coverage in an indoor setting. The architecture consists of a multi-element hemispherical bulb design in which multiple data streams can be transmitted from the bulb using multiple LED modules. We present the detailed components of the architecture and formulate joint optimization problems considering the requirements of several scenarios. We formulate an optimization problem that jointly addresses the LED-user associations and the LEDs' transmit powers to maximize the Signal-to-Interference-plus-Noise Ratio (SINR) while taking an acceptable illumination uniformity constraint into consideration.
We propose a near-optimal solution using Geometric Programming (GP) to solve the optimization problem and compare the performance of this GP solution to low-complexity heuristics. To further improve performance, we propose a mirror employment approach that redirects the LED beams reflected off the walls toward darker spots on the room floor. We compare the performance of our heuristic approaches to solving the proposed two-stage optimization problem and show that roughly a threefold increase in average illumination and a fourfold increase in average throughput can be achieved when mirror placement is applied, a significant performance improvement. We also explore the use case of our architecture for providing scalable communications to Internet-of-Things (IoT) devices, where we minimize the total energy consumed by each LED. Because of the non-convexity of the problem, we propose a two-stage heuristic solution and illustrate the performance of our method via simulations.
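The SINR objective can be sketched with a standard Lambertian line-of-sight channel model. This is a simplified optical-power SINR; the dissertation's formulation also handles reflections, receiver responsivity, and the illumination-uniformity constraint:

```python
import math

def lambertian_gain(distance_m, angle_rad, order=1.0, rx_area_m2=1e-4):
    # Line-of-sight Lambertian channel gain, with the emission and incidence
    # angles taken equal for simplicity:
    #   H = (m + 1) * A * cos^m(phi) * cos(psi) / (2 * pi * d^2)
    return ((order + 1.0) * rx_area_m2 * math.cos(angle_rad) ** (order + 1.0)
            / (2.0 * math.pi * distance_m ** 2))

def sinr(tx_powers, gains, serving, noise=1e-10):
    # Optical-power SINR of a user served by one LED module, with all other
    # modules treated as interference.
    signal = tx_powers[serving] * gains[serving]
    interference = sum(p * g for i, (p, g) in enumerate(zip(tx_powers, gains))
                       if i != serving)
    return signal / (interference + noise)
```

The joint optimization in the abstract then chooses, for every user, which LED module serves it and at what power, so that each user's SINR is maximized while the floor-level illumination stays acceptably uniform.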
