201

Investigations on the Use of Hyperthermia for Breast Cancer Treatment

Suseela, Sreekala 01 January 2020 (has links)
Hyperthermia using electromagnetic energy has been proven to be an effective method in the treatment of cancer. Hyperthermia is a therapeutic procedure in which the temperature in the tumor tissue is raised above 42°C without causing any damage to the surrounding healthy tissue. This method has been shown to increase the effectiveness of radiotherapy and chemotherapy. Radio frequencies, microwave frequencies, or focused ultrasound can be used to deliver energy to the tumor tissue and attain the elevated temperatures required for hyperthermia. In this dissertation, the use of a near-field focused (NFF) microstrip antenna array for the treatment of stage 1 cancer tumors in the breast is proposed. The antenna array was designed to operate at a resonant frequency of 2.45 GHz. A hemispherical two-layer model of the breast, consisting of a fat layer and a glandular tissue layer, was considered. A tumor of the size typical of stage 1 cancer was placed at different locations within the breast tissue. The permittivity and conductivity of the breast and tumor tissue were obtained from the literature. For a specific location of the tumor, the NFF array is positioned outside the breast in front of the tumor; the radiation from the array is focused onto the tumor and raises its temperature. Regardless of the position of the tumor, when placed at the right distance, the array produced a focused spot at the tumor without heating the surrounding healthy tissue. Different placement locations of the antenna array were studied to analyze the depth of the focused radiation region. The antenna array can be placed on a rotating arm, allowing it to be moved around the breast based on the tumor location. Results for the power density distribution, specific absorption rate, and temperature increase in the tumor and surrounding breast region are presented.
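As a rough illustration of how the reported specific absorption rate (SAR) relates to tumor heating, the following is a minimal sketch. The governing relation SAR = σ|E|²/ρ and the initial heating rate SAR/c_p are standard textbook expressions; the tissue property values and field strength below are assumed placeholder figures, not values from the dissertation.

```python
# Hedged illustration: relation between the focal electric field and tissue heating.
# All numeric values are rough literature-style placeholders for illustration only.
sigma_tumor = 3.0      # electrical conductivity of tumor tissue near 2.45 GHz [S/m] (assumed)
rho_tumor = 1050.0     # tissue mass density [kg/m^3] (assumed)
c_p = 3600.0           # specific heat capacity [J/(kg*K)] (assumed)
E_rms = 200.0          # RMS electric field magnitude at the focal spot [V/m] (assumed)

sar = sigma_tumor * E_rms**2 / rho_tumor   # specific absorption rate [W/kg]
dT_dt = sar / c_p                          # initial heating rate [K/s], ignoring blood perfusion
print(f"SAR = {sar:.1f} W/kg, initial dT/dt = {dT_dt * 60:.2f} K/min")
```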
202

Target Acquisition Performance Improvement with Boost and Restoration Filtering Using Deep-Electron-Well Infrared Detectors

Short, Robert 01 January 2020 (has links)
Recent advances in infrared focal plane fabrication have allowed for the production of sensors with small detector size (small pitch) and long integration time (deep electron wells) in large-format arrays. Individually, these are all welcome developments, but we raise the question of whether it is possible to utilize all of these technologies in concert to optimize performance. If so, a key part of such a system will be digital boost filtering to recover the performance loss due to diffraction blur. We describe a system design concept called PWP (Pitch-Well-Processing) that uses each of these features along with Wiener filtering to optimize range performance. Current targeting performance models, chiefly the Targeting Task Performance (TTP) metric, predict a significant increase in range performance due to boost filtering. We present these calculations and compare the results to observer perception experiments conducted on simulated target imagery (formed from close-range thermal signatures artificially degraded by blur and noise). Initially, we used Triangle Orientation Discrimination (TOD) targets for basic experiments, followed by experiments using a set of 12 military vehicles. In both types of tests, the range at which observers could reliably identify the target was measured with and without digital filtering. This dissertation is focused on the following problems: integrating boost filtering into a system design, measuring the effect of boost filtering through perception experiments, and modeling the same experiments using the TTP metric.
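As a rough illustration of the boost/restoration filtering discussed above, here is a minimal frequency-domain Wiener filter sketch in Python/NumPy. The function name, the constant noise-to-signal ratio parameter, and the FFT-based implementation are assumptions for illustration, not the dissertation's actual processing chain.

```python
import numpy as np

def wiener_boost(image, psf, nsr=0.01):
    """Frequency-domain Wiener restoration: W = H* / (|H|^2 + NSR).

    `psf` is the system blur kernel (e.g., diffraction blur), assumed centered
    and no larger than the image; `nsr` is an assumed noise-to-signal power ratio.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)  # blur transfer function
    G = np.fft.fft2(image)                                  # degraded image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)                 # Wiener filter
    return np.real(np.fft.ifft2(W * G))                     # restored (boosted) image
```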
203

Provably Trustworthy and Secure Hardware Design with Low Overhead

Alasad, Qutaiba 01 January 2020 (has links)
Due to the globalization of IC design in the semiconductor industry and the outsourcing of chip manufacturing, third-party intellectual property (3PIP) cores have become vulnerable to IP piracy, reverse engineering, counterfeit ICs, and hardware Trojans. To thwart such attacks, ICs can be protected using logic encryption techniques; however, strongly resilient techniques incur significant overheads. Side-channel attacks (SCAs) further complicate matters by introducing potential attacks post-fabrication. One of the most severe SCAs is the power analysis (PA) attack, in which an attacker observes the power variations of the device and analyzes them to extract the secret key. PA attacks can be mitigated by adding large amounts of extra hardware; however, the overheads of such solutions can render them impractical, especially when there are power and area constraints. In our first approach, we present two techniques to prevent normal attacks. The first is based on inserting MUXes equal in number to half or all of the output bits. In the second technique, we first design PLGs using silicon nanowire (SiNW) FETs and then replace some logic gates in the original design with their SiNW FET-based PLG counterparts. In our second approach, we use SiNW FETs to produce obfuscated ICs that are resistant to advanced reverse engineering attacks. Our method is based on designing a small block whose output is untraceable, namely URSAT. Since URSAT may not offer very strong resilience against the combined AppSAT-removal attack, S-URSAT is achieved using only CMOS logic gates, which increases the security level of the design to robustly thwart all existing attacks. In our third topic, we present the use of ASLD to produce secure and resilient circuits that withstand IC attacks (during fabrication) and PA attacks (after fabrication). First, we show that ASLD has unique features that can be used to prevent PA and IC attacks. Across all three topics, we evaluate each design based on performance overheads and security guarantees.
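To make the MUX-based locking idea concrete, here is a tiny hedged sketch in Python: a key-controlled multiplexer selects between the true net and a decoy net, so only the correct key restores the intended function. The specific gates, decoy choice, and single-bit key are illustrative assumptions, not the dissertation's actual construction.

```python
def mux_lock(true_bit, decoy_bit, key_bit):
    """2:1 key-controlled MUX: the correct key selects the true signal,
    a wrong key routes the decoy and corrupts the output."""
    return true_bit if key_bit == 1 else decoy_bit

def locked_circuit(a, b, key):
    # Original (unlocked) function: out = a AND b.
    true_out = a & b
    decoy_out = a ^ b           # decoy logic seen by an attacker without the key
    return mux_lock(true_out, decoy_out, key)

# Only the correct key (here 1) restores the intended behavior.
assert locked_circuit(1, 1, key=1) == 1
assert locked_circuit(1, 1, key=0) == 0
```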
204

Scalable Communication Frameworks for Multi-Agency Data Sharing

Chaudhry, Shafaq 01 January 2020 (has links)
With the rise in frequency and magnitude of natural disasters, there is a need to break down monolithic organizational barriers and engage with community volunteers. This calls for systems interoperability that facilitates communication, data sharing, and the scalability of real-time response, all essential for crisis communications. We propose two scalable frameworks that enable multi-agency interoperability and real-time data sharing. The first framework harnesses the power of social media, artificial intelligence, and community volunteers to form an extended rescue-and-response network that alleviates call center burden and augments the finite capacity of dispatch units. Through an "online 9-1-1" service, affected people can request help and be automatically triaged and routed to the closest response unit registered within the system. By connecting first responders, dispatchers, victims, and volunteers, this approach can enable communities to respond effectively to large-scale disasters by having humanitarian organizations be a proactive and reactive part of the Public Safety Network. Delay analysis shows that the online 9-1-1 system has an expected response time comparable to the traditional system, with the added benefit of call center and dispatch scalability. The second framework enables data sharing between different agencies by allowing on-demand access to data protected by institutional policies. This is achieved through a custom, reactive Software-Defined Networking module in the Floodlight controller that communicates with an external server to get information about registered agencies and then pushes those traffic paths automatically to the flow tables of the respective domain's network devices. This approach eliminates the need for a global, consistent view of the network topology, along with the resulting controller-to-controller communication and coordination, which can be especially challenging in large networks. The framework has applicability in many areas, including scientific data sharing among universities or research institutions, patient data sharing among hospitals, and first responders quickly accessing critical medical information on demand at a disaster site.
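As a hedged sketch of the kind of on-demand flow installation the second framework describes, the example below pushes a flow entry to an SDN controller's REST interface. The endpoint path, switch DPID, subnet, and field names are assumptions modeled loosely on Floodlight's static flow pusher convention; they are not the dissertation's custom reactive module and may differ from any particular controller version.

```python
import json
import urllib.request

# Hypothetical flow entry pushed so traffic from a registered partner agency is
# forwarded on demand. Endpoint and field names are assumed, for illustration only.
FLOW_PUSH_URL = "http://controller.example.org:8080/wm/staticflowpusher/json"  # assumed

flow = {
    "switch": "00:00:00:00:00:00:00:01",   # datapath ID of the edge switch (example)
    "name": "agency-share-flow-1",
    "priority": "32768",
    "eth_type": "0x0800",
    "ipv4_src": "10.10.0.0/16",             # registered agency's subnet (example)
    "active": "true",
    "actions": "output=3",                   # port toward the protected data store (example)
}

req = urllib.request.Request(
    FLOW_PUSH_URL,
    data=json.dumps(flow).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```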
205

Multi-Element Multi-Datastream Visible Light Communication Networks

Ibne Mushfique, Sifat 01 January 2020 (has links)
Because of the exponentially increasing demand for wireless data, the Radio Frequency (RF) spectrum crunch is intensifying rapidly: the amount of available RF spectrum is shrinking, and spectrum management is becoming more difficult. Visible Light Communication (VLC) is a promising recent technology, complementary to RF, that operates in the visible light band (400 THz to 780 THz) and offers roughly 10,000 times the bandwidth of radio waves (3 kHz to 300 GHz). Due to this tremendous potential, VLC has captured a lot of interest recently, helped by the already extensive deployment of energy-efficient Light Emitting Diodes (LEDs). The advancement of LED technology, with fast nanosecond switching times, is also very encouraging. One of the biggest advantages of VLC over other communication systems is that it can provide illumination and data communication simultaneously without any extra deployment. Although it is essential to provide very high data rates to all users, maintaining a satisfactory distribution of lighting is also important. In this work, we present a multi-element multi-datastream (MEMD) VLC architecture capable of simultaneously providing lighting uniformity and communication coverage in an indoor setting. The architecture consists of a multi-element hemispherical bulb design in which multiple data streams can be transmitted from the bulb using multiple LED modules. We present the detailed components of the architecture and formulate joint optimization problems covering the requirements of several scenarios. We formulate an optimization problem that jointly addresses the LED-user associations and the LEDs' transmit powers to maximize the Signal-to-Interference-plus-Noise Ratio (SINR) while taking an acceptable illumination uniformity constraint into consideration. We propose a near-optimal solution using Geometric Programming (GP) and compare the performance of this GP solution to low-complexity heuristics. To further improve performance, we propose a mirror-employment approach that redirects the LED beams reflected from the walls toward darker spots on the room floor. We compare the performance of our heuristic approaches for solving the proposed two-stage optimization problem and show that roughly a threefold increase in average illumination and a fourfold increase in average throughput can be achieved when the mirror placement is applied, a significant performance improvement. We also explore the use of our architecture to provide scalable communications to Internet-of-Things (IoT) devices, where we minimize the total energy consumed by each LED. Because of the non-convexity of the problem, we propose a two-stage heuristic solution and illustrate the performance of our method via simulations.
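For reference, the SINR objective mentioned above has the standard form of desired received power over interference plus noise. The short sketch below computes it for a single user; all power values are assumed placeholders rather than numbers from the dissertation.

```python
import numpy as np

def vlc_sinr(signal_power, interference_powers, noise_power):
    """SINR at a receiver: serving-LED power over the sum of interfering LED
    powers plus noise. Inputs are received powers after the channel gain;
    the figures used below are illustrative assumptions."""
    return signal_power / (sum(interference_powers) + noise_power)

# Example: one serving LED module, two interfering modules on the same channel.
sinr = vlc_sinr(2.0e-6, [3.0e-7, 1.5e-7], 1.0e-7)
print(f"SINR = {10 * np.log10(sinr):.1f} dB")
```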
206

Reducing Side-sweep Accidents with Vehicle-to-Vehicle Communications

Bulumulle, Gamini 01 January 2017 (has links)
This dissertation presents contributions to the understanding of the causes of side-sweep accidents on multi-lane highways using computer simulation. Side-sweep accidents are one of the major causes of loss of life and property damage on highways. This type of accident is caused by a driver initiating a lane change while another vehicle is blocking the road in the target lane. Our objective in the research described in this dissertation was to understand and simulate the different factors that affect the likelihood of side-sweep accidents. For instance, we know that blind spots (parts of the road that are not visible to the driver directly or through the rear-view mirrors) are often a contributing factor. Similarly, the frequency with which a driver checks the rear-view mirrors before initiating the lane change affects the likelihood of the accident. Intuitively, side-sweep accidents are also more likely if there is a significant difference in vehicle velocities between the current and target lanes. There are also factors that can reduce the likelihood of the accident: for instance, signaling the lane change can alert nearby vehicles, which can change their behavior to give way to the lane-changing vehicle. The emerging technology of vehicle-to-vehicle communication offers promising new avenues to avoid such collisions by having vehicles communicate their lane-change intent and positions, such that automatic action can be taken to avoid the accident. While we can have an intuition about whether individual factors increase or reduce the accident rate, these factors interact with each other in complex ways. The research described in this dissertation developed a highway driving simulator specialized for the accurate simulation of the various factors that contribute to the act of lane changing in highway driving. We model the traffic as seen from the lane-changing vehicle, including the density, distribution, and relative velocity of the vehicles in the target lane. We also model the geometry of the vehicle, including size, windows, mirrors, and blind spots. Moving to the human factors of the simulation, we model the behavior of the driver with regard to the times of checking the mirrors, signaling, and making the lane-change decision. Finally, we also model communication, both the traditional way using turn signals and through automated vehicle-to-vehicle communication. The detailed modeling of these factors allowed us to perform extensive simulation studies of the impact of various factors on the probability of side-sweep accidents. We validated the simulation models by comparing the results with real-world observations from the National Highway Traffic Safety Administration. One of the benefits of our model is that it allows modeling the impact of vehicle-to-vehicle communication, a technology currently at the prototype stage that cannot be studied in extensive real-world scenarios.
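The following is a minimal, hedged sketch of the kind of conflict check such a simulator might perform when a lane change is initiated: it projects the ego vehicle and a neighboring vehicle forward over a short horizon and flags a potential side-sweep. The class, safety gap, and horizon are illustrative assumptions, not parameters from the dissertation's simulator.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float   # longitudinal position along the highway [m]
    velocity: float   # [m/s]
    lane: int

def lane_change_conflict(ego: Vehicle, other: Vehicle, target_lane: int,
                         horizon: float = 3.0, safety_gap: float = 10.0) -> bool:
    """Illustrative check: predict both vehicles' positions over a short horizon
    and flag a side-sweep conflict if the longitudinal gap in the target lane
    falls below a safety threshold (where a V2V warning could be broadcast)."""
    if other.lane != target_lane:
        return False
    for t in (0.0, horizon / 2, horizon):
        gap = abs((ego.position + ego.velocity * t) -
                  (other.position + other.velocity * t))
        if gap < safety_gap:
            return True
    return False

print(lane_change_conflict(Vehicle(0, 30, 1), Vehicle(-8, 33, 2), target_lane=2))
```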
207

Security of Autonomous Systems under Physical Attacks: With application to Self-Driving Cars

Dutta, Raj Gautam 01 January 2018 (has links)
The drive to achieve trustworthy autonomous cyber-physical systems (CPS), which can attain goals independently in the presence of significant uncertainties and for long periods of time without any human intervention, has always been enticing. Significant progress has been made in both software and hardware toward fulfilling these objectives. However, technological challenges still exist, particularly in decision making under uncertainty. In an autonomous system, uncertainties can arise from the operating environment, adversarial attacks, and from within the system. As a result of these concerns, human beings lack trust in these systems and hesitate to use them day to day. In this dissertation, we develop algorithms to enhance trust by mitigating physical attacks targeting the integrity and security of the sensing units of autonomous CPS. The sensors of these systems are responsible for gathering data about the physical processes, and a lack of measures for securing their information can enable malicious attackers to cause life-threatening situations; this serves as the motivation for developing attack-resilient solutions. Among various security solutions, attention has recently been paid to developing system-level countermeasures for CPS whose sensor measurements are corrupted by an attacker. Our methods follow this direction: we develop one active and multiple passive algorithms to detect attacks and minimize their effect on the internal state estimates of the system. In the active approach, we leverage a challenge authentication technique to detect two types of attacks on active sensors: Denial of Service (DoS) and delay injection. Furthermore, we develop a recursive least squares estimator for recovery of the system from attacks. The majority of the dissertation focuses on designing passive approaches against sensor attacks. In the first method, we focus on a linear stochastic system with multiple sensors, where measurements are fused in a central unit to estimate the state of the CPS. By leveraging a Bayesian interpretation of the Kalman filter and combining it with a Chi-Squared detector, we recursively estimate states within an error bound and detect DoS and False Data Injection attacks. We also analyze the asymptotic performance of the estimator and provide conditions for the resilience of the state estimate. Next, we propose a novel distributed estimator based on l1-norm optimization, which can recursively estimate states within an error bound without restricting the number of agents of the distributed system that can be compromised. We also extend this estimator to a vehicle platoon scenario subjected to sparse attacks, and we analyze the resiliency and asymptotic properties of both estimators. Finally, we make an initial effort to formally verify the control system of the autonomous CPS using statistical model checking, to ensure that a real-time, resource-constrained system such as a self-driving car, with its controllers and security solutions, adheres to strict timing constraints.
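To illustrate the Kalman-filter-plus-Chi-Squared detection idea described above, here is a minimal residual-test sketch: the Kalman filter's measurement innovation is normalized by its covariance and compared against a chi-squared threshold. The measurement dimension, covariance, and threshold below are assumed example figures, not values from the dissertation.

```python
import numpy as np

def chi_squared_detect(innovation, S, threshold):
    """Residual-based detector: flag a sensor attack when the normalized
    innovation (Mahalanobis distance of the measurement residual) exceeds a
    chi-squared threshold. `innovation` is z - H @ x_pred and `S` is its
    covariance from the Kalman filter; `threshold` is chosen from the
    chi-squared distribution for a desired false-alarm rate."""
    g = innovation.T @ np.linalg.inv(S) @ innovation
    return g > threshold, g

# Example with a 2-D measurement; threshold ~ chi2(0.99, df=2) ≈ 9.21 (assumed).
alarm, g = chi_squared_detect(np.array([0.4, -3.2]), np.diag([0.5, 0.5]), 9.21)
print(alarm, round(float(g), 2))
```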
208

Bridging the Gap between Application and Solid-State-Drives

Zhou, Jian 01 January 2018 (has links)
Data storage is one of the most important and often critical parts of a computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase-change memory (PCM), magnetic RAM (STT-RAM), and the Memristor, have emerged recently, and many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not exhibit simple locality. Providing optimal storage requires accurate predictions of I/O behavior; however, workloads are increasingly dynamic and diverse, making both long-term and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added for NAND-flash-based Solid State Drives (NAND SSDs), but it introduces overheads such as address translation delay and garbage collection costs. Many recent studies aim to address these overheads; unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity in machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance through both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O-correlation-aware flash translation layer for NAND-flash-based Solid State Drives. Fourth, we build a DRAM-based correlation-aware FTL emulator and study its performance under various filesystems.
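As a rough illustration of what a flash translation layer does (logical-to-physical address translation plus out-of-place updates that later require garbage collection), here is a minimal page-level FTL sketch. The class and its fields are illustrative assumptions, not the correlation-aware design developed in the dissertation.

```python
class PageLevelFTL:
    """Minimal page-level FTL sketch: logical pages are remapped to free physical
    pages on every write, and the old physical page is marked invalid for later
    garbage collection (illustrative only)."""

    def __init__(self, num_physical_pages):
        self.l2p = {}                                  # logical -> physical map
        self.free = list(range(num_physical_pages))    # free physical pages
        self.invalid = set()                           # pages awaiting GC

    def write(self, logical_page):
        if logical_page in self.l2p:
            self.invalid.add(self.l2p[logical_page])   # out-of-place update
        phys = self.free.pop(0)
        self.l2p[logical_page] = phys
        return phys

    def read(self, logical_page):
        return self.l2p[logical_page]                  # address translation step

ftl = PageLevelFTL(num_physical_pages=8)
ftl.write(0); ftl.write(0)          # second write invalidates the first page
print(ftl.read(0), ftl.invalid)
```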
209

Improving Efficiency in Deep Learning for Large Scale Visual Recognition

Liu, Baoyuan 01 January 2016 (has links)
Recent large-scale visual recognition methods, in particular deep Convolutional Neural Networks (CNNs), promise to revolutionize many computer-vision-based artificial intelligence applications, such as autonomous driving and online image retrieval systems. One of the main challenges in large-scale visual recognition is the complexity of the corresponding algorithms, further exacerbated by the fact that in most real-world scenarios they need to run in real time on platforms with limited computational resources. This dissertation focuses on improving the efficiency of such large-scale visual recognition algorithms from several perspectives. First, to reduce the complexity of large-scale classification to sub-linear in the number of classes, a probabilistic label tree framework is proposed. A test sample is classified by traversing the label tree from the root node, where each node in the tree is associated with a probabilistic estimate of all the labels. The tree is learned recursively with iterative maximum likelihood optimization. Compared to the hard label partition proposed previously, the probabilistic framework performs classification more accurately with similar efficiency. Second, we explore the redundancy of parameters in CNNs and employ sparse decomposition to significantly reduce both the number of parameters and the computational complexity. Both inter-channel and inner-channel redundancy are exploited to achieve more than 90% sparsity with approximately a 1% drop in classification accuracy. We also propose an efficient CPU-based sparse matrix multiplication algorithm to reduce the actual running time of CNN models with sparse convolutional kernels. Third, we propose a multi-stage framework based on CNNs to achieve better efficiency than a single traditional CNN model. With a combination of a cascade model and the label tree framework, the proposed method divides the input images in both the image space and the label space and processes each image with the CNN models that are most suitable and efficient. The average complexity of the framework is significantly reduced, while the overall accuracy remains the same as in the single complex model.
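A hedged sketch of the label-tree traversal described above follows: at each internal node, a probabilistic model scores the children and the highest-scoring branch is followed, so classification cost grows with tree depth rather than with the number of classes. The node representation and the `predict_proba` hook are assumptions for illustration, not the dissertation's implementation.

```python
def classify(node, x, predict_proba):
    """Probabilistic label-tree traversal (illustrative): `node` is a dict with
    a "children" list and a leaf "label"; `predict_proba(node, x)` is an assumed
    per-node model returning one probability per child for sample `x`."""
    while node["children"]:
        probs = predict_proba(node, x)                              # score each child
        node = node["children"][max(range(len(probs)), key=probs.__getitem__)]
    return node["label"]                                            # leaf = predicted class
```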
210

Improving the Performance of Data-intensive Computing on Cloud Platforms

Dai, Wei 01 January 2017 (has links)
Big Data, such as terabyte- and petabyte-scale datasets, is rapidly becoming the new norm for various organizations across a wide range of industries. The widespread need for data-intensive computing has inspired innovations in parallel and distributed computing, which has been the effective way to tackle massive computing workloads for decades. One significant example is MapReduce, which is both a programming model for expressing distributed computations on huge datasets and an execution framework for data-intensive computing on commodity clusters. Since it was originally proposed by Google, MapReduce has become the most popular technology for data-intensive computing. While Google owns its proprietary implementation of MapReduce, an open-source implementation called Hadoop has gained wide adoption in the rest of the world. The combination of Hadoop and Cloud platforms has made data-intensive computing much more accessible and affordable than ever before. This dissertation addresses the performance of data-intensive computing on Cloud platforms from three different aspects: task assignment, replica placement, and straggler identification. Both task assignment and replica placement are closely related to load balancing, one of the key issues that can significantly affect the performance of parallel and distributed applications. While task assignment schemes strive to balance the data processing load among cluster nodes to achieve minimum job completion time, replica placement policies aim to assign block replicas to cluster nodes according to their processing capabilities to exploit data locality to the maximum extent. Straggler identification is also one of the crucial issues data-intensive computing has to deal with, as the overall performance of parallel and distributed applications is often determined by the node with the lowest performance. The results of extensive evaluation tests confirm that the schemes and policies proposed in this dissertation can improve the performance of data-intensive applications running on Cloud platforms.
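To illustrate the straggler identification problem in concrete terms, the sketch below flags tasks whose progress rate falls well below the median of their peers, which is the intuition behind speculative execution in MapReduce-style systems. The task names, rates, and slowdown factor are assumed examples, not the policy proposed in the dissertation.

```python
import statistics

def find_stragglers(task_progress_rates, slowdown_factor=0.5):
    """Mark a task as a straggler when its progress rate falls well below the
    median rate of its peers, so it can be speculatively re-executed on a
    faster node (illustrative heuristic; all values are examples)."""
    median_rate = statistics.median(task_progress_rates.values())
    return [task for task, rate in task_progress_rates.items()
            if rate < slowdown_factor * median_rate]

print(find_stragglers({"map-01": 1.0, "map-02": 0.9, "map-03": 0.3}))
```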
