211

Reducing Side-sweep Accidents with Vehicle-to-Vehicle Communications

Bulumulle, Gamini 01 January 2017 (has links)
This dissertation presents contributions to the understanding of the causes of side-sweep accidents on multi-lane highways using computer simulation. Side-sweep accidents are one of the major causes of loss of life and property damage on highways. This type of accident occurs when a driver initiates a lane change while another vehicle occupies the target lane. Our objective in the research described in this dissertation was to understand and simulate the different factors that affect the likelihood of side-sweep accidents. For instance, we know that blind spots (parts of the road that are not visible to the driver directly or through the rear-view mirrors) are often a contributing factor. Similarly, the frequency with which a driver checks the rear-view mirrors before initiating the lane change affects the likelihood of the accident. Intuitively, side-sweep accidents are also more likely when there is a significant difference in vehicle velocities between the current and the target lanes. Other factors can reduce the likelihood of the accident: for instance, signaling the lane change can alert nearby vehicles, which can change their behavior to give way to the lane-changing vehicle. The emerging technology of vehicle-to-vehicle communication offers promising new avenues for avoiding such collisions by having vehicles communicate their lane-change intent and positions, so that automatic action can be taken to avoid the accident. While we can form intuitions about whether individual factors increase or reduce the accident rate, these factors interact with each other in complex ways. The research described in this dissertation developed a highway driving simulator specialized for the accurate simulation of the various factors that contribute to the act of lane change in highway driving.
We model the traffic as seen from the lane-changing vehicle, including the density, distribution, and relative velocity of the vehicles in the target lane. We also model the geometry of the vehicle, including size, windows, mirrors, and blind spots. Turning to the human factors of the simulation, we model the driver's behavior with regard to the timing of mirror checks, signaling, and the lane-change decision. Finally, we model communication, both in the traditional way, using turn signals, and through automated vehicle-to-vehicle communication. The detailed modeling of these factors allowed us to perform extensive simulation studies of the impact of various factors on the probability of side-sweep accidents. We validated the simulation models by comparing the results with real-world observations from the National Highway Traffic Safety Administration. One benefit of our model is that it captures the impact of vehicle-to-vehicle communication, a technology currently in the prototype stage that cannot yet be studied in extensive real-world scenarios.
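The interaction among mirror checks, blind spots, and signaling can be illustrated with a toy Monte Carlo model. This is not the dissertation's simulator; every probability, parameter name, and formula below is invented purely for illustration:

```python
import random

def lane_change_trial(mirror_check_period, blind_spot_occupied_p, signal_warning_p, rng):
    """One simulated lane-change attempt; returns True if a side-sweep occurs.

    Toy model: the driver sees the target-lane vehicle only if it is outside
    the blind spot AND a mirror check happened recently enough.
    """
    vehicle_in_blind_spot = rng.random() < blind_spot_occupied_p
    checked_recently = rng.random() < 1.0 / mirror_check_period
    if (not vehicle_in_blind_spot) and checked_recently:
        return False  # driver notices the vehicle and aborts the lane change
    # Signaling gives the nearby driver a chance to yield and prevent the collision.
    other_driver_yields = rng.random() < signal_warning_p
    return not other_driver_yields

def accident_rate(trials=10000, **params):
    """Estimate the side-sweep probability for one parameter setting."""
    rng = random.Random(42)
    hits = sum(lane_change_trial(rng=rng, **params) for _ in range(trials))
    return hits / trials
```

Even this toy model reproduces the qualitative claim above: raising the mirror-check frequency (lowering `mirror_check_period`) lowers the estimated accident rate.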
212

Security of Autonomous Systems under Physical Attacks: With application to Self-Driving Cars

Dutta, Raj Gautam 01 January 2018 (has links)
The drive to achieve trustworthy autonomous cyber-physical systems (CPS), which can attain goals independently in the presence of significant uncertainties and for long periods of time without any human intervention, has always been enticing. Significant progress has been made on both the software and hardware fronts toward fulfilling these objectives. However, technological challenges still exist, particularly in decision making under uncertainty. In an autonomous system, uncertainties can arise from the operating environment, from adversarial attacks, and from within the system. As a result of these concerns, human beings lack trust in these systems and hesitate to use them day to day. In this dissertation, we develop algorithms to enhance trust by mitigating physical attacks targeting the integrity and security of the sensing units of autonomous CPS. The sensors of these systems are responsible for gathering data about the physical processes, and a lack of measures for securing their information can enable malicious attackers to cause life-threatening situations; this motivates the development of attack-resilient solutions. Among various security solutions, attention has recently turned toward system-level countermeasures for CPS whose sensor measurements are corrupted by an attacker. Our methods follow this direction: we develop one active and multiple passive algorithms to detect attacks and minimize their effect on the internal state estimates of the system. In the active approach, we leverage a challenge authentication technique to detect two types of attacks on active sensors: Denial of Service (DoS) and delay injection. Furthermore, we develop a recursive least-squares estimator for recovery of the system from attacks. The majority of the dissertation focuses on designing passive approaches to sensor attacks.
In the first method, we focus on a linear stochastic system with multiple sensors, where measurements are fused in a central unit to estimate the state of the CPS. By leveraging a Bayesian interpretation of the Kalman filter and combining it with a chi-squared detector, we recursively estimate states within an error bound and detect DoS and False Data Injection attacks. We also analyze the asymptotic performance of the estimator and provide conditions for the resilience of the state estimate. Next, we propose a novel distributed estimator based on l1-norm optimization, which recursively estimates states within an error bound without restricting the number of agents of the distributed system that can be compromised. We also extend this estimator to a vehicle-platoon scenario subjected to sparse attacks, and we analyze the resiliency and asymptotic properties of both estimators. Finally, we make an initial effort to formally verify the control system of the autonomous CPS using statistical model checking, to ensure that a real-time, resource-constrained system such as a self-driving car, with its controllers and security solutions, adheres to strict timing constraints.
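The Kalman-filter-plus-chi-squared detection idea can be sketched in a scalar setting. This is a generic textbook construction, not the dissertation's estimator; all parameter values (process noise `q`, measurement noise `r`, the 95% one-degree-of-freedom threshold 3.84) are illustrative:

```python
import numpy as np

def kalman_chi2(zs, a=1.0, c=1.0, q=0.01, r=0.1, thresh=3.84):
    """Scalar Kalman filter with a chi-squared test on each innovation.

    Returns a boolean array: True where the normalized squared innovation
    exceeds the threshold, i.e. the measurement is flagged as attacked.
    """
    x, p = 0.0, 1.0  # state estimate and error variance
    flags = []
    for z in zs:
        # Predict step.
        x_pred = a * x
        p_pred = a * p * a + q
        # Innovation (residual) and its predicted variance.
        nu = z - c * x_pred
        s = c * p_pred * c + r
        flags.append(nu * nu / s > thresh)  # chi-squared test, 1 dof
        # Update step.
        k = p_pred * c / s
        x = x_pred + k * nu
        p = (1 - k * c) * p_pred
    return np.array(flags)
```

A sudden injected bias makes the innovation jump far outside its predicted variance, which is exactly what the chi-squared statistic detects.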
213

Bridging the Gap between Application and Solid-State-Drives

Zhou, Jian 01 January 2018 (has links)
Data storage is one of the most important and often critical parts of a computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase-change memory (PCM), magnetic RAM (STT-RAM), and the memristor, have emerged recently, and many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not exhibit simple locality. Providing optimal storage requires accurate predictions of I/O behavior; however, workloads are increasingly dynamic and diverse, making both long- and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added to NAND-flash-based Solid State Disks (NAND-SSDs), but it introduces overhead such as address-translation delay and garbage-collection costs. Many recent studies aim to address this overhead; unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity of machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance through both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O-correlation-aware flash translation layer for NAND-flash-based Solid State Disks.
Fourth, we build a DRAM-based correlation-aware FTL emulator and study its performance under various filesystems.
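The address-translation and garbage-collection overheads that an FTL introduces can be illustrated with a toy page-mapping sketch. This is a hypothetical minimal model, not the correlation-aware FTL developed in the dissertation:

```python
class PageFTL:
    """Minimal page-mapping FTL sketch.

    Flash pages cannot be overwritten in place, so each logical write goes to
    a fresh physical page; the old copy becomes stale and must eventually be
    reclaimed by garbage collection. Every read pays one table lookup: the
    address-translation overhead mentioned above.
    """
    def __init__(self):
        self.l2p = {}        # logical page number -> physical page number
        self.next_free = 0   # next free physical page (append-only log)
        self.stale = set()   # invalidated physical pages awaiting GC

    def write(self, lpn):
        if lpn in self.l2p:
            self.stale.add(self.l2p[lpn])  # out-of-place update: old copy is now stale
        self.l2p[lpn] = self.next_free
        self.next_free += 1

    def read(self, lpn):
        return self.l2p[lpn]  # one translation-table lookup per read

    def gc_candidates(self):
        return len(self.stale)  # pages garbage collection must erase and reclaim
```

Repeatedly overwriting the same logical page grows the stale set, which is why write-heavy, poorly correlated workloads inflate garbage-collection cost.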
214

Improving Efficiency in Deep Learning for Large Scale Visual Recognition

Liu, Baoyuan 01 January 2016 (has links)
Recent large-scale visual recognition methods, and in particular deep Convolutional Neural Networks (CNNs), promise to revolutionize many computer-vision-based artificial intelligence applications, such as autonomous driving and online image retrieval systems. One of the main challenges in large-scale visual recognition is the complexity of the corresponding algorithms. This is further exacerbated by the fact that in most real-world scenarios they need to run in real time and on platforms with limited computational resources. This dissertation focuses on improving the efficiency of such large-scale visual recognition algorithms from several perspectives. First, to reduce the complexity of large-scale classification to sub-linear in the number of classes, a probabilistic label-tree framework is proposed. A test sample is classified by traversing the label tree from the root node; each node in the tree is associated with a probabilistic estimate over all the labels. The tree is learned recursively with iterative maximum-likelihood optimization. Compared to the hard label partitioning proposed previously, the probabilistic framework performs classification more accurately with similar efficiency. Second, we explore the redundancy of parameters in CNNs and employ sparse decomposition to significantly reduce both the number of parameters and the computational complexity. Both inter-channel and intra-channel redundancy are exploited to achieve more than 90% sparsity with approximately a 1% drop in classification accuracy. We also propose an efficient CPU-based sparse matrix multiplication algorithm to reduce the actual running time of CNN models with sparse convolutional kernels. Third, we propose a multi-stage framework based on CNNs that achieves better efficiency than a single traditional CNN model.
Combining a cascade model with the label-tree framework, the proposed method divides the input images in both the image space and the label space, and processes each image with the CNN models that are most suitable and efficient. The average complexity of the framework is significantly reduced, while the overall accuracy remains the same as that of the single complex model.
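Sparse convolutional kernels speed up inference because only the nonzero weights participate in the multiplication. A minimal CSR-style (compressed sparse row) sketch of that idea, as a generic construction rather than the dissertation's CPU algorithm:

```python
import numpy as np

def dense_to_csr(w, tol=0.0):
    """Convert a pruned weight matrix to CSR arrays: values, column indices,
    and row pointers. Zeros (below tol) are simply dropped."""
    vals, cols, rowptr = [], [], [0]
    for row in w:
        for j, v in enumerate(row):
            if abs(v) > tol:
                vals.append(v)
                cols.append(j)
        rowptr.append(len(vals))
    return np.array(vals), np.array(cols, dtype=int), np.array(rowptr)

def csr_matvec(vals, cols, rowptr, x):
    """Sparse matrix-vector product: at >90% sparsity, ~10x fewer multiply-adds
    than the dense product."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        lo, hi = rowptr[i], rowptr[i + 1]
        y[i] = vals[lo:hi] @ x[cols[lo:hi]]
    return y
```

Convolution itself reduces to this after an im2col-style unrolling of the input, which is how sparse kernels translate into real CPU-time savings.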
215

Improving the Performance of Data-intensive Computing on Cloud Platforms

Dai, Wei 01 January 2017 (has links)
Big Data, in the form of terabyte- and petabyte-scale datasets, is rapidly becoming the new norm for organizations across a wide range of industries. Widespread data-intensive computing needs have inspired innovations in parallel and distributed computing, which has been the effective way to tackle massive computing workloads for decades. One significant example is MapReduce, which is both a programming model for expressing distributed computations on huge datasets and an execution framework for data-intensive computing on commodity clusters. Since it was originally proposed by Google, MapReduce has become the most popular technology for data-intensive computing. While Google retains its proprietary implementation of MapReduce, an open-source implementation called Hadoop has gained wide adoption in the rest of the world. The combination of Hadoop and Cloud platforms has made data-intensive computing much more accessible and affordable than ever before. This dissertation addresses the performance of data-intensive computing on Cloud platforms from three different aspects: task assignment, replica placement, and straggler identification. Both task assignment and replica placement are closely related to load balancing, one of the key issues that can significantly affect the performance of parallel and distributed applications. While task assignment schemes strive to balance the data-processing load among cluster nodes to achieve minimum job completion time, replica placement policies aim to assign block replicas to cluster nodes according to their processing capabilities so as to exploit data locality to the maximum extent. Straggler identification is also one of the crucial issues data-intensive computing has to deal with, as the overall performance of parallel and distributed applications is often determined by the node with the lowest performance.
The results of extensive evaluation tests confirm that the schemes and policies proposed in this dissertation can improve the performance of data-intensive applications running on Cloud platforms.
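Straggler identification is commonly done by comparing per-task progress rates against a cluster-wide statistic. A minimal sketch of that pattern; the median-based threshold and the rate representation here are illustrative assumptions, not the dissertation's scheme:

```python
def find_stragglers(progress_rates, slack=0.5):
    """Flag tasks whose progress rate falls far below the cluster median.

    progress_rates: dict mapping task id -> progress per unit time.
    slack: fraction of the median below which a task counts as a straggler
           (a candidate for speculative re-execution on another node).
    """
    rates = sorted(progress_rates.values())
    mid = len(rates) // 2
    median = rates[mid] if len(rates) % 2 else (rates[mid - 1] + rates[mid]) / 2
    return sorted(t for t, r in progress_rates.items() if r < slack * median)
```

Using the median rather than the mean keeps one extremely slow task from dragging the threshold down and masking itself.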
216

Energy Efficient and Secure Wireless Sensor Networks Design

Attiah, Afraa 01 January 2018 (has links)
Wireless Sensor Networks (WSNs) are an emerging technology with the ability to sense, process, communicate, and transmit information to a destination, and they are expected to have a significant impact on the efficiency of applications in many fields. Resource constraints, such as limited battery power, are the greatest challenge in WSN design, as they affect the lifetime and performance of the network. An energy-efficient, secure, and trustworthy system is vital when a WSN involves highly sensitive information. Thus, it is critical to design mechanisms that are energy efficient and secure while maintaining the desired level of quality of service. Inspired by these challenges, this dissertation is dedicated to exploiting optimization and game-theoretic approaches to handle several important issues in WSN communication, including energy efficiency, latency, congestion, dynamic traffic load, and security. We present several novel mechanisms to improve the security and energy efficiency of WSNs. Two new schemes are proposed for the network-layer stack: (a) a scheme that enhances energy efficiency through optimized sleep intervals while accounting for the underlying dynamic traffic load, and (b) a routing protocol that handles wasted energy, congestion, and clustering. We also propose efficient routing and energy-efficient clustering algorithms based on optimization and game theory. Furthermore, we propose a dynamic game-theoretic framework (hyper defense) that analyzes the interactions between attacker and defender as a non-cooperative security game under resource limitations. All the proposed schemes are validated by extensive experimental analyses, obtained by running simulations depicting various situations in WSNs so as to represent real-world scenarios as realistically as possible.
The results show that the proposed schemes achieve high performance on metrics such as network lifetime compared with state-of-the-art schemes.
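The lifetime/latency trade-off behind optimized sleep intervals can be sketched with a first-order duty-cycle energy model. The parameter names and formulas below are standard textbook approximations chosen for illustration, not the dissertation's optimization scheme:

```python
def node_lifetime_hours(battery_mah, awake_ma, sleep_ma, duty_cycle):
    """Expected lifetime of a sensor node under periodic sleep scheduling.

    The average current draw is the duty-cycle-weighted mix of the awake and
    sleep currents; lifetime is battery capacity divided by that average.
    """
    avg_current_ma = duty_cycle * awake_ma + (1 - duty_cycle) * sleep_ma
    return battery_mah / avg_current_ma

def expected_latency_s(period_s, duty_cycle):
    """A packet arriving at a random time waits, on average, half the sleep
    span of each wake/sleep period before the radio can receive it."""
    return (1 - duty_cycle) * period_s / 2
```

The tension is visible immediately: a lower duty cycle multiplies lifetime but also multiplies expected latency, which is why sleep intervals must be optimized against the traffic load rather than fixed.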
217

Masquerading Techniques in IEEE 802.11 Wireless Local Area Networks

Nakhila, Omar 01 January 2018 (has links)
The airborne nature of wireless transmission offers a potential target for attackers seeking to compromise IEEE 802.11 Wireless Local Area Networks (WLANs). In this dissertation, we explore current WLAN security threats and their corresponding defense solutions. We divide WLAN vulnerabilities into two aspects: client-side and administrator-side. The client-side investigation examines the Evil Twin Attack (ETA), while the administrator-side research targets Wi-Fi Protected Access II (WPA2). Three novel techniques are presented to detect an ETA. The detection methods are based on (1) creating a secure connection to a remote server to detect a change in the gateway's public IP address when switching from one Access Point (AP) to another; (2) monitoring multiple Wi-Fi channels in random order, looking for specific data packets sent by the remote server; and (3) merging the previous solutions into one universal ETA detection method using Virtual Wireless Clients (VWCs). We also present a new vulnerability that allows an attacker to force a victim's smartphone to consume data through the cellular network by starting a data download on the victim's phone without the victim's permission. In addition, a new scheme has been developed to increase the intensity of an active dictionary attack on WPA2, based on two novel ideas: first, the scheme connects multiple VWCs to the AP at the same time, each with its own spoofed MAC address; second, each VWC can try many passphrases within a single wireless session. Furthermore, we present a new technique to circumvent the bandwidth limitation imposed by Wi-Fi hotspots: the proposed method creates multiple VWCs to access the WLAN, and combining the individual bandwidth of each VWC increases the total bandwidth gained by the attacker. All proposed techniques have been implemented and evaluated in real-life scenarios.
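The first ETA detection idea, comparing the gateway's public IP address as observed through different APs, can be sketched as follows. This is a simplified illustration of the defensive check only; `ip_per_ap` is an assumed input mapping (AP identifier to observed public IP), not the dissertation's implementation:

```python
from collections import Counter

def detect_evil_twin(ip_per_ap):
    """Sketch of the gateway-IP consistency check.

    A legitimate network's APs route through one gateway, so clients see the
    same public IP via each of them. An Evil Twin that routes traffic through
    the attacker's own uplink typically exposes a different public IP.
    Returns the list of APs whose observed IP disagrees with the majority.
    """
    if len(set(ip_per_ap.values())) <= 1:
        return []  # all APs agree: no ETA evidence from this test
    majority_ip, _ = Counter(ip_per_ap.values()).most_common(1)[0]
    return sorted(ap for ap, ip in ip_per_ap.items() if ip != majority_ip)
```

In practice the comparison happens over a secure channel to a remote server (so the attacker cannot spoof the reply), which is the point of step (1) above.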
218

A Contextual Approach to Real Time, Interactive Narrative Generation

Hollister, James 01 January 2016 (has links)
Oral storytelling has become a lost art of family histories, as social media and technology have taken over the personal interactions that once passed important stories and facts from generation to generation. This dissertation presents and evaluates a method of generating a narrative with input from the listener without actually forcing him or her to become a character in the narrative. The system, called the CAMPFIRE Story Telling System (STS), employs a contextual approach to story generation. It uses the Cooperating Context Method (CCM) to generate and tell dynamic stories in real time that can be modified by the listener. CCM was created to overcome the weaknesses found in other contextual approaches to story generation while still meeting three design criteria: 1) the ability to plan out a story; 2) the ability to create a narrative that is entertaining to the listener; and 3) the ability to modify the story to incorporate the listener's requests. The CCM process begins by creating a list of tasks through analysis of the current situation. A list of contexts is then narrowed down, through a series of algorithms, into two lists: a high-priority list and a low-priority list. These lists are analyzed, and a set of contexts best suited to handle the tasks is selected. The CAMPFIRE STS was rigorously assessed for its functionality, novelty, and user acceptance, as well as for the time needed to modify its knowledge base. These evaluations showed that the CAMPFIRE STS can create novel stories from the same knowledge base. A group of 38 test subjects used and evaluated the CAMPFIRE STS with respect to its use for children, story entertainment, story creativity, and ease of use, answering an extensive survey of 54 questions.
The survey showed that, with some minor modifications, the CAMPFIRE STS can create stories appropriate for bedtime, that the generated stories are novel and entertaining, and that the system is easy to use.
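The two-list context selection step described above might be sketched as follows. The scoring scheme, data layout, and threshold are assumptions made purely for illustration; they are not the actual CCM algorithms:

```python
def select_contexts(tasks, contexts, threshold=0.5):
    """Toy sketch of CCM-style selection: score each context per task, split
    the candidates into high- and low-priority lists, then pick the
    best-scoring context for each task (falling back to the low-priority
    list when no context clears the threshold)."""
    chosen = {}
    for task in tasks:
        scored = [(ctx["score"].get(task, 0.0), ctx["name"]) for ctx in contexts]
        high_priority = [s for s in scored if s[0] >= threshold]
        pool = high_priority if high_priority else scored
        chosen[task] = max(pool)[1]  # best (score, name) pair in the pool
    return chosen
```

The fallback branch mirrors the idea that a story must continue even when no context is an ideal fit for the current task.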
219

Enhancing Cognitive Algorithms for Optimal Performance of Adaptive Networks

Lugo-Cordero, Hector 01 January 2018 (has links)
This research proposes to enhance certain evolutionary algorithms in order to obtain optimal and adaptive network configurations. Because of their richness in technologies, low cost, and breadth of applications, we consider Heterogeneous Wireless Mesh Networks; in particular, we evaluate the domains of network deployment, smart grids/homes, and intrusion detection systems. With an adaptive network as one of the goals, we adopt a robust, noise-tolerant methodology that can react quickly to changes in the environment. Furthermore, the diversity of the performance objectives considered (e.g., power, coverage, anonymity) makes the objective function non-continuous and therefore non-differentiable. For these reasons, we enhance the Particle Swarm Optimization (PSO) algorithm with elements that aid in exploring for better configurations, yielding both optimal and sub-optimal configurations. According to our results, the enhanced PSO promotes population diversity, leading to more unique optimal configurations for adapting to dynamic environments. The gradual complexification process produced simpler optimal solutions than those obtained via trial and error without the enhancements. Configurations obtained by the modified PSO are further tuned in real time as the environment changes. This tuning is performed by a Fuzzy Logic Controller (FLC), which models human decision making by monitoring certain events in the algorithm, such as the diversity and quality of solutions in the environment. The FLC adapts the enhanced PSO to changes in the environment, inducing more exploration or exploitation as needed. By adding a Probabilistic Neural Network (PNN) classifier, the enhanced PSO is also used as a filter to aid intrusion detection classification. This approach reduces misclassifications by consulting neighbors in the case of ambiguous samples.
Resolving ambiguous votes via PSO filtering improves classification, enabling the simple classifier to outperform commonly used classifiers.
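For reference, a baseline PSO, before any of the enhancements described above, looks like the following sketch. The inertia and acceleration coefficients (0.7 and 1.5) are common textbook defaults, and the test function is illustrative:

```python
import random

def pso(f, dim=2, swarm=20, iters=200, seed=1):
    """Minimal particle swarm optimization, minimizing f.

    Each particle keeps a velocity blending inertia, attraction to its own
    best-known position (pbest), and attraction to the swarm's best (gbest).
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                        # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Note that the update uses only function values, never gradients, which is why PSO suits the non-differentiable objectives described above; the dissertation's enhancements target the loss of population diversity this baseline suffers in dynamic environments.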
220

Joint Optimization of Illumination and Communication for a Multi-Element VLC Architecture

Ibne Mushfique, Sifat 01 January 2018 (has links)
Because of the ever-increasing demand for wireless data in the modern era, the Radio Frequency (RF) spectrum is becoming more congested. The remaining RF spectrum is shrinking rapidly, and spectral management is becoming more difficult. Mobile data was estimated to grow more than tenfold between 2013 and 2019, and due to this explosion in data usage, mobile operators are seriously considering public Wireless Fidelity (Wi-Fi) and other alternative technologies. Visible Light Communication (VLC) is a recent, promising technology complementary to the RF spectrum. It operates in the visible-light band (roughly 400 THz to 780 THz), which is about 10,000 times larger than the radio band (roughly 3 kHz to 300 GHz). Owing to this tremendous potential, VLC has attracted a great deal of interest recently, helped by the already extensive deployment of energy-efficient Light Emitting Diodes (LEDs); the advancement of LED technology, with switching times on the order of nanoseconds, is also very encouraging. In this work, we present a hybrid RF/VLC architecture capable of providing simultaneous lighting and communication coverage in an indoor setting. The architecture is built around a multi-element hemispherical bulb design that can transmit multiple data streams from its LED modules. We present the detailed components of the architecture and run simulations considering various VLC transmitter configurations. We also devise an approach for efficient bulb design that maintains both illumination and communication at a satisfactory rate, and analyze it for the case of two users in a room. The approach formulates an optimization problem and tackles it with a simple partitioning algorithm. The results indicate that good link quality and high spatial reuse can be maintained in a typical indoor communication setting.
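Joint illumination/communication analyses of such a bulb typically start from the standard Lambertian line-of-sight channel model for an LED. The sketch below is a simplified version of that textbook model (it collapses the irradiance and incidence angles into one, and all parameter values are illustrative), not the dissertation's optimization:

```python
import math

def led_received_power_dbm(pt_mw, distance_m, angle_deg,
                           half_angle_deg=60.0, detector_area_cm2=1.0):
    """First-order Lambertian LOS link budget for a VLC receiver.

    The Lambertian order m is set by the LED's semi-angle at half power;
    channel gain falls off with distance squared and with the cosine of the
    angle (raised to m for the emitter side, once more for the detector).
    """
    m = -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))
    theta = math.radians(angle_deg)
    area_m2 = detector_area_cm2 * 1e-4
    gain = ((m + 1) * area_m2 / (2 * math.pi * distance_m ** 2)
            * math.cos(theta) ** m * math.cos(theta))
    return 10 * math.log10(pt_mw * gain)  # received optical power in dBm
```

The inverse-square and cosine dependence is what couples the two objectives: pointing more LED elements at a user improves its link budget but reshapes the illumination pattern, hence the joint optimization over bulb partitions.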
