21

Coverage Manifold Estimation in Cellular Networks via Conditional GANs

Veni Goyal (18457590) 29 April 2024 (has links)
<p dir="ltr">This research introduces an approach utilizing a novel conditional generative adversarial network (cGAN) tailored specifically for the prediction of cellular network coverage. In comparison to state-of-the-art method convolutional neural networks (CNNs), our cGAN model offers a significant improvement by translating base station locations within any Region-of-Interest (RoI) into precise coverage probability values within a designated region-of-evaluation (RoE). </p><p dir="ltr">By leveraging base station location data from diverse geographical and infrastructural landscapes spanning regions like India, the USA, Germany, and Brazil, our model demonstrates superior predictive performance compared to existing CNN-based approaches. Notably, the prediction error, as quantified by the L1 norm, is reduced by two orders of magnitude in comparison to state-of-the-art CNN models.</p><p dir="ltr">Furthermore, the coverage manifolds generated by our cGAN model closely resemble those produced by conventional simulation methods, indicating a substantial advancement in both prediction accuracy and visual fidelity. This achievement underscores the potential of cGANs in enhancing the precision and reliability of cellular network performance prediction, offering promising implications for optimizing network planning and deployment strategies.</p>
22

Multi-robot System in Coverage Control: Deployment, Coverage, and Rendezvous

Shaocheng Luo (8795588) 04 May 2020 (has links)
Multi-robot systems have demonstrated strong capability in handling environmental operations. In this study, we examine how a team of robots can be utilized to cover and remove spill patches in a dynamic environment by executing three consecutive stages: deployment, coverage, and rendezvous.

For the deployment problem, we aim to allocate robots based on the discreteness of the patches that need to be covered. With a deep neural network (DNN) based spill detector and remote sensing facilities such as drones with vision sensors and satellites, we obtain the spill distribution in the workspace. We then formulate the allocation problem in a general optimization form and solve it with an integer linear programming (ILP) solver under several realistic constraints. After the allocation is completed and the robot team is divided according to the number of spills, we deploy the robots to their computed optimal goal positions. For the deployment itself, control laws based on the artificial potential field (APF) method are proposed and applied to robots with a common unicycle model.

For the coverage control problem, we present two strategies tailored for a wirelessly networked robot team: coverage with and without path planning, depending on the availability of global information. For coverage with path planning, we partition the workspace from the aerial image into pieces and let each robot take care of one piece. However, path-planning-based coverage relies on GPS signals or other external positioning systems, which are not available indoors or in GPS-denied circumstances. We therefore propose an asymptotic boundary shrink control that enables collective coverage by the robot team. This strategy requires no planned path, and because it is distributed it offers many advantages, including system scalability, dynamic spill adaptability, and collision avoidance. For a large-scale patch that challenges connectivity maintenance during the operation, we propose a pivot-robot coverage strategy by means of an a priori geometric tessellation (GT), in which a team of robots completely covers each packing area of the GT in sequence. Ultimately, the entire spill in the workspace can be covered and removed.

For the rendezvous problem, we investigate the use of graph theory and propose control strategies based on network topology that drive the robots to meet at a designated or optimal location. The rendezvous control strategies show strong robustness to common failures, such as mobility failure and communication failure. To expedite the rendezvous process and enable herding control in a distributed way, we propose a multi-robot multi-point rendezvous control strategy.

To verify the validity of the proposed strategies, we carry out simulations on the Robotarium MATLAB platform, an open-source swarm robotics testbed, and conduct real experiments involving multiple mobile robots.
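As a rough illustration of the deployment stage (a minimal sketch, not the thesis's controller; the gains and potential shapes are assumptions), an attractive/repulsive APF can be turned into unicycle commands by steering toward the negative gradient of the potential:

```python
# Sketch: artificial-potential-field (APF) control for a unicycle robot.
# Gains, potential forms, goal, and obstacle positions are illustrative.
import numpy as np

K_ATT = 1.0            # attractive gain toward the goal
K_REP = 0.5            # repulsive gain away from obstacles
RHO0 = 1.0             # obstacle influence radius
K_V, K_W = 0.8, 2.0    # linear / angular velocity gains

def apf_force(p, goal, obstacles):
    """Negative gradient of the combined potential at position p."""
    f = K_ATT * (goal - p)                      # attractive term
    for ob in obstacles:
        d = np.linalg.norm(p - ob)
        if 1e-9 < d < RHO0:                     # repulsion only inside RHO0
            f += K_REP * (1.0 / d - 1.0 / RHO0) * (p - ob) / d**3
    return f

def unicycle_cmd(p, theta, goal, obstacles):
    """Map the APF force to (v, omega) for a unicycle model."""
    f = apf_force(p, goal, obstacles)
    heading = np.arctan2(f[1], f[0])
    err = np.arctan2(np.sin(heading - theta), np.cos(heading - theta))
    v = K_V * np.linalg.norm(f) * np.cos(err)   # slow down when misaligned
    w = K_W * err
    return v, w

# Toy usage: one robot, one obstacle, Euler integration of the unicycle.
p, theta = np.array([0.0, 0.0]), 0.0
goal, obstacles = np.array([5.0, 5.0]), [np.array([2.5, 2.6])]
for _ in range(2000):
    v, w = unicycle_cmd(p, theta, goal, obstacles)
    p += 0.01 * v * np.array([np.cos(theta), np.sin(theta)])
    theta += 0.01 * w
```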
23

COMPARATIVE STUDY OF NETWORKED COMMUNITIES, CRISIS COMMUNICATION, AND TECHNOLOGY: RHETORIC OF DISASTER IN THE NEPAL EARTHQUAKE AND HURRICANE MARIA

Sweta Baniya (8786567) 04 May 2020 (has links)
In April and May 2015, Nepal suffered two massive earthquakes of magnitude 7.5 and 6.5 on the Richter scale, killing 8,856 people and injuring 22,309. Two years later, in September 2017, Puerto Rico was struck by the Category 5 Hurricane Maria, which killed an estimated 800 to 8,000 people and displaced hundreds of thousands of Puerto Ricans (Kishore et al., 2018). This dissertation is a comparative study of Nepal's and Puerto Rico's networked communities, their actors, participants (Potts, 2014), and users (Ingraham, 2015; Johnson, 1998) who used crisis communication practices to address the havoc created by each disaster. Using a mixed-methods research approach and a framework built on Assemblage Theory (DeLanda, 2016), I argue that disasters create situations in which various networked communities form into transnational assemblages, along with an emergence of innovative digital technical and professional communication practices.
24

EXPLOITING THE SPATIAL DIMENSION OF BIG DATA JOBS FOR EFFICIENT CLUSTER JOB SCHEDULING

Akshay Jajoo (9530630) 16 December 2020 (has links)
With the growing business impact of distributed big data analytics jobs, it has become crucial to optimize their execution and resource consumption. In most cases, such jobs consist of multiple sub-entities called tasks and are executed online in a large shared distributed computing system. The ability to accurately estimate runtime properties and coordinate the execution of a job's sub-entities allows a scheduler to schedule jobs efficiently. This thesis presents the first study that highlights the spatial dimension, an inherent property of distributed jobs, and underscores its importance in efficient cluster job scheduling. We develop two new classes of spatial-dimension-based algorithms to address the two primary challenges of cluster scheduling.

First, we propose, validate, and design two complete systems that employ learning algorithms exploiting the spatial dimension. We demonstrate high similarity in runtime properties between sub-entities of the same job through detailed trace analysis of four different industrial cluster traces. We identify design challenges and propose principles for a sampling-based learning system in two settings: a coflow scheduler and a cluster job scheduler.

Second, we propose, design, and demonstrate the effectiveness of new multi-task scheduling algorithms based on effective synchronization across the spatial dimension. We underline, and validate by experimental analysis, the importance of synchronization between sub-entities (flows, tasks) of a distributed entity (coflow, data analytics job) for its efficient execution. We also show that ignoring sibling sub-entities when scheduling can lead to sub-optimal overall cluster performance. We propose, design, and implement a full coflow scheduler based on these assertions.
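The sampling-based idea, as described, exploits the similarity between a job's tasks: run a few pilot tasks, use their measured runtimes to estimate the job's total work, and prioritize accordingly. A minimal sketch of that policy (the job structure, pilot fraction, and shortest-work-first ordering here are illustrative assumptions, not the thesis's exact systems):

```python
# Sketch: sampling-based job scheduling. Pilot a small fraction of each
# job's tasks, estimate total work from the samples, then order jobs by
# estimated work (a shortest-job-first-style policy).
import random

class Job:
    def __init__(self, name, task_runtimes):
        self.name = name
        self.tasks = list(task_runtimes)   # true runtimes, unknown to the scheduler
        self.estimate = None

def pilot_and_estimate(job, pilot_frac=0.05):
    """Run a few sampled tasks; estimate total work from their mean runtime."""
    k = max(1, int(pilot_frac * len(job.tasks)))
    sample = random.sample(job.tasks, k)       # stand-in for executing pilot tasks
    mean_runtime = sum(sample) / k
    job.estimate = mean_runtime * len(job.tasks)
    return job.estimate

def schedule(jobs):
    """Order jobs by estimated total work, smallest first."""
    for job in jobs:
        pilot_and_estimate(job)
    return sorted(jobs, key=lambda j: j.estimate)

# Toy usage: tasks within a job have similar runtimes (the key observation
# the trace analysis supports), so a small sample predicts the whole job.
jobs = [Job(f"job{i}", [random.gauss(mu, 0.1 * mu) for _ in range(100)])
        for i, mu in enumerate([1.0, 5.0, 2.0])]
for job in schedule(jobs):
    print(job.name, round(job.estimate, 1))
```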
25

Bootstrapping a Private Cloud

Deepika Kaushal (9034865) 29 June 2020 (has links)
Cloud computing allows on-demand provisioning, configuration, and assignment of computing resources with minimal cost and effort for users and administrators. Managing the physical infrastructure that underlies cloud computing services requires provisioning and managing bare-metal computer hardware, and hence a way to quickly load operating systems onto bare-metal and virtual machines to service user demands. This study focuses on developing a technique to load these machines remotely, which is complicated by the fact that the machines can sit in different Ethernet broadcast domains, physically distant from the provisioning server. Available bare-metal provisioning frameworks require significant skill and time to use, and there is no easily implementable standard method of booting across separate and different Ethernet broadcast domains. This study proposes a new framework to provision bare-metal hardware remotely, using layer 2 services, in a secure manner. The framework is a composition of existing tools.
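Network booting typically starts with a broadcast request (e.g., a DHCP/PXE discover), and broadcasts do not cross broadcast domains; a relay in the remote domain can forward such requests unicast to the distant provisioning server. The following is a conceptual sketch of that relay idea only, not a component of the proposed framework; the ports and addresses are illustrative assumptions:

```python
# Conceptual sketch: forward broadcast boot requests from a remote Ethernet
# broadcast domain to a distant provisioning server via unicast (the role a
# DHCP relay agent plays in PXE-style provisioning). Illustrative only.
import socket

PROVISIONING_SERVER = ("192.0.2.10", 67)   # hypothetical server address
LISTEN_PORT = 67                            # DHCP/BOOTP server port

def relay_forever():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("0.0.0.0", LISTEN_PORT))     # receives local broadcasts
    while True:
        data, client = sock.recvfrom(4096)
        # Forward the broadcast request unicast across the routed network;
        # a real relay would also rewrite the giaddr field of the DHCP packet
        # so replies can find their way back to this domain.
        sock.sendto(data, PROVISIONING_SERVER)

if __name__ == "__main__":
    relay_forever()  # requires privileges to bind port 67
```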
26

Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis

Yuankun Fu (10223831) 29 April 2021 (has links)
As high-performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis are a key factor in scientific campaigns. As one such campaign to study fluid behavior, computational fluid dynamics (CFD) simulation has progressed rapidly over the past several decades and revolutionized many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at lower computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

I start my research with how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and the corresponding data analysis. Detailed performance analysis using visualized tracing shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grained task parallelism of full asynchrony with pipelining. I also design a novel concurrent data transfer optimization that employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system; it reduces data transfer time by up to 32%, especially when the simulation application is stalled. Investigation of the speedup using OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of Zipper is verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores. Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turn to accelerating the memory-bound LBM algorithms. We first design novel parallel 2D memory-aware LBM algorithms, then extend them to 3D memory-aware LBM that combines single-copy distribution, single sweep, the swap algorithm, prism traversal, and merging of multiple temporal time steps. Strong-scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis. Experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
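As a rough illustration of the dual-channel transfer idea (a minimal sketch under assumed interfaces; the real Zipper design is more elaborate), two workers can drain a shared queue of data chunks, one sending over the network and one staging through the parallel file system, so whichever channel is free steals the next chunk:

```python
# Sketch: concurrent data transfer over two channels with work stealing.
# Each worker pulls the next chunk from a shared queue, so a fast or idle
# channel naturally takes on more of the remaining work.
import queue
import threading
import time

chunks = queue.Queue()
for i in range(20):
    chunks.put(f"chunk-{i:02d}")

def send_over_network(chunk):      # stand-in for a network transfer
    time.sleep(0.01)

def stage_via_filesystem(chunk):   # stand-in for a parallel-file-system write
    time.sleep(0.03)

def worker(name, transfer):
    while True:
        try:
            chunk = chunks.get_nowait()   # "steal" the next pending chunk
        except queue.Empty:
            return
        transfer(chunk)
        print(f"{name} moved {chunk}")

threads = [
    threading.Thread(target=worker, args=("network", send_over_network)),
    threading.Thread(target=worker, args=("filesystem", stage_via_filesystem)),
]
for t in threads: t.start()
for t in threads: t.join()
```

Because neither channel is assigned a fixed share up front, a stalled simulation (which frees the file system channel) or a congested network automatically shifts chunks to the other path.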
27

A 3-DIMENSIONAL UAS FORENSIC INTELLIGENCE-LED TAXONOMY (U-FIT)

Fahad Salamh (11023221) 22 July 2021 (has links)
Although many counter-drone systems, such as drone jammers and anti-drone guns, have been deployed, drone incidents are still increasing. These incidents can be categorized as a deviant act, a criminal act, a terrorist act, or an unintentional act (i.e., system failure). Reported drone incidents are not limited to property damage but include personal injuries, airport disruption, drug transportation, and terrorist activities. Researchers have so far examined drone incidents only from a technological perspective. The variance in drone architectures poses many challenges to current investigative practices, including operational variations such as custom communication links. There is therefore limited research that studies inter-component mapping in unmanned aircraft system (UAS) investigation across three critical investigative domains: behavioral analysis, forensic intelligence (FORINT), and unmanned aerial vehicle (UAV) forensic investigation. The UAS forensic intelligence-led taxonomy (U-FIT) aims to classify the technical, behavioral, and intelligence characteristics of four UAS deviant actions: flying a drone too high, flying a drone close to government buildings, flying a drone over an airfield, and involvement in a drone collision. The behavioral and threat profiles also include one criminal act (i.e., UAV contraband smuggling). The UAV forensic investigation dimension concentrates on investigative techniques, including technical challenges, whereas the behavioral dimension investigates behavioral characteristics, distinguishing among UAS deviant and illegal behaviors. Moreover, the U-FIT taxonomy builds on existing knowledge of current UAS forensic practices to identify patterns that aid in generalizing a UAS forensic intelligence taxonomy. The results across these dimensions support the proposed taxonomy by mapping predicted personality traits to deviant actions and drone smuggling. The score obtained in this study was effective in distinguishing individuals based on certain personality traits. These novel, highly distinguishing features in the behavioral personality of drone users may be of particular importance not only in behavioral psychology but also in law enforcement and intelligence.
28

Data-Driven Computing and Networking Solution for Securing Cyber-Physical Systems

Yifu Wu (18498519) 03 May 2024 (has links)
<p dir="ltr">In recent years, a surge in data-driven computation has significantly impacted security analysis in cyber-physical systems (CPSs), especially in decentralized environments. This transformation can be attributed to the remarkable computational power offered by high-performance computers (HPCs), coupled with advancements in distributed computing techniques and sophisticated learning algorithms like deep learning and reinforcement learning. Within this context, wireless communication systems and decentralized computing systems emerge as highly suitable environments for leveraging data-driven computation in security analysis. Our research endeavors have focused on exploring the vast potential of various deep learning algorithms within the CPS domains. We have not only delved into the intricacies of existing algorithms but also designed novel approaches tailored to the specific requirements of CPSs. A pivotal aspect of our work was the development of a comprehensive decentralized computing platform prototype, which served as the foundation for simulating complex networking scenarios typical of CPS environments. Within this framework, we harnessed deep learning techniques such as restricted Boltzmann machine (RBM) and deep convolutional neural network (DCNN) to address critical security concerns such as the detection of Quality of Service (QoS) degradation and Denial of Service (DoS) attacks in smart grids. Our experimental results showcased the superior performance of deep learning-based approaches compared to traditional pattern-based methods. Additionally, we devised a decentralized computing system that encompassed a novel decentralized learning algorithm, blockchain-based learning automation, distributed storage for data and models, and cryptography mechanisms to bolster the security and privacy of both data and models. Notably, our prototype demonstrated excellent efficacy, achieving a fine balance between model inference performance and confidentiality. Furthermore, we delved into the integration of domain knowledge from CPSs into our deep learning models. This integration shed light on the vulnerability of these models to dedicated adversarial attacks. Through these multifaceted endeavors, we aim to fortify the security posture of CPSs while unlocking the full potential of data-driven computation in safeguarding critical infrastructures.</p>
29

Towards Building a High-Performance Intelligent Radio Network through Deep Learning: Addressing Data Privacy, Adversarial Robustness, Network Structure, and Latency Requirements.

Abu Shafin Moham Mahdee Jameel (18424200) 26 April 2024 (has links)
<p dir="ltr">With the increasing availability of inexpensive computing power in wireless radio network nodes, machine learning based models are being deployed in operations that traditionally relied on rule-based or statistical methods. Contemporary high bandwidth networks enable easy availability of significant amounts of training data in a comparatively short time, aiding in the development of better deep learning models. Specialized deep learning models developed for wireless networks have been shown to consistently outperform traditional methods in a variety of wireless network applications.</p><p><br></p><p dir="ltr">We aim to address some of the unique challenges inherent in the wireless radio communication domain. Firstly, as data is transmitted over the air, data privacy and adversarial attacks pose heightened risks. Secondly, due to the volume of data and the time-sensitive nature of the processing that is required, the speed of the machine learning model becomes a significant factor, often necessitating operation within a latency constraint. Thirdly, the impact of diverse and time-varying wireless environments means that any machine learning model also needs to be generalizable. The increasing computing power present in wireless nodes provides an opportunity to offload some of the deep learning to the edge, which also impacts data privacy.</p><p><br></p><p dir="ltr">Towards this goal, we work on deep learning methods that operate along different aspects of a wireless network—on network packets, error prediction, modulation classification, and channel estimation—and are able to operate within the latency constraint, while simultaneously providing better privacy and security. After proposing solutions that work in a traditional centralized learning environment, we explore edge learning paradigms where the learning happens in distributed nodes.</p>
30

Network Utility Maximization Based on Information Freshness

Cho-Hsin Tsai (12225227) 20 April 2022 (has links)
It is predicted that there will be 41.6 billion IoT devices by 2025, which has kindled new interest in timing coordination between sensors and controllers, i.e., how to use waiting time to improve performance. Sun et al. showed that a controller can strictly improve data freshness, the so-called Age-of-Information (AoI), via careful scheduling designs. The optimal waiting policy for the sensor side was later characterized in the context of remote estimation. The first part of this work develops the jointly optimal sensor/controller waiting policy. It generalizes the above two important results: not only do we consider joint sensor/controller designs, but we also assume random delay in both the forward and feedback directions.

The second part revisits and significantly strengthens the seminal results of Sun et al. on the following fronts: (i) when designing optimal offline schemes with full knowledge of the delay distributions, a new fixed-point-based method is proposed with a quadratic convergence rate; (ii) when the distributional knowledge is unavailable, two new low-complexity online algorithms are proposed that provably attain the optimal average AoI penalty; and (iii) the online schemes admit a modular architecture, which allows the designer to upgrade certain components to handle additional practical challenges. Two such upgrades are proposed: (iii.1) the AoI penalty function incurred at the destination is unknown to the source node and must be estimated on the fly, and (iii.2) the unknown delay distribution is Markovian instead of i.i.d.

With the exponential growth of interconnected IoT devices and the increasing risk of excessive resource consumption in mind, the third part derives an optimal joint cost-and-AoI minimization solution for multiple coexisting source-destination (S-D) pairs. The results admit a new AoI-market-price-based interpretation and apply to (i) general heterogeneous AoI penalty functions and Markov delay distributions for each S-D pair, and (ii) a general network cost function of the aggregate throughput of all S-D pairs.

In each part of this work, extensive simulation demonstrates the superior performance of the proposed schemes. The discussion of analytical as well as numerical results sheds light on designing practical network utility maximization protocols.
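For intuition about the quantity being optimized (a minimal simulation under assumed parameters, not the thesis's algorithms), AoI at the destination grows linearly and resets to the delivery delay of each update; a waiting time at the source trades update frequency against staleness:

```python
# Sketch: average Age-of-Information (AoI) under a constant-wait policy:
# after each delivery, the source waits `wait` seconds, samples fresh data,
# and sends it over a channel with random delay. Parameters are illustrative.
import random

def average_aoi(wait, n_updates=100_000, mean_delay=1.0):
    rng = random.Random(0)
    g_prev = 0.0                                      # generation time, first update
    d_prev = g_prev + rng.expovariate(1.0 / mean_delay)  # its delivery time
    area, t0 = 0.0, d_prev
    for _ in range(n_updates):
        g = d_prev + wait                             # sample after waiting
        d = g + rng.expovariate(1.0 / mean_delay)     # random channel delay
        # Between deliveries, AoI rises linearly measured against g_prev;
        # integrate the sawtooth segment from d_prev to d.
        area += ((d - g_prev) ** 2 - (d_prev - g_prev) ** 2) / 2.0
        g_prev, d_prev = g, d
    return area / (d_prev - t0)

# Compare a few constant waits; the thesis's policies choose waiting times
# optimally (and adaptively) rather than using a fixed constant.
for w in [0.0, 0.5, 1.0, 2.0]:
    print(f"wait={w:.1f}  avg AoI ~ {average_aoi(w):.2f}")
```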
