461 |
Effective Phishing Detection Using Machine Learning Approach / Yaokai, Yang 01 February 2019 (has links)
No description available.
|
462 |
Binding and run-time support for remote procedure call / Kaiserswerth, Mathias. January 1983 (has links)
No description available.
|
463 |
A Distributed System Interface for a Flight Simulator / Zeitoun, Omar 11 1900 (has links)
The importance of flight training has been recognized since the inception of manned flight. This thesis describes a project interfacing hardware cockpit instruments with flight simulation software over a distributed system. A TRC472 Flight Cockpit was linked with Presagis FlightSIM to fully simulate a Cessna 172 Skyhawk aircraft. The TRC472 contains flight gauges (airspeed indicator, RPM indicator, etc.), pilot control devices (rudder, yoke, etc.), and navigation systems (VOR, ADF, etc.), all connected to the computer through separate USB links and identified as HIDs (Human Interface Devices). These devices required real-time interaction with the FlightSIM software, with 21 devices in total communicating at the same time. The TRC472 Flight Cockpit and the FlightSIM software ran on a distributed system of computers communicating over Ethernet. Serialization was used for data transfer across the connection link so that objects could be reproduced seamlessly on the different computers. Some of the TRC472 devices were straightforward to write to and read from, but others required calibration of raw I/O data and buffers. The project also required writing plugins to override and extend the FlightSIM software to communicate with the TRC472 Flight Cockpit. The final product is a full-fledged flight experience with the complete environment and physics of the Cessna 172. / Thesis / Master of Applied Science (MASc)
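The serialization step described above — packing instrument readings into bytes so they can be reproduced on another computer over the Ethernet link — can be sketched with a fixed binary wire format. This is an illustrative sketch only: the field names and the three-float layout are assumptions, not the thesis's actual protocol.

```python
import struct

# Hypothetical wire format for a few TRC472-style gauge readings:
# airspeed (knots), engine RPM, heading (degrees), as little-endian 32-bit floats.
GAUGE_FORMAT = "<3f"

def serialize_gauges(airspeed, rpm, heading):
    """Pack gauge readings into a fixed-size byte string for the link."""
    return struct.pack(GAUGE_FORMAT, airspeed, rpm, heading)

def deserialize_gauges(payload):
    """Reconstruct the readings on the receiving machine."""
    return struct.unpack(GAUGE_FORMAT, payload)
```

A fixed-size binary format like this keeps per-message overhead constant, which matters when 21 devices are exchanging state in real time.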
|
464 |
Towards the Inference, Understanding, and Reasoning on Edge Devices / Ma, Guoqing 10 May 2023 (has links)
This thesis explores the potential of edge devices in three applications: indoor localization, urban traffic prediction, and multi-modal representation learning. For indoor localization, we propose a reliable data transmission network and a robust data processing framework based on visible light communications and machine learning to enhance the intelligence of smart buildings. For urban traffic prediction, we propose a dynamic spatial and temporal origin-destination feature-enhanced deep network that uses a graph convolutional network to collaboratively learn a low-dimensional representation for each region and simultaneously predict in-traffic and out-traffic for every city region. For multi-modal representation learning, we propose using dynamic contexts to uniformly model visual and linguistic causalities, introducing a novel dynamic-contexts-based similarity metric that considers the correlation of potential causes and effects to measure the relevance among images.
To enhance distributed training on edge devices, we introduced a new system called Distributed Artificial Intelligence Over-the-Air (AirDAI), which involves local training on raw data and sending trained outputs, such as model parameters, from local clients back to a central server for aggregation. To aid the development of AirDAI in wireless communication networks, we suggested a general system design and an associated simulator that can be tailored based on wireless channels and system-level configurations. We also conducted experiments to confirm the effectiveness and efficiency of the proposed system design and presented an analysis of the effects of wireless environments to facilitate future implementations and updates.
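The aggregation step described above — clients train locally and send model parameters back to a central server for combination — can be sketched with the classic data-size-weighted federated-averaging rule. The function name and flat-list parameter representation are illustrative assumptions, not AirDAI's actual interface.

```python
def federated_average(client_params, client_sizes):
    """Server-side aggregation: average each parameter across clients,
    weighting every client by the number of local training samples it holds
    (the standard FedAvg rule)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    aggregated = [0.0] * dim
    for params, n in zip(client_params, client_sizes):
        for i, w in enumerate(params):
            aggregated[i] += (n / total) * w
    return aggregated

# Example: a client with 3x the data pulls the average toward its parameters.
# federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3]) -> [2.5, 3.5]
```

Because only parameters travel over the wireless channel, raw data never leaves the client — the property the system design above relies on.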
This thesis also proposes FedForest to address the communication and computation limitations of heterogeneous edge networks; it optimizes the global network by distilling knowledge from aggregated sub-networks. The sub-network sampling process is differentiable, and the model size is used as an additional constraint to extract a new sub-network for the subsequent local optimization process. FedForest significantly reduces server-to-client communication and local device computation costs compared to conventional algorithms while matching the performance of the benchmark Top-K sparsification method. FedForest can accelerate the deployment of large-scale deep learning models on edge devices.
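The Top-K sparsification benchmark mentioned above reduces communication by transmitting only the largest-magnitude entries of an update and dropping the rest. A minimal sketch, with an illustrative (index, value) output format:

```python
def top_k_sparsify(gradient, k):
    """Keep only the k largest-magnitude entries of a gradient vector.
    Returns (indices, values) -- the sparse pairs a client would transmit
    instead of the full dense vector."""
    ranked = sorted(range(len(gradient)),
                    key=lambda i: abs(gradient[i]), reverse=True)
    kept = sorted(ranked[:k])  # ascending indices for a compact encoding
    return kept, [gradient[i] for i in kept]
```

For a million-parameter model with k = 1000, the client sends roughly 0.1% of the dense update, which is the kind of server-to-client saving FedForest is compared against.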
|
465 |
ARTS and CRAFTS: Predictive Scaling for Request-Based Services in the Cloud / Guenther, Andrew 01 June 2014 (has links) (PDF)
Modern web services can see well over a billion requests per day. Data and services at such scale require advanced software and large amounts of computational resources to process requests in reasonable time. Advances in cloud computing now allow additional resources to be acquired faster than in traditional capacity planning scenarios. Companies can scale systems up and down as required, meeting customer demand without having to purchase their own expensive hardware. Unfortunately, these now-routine scaling operations remain a primarily manual task. To solve this problem, we present CRAFTS (Cloud Resource Anticipation For Timing Scaling), a system for automatically identifying application throughput and predictively scaling cloud computing resources based on historical data. We also present ARTS (Automated Request Trace Simulator), a request-based workload generation tool for constructing diverse and realistic request patterns for modern web applications. ARTS allows us to evaluate CRAFTS' algorithms on a wide range of scenarios. In this thesis, we outline the design and implementation of both ARTS and CRAFTS and evaluate the effectiveness of various prediction algorithms applied to real-world request data and artificial workloads generated by ARTS.
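The core idea — forecast request volume from historical data, then size the fleet to the application's measured throughput — can be sketched as below. The moving-average forecaster and the 20% headroom factor are illustrative stand-ins, not CRAFTS' actual prediction algorithms.

```python
import math

def forecast_requests(history, window=3):
    """Naive moving-average forecast of next-interval request volume
    (a placeholder for a real predictor trained on historical traces)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def instances_needed(predicted_requests, per_instance_throughput, headroom=1.2):
    """Instances required to serve the forecast load, with slack so that a
    small forecast error does not immediately saturate the fleet."""
    return math.ceil(headroom * predicted_requests / per_instance_throughput)
```

With a history of [900, 1000, 1100] requests per interval and instances that each sustain 400 requests per interval, the sketch forecasts 1000 and provisions 3 instances; a sharper predictor slots into `forecast_requests` without changing the scaling decision logic.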
|
466 |
Optimizing the Distributed Hydrology Soil Vegetation Model for Uncertainty Assessment with Serial, Multicore and Distributed Accelerations / Adriance, Andrew 01 May 2018 (links) (PDF)
Hydrology is the study of water, tracking attributes such as its quality and movement. As a tool, hydrology allows researchers to investigate topics such as the impacts of wildfires, logging, and commercial development. With perfect and complete data collection, researchers could answer these questions with complete certainty; however, due to cost and potential sources of error, this is impractical. As such, researchers rely on simulations.
The Distributed Hydrology Soil Vegetation Model (also referred to as DHSVM) is a scientific mathematical model that numerically represents watersheds. Hydrology, as with all fields, continues to produce large amounts of data. As the stores of data grow, the scientific models that process them require occasional improvements to better handle the masses of information.
This paper investigates DHSVM as a serial C program. It implements and analyzes various high-performance computing improvements to the original code base: specifically, compiler optimization, parallel computing with OpenMP, and distributed computing with OpenMPI. DHSVM was also tuned to run many instances on California Polytechnic State University, San Luis Obispo's high-performance computer cluster. These additions speed up the results returned to researchers and improve DHSVM's suitability for uncertainty analysis methods.
This paper was able to improve the performance of DHSVM by a factor of 2 with serial and compiler optimization. In addition, OpenMP provided a noticeable speed-up on commodity hardware that scaled as the hardware improved; the parallel optimization doubled DHSVM's speed again. Finally, it was found that OpenMPI was best used for running multiple instances of DHSVM. All combined, this paper improved the performance of DHSVM by a factor of 4.4 per instance and allowed it to run multiple instances on computing clusters.
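The measured speed-ups above can be put in context with Amdahl's law, which bounds overall speed-up by the fraction of runtime that is actually parallelized. This is a standard back-of-the-envelope check, not an analysis from the paper itself:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup when a fraction p of the original
    runtime is parallelized across n workers and the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)
```

For example, if 80% of DHSVM's runtime were parallelized across 4 cores, the best achievable speed-up would be 2.5x — which is why pairing OpenMP gains with serial and compiler optimization of the remaining 20% matters.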
|
467 |
Power System Reliability Analysis with Distributed Generators / Zhu, Dan 27 May 2003 (has links)
Reliability is a key aspect of power system design and planning. In this research we present a reliability analysis algorithm for large-scale, radially operated (with respect to the substation), reconfigurable electrical distribution systems. The algorithm takes into account equipment power handling constraints and converges in a matter of seconds on systems containing thousands of components. Linked lists of segments are employed to obtain this rapid convergence, and a power flow calculation is used to check the power handling constraints. The application of distributed generators in electrical distribution systems is a new technology; the placement of distributed generation and its effect on reliability is investigated here. Previous reliability calculations have been performed for static load models and inherently assume that system reliability is independent of load. The study presented here evaluates improvement in reliability over a time-varying load curve. Reliability indices for load points and the overall system have been developed, and a new reliability index is proposed that makes it easier to locate areas where reliability needs to be improved. The usefulness of this new index is demonstrated with numerical examples. / Master of Science
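Distribution reliability indices of the kind developed above are typically customer-weighted aggregates over load points. The thesis's new index is not specified in the abstract, so the sketch below shows two standard system-level indices (SAIFI and SAIDI) as a point of reference:

```python
def saifi(failure_rates, customers):
    """System Average Interruption Frequency Index:
    total customer interruptions per year / total customers served.
    failure_rates[i] is interruptions/year at load point i,
    customers[i] the customers connected there."""
    total_customers = sum(customers)
    return sum(l * n for l, n in zip(failure_rates, customers)) / total_customers

def saidi(outage_hours, customers):
    """System Average Interruption Duration Index:
    total customer interruption hours per year / total customers served."""
    total_customers = sum(customers)
    return sum(u * n for u, n in zip(outage_hours, customers)) / total_customers
```

Evaluating such indices over a time-varying load curve, as the study does, amounts to recomputing the per-load-point contributions for each load level rather than once for a static model.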
|
468 |
Distributed Hydrologic Modeling of the Upper Roanoke River Watershed using GIS and NEXRAD / McCormick, Brian Christopher 10 April 2003 (has links)
Precipitation and surface runoff producing mechanisms are inherently spatially variable. Many hydrologic runoff models do not account for this spatial variability and instead use "lumped," or spatially averaged, parameters. Lumped model parameters often must be developed empirically or through optimization rather than being calculated from field measurements or existing data. Recent advances in geographic information systems (GIS), remote sensing (RS), radar measurement of precipitation, and desktop computing have made it easier for the hydrologist to account for the spatial variability of the hydrologic cycle using distributed models, theoretically improving hydrologic model accuracy.
Grid based distributed models assume homogeneity of model parameters within each grid cell, raising the question of optimum grid scale to adequately and efficiently model the process in question. For a grid or raster based hydrologic model, as grid cell size decreases, modeling accuracy typically increases, but data and computational requirements increase as well. There is great interest in determining the optimal grid resolution for hydrologic models as well as the sensitivity of hydrologic model outputs to grid resolution.
This research involves the application of a grid based hydrologic runoff model to the Upper Roanoke River watershed (1480 km²) to investigate the effects of precipitation resolution and grid cell size on modeled peak flow, time to peak, and runoff volume. The gridded NRCS curve number (CN) rainfall excess determination and ModClark runoff transformation of HEC-HMS are used in this modeling study. Model results are evaluated against observed streamflow at seven USGS stream gage locations throughout the watershed.
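The NRCS curve-number rainfall-excess determination used in each grid cell follows the standard CN relations: potential retention S = 1000/CN - 10 (inches), initial abstraction Ia = 0.2 S, and runoff Q = (P - Ia)² / (P - Ia + S) for P > Ia. A minimal per-cell sketch (standard NRCS equations, not HEC-HMS's implementation):

```python
def scs_runoff_depth(precip_in, curve_number):
    """NRCS (SCS) curve-number rainfall-excess depth in inches for a single
    grid cell, given storm precipitation P (inches) and the cell's CN."""
    s = 1000.0 / curve_number - 10.0  # potential maximum retention, inches
    ia = 0.2 * s                      # initial abstraction before runoff begins
    if precip_in <= ia:
        return 0.0                    # all rainfall abstracted; no runoff
    return (precip_in - ia) ** 2 / (precip_in - ia + s)
```

For example, 4 inches of rain on a CN = 80 cell (S = 2.5 in, Ia = 0.5 in) yields about 2.04 inches of runoff; the steep dependence of Q on CN is consistent with the CN sensitivity reported in the results below.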
Runoff model inputs and parameters are developed from public domain digital datasets using commonly available GIS tools and public domain modeling software. Watersheds and stream networks are delineated from a USGS DEM using GIS tools. Topographic parameters describing these watersheds and stream channel networks are also derived from the GIS. A gridded representation of the NRCS CN is calculated from the soil survey geographic database of the NRCS and national land cover dataset of the USGS. Spatially distributed precipitation depths derived from WSR-88D next generation radar (NEXRAD) products are used as precipitation inputs. Archives of NEXRAD Stage III data are decoded, spatially and temporally registered, and verified against archived IFLOWS rain gage data. Stage III data are systematically degraded to coarser resolutions to examine model sensitivity to gridded rainfall resolution.
The effects of precipitation resolution and grid cell size on model outputs are examined. The performance of the grid based distributed model is compared to a similarly specified and parameterized lumped watershed model. The applicability of public domain digital datasets to hydrologic modeling is also investigated.
The HEC-HMS gridded SCS CN rainfall excess calculation and ModClark runoff transformation, as applied to the Upper Roanoke watershed and for the storm events chosen in this study, do not exhibit significant sensitivity to precipitation resolution, grid scale, or spatial distribution of parameters and inputs. Expected trends in peak flow, time to peak, and overall runoff volume are observed with changes in precipitation resolution; however, the changes in these outputs are small compared with their magnitudes and with the discrepancies between modeled and observed values. Significant sensitivity of runoff volume, and consequently peak flow, to CN choices and antecedent moisture condition (AMC) was observed. The changes in model outputs between the distributed and lumped versions of the model were also small compared to the magnitudes of the model outputs. / Master of Science
|
469 |
Optical Chirped Pulse Generation and its Applications for Distributed Optical Fiber Sensing / Wang, Yuan 08 February 2023 (has links)
Distributed optical fiber sensors offer unprecedented advantages, the most remarkable being the ability to continuously measure physical or chemical parameters along an entire optical fiber attached to a device, structure, or system. Among recently investigated distributed optical fiber sensors, phase-sensitive optical time domain reflectometry (φ-OTDR), Brillouin optical time domain analysis (BOTDA), and Brillouin dynamic grating-optical time domain reflectometry (BDG-OTDR) have attracted tremendous attention for their quantitative measurement capability, high sensitivity, and absolute measurement over long sensing distances. However, limitations in static measurement range, acquisition rate, laser frequency drifting noise, and spatial resolution hinder their performance in practical applications. This thesis pays particular attention to these three distributed sensing techniques, exploring the fundamental limitations of their theoretical models and improving their sensing performance. Before the novel sensing schemes are presented, an introduction to distributed fiber optic sensing is given, covering the three main light scattering mechanisms in optical fiber, recent advancements in distributed sensing, and key parameters of Rayleigh scattering- and Brillouin scattering-based sensing systems. After that, a theoretical analysis of large-chirping-rate pulse generation and theoretical models for using chirped pulses as interrogation signals in φ-OTDR, BOTDA, and BDG-OTDR systems are presented. In the experimental implementations, sensing performance is improved in several aspects. By using a random fiber grating array as the distributed sensor, high-precision distributed time delay measurement in a chirped pulse φ-OTDR system is achieved thanks to the enhanced inhomogeneity and reflectivity.
In addition, a simple and effective method that utilizes a reference random fiber grating to monitor the laser frequency drifting noise is demonstrated. Dynamic strain measurement with a standard deviation of 66 nε over a vibration amplitude of 30 με is achieved. To address the limited static measurement range, a multi-frequency database demodulation (MFDD) method is proposed to relieve the time domain trace distortion induced by large strain variations by tuning the laser's initial frequency; the maximum measurable strain variation of about 12.5 με represents a threefold improvement. Using the optimized chirped pulse φ-OTDR system, a practical application monitoring the impact load response of an I-steel beam is demonstrated, in which the static and distributed strain variation is successfully reconstructed. To obtain an enhanced static measurement range without a complicated database acquisition process, a photonic approach for generating optical pulses with low frequency drifting noise and arbitrary, large frequency chirping rates (FCR), based on the Kerr effect in nonlinear optical fiber, is theoretically analyzed and experimentally demonstrated using both a fixed-frequency pump and a chirped pump. Due to the Kerr-effect-induced sinusoidal phase modulation in the nonlinear fiber, high-order Kerr pulses with large chirping rates are generated, significantly improving the static measurement range of the higher-order Kerr pulses. Chirped pulse BOTDA based on non-uniform fiber is also analyzed, showing a high acquisition rate limited only by the sensor length and averaging times, because the relative Brillouin frequency shift (BFS) changes are directly extracted from the local time delays between adjacent Brillouin traces in two single-shot measurements, without a frequency sweep process. A BFS measurement resolution of 0.42 MHz with 4.5 m spatial resolution is demonstrated over a 5 km non-uniform fiber.
A hybrid simultaneous temperature/strain sensing system is also demonstrated, showing a strain uncertainty of 4.3 με and a temperature uncertainty of 0.32 °C in a 5 km non-uniform fiber. In addition, the chirped pulse is utilized as a probe signal for Brillouin dynamic grating (BDG) detection along polarization-maintaining (PM) fiber for distributed birefringence variation sensing. The strict phase-matching condition allows only part of the frequency components within the chirped probe pulse to be reflected by the BDG, giving an adjustable spatial resolution free of the photon lifetime limitation; the spatial resolution is determined by the frequency chirping rate of the probe pulse.
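A linearly chirped pulse of the kind used throughout this work has instantaneous frequency f(t) = f0 + FCR·t, so its phase is φ(t) = 2π(f0·t + ½·FCR·t²) and the total frequency span swept over the pulse is FCR times the pulse duration. A minimal numerical sketch of such a waveform (illustrative parameters, not the thesis's experimental values):

```python
import math

def chirped_pulse(f0_hz, chirp_rate_hz_per_s, duration_s, sample_rate_hz):
    """Sample a linearly chirped pulse with instantaneous frequency
    f(t) = f0 + FCR*t, i.e. phase(t) = 2*pi*(f0*t + 0.5*FCR*t**2)."""
    n = int(duration_s * sample_rate_hz)
    samples = []
    for i in range(n):
        t = i / sample_rate_hz
        phase = 2 * math.pi * (f0_hz * t + 0.5 * chirp_rate_hz_per_s * t * t)
        samples.append(math.cos(phase))
    return samples

def total_sweep_hz(chirp_rate_hz_per_s, duration_s):
    """Frequency span swept over the pulse duration: FCR * T."""
    return chirp_rate_hz_per_s * duration_s
```

For instance, an FCR of 5 PHz/s over a 100 ns pulse sweeps 500 MHz; raising the FCR widens the sweep, which is what enlarges the static measurement range and, in the BDG scheme, sets the spatial resolution.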
|
470 |
CRIU-RTX: Remote Thread eXecution using Checkpoint/Restore in Userspace / Noor Mohamed, Mohamed Husain 21 July 2023 (has links)
Scaling up application performance on single high-end machines is increasingly becoming difficult due to scalability challenges of processor interconnects, cache coherence protocols, and memory bandwidth. Significant prior work has addressed this problem by scaling out application threads across multiple nodes to exploit resources outside the single-machine boundary. Prior works have also leveraged heterogeneous instruction set architecture (ISA) systems to improve application performance as well as energy efficiency, a major cost driver in datacenters, by augmenting high-end servers with power-efficient embedded boards. Existing works, however, suffer from deployability challenges due to dependencies on the operating system or programming models that require non-trivial application modifications. We introduce CRIU-RTX, a userspace framework to scale out multi-threaded applications across multiple nodes. Integrated with HetMigrate, a prior work on migrating processes across heterogeneous-ISA systems, CRIU-RTX can suspend a subset of threads in a process and resume their execution on different nodes, including, but not limited to, heterogeneous-ISA nodes. CRIU-RTX implements distributed shared memory in userspace, thereby allowing application threads to access distributed memory transparently without any operating system dependency. Our experimental evaluations show 21% to 43% performance gains while scaling out applications across x86-64 servers, and energy efficiency gains of up to 18% while scaling out across a cluster of x86-64 servers and ARM64 embedded boards. Since CRIU-RTX does not depend on operating system modifications, it can be easily deployed on a diverse set of machines, including, but not limited to, ISA-different machines running the stock Linux operating system. / Master of Science / Commonly referred to as "Moore's Law", Gordon Moore's prediction was that the number of transistors on a chip would double every two years.
However, this law no longer holds, leading to a shift in computer research and development. To meet the increasing demand for faster and cheaper servers, researchers began exploring alternative computer designs. Data centers have started adopting servers with diverse architectures to enhance the cost-to-performance ratio, resulting in heterogeneous environments. Distributed execution refers to running computational tasks or software across multiple interconnected systems or nodes: instead of relying on a single machine or processor, the workload is distributed among a network of computers, allowing for parallel processing and improved performance. Prior works in this direction have had difficulty gaining adoption due to customized hardware or operating system requirements. This thesis introduces CRIU-RTX, a userspace framework to scale out application threads without operating system dependency. We implemented a distributed shared memory system in userspace to allow application threads running in scaled-out execution to access distributed memory as if they were running on the same machine. Our evaluations of CRIU-RTX show significant improvements in performance and energy efficiency.
|