501 |
Multi-Material Fiber Fabrication and Applications in Distributed Sensing. Yu, Li. 25 January 2019.
Distributed sensing has been an attractive alternative to traditional single-point sensing technology when measurement at multiple locations is required. Traditional distributed sensing methods based on silica optical fiber and electric coaxial cables have some limitations for specific applications, such as smart textiles and wearable sensors. By adopting the fiber thermal drawing technique, we have designed and fabricated multi-material, electrode-embedded polymer fibers with distributed sensing capabilities. Polymers sensitive to temperature and pressure have been incorporated into the fiber structure, and thin metal electrodes placed inside the fiber by convergence drawing have enabled detection of local impedance changes with electrical reflectometry. We have demonstrated that these fibers can detect temperature and pressure changes with high spatial resolution. We have also explored the possibility of using polymer optical fiber in a Raman-scattering-based distributed temperature sensing system. Stokes and anti-Stokes signals of a PMMA fiber illuminated by a 532 nm pulsed laser were recorded, and their ratio was used to indicate local temperature change. We have also developed a unique way to fabricate porous polymer fibers by thermally drawing polymer materials with controlled water content. The porous fibers were loaded with a fluorescent dye, and its release in tissue phantoms and murine tumors was observed. The work has broadened the scope of multi-material, multi-functional fiber and may shed light on the development of novel smart textile devices. / PHD / In recent years, smart textiles and wearable gadgets have already changed the way we live. There has been increasing industrial interest in developing novel flexible, stretchable devices that can interact with humans and the environment. The thermal drawing technique, originally invented for manufacturing telecommunication optical fiber, has been used by researchers to fabricate fibers with additional functionality. In this work, we report progress made on the fabrication of multi-material fiber. Soft polymer fibers capable of measuring temperature and pressure were designed and made by the thermal drawing technique. Submillimeter fibers with thin copper electrodes have shown potential to be readily embedded in a smart fabric to provide 1D sensing along a single direction or woven into a 2D pattern for area monitoring. We have also explored another temperature measurement scheme using polymer optical fibers with a pulsed laser; compared with the electronic fibers, it is less susceptible to electrical noise and more robust. Lastly, we have shown a unique way to generate porosity in thermally drawn polymer fibers. The elongated pores in the fibers come from water escaping the fiber during the fabrication process. The three aspects of the project expand the scope of multi-material, multi-functional fiber and can shed light on the future development of electronic textile devices.
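The temperature readout described above relies on the anti-Stokes to Stokes intensity ratio. The sketch below illustrates the standard ratio-to-temperature relation used in Raman distributed temperature sensing; the Raman shift, reference temperature, and example ratio are illustrative assumptions rather than values from the thesis.

```python
import numpy as np

# Standard Raman DTS relation (a sketch, not the thesis's calibration):
#   R(T) = I_antiStokes / I_Stokes = (lambda_S / lambda_AS)**4 * exp(-h*c*dnu / (k_B*T))
# Temperature is usually recovered relative to a reference fiber section:
#   1/T = 1/T_ref - (k_B / (h*c*dnu)) * ln(R / R_ref)

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k_B = 1.380649e-23   # Boltzmann constant, J/K
dnu = 4.4e4          # assumed Raman shift (~440 cm^-1 in m^-1); a silica value, PMMA differs

def temperature_from_ratio(R, R_ref, T_ref=293.15):
    """Infer temperature (K) from the measured anti-Stokes/Stokes ratio R,
    given the ratio R_ref observed at a known reference temperature T_ref."""
    inv_T = 1.0 / T_ref - (k_B / (h * c * dnu)) * np.log(R / R_ref)
    return 1.0 / inv_T

# Example: a 2% increase in the ratio relative to the reference section
print(temperature_from_ratio(R=1.02, R_ref=1.00))  # about 296 K for this example
```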
502 |
A Novel Approach to Communal Rainwater Harvesting for Single-Family Housing: A Study of Tank Size, Reliability, and Costs. Semaan, Marie. 09 April 2020.
An emerging field in rainwater harvesting (RWH) is the application of communal rainwater harvesting systems. This system's main advantage compared to individual RWH is the centralization of water treatment, which some users of individual RWH find difficult to maintain. Despite alleviating one concern, this communal approach does not increase the RWH system's (RWHS) reliability nor necessarily satisfy all water demands, and hence is not a major improvement in terms of system performance.
This research tackles this challenge with a novel approach to communal RWH for single-family houses. Instead of the traditional communal approach to RWH, which uses only one storage location, we propose connecting multiple single-family homes' RWHSs to a communal backup tank, i.e., capturing overflow from multiple RWHSs, which will increase reliability and the fraction of water demand met in a way that significantly improves the current performance of communal RWH. The proposed system will potentially maximize the availability of potable water while limiting spillage and overflow.
We simulated the performance of the system in two cities, Houston and Jacksonville, for multiple private and communal storage combinations. Results show that volumetric reliability (VR) gains of 1.5% to 6% and 1.5% to 4% can be achieved for seven to ten and six to seven connected households, respectively, in Houston and Jacksonville if the emphasis is on VR. In terms of total storage capacity, the system achieves higher VR gains at lower total storage capacities in Houston, while it achieves higher VR gains at higher total storage capacities in Jacksonville.
With regard to the total cost of ownership per household for the individual system and for the communal storage system, a lifecycle cost analysis was performed using the Net Present Value (NPV) method with an interest rate of 7% over 30 years. The NPV of total system costs per household in the city of Houston is lowest for nine to ten connected households and, for seven and eight connected households, is comparable to the base case of a rainwater harvesting system that is not connected to a communal tank.
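As a rough illustration of the lifecycle costing described above, the sketch below computes a net present value of per-household ownership costs at a 7% rate over 30 years; the capital and annual cost figures are placeholders, not numbers from the study.

```python
def npv_cost(capital, annual_costs, rate=0.07):
    """Net present value of ownership costs: up-front capital plus
    discounted future annual costs (operation, maintenance, replacement)."""
    return capital + sum(c / (1 + rate) ** t
                         for t, c in enumerate(annual_costs, start=1))

# Hypothetical per-household figures (placeholders only):
capital = 3000.0        # tank, pump, plumbing
annual = [150.0] * 30   # O&M per year over a 30-year horizon
print(round(npv_cost(capital, annual), 2))  # about 4861 with these placeholders
```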
This communal system is more resilient and can be a worthy addition to water and stormwater infrastructures, especially in the face of climate change. / Doctor of Philosophy
503 |
Demonstration of Vulnerabilities in Globally Distributed Additive Manufacturing. Norwood, Charles Ellis. 24 June 2020.
Globally distributed additive manufacturing is a relatively new frontier in the field of product lifecycle management. Designers are independent of additive manufacturing services and are often thousands of miles apart. Manufacturing data must be transmitted electronically from designer to manufacturer to realize the benefits of such a system. Unalterable blockchain ledgers can record transactions between customers, designers, and manufacturers, allowing each to trust the other two without needing to be familiar with one another. Although trust can be established, malicious printers or customers still have an incentive to produce unauthorized or pirated parts. To prevent this, machine instructions are encrypted and electronically transmitted to the printing service, where an authorized printer decrypts the data and prints an approved number of parts or products. The encrypted data may include G-Code machine instructions, which contain every motion of every motor on a 3D printer. Once these instructions are decrypted, motor drivers send control signals along wires to the printer's stepper motors. The transmission along these wires is no longer encrypted. If the signals along the wires are read, the motion of the motor can be analyzed, and the G-Code can be reverse engineered.
This thesis demonstrates such a threat through a simulated attack on a G-Code controlled device. A computer running a numeric controller and G-Code interpreter is connected to standard stepper motors. As G-Code commands are delivered, the magnetic field generated by the transmitted signals is read by a Hall effect sensor. The rapid oscillation of the magnetic field corresponds to the stepper motor control signals which rhythmically move the motor. The oscillating signals are recorded by a high-speed analog-to-digital converter attached to a second computer. The two systems are completely electronically isolated.
The recorded signals are saved as a series of voltage samples with matching time stamps. The voltage data is processed through a Matlab script which determines the direction the motor spins and the number of steps the motor takes. With these two pieces of data, the G-Code instructions which produced the motion can be recreated. The demonstration shows the exposure of previously encrypted data, allowing for the unauthorized production of parts and revealing a security flaw in a distributed additive manufacturing environment. / Master of Science / Developed at the end of the 20th century, additive manufacturing, sometimes known as 3D printing, is a relatively new method for the production of physical products. Typically, these have been limited to plastics and a small number of metals. Recently, advances in additive manufacturing technology have allowed an increasing number of industrial and consumer products to be produced on demand. A worldwide industry of additive manufacturing has opened up in which product designers and 3D printer operators can work together to deliver products to customers faster and more efficiently. Designers and printers may be on opposite sides of the world, but a customer can go to a local printer and order a part designed by an engineer thousands of miles away. The customer receives the part in as little time as it takes to physically produce the object. To achieve this, the printer needs manufacturing information such as object dimensions, material parameters, and machine settings from the designer. The designer risks unauthorized use and the loss of intellectual property if the manufacturing information is exposed.
Legal protections on intellectual property only go so far, especially across borders. Technical solutions can help protect valuable IP. In such an industry, essential data may be digitally encrypted for secure transmission around the world. This information may only be read by authorized printers and printing services and is never saved or read by an outside person or computer. The control computers which read the data also control the physical operation of the printer. Most commonly, electric motors are used to move the machine to produce the physical object. These are most often stepper motors, which are connected by wires to the controlling computers and move in a predictable, rhythmic fashion. The signals transmitted through the wires generate a magnetic field, which can be detected and recorded. The pattern of the magnetic field matches the steps of the motors. Each step can be counted, and the path of the motors can be precisely traced. The path reveals the shape of the object and, in effect, the previously encrypted manufacturing instructions used by the printer. This thesis demonstrates the tracking of motors and the reconstruction of previously encrypted machine code in a simulated 3D printing environment, revealing a potential security flaw in a distributed manufacturing system.
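The step-counting reconstruction described in this entry can be illustrated with a simplified sketch: given a digitized two-channel step trace, count steps and direction, then rebuild a motion command. The quadrature-style decoding, the steps-per-millimeter value, and the synthetic trace below are assumptions for illustration; the actual attack works on Hall-effect voltage recordings processed in Matlab.

```python
STEPS_PER_MM = 80  # assumed machine calibration, not from the thesis

def count_steps(a, b):
    """Decode a two-channel quadrature-like step trace (lists of 0/1 samples).
    Returns the net step count: +1 per step in one direction, -1 in the other."""
    order = [(0, 0), (1, 0), (1, 1), (0, 1)]   # one electrical cycle of coil states
    pos = 0
    prev = (a[0], b[0])
    for s in zip(a[1:], b[1:]):
        if s == prev:
            continue
        di = (order.index(s) - order.index(prev)) % 4
        pos += 1 if di == 1 else -1 if di == 3 else 0  # ignore illegal 2-state jumps
        prev = s
    return pos

def to_gcode(net_steps_x):
    """Rebuild a single-axis move as a G-code command from the step count."""
    return f"G1 X{net_steps_x / STEPS_PER_MM:.3f}"

# Example: a short synthetic trace stepping forward through 3 full cycles (12 steps)
a = [0, 1, 1, 0] * 3 + [0]
b = [0, 0, 1, 1] * 3 + [0]
print(to_gcode(count_steps(a, b)))  # 12 steps at 80 steps/mm -> "G1 X0.150"
```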
504 |
Towards a Resource Efficient Framework for Distributed Deep Learning Applications. Han, Jingoo. 24 August 2022.
Distributed deep learning has achieved tremendous success in solving scientific problems in research and discovery over the past years. Deep learning training is quite challenging because it requires training on large-scale, massive datasets, especially with graphics processing units (GPUs) in the latest high-performance computing (HPC) supercomputing systems. HPC architectures bring different performance trends in training throughput compared to those in existing studies. Multiple GPUs and high-speed interconnects are used for distributed deep learning on HPC systems. Extant distributed deep learning systems are designed for non-HPC systems without considering efficiency, leading to under-utilization of expensive HPC hardware. In addition, increasing resource heterogeneity has a negative effect on resource efficiency in distributed deep learning methods, including federated learning. Thus, it is important to address the increasing demand for both high performance and high resource efficiency in distributed deep learning systems, including the latest HPC systems and federated learning systems.
In this dissertation, we explore and design novel methods and frameworks to improve the resource efficiency of distributed deep learning training. We address the following five important topics: performance analysis of deep learning on supercomputers, GPU-aware deep learning job scheduling, topology-aware virtual GPU training, heterogeneity-aware adaptive scheduling, and a token-based incentive algorithm.
In the first part (Chapter 3), we analyze performance trends of distributed deep learning on the latest HPC systems, such as the Summitdev supercomputer at Oak Ridge National Laboratory. We provide insights by conducting a comprehensive performance study on how deep learning workloads affect the performance of HPC systems with large-scale parallel processing capabilities. In the second part (Chapter 4), we design and develop MARBLE, a novel deep learning job scheduler that accounts for the non-linear scalability of GPUs within a single node and improves GPU utilization by sharing GPUs among multiple deep learning training workloads. The third part of this dissertation (Chapter 5) proposes TOPAZ, a topology-aware virtual GPU training system specifically designed for distributed deep learning on recent HPC systems. In the fourth part (Chapter 6), we explore a holistic federated learning scheduling approach that employs a heterogeneity-aware adaptive selection method, coupled with resource usage profiling and accuracy monitoring, to improve both resource efficiency and accuracy. In the fifth part of this dissertation (Chapter 7), we focus on providing incentives to participants according to their contribution toward a high-performing final federated model, with tokens used as a means of paying for the services of participants and the training infrastructure. / Doctor of Philosophy / Distributed deep learning is widely used for solving critical scientific problems with massive datasets. However, to accelerate scientific discovery, resource efficiency is also important for deployment on real-world systems, such as high-performance computing (HPC) systems. Deployment of existing deep learning applications on these distributed systems may lead to underutilization of HPC hardware resources. In addition, extreme resource heterogeneity has negative effects on distributed deep learning training. However, much of the prior work has not focused on the specific challenges in distributed deep learning, including HPC systems and heterogeneous federated systems, in terms of optimizing resource utilization. This dissertation addresses the challenges in improving the resource efficiency of distributed deep learning applications through performance analysis of deep learning on supercomputers, GPU-aware deep learning job scheduling, topology-aware virtual GPU training, and heterogeneity-aware adaptive federated learning scheduling and incentive algorithms.
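As a generic illustration of the heterogeneity-aware selection theme above (not the dissertation's actual scheduling algorithm), a profiling-driven client selection heuristic for federated learning might look like the following sketch; the profile fields and weighting are assumptions.

```python
def select_clients(profiles, k, alpha=0.5):
    """Rank clients by a blend of profiled training throughput (samples/s)
    and recent contribution to model accuracy, then pick the top k.
    alpha trades off speed (alpha=1.0) against contribution (alpha=0.0)."""
    max_tp = max(p["throughput"] for p in profiles.values())
    max_ac = max(p["acc_gain"] for p in profiles.values()) or 1e-9
    scores = {cid: alpha * p["throughput"] / max_tp
                   + (1 - alpha) * p["acc_gain"] / max_ac
              for cid, p in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical profiling data for four heterogeneous clients
profiles = {
    "phone":   {"throughput": 20,  "acc_gain": 0.004},
    "laptop":  {"throughput": 120, "acc_gain": 0.002},
    "edge":    {"throughput": 60,  "acc_gain": 0.006},
    "desktop": {"throughput": 200, "acc_gain": 0.001},
}
print(select_clients(profiles, k=2))
```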
505 |
Four-Craft Virtual Coulomb Structure Analysis for 1 to 3 dimensional Geometries. Vasavada, Harsh Amit. 25 April 2007.
Coulomb propulsion has been proposed for spacecraft cluster applications with separation distances on the order of dozens of meters. This thesis presents an investigation of analytic charge solutions for planar and three-dimensional four-satellite formations. The solutions are formulated in terms of the formation geometry. In contrast to two- and three-spacecraft Coulomb formations, a four-spacecraft formation has additional constraints that need to be satisfied for the individual charges on the spacecraft to be unique and real. A spacecraft must not only satisfy the previously developed inequality constraints to yield a real charge solution, but it must also satisfy three additional equality constraints to ensure the spacecraft charge is unique. Further, a method is presented to reduce the number of equality constraints arising due to the dynamics of a four-spacecraft formation. Formation geometries are explored to determine the feasibility of orienting a square formation arbitrarily in any given plane. The unique and real spacecraft charges are determined as functions of the orientation of the square formation in a given principal orbit plane. For a three-dimensional tetrahedron formation, the charge products obtained are a unique set of solutions. The full three-dimensional rotation of a tetrahedron is reduced to a two-angle rotation for simpler analysis. The number of equality constraints for unique spacecraft charges cannot be reduced for a three-dimensional formation. The two-angle rotation results are presented for different values of the third angle. The thesis also presents the setup for a collinear four-craft problem. The solution for the collinear formation is not developed; the discussion of collinear formations serves as an open question on how to determine analytic solutions for systems with a null-space dimension greater than one. The thesis also presents a numerical tool for determining potential shapes of a static Coulomb formation in support of the analytical solutions. The numerical strategy presented here uses a distributed Genetic Algorithm (GA) as an optimization tool. The GA offers several advantages over traditional gradient-based optimization methods. Distributing the work of the GA over several processors reduces the computation time needed to arrive at a solution. The thesis discusses the implementation of a distributed GA used in the analysis of a static Coulomb formation, addresses the challenges of implementing a distributed GA on a computing cluster, and presents candidate solutions. / Master of Science
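For background on the charge solutions discussed above, the electrostatic model commonly used in Coulomb formation analysis is sketched below in its standard form; it is quoted from the general literature, not from the thesis.

```latex
% Debye-shielded potential between craft i and j with charges q_i, q_j at
% separation L_ij, and the unshielded force magnitude often used when L_ij is
% small compared with the plasma Debye length \lambda_d (standard background):
V_{ij} = k_c\,\frac{q_i q_j}{L_{ij}}\,e^{-L_{ij}/\lambda_d},
\qquad
F_{ij} = k_c\,\frac{q_i q_j}{L_{ij}^{2}},
\qquad
k_c \approx 8.99\times10^{9}\ \mathrm{N\,m^{2}/C^{2}}.
% Static equilibrium conditions are expressed in the charge products
% Q_{ij} = q_i q_j, which is why the analysis solves for charge products and
% then checks whether real, unique individual charges q_i can be recovered.
```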
506 |
Distributed Ground Station Network for CubeSat Communications. Leffke, Zachary James. 27 January 2014.
In the last decade, the world has seen a steadily increasing number of Cube Satellites deployed to Low Earth Orbit. Traditionally, these cubesats rely on Amateur Radio communications technology that is proven to work from space. However, as data volumes increase, the existing Amateur Radio protocols, combined with the restrictions on use of the Amateur Radio spectrum and the trend of building one control station per cubesat, result in a bottleneck effect whereby existing communications methods are no longer sufficient to support the increasing data volumes of the spacecraft.
This Master's thesis explores the concept of deploying a network of distributed ground station receiver nodes for the purpose of increasing access time to the spacecraft and thereby increasing the potential amount of data that can be transferred from orbit to the ground. The current trends in cubesat communications will be analyzed, and an argument will be made in favor of transitioning to more modern digital communications approaches for on-orbit missions. Finally, a candidate ground station receiver node design is presented as a possible design that could be used to deploy such a network. / Master of Science
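The access-time argument above can be made concrete with a small sketch: the combined contact time of a ground station network is the union of the pass windows seen by its individual stations. The pass windows below are invented placeholders, not measured contact data.

```python
def total_access(windows):
    """Union of pass windows (start, end) in minutes-of-day across all
    ground stations; returns total minutes of contact with the spacecraft."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))  # overlap: extend
        else:
            merged.append((start, end))
    return sum(end - start for start, end in merged)

# Hypothetical pass windows for one spacecraft (minutes of day)
station_a = [(10, 18), (105, 112), (200, 208)]
station_b = [(60, 68), (203, 211), (300, 309)]
print(total_access(station_a))               # single-station contact time
print(total_access(station_a + station_b))   # networked contact time
```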
507 |
Distributed Monitoring System for Mobile Ad Hoc Networks: Design and Implementation. Kazemi, Hanif S. 25 May 2007.
Mobile Ad hoc NETworks (MANETs) are networks in which the participating nodes can move freely without having to worry about maintaining a direct connection to any particular fixed access point. In a MANET, nodes collaborate with each other to form the network and as long as a node is in contact with any other member of the network, it—at least in theory—is part of the network and can communicate with all other nodes.
An important function of network management is to observe current network conditions: at the node level, this may mean keeping track of arriving and departing traffic load; at the network level, the system must monitor active routes and changes in network topology.
In this research, we present the design and implementation of a distributed network monitoring system for MANETs. Our system is completely distributed, generates no additional traffic on the network, and produces a dynamic picture of network-level and node-level information on a graphical user interface.
In our proposed scheme, multiple monitoring nodes collaborate to achieve a reasonably accurate snapshot of the network conditions. These monitoring nodes passively sniff network traffic and gather information from the network to construct partial network views. They then transmit their findings to a management unit, where these local views are put together to produce a comprehensive picture of the network. The communication between the monitoring nodes and the management unit takes place over an out-of-band communication link. Therefore, our monitoring solution does not depend on the MANET to function and is hence robust to network partitioning, link breaks, node failures, and node misbehavior in the monitored MANET.
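A minimal sketch of the view-merging step described above: each monitoring node reports a partial set of observed links and per-node traffic estimates, and the management unit unions them into one network picture. The data shapes here are assumptions for illustration, not the thesis's actual message format.

```python
from collections import defaultdict

def merge_views(partial_views):
    """Combine partial views from monitoring nodes into a global snapshot.
    Each view is {"links": {(a, b), ...}, "traffic": {node: bytes_seen}}."""
    links = set()
    traffic = defaultdict(int)
    for view in partial_views:
        # store links undirected so (a, b) and (b, a) collapse to one edge
        links.update(tuple(sorted(edge)) for edge in view["links"])
        for node, nbytes in view["traffic"].items():
            traffic[node] = max(traffic[node], nbytes)  # keep the best estimate seen
    return {"links": links, "traffic": dict(traffic)}

# Two monitors with overlapping coverage (hypothetical observations)
v1 = {"links": {("A", "B"), ("B", "C")}, "traffic": {"A": 1200, "B": 4500}}
v2 = {"links": {("C", "B"), ("C", "D")}, "traffic": {"B": 4700, "C": 800}}
print(merge_views([v1, v2]))
```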
Our solution provides a snapshot of the network topology that includes information about node-level behavior ratings and traffic activity.
The information provided by our monitoring system can be used for network management as well as for security assessment, including anomaly detection. Information regarding individual nodes' behavior can be used for detecting selfishness in the network. Also, an approximation of arriving and departing traffic levels at each node is important in the context of quality of service, load balancing, and congestion control. Furthermore, the network topology picture can provide valuable information to network management for detecting preferred routes, discovering network partitioning, and detecting faults.
We developed a proof-of-concept implementation of our system, which works with the Optimized Link State Routing (OLSR) protocol. Through experimental studies with MANETs of up to 10 nodes, we were able to determine the feasibility and workability of our system. The scheme proved to be robust with respect to mobility, rapid changes in the network topology, and node connectivity. Throughout our experiments, we observed that our system replicated changes in the network on the GUI with less than two seconds of delay. Also, when deployed in a high-traffic environment with multiple TCP and UDP flows throughout the network, the system was able to report the traffic load on each node accurately and consistently.
On average, CPU consumption on monitoring nodes was about 3.5%, and the GUI never took up more than 4% of the processing power (general-purpose laptop computers were used throughout the experiments). Also, the overall storage capacity needed for archiving the information files was estimated at 1 MB for monitoring a 10-node MANET for 30 minutes.
The unobtrusive and distributed nature of our proposed approach helps the system adapt to the constantly changing nature of MANETs and provide valuable network management, security assessment, and traffic analysis services while requiring only modest processing and storage resources. The system is capable of quickly responding to changes in the network and is non-intrusive, generating no additional traffic on the MANET it monitors. / Master of Science
508 |
LIDS: An Extended LSTM Based Web Intrusion Detection System With Active and Distributed Learning. Sagayam, Arul Thileeban. 24 May 2021.
Intrusion detection systems are an integral part of web application security. As Internet use continues to increase, the demand for fast, accurate intrusion detection systems has grown. Various IDSs, such as Snort, Zeek, SolarWinds SEM, and Sleuth9, detect malicious intent based on existing patterns of attack. While these systems are widely deployed, there are limitations with their approach, and anomaly-based IDSs that learn baseline behavior and trigger on deviations were developed to address their shortcomings. Existing anomaly-based IDSs have limitations that are typical of any machine learning system, including high false-positive rates, a lack of clear infrastructure for deployment, the requirement for data to be centralized, and an inability to add modules tailored to specific organizational threats. To address these shortcomings, our work proposes a system that is distributed in nature, can actively learn, and uses experts to improve accuracy. Our results indicate that the integrated system can operate independently as a holistic system while maintaining an accuracy of 99.03%, a false-positive rate of 0.5%, and a processing speed of 160,000 packets per second on an average system. / Master of Science / Intrusion detection systems are an integral part of web application security. The task of an intrusion detection system is to identify attacks on web applications. As Internet use continues to increase, the demand for fast, accurate intrusion detection systems has grown. Various IDSs, such as Snort, Zeek, SolarWinds SEM, and Sleuth9, detect malicious intent based on existing attack patterns. While these systems are widely deployed, there are limitations with their approach, and anomaly-based IDSs that learn a system's baseline behavior and trigger on deviations were developed to address their shortcomings. Existing anomaly-based IDSs have limitations that are typical of any machine learning system, including high false-positive rates, a lack of clear infrastructure for deployment, the requirement for data to be centralized, and an inability to add modules tailored to specific organizational threats. To address these shortcomings, our work proposes a system that is distributed in nature, can actively learn, and uses experts to improve accuracy. Our results indicate that the integrated system can operate independently as a holistic system while maintaining an accuracy of 99.03%, a false-positive rate of 0.5%, and a processing speed of 160,000 packets per second on an average system.
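As a generic illustration of the kind of LSTM-based classification underlying the system described above (not the thesis's actual architecture, features, or training setup), a minimal request classifier might look like the following PyTorch sketch operating on byte-tokenized HTTP request lines.

```python
import torch
import torch.nn as nn

class RequestLSTM(nn.Module):
    """Toy binary classifier: benign vs. malicious request token sequences."""
    def __init__(self, vocab_size=256, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):              # token_ids: (batch, seq_len) int64
        x = self.embed(token_ids)
        _, (h_n, _) = self.lstm(x)              # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

# Byte-level tokenization of a request line (an assumption for illustration)
req = "GET /index.php?id=1' OR '1'='1 HTTP/1.1"
tokens = torch.tensor([[ord(c) % 256 for c in req]])
model = RequestLSTM()
print(model(tokens))  # untrained score in (0, 1); thresholded in practice
```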
509 |
Voltage Unbalance-Cognizant Optimization of Distribution Grids. Subramonia Pillai, Mathirush. 26 January 2023.
The integration of distributed generators (DGs) into the distribution grid has exacerbated voltage unbalance issues, leading to greater risks of reduced equipment lifetime, equipment damage, and increased ohmic losses. Most approaches to regulating voltage in distribution systems focus only on voltage magnitude, neglect phasor discrepancies, and do little to remedy voltage unbalance. To combat this, a novel Optimal Power Flow (OPF) formulation is designed to help operate these resources in a manner that curtails voltage unbalance using the reactive power compensation capabilities of inverters. The OPF was run for a wide variety of loading conditions on a pair of systems using MATLAB and was shown to improve the voltage profile of the system in addition to minimizing losses in most cases. However, it is noted that the OPF loses exactness in highly stressed conditions and is unable to provide meaningful solutions. / Master of Science / With the power grid getting greener and smarter by the day, a slew of new challenges must be overcome. Distributed sources of energy, like solar panels and batteries, are being added to the grid right from the household level. While they are desirable for reducing our need for traditional sources of energy, the addition of these resources has been shown to cause issues in the quality of the power grid. This is particularly observed at the low-voltage, domestic part of the grid, where these resources cause issues with voltage quality. The distribution grid is unbalanced by nature, and adding these resources only amplifies this problem. To help mitigate voltage quality issues, grid operators are starting to require voltage regulation capabilities from resources to be connected to the grid, and much work has been conducted to find optimal strategies for operating these resources. However, existing paradigms for these sources focus only on fixing the voltage magnitude aspect of power quality and neglect phasor relationships. This thesis aims to bridge this gap by developing a method to determine the optimal operation of these resources, using their voltage regulation capability to address both voltage magnitude and voltage unbalance issues.
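The voltage unbalance targeted by the proposed OPF is commonly quantified by the negative-sequence unbalance factor computed from symmetrical components; the sketch below uses that standard definition, which may differ from the exact metric used in the thesis, and the example phasors are illustrative.

```python
import numpy as np

def voltage_unbalance_factor(v_a, v_b, v_c):
    """Percent VUF = |V_negative| / |V_positive| * 100, from phasor voltages.
    Inputs are complex phase voltages (e.g., volts at a given bus)."""
    a = np.exp(2j * np.pi / 3)                # 120-degree rotation operator
    v_pos = (v_a + a * v_b + a**2 * v_c) / 3  # positive-sequence component
    v_neg = (v_a + a**2 * v_b + a * v_c) / 3  # negative-sequence component
    return 100 * abs(v_neg) / abs(v_pos)

# Example: a slightly unbalanced set of phase voltages (illustrative values)
v_a = 240 * np.exp(0j)
v_b = 232 * np.exp(-2j * np.pi / 3)
v_c = 246 * np.exp(2j * np.pi / 3)
print(round(voltage_unbalance_factor(v_a, v_b, v_c), 2))  # VUF in percent
```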
510 |
Wireless Distributed Computing on the Android Platform. Karra, Kiran. 23 October 2012.
The last couple of years have seen an explosive growth in smartphone sales. Additionally, the computational power of modern smartphones has been increasing at a high rate. For example, the popular iPhone 4S has a 1 GHz processor with 512 MB of RAM [5]. Other popular smartphones such as the Samsung Galaxy Nexus S also have similar specifications. These smartphones are as powerful as desktop computers of the 2005 era, and the tight integration of many different hardware chipsets in these mobile devices makes for a unique mobile platform that can be exploited for capabilities other than traditional uses of a phone, such as talk and text [4].
In this work, the concept of using smartphones that run the Android operating system for distributed computing over a wireless mesh network is explored. This is also known as wireless distributed computing (WDC). The complexities of WDC on mobile devices are different from those of traditional distributed computing because of, among other things, the unreliable wireless communications channel and the limited power available to each computing node. This thesis develops the theoretical foundations for WDC. A mathematical model representing the total amount of resources required to distribute a task with WDC is developed. It is shown that, given a task that is distributable, under certain conditions there exists a theoretical minimum amount of resources that can be used to perform the task using WDC. Finally, the WDC architecture is developed, an Android app implementation of the WDC architecture is tested, and it is shown in a practical application that using WDC to perform a task provides a performance increase over processing the job locally on the Android OS. / Master of Science
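The resource trade-off behind the theoretical minimum mentioned above can be illustrated with a simple toy model: splitting a job across more nodes reduces per-node computation but adds communication overhead on the wireless mesh. The cost coefficients below are assumptions for illustration, not the thesis's actual model.

```python
def total_cost(n, work=100.0, comm_per_node=3.0, result_merge=1.0):
    """Illustrative WDC cost: parallel compute time plus communication
    overhead that grows with the number of participating nodes n."""
    return work / n + comm_per_node * n + result_merge

# Sweep the cluster size and pick the n that minimizes total cost
costs = {n: total_cost(n) for n in range(1, 13)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))  # optimal node count under this toy model
```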