201

Efficient Distributed Processing Over Micro-batched Data Streams

Ahmed Abdelhamid (10539053) 07 May 2021 (has links)
Advances in real-world applications require high-throughput processing over large data streams. Micro-batching is a promising computational model to support the needs of these applications. In micro-batching, the processing and batching of the data are interleaved: the incoming data tuples are first buffered as data blocks and then processed collectively using parallel function constructs (e.g., Map-Reduce). The size of a micro-batch is set to guarantee a certain response-time latency that conforms to the application's service-level agreement. Compared to native tuple-at-a-time data stream processing, micro-batching can sustain higher data rates. However, existing micro-batch stream processing systems lack the load-awareness optimizations that are necessary to maintain performance and enhance resource utilization. In this thesis, we investigate the micro-batching paradigm and pinpoint some of its design principles that can benefit from further optimization. A new data partitioning scheme termed Prompt is presented that leverages the characteristics of the micro-batch processing model. Prompt enables a balanced input to the batching and processing cycles of the micro-batching model and achieves higher-throughput processing with an increase in resource utilization. Moreover, Prompt+ is proposed to enforce latency by elastically adapting resource consumption according to workload changes. More specifically, Prompt+ employs a scheduling strategy that supports elasticity in response to workload changes while avoiding rescheduling bottlenecks. We also envision the use of deep reinforcement learning to efficiently partition data in distributed streaming systems: PartLy demonstrates the use of artificial neural networks to facilitate the learning of efficient partitioning policies that match the dynamic nature of streaming workloads. Finally, all the proposed techniques are abstracted and generalized over three widely used stream processing engines. Experimental results using real and synthetic data sets demonstrate that the proposed techniques are robust against fluctuations in data distribution and arrival rates, and achieve up to 5x improvement in system throughput over state-of-the-art techniques without degradation in latency.
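As a rough illustration of the micro-batching model described in this abstract (not the Prompt scheme itself), the sketch below buffers incoming tuples into blocks and processes each block collectively; the batch-size and latency thresholds and the process_block callback are hypothetical placeholders.

```python
import time
from typing import Any, Callable, Iterable, List

def micro_batch_stream(tuples: Iterable[Any],
                       process_block: Callable[[List[Any]], None],
                       max_batch_size: int = 1024,
                       max_latency_s: float = 0.5) -> None:
    """Interleave batching and processing: buffer tuples into a block,
    then hand the whole block to a parallel processing construct.
    Both thresholds are illustrative, not values from the thesis."""
    block: List[Any] = []
    deadline = time.monotonic() + max_latency_s
    for t in tuples:
        block.append(t)
        # Flush when the block is full or the latency budget is spent,
        # approximating a response-time service-level agreement.
        if len(block) >= max_batch_size or time.monotonic() >= deadline:
            process_block(block)
            block = []
            deadline = time.monotonic() + max_latency_s
    if block:
        process_block(block)

# Example usage with a trivial stage standing in for a Map-Reduce construct:
if __name__ == "__main__":
    micro_batch_stream(range(10_000), lambda blk: print(len(blk), "tuples processed"))
```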
202

A Lightweight Intrusion Detection System for the Cluster Environment

Liu, Zhen 02 August 2002 (has links)
As clusters of Linux workstations have gained in popularity, security in this environment has become increasingly important. While prevention methods such as access control can enhance the security level of a cluster system, intrusions are still possible and therefore intrusion detection and recovery methods are necessary. In this thesis, a system architecture for an intrusion detection system in a cluster environment is presented. A prototype system called pShield based on this architecture for a Linux cluster environment is described and its capability to detect unique attacks on MPI programs is demonstrated. The pShield system was implemented as a loadable kernel module that uses a neural network classifier to model normal behavior of processes. A new method for generating artificial anomalous data is described that uses a limited amount of attack data in training the neural network. Experimental results demonstrate that using this method rather than randomly generated anomalies reduces the false positive rate without compromising the ability to detect novel attacks. A neural network with a simple activation function is used in order to facilitate fast classification of new instances after training and to ease implementation in kernel space. Our goal is to classify the entire trace of a program's execution based on neural network classification of short sequences in the trace. Therefore, the effect of anomalous sequences in a trace must be accumulated. Several trace classification methods were compared. The results demonstrate that methods that use information about the locality of anomalies are more effective than those that only look at the number of anomalies. The impact of pShield on system performance was evaluated on an 8-node cluster. Although pShield adds some overhead to each MPI communication call, the experimental results show that a real-world parallel computing benchmark was slowed only slightly by the intrusion detection system. The results demonstrate the effectiveness of pShield as a lightweight intrusion detection system in a cluster environment. This work is part of the Intelligent Intrusion Detection project of the Center for Computer Security Research at Mississippi State University.
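The trace-classification idea described above can be illustrated with a toy example: classify fixed-length windows of a trace with a trained model and accumulate anomalies with a locality weight, so clustered anomalies count more than isolated ones. The window length, decay, threshold, and the is_anomalous callback below are hypothetical stand-ins for the thesis's neural-network classifier, not its actual design.

```python
from typing import Callable, Sequence

def classify_trace(trace: Sequence[str],
                   is_anomalous: Callable[[Sequence[str]], bool],
                   window: int = 6,
                   locality_decay: float = 0.8,
                   threshold: float = 3.0) -> bool:
    """Flag a whole program trace as intrusive if anomalous windows
    cluster together. 'is_anomalous' stands in for the trained
    classifier; all constants are illustrative."""
    score = 0.0      # accumulated, locality-weighted anomaly evidence
    streak = 0.0     # recent-anomaly weight; decays when windows look normal
    for i in range(max(len(trace) - window + 1, 1)):
        if is_anomalous(trace[i:i + window]):
            streak = streak * locality_decay + 1.0
            score += streak   # nearby anomalies contribute more than isolated ones
        else:
            streak *= locality_decay
    return score >= threshold
```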
203

A Generalized Three-Phase Coupling Method For Distributed Simulation Of Power Systems

Wu, Jian 05 August 2006 (has links)
Simulation of power system behavior is a highly useful tool for planning, analysis of stability, and operator training. Traditionally, small power system studies are dominated by the time taken to solve the machine dynamics equations, while larger studies are dominated by the time taken to solve the network equations. With the trend towards more sophisticated and realistic modeling, the size and complexity of simulations of a power system grow tremendously. The ever-increasing need for computational power can be satisfied by the application of distributed simulation. Power systems are also distributed in nature: terrestrial power systems are divided into groups and controlled by different Regional Transmission Organizations (RTOs). Each RTO owns the detailed parameters for the area under its control, but only limited data and boundary measurements of the external network. Thus, performing power system analysis in such a case is a challenge. Also, simulating a large-scale power system with detailed modeling of the components imposes a heavy computational burden. One possible way of relieving this problem is to decouple the network into subsystems, solve each subsystem separately, and then combine the results of the subsystems to obtain the overall solution. The way a network is decoupled and the missing parts are represented greatly affects the result. Also, because information is distributed among dispatch centers, the problem of performing power system analysis with limited data arises, and equivalents of the external networks need to be constructed to analyze the power system. In this research work, a distributed simulation algorithm is proposed to handle the issues above. A history of distributed simulation is briefly introduced. A generalized coupling method dealing with natural coupling is proposed. Distributed simulation models are developed and demonstrated in the Virtual Test Bed (VTB). The models are tested with different network configurations. The test results are presented and analyzed. The performance of the distributed simulation is compared with the steady-state result and the time-domain simulation result. Satisfactory results are achieved.
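A generic and highly simplified illustration of the decoupling idea described above: solve each subsystem with its own equations while iterating on the boundary quantities exchanged with its neighbours until they agree. This is not the thesis's three-phase coupling method; the subsystem names, the solve_subsystem callback, and the convergence settings are hypothetical.

```python
from typing import Callable, Dict, List

def cosimulate(subsystems: List[str],
               solve_subsystem: Callable[[str, Dict[str, complex]], Dict[str, complex]],
               max_iters: int = 50,
               tol: float = 1e-6) -> Dict[str, complex]:
    """Fixed-point iteration on boundary variables (e.g., boundary bus
    voltages or currents). Each subsystem is solved with the latest
    boundary values from its neighbours; iterate until they converge."""
    boundary: Dict[str, complex] = {}   # shared boundary quantities, keyed by boundary bus
    for _ in range(max_iters):
        updates: Dict[str, complex] = {}
        for s in subsystems:
            # Each subsystem solver returns its view of the boundary quantities.
            updates.update(solve_subsystem(s, boundary))
        if boundary and max(abs(updates[k] - boundary.get(k, 0)) for k in updates) < tol:
            return updates
        boundary = updates
    return boundary
```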
204

Impact of Distributed Generation on Distribution Contingency Analysis

Kotamarty, Sujatha 13 May 2006 (has links)
It is expected that increasing amounts of distributed generation (DG) will be connected to the power system in the future. Advances in technology, deregulation of the market, and changes made by governments in many countries to end the monopoly of vertically integrated power utilities led to the birth of this new technology; another incentive is alternative energy sources, which are becoming more cost effective. Although there are many advantages to interconnecting DG with the network, the interconnection also brings problems. Many issues must be considered for the interconnection of DGs, such as the sizing and siting of the DG. The size and site of the DG will have an effect on the voltages and operation of the distribution power system. Since the voltages must remain within specified limits, the problem of siting and sizing the DG has taken top priority. This thesis discusses a procedure for evaluating the impact of the site and size of the DG, as well as a change in the loading conditions of the system, before and after the reconfiguration of the system due to a fault. This contingency analysis work is validated using the IEEE 13-node and IEEE 37-node distribution feeders. Many feasible combinations of the size and site of a DG are analyzed, with the load flow run for each feasible combination, resulting in a large amount of data. The results and trends are presented.
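A hedged sketch of the kind of enumeration the abstract describes: sweep candidate DG sites and sizes, run a load flow for each combination, and record whether bus voltages stay within limits. The run_load_flow callback and the voltage limits are placeholders, not the thesis's software or criteria.

```python
from itertools import product
from typing import Callable, Dict, List, Tuple

def sweep_dg_scenarios(candidate_buses: List[int],
                       candidate_sizes_kw: List[float],
                       run_load_flow: Callable[[int, float], Dict[int, float]],
                       v_min: float = 0.95,
                       v_max: float = 1.05) -> List[Tuple[int, float, bool]]:
    """For every (site, size) pair, run a load flow and check the
    resulting per-unit bus voltages against illustrative limits.
    Returns (bus, size_kw, all_voltages_ok) tuples."""
    results = []
    for bus, size in product(candidate_buses, candidate_sizes_kw):
        voltages = run_load_flow(bus, size)   # bus -> per-unit voltage magnitude
        ok = all(v_min <= v <= v_max for v in voltages.values())
        results.append((bus, size, ok))
    return results
```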
205

Optimizing The Size And Location Of Distributed Generators To Maximize The Grid Stability

Masanna gari, Abhilash Reddy 13 December 2008 (has links)
Distributed Generators (DGs) are being increasingly utilized in power system distribution networks to provide electric power at or near load centers. These are generally based on technologies like solar, wind and biomass and range from 10 kW to 50 MW. Research work carried out in this thesis relates to the optimal siting and sizing of DGs in order to maximize the system voltage stability and improve voltage profile. This has been formulated as an optimization problem and solved using LINGO software. Power flow equations have been embedded in the LINGO formulation, along with other operating constraints. The solution provides optimal values of the bus voltage magnitudes and angles, which have been utilized to compute a stability index. Finally, a multi-objective formulation has been developed to simultaneously optimize the size and placement of the DGs. The impact of the DGs on voltage stability and voltage profile has been studied on IEEE standard distribution test systems and verified using three-phase unbalanced power flow software developed at Mississippi State University (MSU). Results indicate that the sizing and siting of DGs are system dependent and should be optimally selected before installing the distributed generators in the system.
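For context, one node voltage stability index often used for radial distribution feeders in the literature (commonly attributed to Chakravorty and Das) has the form below. The abstract does not state which index this thesis computes, so this is offered only as an example of a stability index derived from bus voltage magnitudes and branch parameters.

```latex
% Example stability index for a radial feeder branch from sending node m1
% to receiving node m2 (branch resistance r, reactance x; P, Q are the real
% and reactive power fed through node m2). SI(m2) > 0 is required for stable
% operation; the node with the smallest SI is most vulnerable to collapse.
\[
  SI(m_2) \;=\; |V(m_1)|^{4}
  \;-\; 4\,\bigl(P(m_2)\,x - Q(m_2)\,r\bigr)^{2}
  \;-\; 4\,\bigl(P(m_2)\,r + Q(m_2)\,x\bigr)\,|V(m_1)|^{2}
\]
```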
206

OPTIMIZATION OF FACULTY DEVELOPMENT AT A DISTRIBUTED MEDICAL CAMPUS

Didyk, Nicole January 2019 (has links)
Background: Distributed Medical Education (DME) sites are satellites of large academic medical schools with faculty who are community-based physicians. These medical teachers need faculty development, and there is little data about how this can best be delivered. This study asked the questions: How can medical teaching expertise be developed and sustained at a Distributed Medical Education campus? What are the perceptions of faculty at a Distributed Medical Education site regarding effective and acceptable faculty development activities for improving their skills as medical educators? Using constructivist grounded theory methodology, a total of 16 semi-structured interviews were conducted with faculty members at two DME site campuses in Southern Ontario, and two faculty development events, one at each site, were observed. Findings: The community in which a DME campus medical school is implanted is transformed through a process of interaction between learners, medical teachers, and the community itself, which results in the production of expert community teachers. Community-based physicians can develop teaching expertise and require faculty development to maintain interest and skill. They can access high-quality, relevant faculty development within their own practice groups, a model referred to as a Community of Practice. These communities can be virtual or in-person and need several elements to be successful, including facilitation and mentorship. Conclusion: Teaching experts can develop at a DME site when there is accessible, relevant faculty development, such as in a Community of Practice. More research is needed to determine the best way to reward community teachers, most of whom are part-time faculty in private practice. / Thesis / Master of Health Sciences (MSc) / Recently, satellite campuses of medical schools have been established in smaller cities, called Distributed Medical Education (DME) sites. There, the teaching faculty is composed of non-academic, community-based physicians. These faculty members need training to learn how to teach, or Faculty Development. This study asked the question: How can medical teaching expertise be developed and sustained at a Distributed Medical Education campus? Sixteen interviews were conducted with teaching physicians, and two faculty development events were observed at two DME site campuses in Southern Ontario. The findings of this study revealed that the community is transformed through a process of interaction between learners, medical teachers, and the community itself, resulting in the production of expert community teachers. These teachers can access high quality faculty development within their own practice groups, a model referred to as a Community of Practice.
207

Blockchain and Distributed Consensus: From Security Analysis to Novel Applications

Xiao, Yang 13 May 2022 (has links)
Blockchain, the technology behind cryptocurrency, enables decentralized and distrustful parties to maintain a unique and consistent transaction history through consensus, without involving a central authority. The decentralization, transparency, and consensus-driven security promised by blockchain are unprecedented and can potentially enable a wide range of new applications that prevail in the decentralized zero-trust model. While blockchain represents a secure-by-design approach to building zero-trust applications, there still exist outstanding security bottlenecks that hinder the technology's wider adoption, represented by the following two challenges: (1) blockchain as a distributed networked system is multi-layered in nature, which has complex security implications that are not yet fully understood or addressed; (2) when we use blockchain to construct new applications, especially those previously implemented in a centralized manner, effective paradigms for customizing and augmenting blockchain's security offerings to realize domain-specific security goals are often lacking. In this work, we provide answers to the above two challenges in two coordinated efforts. In the first effort, we target the fundamental security issues caused by blockchain's multi-layered nature and the consumption of external data. Existing analyses of blockchain consensus security overlooked an important cross-layer factor---the heterogeneity of the P2P network's connectivity. We first provide a comprehensive review of notable blockchain consensus protocols and their security properties. Then we focus on one class of consensus protocols---the popular Nakamoto consensus---for which we propose a new analytical model from the networking perspective that quantifies the impact of heterogeneous network connectivity on key consensus security metrics, providing insights on the actual "51% attack" threshold (safety) and mining revenue distribution (fairness). The external data truthfulness challenge is another fundamental challenge concerning the decentralized applications running on top of blockchain. The validity of external data is key to the system's operational security but is out of the jurisdiction of blockchain consensus. We propose DecenTruth, a system that combines a data mining technique called truth discovery and Byzantine fault-tolerant consensus to enable decentralized nodes to collectively extract truthful information from data submitted by untrusted external sources. In the second effort, we harness the security offerings of blockchain's smart contract functionality along with external security tools to enable two domain-specific applications---data usage control and a decentralized spectrum access system. First, we use blockchain to tackle a long-standing privacy challenge of data misuse. Individual data owners often lose control over how their data can be used once they share the data with another party, as epitomized by the Facebook-Cambridge Analytica data scandal. We propose PrivacyGuard, a security platform that combines blockchain smart contracts and a hardware trusted execution environment (TEE) to enable an individual data owner's fine-grained control over the usage of their private data (e.g., which operations, who may use the data, and under what conditions/price). A core technical innovation of PrivacyGuard is the TEE-based execution and result commitment protocol, which extends blockchain's zero-trust security to the off-chain physical domain.
Second, we employ blockchain to address the potential security and performance issues facing dynamic spectrum sharing in 5G or next-G wireless networks. The current spectrum access system (SAS) designated by the FCC follows a centralized server-client service model, which is vulnerable to single-point failures of SAS service providers and also lacks an efficient, automated inter-SAS synchronization mechanism. In response, we propose a blockchain-based decentralized SAS architecture dubbed BD-SAS to provide SAS service efficiently to spectrum users and enable automated inter-SAS synchronization, without assuming trust in individual SAS service providers. We hope this work can provide new insights into blockchain's fundamental security and applicability to new security domains. / Doctor of Philosophy / Blockchain, the technology behind cryptocurrency, enables decentralized and distrustful parties to maintain a unique and consistent transaction history through consensus, without involving a central authority. The decentralization, transparency, and consensus-driven security promised by blockchain are unprecedented and can potentially enable zero-trust applications in a wide range of domains. While blockchain's secure-by-design vision is truly inspiring, there still remain outstanding security challenges that hinder the technology's wider adoption. They originate from the blockchain system's complex multi-layer nature and the lack of effective paradigms to customize blockchain for domain-specific applications. In this work, we provide answers to the above two challenges in two coordinated efforts. In the first effort, we target the fundamental security issues caused by blockchain's multi-layered nature and the consumption of external data. We first provide a comprehensive review of existing notable consensus protocols and their security issues. Then we propose a new analytical model from a novel networking perspective that quantifies the impact of heterogeneous network connectivity on key consensus security metrics. We then address the external data truthfulness challenge concerning the decentralized applications running on top of blockchain that consume real-world data by proposing DecenTruth, a system that combines data mining and consensus to allow decentralized blockchain nodes to collectively extract truthful information from untrusted external sources. In the second effort, we harness the security offerings of blockchain's smart contract functionality along with external security tools to enable two domain-specific applications. First, eyeing our society's data misuse challenge, where data owners often lose control over how their data can be used once they share the data with another party, we propose PrivacyGuard, a security platform that combines blockchain smart contracts and hardware security tools to give individual data owners fine-grained control over the usage of their private data. Second, targeting the lack of a fault-tolerant spectrum access system in the domain of wireless networking, we propose a blockchain-based decentralized spectrum access system dubbed BD-SAS to provide spectrum management service efficiently to users without assuming trust in individual SAS service providers. We hope this work can provide new insights into blockchain's fundamental security and applicability to new security domains.
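As background for the consensus-security discussion above, the sketch below computes the classic attacker catch-up probability from Nakamoto's original, homogeneous-network analysis. It is not the heterogeneous-connectivity model proposed in this dissertation; q (the attacker's share of hash power) and z (the confirmation depth) in the example are arbitrary.

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Nakamoto's double-spend analysis: probability that an attacker
    controlling fraction q of the hash power eventually overtakes a
    chain that is z blocks ahead (homogeneous network assumed)."""
    p = 1.0 - q
    if q >= p:
        return 1.0                      # a majority attacker always catches up
    lam = z * (q / p)                   # expected attacker progress while z honest blocks are mined
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# Example: a 30% attacker racing against 6 confirmations.
if __name__ == "__main__":
    print(f"{attacker_success_probability(0.30, 6):.4f}")
```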
208

ink - An HTTP Benchmarking Tool

Phelps, Andrew Jacob 15 June 2020 (has links)
The Hypertext Transfer Protocol (HTTP) is one of the foundations of the modern Internet. Because HTTP servers may be subject to unexpected periods of high load, developers use HTTP benchmarking utilities to simulate the load generated by users. However, many of these tools do not report performance details at a per-client level, which deprives developers of crucial insights into a server's performance capabilities. In this work, we present ink, an HTTP benchmarking tool that enables developers to better understand server performance. ink provides developers with a way of visualizing the level of service that each individual client receives. It does this by recording a trace of events for each individual simulated client. We also present a GUI that enables users to explore and visualize the data generated by an HTTP benchmark. Lastly, we present a method for running HTTP benchmarks that uses a set of distributed machines to scale up the achievable load on the benchmarked server. We evaluate ink by performing a series of case studies to show that ink is both performant and useful. We validate ink's load generation abilities within the context of a single machine and when using a set of distributed machines. ink is shown to be capable of simulating hundreds of thousands of HTTP clients and presenting per-client results through the ink GUI. We also perform a set of HTTP benchmarks where ink is able to highlight performance issues and differences between server implementations. We compare servers like NGINX and Apache and highlight their differences using ink. / Master of Science / The World Wide Web (WWW) uses the Hypertext Transfer Protocol to send web content such as HTML pages or video to users. The servers providing this content are called HTTP servers. Sometimes, the performance of these HTTP servers is compromised because a large number of users request documents at the same time. To prepare for this, server maintainers test how many simultaneous users a server can handle by using benchmarking utilities. These benchmarking utilities work by simulating a set of clients. Currently, these tools focus only on the number of requests that a server can process per second. Unfortunately, this coarse-grained metric can hide important information, such as the level of service that individual clients received. In this work, we present ink, an HTTP benchmarking utility we developed that focuses on reporting information for each simulated client. Reporting data in this way allows the developer to see how well each client was served during the benchmark. We achieve this by constructing data visualizations that include a set of client timelines. Each of these timelines represents the service that one client received. We evaluated ink through a series of case studies. These focus on the performance of the utility and the usefulness of the visualizations produced by ink. Additionally, we deployed ink in Virginia Tech's Computer Systems course. The students were able to use the tool and completed a survey about their experience with it.
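To illustrate the per-client tracing idea (not ink's actual implementation), the sketch below simulates a handful of clients against a target URL and records a timestamped event trace per client using only the Python standard library; the URL, client count, and request count are placeholders.

```python
import time
import threading
import urllib.request
from typing import Dict, List, Tuple

Event = Tuple[float, float, int]   # (start time, end time, HTTP status)

def run_client(url: str, n_requests: int, trace: List[Event]) -> None:
    """One simulated client: issue requests sequentially and record an
    event per request so per-client service levels can be inspected."""
    for _ in range(n_requests):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = resp.status
        except Exception:
            status = 0                    # record failures too
        trace.append((start, time.monotonic(), status))

def benchmark(url: str, n_clients: int = 4, n_requests: int = 10) -> Dict[int, List[Event]]:
    traces: Dict[int, List[Event]] = {i: [] for i in range(n_clients)}
    threads = [threading.Thread(target=run_client, args=(url, n_requests, traces[i]))
               for i in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return traces                         # one timeline per client, akin to what ink's GUI visualizes

if __name__ == "__main__":
    for client, events in benchmark("http://localhost:8080/").items():
        print(client, [round(end - start, 4) for start, end, _ in events])
```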
209

Modeling, Control and Stability Analysis of a PEBB Based DC Distribution Power System

Thandi, Gurjit Singh 24 June 1997 (has links)
The Power Electronic Building Block (PEBB) concept is to provide generic building blocks for power conversion, regulation, and distribution with control intelligence and autonomy. A comprehensive modeling and analysis of a PEBB-based DC distributed power system (DPS), comprising a front-end power factor correction (PFC) boost rectifier, a DC-DC converter, and a three-phase four-leg inverter, is performed. All the sub-systems of the DC DPS are modeled and analyzed for stability and good transient performance. A comprehensive stability analysis of the PEBB-based DC DPS is performed. The effect of impedance overlap on the system and on individual sub-systems is examined. The ability of a PEBB-based converter to stabilize the integrated system by actively changing the system bandwidth is presented. The fault tolerance capability of a PEBB-based rectifier is established by ensuring stable system operation with one leg of the rectifier failed open-circuited. / Master of Science
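The "impedance overlap" question mentioned above is commonly assessed with an impedance-ratio (minor-loop-gain) criterion of the kind below; this is standard DC distributed-power-system practice and is offered as context rather than as the specific criterion used in this thesis.

```latex
% Minor loop gain at a DC bus interface: the output impedance of the
% source subsystem divided by the input impedance of the load subsystem.
% If |Z_out| << |Z_in| at all frequencies (no impedance overlap), the
% interconnected system is stable; where the magnitudes overlap, the
% Nyquist contour of T_m must be checked for encirclements of -1.
\[
  T_m(s) \;=\; \frac{Z_{\mathrm{out,source}}(s)}{Z_{\mathrm{in,load}}(s)}
\]
```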
210

Hybrid modelling of machine tool axis drives.

Whalley, R., Ebrahimi, Kambiz M., Abdul-Ameer, A.A. January 2005 (has links)
The x-axis dynamics of a milling machine where the workpiece and saddle are mounted on supporting slides is considered. A permanent magnet motor, lead screw, ball nut, and bearings are employed as the machine's traverse actuator mechanism. Hybrid, distributed-lumped parameter methods are used to model the machine tool x-axis drive system. Inclusion of the spatial configuration of the drive generates the incident, travelling, and reflected vibration signature of the system. Lead screw interactive torsion and tension loading, which is excited by cutting and input disturbance conditions, is incorporated in the modelling process. Measured results and simulation results are presented in comparative studies, enabling the dynamic characteristics of the machine to be identified under no load and with the application of cyclic cutting-force disturbances. The effects of the lead screw length and cutting speed, and hence the load-disturbance frequency, are examined, and the resulting performance accuracy is commented upon.
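As a point of reference for the lumped side of such a hybrid model (the distributed lead-screw dynamics require transmission-line-style partial differential equations not reproduced here), a minimal two-inertia torsional model of a motor driving a load through a compliant shaft takes the form below; J, c, and k are illustrative lumped parameters, not values identified for this machine.

```latex
% Minimal lumped two-inertia torsional model: motor inertia J_m driving
% load inertia J_l through a shaft of stiffness k and damping c, with
% motor torque T_m and load (cutting) disturbance torque T_d.
\[
\begin{aligned}
  J_m \ddot{\theta}_m + c\,(\dot{\theta}_m - \dot{\theta}_l) + k\,(\theta_m - \theta_l) &= T_m,\\
  J_l \ddot{\theta}_l + c\,(\dot{\theta}_l - \dot{\theta}_m) + k\,(\theta_l - \theta_m) &= -T_d.
\end{aligned}
\]
```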
