  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
511

Resource sharing in secure distributed systems

Chakraborty, Trisha 10 May 2024 (has links) (PDF)
Allocating resources in computer systems is a significant challenge due to constraints on resources, coordinating access to those resources, and tolerating malicious behavior. This dissertation investigates two fundamental problems concerning resource allocation. The first addresses the general challenge of sharing server resources among multiple clients, where an adversary may deny the availability of these resources; this is known as a denial-of-service (DoS) attack. Here, we propose a deterministic algorithm that employs resource burning (RB)—the verifiable expenditure of a network resource—to defend against DoS attacks. Specifically, our solution forces an adversary to incur higher RB costs compared to legitimate clients. Next, we develop a general policy-driven framework that utilizes machine learning classification to tune the amount of RB used for mitigating DoS attacks. Finally, we expand the application of RB to defend against DoS attacks on hash tables, which are a popular data structure in network applications. The second problem deals with resource allocation in wireless systems; specifically, the sharing of the wireless medium among multiple participants competing to transmit data. While modern WiFi and cellular standards do solve this problem, several recent theoretical results suggest that superior solutions are possible. Here, we investigate the viability of these solutions and discover that they fall short of their promised performance in practice. Consequently, we identify the cause of this shortcoming and quantify the discrepancy through a combination of analytical and simulation work. Ultimately, we propose a revised theoretical model that aligns better with practical observations.
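The resource-burning defense described above can be illustrated with a client-puzzle sketch, one common form of RB. This is a hedged illustration, not the dissertation's actual algorithm; the function names and the SHA-256 leading-zeros criterion are assumptions for the example.

```python
import hashlib
import itertools

def solve_puzzle(payload: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading zero hex digits.

    Expected work grows by a factor of 16 per unit of difficulty, so a server
    can price each request by raising `difficulty` for suspect clients.
    """
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(payload: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, so the server's cost stays negligible."""
    digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_puzzle(b"GET /resource", 3)
assert verify(b"GET /resource", nonce, 3)
```

The asymmetry is the point: an adversary issuing many requests must burn proportionally more computation than a legitimate client making a single request, which is the cost imbalance the dissertation's deterministic algorithm formalizes.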
512

Parameter identification in distributed structures

Norris, Mark A. January 1986 (has links)
This dissertation develops two new techniques for the identification of parameters in distributed-parameter systems. The first identifies the physical parameter distributions, such as mass, damping and stiffness; the second identifies the modal quantities of self-adjoint distributed-parameter systems. Distributed structures are distributed-parameter systems characterized by mass, damping and stiffness distributions. To identify these distributions, a new identification technique is introduced based on the finite element method. With this approach, the object is to identify "average" values of the mass, damping and stiffness distributions over each finite element. This implies that the distributed parameters are identified only approximately, in the same way in which the finite element method approximates the behavior of a structure. It is common practice to represent the motion of a distributed-parameter system by a linear combination of the associated modes of vibration. In theory, there is an infinite set of modes, although in practice we are concerned with only a finite linear combination of them. The modes of vibration possess certain properties which distinguish them from one another: they are uncorrelated in time and orthogonal in space. The modal identification technique introduced in this dissertation uses both of these properties. Because both the temporal and spatial properties are used, the method does not encounter problems when the natural frequencies are closely spaced or repeated. / Ph. D.
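The spatial orthogonality the abstract relies on can be sketched numerically. In this toy example (not the dissertation's method; the mode shapes and amplitudes are invented), projection onto orthogonal mode shapes recovers each modal amplitude exactly, independent of how closely spaced the natural frequencies are:

```python
import math

# Two orthogonal "mode shapes" sampled at 4 spatial points (toy values).
phi1 = [0.5, 0.5, 0.5, 0.5]
phi2 = [0.5, -0.5, 0.5, -0.5]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A displacement field mixing the two modes with amplitudes 2.0 and -3.0.
w = [2.0 * a - 3.0 * b for a, b in zip(phi1, phi2)]

# Spatial orthogonality lets projection separate the modes, even when
# temporal (frequency) information alone could not distinguish them.
a1 = dot(w, phi1) / dot(phi1, phi1)
a2 = dot(w, phi2) / dot(phi2, phi2)
assert math.isclose(a1, 2.0) and math.isclose(a2, -3.0)
```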
513

Holistic Abstraction for Distributed Network Debugging

Khan, Jehandad 15 March 2018 (has links)
Computer networks are engineered for performance and flexibility, delivering billions of packets per second with high reliability, until they fail. It is during such times of crisis that debugging and troubleshooting come to the forefront; however, the focus on performance results in design tradeoffs that make networks challenging to troubleshoot. This dissertation hypothesizes that a view of the network as a single entity solves these problems without compromising either performance or visibility. The primary contributions are 1) a topology-oblivious network abstraction for performance monitoring and troubleshooting, 2) transformation of abstract network queries to device-local semantics, 3) optimizations for reducing state-collection overhead, and 4) global state semantics in the proposed query language, easing the expression of network queries. Abstracting the entire system as one entity simplifies the debugging process, making comprehensive root-cause analysis possible and exonerating the network administrator from dealing with many individual devices, delivering gains in productivity and efficiency. By merging network topology information with state collection, this thesis provides a new way to look at the network monitoring and troubleshooting problem. Such an amalgamation allows the translation of a performance query, expressed in a domain-specific language, into small pieces of code that operate on different devices in the network and collect the necessary state. This merging results in lower overhead per switch, reduces the strain on devices, and provides a simple abstraction to the administrator. / PHD / Computer networks are complex and therefore take a tremendous amount of effort to operate in an efficient and error-free manner. This dissertation presents a mechanism that allows the operator to ignore many intricate details about the network and concentrate on how to fix problems.
Such an approach enables automatic methods that help the operator focus on the most relevant information, saving time, money and resources when operating large networks.
514

Fully Distributed Multi-parameter Sensors Based on Acoustic Fiber Bragg Gratings

Hu, Di 31 March 2017 (has links)
A fully distributed multi-parameter acoustic sensing technology is proposed. Current fully distributed sensing techniques are exclusively based on intrinsic scatterings in optical fibers. They demonstrate long sensing spans, but their limited applicable parameters (temperature and strain) and costly interrogation systems have prevented their widespread application. A novel concept of the acoustic fiber Bragg grating (AFBG) is conceived, with inspiration from the optical fiber Bragg grating (FBG). The AFBG structure exploits periodic spatial perturbations on an elongated waveguide to sense variations in the spectrum of an acoustic wave. It achieves ten times higher sensitivity than the traditional time-of-flight measurement system using acoustic pulses. A fast interrogation method is developed to avoid frequency scanning, reducing both the system response time (from 3 min to <1 ms) and the total cost. Since acoustic waves propagate with low attenuation along a variety of solid materials (metal, silica, sapphire, etc.), AFBGs can be fabricated on a number of waveguides and can sense multiple parameters. Sub-millimeter metal wire and optical fiber based AFBGs have been demonstrated experimentally for effective temperature (25–700 °C) and corrosion sensing. A hollow borosilicate tube is demonstrated for simultaneous temperature (25–200 °C) and pressure (15–75 psi) sensing using two types of acoustic modes. Furthermore, a continuous 0.6 m AFBG is employed for distributed temperature sensing up to 500 °C and to accurately locate a 0.18 m long heated section. The sensing parameters, sensitivity and range of an AFBG can be tuned to fit a specific application by selecting acoustic waveguides with different materials and/or geometries. AFBG is therefore a fully distributed sensing technology with tremendous potential. / Ph. D.
/ Fully distributed sensing techniques are part of the growing "Internet-of-Things" trend, as they improve on traditional point sensors by providing spatially distributed measurements. Current techniques for fully distributed sensing are based on fiber optics, and while these techniques are capable of measuring parameters along a lengthy sensing distance, their wider application is constrained by limited applicable parameters and costly interrogation systems. In this research, an innovative, fully distributed, multi-parameter acoustic sensing technology based on the acoustic fiber Bragg grating (AFBG) is proposed. The AFBG takes advantage of the interaction between an acoustic wave and a periodic structure on the measured material, and uses the spectral properties of the acoustic wave to achieve ten times higher sensitivity than traditional time-of-flight methods. In addition, a fast interrogation method is developed to avoid frequency scanning, reducing both the system response time (from 3 min to <1 ms) and the system cost (from $5,000 to <$500). AFBGs can be fabricated from different elongated materials (i.e. waveguides), as acoustic waves propagate along a variety of materials without extensive power loss. In this research, the AFBG is deployed on a sub-millimeter metal wire and a silica fiber to demonstrate effective corrosion and temperature sensing (25–700 °C). In addition, hollow tubes are shown to be feasible waveguides for simultaneous temperature (25–200 °C) and pressure (15–75 psi) sensing. Finally, a long AFBG is employed for distributed temperature sensing up to 500 °C. Wide applicability and low cost suggest that this sensing technology may be a viable approach for fully distributed sensing, contributing to the growing Internet-of-Things movement.
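By analogy with optical FBGs (where the Bragg wavelength is twice the optical period times the refractive index), the first-order acoustic Bragg resonance of a grating with period Λ in a medium with acoustic velocity v occurs near f = v / (2Λ). This is an illustrative textbook relation with assumed numbers, not necessarily the design equation or materials used in the dissertation:

```python
# Illustrative acoustic Bragg estimate (assumed relation f = v / (2 * period)).
v = 5000.0       # m/s, rough longitudinal sound speed in steel (assumed)
period = 1e-3    # m, assumed grating period
f_bragg = v / (2 * period)
assert f_bragg == 2.5e6  # resonance near 2.5 MHz for these assumed values
```

A temperature or corrosion change alters v or the period, shifting the resonance, which is the kind of spectral variation an AFBG interrogator would track.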
515

Wireless Distributed Computing on the Android Platform

Karra, Kiran 23 October 2012 (has links)
The last couple of years have seen explosive growth in smartphone sales. Additionally, the computational power of modern smartphones has been increasing at a high rate. For example, the popular iPhone 4S has a 1 GHz processor with 512 MB of RAM [5]. Other popular smartphones such as the Samsung Galaxy Nexus S have similar specifications. These smartphones are as powerful as desktop computers of the 2005 era, and the tight integration of many different hardware chipsets in these mobile devices makes for a unique mobile platform that can be exploited for capabilities beyond the traditional uses of a phone, such as talk and text [4]. In this work, the concept of using smartphones that run the Android operating system for distributed computing over a wireless mesh network is explored. This is also known as wireless distributed computing (WDC). The complexities of WDC on mobile devices differ from those of traditional distributed computing because of, among other things, the unreliable wireless communications channel and the limited power available to each computing node. This thesis develops the theoretical foundations for WDC. A mathematical model representing the total amount of resources required to distribute a task with WDC is developed. It is shown that, given a distributable task, under certain conditions there exists a theoretical minimum amount of resources that can be used to perform the task using WDC. Finally, the WDC architecture is developed, an Android app implementation of the WDC architecture is tested, and it is shown in a practical application that using WDC to perform a task provides a performance increase over processing the job locally on the Android OS. / Master of Science
516

RSSI and throughput evaluation of an LTE system using a distributed MIMO antenna with a site specific channel propagation model

Dama, Yousef A.S., Anoh, Kelvin O.O., Asif, Rameez, Abd-Alhameed, Raed, Jones, Steven M.R., Ghazaany, Tahereh S., Zhu, Shaozhen (Sharon), Excell, Peter S. January 2013 (has links)
517

Towards a Framework for DHT Distributed Computing

Rosen, Andrew 12 August 2016 (has links)
Distributed Hash Tables (DHTs) are protocols and frameworks used by peer-to-peer (P2P) systems. They serve as the organizational backbone for many P2P file-sharing systems due to their scalability, fault-tolerance, and load-balancing properties. These same properties are highly desirable in a distributed computing environment, especially one that wants to use heterogeneous components. We show that DHTs can be used not only as the framework to build a P2P file-sharing service, but also as a P2P distributed computing platform. We propose creating a P2P distributed computing framework using distributed hash tables, based on our prototype system ChordReduce. This framework would make it simple and efficient for developers to create their own distributed computing applications. Unlike Hadoop and similar MapReduce frameworks, our framework can be used both in the context of a datacenter and as part of a P2P computing platform. This opens up new possibilities for building platforms for distributed computing problems. One advantage our system will have is an autonomous load-balancing mechanism. Nodes will be able to independently acquire work from other nodes in the network, rather than sitting idle. More powerful nodes in the network will be able to use the mechanism to acquire more work, exploiting the heterogeneity of the network. By utilizing the load-balancing algorithm, a datacenter could easily leverage additional P2P resources at runtime on an as-needed basis. Our framework will allow MapReduce-like or distributed machine learning platforms to be easily deployed in a greater variety of contexts.
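The key placement that underpins Chord-style DHTs such as ChordReduce can be sketched with a minimal consistent-hashing lookup. This is a hedged illustration of the general technique, not the actual ChordReduce code; the ring size and node identifiers are assumptions.

```python
import hashlib

def chord_id(key: str, bits: int = 8) -> int:
    """Hash a key onto a 2**bits identifier ring (SHA-1, as in classic Chord)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** bits)

def successor(node_ids: list[int], key_id: int) -> int:
    """The node responsible for key_id is its clockwise successor on the ring."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]  # wrap around past the largest identifier

nodes = [10, 60, 120, 200]
assert successor(nodes, 70) == 120
assert successor(nodes, 250) == 10  # wraps around the ring
```

Because both data and work units hash uniformly onto the ring, the same placement rule that balances file storage can assign computing tasks, which is the reuse the abstract argues for.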
518

Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations

De Grande, Robson E. 26 July 2012 (has links)
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Because of the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the relevance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed to offer sub-optimal balancing solutions, but they are limited to certain simulation aspects, specific to particular applications, or unaware of the characteristics of HLA-based simulations. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable and low-latency simulation load transfers.
Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms to observe distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load that minimizes imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The balancing systems developed here successfully improved the use of shared resources and increased distributed simulations' performance.
519

Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations

Roy, Amber Joyce 12 1900 (has links)
Distributed simulation is an enabling concept to support the networked interaction of models and real world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of the data exchanged during a distributed simulation, and reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and it significantly reduces both the number of multicast groups used, and the message overhead, when compared to previous grid-based allocation schemes using large-scale and real-world scenarios.
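The grid-based matching at the heart of DDM can be sketched as follows: publisher update regions and subscriber interest regions are each mapped to the set of grid cells they overlap, and a multicast group is needed only where the cell sets intersect. This is an illustrative reduction with assumed cell sizes and regions, not the thesis's dynamic scheme:

```python
def cells(region, cell_size):
    """Map a rectangle (x0, y0, x1, y1) to the set of grid cells it overlaps."""
    x0, y0, x1, y1 = region
    return {(cx, cy)
            for cx in range(int(x0 // cell_size), int(x1 // cell_size) + 1)
            for cy in range(int(y0 // cell_size), int(y1 // cell_size) + 1)}

# A publisher's update region and a subscriber's interest region are matched
# only if they share a cell, replacing exact region-intersection tests with
# cheap set operations.
update = (5, 5, 14, 9)       # assumed publisher region
interest = (12, 8, 20, 20)   # assumed subscriber region
shared = cells(update, 10) & cells(interest, 10)
assert shared  # both regions touch cell (1, 0), so events are relayed
```

A dynamic variant, as the thesis proposes, would additionally adapt which cells carry active multicast groups as regions move, keeping group counts and message overhead low.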
520

Distributed Linear Filtering and Prediction of Time-varying Random Fields

Das, Subhro 01 June 2016 (has links)
We study distributed estimation of dynamic random fields observed by a sparsely connected network of agents/sensors. The sensors are inexpensive, low power, and they communicate locally and perform computation tasks. In the era of large-scale systems and big data, distributed estimators, yielding robust and reliable field estimates, are capable of significantly reducing the large computation and communication load required by centralized estimators, by running local parallel inference algorithms. The distributed estimators have applications in estimation, for example, of temperature, rainfall or wind-speed over a large geographical area; dynamic states of a power grid; location of a group of cooperating vehicles; or beliefs in social networks. The thesis develops distributed estimators where each sensor reconstructs the estimate of the entire field. Since the local estimators have direct access to only local innovations, local observations or a local state, the agents need a consensus-type step to construct locally an estimate of their global versions. This is akin to what we refer to as distributed dynamic averaging. Dynamic averaged quantities, which we call pseudo-quantities, are then used by the distributed local estimators to yield at each sensor an estimate of the whole field. Using terminology from the literature, we refer to the distributed estimators presented in this thesis as Consensus+Innovations-type Kalman filters. We propose three distinct types of distributed estimators according to the quantity that is dynamically averaged: (1) Pseudo-Innovations Kalman Filter (PIKF), (2) Distributed Information Kalman Filter (DIKF), and (3) Consensus+Innovations Kalman Filter (CIKF). The thesis proves that under minimal assumptions the distributed estimators, PIKF, DIKF and CIKF converge to unbiased and bounded mean-squared error (MSE) distributed estimates of the field. 
These distributed algorithms exhibit a Network Tracking Capacity (NTC) behavior: the MSE is bounded if the degree of instability of the field dynamics is below a threshold. We derive this threshold for each of the filters. The thesis establishes trade-offs between the three distributed estimators. The NTC of the PIKF depends on the network connectivity only, while the NTCs of the DIKF and the CIKF also depend on the observation models. On the other hand, when all three estimators converge, numerical simulations show that the DIKF improves by 2 dB over the PIKF. Since the DIKF uses scalar gains, it is simpler to implement than the CIKF. Of the three estimators, the CIKF provides the best MSE performance using optimized gain matrices, yielding an improvement of 3 dB over the DIKF. Keywords: Kalman filter, distributed state estimation, multi-agent networks, sensor networks, distributed algorithms, consensus, innovation, asymptotic convergence, mean-squared error, dynamic averaging, Riccati equation, Lyapunov iterations, distributed signal processing, random dynamical systems.
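The consensus-type averaging step that Consensus+Innovations filters build on can be sketched in isolation: each agent repeatedly replaces its local value with a weighted average over its neighbors, driving all agents toward the network-wide mean without any central coordinator. This toy example assumes a 3-agent line graph and a step weight of 0.3; it illustrates the averaging primitive only, not the thesis's estimators:

```python
def consensus_round(values, neighbors, weight=0.3):
    """One synchronous averaging round: each agent moves toward its neighbors."""
    return [v + weight * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

values = [0.0, 5.0, 10.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}   # line graph: 0 - 1 - 2
for _ in range(50):
    values = consensus_round(values, neighbors)

# Symmetric weights preserve the sum, so every agent converges to the mean.
assert all(abs(v - 5.0) < 1e-6 for v in values)
```

In the filters above, the quantity being averaged is not a raw measurement but a pseudo-quantity (pseudo-innovations, information terms, or estimates), which is what lets each sensor reconstruct an estimate of the entire field from local communication.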
