1 |
Distributed real-time control via the internet. Srivastava, Abhinav, 30 September 2004 (has links)
The objective of this research is to demonstrate experimentally the feasibility of using the Internet for a Distributed Control System (DCS). An algorithm has been designed and implemented to ensure stability of the system in the presence of upper-bounded time-varying delays. A single-actuator magnetic ball levitation system has been used as a test bed to validate the proposed algorithm. Experiments were performed to obtain the round-trip time delay between the host PC and the client PC under varying network loads and at different times. A digital real-time lead-lag controller was implemented for the magnetic levitation system. Upper bounds on the artificial and experimental round-trip time delays that can be accommodated in the control loop for the maglev system were estimated. The artificial time delay was based on various probabilistic distributions and was generated through MATLAB. To accommodate sporadic surges in time delay that exceed these upper bounds, a timeout algorithm with sensor data prediction was developed. Experiments were performed to validate the satisfactory performance of this algorithm in the presence of bounded sporadic excessive time delays.
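The timeout-with-prediction idea can be sketched as follows. This is a hypothetical illustration, not the thesis implementation: when a sensor packet's delay exceeds the accommodated upper bound, the control loop substitutes a value linearly extrapolated from recent samples; the function names and the two-sample extrapolation rule are assumptions made for the sketch.

```python
# Hypothetical sketch of a timeout algorithm with sensor-data prediction.
# When a sensor packet is late (delay exceeds the upper bound), the control
# loop substitutes a linearly extrapolated value from the last two samples.

def predict_next(history):
    """Linear extrapolation from the two most recent sensor samples."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def read_sensor(receive, history, delay, delay_bound):
    """Return a real sample if it arrived in time, else a predicted one."""
    if delay <= delay_bound:
        sample = receive()
        history.append(sample)
        return sample, False            # real measurement
    return predict_next(history), True  # timeout: fall back on prediction

history = [1.0, 1.2]
value, predicted = read_sensor(lambda: 1.4, history, delay=0.03, delay_bound=0.05)
print(value, predicted)   # in-time packet: the real sample is used
value, predicted = read_sensor(lambda: None, history, delay=0.09, delay_bound=0.05)
print(value, predicted)   # late packet: extrapolated value near 1.6
```

A real controller would also bound how many consecutive predicted samples it tolerates before declaring loss of the control link.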
|
3 |
The DNS Bake Sale: Advertising DNS Cookie Support for DDoS Protection. Davis, Jacob, 02 April 2021 (has links)
The Domain Name System (DNS) has been frequently abused for Distributed Denial of Service (DDoS) attacks and cache poisoning because it relies on the User Datagram Protocol (UDP). Since UDP is connectionless, it is trivial for an attacker to spoof the source of a DNS query or response. DNS Cookies, a protocol standardized in 2016, add pseudo-random values to DNS packets to provide identity management and prevent spoofing attacks. This work finds that 30% of popular authoritative servers and open recursive resolvers fully support cookies and that 10% of recursive clients send cookies. Despite this, DNS cookie use is rarely enforced, as it is non-trivial to ascertain whether a given client intends to fully support cookies. We also show that 80% of clients and 99% of servers do not change their behavior when encountering a missing or illegitimate cookie. This paper presents a new protocol to allow cookie enforcement: DNS Protocol Advertisement Records (DPAR). Advertisement records allow DNS clients intending to use cookies to post a public record in the reverse DNS zone stating their intent. DNS servers may then look up this record and require a client to use cookies as directed, in turn preventing an attacker from sending spoofed messages without a cookie. In this paper, we define the specification for DNS Protocol Advertisement Records, the considerations that were made, and comparisons to alternative approaches. We additionally estimate the effectiveness of advertisements in preventing DDoS attacks and the expected burden on DNS servers. Advertisement records are designed as the next step in strengthening the existing support for DNS Cookies by enabling strict enforcement of client cookies.
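A DPAR-style advertisement lives in the reverse DNS zone for the client's address. As a small hedged illustration, the standard reverse-zone owner name for an address can be computed with Python's stdlib `ipaddress` module; the exact record owner name and record type under that zone are defined by the DPAR specification itself and are not reproduced here.

```python
# Illustration only: compute the standard reverse-DNS (in-addr.arpa /
# ip6.arpa) name for a client address -- the zone where a DPAR-style
# advertisement record would be published. The record's precise owner name
# and type are set by the DPAR specification, not by this sketch.
import ipaddress

def reverse_zone_name(addr: str) -> str:
    """Standard reverse-DNS owner name for an IP address."""
    return ipaddress.ip_address(addr).reverse_pointer

print(reverse_zone_name("192.0.2.53"))
# -> 53.2.0.192.in-addr.arpa
print(reverse_zone_name("2001:db8::1").endswith("ip6.arpa"))
# -> True
```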
|
4 |
IPv6: Politics of the Next Generation Internet. DeNardis, Laura Ellen, 05 April 2006 (has links)
IPv6, a new Internet protocol designed to exponentially increase the global availability of Internet addresses, has served as a locus for incendiary international tensions over control of the Internet. Esoteric technical standards such as IPv6 appear, on the surface, socially insignificant. The technical community selecting IPv6 claimed to have excised sociological considerations from what they considered an objective technical design decision. Far from neutral, however, the development and adoption of IPv6 intersect with contentious international issues ranging from tensions between the United Nations and the United States, power struggles between international standards authorities, U.S. military objectives, international economic competition, and third world development objectives to the promise of global democratic freedoms. This volume examines IPv6 in three overlapping epochs: the selection of IPv6 within the Internet's standards-setting community; the adoption and promotion of IPv6 by various stakeholders; and the history of the administration and distribution of the finite technical resource of Internet addresses. How did IPv6 become the answer to presumed address scarcity? What were the alternatives? Once IPv6 was developed, stakeholders expressed diverse and sometimes contradictory expectations for it. Japan, the European Union, China, India, and Korea declared IPv6 adoption a national priority and an opportunity to become more competitive in an American-dominated Internet economy. IPv6 activists espoused an ideological belief in IPv6, linking the standard with democratization, the eradication of poverty, and other social objectives. The U.S., with ample addresses, adopted a laissez-faire approach to IPv6, with the exception of the Department of Defense, which mandated an upgrade to the new standard to bolster distributed warfare capability.
The history of IPv6 includes the history of the distribution of the finite technical resources of "IP addresses," globally unique binary numbers required for devices to exchange information via the Internet. How was influence over IP address allocation and control distributed globally? This history of IPv6 explains what's at stake economically, politically, and technically in the development and adoption of IPv6, suggesting a theoretical nexus between technical standards and politics and arguing that views lauding the Internet standards process for its participatory design approach ascribe unexamined legitimacy to a somewhat closed process. / Ph. D.
|
5 |
Security for the cloud / Sécurité pour le cloud. Cornejo-Ramirez, Mario, 17 November 2016 (has links)
Cryptography has been a key factor in enabling the trading of services and products over the Internet. Cloud computing has expanded this revolution, becoming a highly demanded service thanks to its advantages: high computing power, low cost, high performance, scalability, accessibility, and availability. Along with the rise of new businesses, protocols for secure computation have emerged as well. The goal of this thesis is to contribute to the security of existing Internet protocols by analyzing their sources of randomness and to introduce protocols better suited to cloud computing environments. We propose new constructions that improve the efficiency of current solutions in order to make them more accessible and practical. We provide a detailed security analysis for each scheme under reasonable assumptions. We study cloud security at different levels. On one hand, we formalize a framework for analyzing some popular real-life pseudorandom number generators used in almost every cryptographic application. On the other, we propose two efficient applications for cloud computing. The first allows a user to publicly share a high-entropy secret across different servers and to later recover it by interacting with some of these servers using only a password, without requiring any authenticated data. The second allows a client to securely outsource an encrypted database to a server, where it can later be searched and modified.
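The idea of distributing a high-entropy secret across several servers can be illustrated with a toy XOR-based n-of-n split. This is a deliberately simplified sketch, not the thesis protocol, which additionally protects recovery with a password and tolerates recovering from only a subset of the servers; all names below are invented for the illustration.

```python
# Toy illustration only: XOR-based n-of-n secret sharing. All n shares are
# needed to recover the secret; any n-1 of them reveal nothing, since the
# first n-1 shares are uniformly random.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split a secret into n shares, one per server."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share fixes the XOR
    return shares

def recover(shares: list[bytes]) -> bytes:
    """XOR of all shares reconstructs the secret."""
    return reduce(xor_bytes, shares)

shares = split(b"high-entropy secret", 3)
print(recover(shares))  # b'high-entropy secret'
```

A threshold scheme (e.g. Shamir's) would replace the XOR with polynomial interpolation so that any sufficiently large subset of servers suffices.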
|
6 |
Data Fusion Based Physical Layer Protocols for Cognitive Radio Applications. Venugopalakrishna, Y R, January 2016 (links) (PDF)
This thesis proposes and analyzes data fusion algorithms that operate on the physical layer of a wireless sensor network, in the context of three applications of cognitive radios: 1. Cooperative spectrum sensing via binary consensus; 2. Multiple transmitter localization and communication footprint identification; 3. Target self-localization using beacon nodes.
For the first application, a co-phasing based data combining scheme is studied under imperfect channel knowledge. The evolution of network consensus state is modeled as a Markov chain, and the average transition probability matrix is derived. Using this, the average hitting time and average consensus duration are obtained, which are used to determine and optimize the performance of the consensus procedure.
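The hitting-time computation described above can be sketched generically. Assuming an absorbing "consensus" state, the expected hitting time from each transient state solves the standard linear system (I - Q)h = 1, where Q is the transition matrix restricted to the transient states; the 3-state chain below is a made-up example, not the chain derived in the thesis.

```python
# Sketch: expected hitting time of an absorbing "consensus" state from a
# Markov transition matrix, via the standard system (I - Q) h = 1, where Q
# is the transient-to-transient block. The toy chain is invented here.

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Transient-to-transient block Q of a 3-state chain (state 2 absorbing):
Q = [[0.5, 0.3],
     [0.2, 0.5]]
n = len(Q)
A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] for i in range(n)]
h = solve(A, [1.0] * n)
print(h)  # expected steps to consensus from each transient state (~4.21, ~3.68)
```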
Second, using the fact that a typical communication footprint map admits a sparse representation, two novel compressed sensing based schemes are proposed to construct the map using 1-bit decisions from sensors deployed in a geographical area. The number of transmitters is determined using the K-means algorithm and a circular fitting technique, and a design procedure is proposed to determine the power thresholds for signal detection at sensors.
Third, an algorithm is proposed for self-localization of a target node using power measurements from beacon nodes transmitting from known locations. The geographical area is overlaid with a virtual grid, and the problem is treated as one of testing overlapping subsets of grid cells for the presence of the target node. The column matching algorithm from group testing literature is considered for devising the target localization algorithm. The average probability of localizing the target within a grid cell is derived using the tools from Poisson point processes and order statistics. This quantity is used to determine the minimum required node density to localize the target within a grid cell with high probability.
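The column-matching step can be sketched in a few lines. Under the group-testing view above, each grid cell has a binary "signature" column recording which beacon tests cover it; the cell whose column best agrees with the observed test outcomes is declared the target's location. The coverage sets and outcomes below are invented for the illustration, and the thesis's power-measurement model is abstracted into binary test results.

```python
# Hypothetical sketch of column matching for grid-cell localization: each
# test covers a subset of grid cells, giving every cell a binary signature
# column; the cell whose column best matches the observed outcomes wins.

def column_match(coverage, outcomes, num_cells):
    """Return the cell whose membership pattern best matches the outcomes.

    coverage: list of sets, coverage[t] = cells covered by test t
    outcomes: list of 0/1 results, one per test
    """
    def score(cell):
        # count tests whose outcome agrees with the cell's membership bit
        return sum(int(cell in coverage[t]) == outcomes[t]
                   for t in range(len(outcomes)))
    return max(range(num_cells), key=score)

coverage = [{0, 1}, {1, 2}, {2, 3}, {1, 3}]   # 4 tests over 4 cells
outcomes = [1, 1, 0, 1]                       # positive where the target lies
print(column_match(coverage, outcomes, 4))    # -> 1 (cell 1 matches all tests)
```

With noisy outcomes the same scoring tolerates a few flipped bits, which is why the analysis needs a minimum node density for high-probability localization.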
The performance of all the proposed algorithms is illustrated through Monte Carlo simulations.
|