481

Functional description and formal specification of a generic gateway.

Son, Chang Won. January 1988 (has links)
This dissertation is concerned with the design of a generic gateway that provides interoperability between dissimilar computer networks. The generic gateway is decomposed into subnetwork-dependent blocks and subnetwork-independent blocks. A subnetwork-dependent block is responsible for communicating with the nodes of its subnetwork; the subnetwork-independent block is responsible for interconnecting the subnetwork-dependent blocks. Communication between the dependent and independent blocks takes place through service access points, which are defined independently of any specific subnetwork. A formal specification of the generic gateway is given in LOTOS. The specification is exercised with a verifiable test method proposed in this dissertation, and its correctness is verified while the specified model is simulated. The major difference between conventional simulation and the verifiable test lies in the objective of the simulation: in the verifiable test method, semantic properties are examined during the simulation process. The tester can be either a human observer or another process.
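
As a rough sketch of the decomposition this abstract describes (illustrative Python, not the dissertation's LOTOS specification; all class and method names are invented), subnetwork-dependent blocks hand neutral PDUs to an independent interconnection block through service access points:

    # Hypothetical sketch of the generic-gateway decomposition: dependent
    # blocks talk to their own subnetwork, while a subnetwork-independent
    # block relays traffic between them via service access points (SAPs).

    class ServiceAccessPoint:
        """Subnetwork-neutral interface between dependent and independent blocks."""
        def __init__(self):
            self.queue = []

        def submit(self, pdu):           # dependent block hands a PDU upward
            self.queue.append(pdu)

        def deliver(self):               # independent block drains pending PDUs
            pending, self.queue = self.queue, []
            return pending

    class SubnetworkDependentBlock:
        """Handles the details of one specific subnetwork."""
        def __init__(self, name, sap):
            self.name, self.sap = name, sap

        def receive_from_subnetwork(self, payload):
            # Strip subnetwork-specific framing, pass a neutral PDU to the SAP.
            self.sap.submit({"source": self.name, "payload": payload})

        def send_to_subnetwork(self, pdu):
            print(f"[{self.name}] delivering {pdu['payload']!r} to local nodes")

    class SubnetworkIndependentBlock:
        """Interconnects the dependent blocks; knows nothing subnetwork-specific."""
        def __init__(self, blocks):
            self.blocks = blocks

        def relay(self):
            for block in self.blocks:
                for pdu in block.sap.deliver():
                    for dest in self.blocks:
                        if dest is not block:
                            dest.send_to_subnetwork(pdu)

    net_a = SubnetworkDependentBlock("net-A", ServiceAccessPoint())
    net_b = SubnetworkDependentBlock("net-B", ServiceAccessPoint())
    gateway = SubnetworkIndependentBlock([net_a, net_b])
    net_a.receive_from_subnetwork("hello from A")
    gateway.relay()   # prints: [net-B] delivering 'hello from A' to local nodes
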
482

Internetworking the defense data network with an integrated services digital network

Christoffersen, Daniel Arthur, 1964- January 1988 (has links)
The motivation behind this thesis is to develop a procedure for internetworking the Defense Data Network (DDN) with an Integrated Services Digital Network (ISDN). To solve this internetworking problem, an integrated gateway must be designed to compensate for incompatibilities between the two networks. This thesis approaches the problem by describing the two networks, DDN and ISDN, and by presenting a general approach to gateway design. This information is then combined into a detailed procedure for implementing a gateway that internetworks the DDN and ISDN, followed by a discussion of the practical aspects of the DDN/ISDN internetworking problem.
483

Analysis of approaches to synchronous faults simulation by surrogate propagation

Lee, Chang-Hwa, 1957- January 1988 (has links)
This thesis describes a new simulation technique, Synchronous Faults Simulation by Surrogate with Exception (SFSSE), first proposed by Dr. F. J. Hill and initiated under the direction of Xiolin Wang; this paper reports early results of that project. The Sequential Circuit Test Sequence System (SCIRTSS), an automatic test generation system developed at the University of Arizona, is used as the baseline against which the new simulator is compared. The major objective of this research is to analyze the results obtained with the new simulator, SFSSE, against those obtained with the parallel simulator in SCIRTSS. The results are listed in this paper to verify the superiority of the new simulation technique.
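
The SFSSE technique itself is not detailed in the abstract, but the parallel fault simulation it is compared against is a classic method that can be sketched briefly: each bit position of a machine word carries one copy of the circuit (bit 0 fault-free, the others each with one injected stuck-at fault), so a single pass of bitwise gate evaluation simulates many faults at once. A minimal Python illustration, with an invented two-gate circuit:

    # Illustrative parallel fault simulation (the classic baseline, not SFSSE):
    # bit i of every word is circuit copy i; copy 0 is fault-free, the others
    # each carry one injected stuck-at fault.

    COPIES = 4                      # copy 0: good circuit; copies 1-3: faulty
    MASK = (1 << COPIES) - 1

    def replicate(value):
        """Spread a logic value (0/1) across all circuit copies."""
        return MASK if value else 0

    def inject(word, copy, stuck_at):
        """Force signal `word` to `stuck_at` in one faulty copy."""
        bit = 1 << copy
        return (word | bit) if stuck_at else (word & ~bit)

    # Circuit: c = a AND b ; d = NOT c, with injected faults:
    #   copy 1: a stuck-at-0,  copy 2: b stuck-at-1,  copy 3: c stuck-at-0
    def simulate(a_val, b_val):
        a = inject(replicate(a_val), copy=1, stuck_at=0)
        b = inject(replicate(b_val), copy=2, stuck_at=1)
        c = inject(a & b, copy=3, stuck_at=0)
        return ~c & MASK            # d = NOT c, all copies at once

    d = simulate(1, 1)
    good = d & 1
    detected = [copy for copy in range(1, COPIES)
                if ((d >> copy) & 1) != good]
    print(f"output copies={d:04b}, faults detected by input (1,1): {detected}")
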
484

Free market communications

Biddiscombe, Martin David January 2000 (has links)
No description available.
485

Enforcing receiver-driven multicast congestion control using ECN-Nonce

Kulatunga, Chamil January 2009 (has links)
Providing robust congestion control is essential prior to Internet-wide deployment of long-lived multicast flows. This thesis therefore reviews currently proposed techniques and identifies key issues. It then proposes a new framework that enables the network to police and enforce correct congestion behaviour. Receiver-driven layered multicast congestion control is especially vulnerable to misbehaving receivers; countering these problems demands a new paradigm to enforce correct receiver behaviour. A framework based on the Explicit Congestion Notification (ECN) nonce is proposed, which preserves the ability to work with system/network heterogeneity in the multicast tree and mandates that layered multicast receivers feed back nonce reports in an arrayed form. Appropriate behaviour may be enforced in the new framework by introducing selected border routers in the multicast tree, known as "enforcers". This also avoids the practical limitation of requiring an upgrade to all edge routers, and allows local service providers to protect their own networks. The approach uses reactive policing: it cannot prevent receivers from joining under congestion, but reacts by preventing forwarding of specific groups within a reasonable delay. This approach eliminates the need for secure transfer of information from the sender to the enforcing routers and avoids a need to upgrade the IGMP and PIM protocols. The method is compatible with using ECN alongside Active Queue Management (AQM), which is beneficial when using Forward Error Correction (FEC) based reliability. This framework was analysed using simulation. The thesis also analyses some important related performance issues for congestion-controlled multicast transport protocols, such as excessive overshoot and poor congestion response in delay-diversified networks using the IETF NACK-Oriented Reliable Multicast (NORM) framework, and an unnecessary congestion response with heterogeneous receivers in the Asynchronous Layered Coding (ALC) framework.
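
As background on the mechanism the framework builds on, here is a minimal Python sketch of the ECN nonce idea (not the thesis's enforcer framework; all names are hypothetical): the sender places a random one-bit nonce on each packet, a router that marks congestion destroys that nonce, and a receiver concealing the mark must guess the missing bit, so its echoed nonce sum is wrong with probability 1/2 per report:

    import random

    # Minimal ECN-nonce sketch: a router marking congestion erases the packet's
    # random nonce bit, so a receiver concealing the mark has to guess it.

    def make_packets(n):
        return [{"seq": i, "nonce": random.randint(0, 1), "ce": False}
                for i in range(n)]

    def router(packets, congested):
        for p in packets:
            if p["seq"] in congested:
                p["ce"], p["nonce"] = True, None   # mark CE, erase the nonce
        return packets

    def honest_report(packets):
        """Admits congestion; sums only the intact nonce bits (mod 2)."""
        nonce_sum = sum(p["nonce"] or 0 for p in packets) % 2
        return {"congestion": any(p["ce"] for p in packets),
                "nonce_sum": nonce_sum}

    def cheating_report(packets):
        """Hides congestion, so every destroyed nonce bit must be guessed."""
        def guess(p):
            return p["nonce"] if p["nonce"] is not None else random.randint(0, 1)
        return {"congestion": False,
                "nonce_sum": sum(guess(p) for p in packets) % 2}

    pkts = make_packets(20)
    sent_sum = sum(p["nonce"] for p in pkts) % 2   # sender remembers its nonces
    router(pkts, congested={5, 11})
    report = cheating_report(pkts)
    if not report["congestion"] and report["nonce_sum"] != sent_sum:
        # Fires with probability 1/2 per report; repeated concealment is
        # therefore detected with probability approaching one over time.
        print("nonce mismatch: receiver concealed congestion")
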
486

Dynamic Resource Management in RSVP- Controlled Unicast Networks

Iyengar Prasanna, Venkatesan 12 1900 (has links)
Resources are said to be fragmented in the network when they are available only in non-contiguous blocks; calls are then dropped because they cannot find sufficient resources, and available resources may remain unutilized. In this thesis, the effect of resource fragmentation (RF) on RSVP-controlled networks was studied and new algorithms were proposed to reduce it. To minimize the effect of RF, resources in the network are dynamically redistributed across different paths to make them available in contiguous blocks, and extra protocol messages are introduced to facilitate this redistribution. The Dynamic Resource Redistribution (DRR) algorithm, when used in conjunction with RSVP, not only increased the number of calls accommodated by the network but also increased the overall resource utilization of the network. Issues such as how many resources need to be redistributed, from which call(s), and how these choices affect the redistribution process were investigated. Further, various simulation experiments were conducted to study the performance of the DRR algorithm on different network topologies with varying traffic characteristics.
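
A hedged sketch of the fragmentation problem and the redistribution idea (illustrative Python only; the thesis's actual DRR protocol messages are not reproduced here): total free bandwidth may suffice for a new call while no single path can hold it, until an existing reservation is migrated:

    # Illustrative resource-fragmentation scenario (not the thesis's DRR
    # protocol): a call needs all of its bandwidth on a single path, so total
    # free capacity can suffice while no one path can hold the call.

    paths = {"P1": {"capacity": 10, "calls": {"c1": 6}},
             "P2": {"capacity": 10, "calls": {"c2": 3}}}

    def free(p):
        return paths[p]["capacity"] - sum(paths[p]["calls"].values())

    def admit(call, demand):
        """Try direct admission, then try freeing a path by moving one call."""
        for p in paths:
            if free(p) >= demand:
                paths[p]["calls"][call] = demand
                return f"{call} admitted on {p}"
        for p in paths:                                # redistribution step
            for victim, bw in list(paths[p]["calls"].items()):
                for q in paths:
                    if q != p and free(q) >= bw:
                        del paths[p]["calls"][victim]
                        paths[q]["calls"][victim] = bw
                        if free(p) >= demand:
                            paths[p]["calls"][call] = demand
                            return f"{victim} moved to {q}; {call} admitted on {p}"
                        del paths[q]["calls"][victim]  # roll back a useless move
                        paths[p]["calls"][victim] = bw
        return f"{call} blocked"

    # Free bandwidth is P1=4, P2=7 (11 units total), yet neither path alone
    # fits an 8-unit call; moving c1 onto P2 frees P1 and the call is admitted.
    print(admit("c3", 8))
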
487

A generalized trust model using network reliability

Mahoney, Glenn R. 10 April 2008 (has links)
Economic and social activity is increasingly reflected in operations on digital objects and network-mediated interactions between digital entities. Trust is a prerequisite for many of these interactions, particularly if items of value are to be exchanged. The problem is that automated handling of trust-related concerns between distributed entities is a relatively new concept, and many existing capabilities are limited or application-specific, particularly in the context of informal or ad-hoc relationships. This thesis contributes a new family of probabilistic trust metrics based on network reliability, called the Generic Reliability Trust Model (GRTM). This approach to trust modelling is demonstrated with a new, flexible trust metric called Hop-count Limited Transitive Trust (HLTT), and is also applied to an implementation of the existing Maurer Confidence Valuation (MCV) trust metric. All metrics in the GRTM framework utilize a common probabilistic trust model which is the solution of a general reliability problem. Two generalized algorithms are presented for computing GRTM, based on inclusion-exclusion and factoring. A conservative approximation heuristic is defined which leads to more practical algorithm performance. A Java-based implementation of these algorithms for the HLTT and MCV trust metrics is used to demonstrate the impact of the approximation. An XML-based trust-graph representation and a random power-law trust graph generator are used to simulate large informal trust networks.
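
The abstract names inclusion-exclusion as one of the two GRTM algorithms; below is a minimal Python sketch of that computation over an invented four-edge trust graph (a sketch of the general technique under independence assumptions, not the thesis's Java implementation):

    from itertools import combinations

    # Sketch: two-terminal "trust reliability" by inclusion-exclusion over the
    # source-to-target paths. edge_prob[e] is the independent probability that
    # edge e (one trust statement) holds; all values here are invented.

    edge_prob = {("A", "B"): 0.9, ("B", "D"): 0.8,
                 ("A", "C"): 0.7, ("C", "D"): 0.95}

    paths = [[("A", "B"), ("B", "D")],        # A -> B -> D
             [("A", "C"), ("C", "D")]]        # A -> C -> D

    def union_probability(paths):
        """P(at least one path has all edges trusted), by inclusion-exclusion."""
        total = 0.0
        for k in range(1, len(paths) + 1):
            for subset in combinations(paths, k):
                edges = set().union(*map(set, subset))  # distinct edges in subset
                p = 1.0
                for e in edges:
                    p *= edge_prob[e]
                total += (-1) ** (k + 1) * p
        return total

    # P = 0.72 + 0.665 - 0.72 * 0.665 = 0.9062
    print(f"trust(A -> D) = {union_probability(paths):.4f}")

A hop-count limit in the spirit of HLTT would simply filter `paths` by length before the computation; the exponential subset enumeration is what motivates the approximation heuristic the abstract mentions.
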
488

Uniform Access to Signal Data in a Distributed Heterogeneous Computing Environment

Jeffreys, Steven 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / One of the problems in analyzing data is getting the data to the analysis system. The data can be stored in a variety of ways, from simple disk and tape files to a sophisticated relational database system. The variety of storage techniques requires the data analysis system to be aware of the details of how the data may be accessed (e.g., file formats, SQL statements, BBN/Probe commands, etc.). The problem is much worse in a network of heterogeneous machines; besides the details of each storage method, the analysis system must handle the details of network access, and may have to translate data from one vendor format to another as it moves from machine to machine. This paper describes a simple and powerful software interface to telemetry data in a distributed heterogeneous networking environment, and how that interface is being used in a diagnostic expert system. In this case, the interface connects the expert system, running on a Sun UNIX machine, with the data on a VAX/VMS machine. The interface exists as a small subroutine library that can be linked into a variety of data analysis systems. The interface insulates the expert system from all details of data access, providing transparent access to data across the network. A further benefit of this approach is that the data source itself can be a sophisticated data analysis system that may perform some processing of the data, again transparently to the user of the interface. The interface subroutine library can be readily applied to a wide variety of data analysis applications.
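
A hedged sketch of the kind of insulating interface the paper describes (the library's real subroutine names are not given in the abstract, so everything below is hypothetical Python): the analysis code sees only open/read/close, while each concrete source hides its own storage and network details:

    from abc import ABC, abstractmethod

    # Hypothetical insulating data-access interface: the analysis system calls
    # only open/read/close and never sees file formats, SQL, or network
    # transport, which live entirely inside the concrete sources.

    class SignalSource(ABC):
        @abstractmethod
        def open(self, channel: str) -> None: ...
        @abstractmethod
        def read(self, count: int) -> list[float]: ...
        @abstractmethod
        def close(self) -> None: ...

    class LocalFileSource(SignalSource):
        def open(self, channel):
            self._fh = open(f"{channel}.dat")     # local flat-file details
        def read(self, count):
            return [float(self._fh.readline()) for _ in range(count)]
        def close(self):
            self._fh.close()

    class RemoteSource(SignalSource):
        """Would hide network transport and vendor format translation."""
        def open(self, channel):
            self._channel = channel               # e.g. connect to a remote host
        def read(self, count):
            return [0.0] * count                  # placeholder remote fetch
        def close(self):
            pass

    def analyze(source: SignalSource, channel: str):
        source.open(channel)
        samples = source.read(100)
        source.close()
        return sum(samples) / len(samples)        # analysis never sees details

    print(analyze(RemoteSource(), "temperature"))

Swapping `LocalFileSource` for `RemoteSource` changes nothing in `analyze`, which is the transparency property the paper claims for its subroutine library.
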
489

Risk assessment of the Naval Postgraduate School gigabit network

Shumaker, Todd, Rowlands, Dennis 09 1900 (has links)
Approved for public release; distribution is unlimited / This research thoroughly examines the current security posture of the Naval Postgraduate School Gigabit Network, identifies possible threats and vulnerabilities, and recommends appropriate safeguards to counter them. The research covers the aspects of computer security, physical security, personnel security, and communications security applicable to the overall security of both the .mil and .edu domains. The goal of the research was to ensure that the campus network operates with sufficient safeguards to adequately protect confidentiality, integrity, availability, and authenticity against both insider and outsider threats. Risk analysis was performed by assessing all possible threat and vulnerability combinations to determine the likelihood of exploitation and the potential impact exploitation could have on the system, the information, and the mission of the Naval Postgraduate School. The results of the risk assessment performed on the network are to be used by the Designated Approving Authority of the Naval Postgraduate School Gigabit Network when deciding whether to accredit the system. / Civilian, Research Associate
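
The risk-analysis procedure described, combining likelihood of exploitation with potential impact over threat/vulnerability pairs, can be illustrated with a small sketch (made-up scores and pairs, not the thesis's findings):

    # Illustrative risk scoring over threat/vulnerability pairs (invented data,
    # not the thesis's assessment): risk = likelihood x impact on 1-5 scales.

    pairs = [
        ("outsider scan",  "unpatched web server", 4, 3),
        ("insider misuse", "weak access controls", 2, 5),
        ("malware",        "open .edu perimeter",  3, 4),
    ]

    assessed = sorted(
        ((threat, vuln, likelihood * impact)
         for threat, vuln, likelihood, impact in pairs),
        key=lambda row: row[2], reverse=True)

    for threat, vuln, risk in assessed:
        level = "HIGH" if risk >= 12 else "MEDIUM" if risk >= 6 else "LOW"
        print(f"{risk:2d} {level:6s} {threat} via {vuln}")
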
490

Computer network operations methodology

Vega, Juan Carlos 03 1900 (has links)
Approved for public release; distribution is unlimited / All nations face increasing tension between exploiting Computer Network Operations (CNO) in the military sphere and protecting the global information grid. The United States is moving apace to develop doctrines and capabilities that will allow it to exploit cyberspace for military advantage. Within the broad rubric of Information Operations, increasing effort is devoted to integrating CNO into routine military planning. At the same time, nations are becoming increasingly concerned about the dependency of their militaries, governments, economies, and societies on the networked information systems that are emerging as the central nervous systems of post-industrial society. The armed forces' desire to exploit and use CNO to their advantage is the central argument for the concept developed here. This new weapons platform, CNO, can be clearly defined so that leaders understand the terms, limitations, and capabilities of cyber operations. A methodology incorporating doctrine can be created to identify the Rules of Engagement (ROE) as well as the CNO components. The CNO area of operations and area of interest reach far beyond the typical battle space; the battle space has evolved and now penetrates every element of military operations that uses computers and networks. / Captain (Promotable), United States Army
