301

Multicast techniques for bandwidth-demanding applications in overlay networks

Tsang, Cheuk-man, Mark., 曾卓敏. January 2008 (has links)
published_or_final_version / abstract / Computer Science / Doctoral / Doctor of Philosophy
302

Quality of service routing with path information aggregation

Tam, Wing-yan., 譚泳茵. January 2006 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
303

Metamodels for describing the structure interaction of layered software systems.

Vincent, Stephen George. January 1988 (has links)
This research identifies a current and future need in the realm of information systems development which has surfaced as a result of layered architectures and software reuse. An analysis methodology based upon two three-dimensional metamodels which correspond to the principal aspects of system architecture, structure and communication, is developed. Each metamodel can be viewed as having three planes which represent increasing abstractions away from actual source code. For example, with regard to the structure metamodel, the lowest plane corresponds to actual source code structures written in a specific computer language, the middle plane represents the general form of the structure available in that language, and the top plane represents the general form of structures available in any language. An object-oriented viewpoint was adopted in order to allow the expression of the relationships between entities found on a single plane of a metamodel, as well as the expression of the relationships between entities found on different planes. The metamodels provide a framework and methodology for discerning the structure and communication mechanisms employed in software source code as well as a framework from within which behavioral models can be developed.
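The three-plane structure metamodel described in this abstract can be sketched as a small object model. The entity names and links below are illustrative assumptions, not the thesis's actual notation:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A structural entity on one plane of the metamodel."""
    name: str
    plane: int  # 2 = any-language form, 1 = language-specific form, 0 = source code
    abstracts: list["Entity"] = field(default_factory=list)  # entities on the plane below

# Plane 2: the general form of a structure available in any language
loop = Entity("IterationConstruct", plane=2)
# Plane 1: the general form of that structure in a specific language (here, C)
c_for = Entity("C for-statement", plane=1)
loop.abstracts.append(c_for)
# Plane 0: an actual source-code structure
src = Entity("for (i = 0; i < n; i++) { ... }", plane=0)
c_for.abstracts.append(src)

def trace(e: Entity) -> list[str]:
    """Walk abstraction links from the top plane down to source code."""
    lines = [f"plane {e.plane}: {e.name}"]
    for child in e.abstracts:
        lines.extend(trace(child))
    return lines
```

The object-oriented viewpoint the abstract mentions corresponds to the cross-plane `abstracts` links: each plane's entities are related to their refinements one plane down.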
304

Consul: A communication substrate for fault-tolerant distributed programs.

Mishra, Shivakant. January 1992 (has links)
As human dependence on computing technology increases, so does the need for computer system dependability. This dissertation introduces Consul, a communication substrate designed to help improve system dependability by providing a platform for building fault-tolerant, distributed systems based on the replicated state machine approach. The key issues in this approach--ensuring replica consistency and reintegrating recovering replicas--are addressed in Consul by providing abstractions called fault-tolerant services. These include a broadcast service to deliver messages to a collection of processes reliably and in some consistent order, a membership service to maintain a consistent system-wide view of which processes are functioning and which have failed, and a recovery service to recover a failed process. Fault-tolerant services are implemented in Consul by a unified collection of protocols that provide support for managing communication, redundancy, failures, and recovery in a distributed system. At the heart of Consul is Psync, a protocol that provides for multicast communication based on a context graph that explicitly records the partial (or causal) order of messages. This graph also serves as the basis for novel algorithms used in the ordering, membership, and recovery protocols. The ordering protocol combines the semantics of the operations encoded in messages with the partial order provided by Psync to increase the concurrency of the application. Similarly, the membership protocol exploits the partial ordering to allow different processes to conclude that a failure has occurred at different times relative to the sequence of messages received, thereby reducing the amount of synchronization required. The recovery protocol combines checkpointing with the replay of messages stored in the context graph to recover the state of a failed process. 
Moreover, this collection of protocols is implemented in a highly configurable manner, thus allowing a system builder to easily tailor an instance of Consul from this collection of building-block protocols. Consul is built in the x-kernel and executes standalone on a collection of Sun 3 workstations. Initial testing and performance studies have been done using two applications: a replicated directory and a distributed word game. These studies show that the semantics-based order is more efficient than a total order in many situations, and that the overhead imposed by the checkpointing, membership, and recovery protocols is insignificant.
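The context-graph ordering at the heart of Psync can be illustrated with a small sketch: delivering messages in any order consistent with the recorded partial (causal) order amounts to a topological sort of the graph. This is an illustrative reconstruction using Kahn's algorithm, not Consul's actual implementation:

```python
from collections import defaultdict, deque

def causal_order(msgs):
    """Return one delivery order consistent with the partial (causal) order.

    `msgs` maps a message id to the set of ids it causally depends on
    (its predecessors in the context graph). Any topological order of
    this graph respects causality; concurrent messages may be delivered
    in either order, which is the freedom the ordering protocol exploits.
    """
    indeg = {m: len(deps) for m, deps in msgs.items()}
    succs = defaultdict(list)
    for m, deps in msgs.items():
        for d in deps:
            succs[d].append(m)
    ready = deque(sorted(m for m, d in indeg.items() if d == 0))
    order = []
    while ready:
        m = ready.popleft()
        order.append(m)
        for s in sorted(succs[m]):
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

# Messages b and c are concurrent (both follow a); d follows both.
graph = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
```

Because `b` and `c` are concurrent, a semantics-aware ordering protocol may deliver them in either order, whereas a total-order protocol must synchronise on one.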
305

Social and technical issues of IP-based multi-modal semi-synchronous communication: rural telehealth communication in South Africa.

Vuza, Xolisa January 2005 (has links)
Most rural areas of developing countries are faced with problems such as a shortage of doctors in hospitals, illiteracy and poor power supply. Because of these issues, Information and Communication Technology (ICT) is often seen as a useful solution for these areas. Unfortunately, the social environment is often ignored, which leads to inappropriate systems being developed for these areas. The aims of this thesis were, firstly, to learn how a communication system can be built for a rural telehealth environment in a developing country and, secondly, to learn how users can be supported to use such a system.
306

Internet congestion control for variable-rate TCP traffic

Biswas, Md. Israfil January 2011 (has links)
The Transmission Control Protocol (TCP) has been designed for reliable data transport over the Internet. The performance of TCP is strongly influenced by its congestion control algorithms, which limit the amount of traffic a sender can transmit based on end-to-end estimates of available capacity. These algorithms proved successful in environments where applications' rate requirements can be easily anticipated, as is the case for traditional bulk data transfer or interactive applications. However, an important new class of Internet applications has emerged that exhibits significant variations of transmission rate over time. Variable-rate traffic poses a new challenge for congestion control, especially for applications that need to share the limited capacity of a bottleneck over a long-delay Internet path (e.g., paths that include satellite links). This thesis first analyses the TCP performance of bursty applications that do not send data continuously, but generate data in bursts separated by periods in which little or no data is sent. Simulation analysis shows that standard TCP methods do not provide efficient support for bursty applications that produce variable-rate traffic, especially over long-delay paths. Although alternative forms of congestion control, such as TCP-Friendly Rate Control and the Datagram Congestion Control Protocol, have been proposed, they did not achieve widespread deployment. Therefore many current applications that rely upon the User Datagram Protocol are not congestion controlled. The use of non-standard or proprietary methods decreases the effectiveness of Internet congestion control and poses a threat to Internet stability. Solutions are therefore needed to allow bursty applications to use TCP. Chapter three evaluates Congestion Window Validation (CWV), an IETF experimental specification that was proposed to improve support for bursty applications over TCP.
It concludes that CWV is too conservative to support many bursty applications and does not provide an incentive to encourage use by application designers. Instead, application designers often avoid generating variable-rate traffic by padding idle periods, which has been shown to waste network resources. CWV is therefore shown not to provide an acceptable solution for variable-rate traffic. In response to this shortfall, a new modification to TCP, TCP-JAGO, is proposed. This allows variable-rate traffic to restart quickly after an inactive (i.e., idle) period and to utilise available network resources effectively while sending at a lower rate than the available rate (i.e., during an application-limited period). The analysis in Chapter five shows that JAGO provides faster convergence to a steady-state rate and improves throughput by utilising the network more efficiently. TCP-JAGO is also shown to provide an appropriate response when congestion is experienced after restart. Variable-rate TCP traffic can also be impacted by the Initial Window (IW) algorithm at the start, or during the restart, of a session. Chapter six considers this problem, where TCP has no prior indication of the network state. A recent proposal for a larger initial window is analysed, and the issues and advantages of using a large IW over a range of scenarios are discussed. The thesis concludes by presenting recommendations to improve TCP support for bursty applications. This also provides an incentive for application designers to choose TCP for variable-rate traffic.
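The conservatism of standard CWV that this abstract critiques can be sketched in a few lines. This is a simplified model in the spirit of RFC 2861 (halve the congestion window once per RTO of idle time, down to a restart floor), not the thesis's or the RFC's exact algorithm:

```python
def cwnd_after_idle(cwnd, idle_time, rto, restart_window=4):
    """Decay the congestion window (in segments) after an idle period.

    Congestion Window Validation in the spirit of RFC 2861: halve cwnd
    once for every RTO that elapses while the sender is idle, never
    dropping below the restart window. A long idle period therefore
    forces a bursty sender to rebuild its window almost from scratch,
    which is the conservatism the thesis argues penalises such traffic.
    """
    elapsed = 0.0
    while elapsed + rto <= idle_time and cwnd > restart_window:
        cwnd = max(restart_window, cwnd / 2)
        elapsed += rto
    return cwnd
```

For example, a sender idle for three RTOs sees a 64-segment window cut to 8; after ten RTOs it is back at the 4-segment floor, so the next burst restarts in slow-start territory.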
307

Connecting at a time of disconnection : the development and implementation of websites by non-profits in the field of separation and divorce

VanderSluis, Dan. 10 April 2008 (has links)
No description available.
308

Architectures for device aware network

Chung, Wai Kong. 03 1900 (has links)
In today's heterogeneous computing environment, a wide variety of computing devices with varying capabilities need to access information in the network. Existing networks are not able to differentiate between device capabilities, and indiscriminately send information to end-devices without regard to their ability to use it. The goal of a device-aware network (DAN) is to match the capability of the end-devices to the information delivered, thereby optimizing network resource usage. In the battlefield, all resources - including time, network bandwidth and battery capacity - are very limited. A device-aware network avoids the waste that happens in current, device-ignorant networks. By eliminating unusable traffic, a device-aware network reduces the time the end-devices spend receiving extraneous information, and thus saves time and conserves battery life. In this thesis, we evaluated two potential DAN architectures, proxy-based and router-based approaches, based on the key requirements we identified. To demonstrate the viability of DAN, we built a prototype using a hybrid of the two architectures. The key elements of our prototype include a DAN browser, a DAN Lookup Server and a DAN Processing Unit (DPU). We have demonstrated how our architecture can enhance overall network utility by ensuring that only appropriate content is delivered to the end-devices.
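The capability-matching step of a device-aware network can be sketched as a simple proxy-side filter. The capability fields and content model below are hypothetical illustrations, not the prototype's actual API:

```python
def adapt_response(content, device):
    """Drop or downscale content parts the device cannot use.

    A minimal sketch of the proxy-style DAN element: the lookup step is
    modelled as a capability dict for the requesting device. Unusable
    parts (e.g., video to a device with no video support) are never
    sent; oversized images are scaled down rather than dropped.
    """
    out = {}
    for part, meta in content.items():
        kind = meta["type"]
        if kind == "video" and not device.get("video", False):
            continue  # unusable traffic: eliminate it entirely
        if kind == "image" and meta["width"] > device.get("max_width", 0):
            out[part] = {**meta, "width": device["max_width"], "scaled": True}
            continue
        out[part] = meta
    return out

page = {
    "clip":  {"type": "video", "width": 1280},
    "map":   {"type": "image", "width": 1024},
    "brief": {"type": "text",  "width": 0},
}
handheld = {"video": False, "max_width": 320}
```

Applied to `page` for `handheld`, the filter drops the video clip, scales the map to 320 pixels, and passes the text through unchanged, which is exactly the bandwidth saving the abstract targets.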
309

Development of future course content requirements supporting the Department of Defense's Internet Protocol version 6 transition and implementation

Kay, James T. 06 1900 (has links)
Approved for public release, distribution unlimited / This thesis will focus on academia, specifically the Naval Postgraduate School, and its requirement to implement an education program that allows facilitators to properly inform future students on the gradual implementation of Internet Protocol version 6 (IPv6) technology while phasing out Internet Protocol version 4 (IPv4) from the current curriculum as the transition to IPv6 progresses. The DoD's current goal is to complete the transition of all DoD networks from IPv4 to IPv6 by fiscal year 2008. With this deadline quickly approaching, it is imperative that a plan to educate military and DoD personnel be implemented in the very near future. It is my goal to research and suggest a program that facilitators can use that will show the similarities, changes, advantages, and challenges that exist for the transition. / US Marine Corps (USMC) author.
310

Optimization of resources allocation for H.323 endpoints and terminals over VoIP networks

27 January 2014 (has links)
M.Phil. (Electrical & Electronic Engineering) / Without any doubt, the entire range of voice and TV signals will migrate to the packet network. The universal addressing scheme of the Internet Protocol (IP) and the interfacing framing structure of Ethernet are the main reasons behind the success of TCP/IP and Ethernet as packet network and network access mechanisms. Unfortunately, the very success of the Internet has been a problem for real-time traffic such as voice, leading to further studies in the domain of Teletraffic Engineering; this, together with the lack of a resource reservation mechanism in Ethernet, a serious shortcoming in a switching mechanism, has raised significant challenges for such a migration. In that context, the ITU-T has released a series of Recommendations under the umbrella of H.323 to guarantee the required Quality of Service (QoS) for such services. Although utilisation alone is not a good measure of traffic quality or QoS, we propose, on the one hand, a multiplexing scheme with a queuing solution that takes into account the positive correlations of the packet arrival process observed at the multiplexer input, with the aim of optimising buffer and bandwidth utilisation; and, on the other hand, an ITU-T H.323 Endpoints and Terminals configuration that can sustain such a multiplexing scheme. We consider solutions of the models from the M/M/1 queue up to the G/G/1 queue, based on Kolmogorov's analysis, to justify our approach. This solution, the diffusion approximation, is the limit of the fluid process, which has seen little use as a queuing solution in the networking domain.
Driven by the results of the fluid method, and by the Gaussian distribution resulting from the diffusion approximation, applying the asymptotic properties of Maximum Likelihood Estimation (MLE) via the central limit theorem made it possible to capture the fluctuations and thereby filter out the positive correlations in the queue system. The result is a queue system able to serve 1 erlang of traffic intensity (100% of transmission link capacity) without extra delay, with a queue length that is 60% of the ordinary Poisson queue length in terms of buffer utilisation.
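For contrast, the ordinary Poisson (M/M/1) queue against which the abstract's scheme is measured has closed-form metrics that diverge as the load approaches 1 erlang; a minimal sketch:

```python
def mm1_metrics(lam, mu):
    """Mean queue length and waiting time for an ordinary M/M/1 queue.

    rho = lam/mu is the traffic intensity in erlangs; the mean number
    in system is L = rho / (1 - rho), and the mean time in system
    follows from Little's law, W = L / lam. Both diverge as rho -> 1,
    which is the behaviour the proposed multiplexing scheme is claimed
    to avoid.
    """
    rho = lam / mu
    if rho >= 1:
        raise ValueError("M/M/1 is unstable at or above 1 erlang")
    L = rho / (1 - rho)
    W = L / lam
    return rho, L, W
```

At rho = 0.5 the mean system occupancy is just 1 customer, but at rho = 0.99 it is 99; an ordinary Poisson queue therefore cannot be driven to 1 erlang without unbounded delay.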
