301

Network planning and resource allocation for project control

Arden, Nicholas Russel January 1968 (has links)
The problems involved in network planning for project control are examined with reference to the published work in this field. Various solutions are described and compared. A detailed investigation is made of the standard assumptions concerning expected activity durations. The different approaches to the estimation of these times are shown to be inconsistent with their areas of application. A global heuristic solution to the problem of finding the minimum value of the maximum resource requirement during a project is presented. This procedure uses a modified resource profile. The results of a comparison between this solution and a standard local solution indicate a slight improvement with a considerable increase in computing time. The new approach permits easier subminimization. A combination of these methods is proposed. / Science, Faculty of / Computer Science, Department of / Graduate
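
The abstract does not reproduce the heuristic itself, so the following is only a minimal sketch of the general problem it addresses: shifting activities within their float to reduce the peak resource requirement. The activity fields and the greedy ordering are assumptions for illustration, not the thesis's modified-resource-profile procedure.

```python
# Illustrative sketch only: a simple greedy levelling heuristic for the general
# problem described above (minimising the peak resource requirement), not the
# thesis's specific "modified resource profile" procedure.

def level_resources(activities):
    """activities: list of dicts with 'es' (earliest start), 'ls' (latest start),
    'dur' (duration >= 1) and 'req' (resource units per period), all integers."""
    horizon = max(a['ls'] + a['dur'] for a in activities)
    profile = [0] * horizon                      # resource usage per period
    schedule = {}
    # Place the most resource-intensive activities first.
    for i, a in sorted(enumerate(activities),
                       key=lambda p: -p[1]['req'] * p[1]['dur']):
        best_start, best_peak = a['es'], None
        for start in range(a['es'], a['ls'] + 1):    # shift within total float
            peak = max(profile[t] + a['req'] for t in range(start, start + a['dur']))
            if best_peak is None or peak < best_peak:
                best_start, best_peak = start, peak
        for t in range(best_start, best_start + a['dur']):
            profile[t] += a['req']
        schedule[i] = best_start
    return schedule, max(profile)

acts = [{'es': 0, 'ls': 2, 'dur': 3, 'req': 2},
        {'es': 0, 'ls': 3, 'dur': 2, 'req': 3},
        {'es': 1, 'ls': 4, 'dur': 2, 'req': 1}]
print(level_resources(acts))    # start times and the resulting peak requirement
```
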
302

A new methodology for OSI conformance testing based on trace analysis

Wvong, Russil January 1990 (has links)
This thesis discusses the problems of the conventional ISO 9646 methodology for OSI conformance testing, and proposes a new methodology based on trace analysis. In the proposed methodology, a trace analyzer is used to determine whether the observed behavior of the implementation under test (IUT) is valid or invalid. This simplifies test cases dramatically, since they now need only specify the expected behavior of the IUT; unexpected behavior is checked by the trace analyzer. Test suites become correspondingly smaller. Because of this reduction in size and complexity, errors in test suites can be found and corrected far more easily. As a result, the reliability and the usefulness of the conformance testing process are greatly enhanced. In order to apply the proposed methodology, trace analyzers are needed. Existing trace analyzers are examined, and found to be unsuitable for OSI conformance testing. A family of new trace analysis algorithms is presented and proved. To verify the feasibility of the proposed methodology, and to demonstrate its benefits, it is applied to a particular protocol, the LAPB protocol specified by ISO 7776. The design and implementation of a trace analyzer for LAPB are described. The conventional ISO 8882-2 test suite for LAPB, when rewritten to specify only the expected behavior of the IUT, is found to be more than an order of magnitude smaller. / Science, Faculty of / Computer Science, Department of / Graduate
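
As an illustration of the trace-analysis idea (checking observed behaviour against the expected behaviour of the protocol), here is a minimal sketch in which the protocol is modelled as a finite-state machine; the toy connect/disconnect protocol and event names are hypothetical, not the LAPB analyser described in the thesis.

```python
# Minimal sketch of trace analysis: check an observed event trace against a
# protocol modelled as a finite-state machine. The tiny connect/disconnect
# protocol below is hypothetical.

PROTOCOL = {                      # state -> {observed event -> next state}
    "idle":      {"CONNECT_req": "waiting"},
    "waiting":   {"CONNECT_conf": "connected", "DISCONNECT_ind": "idle"},
    "connected": {"DATA": "connected", "DISCONNECT_req": "idle"},
}

def analyse(trace, start="idle"):
    """Return (True, None) if the trace is valid behaviour, else
    (False, index_of_first_invalid_event)."""
    state = start
    for i, event in enumerate(trace):
        if event not in PROTOCOL.get(state, {}):
            return False, i               # behaviour not allowed by the protocol
        state = PROTOCOL[state][event]
    return True, None

print(analyse(["CONNECT_req", "CONNECT_conf", "DATA", "DISCONNECT_req"]))  # valid
print(analyse(["CONNECT_req", "DATA"]))                                    # invalid at index 1
```
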
303

Implementation of the Cambridge Ring protocols on the Sun workstation

Chan, Linda January 1985 (has links)
As Local Area Networks gain momentum in Computer Science research, protocol implementations are generally characterized by factors such as efficiency, reliability, error recovery, and synchronism; how well these goals can be achieved, however, depends heavily on the facilities available in the implementation environment. Given the recent popularity of message passing and concurrent processes, the UNIX 4.2bsd operating system, with its interprocess communication facility, is chosen as the implementation environment for the Cambridge Ring's Basic Block and Byte Stream Protocols. The Basic Block Protocol, implemented as a device driver in the system kernel, is the lowest-level protocol and provides an unreliable datagram service, while the Byte Stream Protocol, implemented as multiple concurrent processes in user space, provides a reliable, full-duplex virtual circuit service on top of the Basic Block Protocol. This thesis describes the protocol implementation on a 68000-based SUN workstation and discusses the lessons learnt from the experiment. The multiple-concurrent-process approach is found to work adequately for a small number of clients, but incurs high overhead when the number of clients is large. / Science, Faculty of / Computer Science, Department of / Graduate
304

Specification-verification of protocols : the significant event temporal logic technique

Tsiknis, George January 1985 (has links)
This thesis addresses the problem of protocol verification. We first present a brief review of the existing specification methods for communication protocols, with emphasis on the hybrid techniques. The alternating bit protocol is specified in ISO/FDT, BBN/FST and UNISPEX to provide a comparison between three interesting hybrid models of protocol specification. A method for applying unbounded-state temporal logic to verify a protocol specified in a hybrid technique (in particular FDT) is outlined. Finally, a new specification and verification method called SETL is proposed, which is based on event sequences and temporal logic. To illustrate the method, two data transfer protocols, namely the stop-and-wait and alternating bit protocols, are specified in SETL and verified. We demonstrate that SETL is a generalization of the hybrid techniques, that it is sound, and that it can be semi-automated. / Science, Faculty of / Computer Science, Department of / Graduate
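
For readers unfamiliar with the style of property involved, the following are typical temporal-logic requirements for the alternating bit protocol, written in standard linear temporal logic rather than SETL's own notation (which the abstract does not reproduce).

```latex
\begin{align*}
  \text{(liveness)} &\quad \Box\,\bigl(\mathit{send}(m_i) \rightarrow \Diamond\,\mathit{deliver}(m_i)\bigr)
      && \text{every message sent is eventually delivered}\\
  \text{(safety)}   &\quad \Box\,\bigl(\mathit{deliver}(m_i) \rightarrow \bigcirc\Box\,\neg\mathit{deliver}(m_i)\bigr)
      && \text{no message is delivered twice}
\end{align*}
```
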
305

Implementation of a protocol validation and synthesis system

Tong, Darren Pong-Choi January 1985 (has links)
VALISYN, an automated system for the validation and synthesis of error-free protocols, has been implemented in the C language. It assists designers in the detection and prevention of various kinds of potential design errors, such as state deadlocks, non-executable interactions, unspecified receptions and state ambiguities. The technique employed is a stepwise application of a set of production rules which guarantee complete reception capability. These rules are implemented in a tracking algorithm, which prevents the formation of non-executable interactions and unspecified receptions, and which monitors the existence of state deadlocks and state ambiguities. The implementation of VALISYN is discussed, and a number of protocol validation and synthesis examples are presented to illustrate its use and features. / Science, Faculty of / Computer Science, Department of / Graduate
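
The following sketch illustrates the kind of check such a system performs: an exhaustive reachability analysis of two communicating finite-state machines that flags unspecified receptions and state deadlocks. The toy request/acknowledge protocol is hypothetical, and VALISYN's production rules and synthesis steps are not reproduced.

```python
# Reachability analysis of two communicating finite-state machines, flagging
# unspecified receptions and deadlocks. Toy protocol: "-x" sends x, "+x" receives x.
from collections import deque

A = {0: [("-req", 1)], 1: [("+ack", 0)]}
B = {0: [("+req", 1)], 1: [("-ack", 0)]}

def analyse(A, B):
    start = (0, 0, (), ())                 # (state_A, state_B, queue A->B, queue B->A)
    seen, frontier, errors = {start}, deque([start]), []
    while frontier:
        sa, sb, qab, qba = frontier.popleft()
        moves = []
        for act, nxt in A.get(sa, []):
            if act[0] == "-":                              # A sends
                moves.append((nxt, sb, qab + (act[1:],), qba))
            elif qba and qba[0] == act[1:]:                # A receives head of its queue
                moves.append((nxt, sb, qab, qba[1:]))
        for act, nxt in B.get(sb, []):
            if act[0] == "-":                              # B sends
                moves.append((sa, nxt, qab, qba + (act[1:],)))
            elif qab and qab[0] == act[1:]:                # B receives head of its queue
                moves.append((sa, nxt, qab[1:], qba))
        if not moves and (qab or qba):
            errors.append(("unspecified reception", (sa, sb, qab, qba)))
        elif not moves:
            errors.append(("deadlock", (sa, sb)))
        for g in moves:
            if g not in seen:
                seen.add(g)
                frontier.append(g)
    return errors

print(analyse(A, B))    # [] for this toy protocol: no deadlocks or unspecified receptions
```
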
306

Network synthesis problem : cost allocation and algorithms

Hojati, Mehran January 1987 (has links)
This thesis is concerned with a network design problem which is referred to in the literature as the network synthesis problem. The objective is to design an undirected network, at a minimum cost, which satisfies known requirements, i.e., lower bounds on the maximum flows, between every pair of nodes. If the requirements are to be satisfied nonsimultaneously, i.e., one at a time, the problem is called the nonsimultaneous network synthesis problem, whereas if the requirements are to be satisfied simultaneously, the problem is called the simultaneous network synthesis problem. The total construction cost of the network is the sum of the construction cost of capacities on the edges, where the construction cost of a unit capacity is fixed for any edge, independent of the size of the capacity, but it may differ from edge to edge. The capacities are allowed to assume noninteger nonnegative values. The simultaneous network synthesis problem was efficiently solved by Gomory and Hu [60], whereas the nonsimultaneous network synthesis problem can only be formulated and solved as a linear program with a large number of constraints. However, the special equal-cost case, i.e., when the unit construction costs are equal across the edges, can be efficiently solved, see Gomory and Hu [60], by some combinatorial method, other than linear programming. A cost allocation problem which is associated with the network synthesis problem would naturally arise, if we assume that the various nodes in the network represent different users or communities. In this case, we need to find a fair method for allocating the construction cost of the network among the different users. An interesting generalization of the nonsimultaneous network synthesis problem, the Steiner network synthesis problem, is derived, when only a proper subset of the nodes have positive requirements from each other. The thesis is concerned with two issues. First, we will analyze the cost allocation problems arising in the simultaneous and the equal cost nonsimultaneous network synthesis problem. Secondly, we will consider the Steiner network synthesis problem, with particular emphasis on simplifying the computations in some special cases, not considered before. We will employ cooperative game theory to formulate the cost allocation problems, and we will prove that the derived games are 'concave', which implies the existence of the core and the inclusion of the Shapley value and the nucleolus in the core, and then we will present irredundant representations of the cores. For the equal cost nonsimultaneous network synthesis game, we will use the irredundant representation of the core to provide an explicit closed form expression for the nucleolus of the game, when the requirement structure is a spanning tree; then, we will develop, in a special case, a decomposition of the game, which we will later use to efficiently compute the Shapley value of the game when the requirement structure is a tree; the decomposition will also be used for the core and the nucleolus of the game in the special case. For the simultaneous network synthesis game, we will also use the irredundant representation of the core to derive an explicit closed form expression for the nucleolus, and we will also decompose the core of this game in the special case, and prove that the Shapley value and the nucleolus coincide. 
Secondly, for the Steiner network synthesis problem, two conceptually different contributions have been made, one being a simplifying transformation, and the other being the case when the network has to be embedded in (i.e., restricted to) some special graphs. Namely, when the requirement structure is sparse (because there are only a few demand nodes and the rest are just intermediate nodes) and the positive requirements are equal, we will employ a transformation procedure to simplify the computations. This will enable us to efficiently solve the Steiner network synthesis problem with five or less nodes which have equal positive requirements from each other. Finally, when the solution network to the Steiner network synthesis problem is to be embedded in (restricted to) some special graphs, namely trees, rings (circles), series-parallel graphs, or M₂ and M₃-free graphs, we will provide combinatorial algorithms which are expected to simplify the computations. / Business, Sauder School of / Graduate
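
For reference, these are the standard game-theoretic definitions the abstract relies on (a cost game (N, c), its core, concavity, and the Shapley value); the thesis's own closed-form expressions for the nucleolus are not reproduced here.

```latex
\begin{align*}
  \text{concavity (submodularity):}\quad
    & c(S \cup \{i\}) - c(S) \;\ge\; c(T \cup \{i\}) - c(T)
      \qquad \text{for all } S \subseteq T \subseteq N \setminus \{i\},\\[4pt]
  \text{core:}\quad
    & \Bigl\{\, x \in \mathbb{R}^N \;:\; \textstyle\sum_{i \in N} x_i = c(N),\;
      \sum_{i \in S} x_i \le c(S)\ \ \forall S \subseteq N \,\Bigr\},\\[4pt]
  \text{Shapley value:}\quad
    & \phi_i(c) \;=\; \sum_{S \subseteq N \setminus \{i\}}
      \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,
      \bigl(c(S \cup \{i\}) - c(S)\bigr).
\end{align*}
```

Concavity of the cost game implies a nonempty core that contains both the Shapley value and the nucleolus, which is the property the abstract exploits.
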
307

Resource Management and Optimization in Wireless Mesh Networks

Zhang, Xiaowen 02 November 2009 (has links)
A wireless mesh network is a mesh network implemented over a wireless network system such as wireless LANs. Wireless Mesh Networks (WMNs) are promising for numerous applications such as broadband home networking, enterprise networking, transportation systems, health and medical systems, security surveillance systems, etc. They have therefore received considerable attention from both industrial and academic researchers. This dissertation explores schemes for resource management and optimization in WMNs by means of network routing and network coding. We propose three optimization schemes. (1) First, a triple-tier optimization scheme is proposed for the load-balancing objective. The first-tier mechanism achieves long-term routing optimization, and the second-tier mechanism, using the optimization results obtained from the first tier, performs short-term adaptation to deal with the impact of dynamic channel conditions. A greedy sub-channel allocation algorithm is developed as the third-tier optimization scheme to further reduce the congestion level in the network. We conduct thorough theoretical analysis to show the correctness of our design and establish the properties of our scheme. (2) Then, a Relay-Aided Network Coding scheme called RANC is proposed to improve the performance gain of network coding by exploiting the physical-layer multi-rate capability in WMNs. We conduct rigorous analysis to find the design principles and study the tradeoff in the performance gain of RANC. Based on the analytical results, we provide a practical solution by decomposing the original design problem into two sub-problems: a flow partition problem and a scheduling problem. (3) Lastly, a joint optimization scheme of routing in the network layer and network-coding-aware scheduling in the MAC layer is introduced. We formulate the network optimization problem and exploit its structure via dual decomposition. We find that the original problem is composed of two sub-problems: a routing problem in the network layer and a scheduling problem in the MAC layer, coupled through the link capacities. We solve the routing problem with two different adaptive routing algorithms, and we then provide a distributed coding-aware scheduling algorithm. Experimental results show that the proposed schemes can significantly improve network performance.
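
The dual decomposition mentioned in part (3) follows the familiar network-utility-maximization pattern sketched below; the exact objective, constraints, and coding-aware terms of the dissertation are not reproduced, so the formulation is illustrative only.

```latex
\begin{align*}
  \max_{x \ge 0,\; c}\quad & \sum_{s} U_s(x_s)
      && \text{(utility over end-to-end flow rates $x_s$)}\\
  \text{s.t.}\quad & \sum_{s:\, l \in \mathcal{P}(s)} x_s \;\le\; c_l \quad \forall l,
      && \text{(flow on each link limited by its capacity $c_l$)}\\
   & c \in \mathcal{C}.
      && \text{(capacities achievable by some MAC schedule)}
\end{align*}
Relaxing the capacity constraints with multipliers $\lambda_l \ge 0$ splits the
Lagrangian into a routing subproblem,
$\max_{x \ge 0} \sum_s \bigl( U_s(x_s) - x_s \sum_{l \in \mathcal{P}(s)} \lambda_l \bigr)$,
and a scheduling subproblem, $\max_{c \in \mathcal{C}} \sum_l \lambda_l c_l$,
coupled only through the prices $\lambda$.
```
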
308

A policy-based architecture for virtual network embedding

Esposito, Flavio 22 January 2016 (has links)
Network virtualization is a technology that enables multiple virtual instances to coexist on a common physical network infrastructure. This paradigm has fostered new business models, allowing infrastructure providers to lease or share their physical resources. Each virtual network is isolated and can be customized to support a new class of customers and applications. To this end, infrastructure providers need to embed virtual networks on their infrastructure. Virtual network embedding is the (NP-hard) problem of matching constrained virtual networks onto a physical network. Heuristics to solve the embedding problem have exploited several policies under different settings. For example, centralized solutions have been devised for small enterprise physical networks, while distributed solutions have been proposed over larger federated wide-area networks. In this thesis we present a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the three (invariant) embedding mechanisms: physical resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, and has bounds on embedding efficiency and on embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel and existing policy configurations is compared via extensive simulations and on a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed to enable users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities.
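
As an example of the kind of policy the virtual network mapping mechanism can instantiate, here is a minimal greedy node-mapping sketch; the ranking rule (residual CPU weighted by residual adjacent bandwidth) and the data structures are assumptions, not the policies evaluated in the thesis.

```python
# A minimal greedy node-mapping policy: map the most demanding virtual nodes
# first onto the "largest" remaining physical nodes. Ranking rule is an assumption.
def greedy_node_mapping(virtual_nodes, physical_nodes, links):
    """virtual_nodes: {vname: cpu_demand}; physical_nodes: {pname: residual_cpu};
    links: {pname: residual bandwidth of adjacent physical links}."""
    rank = lambda p: physical_nodes[p] * links.get(p, 0)
    mapping, used = {}, set()
    for v, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        candidates = [p for p in physical_nodes
                      if p not in used and physical_nodes[p] >= demand]
        if not candidates:
            return None                      # embedding request rejected
        best = max(candidates, key=rank)
        mapping[v] = best
        used.add(best)
        physical_nodes[best] -= demand       # reserve capacity
    return mapping

print(greedy_node_mapping({"a": 4, "b": 2},
                          {"p1": 10, "p2": 3, "p3": 5},
                          {"p1": 100, "p2": 40, "p3": 80}))
```
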
309

Design of efficient multicast routing protocols for computer networks

Alyanbaawi, Ashraf 01 May 2020 (has links)
Multicasting can be done in two different ways: the source-based tree approach and the shared tree approach. The shared tree approach is preferred over the source-based tree approach because the latter requires the construction of a minimum-cost tree per source, unlike the single shared tree of the former. However, in the shared tree approach a single core must handle the entire traffic load, resulting in degraded multicast performance; it also suffers from a single point of failure. Multicast is communication between one or more senders and multiple receivers, used as a way of sending IP datagrams to a group of interested receivers in one transmission. The major concerns of core-based trees are core selection and the core as a single point of failure. The core selection problem is to choose the best core or cores in the network to improve network performance. In this dissertation we propose: 1) a multiple-core selection approach for core-based tree multicasting, in which senders can select different cores to achieve efficient load-balanced multicore multicasting; this also overcomes any core failure. 2) Novel and efficient schemes for load-shared multicore multicasting. Multiple cores are selected statically, that is, independently of any existing multicast groups, and the selection process is independent of any underlying unicast protocol; some of the selected cores can also be used for fault tolerance to guard against possible core failures. 3) Two novel and efficient schemes for group-based load-shared multicore multicasting in which members of a multicast group use the same core tree for their multicasting. 4) Two schemes aimed at achieving low-latency multicasting along with load sharing for delay-sensitive multicast applications. In addition, we present a unique approach for core migration, which uses two important parameters, namely the depth of a core tree and the pseudo-diameter of a core. Noteworthy from the viewpoint of fault tolerance is that the degree of fault tolerance can be enhanced from covering a single point of failure to any number of core failures.
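
As a small illustration of static core selection, the sketch below picks the k nodes of smallest eccentricity (distance to the farthest node); this criterion and the toy topology are assumptions, and the dissertation's actual selection schemes and pseudo-diameter metric are not reproduced.

```python
# Static core selection by eccentricity: prefer nodes whose worst-case distance
# to any other node is smallest. Graph format: node -> list of (neighbor, weight).
import heapq

def dijkstra(graph, src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def select_cores(graph, k):
    """Pick the k nodes with the smallest eccentricity as candidate cores."""
    ecc = {u: max(dijkstra(graph, u).values()) for u in graph}
    return sorted(graph, key=lambda u: ecc[u])[:k]

ring = {1: [(2, 1), (4, 1)], 2: [(1, 1), (3, 1)],
        3: [(2, 1), (4, 1)], 4: [(3, 1), (1, 1)]}
print(select_cores(ring, 2))    # two equally central nodes on this 4-node ring
```
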
310

Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

De La Garza Martinez, Pablo 01 May 2016 (has links)
Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses information from neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of the results. Additionally, I included spatial correlation in the generated pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values of irreducible saturation, which emphasizes the effect of ignoring the intrinsic correlation seen in pore sizes of actual porous media. Finally, I implemented the algorithm of Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
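
The matrix-decomposition idea for spatially correlated pore sizes can be sketched as follows: build a covariance matrix from the pore-body coordinates, take its Cholesky factor, and colour independent normal deviates with it. The exponential covariance, log-normal radii, and parameter values are assumptions, not the thesis's calibrated model.

```python
# Spatially correlated pore radii via Cholesky decomposition of a covariance
# matrix built from pore-body coordinates (assumed exponential covariance).
import numpy as np

def correlated_pore_radii(coords, corr_length=2.0, mean_log_r=-4.0, sigma=0.5, seed=0):
    """coords: (n, 3) array of pore-body centres; returns n correlated radii."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma**2 * np.exp(-d / corr_length)        # exponential covariance
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
    z = L @ rng.standard_normal(len(coords))         # correlated Gaussian field
    return np.exp(mean_log_r + z)                    # log-normal pore radii

coords = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)], float)
print(correlated_pore_radii(coords)[:5])
```
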
