31 
Public transit system network models: consideration of guideway construction, passenger travel and delay time, and vehicle scheduling costs / Sharp, Gunter Pielbusch 12 1900 (has links)
No description available.

32 
State space partition techniques for multiterminal and multicommodity flows in stochastic networks / Daly, Matthew Sean 12 1900 (has links)
No description available.

33 
Efficient implementations of the primal-dual method / Osiakwan, Constantine N. K. January 1984 (has links)
No description available.

34 
Network dynamics in analog fluidic systems / Lake, Allan James 05 1900 (has links)
No description available.

35 
A study and implementation of the network flow problem and edge integrity of networks / Haiba, Mohamed Salem January 1991 (has links)
Fundamental problems in graph theory are of four types: existence, construction, enumeration, and optimization problems. Optimization problems lie at the interface between computer science and the field of operations research and are of primary importance in decision-making. In this thesis, two optimization problems are studied: the edge-integrity of networks and the network flow problem. An implementation of the corresponding algorithms is also realized. The edge-integrity of a communication network provides a way to assess the vulnerability of the network to disruption through the destruction or failure of some of its links. While the computation of the edge-integrity of graphs in general has been proven to be NP-complete, a recently published paper was devoted to a good algorithm using a technique of edge separation sequence for computing the edge-integrity of trees. The main results of this paper will be presented and an implementation of this algorithm is achieved. The network flow problem models a distribution system in which commodities are flowing through an interconnected network. The goal is to find a maximum feasible flow and its value, given the capacity constraints for each edge. The three major algorithms for this problem (Ford-Fulkerson, Edmonds-Karp method, MPKM algorithm) are discussed, their complexities compared, and an implementation of the Ford-Fulkerson and the MPKM algorithms is presented. / Department of Computer Science
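The maximum-flow problem described in this abstract can be sketched in a few lines. The code below is an illustrative Edmonds-Karp style implementation (Ford-Fulkerson with BFS-chosen augmenting paths), not the thesis's own code; the `capacity` dictionary format is an assumption made for the example.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: Ford-Fulkerson with shortest augmenting paths.

    capacity: dict mapping (u, v) -> capacity of the directed edge u->v
    (an assumed input format for this sketch).
    Returns the value of a maximum feasible flow from source to sink.
    """
    # Build residual capacities; reverse edges start at 0.
    residual, adj = {}, {}
    for (u, v), c in capacity.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum

        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float('inf'), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[(parent[v], v)])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            residual[(parent[v], v)] -= bottleneck
            residual[(v, parent[v])] += bottleneck
            v = parent[v]
        flow += bottleneck
```

Choosing shortest augmenting paths (the Edmonds-Karp rule) bounds the number of augmentations by O(VE), avoiding the non-termination that plain Ford-Fulkerson can exhibit on irrational capacities.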

36 
A mathematical model for the long-term planning of a telephone network / Bruyn, Stewart James. January 1977 (has links) (PDF)
Thesis (Ph.D.) -- University of Adelaide, Dept. of Applied Mathematics, 1979.

37 
Analyzing the robustness of telecommunication networks / Eller, Karol Schaeffer, January 1992 (has links)
Report (M.S.) -- Virginia Polytechnic Institute and State University, M.S. 1992. / Vita. Abstract. Includes bibliographical references (leaves 123-125). Also available via the Internet.

38 
The evolution of standards / Simmering, Volker, January 2003 (has links)
Thesis (doctoral) -- Universität Hamburg, 2002. / Includes bibliographical references (p. 185-193).

39 
Finitely convergent methods for solving stochastic linear programming and stochastic network flow problems / Qi, Liqun. January 1984 (has links)
Thesis (Ph. D.) -- University of Wisconsin-Madison, 1984. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 126-128).

40 
A method for the evaluation of similarity measures on graphs and network-structured data / Naude, Kevin Alexander January 2014 (has links)
Measures of similarity play a subtle but important role in a large number of disciplines. For example, a researcher in bioinformatics may devise a new computed measure of similarity between biological structures, and use its scores to infer biological association. Other academics may use related approaches in structured text search, or for object recognition in computer vision. These are diverse and practical applications of similarity. A critical question is this: to what extent can a given similarity measure be trusted? This is a difficult problem, at the heart of which lies the broader issue: what exactly constitutes good similarity judgement? This research presents the view that similarity measures have properties of judgement that are intrinsic to their formulation, and that such properties are measurable. The problem of comparing similarity measures is one of identifying ground truths for similarity. The approach taken in this work is to examine the relative ordering of graph pairs, when compared with respect to a common reference graph. Ground-truth outcomes are obtained from a novel theory: the theory of irreducible change in graphs. This theory supports stronger claims than those made for edit distances. Whereas edit distances are sensitive to a configuration of costs, irreducible change under the new theory is independent of such parameters. Ground-truth data is obtained by isolating test cases for which a common outcome is assured for all possible least measures of change that can be formulated within a chosen change descriptor space. By isolating these specific cases, and excluding others, the research introduces a framework for evaluating similarity measures on mathematically defensible grounds. The evaluation method is demonstrated in a series of case studies which evaluate the similarity performance of known graph similarity measures.
The findings of these experiments provide the first general characterisation of common similarity measures over a wide range of graph properties. The similarity computed from the maximum common induced subgraph (Dice-MCIS) is shown to provide good general similarity judgement. However, it is shown that Blondel's similarity measure can exceed the judgement sensitivity of Dice-MCIS, provided the graphs have both sufficient attribute label diversity, and edge density. The final contribution is the introduction of a new similarity measure for graphs, which is shown to have statistically greater judgement sensitivity than all other measures examined. All of these findings are made possible through the theory of irreducible change in graphs. The research provides the first mathematical basis for reasoning about the quality of similarity judgements. This enables researchers to analyse similarity measures directly, making similarity measures first class objects of scientific inquiry.
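The Dice-MCIS measure mentioned in these findings can be illustrated with a small sketch. The brute-force `mcis_size` below is only feasible for tiny graphs (MCIS is NP-hard in general), and counting vertices rather than edges in the Dice coefficient is an assumed convention for this example, not necessarily the definition used in the thesis.

```python
from itertools import combinations, permutations

def mcis_size(adj1, adj2):
    """Vertex count of a maximum common induced subgraph, by brute force.

    adj1, adj2: dict node -> set of neighbours (undirected graphs).
    Tries vertex subsets from largest to smallest, so the first
    induced-subgraph isomorphism found has maximum size.
    """
    for k in range(min(len(adj1), len(adj2)), 0, -1):
        for s1 in combinations(adj1, k):
            for s2 in combinations(adj2, k):
                for perm in permutations(s2):
                    m = dict(zip(s1, perm))
                    # Induced isomorphism: each pair is an edge in G1
                    # exactly when its image is an edge in G2.
                    if all((v in adj1[u]) == (m[v] in adj2[m[u]])
                           for u, v in combinations(s1, 2)):
                        return k
    return 0

def dice_mcis(adj1, adj2):
    """Dice coefficient over vertex counts: 2*|MCIS| / (|G1| + |G2|)."""
    return 2 * mcis_size(adj1, adj2) / (len(adj1) + len(adj2))
```

For a triangle against a 3-vertex path, the largest common induced subgraph is a single edge, giving a Dice score of 2*2/(3+3) = 2/3.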
