51

Community Mining: Discovering Communities in Social Networks

Chen, Jiyang Unknown Date
No description available.
52

Efficient implementations of the primal-dual method

Osiakwan, Constantine N. K. January 1984 (has links)
No description available.
53

Network dynamics in analog fluidic systems

Lake, Allan James 05 1900 (has links)
No description available.
54

A study and implementation of the network flow problem and edge integrity of networks

Haiba, Mohamed Salem January 1991 (has links)
Fundamental problems in graph theory are of four types: existence, construction, enumeration, and optimization problems. Optimization problems lie at the interface between computer science and operations research and are of primary importance in decision-making. In this thesis, two optimization problems are studied: the edge-integrity of networks and the network flow problem. An implementation of the corresponding algorithms is also realized. The edge-integrity of a communication network provides a way to assess the vulnerability of the network to disruption through the destruction or failure of some of its links. While computing the edge-integrity of general graphs has been proven to be NP-complete, a recently published paper presents an efficient algorithm, based on an edge separation sequence technique, for computing the edge-integrity of trees. The main results of that paper are presented and an implementation of its algorithm is achieved. The network flow problem models a distribution system in which commodities flow through an interconnected network. The goal is to find a maximum feasible flow and its value, given the capacity constraint on each edge. The three major algorithms for this problem (Ford-Fulkerson, the Edmonds-Karp method, and the MPKM algorithm) are discussed, their complexities are compared, and an implementation of the Ford-Fulkerson and MPKM algorithms is presented. / Department of Computer Science
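As a point of reference for the algorithms this abstract names, below is a minimal Python sketch of the Ford-Fulkerson method in its Edmonds-Karp form (augmenting paths found by breadth-first search). This is a generic textbook rendering under an assumed adjacency-dict graph representation, not the implementation realized in the thesis; the MPKM algorithm is not shown.

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Maximum flow via Edmonds-Karp: Ford-Fulkerson with BFS-chosen paths.

    capacity: dict of dicts, capacity[u][v] = capacity of edge u -> v.
    Returns the value of a maximum feasible flow from source to sink.
    """
    # Residual graph: copy forward capacities, add zero-capacity reverse edges.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)

    max_flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return max_flow  # no augmenting path remains
        # Bottleneck: smallest residual capacity along the path found.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Augment the flow along the path.
        v = sink
        while parent[v] is not None:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        max_flow += bottleneck
```

For example, with capacity = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}, edmonds_karp(capacity, "s", "t") returns 4. Choosing shortest augmenting paths bounds the running time at O(VE^2) independently of the edge capacities, which is the advantage of Edmonds-Karp over naive Ford-Fulkerson.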
55

A mathematical model for the long-term planning of a telephone network.

Bruyn, Stewart James. January 1977 (has links) (PDF)
Thesis (Ph.D.)--University of Adelaide, Dept. of Applied Mathematics, 1979.
56

Analyzing the robustness of telecommunication networks /

Eller, Karol Schaeffer, January 1992 (has links)
Report (M.S.)--Virginia Polytechnic Institute and State University, 1992. / Vita. Abstract. Includes bibliographical references (leaves 123-125). Also available via the Internet.
57

The evolution of standards /

Simmering, Volker, January 2003 (has links)
Thesis (doctoral)--Universität Hamburg, 2002. / Includes bibliographical references (p. 185-193).
58

Finitely convergent methods for solving stochastic linear programming and stochastic network flow problems

Qi, Liqun. January 1984 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1984. / Typescript. Vita. Includes bibliographical references (leaves 126-128).
59

A method for the evaluation of similarity measures on graphs and network-structured data

Naude, Kevin Alexander January 2014 (has links)
Measures of similarity play a subtle but important role in a large number of disciplines. For example, a researcher in bioinformatics may devise a new computed measure of similarity between biological structures, and use its scores to infer biological association. Other academics may use related approaches in structured text search, or for object recognition in computer vision. These are diverse and practical applications of similarity. A critical question is this: to what extent can a given similarity measure be trusted? This is a difficult problem, at the heart of which lies the broader issue: what exactly constitutes good similarity judgement? This research presents the view that similarity measures have properties of judgement that are intrinsic to their formulation, and that such properties are measurable. The problem of comparing similarity measures is one of identifying ground-truths for similarity. The approach taken in this work is to examine the relative ordering of graph pairs, when compared with respect to a common reference graph. Ground-truth outcomes are obtained from a novel theory: the theory of irreducible change in graphs. This theory supports stronger claims than those made for edit distances. Whereas edit distances are sensitive to a configuration of costs, irreducible change under the new theory is independent of such parameters. Ground-truth data is obtained by isolating test cases for which a common outcome is assured for all possible least measures of change that can be formulated within a chosen change descriptor space. By isolating these specific cases, and excluding others, the research introduces a framework for evaluating similarity measures on mathematically defensible grounds. The evaluation method is demonstrated in a series of case studies which evaluate the similarity performance of known graph similarity measures. The findings of these experiments provide the first general characterisation of common similarity measures over a wide range of graph properties. The similarity computed from the maximum common induced subgraph (Dice-MCIS) is shown to provide good general similarity judgement. However, it is shown that Blondel's similarity measure can exceed the judgement sensitivity of Dice-MCIS, provided the graphs have both sufficient attribute label diversity and edge density. The final contribution is the introduction of a new similarity measure for graphs, which is shown to have statistically greater judgement sensitivity than all other measures examined. All of these findings are made possible through the theory of irreducible change in graphs. The research provides the first mathematical basis for reasoning about the quality of similarity judgements. This enables researchers to analyse similarity measures directly, making similarity measures first-class objects of scientific inquiry.
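The Dice-MCIS measure referred to above conventionally scores a graph pair by the size of their maximum common induced subgraph (MCIS): sim = 2|MCIS| / (|V1| + |V2|). As a minimal illustration, assuming NetworkX graphs and an exponential brute-force MCIS search (the thesis does not prescribe this implementation), one might compute it as follows:

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def mcis_size(g1: nx.Graph, g2: nx.Graph) -> int:
    """Number of vertices in the maximum common induced subgraph.

    Brute force: try induced subgraphs of g1 from largest to smallest and
    check each against g2. Exponential, so suitable only for tiny graphs.
    """
    for k in range(min(len(g1), len(g2)), 0, -1):
        for nodes in combinations(g1.nodes, k):
            sub = g1.subgraph(nodes)  # node-induced subgraph of g1
            # GraphMatcher tests whether some node-induced subgraph of g2
            # is isomorphic to sub.
            if GraphMatcher(g2, sub).subgraph_is_isomorphic():
                return k
    return 0

def dice_mcis(g1: nx.Graph, g2: nx.Graph) -> float:
    """Dice similarity from the MCIS size: 2*|MCIS| / (|V1| + |V2|)."""
    return 2.0 * mcis_size(g1, g2) / (len(g1) + len(g2))
```

For instance, dice_mcis(nx.path_graph(3), nx.complete_graph(3)) returns 2*2/6, roughly 0.67, since the largest induced subgraph common to a path and a triangle is a single edge.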
60

Incorporating semantic integrity constraints in a database schema

Yang, Heng-li 11 1900 (has links)
A database schema should consist of structures and semantic integrity constraints. Semantic integrity constraints (SICs) are invariant restrictions on the static states of the stored data and on the state transitions caused by the primitive operations: insertion, deletion, or update. Traditionally, database design has been carried out on an ad hoc basis and focuses on structure and efficiency. Although the E-R model is the popular conceptual modelling tool, it contains few inherent SICs. Also, although the relational database model is the popular logical data model, a relational database in fourth or fifth normal form may still represent little of the data semantics. Most integrity checking is distributed to the application programs or transactions. This approach to enforcing integrity via the application software causes a number of problems. Recently, a number of systems have been developed for assisting the database design process. However, only a few of those systems try to help a database designer incorporate SICs in a database schema. Furthermore, current SIC representation languages in the literature cannot represent precisely the features necessary for specifying the declarative and operational semantics of a SIC, and no modelling tool is available to incorporate SICs. This research solves the above problems by presenting two models and one subsystem. The E-R-SIC model is a comprehensive modelling tool for helping a database designer incorporate SICs in a database schema. It is application domain-independent and suitable for implementation as part of an automated database design system. The SIC Representation model is used to represent these SICs precisely. The SIC elicitation subsystem verifies these general SICs to a certain extent, decomposes them into sub-SICs if necessary, and transforms them into corresponding ones in the relational model. A database designer using these two modelling tools can describe more data semantics than with the widely used relational model, and the proposed SIC elicitation subsystem can provide more modelling assistance than current automated database design systems. / Business, Sauder School of / Graduate
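To make the distinction between the two kinds of SIC concrete, here is a small hypothetical sketch: a static-state constraint restricts the stored data itself, while a transition constraint restricts the change made by a primitive operation. The schema and both constraints are invented for illustration and do not come from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    salary: float

def static_sic(emp: Employee) -> bool:
    # Static-state SIC (hypothetical): stored salaries are never negative.
    return emp.salary >= 0

def transition_sic(old: Employee, new: Employee) -> bool:
    # Transition SIC (hypothetical): a single update may not raise a
    # salary by more than 50 percent.
    return new.salary <= old.salary * 1.5

def update_salary(db: dict, name: str, new_salary: float) -> None:
    """Primitive update that checks both SICs before committing."""
    old = db[name]
    new = Employee(name, new_salary)
    if not static_sic(new):
        raise ValueError("static SIC violated: negative salary")
    if not transition_sic(old, new):
        raise ValueError("transition SIC violated: raise exceeds 50%")
    db[name] = new
```

Enforcing such checks inside the primitive operations, rather than scattering them across application programs, is the centralisation of integrity checking that the abstract argues for.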
