1051

Trust in distributed information systems

Zhao, Weiliang, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2008 (has links)
Trust management is an important issue in the analysis and design of secure information systems. This is especially the case where centrally managed security is not possible. Trust issues arise not only in business functions, but also in the technologies used to support these functions. A vast number of services and applications must accommodate appropriate notions of trust, and trust and trust management have become an active research area. The motivation of this dissertation is to build a comprehensive trust management approach that covers the analysis/modelling of trust relationships and the development of trust management systems in a consistent manner. A formal model of trust relationships is proposed with a strict mathematical structure that can not only reflect many of the commonly used notions of trust, but also provide a solid basis for a unified taxonomy framework of trust in which a range of useful properties of trust relationships can be expressed and compared. A classification of trust relationships is presented. A set of definitions, propositions, and operations is proposed for properties concerning the scope and diversity of trust relationships, their direction and symmetry, and the relations between trust relationships. A general methodology for the analysis and modelling of trust relationships in distributed information systems is presented. This methodology covers a range of major concerns across the whole lifecycle of trust relationships and provides practical guidelines for analysing and modelling trust relationships in the real world. A unified framework for trust management is proposed in which trust request, trust evaluation, and trust consumption are handled in a comprehensive and consistent manner. A variety of trust mechanisms, including reputation, credentials, local data, and environment parameters, are covered under the same framework. A trust management architecture is devised to facilitate the development of trust management systems. A trust management system for federated medical services is developed as an implementation example of the proposed trust management architecture, and an online booking system is developed to show how a trust management system is employed by applications. A trust management architecture for web services is also devised. It can be viewed as an extension of WS-Trust with the ability to integrate the message building blocks supported by the web services protocol stack and other trust mechanisms, and it provides a high-level architecture and guidelines for the development and deployment of a trust management layer in web services. Finally, a trust management extension of the CardSpace identity system is introduced, and major concerns are listed for the analysis and modelling of trust relationships and the development of trust management systems for digital identities. / Doctor of Philosophy (PhD)
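
As a rough illustration of the kind of formal trust relationship described above, the minimal Python sketch below represents a relationship with an explicit scope and direction and compares two relationships by scope subsumption. The class name, fields, and the covers() check are assumptions made for this listing; they are not the model defined in the dissertation.

```python
# Minimal sketch of a trust relationship with explicit scope and direction.
# The field names and the subsumption check are illustrative assumptions,
# not the formal model from the dissertation.
from dataclasses import dataclass
from enum import Enum


class Direction(Enum):
    ONE_WAY = "one-way"
    TWO_WAY = "two-way"


@dataclass(frozen=True)
class TrustRelationship:
    trustor: str
    trustee: str
    scope: frozenset        # the actions or properties the trust covers
    direction: Direction

    def covers(self, other: "TrustRelationship") -> bool:
        """True if this relationship is at least as broad as `other`
        for the same trustor/trustee pair."""
        return (self.trustor == other.trustor
                and self.trustee == other.trustee
                and other.scope <= self.scope)


if __name__ == "__main__":
    broad = TrustRelationship("hospital", "pathology_lab",
                              frozenset({"read_record", "update_record"}),
                              Direction.ONE_WAY)
    narrow = TrustRelationship("hospital", "pathology_lab",
                               frozenset({"read_record"}),
                               Direction.ONE_WAY)
    print(broad.covers(narrow))  # True: the broader scope subsumes the narrower one
```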
1052

The management of SPMD based parallel processing on clusters of workstations.

Hobbs, Michael J, mikewood@deakin.edu.au January 1998 (has links)
Current attempts to manage parallel applications on Clusters of Workstations (COWs) have generally either followed the parallel execution environment approach or been extensions to existing network operating systems, neither of which provides a complete or satisfactory solution. The efficient and transparent management of parallelism within the COW environment requires enhanced methods of process instantiation, mapping of parallel processes to workstations, maintenance of process relationships, process communication facilities, and process coordination mechanisms. The aim of this research is to synthesise, design, develop and experimentally study a system capable of efficiently and transparently managing SPMD parallelism on a COW. This system should both improve the performance of SPMD based parallel programs and relieve programmers from involvement in parallelism management so that they can concentrate on application programming. It is also the aim of this research to show that these objectives are best achieved by adding new special services to, and exploiting the existing services of, a client/server and microkernel based distributed operating system. To achieve these goals, the research methods of experimental computer science should be employed. In order to specify the scope of this project, this work investigated the issues related to parallel processing on COWs and surveyed a number of relevant systems, including PVM, NOW and MOSIX. It was shown that although the MOSIX system provides a number of good services related to parallelism management, none of these systems forms a complete solution. The problems identified with these systems include: instantiation services that are not suited to parallel processing; duplication of services between the parallelism management environment and the operating system; and poor levels of transparency. A high performance and transparent system capable of managing the execution of SPMD parallel applications was synthesised, and the specific services of process instantiation, process mapping and process interaction were detailed. The process instantiation service designed here provides the capability to instantiate parallel processes using either creation or duplication methods, and also supports multiple and group based instantiation, which is specifically designed for SPMD parallel processing. The process mapping service combines process allocation and dynamic load balancing to ensure that the load of a COW remains balanced not only at the time a parallel program is initialised but also during the execution of the program. The process interaction service transparently maintains process relationships, communication and coordination services between parallel processes regardless of their location within the COW. The combination of these services provides an original architecture and organisation of a system that is capable of fully managing the execution of SPMD parallel applications on a COW. A logical design of a parallelism management system, derived from the synthesised system, was developed, and it was shown that such a system should ideally be based on a distributed operating system employing the client/server model. The client/server based distributed operating system provides the level of transparency, modularity and flexibility necessary for a complete parallelism management system.
The services identified in the synthesised system have been mapped to a set of server processes, including: a Process Instantiation Server providing advanced multiple and group based process creation and duplication; a Process Mapping Server combining load collection, process allocation and dynamic load balancing services; and a Process Interaction Server providing transparent interprocess communication and coordination. A Process Migration Server was also identified as vital to support both the instantiation and mapping servers. The RHODOS client/server and microkernel based distributed operating system was selected to carry out research into the detailed design and to be used for the implementation of this parallelism management system. RHODOS was enhanced to provide the required servers, resulting in the development of the REX Manager, Global Scheduler and Process Migration Manager to provide the services of process instantiation, mapping and migration, respectively. The process interaction services were already provided within RHODOS and only required some extensions to the existing Process Manager and IPC Managers. Through a variety of experiments it was shown that when this system was used to support the execution of SPMD parallel applications the overall execution times were improved, especially when the multiple and group based instantiation services were employed. The RHODOS Parallelism Management System (PMS) was also shown to greatly reduce the programming burden experienced by users when writing SPMD parallel applications by providing a small set of powerful primitives specially designed to support parallel processing. The system was also shown to be applicable to, and has been used in, a variety of other research areas such as Distributed Shared Memory, Parallelising Compilers and assisting the port of PVM to the RHODOS system. The RHODOS PMS provides a unique and creative solution to the problem of transparently and efficiently controlling the execution of SPMD parallel applications on COWs. Combining advanced services such as multiple and group based process creation and duplication, combined process allocation and dynamic load balancing, and complete COW-wide transparency produces a totally new system that addresses many of the problems not addressed in other systems.
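
As a rough sketch of how a process mapping service of this kind might combine group-based instantiation with load balancing, the Python example below places a group of identical SPMD processes on the least-loaded workstations of a COW. The class and method names (Workstation, GlobalScheduler, instantiate_group) and the one-unit-per-process cost model are hypothetical illustrations, not the RHODOS or REX interfaces.

```python
# Minimal sketch of group-based SPMD instantiation with load-balanced mapping.
# All names and the cost model are hypothetical, not the RHODOS API.
from dataclasses import dataclass, field


@dataclass
class Workstation:
    name: str
    load: float = 0.0              # current relative load (e.g. run-queue length)
    processes: list = field(default_factory=list)


class GlobalScheduler:
    """Maps a group of identical SPMD processes to the least-loaded workstations."""

    def __init__(self, cow):
        self.cow = cow             # the Cluster of Workstations

    def instantiate_group(self, program, count):
        placements = []
        for rank in range(count):
            # Pick the currently least-loaded workstation for each process.
            target = min(self.cow, key=lambda w: w.load)
            target.processes.append((program, rank))
            target.load += 1.0     # crude cost model: one load unit per process
            placements.append((rank, target.name))
        return placements


if __name__ == "__main__":
    cow = [Workstation("ws1", 0.5), Workstation("ws2", 0.0), Workstation("ws3", 2.0)]
    scheduler = GlobalScheduler(cow)
    for rank, host in scheduler.instantiate_group("spmd_solver", 4):
        print(f"process {rank} -> {host}")
```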
1053

Semantically annotated multi-protocol adapter nodes: a new approach to implementing network-based information systems using ontologies.

Falkner, Nickolas John Gowland January 2007 (has links)
Network-based information systems are an important class of distributed systems that serve large and diverse user communities with information and essential network services. Centrally defined standards for interoperation and information exchange ensure that any required functionality is provided, but do so at the expense of flexibility and ease of system evolution. This thesis presents a novel approach to implementing network-based information systems in a knowledge-representation-based format using an ontological description of the service. Our approach allows us to provide flexible distributed systems that can conform to global standards while still allowing local developments and protocol extensions. We can share data between systems if we provide an explicit specification of the relationship between the knowledge in the system and the structure and nature of the values shared between systems. Existing distributed systems may share data based on the values and structures of that data, but we go beyond syntax-based value exchange to introduce a semantically based exchange of knowledge. The explicit statement of the semantics and syntax of the system in a machine-interpretable form enables the automated integration of different systems through the use of adapter nodes: nodes that are members of more than one system and seamlessly transport data between them. We develop a multi-tier software architecture that characterises the values held inside the system according to an ontological classification of their structure and context, allowing values to be defined in terms of the knowledge that they represent. Initially, received values are viewed as data, with no structural information. Structural and type information, and the context of the value, can then be associated with it through the use of ontologies, leading to a value-form referred to as knowledge: a value that is structurally and contextually rich. This is demonstrated through an implementation process employing RDF, OWL and SPARQL to develop an ontological description of a network-based information system. The implementation provides evidence for the benefits and costs of representing a system in such a manner, including a complexity-based analysis of system performance. The implementation also demonstrates the ability of such a representation to separate global standards-based requirements from local user requirements, allowing behaviour specific to local needs to be added to otherwise global systems in a way that does not compromise the global standards. Our contribution is in providing a means for network-based information systems to retain the benefits of their global interaction while still allowing local customisation to meet user expectations. This thesis presents a novel use of ontologically based representation and tools to demonstrate the benefits of a multi-tier software architecture with a separation of the contents of the system into data, information and knowledge. Our approach increases the ease of interoperation for large-scale distributed systems and facilitates the development of systems that can adapt to local requirements while retaining their wider interoperability. Further, our approach provides a strong contextual framework to ground concepts in the system and also supports the amalgamation of data from many sources to provide rich and extensible network-based information systems. / http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1295234 / Thesis (Ph.D.) -- School of Computer Science, 2007
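
The sketch below (using the third-party rdflib package) illustrates the general idea of annotating a received value with structural type and context so that it can later be queried as knowledge rather than raw data. The namespace http://example.org/netinfo# and all class and property names are invented for this example; they are not the ontology developed in the thesis.

```python
# Minimal sketch: annotate a raw value with ontological type and context,
# then query it with SPARQL. Requires the third-party rdflib package.
# The namespace and the terms ServiceRecord/hasValue/receivedFrom are
# hypothetical, not the ontology developed in the thesis.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/netinfo#")

g = Graph()
g.bind("ex", EX)

# A raw value received from a peer, promoted towards "knowledge" by attaching
# structural type (ServiceRecord) and context (the adapter node it came from).
record = EX.record1
g.add((record, RDF.type, EX.ServiceRecord))
g.add((record, EX.hasValue, Literal("192.0.2.10", datatype=XSD.string)))
g.add((record, EX.receivedFrom, EX.adapterNodeA))

# Retrieve every value that carries both structural type and context.
results = g.query(
    """
    SELECT ?record ?value WHERE {
        ?record a ex:ServiceRecord ;
                ex:hasValue ?value ;
                ex:receivedFrom ?source .
    }
    """,
    initNs={"ex": EX},
)
for row in results:
    print(row.record, row.value)
```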
1054

Management of innovation networks in technology transfer.

Rampersad, Giselle January 2008 (has links)
Network management is a critical concept in innovation and technology transfer. Linkages among network members are fundamental in the innovation process which has been heralded for its contribution to wealth creation in economies increasingly characterized by both globalization and technological connectivity. Innovation networks involve relationships among members of governments, businesses and universities that collaborate continuously to achieve shared scientific goals. This study focuses on identifying the key management factors operating in such networks and on determining the process through which these lead to successful technology transfer. This is of increasing interest for many countries seeking to foster innovation, technology transfer and, in turn, international competitiveness. The study integrates the technology transfer and network research streams in order to provide a unique contribution towards understanding key network factors that are important in technology transfer. Extant technology transfer literature predominantly provides a perspective of a focal organization or, at best, that of inter-organisational relationships while its empirical investigation from a network perspective remains limited. In order to develop a more holistic network perspective, this study draws on the network literature and in particular that of the Industrial Marketing and Purchasing (IMP) group. Although neither a comprehensive network management theory nor suitable measures at the network level of analysis currently exist, the network literature is quickly evolving and has highlighted several concepts that contribute to achieving network outcomes, albeit in a conjectural fashion. Therefore, this study applies these concepts towards contributing to network management theory development in both the network and technology transfer fields. This study adopts a multi-method research approach. Qualitative exploratory research was necessary as concepts from the technology transfer and network management literatures were combined in a novel way. It was also essential in developing appropriate scales. Quantitative research then followed in order to test these scales by applying exploratory factor analysis and reliability testing. The developed scales were then employed to advance theory development, using confirmatory factor analysis via structural equation modelling. The study predominantly investigates networks within several industries that are relevant internationally and consistent with some of Australia’s national research priorities. Consequently, a pilot study was conducted in the wine industry to purify scales followed by full field work undertaken in the information and communications technology and biotechnology/nanotechnology industries. Common patterns that emerge within different industries strengthen theory development and lead to generalizations to other related industries while differences lead to industry-specific implications. A number of patterns were uncovered. Evidence was provided for the significant impact of power distribution, trust, coordination and harmony on achieving network outcomes in the ICT and the biotechnology/nanotechnology industries. While both communication and R&D efficiencies were deemed important in achieving network effectiveness, the specific relationships among these factors varied between industries. 
The study contributes to advancing theory on network management and offers practical management implications, particularly for the industries under investigation. / http://proxy.library.adelaide.edu.au/login?url=http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1346750 / Thesis (Ph.D.) -- University of Adelaide, Business School, 2008
1055

Full-text keyword search in meta-search and P2P networks /

Zhao, Jing. January 2007 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 89-94). Also available in electronic version.
1056

Online community supporting trading functions in an online auction website. A dissertation submitted in partial fulfilment of the requirements for the degree of Master Computing Systems, Unitec New Zealand /

Elian, Ryan. January 2007 (has links)
Thesis (M.C.S.)--Unitec New Zealand, 2007. / Includes bibliographical references (leaves 56-62).
1057

Wireless multiple access communication over collision frequency shift keyed channels

Xia, Chen. January 1900 (has links)
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2007. / Title from title screen (site viewed Dec. 5, 2007). PDF text: xvi, 142 p. : ill. ; 5 Mb. UMI publication number: AAT 3273188. Includes bibliographical references. Also available in microfilm and microfiche formats.
1058

Scheduling algorithms for data distribution in peer-to-peer collaborative file distribution networks

Chan, Siu-kei, Jonathan, January 2006 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
1059

The simulation studies on a behaviour based trust routing protocol for ad hoc networks

Kulkarni, Shrinivas Bhalachandra. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Dept. of Electrical & Computer Engineering, 2006. / Includes bibliographical references.
1060

Modeling, Implementation and Evaluation of IP Network Bandwidth Measurement Methods

Johnsson, Andreas January 2007 (has links)
The Internet has gained much popularity among the public since the mid 1990s and is now an integrated part of our society. A large range of high-speed broadband providers and the development of new and more efficient Internet applications increase the possibilities to watch movies and live TV, use IP telephony and share files over the Internet. Such applications demand high data transmission rates, which in turn consume network bandwidth. Since several users must share the common bandwidth capacity on the Internet, there will be locations in the network where the demand is higher than the capacity. This causes network congestion, which has a negative impact on both the data transmission rate and the transmission quality.
This thesis is about methods for measuring the available bandwidth of a network path between two computers. The available bandwidth can be interpreted as the maximum transfer rate possible without causing congestion. By deploying the methods studied in this thesis, the available bandwidth can be measured without previous knowledge of the network topology. When an estimate of the available bandwidth is obtained, the transfer rate used when sending messages between computers can be set to the measured value in order to avoid congestion.
In the thesis, an active end-to-end available bandwidth measurement method called Bandwidth Available in Real Time (BART) is evaluated. BART measures the available bandwidth by injecting probe packets into the network at a given rate and then analysing how this rate has changed on the receiving side. A Kalman filter is used to update the current estimate of the available bandwidth with each new measurement sample.
The focus of the thesis is on how methods such as BART function in wireless 802.11 networks, which are very popular in work as well as home environments. Wireless networks have a different construction compared to many other types of networks, and this can affect the accuracy of the measurement methods discussed in this thesis. These effects must be analysed and understood in order to obtain accurate available bandwidth estimates. Since wireless links are often part of the network path between a sender and a receiver on the Internet, it is important to study how these links affect the estimates of the available bandwidth.
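
As a rough illustration of the filtering step described above, the sketch below applies a scalar Kalman filter to a stream of noisy per-probe-train bandwidth samples. The constant-bandwidth state model, the noise parameters and the sample values are assumptions made for this example; they are not the actual BART estimator.

```python
# Minimal sketch of a Kalman-filter update for an available-bandwidth estimate,
# in the spirit of BART. The state model, noise values and sample data below
# are illustrative assumptions, not the estimator from the thesis.


class BandwidthKalmanFilter:
    def __init__(self, initial_estimate_mbps, initial_variance,
                 process_variance, measurement_variance):
        self.x = initial_estimate_mbps   # current available-bandwidth estimate
        self.p = initial_variance        # uncertainty of the estimate
        self.q = process_variance        # how fast the true bandwidth may drift
        self.r = measurement_variance    # noise in each probe-train measurement

    def update(self, measured_mbps):
        # Predict: bandwidth assumed roughly constant, so uncertainty grows.
        self.p += self.q
        # Correct: blend prediction and new measurement using the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (measured_mbps - self.x)
        self.p *= (1.0 - k)
        return self.x


if __name__ == "__main__":
    kf = BandwidthKalmanFilter(50.0, 25.0, 1.0, 16.0)
    for sample in [42.0, 47.5, 39.0, 44.0, 45.5]:  # noisy probe-train estimates
        print(f"estimate after sample {sample:5.1f} Mbps: {kf.update(sample):.2f} Mbps")
```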
