341

CSMA/VTR: a new high-performance medium access control protocol for wireless LANs.

January 2007 (has links)
Chan, Hing Pan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 107-109). / Abstracts in English and Chinese.
Contents:
  Chapter 1 - Introduction (p.1)
  Chapter 2 - Background (p.3)
    2.1 IEEE 802.11 MAC Protocol (p.3)
    2.2 Related Work (p.5)
  Chapter 3 - Design Principles (p.8)
  Chapter 4 - Load-Adaptive Transmission Scheduling (p.11)
    4.1 Contention Period (CP) (p.14)
    4.2 Service Period (SP) (p.22)
  Chapter 5 - Synchronization (p.27)
    5.1 Slot Boundary Detection (p.27)
    5.2 Period Boundary Detection (p.29)
    5.3 Period Identification (p.30)
    5.4 Exception Handling (p.62)
  Chapter 6 - Performance Analysis (p.70)
  Chapter 7 - Performance Evaluations (p.73)
    7.1 Parameter Tuning (p.75)
    7.2 CBR UDP Traffic (p.82)
    7.3 TCP Traffic (p.94)
    7.4 Performance in Multi-hop Networks (p.101)
  Chapter 8 - Conclusions (p.105)
  Bibliography (p.107)
342

Performance analysis of delay tolerant networks under resource constraints and node heterogeneity.

January 2007 (has links)
Ip, Yin Ki. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 96-102). / Abstracts in English and Chinese.
Contents:
  Abstract (p.i)
  Acknowledgement (p.iv)
  Chapter 1 - Introduction (p.1)
  Chapter 2 - Background Study (p.6)
    2.1 DTN Reference Implementation Model (p.7)
    2.2 DTN Applications (p.9)
    2.3 Multiple-copy Routing Strategies (p.11)
    2.4 Buffer Management Strategies (p.12)
    2.5 Performance Modeling of Multiple-copy Routing (p.14)
    2.6 Conclusion on Background Study (p.18)
  Chapter 3 - DTN with Resource Constraints (p.20)
    3.1 Introduction (p.20)
    3.2 Related Work (p.21)
    3.3 System Model, Replication, Forwarding and Buffer Management Strategies (p.22)
    3.4 Performance Evaluation (p.29)
      3.4.1 Analysis on single-message-delivery with unlimited network resource (p.29)
      3.4.2 Simulation study on multi-message-delivery with limited resource constraint (p.34)
    3.5 Conclusion on DTN with Resource Constraints (p.39)
  Chapter 4 - Multiple-copy Routing in DTN with Heterogeneous Node Types (p.41)
    4.1 Introduction (p.41)
    4.2 Related Work (p.44)
    4.3 System Model (p.44)
    4.4 Performance Modeling (p.46)
      4.4.1 Continuous Time Markov Chain (CTMC) Model (p.46)
      4.4.2 Fluid Flow Approximation (FFA) (p.53)
    4.5 Conclusion on DTN with Node Heterogeneity (p.73)
  Chapter 5 - Conclusion and Future Work (p.75)
  Appendix A - Random Direction Mobility Model (p.78)
    A.1 Mean Inter-encounter Interval (p.79)
    A.2 Inter-encounter Interval Distribution (p.86)
    A.3 Concluding Remarks (p.88)
  Appendix B - Additional Results by Fluid Flow Approximation and Moment Closure Methods (p.92)
  Bibliography (p.96)
343

An implementation of the Kermit protocol using the Edison system

Scott, Terry A. January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
344

Design Space Analysis and a Novel Routing Algorithm for Unstructured Networks-on-Chip

Parashar, Neha 01 January 2010 (has links)
Traditionally, on-chip communication was achieved with shared-medium networks, where devices shared the transmission medium and only one device could drive the network at a time. Avoiding performance losses required fast bus arbitration logic. However, a single shared bus has serious limitations given the heterogeneous, multi-core communication requirements of today's chip designs. Point-to-point or direct networks solved some of the scalability issues, but the use of routers and of rather complex algorithms to connect nodes during each cycle caused new bottlenecks. As technology scales, the on-chip physical interconnect becomes an increasingly limiting factor for performance and energy consumption. Networks-on-chip, an emerging interconnect paradigm, provide solutions to these interconnect and communication challenges. Motivated by future bottom-up self-assembled fabrication techniques, which are believed to produce largely unstructured interconnect fabrics very inexpensively, the goal of this thesis is to explore the design trade-offs of such irregular, heterogeneous, and unreliable networks. The important measures for our on-chip network models are information transfer, congestion avoidance, throughput, and latency. We use two control parameters and a network model inspired by Watts and Strogatz's small-world network model to generate a large class of different networks. We then evaluate their cost and performance and introduce a function that allows us to systematically explore the trade-off between cost and performance depending on the designer's requirements. We further evaluate these networks under different traffic conditions and introduce an adaptive, topology-agnostic ant routing algorithm that requires no global control and avoids network congestion.
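A minimal sketch of this kind of design-space sweep, using the Watts-Strogatz small-world model as the network generator (via networkx). The cost and performance proxies, the ring-layout wire-length measure, and the weighting function are illustrative assumptions, not the thesis's actual metrics:

```python
import networkx as nx

def ring_dist(u: int, v: int, n: int) -> int:
    # Distance between node positions laid out on a ring; a stand-in for
    # the physical wire length of a link on the chip.
    d = abs(u - v)
    return min(d, n - d)

def tradeoff(n: int, k: int, p: float, w: float = 0.5) -> float:
    # p is the small-world rewiring probability (one control parameter);
    # w weights performance against cost per the designer's requirement.
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    perf = 1.0 / nx.average_shortest_path_length(g)   # short paths ~ low latency
    wire = sum(ring_dist(u, v, n) for u, v in g.edges())
    cost = wire / (n * g.number_of_edges())           # normalized mean wire length
    return w * perf - (1 - w) * cost

# Sweep the rewiring probability: more long-range links shorten paths
# (better performance) but lengthen wires (higher cost).
for p in (0.0, 0.1, 0.5, 1.0):
    print(f"p={p}: score={tradeoff(n=64, k=4, p=p):.4f}")
```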
345

Automated analysis of industrial scale security protocols

Plasto, Daniel Unknown Date (has links)
Security protocols provide a communication architecture upon which security-sensitive distributed applications are built. Flaws in security protocols can expose applications to exploitation and manipulation. A number of formal analysis techniques have been applied to security protocols, with the ultimate goal of verifying whether or not a protocol fulfils its stated security requirements. These tools are limited in a number of ways. They are not fully automated and require considerable effort and expertise to operate. The specification languages often lack expressiveness. Furthermore, the model checkers often cannot handle large industrial-scale protocols due to the enormous number of states generated. Current research is addressing many of the limitations of the older tools by using state-of-the-art search optimisation and modelling techniques. This dissertation examines new ways in which industrial protocols can be analysed and presents abstract communication channels: a method for explicitly specifying the assumptions made about the medium over which participants communicate.
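One way to picture the abstract-channel idea: each link carries an explicit list of the guarantees the medium is assumed to provide, and the analysis derives the attacker's capabilities from whatever is absent. A hedged sketch only; the property names and API are invented for illustration, not taken from the dissertation:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Channel:
    name: str
    # What the medium guarantees; anything not listed is attacker-controlled.
    properties: FrozenSet[str] = frozenset()  # e.g. {"confidential", "authentic"}

def attacker_can_read(ch: Channel) -> bool:
    # No confidentiality assumption means the attacker sees every message.
    return "confidential" not in ch.properties

def attacker_can_spoof(ch: Channel) -> bool:
    # No authenticity assumption means the attacker can inject messages.
    return "authentic" not in ch.properties

tls_like = Channel("client->server", frozenset({"confidential", "authentic"}))
plain = Channel("server->logger", frozenset())

for ch in (tls_like, plain):
    print(ch.name, "read:", attacker_can_read(ch), "spoof:", attacker_can_spoof(ch))
```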
346

Online legal services - a revolution that failed?

Burns, Christine Vanda, Law, Faculty of Law, UNSW January 2007 (has links)
In the late 1990s a number of law firms and other organisations began to market online products which "package" legal knowledge. Unlike spreadsheets, word processing software and email, these products are not designed to provide efficiency improvements. Rather, online legal knowledge products, which package and apply the law, were and are viewed by many as having the potential to make major changes to legal practice. Many used the term "revolution" to describe the anticipated impact. Like any new technology development, many intersecting factors contributed to their development. In many ways they built on existing uses of technology in legal practice. The various information technology paradigms which underpin them - text retrieval, expert systems/artificial intelligence, document automation, computer aided instruction (CAI) and hypertext - were already a part of the "computerisation of law". What is new about online legal knowledge products is that as well as using technology paradigms such as expert systems or document automation to package and apply the law, they are developed using browser-based technologies. In this way they leverage the comparative ease of development and distribution capabilities of the Internet (and/or intranets). There has been particular interest in the impact of online legal knowledge products on the legal services provided to large commercial organisations. With the increasing burden of corporate compliance, the expanding role of the in-house lawyer and pressure to curb costs, online legal knowledge products should flourish in commercial organisations, and many have been adamant that they will. However, there is no convincing evidence that anything like a "revolution" has taken place. Success stories are few and far between. Surprisingly few have asked whether this "revolution" has failed, or seriously analysed whether it lies ahead. If it does lie ahead, what factors, if any, need to be taken into account in order for it to take place? If there is to be no revolution, what value should be placed on online legal knowledge products? In this dissertation I use the findings of my own empirical work, supported by a literature survey, to demonstrate that the impact of online legal knowledge products has been modest. I argue that in order to build successful online legal knowledge products it is necessary to appreciate that a complex system of interacting factors underpins their development and use, and to address those factors. I propose a schematic representation of the relationships involved in producing an online legal knowledge product and use the findings of some empirical work, together with a review of the literature in related fields, to identify the factors relevant to the various components of this framework. While there are many interacting factors at play, four sets of considerations emerge from my research as particularly important: integrating different technology paradigms, knowledge acquisition, usability, and implementation. As a practical matter, the implication of these findings is that some online legal knowledge products are more likely to be successful than others, and that there are other technology applications that may represent a better investment of the limited in-house technology budget than many online legal knowledge products.
I also argue that while most of the challenges involved in integrating different technology paradigms, improving usability, and achieving effective implementation can be addressed with varying levels of effort, the knowledge acquisition bottleneck is intractable, and new approaches to knowledge acquisition are required to overcome it. I identify some potential approaches that emerge from my research: automation, collaboration and coalition, phasing, and simple solutions.
347

Fuzzy ontology and intelligent systems for discovery of useful medical information

Parry, David Tudor Unknown Date (has links)
Currently, reliable and appropriate medical information is difficult to find on the Internet. The potential for improvement in human health through internet-based sources of information is huge, as knowledge becomes more widely available at much lower cost. Medical information has traditionally formed a large part of academic publishing. However, the increasing volume of available information, along with the demand for evidence-based medicine, makes Internet sources of information appear to be the only practical source of comprehensive and up-to-date information. The aim of this work is to develop a system allowing groups of users to identify information that they find useful, and, using those particular sources as examples, to develop an intelligent system that can classify new information sources in terms of their likely usefulness to such groups. Medical information sources are particularly interesting because they cover a very wide range of specialties, they require very strict quality control, and the consequence of error may be extremely serious. In addition, medical information sources are of increasing interest to the general public. This work covers the design, construction and testing of such a system and introduces two new concepts: document structure identification via information entropy, and fuzzy ontology for knowledge representation. A mapping between query terms and members of an ontology is usually a key part of any ontology-enhanced searching tool. However, many terms used in queries may be overloaded in terms of the ontology, which limits the potential use of automatic query expansion and refinement. In particular this problem affects information systems where different users are likely to expect different meanings for the same term. This thesis describes the derivation and use of a "Fuzzy Ontology" which uses fuzzy relations between components of the ontology in order to preserve a common structure. The concept is presented in the medical domain. Kolmogorov distance calculations are used to identify similarity between documents in terms of authorship, origin and topic. In addition, structural measures such as paragraph tags were examined but found not to be effective in clustering documents. The thesis describes some theoretical and practical evaluation of these approaches in the context of a medical information retrieval system designed to support ontology-based search refinement, relevance feedback and preference sharing between professional groups.
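The core data structure can be pictured as ontology relations weighted by a membership degree, so an overloaded query term can resolve differently for different user groups. A minimal sketch under that reading; the terms, concepts, degrees, and function names are invented for illustration, not the thesis's actual model:

```python
from typing import Dict, List, Tuple

# (term, concept) -> fuzzy membership degree in [0, 1]
FuzzyRelations = Dict[Tuple[str, str], float]

relations: FuzzyRelations = {
    ("theatre", "operating_room"): 0.9,   # clinical reading of the term
    ("theatre", "performing_arts"): 0.2,  # lay reading of the same term
    ("ward", "hospital_ward"): 0.95,
}

def expand_query(term: str, rels: FuzzyRelations, threshold: float = 0.5) -> List[str]:
    """Return concepts whose membership degree for `term` passes the threshold.

    Different professional groups could use different thresholds (or
    different relation tables), so the same term expands differently.
    """
    return [c for (t, c), mu in rels.items() if t == term and mu >= threshold]

print(expand_query("theatre", relations))  # ['operating_room']
```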
348

A knowledge-based approach to rapid system development of business information systems

Ho, Michael Moon Tong January 2005 (has links)
Business information systems have been targets for rapid application development because potential productivity gains can translate into huge returns on investment for organizations. However, realizing the perceived productivity improvement presents a major challenge to today's information systems managers and requires new development approaches. End-user computing is an approach to reducing the backlog of user requests for information needs, in which end users are given the software tools to create their own reports and extract the information they need. Some end users attempted to build their own information systems with fourth-generation languages (4GL) but failed, partly due to the programming skills required of them. Although fourth-generation languages have been promoted as a means to enhance programmer productivity by an order of magnitude, later studies by researchers showed less dramatic results. The many problems and deficiencies of 4GL created obstacles to achieving the spectacular improvement in productivity as promoted. A new knowledge-based approach to rapid business information systems development is attempted in this study to overcome the shortcomings of 4GL. A prototype system consisting of a knowledgebase is integrated with an object-oriented application generator to alleviate the need for conventional programming skills. Typical information system functionalities of database creation and updating are provided through a framework of reusable business information system components. These are object classes arranged and instantiated in a certain way, directed by a specification language. The knowledgebase enables the translation of user requirements via the specification language, which explicitly avoids the prerequisite programming skills required of the developer. The specification language is non-procedural in that specifications can be given in any order; it does not follow the basic programming language constructs of sequence, decision and repetition. Additionally, customizable rules allow the developer to validate the specifications before generating the desired application. Maintenance and enhancement of the generated application are performed by regenerating it from the modified knowledge-base facts and rules, at a higher level than conventional programming languages or even 4GL. Experiments with small groups of end users and developers found this approach to be viable. Although the specification process is tedious, no programming skills were ever required other than spreadsheet-like expressions. The absence of programming logic prevents most of the errors common in newly constructed information systems. Testing is still required, but the remedies are much easier. In conclusion, the study has demonstrated the feasibility of a knowledge-based approach to rapid system development of business information systems. This approach enables technical staff and end users alike to rapidly develop such systems without programming. The application generator is built with reusable business information system components that can be added to and extended to support more capabilities. The knowledgebase can be enhanced with corresponding new rules and facts to enable the user-developer to build new functionalities into existing or new systems. Business information system development can be lifted to a higher level than procedural specification, and assisted by knowledge-based inference to achieve spectacular productivity improvements. / thesis (PhD)--University of South Australia, 2005.
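To make the non-procedural idea concrete: the developer declares what the system contains (entities, fields, spreadsheet-like validation rules), in any order, and a generator instantiates the application from reusable components. A hypothetical sketch only; the spec format and the generator stub are invented, not the thesis's actual specification language:

```python
# Declarative spec: order of keys and rules does not matter, and there is
# no sequence/decision/repetition logic, only spreadsheet-like expressions.
spec = {
    "entity": "Invoice",
    "fields": {"number": "text", "amount": "currency", "due": "date"},
    "rules": ["amount > 0", "due >= today()"],  # validation, not code
}

def generate_app(spec: dict) -> str:
    """Stand-in for the application generator: summarize the table, forms,
    and checks it would instantiate from reusable components."""
    fields = ", ".join(f"{name}:{kind}" for name, kind in spec["fields"].items())
    return (f"table {spec['entity']}({fields}); "
            f"forms: create/update {spec['entity']}; "
            f"checks: {'; '.join(spec['rules'])}")

print(generate_app(spec))
```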
349

A framework for managing the evolving web service protocols in service-oriented architectures.

Ryu, Seung Hwan, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
In Service-Oriented Architectures, everything is a service and services can interact with each other when needed. Web services (or simply services) are loosely coupled software components that are published, discovered, and invoked across the Web. As the use of Web services grows, in order to interact correctly with the growing number of services, it is important to understand the business protocols that provide clients with the information on how to interact with services. In dynamic Web services environments, service providers need to constantly adapt their business protocols to reflect the restrictions and requirements imposed by new applications, new business strategies, and new laws, or to fix problems found in the protocol definition. However, the effective management of such protocol evolution raises critical problems: one of the most critical is how to handle instances running under the old protocol when their protocol has been changed. Simple solutions, such as aborting them or allowing them to continue to run according to the old protocol, can be considered, but they are inapplicable for many reasons (e.g., the loss of work already done and the critical nature of the work). We present a framework that supports service administrators in managing business protocol evolution by providing several features: a set of change operators allowing modifications of protocols, a variety of protocol change impact analyses automatically determining which ongoing instances can be migrated to the new version of the protocol, and data mining techniques inducing a model for classifying ongoing instances migrateable to the new protocol. To support the protocol evolution process, we have also developed database-backed GUI tools on top of our existing system. The proposed approach and tools can help service administrators in managing the evolution of ongoing instances when the business protocols of the services with which they are interacting have changed.
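A business protocol is typically modelled as a state machine over message exchanges, and the simplest migration test checks whether an instance's history is a valid prefix of a run of the new protocol version. A minimal sketch of that check, with invented protocols and messages; the framework's actual change operators and impact analyses are richer than this:

```python
from typing import Dict, List, Tuple

Protocol = Dict[Tuple[str, str], str]  # (state, message) -> next state

old_proto: Protocol = {("start", "login"): "auth", ("auth", "order"): "done"}
new_proto: Protocol = {("start", "login"): "auth",
                       ("auth", "verify"): "checked",
                       ("checked", "order"): "done"}

def migrateable(history: List[str], proto: Protocol, start: str = "start") -> bool:
    """True if the messages already exchanged form a valid prefix of a run
    of `proto`, i.e. the ongoing instance can continue under the new version."""
    state = start
    for msg in history:
        if (state, msg) not in proto:
            return False
        state = proto[(state, msg)]
    return True

print(migrateable(["login"], new_proto))           # True: instance can migrate
print(migrateable(["login", "order"], new_proto))  # False: stuck under new version
```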
350

Service Trading Marketplace Network (STAMP-Net): service discovery and composition for customizable adaptive network

Sookavatana, Pipat, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2003 (has links)
This thesis presents a complete alternative service composition model named Service Trading Marketplace Network (STAMP-Net). The primary concept is to improve overall system scalability and introduce a fair business scheme for customers and providers. STAMP-Net focuses on designing an architecture based on both technical and business aspects. In STAMP-Net, users retain the ability to choose their preferred service providers from potential-provider lists, and service providers are able to compete for the requested services that they can handle. For these purposes, STAMP-Net introduces the concept of a 'Service Trading Marketplace Mechanism', which addresses the problem of 'conflict of interest'; 'Indirect Service Discovery', which allows service providers to learn of the existence of services being offered by other service providers; and a 'Service Subcontract System', which allows service providers to subcontract any missing service to other potential service providers. In addition, this thesis also presents monitoring techniques, which are used to ensure the quality of services.
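One way to read the marketplace mechanism: providers bid on a requested service, the marketplace returns an ordered potential-provider list for the user to choose from, and a provider lacking a capability subcontracts it. A hypothetical sketch; the providers, services, and matching policy are invented for illustration:

```python
from typing import Dict, List

offers: Dict[str, Dict[str, float]] = {
    # provider -> {service: bid price}
    "provA": {"transcode": 3.0, "store": 1.0},
    "provB": {"transcode": 2.5},
    "provC": {"store": 0.8},
}

def potential_providers(service: str) -> List[str]:
    """Providers compete on price; the user keeps the final choice."""
    bids = [(p, svcs[service]) for p, svcs in offers.items() if service in svcs]
    return [p for p, _ in sorted(bids, key=lambda b: b[1])]

def subcontract(service: str, provider: str) -> List[str]:
    """If `provider` cannot handle the service itself, it subcontracts to
    the best competing provider (the 'Service Subcontract System' idea)."""
    if service in offers.get(provider, {}):
        return [provider]
    return [provider] + potential_providers(service)[:1]

print(potential_providers("transcode"))   # ['provB', 'provA']
print(subcontract("transcode", "provC"))  # ['provC', 'provB']
```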
