711

Control, empowerment and change in the work of voluntary organizations: an ethnographic study of agencies working with single homeless people in Oxford

Mort, Victoria January 1999 (has links)
No description available.
712

The capture of meaning in database administration

Robinson, H. M. January 1988 (has links)
No description available.
713

Nonlinear identification using local model networks

McLoone, Seamus Cornelius January 2000 (has links)
No description available.
714

Maintaining Habitat Connectivity for Conservation

Rayfield, Bronwyn 19 February 2010 (has links)
Conserving biodiversity in human-dominated landscapes requires protecting networks of ecological reserves and managing the intervening matrix to maintain the potential for species to move among them. This dissertation provides original insights towards (1) identifying areas for protection in reserves that are critical to maintain biodiversity and (2) assessing the potential for species' movements among habitat patches in a reserve network. I develop and test methods that will facilitate conservation planning to promote viable, resilient populations through time.

The first part of this dissertation tests and develops reserve selection strategies that protect either a single focal species in a dynamic landscape or multiple interacting species in a static landscape. Using a simulation model of boreal forest dynamics, I test the effectiveness of static and dynamic reserves to maintain the spatial habitat requirements of a focal species, American marten (Martes americana). Dynamic reserves improved upon static reserves, but re-locating reserves was constrained by fragmentation of the matrix. Management of the spatial and temporal distribution of land-uses in the matrix will therefore be essential to retain options for re-locating reserves in the future. Additionally, to incorporate essential consumer-resource interactions into reserve selection, a new algorithm is presented for American marten and its two primary prey species. Including their interaction had the benefit of producing spatially aggregated reserves based on functional species requirements.

The second part of this dissertation evaluates and synthesizes the network-theoretic approach to quantifying connectivity among habitat patches or reserves embedded within spatially heterogeneous landscapes. I conduct a sensitivity analysis of network-theoretic connectivity analyses that derive least-cost movement behavior from the underlying cost surface, which describes the relative ecological costs of dispersing through different landcover types. Landscape structure is shown to affect how sensitive least-cost graph connectivity assessments are to the quality (relative cost values) of landcover types. I develop a conceptual framework to classify network connectivity statistics based on the component of habitat connectivity that they quantify and the level within the network to which they can be applied. Together, the combination of reserve design and network connectivity analyses provides complementary insights to inform spatial planning decisions for conservation.
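The least-cost graph connectivity analysis described in the second part can be illustrated with a small sketch: a raster cost surface is treated as a weighted graph and the effective (least-cost) distance between habitat patches is obtained by a shortest-path search. The Python below is a generic illustration only; the grid, cost values, and patch locations are hypothetical and are not taken from the dissertation.

```python
import heapq

# Hypothetical 4x4 cost surface: each value is the relative ecological cost
# of moving through that landcover cell (e.g. 1 = forest, 10 = open field).
COST = [
    [1, 1, 5, 10],
    [1, 2, 5, 10],
    [1, 1, 1, 5],
    [10, 5, 1, 1],
]
ROWS, COLS = len(COST), len(COST[0])

def least_cost_distance(src, dst):
    """Dijkstra over the grid graph; edge weight = mean cost of the two cells."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS:
                nd = d + (COST[r][c] + COST[nr][nc]) / 2.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Effective distance between two hypothetical habitat patches at opposite corners.
print(least_cost_distance((0, 0), (3, 3)))
```

A sensitivity analysis of the kind the abstract describes would re-run such a calculation while perturbing the relative cost values and observe how the resulting patch-to-patch distances change.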
715

On Integrating Failure Localization with Survivable Design

He, Wei 13 May 2013 (has links)
In this thesis, I propose a novel framework for all-optical failure restoration which jointly determines the network monitoring plane and spare capacity allocation in the presence of either static or dynamic traffic. The proposed framework aims to enable a general shared protection scheme to achieve near-optimal capacity efficiency, as in Failure Dependent Protection (FDP), while being subject to an ultra-fast, all-optical, and deterministic failure restoration process. Simply put, Local Unambiguous Failure Localization (L-UFL) and FDP are the two building blocks of the proposed restoration framework. Under L-UFL, by properly allocating a set of Monitoring Trails (m-trails), a set of nodes can unambiguously identify every possible Shared Risk Link Group (SRLG) failure merely based on its locally collected Loss of Light (LOL) signals. Two heuristics are proposed to solve L-UFL, one of which exclusively deploys Supervisory Lightpaths (S-LPs) while the other jointly considers S-LPs and Working Lightpaths (W-LPs) to suppress monitoring resource consumption. Thanks to the "Enhanced Min Wavelength Max Information" principle, an entropy-based utility function, m-trail global sharing and other techniques, the proposed heuristics exhibit satisfactory performance in minimizing the number of m-trails, the Wavelength Channel (WL) consumption and the running time of the algorithm.

Based on the heuristics for L-UFL, two algorithms, namely MPJD and DJH, are proposed for the novel signaling-free restoration framework to deal with static and dynamic traffic respectively. MPJD is developed to determine the Protection Lightpaths (P-LPs) and m-trails given the pre-computed W-LPs, while DJH jointly implements a generic dynamic survivable routing scheme based on FDP together with an m-trail deployment scheme. For both algorithms, m-trail deployment is guided by the Necessary Monitoring Requirement (NMR) defined at each node for achieving signaling-free restoration. Extensive simulation is conducted to verify the performance of the proposed heuristics in terms of WL consumption, number of m-trails, monitoring requirement, blocking probability and running time. In conclusion, the proposed restoration framework can achieve all-optical and signaling-free restoration with the help of L-UFL, while maintaining high capacity efficiency as in FDP-based survivable routing. The proposed heuristics achieve satisfactory performance, as verified by the simulation results.
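The monitoring idea at the heart of L-UFL can be sketched as a simple code-matching step: if every link is traversed by a distinct subset of m-trails, then the set of m-trails that simultaneously report loss of light identifies the failure without any signaling. The toy example below is an illustration only; the link names, trail assignments, and the restriction to single-link failures are my assumptions, not the thesis's construction.

```python
# Hypothetical m-trail coverage: the set of m-trails that traverse each link.
# Because every link has a distinct covering set, the alarm pattern observed
# at a node (the trails that went dark) pinpoints the failure.
MTRAIL_COVERAGE = {
    "link-AB": {"T1"},
    "link-BC": {"T1", "T2"},
    "link-CD": {"T2"},
    "link-DE": {"T2", "T3"},
    "link-EA": {"T1", "T3"},
}

def localize_failure(dark_trails):
    """Return the link(s) whose coverage exactly matches the observed alarm pattern."""
    dark = set(dark_trails)
    return [link for link, trails in MTRAIL_COVERAGE.items() if trails == dark]

# A node seeing loss of light on T1 and T3 concludes that link-EA has failed.
print(localize_failure({"T1", "T3"}))   # ['link-EA']
```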
716

On buffer allocation in transport protocols

Zissopoulos, Athanassios January 1987 (has links)
No description available.
717

Using Bayesian Network to Develop Drilling Expert Systems

Alyami, Abdullah August 2012 (has links)
Many years of experience in the field, and sometimes in the lab, are required to develop expert consultants. Texas A&M University has recently established a new method to develop a drilling expert system that can be used as a training tool for young engineers or as a consultation system for various drilling engineering concepts such as drilling fluids, cementing, completion, well control, and underbalanced drilling practices. The method proposes a set of guidelines for optimal drilling operations in different focus areas by integrating current best practices through a decision-making system based on Artificial Bayesian Intelligence. Optimum practices, collected from a literature review and experts' opinions, are integrated into a Bayesian Network (BN) to simulate likely scenarios of its use that honor efficient practices as certain parameters are varied. The advantage of the Artificial Bayesian Intelligence method is that it can be updated easily when dealing with differing opinions. To the best of our knowledge, this study is the first to show a flexible, systematic method for designing drilling expert systems. We used these best practices to build decision trees that allow the user to start from an elementary data set and end up with a decision that honors the best practices.
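As a rough illustration of the kind of Bayesian-network inference the abstract refers to, the sketch below infers a drilling-risk variable from a single observed symptom by enumerating a two-node network. The variable names and probabilities are hypothetical placeholders, not values from this study.

```python
# Hypothetical two-node Bayesian network: Risk -> KickIndication.
# Priors and conditional probabilities are illustrative only.
P_RISK = {"high": 0.2, "low": 0.8}
P_KICK_GIVEN_RISK = {
    ("yes", "high"): 0.7, ("no", "high"): 0.3,
    ("yes", "low"): 0.1, ("no", "low"): 0.9,
}

def posterior_risk(kick_observed):
    """P(Risk | KickIndication = kick_observed), computed by enumeration."""
    joint = {r: P_RISK[r] * P_KICK_GIVEN_RISK[(kick_observed, r)] for r in P_RISK}
    total = sum(joint.values())
    return {r: p / total for r, p in joint.items()}

# Observing a kick indication raises the inferred probability of high risk.
print(posterior_risk("yes"))   # {'high': ~0.64, 'low': ~0.36}
```

Updating such a network with a different expert's opinion amounts to changing the prior or conditional tables, which is the flexibility the abstract highlights.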
718

Improving network quality-of-service with unreserved backup paths

Chen, Ing-Wher 11 1900 (has links)
To be effective, applications such as streaming multimedia require both a more stable and a more reliable service than the default best-effort service of the underlying computer network. To guarantee steady data transmission despite the unpredictability of the network, a single reserved path is used for each traffic flow. However, a single dedicated path is vulnerable to single link failures. To allow for continuous service inexpensively, unreserved backup paths are used in this thesis. While unreserved backup paths waste no resources, recovery from a failure may not be perfect. Thus, a goal of this approach is to design algorithms that compute backup paths to mask the failure for all traffic and, failing that, to maximize the number of flows unaffected by the failure. Although the algorithms are carefully designed with the goal of perfect recovery, re-routing all affected flows at the same service quality as before the failure may not be possible with unreserved backup paths alone, particularly when the network was already fully loaded prior to the failure. Alternate strategies that trade off service quality for continuous traffic flow should therefore be considered to minimize the effects of the failure on traffic. In addition, the backup path calculation itself can be problematic, because finding backup paths that provide good service often requires so much information about the traffic present in the network that the overhead can be prohibitive. Thus, algorithms are developed that trade off good performance against communication overhead. In this thesis, a family of algorithms is designed such that, as a whole, inexpensive, scalable, and effective performance can be obtained after a failure. Simulations are done to study the trade-offs between performance and scalability and between soft and hard service guarantees. Simulation results show that some algorithms in this thesis yield competitive or better performance even at lower overhead. The more reliable service provided by unreserved backup paths allows current applications to perform better inexpensively, and provides the groundwork for expanding the computer network to future services and applications.
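One simple reading of the unreserved-backup-path idea is: compute a primary path for a flow, then compute a backup that avoids the primary's links, so that no single link failure can take down both. The sketch below is a generic two-step disjoint-path heuristic on a made-up topology; it is not the thesis's algorithm, only an illustration of the concept.

```python
from collections import deque

# Hypothetical topology: adjacency list of an unweighted network.
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "F"],
    "E": ["C", "F"],
    "F": ["D", "E"],
}

def shortest_path(graph, src, dst, banned_links=frozenset()):
    """BFS shortest path that avoids any (u, v) link listed in banned_links."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph[u]:
            if v not in parent and (u, v) not in banned_links and (v, u) not in banned_links:
                parent[v] = u
                queue.append(v)
    return None  # no path once the banned links are removed

primary = shortest_path(GRAPH, "A", "F")
used = {(primary[i], primary[i + 1]) for i in range(len(primary) - 1)}
backup = shortest_path(GRAPH, "A", "F", banned_links=used)  # link-disjoint backup
print(primary, backup)  # e.g. ['A', 'B', 'D', 'F'] ['A', 'C', 'E', 'F']
```

Because the backup is not reserved, it consumes no capacity until the primary fails; the trade-off studied in the thesis is whether that backup can actually be honored at failure time.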
719

Software architectures for web-based applications

Zhao, Weiquan. January 2006 (has links)
The infrastructure used to deploy hypermedia applications over the World Wide Web has increasingly been used to support software that has the majority of its logic implemented apart from Universal Resource Locators (URLs). We denote such software as web-based applications. Whilst there have been many observations about the differences between web-based application development environments and their more traditional counterparts, one aspect of web-based application development that has received less attention is the software architecture of web-based applications. In this thesis we demonstrate the positive impact that an appropriate software architecture can have on creating easy-to-maintain web-based applications.

The first part of the thesis presents a taxonomy of web-based applications that is organised around abstraction layers, which highlight the role of software architecture, and tiers, which reflect the infrastructure of the web on which applications are deployed. It is shown that there is a systematic way to develop a software architecture for a web-based application by projecting the high-level abstract layers representing the application onto the tiers that define the distributed web infrastructure. The thesis next presents a new architecture for web-based applications targeted at lowering the cost of routine maintenance. Various tools that support the use of this architecture in the development process for web-based applications are then presented. The feasibility and usability of the architecture are demonstrated by the construction of several significant applications using it. Finally, the new architecture proposed in the thesis is compared experimentally with its major current competitor, which follows the so-called model-view-controller pattern, against an ease-of-maintenance criterion. It is shown that the new architecture has significant advantages over the model-view-controller pattern in making the maintenance of complex web-based applications easier.

Thesis (PhDInformationTechnology)--University of South Australia, 2006.
720

An Inquiry Into PBNM System Performance Required For Massive Scale Telecommunication Applications

January 2006 (has links)
PBNM systems have been proposed as a feasible technology for managing massive-scale applications, including telecommunication service management. What is not known is how this class of system performs under carrier-scale traffic loads. This research investigates this open question and concludes that, subject to the considerations herein, this technology can provide services to large-scale applications. An in-depth examination of several inferencing algorithms is made using experimental methods. The inferencing operation has been implicated as the major source of performance problems in rule-based systems, and we examine this claim. Moreover, these algorithms are of central importance to current and future context-aware, pervasive, mobile services. A novel algorithm, JukeBox, is proposed that is a correct, general and pure bindspace conjunctive match algorithm. It is compared to the current state-of-the-art algorithm, Rete. We find that Rete is the superior algorithm when implemented using the hashed-equality variant. We also conclude that IO is an important cause of PBNM system performance limitations and is perhaps of more significance than the implicated inferencing operations. However, inferencing can be a bottleneck to performance, and we document the factors associated with this. We describe a generally useful policy system benchmarking procedure that provides a visible, repeatable and measurable process for establishing a policy server's service rate characteristics. The service rate statistics, namely μ and σ, establish the limitations to policy system throughput. Combined with the offered traffic load to the server, described by the statistic λ, we can provide a complete characterisation of system performance using the Pollaczek-Khinchine formula. This characterisation allows us to derive simple design and dimensioning heuristics that can be used to rate the policy system as a whole.
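The characterisation mentioned at the end of the abstract rests on the standard Pollaczek-Khinchine result for an M/G/1 queue: with arrival rate λ, mean service rate μ (mean service time 1/μ), service-time standard deviation σ and utilisation ρ = λ/μ, the mean time a request waits in the queue is W_q = λ(σ² + 1/μ²) / (2(1 − ρ)). The sketch below applies that textbook formula; the numeric values are illustrative and are not measurements from this research.

```python
def pk_mean_wait(lam, mu, sigma):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue.

    lam   -- mean arrival rate (requests per second)
    mu    -- mean service rate (requests per second), so mean service time is 1/mu
    sigma -- standard deviation of the service time (seconds)
    """
    rho = lam / mu
    if rho >= 1.0:
        raise ValueError("unstable queue: offered load rho must be < 1")
    second_moment = sigma ** 2 + (1.0 / mu) ** 2   # E[S^2] of the service time
    return lam * second_moment / (2.0 * (1.0 - rho))

# Illustrative dimensioning check (made-up numbers): 80 policy decisions/s
# offered to a server that completes 100/s on average, with a 5 ms service-time
# standard deviation, gives a mean queueing delay of 0.025 s.
print(pk_mean_wait(lam=80.0, mu=100.0, sigma=0.005))
```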
