781

Internetworking with satellite constellations

Wood, Lloyd January 2001 (has links)
The development and growth of the Internet during the past thirty years has led to demand for and development of Internet services everywhere and over every possible communications medium. This includes the medium of satellite communications. During those same three decades, the growth in use of satellite communications to provide a widely-available wireless communications infrastructure has led to the development of broadband satellite communications using satellite constellation networks. These two technological trends have intersected. Here, we examine networking and internetworking issues affecting satellite networking in complex satellite constellation networks, and determine what is needed in order to support services based on the TCP/IP suite well in satellite constellations. We analyse constellation network topology. Its movement and its effects on the end-to-end delays experienced by network traffic travelling across the constellation are examined in detail. Analysis of the impact of cross-seam links upon delays experienced by traffic across star constellations shows that the use of cross-seam links is worthwhile. We examine the effects of multi-path routing within the constellation upon TCP communication, and demonstrate the performance advantages of an intelligent flow-based approach to routing in the constellation network. The desirability of implementing IP routing functionality in the space segment of the constellation is shown. The use of IP routing, to enable good support for IP QoS and IP multicast, is shown to be possible. We present an approach to implementing IP multicast within the constellation, evaluating use of a core-based tree algorithm, and outline an architecture permitting IP routing of IP traffic in an ATM-based satellite constellation network, using MPLS.
Finally, we present and demonstrate the advantages of a novel method of managing path delay between ground terminals across a rosette constellation with intersatellite links, by using controlled handover to manage surface diversity to provide classes of service to network traffic.
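The end-to-end delay analysis the abstract describes ultimately rests on summing propagation delays over chains of intersatellite links. A minimal sketch of that calculation, not taken from the thesis (the function name and the hop distances are illustrative assumptions):

```python
# Propagation delay over a multi-hop intersatellite path, at the
# speed of light in vacuum. Hop lengths are hypothetical.

C_KM_PER_S = 299_792.458  # speed of light, km/s

def path_delay_ms(link_lengths_km):
    """Propagation delay in milliseconds along a sequence of links."""
    return sum(link_lengths_km) / C_KM_PER_S * 1000.0

# Hypothetical four-hop path across a LEO constellation.
hops = [4000.0, 4000.0, 3000.0, 2500.0]
delay = path_delay_ms(hops)  # roughly 45 ms of propagation delay
```

Topology dynamics enter such a model by making each hop length (and the hop sequence itself) a function of time, which is what makes constellation delay analysis non-trivial.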
782

An investigation into the viability of deploying thin client technology to support effective learning in a disadvantaged, rural high school setting

Ndwe, Tembalethu Jama January 2002 (has links)
Computer Based Training offers many attractive learning opportunities for high school pupils. Its deployment in economically depressed and educationally marginalized rural schools is extremely uncommon due to the high technology skills and costs involved in its deployment and ongoing maintenance. This thesis puts forward thin client technology as a potential solution to the needs of education environments of this kind. A functional business case is developed and evaluated in this thesis, based upon a requirements analysis of media delivery in learning, and upon formal cost/performance models and a deployment field trial. Because of the economic constraints of the envisaged deployment area in rural education, an industrial field trial is used, and the aspects of this trial that can be carried over to the rural school situation have been used to assess performance and cost indicators. Our study finds that thin client technology could be deployed and maintained more cost effectively than conventional fat client solutions in rural schools, that it is capable of supporting the learning elements needed in this deployment area, and that it is able to deliver the predominantly text based applications currently being used in schools. However, we find that technological improvements are needed before future multimedia-intensive applications can be adequately supported.
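The cost comparison at the heart of such a business case can be sketched as a simple total-cost-of-ownership model. The figures and the structure below are illustrative assumptions, not numbers from the thesis:

```python
def total_cost(clients, client_price, server_price, servers,
               annual_maint, years):
    """Hardware cost plus maintenance over the planning horizon."""
    return (clients * client_price
            + servers * server_price
            + annual_maint * years)

# Hypothetical figures for a 30-seat school lab over five years:
# thin clients are cheap terminals plus one shared server with low
# upkeep; fat clients are full PCs with higher per-seat maintenance.
thin = total_cost(30, 150, 2500, 1, 500, 5)
fat = total_cost(30, 700, 0, 0, 2000, 5)
```

The thin-client advantage in a model like this comes mainly from the maintenance term: upkeep concentrates on one server instead of thirty independent machines.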
783

Improving network quality-of-service with unreserved backup paths

Chen, Ing-Wher 11 1900 (has links)
To be effective, applications such as streaming multimedia require both a more stable and more reliable service than the default best effort service from the underlying computer network. To guarantee steady data transmission despite the unpredictability of the network, a single reserved path for each traffic flow is used. However, a single dedicated path suffers from single link failures. To allow for continuous service inexpensively, unreserved backup paths are used in this thesis. While there are no wasted resources using unreserved backup paths, recovery from a failure may not be perfect. Thus, a goal for this approach is to design algorithms that compute backup paths to mask the failure for all traffic, and failing that, to maximize the number of flows that can be unaffected by the failure. Although algorithms are carefully designed with the goal to provide perfect recovery, when using only unreserved backup paths, re-routing of all affected flows, at the same service quality as before the failure, may not be possible under some conditions, particularly when the network was already fully loaded prior to the failure. Alternate strategies that trade off service quality for continuous traffic flow to minimize the effects of the failure on traffic should be considered. In addition, the actual backup path calculation can be problematic because finding backup paths that can provide good service often requires a large amount of information regarding the traffic present in the network, so much that the overhead can be prohibitive. Thus, algorithms are developed with trade-offs between good performance and communication overhead. In this thesis, a family of algorithms is designed such that as a whole, inexpensive, scalable, and effective performance can be obtained after a failure. Simulations are done to study the trade-offs between performance and scalability and between soft and hard service guarantees. 
Simulation results show that some algorithms in this thesis yield competitive or better performance even at lower overhead. The more reliable service provided by unreserved backup paths allows for better performance by current applications inexpensively, and provides the groundwork to expand the computer network for future services and applications. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
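Because the backup paths are unreserved, recovery amounts to recomputing a route on the topology with the failed link excluded. A minimal sketch of that idea on an unweighted toy topology (the graph and the BFS formulation are illustrative assumptions, not the thesis's algorithms):

```python
from collections import deque

def shortest_path(adj, src, dst, failed=frozenset()):
    """BFS shortest path avoiding any link listed in `failed`."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (u, v) not in failed \
                    and (v, u) not in failed:
                prev[v] = u
                q.append(v)
    return None  # no route survives the failure

# Toy topology: the primary path A-B-D loses link (B, D), and an
# unreserved backup A-C-D takes over.
adj = {"A": ["B", "C"], "B": ["A", "D"],
       "C": ["A", "D"], "D": ["B", "C"]}
primary = shortest_path(adj, "A", "D")
backup = shortest_path(adj, "A", "D", failed={("B", "D")})
```

The thesis's harder problem begins where this sketch ends: the recomputed path must also carry the flow at its previous service quality, which may be impossible on an already loaded network.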
784

Reconstructing multicultural counselling competency : construct explication approach

Minami, Masahiro 05 1900 (has links)
This conceptual study aimed at refining the conceptual rigor of D. W. Sue's tricomponential model of multicultural counselling competency, and at enhancing it with the addition of a new attitude component. The study anchored its theoretical basis in the concept of a nomological network (Cronbach & Meehl, 1955). A construct explication approach (Murphy & Davidshofer, 1998) was taken to develop a full explication of a four-componential model of MCC, containing attitude-awareness-knowledge-skills components. A comprehensive literature review was conducted in the area of multicultural counselling competency to develop working definitions of the awareness-knowledge-skills components. Another review was conducted to develop a working definition and a conceptual model of attitude. Under the four-componential framework, a total of 284 characteristic descriptions previously developed under the tricomponential model were conceptually re-examined and re-categorized. The analyses revealed a total of 13 subcategories under the four components. A full construct explication of the four-componential model was developed. Research implications of the new model for MCC measurement studies and practical applications to training models are discussed. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
785

A method for the evaluation of similarity measures on graphs and network-structured data

Naude, Kevin Alexander January 2014 (has links)
Measures of similarity play a subtle but important role in a large number of disciplines. For example, a researcher in bioinformatics may devise a new computed measure of similarity between biological structures, and use its scores to infer biological association. Other academics may use related approaches in structured text search, or for object recognition in computer vision. These are diverse and practical applications of similarity. A critical question is this: to what extent can a given similarity measure be trusted? This is a difficult problem, at the heart of which lies the broader issue: what exactly constitutes good similarity judgement? This research presents the view that similarity measures have properties of judgement that are intrinsic to their formulation, and that such properties are measurable. The problem of comparing similarity measures is one of identifying ground-truths for similarity. The approach taken in this work is to examine the relative ordering of graph pairs, when compared with respect to a common reference graph. Ground-truth outcomes are obtained from a novel theory: the theory of irreducible change in graphs. This theory supports stronger claims than those made for edit distances. Whereas edit distances are sensitive to a configuration of costs, irreducible change under the new theory is independent of such parameters. Ground-truth data is obtained by isolating test cases for which a common outcome is assured for all possible least measures of change that can be formulated within a chosen change descriptor space. By isolating these specific cases, and excluding others, the research introduces a framework for evaluating similarity measures on mathematically defensible grounds. The evaluation method is demonstrated in a series of case studies which evaluate the similarity performance of known graph similarity measures.
The findings of these experiments provide the first general characterisation of common similarity measures over a wide range of graph properties. The similarity computed from the maximum common induced subgraph (Dice-MCIS) is shown to provide good general similarity judgement. However, it is shown that Blondel's similarity measure can exceed the judgement sensitivity of Dice-MCIS, provided the graphs have both sufficient attribute label diversity, and edge density. The final contribution is the introduction of a new similarity measure for graphs, which is shown to have statistically greater judgement sensitivity than all other measures examined. All of these findings are made possible through the theory of irreducible change in graphs. The research provides the first mathematical basis for reasoning about the quality of similarity judgments. This enables researchers to analyse similarity measures directly, making similarity measures first class objects of scientific inquiry.
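The Dice-MCIS score mentioned above normalises the size of the maximum common induced subgraph by the sizes of the two graphs. A minimal sketch of one common form of that normalisation (the exact convention used in the thesis may differ; computing the MCIS itself is NP-hard and is assumed done elsewhere):

```python
def dice_mcis(n_mcis, n1, n2):
    """Dice coefficient from the MCIS size and the two graph sizes
    (node counts). Returns 1.0 only when the common subgraph covers
    both graphs entirely."""
    return 2.0 * n_mcis / (n1 + n2)

# Two 4-node graphs sharing a 3-node common induced subgraph.
score = dice_mcis(3, 4, 4)
```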
786

Social network analysis of behavioural interactions influencing the development of fin damage in Atlantic salmon (Salmo salar)

Cañon Jones, Hernán Alberto January 2011 (has links)
The development of fin damage in Atlantic salmon parr was investigated using social network analysis of behavioural interactions occurring under different feeding and stocking conditions. Four separate experiments were carried out in which groups of fish were subjected to a long food restriction period (30 days) described in Chapter 2, high (30 kg/m3) or low (8 kg/m3) fish stocking densities (Chapter 3), predictable or unpredictable food delivery regime (Chapter 4) or a short food restriction period (10 days) described in Chapter 5. Dorsal fin damage (erosion, splits and fin index) was significantly higher in groups of fish subjected to food restriction periods (short and long), held at high stocking density (30 kg/m3) or with an unpredictable food delivery regime. No other fins were found to be affected by fin damage irrespective of the treatment. The social networks based on aggressive interactions showed higher centrality, clustering coefficients, in-degree centrality, out-degree centrality and less dense networks in groups subjected to food restriction (short and long), low stocking densities (8 kg/m3), and unpredictable food delivery. The high centralities and clustering coefficients indicated separation of fish within the groups into initiators of aggression and receivers of aggression. This separation of roles was seen only in the food restricted group, high density groups and unpredictable food delivery groups. Initiators had higher out-degree centrality while receivers showed high in-degree centrality. Also, initiators of aggressive interactions had less fin erosion, higher final weights and higher body lengths than receivers of aggression. The severity of fin damage was significantly higher when this role differentiation occurred, and it was highly correlated with fin biting events. Additionally, overall aggression was higher in food restricted groups, low density groups and unpredictable groups.
The dynamic analyses of networks over time (Chapter 6) showed that fish classified as initiators of aggression did not change this behaviour after normal/control environmental conditions were restored (i.e. ad libitum food delivery, low stocking density or predictable food delivery). The latter result indicates that individual fish maintained their behaviour irrespective of their social context, which is consistent with the definition of behavioural syndromes, personalities or temperament defined for other non-human animals. Overall these studies demonstrated the importance of using social network analysis to clearly identify and quantify roles that individual fish assume within their network group and through time based on their behavioural interactions leading to dorsal fin damage and differential physical characteristics. The results are potentially applicable within the commercial aquaculture industry as a valuable technique to evaluate and improve the welfare of farmed fish.
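The initiator/receiver distinction above rests on directed-network degree centralities: out-degree counts aggression initiated, in-degree counts aggression received. A minimal sketch of that computation from an event list (the data and function name are illustrative, not from the thesis):

```python
from collections import Counter

def degree_centralities(interactions):
    """Out-degree (initiated) and in-degree (received) counts per
    fish, from a list of (initiator, receiver) aggression events."""
    out_deg, in_deg = Counter(), Counter()
    for initiator, receiver in interactions:
        out_deg[initiator] += 1
        in_deg[receiver] += 1
    return out_deg, in_deg

# Hypothetical observations: fish 1 initiates most of the aggression,
# fish 2 and 3 mostly receive it.
events = [(1, 2), (1, 3), (1, 2), (2, 3)]
out_deg, in_deg = degree_centralities(events)
```

In the thesis's terms, a fish like fish 1 (high out-degree, low in-degree) would be classed as an initiator, and the network-level metrics (density, clustering) are then computed over the same directed graph.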
787

Incorporating semantic integrity constraints in a database schema

Yang, Heng-li 11 1900 (has links)
A database schema should consist of structures and semantic integrity constraints. Semantic integrity constraints (SICs) are invariant restrictions on the static states of the stored data and the state transitions caused by the primitive operations: insertion, deletion, or update. Traditionally, database design has been carried out on an ad hoc basis and focuses on structure and efficiency. Although the E-R model is the popular conceptual modelling tool, it contains few inherent SICs. Also, although the relational database model is the popular logical data model, a relational database in fourth or fifth normal form may still represent little of the data semantics. Most integrity checking is distributed to the application programs or transactions. This approach to enforcing integrity via the application software causes a number of problems. Recently, a number of systems have been developed for assisting the database design process. However, only a few of those systems try to help a database designer incorporate SICs in a database schema. Furthermore, current SIC representation languages in the literature cannot be used to represent precisely the necessary features for specifying declarative and operational semantics of a SIC, and no modelling tool is available to incorporate SICs. This research solves the above problems by presenting two models and one subsystem. The E-R-SIC model is a comprehensive modelling tool for helping a database designer incorporate SICs in a database schema. It is application domain-independent and suitable for implementation as part of an automated database design system. The SIC Representation model is used to represent precisely these SICs. The SIC elicitation subsystem would verify these general SICs to a certain extent, decompose them into sub-SICs if necessary, and transform them into corresponding ones in the relational model.
A database designer using these two modelling tools can describe more data semantics than with the widely used relational model. The proposed SIC elicitation subsystem can provide more modelling assistance for him (her) than current automated database design systems. / Business, Sauder School of / Graduate
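The contrast the abstract draws is between integrity checks scattered through application code and SICs declared once at the schema level and enforced on every primitive operation. A minimal sketch of the declarative style (the constraint, table, and values are hypothetical, not from the thesis):

```python
# Each SIC is a declared predicate on a row; the insert operation
# enforces all of them, instead of each application re-checking.

def check_insert(row, constraints):
    """Accept the insert only if every declared SIC holds."""
    return all(constraint(row) for constraint in constraints)

# Illustrative SICs on an employee table: salary must be positive
# and must not exceed a (made-up) cap.
constraints = [
    lambda r: r["salary"] > 0,
    lambda r: r["salary"] <= 200_000,
]
ok = check_insert({"name": "a", "salary": 50_000}, constraints)
bad = check_insert({"name": "b", "salary": -10}, constraints)
```

A full SIC language, as the thesis notes, must also cover state transitions (the relationship between old and new values on update), which a per-row predicate like this cannot express.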
788

Interactive gas flow analysis

Aylmer, Steven F. January 1980 (has links)
No description available.
789

Coverage-awareness Scheduling Protocols for Wireless Sensor Networks

Fei, Xin January 2012 (has links)
Coverage and energy are fundamental problems that constrain the development of wireless sensor networks. To accurately evaluate the monitoring quality (coverage), one needs to model the interaction of sensors, phenomena and the environment. Furthermore, in combination with scheduling algorithms and computational optimization, protocols can improve the overall monitoring quality and prolong the lifetime of the network. This thesis is an investigation of the coverage problem and its related applications in wireless sensor networks. We first discuss the realism of the current Boolean sensing model and propose an irregular sensing model used to determine coverage in areas with obstacles. We then investigate the joint problem of maintaining monitoring quality and extending the lifetime of the network by using scheduling schemes. Since the scheduling problem is NP-hard, a genetic algorithm and a Markov decision process are used to determine an achievable optimal result for the joint coverage-preserving and lifetime-prolonging problem. To avoid the cost of centralized or distributed scheduling algorithms, a localized coverage-preserving scheduling algorithm is proposed by exploiting the construction process of the Voronoi diagram. Besides exploring the coverage characteristics of a static wireless sensor network, we investigate the coverage problem when mobile elements are introduced into the network. We consider the single-hop mobile data gathering problem with energy efficiency and data freshness concerns in a wireless sensor network where connectivity cannot be maintained. We first investigate the upper/lower bounds of the covering time for a single collector to cover the monitoring area. Through this investigation we show that, for a bounded rectangular area, a hexagon walk can explore the area more efficiently than a random walk when the edges of the area are known.
We then propose a virtual force mobile model (VFM) in which the energy consumption for data transmission is modeled as a virtual elastic force and used to guide mobile collectors to optimal positions for energy saving.
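The Boolean sensing model that the thesis takes as its baseline (before arguing for an irregular model around obstacles) declares a point covered when it falls within the fixed sensing radius of at least one sensor. A minimal sketch of that baseline, with illustrative positions and radius:

```python
import math

def is_covered(point, sensors, radius):
    """Boolean disk model: a point is covered if some sensor lies
    within the sensing radius. Ignores obstacles and signal decay,
    which is exactly the simplification an irregular model drops."""
    px, py = point
    return any(math.hypot(px - sx, py - sy) <= radius
               for sx, sy in sensors)

sensors = [(0.0, 0.0), (10.0, 0.0)]
covered = is_covered((5.0, 0.0), sensors, 6.0)  # between the two disks
gap = is_covered((5.0, 8.0), sensors, 6.0)      # outside both disks
```

Coverage-preserving scheduling then asks which sensors can sleep while every point of interest still passes a test like this.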
790

Network Coding Performance Evaluation and an Application to Underwater Networks

Ding, Xiake January 2015 (has links)
Network coding is a promising technology that many researchers have advocated due to its potentially significant benefits in improving the efficiency of data transmission. In this thesis, we use simulations to evaluate the performance of different network topologies using network coding. By comparing the results with networks without network coding, we confirm that network coding can improve network throughput. It also has the potential to decrease end-to-end delay and improve reliability. However, there is a tradeoff (between delay and reliability) when network coding is used, as well as some limitations, which we summarize. Finally, we have also applied network coding to a three-dimensional underwater network, using parameters that truly reflect the underwater channel. Our performance evaluations show better throughput and end-to-end delay, but not a better PDR (Packet Delivery Rate), in the underwater topology we used.
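The throughput gain behind network coding is usually illustrated with the classic butterfly network: the bottleneck node forwards the XOR of two packets, and each receiver combines it with the packet it already holds to recover the other. A minimal sketch of that mixing step (packet contents are illustrative; this is the textbook example, not the thesis's simulation setup):

```python
def xor_bytes(a, b):
    """XOR-combine two equal-length packets, the linear coding
    operation used at the butterfly's bottleneck node."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"\x0f\x0f", b"\x33\x33"
coded = xor_bytes(p1, p2)      # single transmission over the bottleneck
r1_p2 = xor_bytes(coded, p1)   # receiver 1 holds p1, recovers p2
r2_p1 = xor_bytes(coded, p2)   # receiver 2 holds p2, recovers p1
```

One coded transmission thus serves both receivers where plain forwarding would need two, which is the throughput benefit the simulations measure; the delay/reliability tradeoff arises because a receiver must buffer until it can decode.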
