1

ENERGY CONSERVATION FOR WIRELESS AD HOC ROUTING

Hou, Xiaobing 26 July 2006 (has links)
Self-configuring wireless ad hoc networks have attracted considerable attention in recent years due to their valuable civil and military applications. One aspect of such networks that has been insufficiently studied is energy efficiency. Energy efficiency is crucial for prolonging network lifetime and thus making the network more survivable. Nodes in wireless ad hoc networks are typically battery-driven and hence operate on an extremely frugal energy budget. Conventional ad hoc routing protocols focus on handling mobility rather than energy efficiency. Energy-efficient routing strategies proposed in the literature either do not take advantage of sleep modes to conserve energy, or incur substantial overhead in control messages and computational complexity to schedule sleep modes and thus do not scale. In this dissertation, a novel strategy is proposed to manage the sleep of nodes in the network so that energy is conserved while network connectivity is maintained. The novelty of the strategy is its extreme simplicity. The idea is derived from results in percolation theory, typically called gossiping. Gossiping is a convenient and effective approach and has been successfully applied to several areas of networking. In the proposed work, we will develop a sleep management protocol based on gossiping for both static and mobile wireless ad hoc networks. The protocol will then be extended to asynchronous networks, where nodes manage their own states independently. Analysis and simulations will be conducted to show the correctness, effectiveness, and efficiency of the proposed work, with analytical and simulation results validating each other. We will investigate the most important performance aspects of the proposed strategy, including the effect of parameter tuning and the impact of routing protocols. Furthermore, multiple extensions will be developed to improve performance and make the proposed strategy applicable to different network scenarios.
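The percolation intuition behind this strategy can be illustrated with a toy simulation: if each node of a grid stays awake independently with some probability, the awake nodes still form one large connected component once that probability exceeds a threshold. The Python sketch below illustrates only that idea under assumed values (grid size, awake probabilities, a 4-neighbor connectivity check); it is not the dissertation's protocol.

    import random
    from collections import deque

    def awake_grid(n, p, seed=0):
        """Each node of an n x n grid stays awake independently with probability p."""
        rng = random.Random(seed)
        return {(i, j) for i in range(n) for j in range(n) if rng.random() < p}

    def largest_awake_component(awake):
        """Size of the largest connected component of awake nodes (4-neighbor grid)."""
        seen, best = set(), 0
        for start in awake:
            if start in seen:
                continue
            queue, size = deque([start]), 0
            seen.add(start)
            while queue:
                i, j = queue.popleft()
                size += 1
                for nxt in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if nxt in awake and nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            best = max(best, size)
        return best

    # Rough check: above the percolation threshold, most awake nodes form one component.
    for p in (0.3, 0.6, 0.9):
        awake = awake_grid(40, p)
        print(p, len(awake), largest_awake_component(awake))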
2

The Tuskegee Syphilis Study: Access and Control over Controversial Records

Whorley, Tywanna 06 October 2006 (has links)
As the nation's archives, the National Archives and Records Administration (NARA) preserves and provides access to records that document how our government conducts business on behalf of the American people, past and present. For the American citizen, NARA provides a form of accountability through the records within its custody, which affect the nation's collective memory. A plethora of these records, however, contain evidence of the federal government's misconduct in episodes of American history that affected public trust. The Tuskegee Syphilis Study records are a prime example of records within the custody of NARA that continue to have a lasting effect on public trust in the federal government. Even though NARA disclosed administrative records that document the government's role in the study, the Tuskegee Syphilis Study records continue to challenge the institution on a variety of archival issues such as access, privacy, collective memory, and accountability. Through historical case study methodology, this study examines the National Archives and Records Administration's administrative role in maintaining and providing access to the Tuskegee Syphilis Study records, especially the restricted information. The effect of the changing social context on NARA's recordkeeping practices for the Tuskegee Syphilis Study records is also explored.
3

Integrating Protein Data Resources through Semantic Web Services

Liu, Xiong 30 January 2007 (has links)
Understanding the function of every protein is one major objective of bioinformatics. Currently, a large amount of information associated with protein function (e.g., sequence, structure, and dynamics) is being produced by experiments and predictions. Integrating these diverse data about protein sequence, structure, dynamics, and other protein features allows further exploration and establishment of the relationships between protein sequence, structure, dynamics, and function, and thereby enables control of the function of target proteins. However, information integration in protein data resources faces challenges at the technology level, in interfacing heterogeneous data formats and standards, and at the application level, in the semantic interpretation of dissimilar data and queries. In this research, a semantic web services infrastructure called Web Services for Protein data resources (WSP) is proposed for flexible and user-oriented integration of protein data resources. This infrastructure includes a method for modeling protein web services, a service publication algorithm, an efficient service discovery (matching) algorithm, and an optimal service chaining algorithm. Rather than relying on syntactic matching, the matching algorithm discovers services based on their similarity to the requested service. Therefore, users can locate services that semantically match their data requirements even if they are syntactically distinct. Furthermore, WSP supports a workflow-based approach to service integration. The chaining algorithm selects and chains services based on the criteria of service accuracy and data interoperability, and it generates a web services workflow that automatically integrates the results from individual services. A number of experiments are conducted to evaluate the performance of the matching algorithm. The results reveal that the algorithm can discover services with reasonable performance. In addition, a composite service that integrates protein dynamics and conservation is demonstrated using the WSP infrastructure.
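The abstract contrasts semantic, similarity-based service discovery with purely syntactic matching. A minimal sketch of that general idea, assuming each service is described by sets of ontology terms for its inputs and outputs and scored by set overlap, follows; the term sets, equal weighting, and example services are illustrative assumptions, not the WSP matching algorithm.

    def jaccard(a, b):
        """Set-overlap score in [0, 1]; 1 means identical term sets."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def match_score(request, service):
        """Average similarity of input and output term sets (illustrative weighting)."""
        return 0.5 * jaccard(request["inputs"], service["inputs"]) + \
               0.5 * jaccard(request["outputs"], service["outputs"])

    request = {"inputs": {"ProteinSequence"}, "outputs": {"SecondaryStructure"}}
    services = [
        {"name": "PredictStructure", "inputs": {"ProteinSequence"},
         "outputs": {"SecondaryStructure", "ConfidenceScore"}},
        {"name": "FetchDynamics", "inputs": {"PDBId"}, "outputs": {"DynamicsProfile"}},
    ]
    ranked = sorted(services, key=lambda s: match_score(request, s), reverse=True)
    print([(s["name"], round(match_score(request, s), 2)) for s in ranked])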
4

An Overlay Architecture for Personalized Object Access and Sharing in a Peer-to-Peer Environment

Sangpachatanaruk, Chatree 30 January 2007 (has links)
Due to its exponential growth and decentralized nature, the Internet has evolved into a chaotic repository, making it difficult for users to discover and access resources of interest to them. As a result, users have to deal with the problem of information overload. The emergence of the Semantic Web provides Internet users with the ability to associate explicit, self-described semantics with resources. This ability will in turn facilitate the development of ontology-based resource discovery tools to help users retrieve information efficiently. However, it is widely believed that the Semantic Web of the future will be a complex web of smaller ontologies, mostly created by various groups of web users who share a similar interest, referred to as Communities of Interest. This thesis proposes a solution to the information overload problem using a user-driven framework, referred to as a Personalized Web, that allows individual users to organize themselves into Communities of Interest based on ontologies agreed upon by all community members. Within this framework, users can define and augment their personalized views of the Internet by associating specific properties and attributes with resources and by defining constraint functions and rules that govern the interpretation of the semantics associated with those resources. Such views can then be used to capture the user's interests and be integrated into a user-defined Personalized Web. As a proof of concept, a Personalized Web architecture is developed that employs ontology-based semantics and a structured peer-to-peer overlay network to provide a foundation for semantically based resource indexing and advertising. To investigate mechanisms that support resource advertising and retrieval in the Personalized Web architecture, three agent-driven advertising and retrieval schemes, the Aggressive scheme, the Crawler-based scheme, and the Minimum-Cover-Rule scheme, were implemented and evaluated in both stable and churn environments. In addition to the development of a Personalized Web architecture that deals with typical web resources, this thesis used a case study to explore the potential of the architecture to support future web service workflow applications. The results of this investigation demonstrated that the architecture can support the automation of service discovery, negotiation, and invocation, allowing service consumers to actualize a personalized web service workflow. Further investigation will be required to improve the performance of the automation and to allow it to be performed in a secure and robust manner. To support the next-generation Internet, further exploration will be needed to develop a Personalized Web that includes ubiquitous and pervasive resources.
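As a rough illustration of how a structured peer-to-peer overlay can index semantically described resources, the sketch below hashes concept terms onto a ring of peer identifiers and assigns each advertisement to the responsible peer. The hash function, identifier space, and concept names are assumptions made for illustration; the thesis's actual indexing scheme may differ.

    import hashlib
    from bisect import bisect_left

    RING = 2 ** 16  # illustrative identifier space

    def ring_id(value):
        """Map a string (peer address or concept term) onto the identifier ring."""
        return int(hashlib.sha1(value.encode()).hexdigest(), 16) % RING

    def responsible_peer(concept, peer_ids):
        """First peer clockwise from the concept's identifier (consistent hashing)."""
        key = ring_id(concept)
        idx = bisect_left(peer_ids, key)
        return peer_ids[idx % len(peer_ids)]

    peers = sorted(ring_id(p) for p in ("peerA", "peerB", "peerC", "peerD"))
    for concept in ("Photography", "JazzRecordings", "HikingTrails"):
        print(concept, "->", responsible_peer(concept, peers))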
5

Enabling Large-Scale Peer-to-Peer Stored Video Streaming Service with QoS Support

Okuda, Masaru 30 January 2007 (has links)
This research aims to enable a large-scale, high-volume, peer-to-peer, stored-video streaming service over the Internet, such as on-line DVD rentals. P2P allows a group of dynamically organized users to cooperatively support content discovery and distribution services without needing to employ a central server. P2P has the potential to overcome the scalability issues associated with client-server based video distribution networks; however, it brings a new set of challenges. This research addresses the following five technical challenges associated with the distribution of streaming video over a P2P network: 1) allow users with limited transmit bandwidth capacity to become contributing sources, 2) support the advertisement and discovery of time-changing and time-bounded video frame availability, 3) minimize the impact of distribution source losses during video playback, 4) incorporate user mobility information in the selection of distribution sources, and 5) design a streaming network architecture that enables the above functionalities. To meet these requirements, we propose a video distribution network model based on a hybrid of client-server and P2P architectures. In this model, a video is divided into a sequence of small segments, and each user executes a scheduling algorithm to determine the order, timing, and rate of segment retrievals from other users. The model also employs an advertisement and discovery scheme that incorporates parameters of the scheduling algorithm, allowing users to share their lifetime of video segment availability information in one advertisement and one query. An accompanying QoS scheme reduces the number of video playback interruptions when one or more distribution sources depart from the service prematurely. The simulation study shows that the proposed model and associated schemes greatly alleviate the bandwidth requirement of the video distribution server, especially when the number of participating users grows large. As much as 90% load reduction was observed in some experiments when compared to a traditional client-server based video distribution service. A significant reduction is also observed in the number of video presentation interruptions when the proposed QoS scheme is incorporated in the distribution process while certain percentages of distribution sources depart from the service unexpectedly.
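The per-user scheduling idea, deciding the order and timing of segment retrievals from other peers, can be sketched as a simple deadline-ordered greedy assignment. The segment sizes, playback deadlines, source rates, and greedy rule below are illustrative assumptions rather than the scheduling algorithm proposed in the thesis.

    def schedule_segments(segments, sources):
        """Greedy sketch: assign each segment (in deadline order) to the source
        that can finish it earliest, given each source's download rate."""
        finish = {s: 0.0 for s in sources}           # time at which each source becomes free
        plan = []
        for seg in sorted(segments, key=lambda s: s["deadline"]):
            best = min(sources, key=lambda s: finish[s] + seg["size"] / sources[s])
            start = finish[best]
            finish[best] = start + seg["size"] / sources[best]
            plan.append((seg["id"], best, round(start, 2), round(finish[best], 2),
                         finish[best] <= seg["deadline"]))
        return plan  # (segment, source, start, end, meets deadline?)

    segments = [{"id": i, "size": 2.0, "deadline": 4 + 2 * i} for i in range(5)]
    sources = {"peer1": 1.0, "peer2": 0.5}           # download rates in MB/s, illustrative
    for row in schedule_segments(segments, sources):
        print(row)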
6

THE EFFECT OF INTERACTIONS BETWEEN PROTOCOLS AND PHYSICAL TOPOLOGIES ON THE LIFETIME OF WIRELESS SENSOR NETWORKS

Yupho, Debdhanit 29 June 2007 (has links)
Wireless sensor networks enable monitoring and control applications such as weather sensing, target tracking, medical monitoring, road monitoring, and airport lighting. These applications require long-term and robust sensing, and therefore require sensor networks to have long system lifetimes. However, sensor devices are typically battery operated. The design of long-lifetime networks requires efficient sensor node circuits, architectures, algorithms, and protocols. In this research, we observed that most protocols turn on sensor radios to listen for or receive data and then decide whether or not to relay it. To conserve energy, sensor nodes should avoid listening for or receiving data when it is not necessary by turning off the radio. We employ a cross-layer scheme that targets network-layer issues. We propose a simple, scalable, and energy-efficient forwarding scheme called the Gossip-based Sleep Protocol (GSP). GSP is designed for large, low-cost wireless sensor networks and has low complexity, reducing the energy cost of every node as much as possible. The analysis shows that allowing some nodes to remain in sleep mode improves energy efficiency and extends network lifetime without data loss in topologies such as the square grid, rectangular grid, random grid, lattice, and star. Additionally, GSP distributes energy consumption over the entire network because nodes go to sleep in a fully random fashion, so continuous traffic forwarding via the same path is avoided.
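As a rough sketch of the gossip-based sleep behavior described above, each node can independently choose, per epoch, whether to sleep with a fixed gossip probability, and only listen or forward while awake; no coordination messages are exchanged. The probability value, epoch structure, and forwarding rule are illustrative assumptions, not the exact GSP specification.

    import random

    GOSSIP_SLEEP_PROB = 0.3   # illustrative: expected fraction of nodes asleep per epoch

    class SensorNode:
        def __init__(self, node_id, seed=None):
            self.node_id = node_id
            self.rng = random.Random(seed)
            self.asleep = False

        def start_epoch(self):
            """Independently choose a state for this epoch; no coordination messages."""
            self.asleep = self.rng.random() < GOSSIP_SLEEP_PROB

        def handle_packet(self, packet, neighbors):
            """Awake nodes relay toward awake neighbors; sleeping nodes never listen."""
            if self.asleep:
                return []
            return [n for n in neighbors if not n.asleep]

    nodes = [SensorNode(i, seed=i) for i in range(10)]
    for n in nodes:
        n.start_epoch()
    print("awake:", [n.node_id for n in nodes if not n.asleep])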
7

NETWORK DESIGN UNDER DEMAND UNCERTAINTY

Meesublak, Koonlachat 27 September 2007 (has links)
A methodology for network design under demand uncertainty is proposed in this dissertation. The uncertainty arises from the dynamic nature of IP-based traffic, which is expected to be transported directly over the optical layer in the future. Thus, the uncertainty needs to be incorporated into a design model explicitly. We assume that each demand can be represented as a random variable and then develop an optimization model to minimize the cost of routing and bandwidth provisioning. The optimization problem is formulated as a nonlinear multicommodity flow problem using Chance-Constrained Programming to capture both demand variability and the level of uncertainty guarantee. Numerical work is presented based on a heuristic solution approach that uses a linear approximation to transform the nonlinear problem into a simpler linear programming problem. In addition, the impact of the uncertainty on a two-layer network is investigated, which determines how the Chance-Constrained Programming based scheme can be practically implemented. Finally, implementation guidelines for developing an updating process are provided.
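For readers unfamiliar with chance-constrained programming, the flavor of such a formulation can be sketched as follows, assuming independent Gaussian demands; the notation and the Gaussian assumption are illustrative and not necessarily those of the dissertation. With link capacities $y_e$, unit costs $c_e$, and random demands $X_d$ routed over fixed paths, one form is

    \min \sum_{e} c_e\, y_e \quad \text{s.t.} \quad \Pr\Big(\sum_{d:\, e \in \mathrm{path}(d)} X_d \le y_e\Big) \ge \alpha \quad \forall e.

For independent $X_d \sim \mathcal{N}(\mu_d, \sigma_d^2)$, each chance constraint has the deterministic equivalent

    \sum_{d:\, e \in \mathrm{path}(d)} \mu_d \;+\; z_\alpha \sqrt{\sum_{d:\, e \in \mathrm{path}(d)} \sigma_d^2} \;\le\; y_e,

where $z_\alpha$ is the standard normal $\alpha$-quantile. The square-root term is the source of nonlinearity that a linear approximation removes, yielding a linear program.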
8

Time-Synchronized Optical Burst Switching

Rugsachart, Artprecha 27 September 2007 (has links)
Optical Burst Switching was recently introduced as a protocol for the next-generation optical Wavelength Division Multiplexing (WDM) network. Legacy Optical Circuit Switching over WDM networks cannot achieve the highest bandwidth utilization. Optical Packet Switching, on the other hand, is difficult to implement because of its physical complexities and technical obstacles, such as the lack of an optical buffer and the inefficiency of optical processing. Optical Burst Switching (OBS) is introduced as a compromise between Optical Circuit Switching and Optical Packet Switching, designed to solve these problems and support the unique characteristics of an optical-based network. Since OBS is based on all-optical switching techniques, two major challenges have to be taken into consideration in designing an effective OBS system: one is the cost and complexity of implementation, and the other is the performance of the system in terms of blocking probability. This research proposes a variation of Optical Burst Switching called Time-Synchronized Optical Burst Switching, which employs a synchronized timeslot-based mechanism that allows a less complex physical switching fabric to be implemented and provides an opportunity to achieve better resource utilization in the network compared to traditional Optical Burst Switching.
9

HUMAN CONTROL OF COOPERATING ROBOTS

Wang, Jijun 31 January 2008 (has links)
Advances in robotic technologies and artificial intelligence are allowing robots to emerge from research laboratories into our lives. Experience with field applications shows that we have underestimated the importance of human-robot interaction (HRI) and that new problems arise in HRI as robotic technologies expand. This thesis classifies HRI along four dimensions, human, robot, task, and world, and illustrates that previous HRI classifications can be successfully interpreted as being either about one of these elements or about the relationship between two or more of these elements. Current HRI studies of single-operator single-robot (SOSR) control and single-operator multiple-robot (SOMR) control are reviewed using this approach. Human control of multiple robots has been suggested as a way to improve effectiveness in robot control. Unlike previous studies that investigated human interaction either in low-fidelity simulations or based on simple tasks, this thesis investigates human interaction with cooperating robot teams within a realistically complex environment. USARSim, a high-fidelity game-engine-based robot simulator, and MrCS, a distributed multirobot control system, were developed for this purpose. In a pilot experiment, we studied the impact of autonomy level; mixed-initiative control yielded performance superior to fully autonomous and manual control. To avoid limitation to particular application fields, the thesis focuses on common HRI evaluations that enable us to analyze HRI effectiveness and guide HRI design independently of the robotic system or application domain. We introduce the interaction episode (IEP), inspired by our pilot human-multirobot control experiment, to extend the Neglect Tolerance model to support general multiple-robot control for complex tasks. Cooperation Effort (CE), Cooperation Demand (CD), and Team Attention Demand (TAD) are defined to measure cooperation in SOMR control. Two validation experiments were conducted to validate the CD measurement under tight and weak cooperation conditions in a high-fidelity virtual environment. The results show that CD, as a generic HRI metric, is able to account for the various factors that affect HRI and can be used in HRI evaluation and analysis.
10

ONTOLOGY MAPPING: TOWARDS SEMANTIC INTEROPERABILITY IN DISTRIBUTED AND HETEROGENEOUS ENVIRONMENTS

Mao, Ming 03 June 2008 (has links)
The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems on the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and the ability to reason over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Full or semi-automated mapping approaches have been examined in several research studies. Previous full or semi-automated mapping approaches include analyzing the linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this work, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance-learning-based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests in the OAEI ontology matching campaign 2007 are encouraging. The non-instance-learning-based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
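A minimal sketch of the information-retrieval flavor of such matching, assuming each ontology concept is reduced to a bag-of-words profile built from its name, labels, and comments and compared by cosine similarity, follows; the profiles, tokenization, and example concepts are illustrative assumptions, not the PRIOR+ algorithm.

    import math
    from collections import Counter

    def profile(*texts):
        """Bag-of-words profile built from a concept's name, labels, and comments."""
        words = []
        for t in texts:
            words.extend(t.lower().replace("_", " ").split())
        return Counter(words)

    def cosine(p, q):
        """Cosine similarity between two term-frequency profiles."""
        dot = sum(p[w] * q[w] for w in set(p) & set(q))
        norm = math.sqrt(sum(v * v for v in p.values())) * \
               math.sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    a = profile("PostalAddress", "the postal address of a person")
    b = profile("Address", "mailing address used for postal delivery")
    c = profile("PhoneNumber", "telephone contact number")
    print(round(cosine(a, b), 2), round(cosine(a, c), 2))  # a is closer to b than to c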
