101. An Overlay Architecture for Personalized Object Access and Sharing in a Peer-to-Peer Environment
Sangpachatanaruk, Chatree (30 January 2007)
Due to its exponential growth and decentralized nature, the Internet has evolved into a chaotic repository, making it difficult for users to discover and access resources of interest to them. As a result, users must deal with the problem of information overload. The emergence of the Semantic Web gives Internet users the ability to associate explicit, self-described semantics with resources. This ability will, in turn, facilitate the development of ontology-based resource discovery tools that help users retrieve information efficiently. However, it is widely believed that the Semantic Web of the future will be a complex web of smaller ontologies, mostly created by groups of web users who share a similar interest, referred to as a Community of Interest.
This thesis proposes a solution to the information overload problem using a user-driven framework, referred to as a Personalized Web, that allows individual users to organize themselves into Communities of Interest based on ontologies agreed upon by all community members. Within this framework, users can define and augment their personalized views of the Internet by associating specific properties and attributes with resources and by defining constraint functions and rules that govern the interpretation of the semantics associated with those resources. Such views can then capture the user's interests and be integrated into a user-defined Personalized Web. As a proof of concept, a Personalized Web architecture is developed that employs ontology-based semantics and a structured Peer-to-Peer overlay network to provide a foundation for semantically based resource indexing and advertising.
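The core of semantically based indexing over a structured Peer-to-Peer overlay can be sketched with a Chord-style mapping of concept names into a ring of node IDs. This is an illustrative sketch only: the hash function, ID-space size, and function names below are assumptions, not details from the thesis.

```python
import hashlib

def key_id(term, bits=16):
    # Hash a semantic term (e.g., an ontology concept name) into the overlay ID space.
    digest = hashlib.sha1(term.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def successor(node_ids, key):
    # Chord-style lookup: the first node clockwise from the key owns the index entry.
    nodes = sorted(node_ids)
    for n in nodes:
        if n >= key:
            return n
    return nodes[0]  # wrap around the ring

def advertise(node_ids, term):
    # A resource advertised under a concept is indexed at the successor of the concept's key.
    return successor(node_ids, key_id(term))
```

Under this scheme, any peer can locate advertisements for a concept by hashing the same term and routing to its successor, with no central index.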
To investigate mechanisms that support resource advertising and retrieval in the Personalized Web architecture, three agent-driven advertising and retrieval schemes (Aggressive, Crawler-based, and Minimum-Cover-Rule) were implemented and evaluated in both stable and churn environments. In addition to developing a Personalized Web architecture for typical web resources, this thesis used a case study to explore the architecture's potential to support future web service workflow applications. The results of this investigation demonstrate that the architecture can support the automation of service discovery, negotiation, and invocation, allowing service consumers to realize a personalized web service workflow. Further investigation will be required to improve the performance of this automation and to allow it to be performed in a secure and robust manner. Supporting the next-generation Internet will also require extending the Personalized Web to include ubiquitous and pervasive resources.
102. Enabling Large-Scale Peer-to-Peer Stored Video Streaming Service with QoS Support
Okuda, Masaru (30 January 2007)
This research aims to enable a large-scale, high-volume, peer-to-peer (P2P), stored-video streaming service over the Internet, such as on-line DVD rentals. P2P allows a group of dynamically organized users to cooperatively support content discovery and distribution services without needing to employ a central server. P2P has the potential to overcome the scalability issue associated with client-server video distribution networks; however, it brings a new set of challenges. This research addresses five technical challenges associated with the distribution of streaming video over a P2P network: 1) allow users with limited transmit bandwidth capacity to become contributing sources, 2) support the advertisement and discovery of time-changing and time-bounded video frame availability, 3) minimize the impact of distribution source losses during video playback, 4) incorporate user mobility information in the selection of distribution sources, and 5) design a streaming network architecture that enables the above functionalities.
To meet the above requirements, we propose a video distribution network model based on a hybrid of client-server and P2P architectures. In this model, a video is divided into a sequence of small segments, and each user executes a scheduling algorithm to determine the order, timing, and rate of segment retrievals from other users. The model also employs an advertisement and discovery scheme that incorporates parameters of the scheduling algorithm, allowing users to share their entire lifetime of video segment availability information in a single advertisement and a single query. An accompanying QoS scheme reduces the number of video playback interruptions when one or more distribution sources depart from the service prematurely.
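The flavor of such a segment scheduler can be sketched with a greedy rule: assign each segment, in playback order, to the peer that can finish delivering it earliest, and flag any segment that would arrive after playback reaches it. The peer-rate model and deadline rule below are simplifying assumptions for illustration, not the thesis's actual algorithm.

```python
def schedule_segments(num_segments, peer_rates, startup_delay):
    # Greedy sketch: peer_rates maps peer -> segments per second; playback
    # consumes one segment per second starting after startup_delay seconds.
    free_at = {p: 0.0 for p in peer_rates}
    schedule, misses = [], []
    for i in range(num_segments):
        deadline = startup_delay + i  # playback reaches segment i at this time
        peer = min(peer_rates, key=lambda p: free_at[p] + 1.0 / peer_rates[p])
        finish = free_at[peer] + 1.0 / peer_rates[peer]
        free_at[peer] = finish
        schedule.append((i, peer, finish))
        if finish > deadline:
            misses.append(i)  # segment arrives after its playback point
    return schedule, misses
```

Even this toy version exhibits the key trade-off: with enough aggregate peer rate and startup delay, every segment beats its deadline; a slow source pool produces early misses.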
The simulation study shows that the proposed model and associated schemes greatly alleviate the bandwidth requirement of the video distribution server, especially when the number of participating users grows large. A load reduction of as much as 90% was observed in some experiments compared to a traditional client-server video distribution service. A significant reduction in the number of video presentation interruptions is also observed when the proposed QoS scheme is incorporated into the distribution process, even as certain percentages of distribution sources depart from the service unexpectedly.
103. The Effect of Interactions between Protocols and Physical Topologies on the Lifetime of Wireless Sensor Networks
Yupho, Debdhanit (29 June 2007)
Wireless sensor networks enable monitoring and control applications such as weather sensing, target tracking, medical monitoring, road monitoring, and airport lighting. These applications require long-term and robust sensing, and therefore require sensor networks with long system lifetimes. However, sensor devices are typically battery operated, so the design of long-lifetime networks requires efficient sensor node circuits, architectures, algorithms, and protocols. In this research, we observed that most protocols turn on sensor radios to listen for or receive data and only then decide whether or not to relay it. To conserve energy, sensor nodes should instead turn off the radio and forgo listening or receiving when it is not necessary. We employ a cross-layer scheme targeting network-layer issues and propose a simple, scalable, and energy-efficient forwarding scheme called the Gossip-based Sleep Protocol (GSP). GSP is designed for large, low-cost wireless sensor networks and has low complexity, reducing the energy cost of every node as much as possible. The analysis shows that allowing some nodes to remain in sleep mode improves energy efficiency and extends network lifetime without data loss in topologies such as the square grid, rectangular grid, random grid, lattice, and star. Additionally, GSP distributes energy consumption over the entire network: because nodes go to sleep in a fully random fashion, continuously forwarding traffic via the same path is avoided.
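The gossip-sleep idea can be sketched with a small grid simulation: each relay independently sleeps with some probability, and delivery succeeds if a path of awake nodes still connects source to sink. The grid model, the always-awake endpoints, and the function shape are illustrative assumptions, not GSP's actual specification.

```python
import random

def gsp_delivery(n, sleep_p, rng):
    # n x n grid; each relay sleeps independently with probability sleep_p,
    # while the source (0,0) and sink (n-1,n-1) stay awake. Delivery succeeds
    # if a flood over awake nodes reaches the sink.
    awake = {(x, y): (x, y) in ((0, 0), (n - 1, n - 1)) or rng.random() >= sleep_p
             for x in range(n) for y in range(n)}
    frontier, seen = [(0, 0)], {(0, 0)}
    while frontier:
        x, y = frontier.pop()
        if (x, y) == (n - 1, n - 1):
            return True
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if awake.get(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

Running this over many seeds shows the qualitative claim: moderate sleep probabilities leave the grid connected most of the time, so energy is saved without systematic data loss.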
104. Network Design under Demand Uncertainty
Meesublak, Koonlachat (27 September 2007)
A methodology for network design under demand uncertainty is proposed in this dissertation. The uncertainty is caused by the dynamic nature of IP-based traffic, which is expected to be transported directly over the optical layer in the future. Thus, there is a need to incorporate the uncertainty into a design model explicitly. We assume that each demand can be represented as a random variable, and then develop an optimization model to minimize the cost of routing and bandwidth provisioning. The optimization problem is formulated as a nonlinear multicommodity flow problem using Chance-Constrained Programming to capture both the demand variability and the desired level of uncertainty guarantee. Numerical work is presented based on a heuristic solution approach that uses a linear approximation to transform the nonlinear problem into a simpler linear programming problem. In addition, the impact of the uncertainty on a two-layer network is investigated; this determines how the Chance-Constrained Programming based scheme can be practically implemented. Finally, guidelines for implementing an updating process are provided.
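For a Gaussian demand model, the chance constraint on a single link has a well-known deterministic equivalent, which is the kind of linearization the heuristic exploits. The sketch below illustrates that reduction; the Gaussian assumption and the function name are illustrative, not taken from the dissertation.

```python
from statistics import NormalDist

def provisioned_bandwidth(mu, sigma, alpha):
    # Chance constraint P(demand <= capacity) >= alpha reduces, for demand
    # distributed N(mu, sigma^2), to the linear rule:
    #   capacity >= mu + z_alpha * sigma
    # where z_alpha is the standard normal quantile at alpha.
    z = NormalDist().inv_cdf(alpha)
    return mu + z * sigma
```

The uncertainty guarantee appears only through the quantile z_alpha, so tightening alpha from 0.5 to 0.975 adds roughly two standard deviations of headroom per demand.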
105. Time-Synchronized Optical Burst Switching
Rugsachart, Artprecha (27 September 2007)
Optical Burst Switching (OBS) was recently introduced as a protocol for the next-generation optical Wavelength Division Multiplexing (WDM) network. Legacy Optical Circuit Switching over WDM cannot achieve the highest bandwidth utilization, while Optical Packet Switching is difficult to implement because of physical complexities and technical obstacles such as the lack of an optical buffer and the inefficiency of optical processing. OBS is introduced as a compromise between Optical Circuit Switching and Optical Packet Switching, designed to solve these problems and support the unique characteristics of an optical network. Since OBS relies on all-optical switching techniques, two major challenges must be taken into consideration in designing an effective OBS system: the cost and complexity of implementation, and the performance of the system in terms of blocking probability. This research proposes a variation of Optical Burst Switching called Time-Synchronized Optical Burst Switching, which employs a synchronized timeslot-based mechanism that allows a less complex physical switching fabric to be implemented and provides an opportunity to achieve better resource utilization in the network than traditional Optical Burst Switching.
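A toy model conveys why synchronized timeslots simplify the switching problem: every burst occupies whole slots, so wavelength assignment becomes first-fit over a slot grid, and blocking is easy to count. This is a simplified illustration under assumed parameters, not the scheduling algorithm of the thesis.

```python
def slotted_burst_blocking(bursts, wavelengths, horizon):
    # Bursts are (start_slot, length_in_slots); each must occupy one wavelength
    # for its whole slot range. First-fit assignment; a burst finding no free
    # wavelength over its slots is blocked.
    busy = [[False] * horizon for _ in range(wavelengths)]
    blocked = 0
    for start, length in bursts:
        for channel in busy:
            if not any(channel[start:start + length]):
                for t in range(start, start + length):
                    channel[t] = True
                break
        else:  # no wavelength had the slots free
            blocked += 1
    return blocked
```

Because contention is resolved on slot boundaries rather than arbitrary arrival instants, the switch fabric only needs to reconfigure at slot edges, which is the source of the reduced hardware complexity claimed above.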
106. Human Control of Cooperating Robots
Wang, Jijun (31 January 2008)
Advances in robotic technologies and artificial intelligence are allowing robots to emerge from research laboratories into our lives. Experience with field applications shows that we have underestimated the importance of human-robot interaction (HRI) and that new problems arise in HRI as robotic technologies expand. This thesis classifies HRI along four dimensions: human, robot, task, and world. It illustrates that previous HRI classifications can be interpreted as concerning either one of these elements or the relationship between two or more of them. Current HRI studies of single-operator single-robot (SOSR) control and single-operator multiple-robot (SOMR) control are reviewed using this approach.
Human control of multiple robots has been suggested as a way to improve effectiveness in robot control. Unlike previous studies that investigated human interaction either in low-fidelity simulations or based on simple tasks, this thesis investigates human interaction with cooperating robot teams within a realistically complex environment. USARSim, a high-fidelity game-engine-based robot simulator, and MrCS, a distributed multirobot control system, were developed for this purpose. In a pilot experiment, we studied the impact of the autonomy level: mixed-initiative control yielded performance superior to both fully autonomous and manual control.
To avoid limiting the work to particular application fields, the present thesis focuses on common HRI evaluations that enable us to analyze HRI effectiveness and guide HRI design independently of the robotic system or application domain. We introduce the interaction episode (IEP), inspired by our pilot human-multirobot control experiment, to extend the Neglect Tolerance model to support general control of multiple robots for complex tasks. Cooperation Effort (CE), Cooperation Demand (CD), and Team Attention Demand (TAD) are defined to measure the cooperation in SOMR control. Two validation experiments were conducted to validate the CD measurement under tight and weak cooperation conditions in a high-fidelity virtual environment. The results show that CD, as a generic HRI metric, is able to account for the various factors that affect HRI and can be used in HRI evaluation and analysis.
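To convey the flavor of an attention-demand metric, the sketch below computes the fraction of task time an operator spends attending to cooperation, from a list of attention intervals. The interval-based formulation is an illustrative assumption; the thesis's formal CD definition extends the Neglect Tolerance model and is richer than this.

```python
def cooperation_demand(attend_intervals, total_time):
    # Sketch of a CD-style ratio: time the operator must attend a robot to
    # sustain cooperation, as a fraction of total task time.
    # attend_intervals: list of (start, end) attention periods, non-overlapping.
    attended = sum(end - start for start, end in attend_intervals)
    return attended / total_time
```

A tightly coupled team pushes this ratio toward 1 (the operator is continuously servicing cooperation), while loosely coupled robots leave it near 0.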
107. Ontology Mapping: Towards Semantic Interoperability in Distributed and Heterogeneous Environments
Mao, Ming (03 June 2008)
The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and the ability to reason over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or with automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Full or semi-automated mapping approaches have been examined in several research studies; previous approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this thesis, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI 2007 ontology matching campaign are encouraging, and the non-instance learning based approach has shown potential on the OAEI benchmark tests.
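The information-retrieval side of such an approach can be sketched as cosine similarity over bag-of-words profiles of ontology elements (labels, comments, and so on). This is a simplified stand-in for a profile-similarity step; the tokenization, threshold, and function names are assumptions for illustration, not the PRIOR+ specification.

```python
import math
from collections import Counter

def profile(text):
    # Bag-of-words profile of an ontology element's textual description.
    return Counter(text.lower().split())

def cosine(p, q):
    # Cosine similarity between two term-frequency profiles.
    dot = sum(p[t] * q[t] for t in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def best_match(elem, candidates, threshold=0.5):
    # Map an element to its most similar candidate if similarity clears the threshold.
    score, match = max((cosine(profile(elem), profile(c)), c) for c in candidates)
    return match if score >= threshold else None
```

Applied pairwise across two ontologies, this yields candidate correspondences that structural or learning-based stages can then confirm or reject.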
108. An Agent-Based Model for Secondary Use of Radio Spectrum
Tonmukayakul, Arnon (31 January 2008)
Wireless communications rely on access to radio spectrum. With the continuing proliferation of wireless applications and services, the spectrum resource is becoming scarce. Measurement studies of spectrum usage, however, reveal that spectrum is used only sporadically in many geographical areas and at many times. In an attempt to promote efficient spectrum usage, the Federal Communications Commission has supported the use of market mechanisms to allocate and assign radio spectrum. We focus on the secondary use of spectrum, defined as temporary access to existing licensed spectrum by a user who does not own a spectrum license. The secondary use of spectrum raises numerous technical, institutional, economic, and strategic issues that merit investigation. Central to these issues are the effects of the transaction costs associated with using a market mechanism and the uncertainties due to potential interference.
The research objective is to identify the pre-conditions under which, and the reasons why, secondary use would emerge, and in what form. We use transaction cost economics as the theoretical framework in this study and propose a novel use of agent-based computational economics to model the development of the secondary use of spectrum. The agent-based model allows an integration of economic and technical considerations into the study of the pre-conditions for secondary use. The approach aims to observe the aggregate outcomes that result from interactions among agents and to understand the process that leads to secondary use, which can then inform policy instruments designed to obtain favorable spectrum-management outcomes.
109. Demand-Based Wireless Network Design by Test Point Reduction
Pongthaipat, Natthapol (31 January 2008)
The problem of locating the minimum number of Base Stations (BSs) that provide sufficient signal coverage and data rate capacity is often formulated in a manner that results in a mixed-integer NP-Hard (Non-deterministic Polynomial-time Hard) problem. Solving a large NP-Hard problem is time-prohibitive because the search space grows exponentially, in this case as a function of the number of BSs. This research presents a method to generate a set of Test Points (TPs) for BS locations that always includes the optimal solution(s). A sweep-and-merge algorithm then reduces the number of TPs while maintaining the optimal solution. The coverage solution is computed by applying the minimum branching algorithm, which is similar to branch-and-bound search. Data rate demand is assigned to BSs so as to maximize total network capacity. An algorithm based on Tabu Search is developed to place additional BSs in cases where the coverage solution cannot meet the capacity requirement. Results show that the design algorithm efficiently searches the space and converges to the optimal solution in a computationally efficient manner. Using demand nodes to represent traffic, network design with the TP reduction algorithm supports both voice and data users.
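The coverage step can be illustrated with a greedy set-cover sketch over candidate test points: repeatedly pick the TP covering the most still-uncovered demand nodes. Note this greedy rule is a simplified stand-in for the minimum branching algorithm described above, and the data shapes are assumptions for illustration.

```python
def greedy_bs_placement(demand_nodes, coverage):
    # coverage: {test_point: set of demand nodes it can serve}.
    # Greedy set cover: choose the TP with the largest marginal coverage gain
    # until all demand nodes are served.
    uncovered = set(demand_nodes)
    chosen = []
    while uncovered:
        tp = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gained = coverage[tp] & uncovered
        if not gained:
            raise ValueError("some demand nodes cannot be covered by any test point")
        chosen.append(tp)
        uncovered -= gained
    return chosen
```

Greedy set cover is not guaranteed optimal, which is precisely why a pruned exact search over a reduced TP set, as in this work, is attractive: the TP reduction shrinks the exponential search space while keeping the true optimum inside it.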
110. Modeling Team Performance for Coordination Configurations of Large Multi-Agent Teams Using Stochastic Neural Networks
Polvichai, Jumpol (31 January 2008)
Coordination of large numbers of agents to perform complex tasks in complex domains is a rapidly progressing area of research. Because of the high complexity of the problem, approximate and heuristic algorithms are typically used for key coordination tasks. Such algorithms usually require tuning parameters to yield the best performance under particular circumstances, and tuning parameters manually is sometimes difficult. In domains where the characteristics of the environment can vary dramatically from scenario to scenario, it is desirable to have automated techniques for appropriately configuring the coordination. This research presents an approach to online reconfiguration of heuristic coordination algorithms. The approach uses an abstract simulation to produce a large performance data set, which trains a stochastic neural network that concisely models the complex, probabilistic relationship between configurations, environments, and performance metrics. The final stochastic neural network, referred to as the team performance model, is then used as the core of a tool that allows rapid online or offline configuration of coordination algorithms for particular scenarios and user preferences. The overall system allows rapid adaptation of coordination, leading to better performance in new scenarios. Results show that the team performance model captured key features of a very large configuration space and, for the most part, captured the uncertainty in performance well. The tool was shown to be capable, in most cases, of reconfiguring the algorithms to meet user requests for increases or decreases in performance parameters. This work represents the first practical approach to quickly reconfiguring a complex set of algorithms for a specific scenario.
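The configuration tool's core loop can be sketched as: query the performance model for each candidate configuration in the current scenario, then pick the configuration predicted best on the metric the user cares about. In this sketch a plain lookup table stands in for the trained stochastic neural network; the data shapes and names are assumptions for illustration.

```python
def reconfigure(perf_model, scenario, metric):
    # perf_model: {(config, scenario): {metric_name: predicted value}},
    # standing in for predictions from the trained team performance model.
    # Return the configuration predicted to maximize the requested metric.
    candidates = [(metrics[metric], config)
                  for (config, sc), metrics in perf_model.items() if sc == scenario]
    if not candidates:
        raise KeyError("no predictions available for this scenario")
    return max(candidates)[1]
```

Because the model answers such queries in milliseconds (no re-simulation is needed), this selection can run online each time the scenario or the user's preferences change.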